lol. OpenChannel could have been a good name. I think the author was trying to make it more accessible and cover users from different channels. When I tried openclaw, the setup was long-winded and not a good experience. I love the idea though.

I am on a similar path, and it’s fun to build an agentic loop with all the capabilities we want.

Google adopted "Don't be evil" shortly after its founding and held onto it for about 15 years, until Alphabet quietly dropped it in 2015 (Google, the subsidiary, technically kept it until 2018).

Anthropic's Responsible Scaling Policy, the hard commitment to never train a model unless safety measures were guaranteed adequate in advance, lasted roughly 2.5 years (Sept 2023 to Feb 2026).

The half-life of idealism in AI is compressing fast. Google at least had the excuse of gradualism over a decade and a half.


Totally true. It’s hard for me to stop a project; I keep piling on feature after feature for no reason. I literally stop only when Claude Max Pro hits the hourly limit.


This is the experience of many of us. But just like with social media, it doesn't give deep satisfaction and always leaves me a bit frustrated.


I hope Anthropic understands the irony. Also I have to say, some of these memes are phenomenal.

https://x.com/saquib0509/status/2026085877219549305?s=46

https://x.com/fbsloxbt/status/2026028440759996432?s=46


Please be courteous to other drivers on the road; we all share it. Just make sure you’re the one in charge, not the software. This isn’t to put your argument down, but to offer the perspective of people involved in accidents. Loss of life is terrible, but surviving a serious accident can be just as devastating.


This reminds me of a story from India’s space program that I think about whenever I see orgs blame engineers for systemic failures. In 1979, India’s first satellite launch vehicle (SLV-3) failed on its maiden flight. The project director was a young A.P.J. Abdul Kalam (who later became India’s President). He was devastated and ready to face the media. But ISRO chairman Prof. Satish Dhawan stepped in front of the press himself and took full responsibility for the failure, shielding Kalam and the entire team. A year later, SLV-3 launched successfully. This time, Dhawan didn’t show up at the press conference. He sent Kalam instead, letting the team take all the credit. Kalam said this was the greatest leadership lesson he ever received.

Now contrast that with Amazon pointing fingers at engineers for AI agent mistakes. These are tools the org chose to adopt, workflows the org designed, and guardrails (or the lack thereof) the org is responsible for. If your AI coding agent is producing errors that make it to production, that’s a process failure, not an individual engineer failure. Good leaders absorb blame downward and reflect credit upward. What we’re seeing here is the exact opposite. That’s not engineering culture. That’s cover.


At this point, I just operate under the assumption that every bad actor out there already has my data. Six months of exposure is an eternity. It really makes you question the entire trade-off: we hand over our personal information in exchange for ‘free’ or convenient services, and this is what we get in return. The you-are-the-product model only works if the company holding your data actually bothers to protect it.


In the era of agents, just create your own website. Also, it’s insane that this is happening.


Yes. Then, you only have to convince Bing Copilot (et al.) to eventually list that website of yours.


Are you saying we need our website to be shown in search results? Can you elaborate on your comment? Genuinely curious.


Newton’s gravity vs. relativity is a matter of precision: Newtonian mechanics is a limiting case of general relativity that works excellently within known bounds. PRNGs, by contrast, can fail categorically, not just in precision. A PRNG with subtle correlations doesn’t just give you a slightly less accurate answer; it can produce systematically biased results that look perfectly fine until they don’t. The failure modes are qualitatively different.
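
To make that concrete, here’s a minimal, self-contained Python sketch (the seed and sample size are just illustrative) using RANDU, the infamous 1960s linear congruential generator. A precision-style check (the sample mean) passes, while a structural check exposes the categorical failure: because the multiplier is 2^16 + 3, every triple of consecutive outputs satisfies x_{k+2} = 6*x_{k+1} - 9*x_k (mod 2^31), so in 3D all points fall on just 15 planes.

    # RANDU: x_{k+1} = 65539 * x_k mod 2^31, with multiplier 65539 = 2^16 + 3
    def randu(seed, n):
        x, out = seed, []
        for _ in range(n):
            x = (65539 * x) % (2 ** 31)
            out.append(x)
        return out

    xs = randu(seed=1, n=10_000)  # seed and sample size are illustrative

    # 1) A precision-style check looks fine: mean of U(0,1) samples ~ 0.5.
    us = [x / 2 ** 31 for x in xs]
    print(f"sample mean = {sum(us) / len(us):.4f}")  # ~0.5, "perfectly fine"

    # 2) The categorical failure: the planar relation holds for EVERY triple,
    #    so any 3D Monte Carlo estimate built on these numbers is biased.
    bad = sum((xs[k + 2] - 6 * xs[k + 1] + 9 * xs[k]) % (2 ** 31) == 0
              for k in range(len(xs) - 2))
    print(f"{bad}/{len(xs) - 2} triples satisfy x3 = 6*x2 - 9*x1 (mod 2^31)")

The mean test would never flag this; only a test aimed at the right structure does, which is exactly the "looks perfectly fine until they don’t" failure mode.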

