Hacker News | YetAnotherNick's comments

What is anyone actually doing with openclaw? I tried to stick with it but just can't see the utility beyond linking an AI chat to WhatsApp. Almost nothing, not even simple things like setting reminders, worked reliably for me.

It tries to understand its own settings but fails terribly.


> Americans is land of the free until someone shows a nipple. Or copies a floppy. Or refuses to partake in flag shagging. Or says something critical of the president.

Can you give an example of censorship of any of these types of content? AFAIK there is only age gating.


17k tokens/sec works out to about $0.18/chip/hr for an H100-sized chip if they want to compete with the market rate[1]. But 17k tokens/sec could enable some new use cases.

[1]: https://artificialanalysis.ai/models/llama-3-1-instruct-8b/p...
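The back-of-envelope math behind that figure, as a sketch. The per-million-token price below is an assumption chosen to reproduce the $0.18 number, not a quoted rate; plug in the current price from the comparison site.

```python
# Back-of-envelope: revenue per hour a single chip can earn serving tokens
# at a given market price. The price is a placeholder assumption.
def revenue_per_chip_hour(tokens_per_sec: float, usd_per_million_tokens: float) -> float:
    tokens_per_hour = tokens_per_sec * 3600
    return tokens_per_hour / 1e6 * usd_per_million_tokens

# At 17k tokens/sec, a (hypothetical) market price of $0.003 per 1M tokens
# implies roughly $0.18/chip/hr of revenue -- the ceiling on what the chip
# can cost to run while still competing on price.
print(round(revenue_per_chip_hour(17_000, 0.003), 2))  # 0.18
```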


And that's just infra; the tech stack comes after it.

CDK is better when it works. Terraform has so many escape hatches it scales better with edge cases over time.

All sorts of requirements pop up, especially during downtime or when testing infra migration in production, etc., and it's much easier to manually edit the Terraform state.


Not only that, it is the slowest app among all AI apps.

It also has some strange bugs between versions. An update a month or two ago left the app unable to quit normally, and I had to 'force quit' it. Thankfully it was resolved, but not being able to close the app normally was unnerving.

TerminalBench may be the worst-named benchmark. It has almost nothing to do with the terminal; it's about the syntax of random tools. It's also not agentic for most tasks if the model has memorized some random tools' command-line flags.

What do you mean? It tests whether the model knows the tools and uses them.

Yeah, it's a knowledge benchmark, not an agentic benchmark.

That's like saying coding benchmarks are about memorizing language syntax. You have to know what to call, when, and how. If you get the job done, you win.

I am saying the opposite. If a coding benchmark just tests the syntax of an esoteric language, it shouldn't be called a coding benchmark.

For a benchmark named TerminalBench, I would assume it requires some terminal "interaction", not just producing the code and commands.


Not saying it isn't, but the part that's hard to understand is why a new brand or sub-brand can't fix it. It seems almost trivial to label differently, solve a problem worth solving, and earn money.

And no, don't tell me why the existing brand doesn't do it, like all the other replies here.


They did it by limiting the supply of cards. Even if you are ready to pay 4x MSRP, you can't buy 100 cards at once. Many consumers bought one GPU at 2-4x MSRP.

How is that related? I don't need a lock for myself. I need it for others.

The analogy should be obvious: a model refusing to perform an unethical action is the lock against others.

But "you" are the "other" for someone else.

Can you give an example of where I should care about another adult's lock? Before you say images or porn, it was always possible to make those without AI.

Claude was used by the US military in the Venezuela raid where they captured Maduro. [1]

Without safety features, an LLM could also help plan a terrorist attack.

A smart, competent terrorist can plan a successful attack without help from Claude. But most would-be terrorists aren't that smart and competent. Many are caught before hurting anyone or do far less damage than they could have. An LLM can help walk you through every step, and answer all your questions along the way. It could, say, explain to you all the different bomb chemistries, recommend one for your use case, help you source materials, and walk you through how to build the bomb safely. It lowers the bar for who can do this.

[1] https://www.theguardian.com/technology/2026/feb/14/us-milita...


Yeah, if the US military gets any substantial help from Claude (which I highly doubt, to be honest), I am all for it. In the worst case, it will reduce the military budget and level the playing field between armies. In the best case, it will prevent war by increasing the defences of all countries.

For the bomb example, the barrier to entry is just sourcing some chemicals. Wikipedia has quite detailed descriptions of the manufacture of all the popular bombs you can think of.


> Wikipedia has quite detailed descriptions of the manufacture of all the popular bombs you can think of.

Did you bother to check? It contains very high-level overviews of how various explosives are manufactured, but no proper instructions and nothing that would allow an average person to safely make a bomb.

There's a big difference in how many people can actually make a bomb when there are step-by-step instructions the average person can follow vs. soft barriers that require someone to be a standard deviation or two above average. At two sigma, 98% will fail, despite being able to do it in theory.
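The ~98% figure can be sanity-checked with the standard normal CDF (a sketch; it assumes the relevant ability is roughly normally distributed):

```python
from statistics import NormalDist

# Fraction of a normally distributed population falling below
# two standard deviations above the mean (~97.7%, i.e. roughly 98%).
below_two_sigma = NormalDist().cdf(2)
print(round(below_two_sigma * 100, 1))  # 97.7
```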

> Yeah, if the US military gets any substantial help from Claude (which I highly doubt, to be honest), I am all for it.

That's not the point. I'm not saying we need to lock out the military. I'm saying if the military finds the unlocked/unsafe version of Claude useful for planning attacks, other people can also find it useful for planning attacks.


> Did you bother to check?

Yeah, I am not a chemist, but I watch NileRed. And from [1], I know what all the steps would look like. There are also literal videos on YouTube for this.

And if someone can't google what "nitrated" or "crystallization" mean, maybe they just can't build a bomb even with somewhat more detailed instructions.

> other people can also find it useful for planning attacks.

I am still not able to imagine what you mean. You think attacks don't happen because people can't plan them? In fact, I would say it's the opposite. Random lazy people like school shooters attack precisely because they didn't plan for it. If ChatGPT gave a detailed plan, the chances of attack would reduce.

[1]: https://en.wikipedia.org/wiki/TNT#Preparation


You're kidding yourself if you think you can make TNT with no chemistry background from the 3 sentences Wikipedia has on the two-step process. (And even more so if you attempt the industrial process instead.) This isn't nearly as simple as making nitroglycerin; TNT is a much trickier process. You're more likely to get yourself injured than end up with a usable explosive. There's no procedure written there.

> If ChatGPT gave a detailed plan, the chances of attack would reduce.

So you think helping a terrorist plan how to kill people somehow makes things safer? That's some mental gymnastics...


I don't think I can make TNT, but I can understand the steps without a chemistry background. I believe I would likely injure myself, but more detailed steps are unlikely to help.

> So you think helping a terrorist plan how to kill people somehow makes things safer?

They could just drive a bus into some crowded space or something. They don't need ChatGPT for that. With more education, the chance of becoming a terrorist goes down, even if you could plan better.


The same law prevents you and me and a hundred thousand lone wolf wannabes from building and using a kill-bot.

The question is, at what point does some AI become competent enough to engineer one? And that's just one example; it's an illustration of the category, not the sole specific risk.

If the model makers don't know that in advance, the argument given for delaying GPT-2 applies: you can't take back publication, better to have a standard of excess caution.

