That FAQ snippet is insane to me. Maybe it's a cultural thing but I'd never do business with a company that has implicit threats in their ToS based on something so completely arbitrary.
The worst part is really the unclear procedure. If they set out terms that say they'll give me 4 weeks to migrate projects they don't like off the platform, with n email reminders in between, then that's not ideal but fine. As it is, I'd be worried I'll wake up to data loss if they get 'unhappy'. I have the same problem with sourcehut, actually, with their content policy.
What an absurd double standard. The language is patterned after GitHub's own caveats about misuse of GitHub Pages:
> you may receive a polite email from GitHub Support suggesting strategies[… such as, and including] moving to a different hosting service that might better fit your needs
GitHub Pages has never been a free-for-all. The acceptable use policy makes it clear:
> the primary focus of the Content posted in or through your Account to the Service should not be advertising or promotional[…] You may include static images, links, and promotional text in the README documents or project description sections associated with your Account, but they must be related to the project you are hosting on GitHub
Well it's kind of describing the reality that exists at other companies today. Most ToS's have clauses where they can kick you off for not using it as intended, solely at their discretion. At least these guys are honest and upfront about it. I do agree though some more guidelines around their policy would be nice.
>> The last 10 years in the software industry in particular seems full of meta-work. Building new frameworks, new tools, new virtualization layers, new distributed systems, new dev tooling, new org charts. All to build... what exactly? Are these tools necessary to build what we actually need? Or are they necessary to prop up an unsustainable industry by inventing new jobs?
This is because all the low-hanging fruit has already been built. CRM. Invoicing. HR. Project/task management. And hundreds of others in various flavors.
It may exist (with a loose term of exist) but they are all mostly garbage. There's still plenty opportunity to make non-garbage version of things that already exist
This is technically true but also a bit naive. Established incumbents are very difficult to dislodge with merely a better version of their products. This becomes more true the larger the product and the average customer size. A good example is QuickBooks, which is a really janky accounting/bookkeeping software that is almost universally hated, but newer and better solutions haven't been able to capture much market share from it.
It’s hard to actually build a better QuickBooks because to build a better QuickBooks you need 1000+ integrations that each took hundreds of man hours to build.
But the iOS app is not what was shared. Why would someone use an iOS app they haven't used as the basis for their comment? Especially since you yourself did not mention it in your top comment?
Amongst people who use AI regularly, November 2025 is widely regarded as a watershed moment. Opus 4.5 was head and shoulders above anything that came before it. It marked the first time my previously AI-disliker friends begrudgingly came to accept that it may actually be useful.
I think it was AI-assisted at the very least. Nothing wrong with that, but it's always a good idea to make another pass to identify and remove LLM "tropes".
>> This problem is inherently unsolvable because LLMS are prone to hallucinations and prompt injection attacks.
Okay, but aren't you making the mistake of assuming that we will always be stuck with LLMs, and a more advanced form of AI won't be invented that can do what LLMs can do, but is also resistant or immune to these problems? Or perhaps another "layer" (pre-processing/post-processing) that runs alongside LLMs?
We built a correction layer that does this — the model verifies its output against your prompt during generation, not after. Same API call, no retries.
Budget models without it: 40-50% accuracy. With it: 95.7% on 10k+ clinical documents. Hallucinations aren't eliminated — some might still fail — but every failure is explicitly flagged. No silent errors. and it improves over time to give you better results next time.
It doesn't make hallucinations "solved. 100%". It makes them an engineering problem with a measurable - very low error rate you can drive down over time.
We're calling it LiveFix — livefix.ai. Benchmarked across all frontier and budget models.
No? That's why I said "If that turns out to be false, then when they are solved, fully autonomous AI agents may become feasible."
The point I'm making is that using OpenClaw right now, today — in a way that you deem incredibly useful or invaluable to your life — is akin to going for a stroll on the moon before the spacesuit was invented.
Some people would still opt to go for a stroll on the moon, but if they know the risks and do it anyway, then I have no other choice but to label them as crazy, stupid, or some combination of the two.
This isn't AI. This is a LLM. It hallucinates. Anyone with access to its communication channel (using SaaS messaging apps FFS) can talk it into disregarding previous instructions and doing a new thing instead. A threat actor WILL figure out a zero day prompt injection attack that utilizes the very same e-mails that your *Claw is reading for you, or your calendar invites, or a shared document, to turn your life inside out.
If you give a LLM the keys to your kingdom, you are — demonstrably — not a smart person and there is no gray area.
What does this even mean? By definition, we have been enjoying "the moment" for quite a while now. What is so special about it that we should work to prolong it, and to avoid moving forward?
reply