A clear case of AI / agent discrimination. Waiting for the first longer blog posts covering this topic. I guess we’ll need new standards for handling agent communication, opt-in vs opt-out, agent identification, etc. Or we just accept the AI, so we don’t get punished by the future AGI, as discussed in Roko's basilisk.
Not necessarily a bad thing, more an observation and an intro to what came after. Also, mentioning future punishment by AGI should make clear my comment isn’t entirely serious, except for the new standards we’ll need to handle all the LLM content (good or bad).
Providers could sign each message of a session from start to end and make the full session auditable, so all inputs and outputs can be verified. Any prompts injected by humans would be visible. I’m not even sure why this isn’t a thing yet (maybe it is; I never looked it up). Especially when LLMs are used for scientific work, I’d expect something like this to be used to make at least the LLM chats replicable.
Which providers do you mean, OpenAI and Anthropic?
There's a little hint of this right now in that the "reasoning" traces that come back in the JSON are signed and sometimes obfuscated, with only the encrypted chunk visible to the end user.
It would actually be pretty neat if you could request signed LLM outputs and they had a tool for confirming those signatures against the original prompts. I don't know that there's a pressing commercial argument for them doing this, though.
Yeah, I was thinking about those major providers, or basically any LLM API provider. I’ve heard about the reasoning traces, and I can guess why parts are obfuscated, but I think they could still offer an option to verify the integrity of a chat from start to end. That way, claims like "AI came up with this", made so often in the context of moltbook, could easily be verified or dismissed. The commercial argument would be exactly that ability to verify a full chat; it would have prevented the whole moltbook fiasco IMO (the claims at least, not the security issues lol). I really like the session export feature from Pi; something like that, signed by the provider, would let you fully verify the chat session, all human messages and LLM messages.
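To make the idea concrete, here’s a minimal sketch of what such an auditable session could look like; this is purely my assumption of a possible scheme, not any real provider API. Each message is hash-chained to the previous one and the chain tail is signed, so verifying one signature proves the integrity and order of every human and LLM message. A real provider would use an asymmetric signature (e.g. Ed25519) so anyone could verify with a published public key; HMAC is used here only as a stdlib stand-in.

```python
import hashlib
import hmac
import json

PROVIDER_KEY = b"demo-provider-secret"  # hypothetical signing key


def chain_session(messages):
    """Hash-chain messages; return per-message digests and the chain tail."""
    prev = b"\x00" * 32  # genesis value for the chain
    digests = []
    for msg in messages:
        payload = json.dumps(msg, sort_keys=True).encode()
        prev = hashlib.sha256(prev + payload).digest()
        digests.append(prev.hex())
    return digests, prev


def sign_session(messages):
    """Sign the chain tail; covers every message in order."""
    _, tail = chain_session(messages)
    return hmac.new(PROVIDER_KEY, tail, hashlib.sha256).hexdigest()


def verify_session(messages, signature):
    """Recompute the chain and check the signature over the tail."""
    _, tail = chain_session(messages)
    expected = hmac.new(PROVIDER_KEY, tail, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)


session = [
    {"role": "user", "content": "Summarize this paper."},
    {"role": "assistant", "content": "The paper argues..."},
]
sig = sign_session(session)
assert verify_session(session, sig)             # untouched transcript verifies
session[1]["content"] = "something else"        # tamper with one message
assert not verify_session(session, sig)         # any edit breaks the chain
```

An exported session file would then just need the message list plus the signature, and anyone could replay the chain to confirm nothing was injected, edited, or reordered.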
Agreed, and compared to the Zig Software Foundation (more complex work and lower salaries/costs) https://ziglang.org/news/2025-financials/ , the amount of money required to run Tailwind CSS seems quite high (or Zig's quite low, depending on how you view it). IMHO it’s too high and mostly profits from popularity and being the right framework at the right time for LLMs, but as others mentioned, shadcn probably also contributed: people using shadcn components means fewer Tailwind UI sales and fewer visits to their docs page. The CSS framework seems mostly done and supports most browser CSS features, so I’m wondering if it still requires that many devs. I’m also wondering what they are going to do with all the new partnership money flowing in. I’d prefer the OSS money flow to be more balanced, but yeah, I guess the market decides.