
I’m kind of in stitches over this. Claude’s “skills” are dependent upon developers writing competent documentation and keeping it up to date…which most seemingly can’t even do for the code they write themselves, never mind for a brute-force black box like an LLM.

For those few who do write competent documentation, have well-organized file systems, and have the risk tolerance to allow LLMs to run roughshod over their data, sure, there’s some potential here. Though if you’re already that far in, you’d likely be better off farming that grunt work out to a junior as a learning exercise rather than to an LLM, especially since you’ll have to clean up the output anyhow.

With the limited context windows of LLMs, you can never truly get this sort of concept to “stick” the way you can with a human, and if you’re training an agent for this specific task anyway, you’re effectively locking yourself into that specific LLM in perpetuity rather than investing in a replaceable or promotable worker.

Just…it makes me giggle, how optimistic they are that the stars would align at scale like that in an organization.



LLMs reward developers who can write. Maybe that's one of the reasons so many developers are pushing back against them!


The classic "you're doing it wrong" response to criticism.


The classic "the only thing LLM proponents ever say is "you're doing it wrong"" response!


I, for one, appreciate that you don't let the haters get you down.

Keep up the good work, Simon. I admire your boundless optimism and curiosity--and your willingness to educate us all.


I think this can’t be overstated, and I see it in my day-to-day work with developers on AI enablement.

If you are good at writing, documenting, planning, etc. - basically all the stuff in the SDLC that isn’t writing code - you’ll probably be much more effective at using LLMs for coding.


I generally agree with you, but this is a poor take. Developers, in general, like to write code. Writing prose is incidental. If the job becomes writing prose instead of code, it's easy to see why there's pushback.


>and if you’re training an agent for this specific task anyway, you’re effectively locking yourself to that specific LLM in perpetuity rather than a replaceable or promotable worker.

That's ONE of the long games currently being played, and it's arguably their fallback strategy: the equivalent of vendor lock-in, but for LLM providers.


From my IT POV, that’s what this is all about. It’s why none of these major players produce locally-executable LLMs (Mistral, Llama, and DeepSeek being notable exceptions), it’s why their interfaces are predominantly chat-based (to reduce personal skills growth and increase dependency on the chatbot), it’s why they keep churning out new services like Skills and Agents and “Research”, etc.

If any of these outfits truly cared about making AI accessible and beneficial to everyone, then all of them would be busting hump to distill models that run well on a wider variety of hardware, to carve out specialized niches that collaborate with humans rather than seek to replace them, and to promote sovereignty over AI models rather than perpetual licensing and dependency.

No, not one of these companies actually gives a shit about improving humanity. They’re all following the YC playbook: try everything, rent but never own, lock in customers, and hope you get that one lucrative bite that allows for some sort of exit strategy, all while promoting the hell out of it and yourself as the panacea to a problem.


"It’s why none of these major players produce locally-executable LLMs (Mistral, Llama, and DeepSeek being notable exceptions)"

OpenAI have gpt-oss-20b and 120b. Google have the Gemma 3 models. At this point the only significant AI lab that doesn't provide a locally executable model is Anthropic!
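
Both of those run locally today - a quick sketch using Ollama (assuming a current Ollama install; the model tags here are from memory of its public library and worth double-checking):

    # assumes Ollama is installed from https://ollama.com
    ollama run gpt-oss:20b   # OpenAI's 20B open-weights model
    ollama run gemma3:12b    # one of the Gemma 3 sizes (1B/4B/12B/27B)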


Fair point, I’d forgotten those recent-ish releases from both OpenAI and Google - but my larger point still stands: the entire industry is maximizing potential vectors for lock-in and profit while spewing lies about “benefiting humanity” in public.

None of the present AI industry is operating in an ethical or responsible way, full stop. They know it, they admit to it when pressed, and nobody seems to give a shit if it means they can collapse the job market and make money for themselves. It’s “fuck you got mine” taken to a technological extreme.


When decent docs (and various other kinds of pro-developer infrastructure listed by simonw here https://simonwillison.net/2025/Oct/7/vibe-engineering/) are required for LLMs to work well, it's a very tangible incentive to do them better and ironically makes for an easier sell to management.


Just went to the comments searching for a comment like yours, and I'm surprised it seems to be the only one calling this out. My take on this is also that "Skills" is just detailed documentation, which, as you correctly point out, basically never exists for any project. Maybe LLM skills will be the thing that finally makes us all write detailed documentation, but I kind of doubt it.
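
For the curious: a skill is (as I understand it) just a folder containing a SKILL.md file - YAML frontmatter with a name and description so the model knows when to load it, followed by plain Markdown instructions. A minimal sketch, with a made-up "release-checklist" example:

    ---
    name: release-checklist
    description: Steps for cutting a release. Use when asked to prepare or verify a release.
    ---

    # Release checklist

    1. Run the full test suite.
    2. Bump the version and update CHANGELOG.md.
    3. Tag the commit and push the tag.

In other words: it's the runbook your team was supposed to write anyway, plus two metadata fields.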


I think part of the reason developers are resistant to writing docs is that the perceived value is very low.

That perceived value would be much higher if the docs tangibly became part of a productive toolchain.


I generally find the aversion to documentation comes from one of three places:

* A belief that sufficient documentation means their job is at risk (which, to be fair, is 100% correct in this Capitalist hellscape - ask me how I know first-hand)

* A sense that it’s irrelevant, since the code will change again in a short amount of time

* A fierce protection over one’s output, sometimes manifesting as a belief that nobody but you could ever understand what you created

Sure, sometimes there are wholly incompetent developers who can’t even tell you their own dependencies, but I’d like to believe they’re still the exception rather than the rule. As for the value proposition: collaborators and cooperators understand the immense value of good, thorough documentation; those who don’t see the value, at least in my experience, tend to be adversarial rather than cooperative.


I always find it hilarious and painfully ironic that Anthropic can't even keep Claude Code's docs up to date. I don't know how much to read into it, but it is a modern marvel of process failure.

The team is obviously doing a lot of cool things very rapidly, so I don't want to be too negative, but ... please just ask Claude to review your own docs before you merge a change.


Not saying you’re wrong, but can you cite a couple of examples?


Looking through the issues tagged "documentation" provides many examples (https://github.com/anthropics/claude-code/issues?q=label%3Ad...). It's so common they have an issue template for "Missing documentation (feature not documented)".

Here are a few recent open ones:

* "Documentation missing for new 'Explore' subagent" - https://github.com/anthropics/claude-code/issues/9595

* "Missing documentation for modifying tool inputs in PreToolUse hooks" - https://github.com/anthropics/claude-code/issues/9185

* "Missing Documentation for Various Claude Code Features (CLI Flags, Slash Commands, & Tools)" - https://github.com/anthropics/claude-code/issues/8584



