I'm just pointing out that "we don't need this right now" isn't necessarily an argument for "we don't need this at all".
There is a saying that isn't perfect but may apply: better to have it and not need it than to need it and not have it.
Here is another way of looking at it. Suppose agents don't meet the hyped-up expectations and we build all of this robust tooling for nothing. We'd have put in all that work toward autonomous testing systems without getting the autonomous agents. That still seems like a decent outcome.
When we plan around optimistic views of the future, we tend to build generally useful things.
It must burn a little blogging about an LLM-driven latency analysis _internal demo_ only to have Datadog launch a product in the same space a day later. https://www.datadoghq.com/blog/bits-ai-sre/
"A correlation between people who use a language and the previous languages they have used does not imply a correlation (or intellectual/philosophical inheritance) between the language under discussion and the previous languages used."
And yet, that is exactly the basis for Yegge's point: attributing a philosophical label based on a perceived (and incorrect) background for a group. Goose, gander. Kettle, pot.
I think there is some hazard in assuming a seemingly exponential curve has no asymptotes, otherwise known as faith.