Nobody seems to be questioning that he was involved in Unix. Given that he didn't write it, what did he do for the project? Quality assurance? Support? Marketing? Court jester?
I’m starting to think this take is legitimately insane.
As said in the article, a conservative estimate is that Gen AI can currently do 2.5% of all jobs in the entire economy. A technology that is really only a couple of years old. This is supposed to be _disappointing_? That’s millions of jobs _today_, in a totally nascent form.
I mean I understand skepticism, I’m not exactly in love with AI myself, but the world has literally been transformed.
I think this is less about guarantees and more about understanding behavioral characteristics in response to different loads.
I personally couldn't care less about proving that an endpoint always responds in less than 100ms, say, but I care very much about understanding where various saturation points are in my systems, or what values I should set for limits like database connections, or what the effect of sporadic timeouts is, etc. I think that's more the point of this post (which you see him talk about in other posts on his blog).
I am not sure that static analysis is ever going to give answers to those questions. I think the best you can hope to do is surface the tacit assumptions about dependencies in order to explore their behaviors through simulation or testing.
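To make "explore their behaviors through simulation" concrete, here's a minimal sketch (not from the post; all names and rates are made up) of a crude single-server queue simulation used to find where queue depth blows up as arrival rate approaches service rate:

```python
import random

def simulate(arrival_rate, service_rate, steps=100_000, seed=0):
    """Crude single-server queue: per step, a job arrives with
    probability arrival_rate and one is served with probability
    service_rate. Returns the mean queue depth observed."""
    rng = random.Random(seed)
    depth = 0
    total = 0
    for _ in range(steps):
        if rng.random() < arrival_rate:
            depth += 1
        if depth and rng.random() < service_rate:
            depth -= 1
        total += depth
    return total / steps

# Well below saturation vs. just below it: mean depth grows sharply
# as utilization approaches 1, even though both runs are "stable".
light = simulate(0.30, 0.50)
heavy = simulate(0.49, 0.50)
```

Toy as it is, this is the kind of question (where does queueing start to dominate?) that dynamic exploration answers and static analysis generally doesn't.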
I think it often boils down to "know when you're going to start queuing, and how you will design the system to bound those queues". If you're not using that principle at design stage then I think you're already cooked.
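A minimal sketch of that design principle, bounding a queue and shedding load explicitly once it's full (the names and the bound are hypothetical, not from the post):

```python
import queue

# Pick the bound at design time: once the queue is full, reject work
# instead of letting latency grow without limit.
MAX_DEPTH = 100
work_queue = queue.Queue(maxsize=MAX_DEPTH)

def try_enqueue(job):
    """Admit a job only if there is room; otherwise reject (backpressure)."""
    try:
        work_queue.put_nowait(job)
        return True
    except queue.Full:
        return False  # the caller gets an explicit "overloaded" signal

# Offer more work than the bound allows and observe rejection.
accepted = sum(try_enqueue(i) for i in range(150))
```

The point is that the queue's bound and the rejection path are decided up front, not discovered in production.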
I mean, the fundamental premise of formal methods is that assurance of correctness is achieved through unambiguous specification/modeling and mathematical proof. The extent to which you're dependent on dynamic testing of actual code to achieve assurance does speak to the extent to which you're really relying on formal methods.
That’s literally what the post is about. I don’t see your point. The post is saying that formal tools currently do not handle performance and reliability problems. No one said otherwise.
This is the single most impactful blog post I've read in the last 2-3 years. It's so obvious in retrospect, but it really drove the point home for me that functional correctness is only the beginning. I personally had been over-indexing on functional correctness, which is understandable since a reliable but incorrect system isn't valuable.
But, in practice, I've spent just as much time on issues introduced by perf / scalability limitations. And the post's thesis is correct: we don't have great tools for reasoning about this. This has been pretty much all I've been thinking about recently.
There could be more linear and "resource-aware" type systems coming down the pike from research. These would let the type checker surface performance / resource information. Check out Resource Aware ML.
Super interesting, but I think this will be very difficult in practice due to the gigantic effect of nondeterminism at the hardware level (caches, branch prediction, out of order execution, etc.)
no! a _design_ document. how this new thing will fit together with the other things that already exist in the system. what its interactions are going to look like, what the assumptions are, what the limitations are, etc. etc.
Honestly? I usually look at the previous implementation and try to make some changes to fix an issue that I discovered during testing. Rarely an actual bug - usually we just changed our mind about what the intent should be.
I think we've become used to the complexity in typical web applications, but there's a difference between familiar and simple (simple vs. easy, as it were). The behavior of most business software can be very simply expressed using simple data structures (sets, lists, maps) and simple logic.
No matter how much we simplify it, via frameworks and libraries or what have you, things like serialization, persistence, asynchrony, concurrency, and performance end up complicating the implementation. Comparing this against a simpler spec is quite nice in practice - and a huge benefit is that you can now consult a simple in-memory spec instead of worrying about distributed system deployments.
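A minimal sketch of checking an implementation against an in-memory spec (everything here is invented for illustration: a plain dict is the spec, and a toy store that JSON-serializes its values stands in for the real, more complicated system):

```python
import json
import random

class JsonStore:
    """Toy 'implementation': serializes values, as a stand-in for a
    system with persistence/serialization complexity."""
    def __init__(self):
        self._data = {}
    def put(self, k, v):
        self._data[k] = json.dumps(v)
    def get(self, k):
        raw = self._data.get(k)
        return json.loads(raw) if raw is not None else None

spec = {}          # the spec: simple data structures, simple logic
impl = JsonStore() # the implementation under test
rng = random.Random(0)

# Drive both with the same random operations and compare observations.
for _ in range(1000):
    k = rng.choice("abcde")
    if rng.random() < 0.5:
        v = rng.randint(0, 99)
        spec[k] = v
        impl.put(k, v)
    else:
        assert impl.get(k) == spec.get(k)
```

The spec stays small enough to read in one sitting, and any divergence surfaces as a concrete failing operation sequence rather than a production incident.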