This metaphor gets muddied once you consider that some of the most optimized programs run on a single thread in an event loop. Communication between threads is expensive; epolling many I/O streams is less so. Not quite sure what implications this has in life, but you could probably ascribe some wisdom to it.
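A minimal sketch of that single-threaded model, using Python's portable `selectors` module as a stand-in for raw epoll; the socketpair here just simulates "many io streams", and all names are illustrative:

```python
# One thread multiplexes many streams -- no per-stream thread, no
# cross-thread communication. selectors uses epoll on Linux.
import selectors
import socket

sel = selectors.DefaultSelector()

# A connected socketpair stands in for real network streams.
a, b = socket.socketpair()
a.setblocking(False)
b.setblocking(False)
sel.register(b, selectors.EVENT_READ)

a.send(b"ping")  # make one registered stream readable

# The single thread blocks here on ALL registered streams at once.
events = sel.select(timeout=1)
for key, mask in events:
    data = key.fileobj.recv(1024)
    print(data)

sel.close()
a.close()
b.close()
```

The point of the pattern is that waiting on a thousand idle streams costs one blocked syscall, not a thousand parked threads.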
I have a 16 core M4 Max and running at a fraction of the potential maximum speed just isn't very optimal on modern CPUs like that.
Threading is hard, especially when threads share a lot of state. Memory management across multiple threads sharing data is hard and ideally minimized. What's optimal also depends very much on the type of workload: not all workloads are IO dependent, or require sharing a lot of state.
Using threads for blocking IO on server requests was popular 20 years ago, e.g. in Java. But these days non-blocking IO is preferred in both single- and multi-threaded systems. Elasticsearch, for example, uses threading and non-blocking IO across CPU cores and cluster nodes to provide horizontal scalability for indexing. It sticks to just one indexing thread per CPU core, of course, but it has additional thread pools and generally more threads than CPU cores in total.
A lot of workloads where the CPU is the bottleneck but that also do some IO benefit from threading, by letting other threads progress while one waits for IO. And if the amount of context switching can be limited, that can be OK. For loads that are embarrassingly parallel, with little or no IO and very limited context sharing, one thread per CPU core tends to be optimal. It's really when you have more threads than cores that context switching becomes a factor. What's optimal there depends very much on how much shared state there is and whether you are IO or CPU limited.
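A hedged sketch of the thread-per-core setup for an embarrassingly parallel, CPU-bound job with no shared state; `burn` is a made-up stand-in task, and `ProcessPoolExecutor` is used because in Python real parallelism for CPU work means processes:

```python
# One worker per core: matching worker count to core count avoids
# oversubscription, so the scheduler rarely has to context-switch.
import os
from concurrent.futures import ProcessPoolExecutor

def burn(n):
    # Stand-in CPU-bound task with no shared state: sum of squares.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        results = list(pool.map(burn, [10_000] * 8))
    print(results[0])
```

With more workers than cores this kind of workload only gains context-switch overhead; with fewer, cores sit idle.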
In general, concurrency and parallelism tend to be harder in languages that predate widespread threading and multi-core CPUs and that lack good primitives for them. Python only recently started addressing the GIL obstacle, and a big motivation for creating Rust was just how hard doing this stuff is in C/C++ without creating a lot of deadlocks, crash bugs, and security issues. It's not impossible with the right frameworks, a lot of skill, and discipline, of course. But Rust is earning a well-deserved reputation for being fast and safe for this kind of thing. Likewise, functional languages like Elixir are more naturally suited to running on systems with lots of CPUs and threads.
> I have a 16 core M4 Max and running at a fraction of the potential maximum speed just isn't very optimal on modern CPUs like that.
To further muddy the waters: if your process is not bottlenecked on the CPU, a modern chip may be more efficient in terms of power draw (both directly and through secondary effects like reduced cooling needs) when running at a fraction of its top speed. Running at a low clock that is just fast enough not to become the bottleneck, instead of bursting to full speed for a bit and then waiting, can be the optimal strategy.
Of course there are a bunch of chip-specific optimisations here if you like complexity. Some chips are better off running all cores slowly, while others that can completely power down idle cores are better off running a few cores faster, to optimise power use while getting the same job done in the same amount of wall-clock time.
>"just how hard doing this stuff is in C/C++ without creating a lot of dead locks, crash bugs, and security issues"
In my opinion this is mostly a problem for novices, or for people who only know how to program inside a very limited and restrictive environment. I write multithreaded business backends in modern C++ that accept outside HTTP requests and do some heavy math lifting. Requests expected to take a short time are processed immediately; long-running ones go to separate thread pools, which also manage throttling of background tasks, etc.
I did not find it particularly hard. All my "dangerous" stuff is centralized, was debugged to death years ago, and is used and reused across multiple products. Stuff runs for years and years without a single hiccup. To me it is a non-issue.
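For readers less familiar with the pattern being described, here is an illustrative sketch (not the poster's actual C++ code) of routing short requests inline and handing long-running ones to a separate, throttled pool; every name here is made up:

```python
# Fast path runs on the request thread; slow path goes to a bounded
# background pool, with a semaphore throttling heavy work in flight.
import threading
from concurrent.futures import ThreadPoolExecutor

background = ThreadPoolExecutor(max_workers=4)
throttle = threading.Semaphore(2)  # at most 2 heavy tasks at once

def handle_request(payload):
    if payload < 1000:
        # Fast path: expected to be quick, process immediately.
        return payload * 2
    # Slow path: queue on the background pool.
    def heavy():
        with throttle:
            return sum(range(payload))
    future = background.submit(heavy)
    return future.result()  # a real server would hand back the future

print(handle_request(10))
print(handle_request(5000))
background.shutdown()
```

The "dangerous, centralized" part in a design like this is the pool and throttle machinery; the per-request handlers stay simple.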
I do realize that the situation is much tougher for those who write OS kernels, but that is a very specialized skill and they would know better what to do.
A key difference is that it sounds like you need to create and otherwise interact with that sort of code on a regular basis.
Most devs spend most of their time, if not all of it, on tasks that are either naturally sequential or don't benefit from threading enough over the safer option of multiple independent processes. So when they do come across a problem that is inherently parallelizable and needs the highest performance, it is not a familiar situation for them. Familiarity can make some rather complex processes feel simple.
The same can be said for event-loop-driven concurrency: for those who don't work that way often, the collection of potential race conditions can feel daunting, so they appreciate their chosen platform holding their hand a bit.
Event loops are great, but composition is hard. The OS (e.g. Linux) does let you feed custom event types into an event loop (via eventfd()), but the performance is worse than if you built the loop yourself.
That performance gap leads to a proliferation of hand-rolled event loops, which don't mesh together, which in turn leads people to standardize on large async frameworks like tokio.
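The composition problem is roughly this: getting a foreign event source watched by a loop you didn't write. A small sketch using asyncio as the stand-in framework, with a socketpair standing in for eventfd() (which is Linux-specific); the names are illustrative:

```python
# Plug a custom notification fd into an existing event loop. This is
# the "mesh together" step that incompatible homegrown loops lack.
import asyncio
import socket

async def main():
    loop = asyncio.get_running_loop()
    notify_r, notify_w = socket.socketpair()
    notify_r.setblocking(False)
    fired = loop.create_future()

    # Ask the framework's loop to watch our foreign fd.
    loop.add_reader(notify_r, lambda: fired.set_result(notify_r.recv(1)))

    notify_w.send(b"\x01")   # some other component signals the loop
    print(await fired)
    loop.remove_reader(notify_r)
    notify_r.close()
    notify_w.close()

asyncio.run(main())
```

When every library ships its own loop, there is no shared `add_reader`-style seam, which is exactly the pressure toward one big standard framework.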
Looking at their website, it seems they're targeting a slightly less tech-savvy audience that is interested in checking on agents while away. For someone already willing to blow cash on overpriced AI subscriptions, I could see justifying blowing money on this.
As someone who posts blogs and projects out of my own enjoyment, with no AI for code generation and a hand-edited blog, I still have no idea how to signal to people that I actually know what I’m talking about. Every step of the process could’ve been done by an LLM, albeit worse, so I don’t have a way of marking my projects as something different. I’m considering putting a “No LLMs used in this project” tag at the start, but that feels a little tacky.
I had a similar thought way back when. It comes back to what is important to the person reviewing it, be it the style, the form, or just whether it works for their use case. In the case of organic food, I did not even know I was living a healthy lifestyle until I came to the US. But now organic is just another label, played by marketing people like anything else.
As I may have noted before, humans are the problem.
Communicating that you know what you are talking about and that you're different is a lot of work. I think being visibly "anti-AI" makes you look as much of an NPC as someone who "vibe coded XYZ." It takes care, consistency, and most of all showing people something they've never seen before. It also helps to get in the habit of doing in-person demos; if you want to win hackathons it really helps to (1) be good at giving demos on stage and (2) have a sense of what it takes to make something that is good to demo.
I have two projects right now on the threshold of "Show HN" that I used AI for but could have completed without it. I'm never going to say "I did this with AI". For instance, there is this HR monitor demo
which needs tuning up for mobile (so I can do an in-person demo for people who work on HRV) but most of all needs to be able to run with pre-recorded data, so that people who don't have a BTLE HR monitor can see how cool it is.
Another thing I am tuning up for "never saw anything like this" impact is a system of tokens that I give people when I go out as-a-foxographer
I am used to marketing funnels having 5% effectiveness and it blows my mind that at least 75% of the tokens I give out get scanned and that is with the old conventional cards that have the same back side. The number + suit tokens are particularly good as a "self-working demo" because it is easy to talk about them, when somebody flags me down because they noticed my hood I can show them a few cards that are all different and let them choose one or say "Look, you got the 9 of Bees!"
You’re not actually at risk of being labeled an LLM user until someone comes along and makes that claim about your work. So my advice is to not fight a preemptive battle over your tone, and to adjust when/if that day comes.
Side note: I’d think installing Anubis over your work would go a long way toward signaling that, but ymmv.
> I still have no idea how to signal to people that I actually know what I’m talking about.
Presumably, if this is true, it should be obvious from the quality of your product. If it isn't, then maybe you need to rethink the value of your artisanal hand-written code.
I think the problem is that LLMs are good at making plausible-looking text, and discerning whether a random post is good or bad requires effort. It's really bad when the signal-to-noise ratio is low, which it is now that slop is so easy to make.
I added the following at the top of the blog post that I wrote yesterday: "All words in this blog post were written by a human being."
I don't particularly care if people question that, but the source repo is on GitHub: they can see all the edits that were made along the way. Most LLMs wouldn't deliberately add a million spelling or grammar mistakes to fake a human being... yet.
As for knowing what I'm talking about: many of my blog posts are about stuff I just learned, so I include disclaimers that the reader should take everything with a grain of salt. :-) That said, I put a ridiculous amount of time into these things to make sure they're correct. Knowing that your stuff will be out there for others to criticize is a great motivator to do your homework.
You don’t. A JS dev isn’t going to catch an uninitialized variable in C and probably doesn’t even know the damage nasal demons can cause. You either throw more LLMs at it or learn the language.
Yeah, not awesome; heavy irony in this paragraph. I’ve been looking at some other providers recently with comparable prices. My infrastructure isn’t too complicated to migrate, I just haven’t had the chance to make the jump.
Eventually I always get to a problem I can't solve by just throwing an LLM at it and have to go in and properly debug things. At that point knowing the code base helps a hell of a lot, and I would've been better off writing the entire thing by hand.
Because the question almost always comes with an undertone of “Can this replace me?”. If it’s just code search and debugging, the answer’s no, because a non-developer won’t have the skills or experience to put it all together.
That undertone is overt in the statements of CEOs and managers who salivate at “reducing headcount.”
The people who should fear AI the most right now are the offshore shops. They’re the most replaceable because the only reason they exist is the desire to carve off low skill work and do it cheaply.
But all of this is overblown anyway, because I don’t see the appetite for new software getting satiated anytime soon, even if we made everyone 2x as productive.
Yeah, as I've dabbled with AI models more and more, it's become clear to me how valuable my mental model is to the programming process. It's easier to debug code I wrote myself than to comb through some AI's mistakes when it eventually hits a problem too hard for the model to debug.