As an M$ hater from a past life, I have to disagree that it's more expensive. You enumerate the instances where they've lost value, but can you even count the value produced over the years by lowering the entry bar? I don't even use Excel, but it has unarguably produced far more value than it's taken away. I tend to believe history speaks for itself: unethical practices alone won't undermine truly superior products. 50% of the population aren't stupid by definition; they just specialize in different things.
The work not done by specialists wouldn't have been done nicely by a specialist; it simply wouldn't get done at all. We just don't have the scale. Of course there's a fine line, and in some cases it produces negative value, but more often than not it's some value, discounted by maintenance, versus zero.
We’re agreeing. Excel produced massive value because it accepted catastrophic failures. That’s my point.
The problem isn’t Excel. It’s trying to get Excel’s accessibility in infrastructure whilst demanding engineering reliability. You cannot have both. Kubernetes won’t accept Excel-style disasters, so it still needs specialists, only now they’re specialists who must learn both the abstraction and the fundamentals.
You’re right: work not done by specialists often wouldn’t happen at all. That’s the choice. Accept Excel-esque failures for democratisation, or accept that expertise is required.
My point is that the currently available tools promise both and deliver neither.
We're mostly agreeing, except I'm optimistic that the current generation of tools is closer to the assembly-to-C transition than the C-to-VB one.
There are good signs that AI will eliminate whole classes of costly human errors. Whether the new classes of machine-only problems will cost more as models iterate remains to be seen; I think they'll cost less. I'm not super optimistic about the socioeconomic future this leads to, but from a pure tech standpoint I'm optimistic about building cost.
Edit: also, to address reliability, I think a lot of things are net positive to this world without five 9s, heck, even without two 9s.
I managed to get to ~7W idle on a 2024 dGPU/iGPU laptop, with room to optimize further. From my limited casual checks (nowhere near a proper benchmark), that's better than Windows.
But yes, it's an area that still requires tweaking, which is a cost I don't want to incur. Also, just within this year I got a regression (later fixed) from a bug in the nvidia-open driver: the GPU stopped entering its low-power state, giving me a toaster on the go. These issues are still very obscure to root-cause and fix.
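For what it's worth, here's a minimal sketch of how one can spot-check draw, assuming Linux and a sysfs battery node at /sys/class/power_supply/BAT0 (the node name and exposed files vary by machine; run it on battery, since draw reads near zero while plugged in):

```typescript
// check-idle-draw.ts: rough idle power readout on Linux laptops.
// Assumes a battery at /sys/class/power_supply/BAT0 (hypothetical
// default; could be BAT1 etc. on your machine).
import { readFileSync } from "node:fs";

const base = "/sys/class/power_supply/BAT0";

function readNum(file: string): number {
  return Number(readFileSync(`${base}/${file}`, "utf8").trim());
}

try {
  // power_now is reported in microwatts on most kernels.
  const watts = readNum("power_now") / 1_000_000;
  console.log(`Current draw: ${watts.toFixed(2)} W`);
} catch {
  // Some firmwares expose current_now/voltage_now (µA/µV) instead.
  const amps = readNum("current_now") / 1_000_000;
  const volts = readNum("voltage_now") / 1_000_000;
  console.log(`Current draw: ${(amps * volts).toFixed(2)} W`);
}
```

A few readings at an idle desktop, averaged by eye, is what I mean by "casual checks"; powertop gives a proper breakdown if you want to chase the remaining watts.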
Kudos for the high-velocity action. Given it had to go through at least decision makers, finance, and legal, I bet they made the decision almost immediately.
Curious how we'll solve this class of wealth-distribution problem in the future. These critical-library supply-chain incidents hit tech companies' bottom lines directly, but to extrapolate: all the knowledge and creative workers who used to make a comfortable living now have their hard work scraped by aggregators. Yes, I understand the genie is out of the bottle, all that, and there will be (is?) systemic change to what counts as a viable business. But people still have to live through the transition. It's also in the aggregators' best interest: who's there to feed them new free works if creating them is no longer viable?
As much as we memed about it internally, one of my favourite things about AWS was the leadership principles. I always worried I'd become cultishly biased; seeing how these converge on similar great ideas is a relief.
IMO the common denominator among all of these, the thing required for many of them to work, is trust. From policy setting at the strategic level, to hiring, to tactical process refinement, the invariant must always be building an environment and culture of trust. Which isn't trivial to scale.
Most of the time I find the pros of not mutating variables outweigh any potential memory or performance gain. Of course it depends on what you're doing, but outside of perhaps scientific code I find the exceptions rare.
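A tiny sketch of the tradeoff I mean, using a hypothetical Order type:

```typescript
// Mutating in place saves an allocation, but lets distant code observe
// the change; returning a fresh value keeps the reasoning local.
interface Order {
  readonly id: string;
  readonly total: number;
}

// Immutable style: callers can share `orders` freely, no defensive copies.
function applyDiscount(orders: readonly Order[], pct: number): Order[] {
  return orders.map((o) => ({ ...o, total: o.total * (1 - pct) }));
}

// Mutable style: one allocation cheaper, but every holder of `orders`
// now sees different totals, which is where subtle bugs creep in.
function applyDiscountInPlace(orders: { total: number }[], pct: number): void {
  for (const o of orders) o.total *= 1 - pct;
}
```

Unless a profiler says the copies matter, the first version is the one I want to debug at 2am.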
The blog author's company's runner detects anomalies in them, but we shouldn't need a product for this.
Detecting outbound network connections during an npm install is quite cheap to implement in 2025. I think it comes down to tenets and incentives: if security is placed as the first priority, as it should be for any computing service, and in particular for supply chains like package management, this would be built in.
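To give a sense of how cheap: here's a rough sketch of a preload hook (hypothetical net-watch module) that logs outbound connection attempts from Node-based lifecycle scripts. It's not airtight, since a script that shells out to curl or python bypasses it entirely, but it shows the class of check:

```typescript
// net-watch.cts: preload hook that logs outbound connection attempts
// made by Node processes, e.g. npm lifecycle scripts.
// Compile to CommonJS, then run installs with it preloaded.
import net from "node:net";

const realConnect = net.Socket.prototype.connect;

// Patch connect; npm's own registry traffic shows up too, so in
// practice you'd allowlist your registry host.
(net.Socket.prototype as any).connect = function (...args: any[]) {
  const opts = args[0];
  const target =
    typeof opts === "object" && opts !== null
      ? `${(opts as any).host ?? "?"}:${(opts as any).port ?? "?"}`
      : String(opts);
  process.stderr.write(`[net-watch] outbound connect -> ${target}\n`);
  return (realConnect as any).apply(this, args);
};
```

Usage would be something like NODE_OPTIONS="--require /path/to/net-watch.cjs" npm install. A real built-in would sit at the process/syscall level instead, but even this toy version would have flagged the recent exfiltration-style payloads.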
One thing that comes to mind that would make this a months-long debate is the potential breakage of many packages. In that case, as a first step, just print an eye-catching summary post-install, with a gradual push toward full restriction under something like a strict mode. We've done this before.
Which reminds me of another long-standing issue with node ecosystem tooling: information overload. It's easy to bombard devs with a thesis' worth of characters and then blame them for eventually getting fatigued and not reading the output. It takes effort to summarize what matters most, with layered expansion into finer detail; show some.
Trust is hard; it all comes down to trust no matter what you do. The more general idea is sandboxed builds. That doesn't eliminate all problems, but it does eliminate one class.
Given the 2.1kg weight after detaching the graphics module, and the seemingly large battery capacity for on-the-go sessions, it's very close to a laptop that fits all my use cases.
Although from what I've read, 8GB of VRAM seems insufficient to be even near-future-proof, so I've been eyeing 5070 Ti+ laptops. I wonder if there's any technical blocker that prevents offering a 5070 Ti or the AMD equivalent.
Whatever is making plain HTTP requests in 2025 should be a cause for concern. Wouldn't it be nice to have a low-resource daemon watching for common pitfalls and alerting users, so we eliminate or minimise whole classes of problems like this?
I think lots of Windows antivirus products come with features like this? Perhaps with the vast crystallized knowledge available nowadays, we can afford to create an OSS system-level package that offers some level of protection.
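As a sketch of what such a daemon's core check could look like, Linux-only and assuming plain HTTP means remote port 80 (a real one would map connections back to PIDs via /proc/*/fd and debounce alerts):

```typescript
// http-watch.ts: minimal sketch of a "plain HTTP" watcher on Linux.
// Polls /proc/net/tcp for established connections whose remote port
// is 80. Misses HTTP on non-standard ports; illustration only.
import { readFileSync } from "node:fs";

function checkPlainHttp(): void {
  const lines = readFileSync("/proc/net/tcp", "utf8").split("\n").slice(1);
  for (const line of lines) {
    const cols = line.trim().split(/\s+/);
    if (cols.length < 4) continue;
    const [remoteAddr, remotePort] = cols[2].split(":");
    const state = cols[3];
    // 0x50 == port 80; state 01 == ESTABLISHED.
    if (parseInt(remotePort, 16) === 80 && state === "01") {
      const ip = remoteAddr
        .match(/../g)! // IPv4 stored as little-endian hex octets
        .reverse()
        .map((h) => parseInt(h, 16))
        .join(".");
      console.warn(`plain HTTP connection to ${ip}:80`);
    }
  }
}

setInterval(checkPlainHttp, 5_000); // cheap periodic poll
```

The whole thing is a few syscalls every few seconds, which is the kind of "low resource" I have in mind.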
If they use any form of filtering/evaluation along the lines of STAR, the positive way you chose to deal with it, plus the outcome of it being a top post on HN, should score you half the position already. Good luck :)