Right? Have any of the execs making these decisions ever ridden in an EV? They are so much better that, in my experience, no one goes back to preferring ICE after spending real time with an EV. My family currently has 2 ICE vehicles (one is a PHEV). I really doubt we'll buy another.
The week I spent renting an EV (an Ioniq 5, so not even a high-end one) convinced me. Enjoyable to drive. Having to figure out where/how to charge it was sufficient to chase away the fears around that.
> I have a secret fear about AI - that at one point when AI models get good enough, AI companies will no longer give you the source these tools generate - you'll get the artifacts (perhaps hosted on a subscription website), but you won't get the code.
This is a likelier outcome than the various utopian promises (no more cancer!) that AI boosters have been making.
> AI as it is being developed is likely to centralize it
Depends on how you see it.
I know many people building OSS, local alternatives to enterprise software for specific industries (software that costs thousands of dollars), all thanks to AI.
If everyone can produce software now, and at a much bigger and more complex scale, it's much easier to create decentralized and free alternatives to long-standing closed projects.
You do understand that the above comment is talking about how the use of and reliance on LLMs is what centralizes power, right? It's great that people can build these tools, but if the means to build them are controlled by three central companies, where does that leave us?
I agree with you. One counterargument is that producing software was never a path to adoption unless you had distribution and the big companies (OpenAI, Anthropic) have distribution on a scale that individuals will not.
> - OSS is valuable for decentralizing power and influence
That was the intention and hope, but I think the past twenty years have shown that it largely had the opposite effect.
Let's say I write some useful library and open source it.
Joe Small Business Owner uses it in his application. It makes his app more useful and he makes an extra $100,000 from his 1,000 users.
Meanwhile Alice Giant Corporate CEO uses it in her application. It makes her app more useful by exactly the same amount per user, but because she has a million users, now she's a hundred million dollars richer.
If you assume that open source provides additive value, then giving it to everyone freely will generally have an equalizing effect. Those with the least existing wealth will find that additive value more impactful than someone who is already rich. Giving a poor person $10,000 can change their life. Give it to Jeff Bezos and it won't even change his dinner plans.
But if you consider that open source provides multiplicative value, then giving it to everyone is effectively a force multiplier for their existing power.
In practice, it's probably somewhere between the two. But when you consider how highly iterative systems are, even a slight multiplicative effect means that over time it's mostly enriching the rich.
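The additive-vs-multiplicative distinction can be made concrete with a toy simulation (the starting wealth, gain, and growth-rate numbers here are illustrative assumptions, not figures from the comment):

```python
# Toy model: two actors with a 1000x wealth gap each receive the same
# open-source "value" every round, either additively or multiplicatively.
def simulate(start_small, start_big, rounds, additive_gain, mult_factor):
    add_small, add_big = start_small, start_big
    mul_small, mul_big = start_small, start_big
    for _ in range(rounds):
        # Additive: both gain the same absolute amount per round.
        add_small += additive_gain
        add_big += additive_gain
        # Multiplicative: both grow by the same percentage per round.
        mul_small *= mult_factor
        mul_big *= mult_factor
    # Return the resulting wealth ratios (big / small) for each regime.
    return add_big / add_small, mul_big / mul_small

add_ratio, mul_ratio = simulate(1_000, 1_000_000, rounds=20,
                                additive_gain=10_000, mult_factor=1.05)
# Additive: the 1000x gap collapses to roughly 6x.
# Multiplicative: the gap stays at 1000x forever.
```

Under additive gains the absolute gap persists but the *ratio* shrinks toward 1; under multiplicative gains the ratio never moves, so any pre-existing inequality is preserved (and in absolute terms, amplified) each round.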
Seven of the ten richest people in the world got there from tech [1]. If the goal of open source was to lead to less inequality, it's clearly not working, or at least not working well enough to counter other forces trending towards inequality.
> AI as it is being developed is likely to centralize it
The access to AI is centralized, but the ability to generate code and customized tools on demand for whatever personal project you have certainly democratizes Software.
And even though open source models are a year behind, they address your remaining criticism about the AI being centralized.
That's a fair question, so I'll try as best I can. And maybe this will serve as a meta-example for me because it is hard to explain.
In a real discussion, the messiness is an important signal. The mistakes that you made and _didn't_ catch, the clunky word choices, etc., give insight into what you are actually thinking and how clearly you are thinking about it. If you have edited something for clarity, that's an important signal. LLM editing destroys that signal.
And it gets worse because LLMs destroy that signal in one direction - towards homogeneity. They create the illusion of "what you were actually thinking, but better than you could express it" but what they are delivering is "generic, professional-sounding ideas phrased in a way to convince you they are your own".
I get what you are saying, but I disagree on the last part, "[...] way to convince you they are your own". If it managed to convince the author that it is their own, chances are, it is their own. Especially so if the author does review and edit the output prior to posting it.
The messiness may show glimpses of the process, but, in isolation, will likely distort and corrupt the desired message via partial framing.
> And it gets worse because LLMs destroy that signal in one direction - towards homogeneity.
Oh, right, yes, if you're not careful they can definitely do that.
But look at what julius_eth_dev is actually saying they're doing:
> "rubber-ducking architecture decisions, pressure-testing arguments before I post them."
That's more like using the LLM as a sparring partner; they're not having the LLM write their comments for them.
I thought you were going to go somewhere really interesting actually, like maybe 'the LLM convinces you that their arguments are better than yours, and now you're acting like a meat puppet.' Or something equally slightly alarming and cool like that! ;-)
I've wondered the same. Back when Anthropic seemed like a niche alternative to OpenAI, I signed up for an account. Now that my company is using it heavily, I tried to change the account owner to one of the executives, and apparently that's not possible! It's also not possible to create separate work/personal accounts unless you have two different phone numbers.
There's a confusing disconnect between "we have this magic box that can write all the software we'd ever want" and their lack of basic account management functionality.
(Not really. That disconnect is because of something mature software engineers have known for decades - the bottleneck has never been the code)
Perhaps once AI destroys all the livelihoods of educated and disciplined white collar workers, it'll be easier for the TSA to find people who can follow basic instructions and show a normal amount of empathy.
You have to deal with pissed people all day who don’t listen at all. And if you make the tiniest mistake you could be the person who failed to stop the next 9/11.
Doing the same three or four things screening people all day long has got to be mind numbingly boring. Unless you’re at an airport that isn’t constantly busy where instead you get to stand around doing nothing, which can be worse.
It honestly sounds like a terrible job to have. Aren't they paid pretty badly too? I can see why a lot of people would want to move out of it, leaving only those who are stuck or who like the power.
None of this excuses what happened in the article.
Yep, just think of those people in front of you, who pass the same several large signs telling them to make sure that all their liquids are in a 1 litre ziplock bag and to have that bag ready for inspection (this is in Europe) … and then do a surprised Pikachu when the security personnel ask them why their perfume isn't in said bag. Then they start repacking their hand luggage while the whole queue has to wait and watch.
And you only experience those few people in front of you. The security staff has them all day long.
There are people at airports all over the world doing very similar jobs to this. I have no experience with the TSA but noticed the attitude varies widely country to country and even airport to airport. I guess my point is it can be done with empathy and respect, even if the rules are strict.
One example - I'm doing research for some fiction set in the late 19th century, when strychnine was occasionally used as a stimulant. I want to understand how and when it would have been used, and at what dosages, and ChatGPT shut down that conversation "for safety".
Yeah. I'm a stickler for accountability falling on drivers, but this really can be an impossible scenario to avoid. I've hit someone on my bike in the exact same circumstance - I was in the bike lane between the parked cars and moving traffic, and someone stepped out between parked vehicles without looking. I had nowhere to swerve, so squeezed my brakes, but could not come to a complete stop. Fortunately, I was going slow enough that no one was injured or even knocked over, but I'm convinced that was the best I could have done in that scenario.
The road design there was the real problem, combined with the size and shape of modern vehicles that impede visibility.
Building on my own experience, I think you have to own that if you crash into someone, you made a mistake. But I do agree that car and road design makes it almost impossible for cyclists to get around without risking things like that.