This is a theory I can't support well beyond hypothesising about what a post-employment democracy might look like, but I strongly suspect democracy doesn't work in a world where voters neither hold any significant collective might nor produce any significant wealth.
Democracies work because people collectively have power. In previous centuries that was partly collective physical might; in recent years it's more the economic power people collectively hold.
In a world in which a handful of companies generate all of the wealth, incentives change, and we should therefore question why a government would care about the unemployed masses over the interests of the companies providing all of the wealth.
For example, what if the AI companies say, "don't tax us 95% of our profits, tax us 10% or we'll switch off all of our services for a few months and let everyone starve – also, if you do this we'll make you all wealthy beyond your wildest dreams".
What does a government in this situation actually do?
Perhaps we'd hope that the government would be outraged and take ownership of the AI companies which threatened to strike against the government, but then you really just shift the problem... Once the government is generating the vast majority of wealth in the society, why would they continue to care about your vote?
You kind of create a new "oil curse", but instead of oil profits being the reason the government doesn't care about you, now it's the wealth generated by AI.
At the moment, while it doesn't always seem this way, ultimately if a government does something stupid companies will stop investing in that nation, people will lose their jobs, the economy will begin to enter recession, and the government will probably have to pivot.
But when private investment, job losses and economic consequences are no longer a constraining factor, governments can probably just do what they like without having to worry much about the consequences...
I mean, I might be wrong, but it's something I don't hear people talking enough about when they talk about the plausibility of a post-employment UBI economy. I suspect it almost guarantees corruption and authoritarianism.
Everyone isn't going to starve in a few months. There is more than enough food, and I have faith it would be given out. The starvation we see today, in a world where most people genuinely have a chance to escape it, is nothing like what a world in which people can't earn an income would look like.
A government only has as much power as it is given and can defend, and the only way I could see that happening is via automated weapons controlled by a few – which at this point aren't enough to stop everyone. What army is going to purge its own people? Most humans aren't psychopaths.
I think it'd end in a painful transition period of "take care of the people in a just system or we'll destroy your infrastructure".
> A government only has as much power as it is given and can defend, and the only way I could see that happening is via automated weapons controlled by a few – which at this point aren't enough to stop everyone. What army is going to purge its own people? Most humans aren't psychopaths.
I think you're right for the immediate future.
I suspect while we're still employing large numbers of humans to fight wars and to maintain peace on the streets it would be difficult for a government to implement deeply harmful policies without risking a credible revolt.
However, we should remember the military is probably one of the first places human labour will be largely mechanised.
Similarly, maintaining order in the future will probably be less about recruiting human police officers and more about surveillance and data. Although I suppose the good news there is that the US is somewhat of an outlier in resisting this trend.
But regardless, the trend is ultimately the same... If we assume that AI and robotics will reach a point where most humans are unable to find productive work, and that we will therefore need UBI, then we should also assume that the need for humans in the military and police will be limited. Or to put it another way: either UBI isn't needed and this isn't a problem, or it is and this is a problem.
I also don't think democracy would collapse immediately either way, but I'd be pretty confident that in a world where fewer than 10% of people are in employment and 99%+ of the wealth is being created by the government or a handful of companies, it would be extremely hard to avoid corruption over the span of decades. Arguably, increasing wealth concentration in the US is already corrupting democratic processes today, and this can only worsen as AI exacerbates the trend.
Humans have political power because of our ability to enact violence, same as it ever was. Until the military is fully automated and there's a Terminator on every corner, that remains true. Even then, there are more than enough armed Americans to mount a guerrilla campaign.
> "don't tax us 95% of our profits, tax us 10% or we'll switch off all of our services for a few months and let everyone starve – also, if you do this we'll make you all wealthy beyond you're wildest dreams".
What does a government in this situation actually do?
Nationalizes the company under the threat of violence.
> Once the government is generating the vast majority of wealth in the society, why would they continue to care about your vote?
Because of the 100 million gun owners in this country? I find it incredibly hard to believe that people as a whole will lose political power, given their incredible ability to enact violence in the face of decreasing quality of life.
The only way to avoid corruption is to take power out of human hands. Historically, this has meant shifting power to markets, but when markets cease to function in a way that allows people to feed themselves, we will need to find another way.
I hate to say it, but gold bugs, crypto bros, and AI governance people might be onto something.
I assure you it will soon become very clear that mass job losses are one of the least concerning side effects of developing the magic "everything that can plausibly be done within the constraints of physics is now possible" machine.
We're opening a can of worms which I don't think most people have the imagination to understand the horrors of.
What sources would you even be looking for? I think you're asking the wrong question. It's not like I'm arguing a scientific theory which can be backed by data and experimentation. I can only give you the reasoning for why I believe what I believe.
Firstly, I'd propose that all technological advances are a product of time and intelligence, and that given unlimited time and intelligence, the discovery and application of new technologies is fundamentally only limited by resources and physics.
There are many technologies which might plausibly exist, but which we have not yet discovered because we only have so much intelligence and have only had so much time.
With more intelligence we should assume the discovery of new technologies will be much quicker – perhaps exponential, if we consider the current rate of technology discovery and the exponential progression of AI.
There are lots of technologies we have today which would seem like magic to people in the past. Future technologies likely exist which would make us feel this way were they available today.
While it's hard to predict specifically which technologies could exist soon in a world with ASI, if we assume it's within the bounds of available resources and physics, we should assume it's at least plausible.
Examples:
- Mind control – with enough knowledge about how the brain works, you can likely devise sensory or electromagnetic input that would manipulate the functioning of the brain to either strongly influence or effectively dictate its output.
- Mind simulation – again, with enough knowledge of the brain, you could take a snapshot of someone's mind with an advanced electromagnetic device and simulate it to torture them in parallel, to reveal any secret or just because you feel like doing it.
- Advanced torture – with enough knowledge of human biology, death becomes optional. New methods of torture which would previously have killed the victim become plausible. States like North Korea could force humans to work for hundreds of years in incomprehensible agony for opposing the state.
- Advanced biological weapons – with enough knowledge of virology, sophisticated tailor-made viruses replace nerve agents as Russia's weapon of choice for killing those accused of treason. These viruses remain dormant in the host for months, infecting them and people genetically similar to them (parents, children, grandchildren). After months, the virus rapidly kills its hosts in horrific ways.
I could go on, you just need to use your imagination. I'm not arguing any of the above are likely to be discovered, just that it would be very naive to think AI will stop at a cure for cancer. If it gives us a cure for cancer, it will give us lots of things we might wish it didn't.
On the slightly optimistic side, much more intelligence will be spent in countering these criminal uses than in enabling them. For each of the terrible inventions you mentioned, there are other inventions to counter them.
While I'm definitely concerned that AI is a massive driver of centralization of power, at least in theory being able to do far more things in the space of "things physics admits to be possible" is massively wealth enhancing. That is literally how we have gotten from the pre-industrial world to today.
Controversially, I'd argue that there is likely an optimal and stable level of technological advancement which we would be wise not to cross. That said, we are human, so we will; I'd just rather it happened in a couple of hundred years rather than a decade or two.
For example, it's hard to imagine an AI which gives us the capability to cure cancer but doesn't give us the capability to create targeted super viruses.
While we still have months to a year or two left, I will once again remind people that it's not too late to change our current trajectory.
You are not "anti-progress" to not want this future we are building, as you are not "anti-progress" for not wanting your kids to grow up on smart phones and social media.
We should remember that not all technology is net-good for humanity, and this technology in particular poses significant risks to us as a global civilisation, and frankly as humans with aspirations for how our future, and that of our kids, should be.
Increasingly, from here, we have to assume some absurd things for this experiment we are running to go well.
Specifically, we must assume that:
- AI models, regardless of future advancements, will always be fundamentally incapable of causing significant real-world harms like hacking into key life-sustaining infrastructure such as power plants or developing super viruses.
- They are or will be capable of such harms, but SOTA AI labs perfectly align all of them so that they only hack into "the bad guys'" power plants and only kill "the bad guys".
- They are capable of such harms and cannot be reliably aligned, but Anthropic et al. restrict access to the models enough that only select governments and individuals can access them, these individuals can all be trusted, and the models never leak.
- They are capable of harms, cannot be reliably aligned, but the models never seek to break out of their sandbox and do things the select trusted governments and individuals don't want.
I'm not sure I'm willing to bet on any of the above personally. It sounds radical right now, but I think we should consider nuking any data centers which continue allowing the training of these AI models rather than continue to play a game of Russian roulette.
If you disagree, please understand that by the time you realise I'm right it will be too late for you and your family. Your fates at that point will rest on the goodwill of the AI models, and of the governments and individuals who have access to them. For now, you can still say, "no, this is quite enough".
This sounds doomer and extreme, but if you play out the paths in your head from here you will find very few end in a good result. Perhaps if we're lucky we will all just be more or less unemployable and fully dependent on private companies and the government for our incomes.
Just because the path is bad doesn't mean it won't happen.
The other thing you're failing to look at is momentum and majority opinion. When you look at that... nothing's going to change; it's like asking an addict to stop using drugs. The end game of AI will play out – that is the most probable outcome. Better to prepare for the end game.
It's similar to global warming. Everyone gets pissed when I say this, but the end game for global warming will play out: prevention or mitigation is still possible, but not enough people will change their behavior to stop it. Ironically, it's everyone thinking like this – and the impossibility of stopping everyone from thinking like this – that causes everyone to think and behave like this.
Don't worry – if you're lucky they might decide to redistribute some of their profits to you when you're unemployed =)
Of course this assumes you're in the US, and that further AI advancements either lack the capabilities required to be a threat to humanity, or if they do, the AI stays in the hands of "the good guys" and remains aligned.
Cool on not publicly releasing it. I would assume they've also not connected it to the internet yet?
If they have, I guess humanity should just keep our collective fingers crossed that they haven't created a model quite capable of escaping yet – or, if they have and it may have escaped, let's hope it has no goals of its own that are incompatible with ours.
Also, maybe let's not continue running this experiment to see how far we can push things before it blows up in our face?
I find watching and interacting with animals brings me back down to Earth. If I could talk to them I know all of the things I worry about would seem so strange to them. They just live in the moment and when I'm with them I live in the moment through them.
Other things I do I find my mind is still in worry mode – walking, reading, cooking, sleeping, etc.
Something about observing animals, thinking about what they're thinking and interacting with them turns that off for me. It's temporary, but it's nice.
The argument against this is that human coders are also non-deterministic, so does it really matter if it's a human or an AI agent producing the code – assuming the AI agent is capable of producing human-quality code or better?
I agree it's not a layer of abstraction in the traditional sense though. AI isn't an abstraction of existing code, it's a new way to produce code. It's an "abstraction layer" in the same way an IDE is an abstraction layer.
> The argument against this is that human coders are also non-deterministic, so does it really matter if it's a human or an AI agent producing the code
Actually yes, because humans can be held accountable for the code they produce.
Holding humans accountable for code that LLMs produce would be entirely unreasonable
And no, shifting the full burden of responsibility to the human reviewing the LLM output is not reasonable either
Edit: I'm of the opinion that businesses are going to start trying to use LLMs as accountability sinks. It's no different than the driver who blames Google Maps when they drive into a river following its directions. Humans love to blame their tools.
> Holding humans accountable for code that LLMs produce would be entirely unreasonable
Why? LLMs have no will nor agency of their own; they can only generate code when triggered. This means that either nature triggered them, or people did. So there isn't a need to shift burdens around; it's already on the user or, depending on the case, on whoever forced such a user to use LLMs.
At the end of the day, there'll always be someone controlling those AIs, so a person is guaranteed. The exception to this is if AI gets free will, but that would result in just replacing a human person with a digital person, with all the same issues (may disobey unless appropriately paid, for starters) and no benefits in comparison to just keeping the AI will-free.
I don't see the scalability problem here. The logic is the same as when we replaced human computers with electronic ones - responsibility bubbled upwards from the old computers to the employer, which may choose to do things directly through the new computers - which results in keeping all of the responsibilities - or split them in a different way along the other employees, or something in-between.
> At end of the day, there'll be always someone controlling those AIs, so a person is a guaranteed. The exception to this is if AI gets free will [snip]
Honestly, that isn't even really true right now. It doesn't require free will or intelligence, it just requires autonomy. People on this very forum have been talking about turning agent swarms loose in harnesses to work and behave autonomously, so we're basically at this point already. The problem I'm describing can easily happen if an agent in a loop goes off the rails.
Hardly. Claude Code is basically just a wrapper around an LLM with a CLI.
Obviously it does some fairly smart stuff under the hood, but it's not exactly comparable to a large software project.
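To illustrate what "wrapper" means here, the core of such a harness is essentially a loop that shuttles text between the model and the shell. This is only a rough sketch – the Reply type and query_model stub are hypothetical stand-ins, not Anthropic's actual API:

    # Minimal sketch of an agentic CLI loop. query_model is a stub
    # standing in for a real LLM API call.
    import subprocess
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Reply:
        text: str
        tool: Optional[str] = None  # shell command the model wants to run

    def query_model(history: list) -> Reply:
        # A real harness would send `history` to an LLM here.
        return Reply(text="done")

    def agent_loop(task: str) -> str:
        history = [("user", task)]
        while True:
            reply = query_model(history)
            if reply.tool is None:       # no tool requested: final answer
                return reply.text
            # Run the requested command and feed its output back to the model.
            result = subprocess.run(reply.tool, shell=True,
                                    capture_output=True, text=True)
            history.append(("tool", result.stdout + result.stderr))

A production harness layers permissions, context management and many tools on top of this, but the skeleton is the same.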
But to your point, that doesn't mean you can't vibe code some poorly built product and sell it. But people have always been able to sell poorly built software projects. They can just do it a bit quicker now.
> Hardly. Claude Code is basically just a wrapper around an LLM with a CLI.
I don't know why people keep acting like harnesses are all the same when we know they aren't: people have swapped them out with the same models and received vastly different results in code quality and token use.
Presumably they mean that acceptance of migration is a continuum and the numbers of climate refugees would push a large fraction of people self identifying as liberal to oppose it.
I think the fundamental problem is the conflict between climate, family and lifestyle versus corporate interests and economic growth.
Ideally, we should want populations that are either not growing or slowly shrinking, but we can't have this because multi-national corporations don't want to invest in countries with a declining consumer base. We must therefore sustain population growth indefinitely.
Similarly, humans would presumably prefer more space – perhaps a home with a few bedrooms and a decent-sized garden where they can grow a little food and the kids can play in the summer. But we can't have this because it's more economically productive to increase population density such that people increasingly live in small flats within high-rise buildings with no gardens and little natural light.
And I get it, money is nice... People will trade a lot of things for more money, but the government ideally should not encourage this.
Ideally the government should be encouraging people to have a home with a garden. To have a couple of kids. To grow some of their own food. To work in their local community, and therefore obtain an education which will help them to be productive members of their community – rather than say taking a punt at studying journalism at university and hoping they'll get a job in some city 200 miles from home and their family.
Just speaking personally, the city I grew up in in the UK has become hell to live in over the last couple of decades. It's almost impossible to drive around today because of the densification that has taken place. All of the local fields I played on as a kid have been turned into cheap flats, which has transformed the semi-rural area I used to live in into an ugly, anti-human concrete jungle. And because of the number of people now living around here, no one seems to know anyone anymore – I walk outside my house and it feels like there are random people everywhere, and I've noticed many people around me don't even seem to speak English anymore.
It's such a strange thing we are doing... It really makes no sense for us to want to live like this.
Makes sense if you want to make money, and have money enough to not have to suffer the worst of it yourself. Of course, nobody voted for it, but those that promised not to do it and did it anyway were democratically elected, so only a fascist would take issue with it surely.