It'll be amazing if anyone can request any basic program they want, and totally amazing if they can request any complex one.
I cannot really envision a more empowering thing for the common person. It should really upset the balance of power.
I think we'll see, soon, that we've only just started building with code. As a lifelong coder, I cannot wait to see the day when anyone can program anything.
In my experience, most people have only the vaguest idea of what they want, and no clue about the contradictions or other problems inherent in their idea. That is the real value a good software engineer provides - eliciting and interpreting the requirements of a person who doesn't understand software, so that someone who does can build the right thing.
How would this "anyone" be able to evaluate whether the program they requested is correct?
Automatic program generation from human language feels like the same problem as machine translation between human languages. I have an elementary understanding of French, so when I see a passage machine translated into French (regardless of the software, Google Translate or DeepL), I cannot find any mistakes; I may even learn a few new words. But to a professional translator, the passage is full of mistakes, non-idiomatic expressions, and other weirdness. You aren't going to see publishers putting out entirely machine translated books.
I suspect the same thing happens with LLM-written programs. The average person finds them useful; the expert finds them riddled with bugs. When the stakes are low, as with tourists who don't speak the local language, machine translation is fine. The same goes for many run-once programs written for a single, specific purpose. When the stakes are high, human craft is still needed.
We’re already using ChatGPT at work to do machine translation, because it takes weeks to get translations back for the 10 languages our application supports.
It’s not a work of literature; it’s quite technical language, and the feedback we’ve had from customers is that it’s quite good. Before this, we would never have supported a language like Czech, because the market isn’t big enough to justify the cost of translation, and Google Translate couldn’t handle large passages of text in the docs well enough.
"Our business model can't afford to pay enough translators so we have been replacing them with chatGPT, and enough of our users haven't complained that we consider it a success"
"Always" correct is a very high bar and likely unattainable. It seems much more likely that the amount of errors will trend downwards but never quite reach zero. How could it be otherwise? AIs are not magic god-machines, they have a limited set of information to work with just like the rest of us (though it might be larger than humans could handle) and sometimes the piece of information is just not known yet.
Let's say that in a few years the amount of correct code becomes 99% instead of ~80%. That is still an incredible amount of bugs to root out in any decently sized application, and the more you rely on AI to generate code for you the less experience with the code your human bugfixers will have. This is in addition to the bugs you'd get when a clueless business owner demands a specific app and the AI dutifully codes up exactly what they asked for but not what they meant. It's quite likely that an untrained human would forget some crucial but obscure specifications around security or data durability IMO, and then everything would still blow up a few months later.
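To put rough numbers on it, here is a back-of-envelope sketch, assuming a hypothetical 100,000-line application (the size is my own illustrative figure, not anything measured):

    #include <stdio.h>

    int main(void) {
        /* Hypothetical codebase size - purely illustrative. */
        const double lines = 100000.0;

        /* Faulty lines implied by "~80% correct" vs "99% correct". */
        printf("at 80%% correct: ~%.0f faulty lines\n", lines * 0.20); /* ~20000 */
        printf("at 99%% correct: ~%.0f faulty lines\n", lines * 0.01); /* ~1000 */
        return 0;
    }

Even a twentyfold improvement in correctness still leaves on the order of a thousand faulty lines to hunt down.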
Requesting a basic or complex program still requires breaking down the problem into components a computer can understand. At least for now, I haven’t seen evidence that most people are capable of this. I’ve been coding for ~15 years and still fail to describe problems correctly to LLMs.
Actually, I find it easier to debug other people's code than my own, because most bugs really only exist in your mind.
A bug is an inconsistency between what you intended a piece of code to do and the logical consequences of your design choices: for example, you thought for (i = 0; i <= n; i++) would iterate n times, but actually it iterates n+1 times, as you can ascertain without ever touching a computer. It's a purely mental phenomenon.
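A minimal C sketch of that off-by-one, runnable as-is (the variable names are mine):

    #include <stdio.h>

    int main(void) {
        int n = 5;      /* arbitrary; any n shows the same off-by-one */
        int count = 0;

        /* Intended: run the body n times. Actual: the <= bound runs it
           n+1 times, because i takes the values 0, 1, ..., n. */
        for (int i = 0; i <= n; i++) {
            count++;
        }

        printf("expected %d iterations, got %d\n", n, count); /* prints 5 vs 6 */
        return 0;
    }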
The expectation that the code will do what you intended makes it hard to understand what the code actually does. When I'm looking at someone else's code, I'm not burdened by a history of expecting it to do anything.
This is why two people working on two separate projects will get less done than if they work together on one project for a week and then on the other for a week: most bugs are shallow to anybody else's eyes.
This is a really good point -- but it only holds once you import somebody else's code into your head, which I think imposes hard constraints on the size of the code we're talking about.