The cynical part of me thinks that software has peaked. New languages and technology will be derivatives of existing tech. There will be no React successor. There will never be a browser that can run something other than JS. And the reason is that in 20 years new engineers will not know how to code anymore.
The optimist in me thinks that the clear progress in model quality shows this is wrong. Agentic software development is not a closed loop.
I often find myself wondering about these things in the context of Star Trek... like... could Geordi actually code? Could he actually fix things? Or did the computer do all the heavy lifting? They asked "the computer" to do SO MANY things that really parallel today's direction with "AI". Even Data would ask the computer to do gobs of simulations.
Is the value in knowing how to do an operation by hand, or is the value in knowing WHICH operation to do?
That's an interesting possibility to consider. Presumably the effect would also be compounded by the fact that there's a massive amount of training data for the incumbent languages and tools, further handicapping new entrants.
However, there will be a large minority of developers who will eschew AI tools for a variety of reasons, and those folks will be the ones to build successors.
We have witnessed, over the past few years, an "AI fair use" Pearl Harbor sneak attack on intellectual property.
The lesson has been learned:
In effect, intellectual property used to train LLMs becomes anonymous common property. My code becomes your code with no acknowledgement of authorship or lineage, with no attribution or citation.
The social rewards (e.g., credit, respect) that often motivate open source work are undermined. The work is assimilated and resold by the AI companies, reducing the economic value of its authors.
The images, the video, the code, the prose, all of it stolen to be resold. The greatest theft of intellectual property in the history of Man.
Copyright was always supposed to be a bargain with authors for the ultimate benefit of the public domain. If AI proves to be more beneficial to the public interest than copyright, then copyright will have to go.
You can argue for compromise -- for peaceful, legal coexistence between Big Copyright and Big AI -- but that will just result in a few privileged corporations paywalling all of the purloined training data for their own benefit. Instead of arguing on behalf of legacy copyright interests, consider fighting for open models instead.
In a larger historical context, nothing all that special is happening either way. We pulled copyright law out of our asses a couple hundred years ago; it can just as easily go back where it came from.
There is another lunatic possibility: the AI explosion yields an execution model and programming paradigm that renders most preexisting approaches to coding irrelevant.
We have been stuck in the procedural treadmill for decades. If anything this AI boom is the first major sign of that finally cracking.
Friction is the entire point in human organizations. I'd wager AI is being used to build boondoggles - apps that have no value. They are being found out fast.
On the other side of things, my employer decided they did not want to pay for a variety of SaaS products. Instead, a few of my colleagues got together and built a tool using Trino, OPA, and a backend/frontend to reduce spend by millions per year. We used Trino as a federated query engine that calls back to OPA for policy decisions, which are updated via code or a frontend UI. I believe 'Wiz' does something similar, but they're security focused and have a custom eBPF agent.
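For anyone curious how the Trino-to-OPA callback might look, here's a minimal sketch of a service asking OPA for an allow/deny decision over its REST Data API. The endpoint URL, the `trino/allow` policy path, and the input field names are all assumptions for illustration; the `{"input": ...}` request shape and `{"result": ...}` response wrapping are how OPA's Data API works.

```python
import json
from urllib import request

# Assumed deployment details: OPA's default port and a hypothetical
# "trino/allow" policy path -- adjust both for a real setup.
OPA_URL = "http://localhost:8181/v1/data/trino/allow"

def build_opa_input(user: str, catalog: str, table: str) -> dict:
    """Shape the question the way OPA's Data API expects: a JSON
    document under the top-level "input" key."""
    return {"input": {"user": user, "catalog": catalog, "table": table}}

def is_allowed(decision_doc: dict) -> bool:
    """OPA wraps the policy's answer in a "result" field; a missing
    result means no rule matched, which we treat as deny."""
    return bool(decision_doc.get("result", False))

def check_access(user: str, catalog: str, table: str) -> bool:
    """POST the input document to OPA and return the decision."""
    payload = json.dumps(build_opa_input(user, catalog, table)).encode()
    req = request.Request(OPA_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return is_allowed(json.load(resp))
```

The nice property of this split is that policy changes (new tables, new teams) are pushed to OPA without redeploying the query layer.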
Also on the list to knock out, as we're not impressed with Wiz's resource usage.
Shouldn’t that mean software development positions will lean more towards research, if you need new algorithms but never need anyone to integrate them?
> New languages and technology will be derivatives of existing tech.
This has always been true.
> There will be no React successor.
No one needs one, but you can have one just by asking the AI to write it, if that's what you need.
> There will never be a browser that can run something other than JS.
Why not? Just tell the AI to make it.
> And the reason for that is because in 20 years the new engineers will not know how to code anymore.
They may not need to know how to code but they should still be taught how to read and write in constructed languages like programming languages. Maybe in the future we don't use these things to write programs but if you think we're going to go the rest of history with just natural languages and leave all the precision to the AI, revisit why programming languages exist in the first place.
Somehow we have to communicate precise ideas between each other and the LLM, and constructed languages are a crucial part of how we do that. If we go back to a time before we invented these very useful things, we'll be talking past one another all day long. The LLM having the ability to write code doesn't change that we have to understand it; we just have one more entity that has to be considered in the context of writing code. e.g. sometimes the only way to get the LLM to write certain code is to feed it other code, no amount of natural language prompting will get there.
> Maybe in the future we don't use these things to write programs but if you think we're going to go the rest of history with just natural languages and leave all the precision to the AI, revisit why programming languages exist in the first place.
> The LLM having the ability to write code doesn't change that we have to understand it; we just have one more entity that has to be considered in the context of writing code. e.g. sometimes the only way to get the LLM to write certain code is to feed it other code, no amount of natural language prompting will get there.
You don't exactly need to use PLs to clarify an ambiguous requirement, you can just use a restricted unambiguous subset of natural language, like what you should do when discussing or elaborating something with your coworker.
Indeed, as with terms & conditions pages, which people always skip because they're written in "legal language", a restricted unambiguous subset of natural language is always much more verbose and unwieldy than "incomprehensible" mathematical notation & PLs, but it's not impossible to use.
With that said, the previous paragraph will work if you're delegating to a competent coworker. It should work on "AGI" too if it exists. However, I don't think it will work reliably in present-day LLMs.
This cuts both ways. If you were an average programmer in love with FreePascal 20 years ago, you'd have to trudge in darkness, alone.
Now you can probably create a modern package manager (uv/cargo), a modern package repository (Artifactory, etc) and a lot of a modern ecosystem on top of the existing base, within a few years.
10 skilled and highly motivated programmers could probably attempt what Linus did in 1991 and actually see it through now, whereas between 1998 and today we were basically bogged down in Windows/Linux/macOS/Android/iOS.
they're (Anthropic) also the ones who have been routinely rug-pulling access from projects that try to jump onto the Claude Code API, pushing those projects to OpenAI.
I'd like a reference for it being rug pulling. What happened with OpenCode certainly wasn't rug pulling, unless Anthropic asked them to support using a Claude subscription with it.
The carbon in our atmosphere is already in the atmosphere and it won't go away. So there really is nothing more you can do other than take it out of the air and store it somewhere for as long as you can. Trees are a good way to store it until we have better technology/can handle climate change better
No. Pick a timeline that is influential, short or long. If it's long, trees don't capture carbon. Not in any scenario of population growth, which inevitably leads to some edgelord reductionist "maybe we should all die then, for what we're doing to Gaia!" tripe.
This "climate is 100 years" framing, while using ice core samples to make your case, is not in support of science. It is in support of politicians.
The latter I'm personally getting sick of, and the people who can't separate the two harm the former.
We're not really in a scenario of population growth - there is a direct correlation between being relatively well off and having fewer kids. Basically every developed nation hit peak population a long time ago, and the faster we pull other nations out of extreme poverty, the sooner their populations will start falling too.
Even the most fatalistic estimates have world population peaking at 10 billion.
Yes, and the scary thing is that soon the atmospheric carbon PPM will be high enough to start affecting how we think, act, and feel on a day to day basis.
Surprisingly, no. Humans adapt to higher CO2 concentrations over a period of days to weeks. Submarines run as high as 5000ppm, which is way above normal atmospheric concentration.[1]
Many indoor environments are above 1000ppm.
This seems to be like high-altitude adaptation: it's going back and forth between concentrations that causes problems, because at moderate, fluctuating concentrations the adaptation never happens.
congrats on the launch! we're building type.com and we would love to use this - shoot me an email: k at type dot com
our use case is to allow other users to build lightweight internal apps within your chat workspace (say like an applicant tracking system per hire etc.)
love the shout but git-ai is decidedly not trying to replace the SCMs. there are teams building code review tools (commercial and internal) on top of the standard, and I don't think it'll be long before GitHub, GitLab, and the usual suspects start supporting it, since folks in the community have already been hacking it into Chrome extensions - this one got play on HN last week: https://news.ycombinator.com/item?id=46871473
Spinning up temporary VMs/stateful machines is going to be super valuable in the next year or two. Heroku not jumping on this just shows the state of Salesforce. Absolutely inept. I foresee Slack going down a similar path of enshittification.
haha we actually launched it today. Point taken though! It's not a video, though, just an interactive widget. Instead of scrolling you press enter. Once we launch I'm sure we'll have something more traditional.
> Use strict linting and formatting rules to ensure code quality and consistency. This will help you and your AI to find issues early.
I've always advocated for using a linter and consistent formatting, but now I'm not so sure. What's the point? If nobody is going to bother reading the code anymore, I feel like linting doesn't matter. I think in 10 years a software application will be very obfuscated implementation code with thousands of very solidly documented test cases, and, much like compiled code, how the underlying implementation looks or is organized won't really matter.
That's the opposite of my experience. I've never read and re-read code more than I do today. The new hires generate 50x more code than they used to, and you _have_ to check it or face compounding production issues (been there, done that). And the errors can now be anywhere; before, you more or less knew what the person writing the code was thinking and could understand why some errors were made. LLM errors can hide _anywhere_, so you have to check it all.
Isn't that a losing proposition? Or do you get 50 times the value out of it too? In my experience, the more verbose the code is, the less thought out it is. Lots of changes? Cool, now polish some more and come back when it's below a 100-line change, excluding tests and docs. I don't dare touch it before.
I agree, but I'm shouting at the cloud. Stuff needs to be done, and it seems to work at first, so either I abandon quality and let things rot, or I read everything and flag each code smell.
I too use AI, but mostly to generate scripts (100-200 line scripts are the most useful use of AI, imho), test _cases_ (I write the test itself; the data inside is generated), and HTML/CSS/JS shenanigans (the logic I code; at presentation I'm inferior to any random guy on the internet, so I might as well use an AI). I also use it for stuff that never ends up in a repository, for exploration/proofs of concept and out-of-scope tests (I like to understand how stuff works; that helps), or to summarize PowerPoint presentations so I can do actual work during 60-person "meetings" and still get the point.
You say this as if venture capital is lying there on the ground for anyone to pick up. What VC do you know that aren't investing in companies that want to grow very quickly (or in companies that they then force to grow quickly)?
When I was working for the automotive industry their models and projections suggested that ubiquitous self-driving cars would reduce the total market for cars to ~15% of its current size. As in, sales would drop by 85%. The addressable market for automotive OEMs is set to undergo a dramatic reduction in size.
Few automotive companies have a coherent plan for how they were going to survive that existential risk.
People will still be doing about the same number of miles per year, and cars will still last a similar number of miles. So if a ride share car does 10x as many miles per year we need 1/10 the cars, but they also last 1/10 as long, so it evens out.
Sure, they'll get slightly more miles out of a ride-share car, but the number of miles will also go up due to deadheading and because cheaper/better transportation causes people to use more of it.
Sorry, but at the current price of Waymo rides that just can't happen. They become more expensive than leasing a car at something like 8 rides per month (as in, get into a Waymo, expect to pay $60 per ride)
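The ~8 rides/month break-even is easy to sanity-check. The lease cost below is an assumed round number chosen to make the comment's threshold concrete, not a real quote.

```python
# Break-even sketch using the $60/ride figure from the comment; the
# monthly lease cost is an assumed round number, not a real quote.
RIDE_COST = 60           # dollars per Waymo ride
LEASE_PER_MONTH = 480    # assumed all-in monthly cost of a leased car

breakeven_rides = LEASE_PER_MONTH / RIDE_COST
print(breakeven_rides)   # 8.0 -- matching the comment's ~8 rides/month

# A daily commuter needs far more rides than that:
commute_rides = 2 * 22   # two rides per workday, ~22 workdays a month
print(commute_rides > breakeven_rides)
```

So at current prices, anyone riding more than a handful of times a month is better off leasing; per-ride cost would have to fall severalfold before replacing a commuter's car.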
If they can figure out how to really take advantage of economies of scale, and drive the costs down quite a lot -- the desirability of car ownership will drop dramatically.
Everyone I know under 40yo already professes to hate driving and hate car ownership.
Owning a car and living somewhere you have to use it for day to day everything is tedious. But the option of one for the weekend, trips out of town, into nature is ultra valuable, enough so that it's worth it to have a car sitting doing nothing during the week for us, even in a well connected large city, in a walkable area.
At present or I suspect future costs, any kind of taxi for an out of town trip (without any rail option) of 50-100 miles is way too expensive to consider, we'd sooner hire a car, if it was slicker and more convenient. But hiring a car anywhere but an airport terminal needs a trip to the hire place, and needs to start and finish when they're open to avoid spending an extra day or two of hire. Plus time taken on paperwork and insurance faff could easily be an hour.
At worst you can just pay extra to have a smaller or more luxurious private self driving taxi vs. something more like a bus, shared with others. The appeal of owning and having to maintain something like this is nil. You're not in control, there's no ownership of the driving experience, and if appropriately compliant with the law, they should all drive the same speed.
Guaranteed availability and being able to leave stuff in the vehicle would be the main draws. Even privately owned, they'll still have subscription fees.