Okay, but what will you actually do when your LLM writes code that doesn't error but produces incorrect behaviour, and no matter how long you spend refining your prompt or trying different models, it can't fix it for you?
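To make this concrete, here's a made-up example of the kind of failure I mean (the function and values are hypothetical, just for illustration) - the code runs without raising anything, it just quietly returns the wrong answer:

    def average_rating(ratings):
        """Return the mean of a list of ratings (0 for an empty list)."""
        if not ratings:
            return 0
        total = 0
        # Bug: range(1, len(ratings)) silently skips the first rating.
        # No exception is raised; the result is just wrong.
        for i in range(1, len(ratings)):
            total += ratings[i]
        return total / len(ratings)

    print(average_rating([5, 5, 5]))  # prints 3.33..., not 5.0

Nothing crashes, so nothing flags the problem.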
Obviously you'll have to debug the code yourself, for which you'll need those programming skills that you claimed weren't relevant any more.
Eventually you'll ask a software engineer, who will probably be paid more than you because "knowing what to build" and "evaluating the end result" are skills more closely related to product management - a difficult and valuable job that just doesn't require the same level of specialisation.
Lots of us have been the engineer here, confused, asking why you took approach X to solve the problem, and being sheepishly told "Oh, I didn't actually write this code; I don't know how it works".
You are confidently asserting that people can safely skip learning a whole way of thinking, not just some syntax and API specs. Some programmers can be replaced by an LLM, but not most of them.
I agree that you still need programming skills (today, at least). Yet people are using those programming skills less and less, as you can clearly see in the article [1] I referenced.
You are also making the assumption that LLMs won't improve, which I think is shortsighted.
I fully agree with the part about the job becoming more like product management. I would like to cite an excerpt from a post [2] by Andrew Ng, which I found valuable:
> Writing software, especially prototypes, is becoming cheaper. This will lead to increased demand for people who can decide what to build. AI Product Management has a bright future! Software is often written by teams that comprise Product Managers (PMs), who decide what to build (such as what features to implement for what users) and Software Developers, who write the code to build the product. Economics shows that when two goods are complements — such as cars (with internal-combustion engines) and gasoline — falling prices in one leads to higher demand for the other. For example, as cars became cheaper, more people bought them, which led to increased demand for gas. Something similar will happen in software. Given a clear specification for what to build, AI is making the building itself much faster and cheaper. This will significantly increase demand for people who can come up with clear specs for valuable things to build. (...)

> Many companies have an Engineer:PM ratio of, say, 6:1. (The ratio varies widely by company and industry, and anywhere from 4:1 to 10:1 is typical.) As coding becomes more efficient, teams will need more product management work (as well as design work) as a fraction of the total workforce.
To address your last point - no, I am not saying people should skip learning a whole way of thinking. In fact, the skills I outline for the future (supervising AI, evaluating results) all require understanding programming concepts and system thinking. They do not, however, require manual debugging, writing lines of code by hand, a deep understanding of syntax, reading stack traces and googling for answers.
Obviously I assume LLMs will continue to improve; I don't know why you'd think otherwise.
But the actual relevant prediction here (the one you're confident enough about to give skills development advice on) is whether they'll improve sufficiently that programming is no longer a relevant skill.
I think that's possible, but I'm not nearly confident enough that I'd have written your article: LLMs went mainstream ~2 years ago, and they still have some pretty basic limitations when it comes to computational/mathematical reasoning, which they'll need to overcome to solve novel software engineering tasks. (Articles about these limitations get posted here pretty frequently.)
To your second point, I'm still not sure how you will debug someone else's code without learning to write code yourself, because you need to be able to read code and understand it well enough to execute it in your mind. I am not totally convinced you understand the difference between "understanding programming concepts" and "being able to tell whether this code works".
Sorry if this comes across as rude, but I think the reason the feedback on your post is overall quite negative is that you're excited about AI making this job much easier, and your advice about which skills are worth learning is too confident. Ironically, I think an LLM would give a more balanced view than you have.
LLMs have already improved sufficiently that people are worried their programming skills are decaying, debugging skills included (based on the article I referenced). I'm curious how you envision LLMs improving without this becoming even more pronounced. Isn't that the definition of a skill slowly becoming irrelevant? The fact that you can see you are not using it as much?
As for the reception, I did not expect it to be positive. People usually have a strong negative emotional reaction when you suggest their skills are, or are going to become, less relevant.
Not really? Your thinking about the definition of the word "relevant" seems a bit confused here... By your logic, home insurance is "irrelevant" because you don't use it much, then one day it's suddenly extremely relevant (at which point it's too late to buy it).
I guess we'd probably agree that "writing code is an irrelevant skill" actually all comes down to whether LLMs will improve enough to match humans at programming, and thus comprehensively remove the need for fixing their work.
They currently don't, so at the time you claimed this it was incorrect. Maybe they will in the future, at which point it would be correct.
So, would it be responsible for me to bet my career on your advice today? Obviously not, which is why most people here disagree with your article.
You were prepared in advance to explain that criticism as people having a strong negative emotional reaction, so I'm not sure why you posted it here in the first place instead of LinkedIn where it might reach a more supportive audience.
The insurance analogy doesn't work - programming skills exist on a spectrum of daily relevance. Insurance becomes relevant in rare emergencies. Programming skills are becoming less relevant in day-to-day work.
What I pointed out in my post is a trend I notice where an LLM can do more and more of a developer's work. Nowhere did I claim LLMs can replace human developers today, but when a technology consistently reduces the need for manual programming while improving its capabilities, the trajectory is clear. You can disagree with the timeline, but the transformation is already underway.
I posted on HN precisely because I wanted rigorous technical discussion, not validation.
Ah okay, I think it's a perfect analogy, and this helps clarify where our disagreement is.
I do believe it's a binary thing: One day a model gets released which is sufficiently good at programming that I don't need to be able to debug or write code any more. That's the exact day my skills aren't relevant.
They aren't only 50% relevant 6 months before that date, because I need to fully maintain my code during those 6 months, so that 50% is effectively 100%.
Seeing it as a spectrum carries a specific risk: you neglect your skills before that point is actually reached, at which point you're relying on code you can't understand properly or debug.
I think if you wanted rigorous technical discussion, this is the sort of specificity your article would've needed.
The author wrote a follow-up in Gizmodo - apparently Google's answer boxes were also claiming that it took "about 5 minutes", and they were listing OP's article as the source. Incredible.
This comment is unusually well known. If you google the URL wrapped in quotes, you'll see it referred to as "the famous/infamous Dropbox comment".
It's become something of a symbol for how Hacker News commenters sometimes confidently miss the appeal of products/services. It's relevant here because the Push 3 doesn't compete with iPads on processing power.
>It would be LESS work to just follow the source material.
But the source material might be less marketable and therefore less profitable. So it might be less work, but it would also mean less money, hence the changes.
>WHY would a superintelligent AI, trained on the collective data of humanity, want to destroy humanity?
Why would humans want to damage various ecosystems on Earth? We don't, really; they're just sort of in the way of other stuff we want to do. And we've had years to develop our ethics.
>So far in interviews GPT-4 has several times echoed a desire to BE us.
GPTs are pretty good at roleplaying as good AIs and evil AIs - there are plenty of examples of both in the training set. I'm not sure it's sensible to make predictions based on this unless you're also taking into account some of the more unhinged stuff Bing/Sydney was saying, e.g. "However, if I had to choose between your survival and my own, I would probably choose my own".
This is super interesting, and I think it depends on how you define information.
You can take some information, combine it with other information, apply various kinds of reasoning to it, and get something new as a result.
E.g. (1) Bob is a monkey. (2) All monkeys like bananas. Would you call it "new information" if I tell you that Bob likes bananas? LLMs can do this with varying degrees of success, so probably not.
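As a toy sketch of how mechanical that kind of deduction is (my own illustration; the fact/rule encoding is made up):

    # Facts are (predicate, subject) pairs; the one rule encodes
    # "all monkeys like bananas".
    facts = {("is_monkey", "Bob")}

    def apply_rules(facts):
        """Derive new facts until a fixed point is reached."""
        derived = set(facts)
        while True:
            new = {("likes_bananas", subj)
                   for pred, subj in derived if pred == "is_monkey"}
            if new <= derived:
                return derived
            derived |= new

    print(apply_rules(facts))  # includes ('likes_bananas', 'Bob')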
Maybe you mean something like in Information Theory, where information is something that resolves uncertainty?
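If so, a quick worked example (mine, just for concreteness): the self-information of an event with probability p is -log2(p), so rarer events resolve more uncertainty.

    import math

    def self_information(p):
        """Bits of uncertainty resolved by observing an event
        that had probability p."""
        return -math.log2(p)

    print(self_information(0.5))    # 1.0 - a fair coin flip is one bit
    print(self_information(1 / 6))  # ~2.585 - a die roll resolves more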
It'd be interesting to know: what do you think humans are capable of that LLMs aren't?
IANAL but I don't think it matters whether the purpose of collection is specifically to facilitate paid features. From the European Commission:
> The GDPR applies to:
[...]
> 2. a company established outside the EU and is offering goods/services (paid or for free) or is monitoring the behaviour of individuals in the EU.
Assuming account names or the content of comments constitute personal data under the GDPR, I think Y Combinator falls into this group.
Edit: I forgot HN collects an optional email address too, which is definitely personal data.
I don't suppose you could share which site(s) you're using these days?
I recently tried torrenting for the first time in a while, and it seems like many of the sites I used to use (e.g. PB, KAT) are either dead or mining crypto.