
The “figuring out” is the struggle IMHO. I don’t know if struggle is the perfect term, but that’s the part that IS the active learning. Doing a query and getting the response is very passive and not engaging. Trying to actively understand how the thing works is the process of learning that is slowly being replaced by AI. As the author says, it is pretty hard to keep doing that process when you know you could just query Claude and have something. But something is lost here: you didn’t actually acquire the knowledge to the same extent. In the worst case you learnt nothing.

You can implement a full HTTP server from scratch without learning one bit of the HTTP spec, just by asking the AI tool you’re using to correct itself until tests pass. At the end you have an HTTP server, but you didn’t grow doing so.
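
For a sense of what “one bit of the HTTP spec” covers, here is a minimal sketch of my own (not from the thread; error handling omitted): the request lines, headers, and CRLF framing you never internalize when the agent iterates against the tests for you.

    /* Minimal sketch: a bare-bones HTTP responder at the socket level.
       Illustrative only; error handling omitted for brevity. */
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void) {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);
        bind(srv, (struct sockaddr *)&addr, sizeof addr);
        listen(srv, 8);
        for (;;) {
            int c = accept(srv, NULL, NULL);
            char req[4096] = {0};
            read(c, req, sizeof req - 1);   /* request line + headers */
            /* Per RFC 9112: status line, headers, blank line, then body;
               each line terminated by CRLF. */
            const char *rsp =
                "HTTP/1.1 200 OK\r\n"
                "Content-Type: text/plain\r\n"
                "Content-Length: 2\r\n"
                "\r\n"
                "ok";
            write(c, rsp, strlen(rsp));
            close(c);
        }
    }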

HN front page has a story about Bun being rewritten in Rust. How much Rust did the author of that PR learn in the process? I would say very little. If they had done it without AI, they would very likely be a Rust expert by the end, given the complexity and size of the codebase.


I do have an actual diagnosis, and I had the same experience over the past year: early coding harnesses at the beginning of the year, then Claude Code since its release. But after 1+ year going in that direction I really don’t want to continue. The novelty is gone, dealing with AI now feels frustrating and boring, and I miss engaging deeply with the actual lower-level technical challenges. I do not want to manage fleets of agents. I do not want to rediscover for the hundredth time that, in fact, all this time an agent took shortcuts in acceptance tests I rely upon and I didn’t catch it. Or once again get the agent to understand why and what I want it to do after its context got bloated and it started to drift completely.

While I got artifacts I can use (libraries, tools, docs), including some things that I’m pretty confident are SoA, I do not feel satisfied anymore knowing that I used a model to generate them, even if I was the one designing every part. I feel that I’m lying anytime I come to a colleague to share a new cool tool I have made. And I do not feel that relying on AI actually helped me deal with my executive function issues.

YMMV but I’m personally feeling burnt out with AI coding agents and ready to go back to the old ways for my next personal project


I have done a lot of introspection on this and realized that I'm very much driven by intrinsic rewards more so than extrinsic.

I got into coding over a decade before it was my career because of the exploration, learning, and puzzle/challenge aspect.

Every time I have tried to be extrinsically driven (career or OSS wise) it's never worked out anyway. I could have done more to make it successful but I never cared about getting validation or getting users for my stuff (and the stress that brings).

I've been lucky that up until this point, the intrinsic rewards I have gotten from my job have aligned with company goals.

LLMs take all the intrinsic wins and leave only the extrinsic ones. That makes me sad, but it is what it is, I guess.

I had been thinking about a tool for months but didn't have the time. I finally gave in and built it at work in a week with LLM tokens. It worked fantastically. But I felt no accomplishment. It felt just the same as if I had downloaded the tool from someone else's repo (one with an overly eager maintainer who would implement my GitHub issue requests).

The hard part for me is ignoring LLMs in my free time to try and keep some of the intrinsic rewards to myself, without being annoyed that I could do it faster if I just "gave in".


Hard agree about the intrinsic motivation. The intrinsic/extrinsic distinction is an unspoken assumption in a lot of conversations about AI and work in general. Not everyone is motivated by money and status.

I do believe you can use LLMs while maintaining the intrinsic rewards of programming. For me, right now that means writing code by hand and using LLMs primarily for research, documentation, and brainstorming. Sometimes I ask it to write a piece of code just so that I can see what it comes up with and maybe learn something from it. I'm also planning on experimenting with coding agents, but I will probably have it work in its own parallel repo and hand-pick the changes I want to keep.

I think a "late adopter" mindset is actually beneficial. It allows you to focus on fundamental skills that will never be outdated, and you get the benefits of new technologies once they mature.


> But I felt no accomplishment. It felt just the same as if I had downloaded the tool from someone else's repo (one with an overly eager maintainer who would implement my GitHub issue requests).

I get that. I recently watched a "talking head" style video by javid9x, where he said something along the lines of disconnecting from the code emotionally [0]. He still has to get into the code to understand it. I get the same feeling; however, for me, it feeds my curiosity and my need for exploration. At least for now, I might add.

[0]: https://youtu.be/1qjn1QRxlng?si=_75-J51UnZ0eJyb7&t=705


That’s exactly it! There is no feeling of accomplishment whatsoever, because we aren’t really accomplishing anything. The LLM is doing all the work. Out pops an application, but it might as well have been written by someone else, because it was, but also it wasn’t!

It’s great that an application now exists where there wasn’t one before, but it’s hollow because I didn’t make it. Nobody made it! It just exists now with nothing actually accomplished by anyone. It’s a very spooky way to conjure things up.


If that's how you feel, maybe the applications you're making are too simple to need any of your unique contribution.

The answer could be to just push further, and try solving harder problems.


> LLMs take all the intrinsic wins and leave only the extrinsic ones.

Not for me, but that's because I like playing around with software. So in web apps, the UX is done by me.

I'm not here to invalidate your experience. I get what you're saying and I feel it. I just also want to show the point of view where intrinsic motivation can still pop up (for some people at least).

But yea, it sucks if all of it is taken away from you. I'm sorry to hear that.


I have found the opposite to be true. I really like getting stuff done for people, and I struggled for years with all of the specific syntax and details of solving any particular problem. I have a relatively in-depth knowledge of computers, how they work, algorithms, and the like, but I always struggled with the exact details of how to do something, so it feels like a blessing to be able to spitball some conceptual understanding and get back real code. I always struggled to make my ideas real before the novelty of the inspiration wore off, unless I happened to get hyper-focused on solving a particular problem.

Now I can step through everything in a way that feels like a superpower. I have enough sense and knowledge, I think, to intuit whether the solution being provided is bloated or perhaps even unnecessary, and I can iterate on it. I've just been using Cursor for work, as I adopted a personal restriction to only use AI I can run on my own devices for personal use; but if I'm getting paid and the tools are provided, I'm going to do my best to solve the problems in front of me, and so far the LLM-connected IDE has been helpful.

In my experience it's best when I use it as a tool to augment troubleshooting and brainstorming, but when you're fixing one-line bugs in other people's code, me typing the fix is not very different from a machine auto-completing it.

It might feel like cheating on a crossword puzzle but that is also something I do if I get stuck and the fun of solving the problem has become a time sink.

I think the real risk is no longer understanding conceptually what you are committing. I've tried to make sure that I always understand what the code does and how it works, and also to understand the pitfalls of proposing bullshit hypotheses that the agreeability of the LLM will happily go along with.

I've yet to seriously use an LLM for a personal project. When I tried Devstral running on my Nvidia 4090 it hallucinated so much that it wasn't super helpful, but it still shot out boilerplate code that I could then spend time fixing, and it helped me overcome my own task paralysis around getting started.


Yeah and that's totally fair!

We are all motivated by different things and being extrinsically motivated isn't a bad thing at all.

But being more interested in the problems than the solutions (and not wanting to "productize the solutions") is why LLMs are demotivating for me.


> I have done a lot of introspection on this and realized that I'm very much driven by intrinsic rewards more so than extrinsic.

> I got into coding over a decade before it was my career because of the exploration, learning, and puzzle/challenge aspect.

> [...]

> LLMs take all the intrinsic wins and leave only the extrinsic ones.

I'm not sure I understand this. For me, programming was at first a tool to satisfy my curiosity. When I first started coding I knew nothing about software patterns, how I should be naming my variables, length of functions, DOD vs OOP, functional vs imperative, the single responsibility principle, and on and on.

I wrote a mess of a program and got it to do very cool things (for me). I loved it.

Then I got taught more, got my first jobs, learned why programming large systems needs standards, patterns, etc. I became good at that, and have had a long lucrative career out of it.

But I cannot wait for the day when I no longer need to earn money from programming and I can go back to using it just to do "cool shit". At that point, whether I am hacking and slashing myself, or working with an LLM to do something, I don't care. It is the intrinsic goal of solving a puzzle and programming just happens to be the tool I use.

Thinking more deeply about your words, is it that you enjoy figuring out the instructions to use to solve a problem? In other words, figuring out the algorithm and writing out the code to create something? Would you feel that if you just told the LLM what you want to create and it did it, you'd have lost the enjoyment?


> Thinking more deeply about your words, is it that you enjoy figuring out the instructions to use to solve a problem? In other words, figuring out the algorithm and writing out the code to create something? Would you feel that if you just told the LLM what you want to create and it did it, you'd have lost the enjoyment?

So there's a lot of nuance to the "is it that you enjoy figuring out the instructions to use to solve a problem".

On the surface, I don't enjoy typing. I don't enjoy fighting syntax checkers, Rust's borrow checker, or manual memory management in my personal C projects, or typing out the HDL for nand2tetris problems, etc.

However, there have been studies, done decades before this LLM boom, on the psychological concept called the Generation Effect. While everyone is different and it's not completely black and white, the studies have found that people learn more from actual practice (the act of doing) than from just reading material. That's 100% the case for me.

I can read blogs and resources till the cows come home and I'll have a very surface understanding of a concept. Then I'll go to write the code to implement it, and it rarely works right away because there are demonstrable gaps in my understanding. I'll debug it and iterate on it until it works, and that is what actually solidifies the mental model of what I was trying to learn. Not only do I remember it better; it seems to form connections in my brain that allow me to apply it in other use cases, or to spin off fascinating technical tangents.

I not only get my high from that initial "Aha!" moment when I really feel like I understand a concept enough to actually apply it in other scenarios, but I also get my high from tangents that spawn off of that concept.

In many cases, I can draw a direct line from my personal projects back to the root projects that spawned them, because of ideas I came up with while actually implementing them. When I tried really hard to optimize a C# game engine for an embedded platform, I saw where the limitations were, and it solidified my knowledge of how old game consoles worked.

This led me to the idea of creating a GPU out of an embedded device that I could pair with I/O-constrained embedded devices. This taught me soooo much about the embedded space, and while I heavily improved my C writing abilities, it also made me wish I could write C# on embedded.

Since I had learned C for the embedded project (and I knew MSIL from previous deep dives), I realized I could just translate MSIL into C, and that would allow me to run C# anywhere (I got C# working on an SNES, in the Linux kernel, and on an ESP32S3).
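
To make that concrete, here's a hypothetical illustration (my sketch, not the commenter's actual translator) of lowering a trivial MSIL method to C by modeling the MSIL evaluation stack explicitly:

    /* Hypothetical sketch. MSIL for `static int Add(int a, int b) => a + b;`:
           ldarg.0
           ldarg.1
           add
           ret
       lowered to C with an explicit evaluation stack: */
    int Add(int a, int b) {
        int stack[8];            /* MSIL is a stack machine */
        int sp = 0;
        stack[sp++] = a;         /* ldarg.0: push first argument */
        stack[sp++] = b;         /* ldarg.1: push second argument */
        sp--;                    /* add: pop two operands, push their sum */
        stack[sp - 1] = stack[sp - 1] + stack[sp];
        return stack[--sp];      /* ret: pop the result and return it */
    }

A real translator would, of course, let the C compiler's optimizer collapse the stack array away entirely.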

By implementing that by hand and coming face to face with many small decisions, I solidified a bunch of concepts in my head around intermediate representations and why they are a massive benefit. Those aha moments (among others) then led me down the path of implementing a just-in-time compilation engine that brings NES games and the C64 OS into the .NET runtime.

The learnings from that have already spawned some other ideas in my mind, which is why I'm now learning Verilog and FPGA development.

None of these projects solved any useful problem (as in, nothing was created that I or anyone else would use). The satisfaction and the high I got from them came from being curious about a problem, having ideas for a solution, persevering (partially due to being stubborn), and actually accomplishing it. The satisfaction that I actually understand the concepts at a foundational level, which ends up breeding excitement for a whole other tangent/problem.

These learnings have indirectly helped me in my day job as well. While I'm not working on anything that sophisticated or cool, all of these actual implementations I've learned have given me direct learnings I have been able to successfully use to create better software in other domains.

So it's not the actual typing I enjoy, but the whole picture of what comes out at the end through that typing. LLMs take most of that away. They let me ideate on a vague solution and then go ahead and implement it for me. Even if I'm specific about the details of the algorithm, they subtly fill in the blanks and the missing pieces that I haven't cemented in my brain yet, making me miss out on the opportunity to do so.

And they steal the accomplishment of the final thing existing. I don't feel an accomplishment by typing into Google "I need a C# to C transpiler" and just downloading it. That's what LLMs feel like, even if I'm trying to steer them at a lower architectural level. I don't have the aha moments, I don't have the learnings, and I'm disconnected from the code.

Thus it feels like they're stealing all the intrinsic rewards from me, leaving only the extrinsic ones. And those are not rewards I am particularly motivated by.


Almost a decade ago, I moved my career onto the management track. I am now a director, with two more management levels between myself and individual contributors.

I can strongly relate to what you're writing, because I share that same sentiment often in my daily (non-AI) work. In fact, coming from that background, the switch from coding to working with agents feels eerily similar to moving into management. You encounter the same challenges minus the “human people and emotions” part: having to explain properly, the agents doing something different than what you intended, feeling detached from the actual work, only focusing on the bigger picture and so on

To me it feels very natural; it is what I do every day. But then again, I made that choice, it wasn't forced on me. So I understand the frustration.


I feel lucky to have been promoted to a management position recently, just as I was starting to feel less excited about dev work because of AI. I still enjoy building systems, but I have to admit that the loss of challenge made the work much less enjoyable for me.

Now I have a team of interns to mentor. They're sharp and use AI constantly, so my guidance is less about code and more about UI/UX, understanding what the client actually wants, good work practices, well-documented tickets, thorough reviews, and so on. Thankfully, I like this work, it has been very rewarding.


Agentic harnesses go in the exact opposite direction to what I'd want to get from LLMs. I don't want another black box to (poorly) work on a black box for me, I want to be better at reaching into and understanding boxes that I already have in front of me. I don't want tools to autocompact contexts and store generated memories to facilitate long runs I have barely any control over, I want tools that allow me to painlessly craft a more relevant context for short ones. I don't want agents to author commits, I want them to use Git (or other tools) to get the information that I'm looking for when it's tedious to do it myself. I don't need them to do the fun and beneficial part of the job for me, I want them to do the boring parts that I already know how to do but which block me from proceeding because my brain just isn't interested. Some of those things you can script yourself relatively easily, but the current tooling for LLM coding is absolutely atrocious and disconnected from programmers' needs.
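
The "script it yourself" part really can be small. A minimal sketch of my own (it assumes plain git output pasted into a hand-written prompt is the context you want; the output file name is arbitrary):

    /* Minimal sketch: collect git context for a short, hand-crafted
       LLM prompt. Assumes `git` is on PATH. */
    #include <stdio.h>

    /* Run a shell command and append its output to `out`. */
    static void append_cmd(FILE *out, const char *cmd) {
        FILE *p = popen(cmd, "r");
        if (!p) return;
        char buf[4096];
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, p)) > 0)
            fwrite(buf, 1, n, out);
        pclose(p);
    }

    int main(void) {
        FILE *out = fopen("context.txt", "w");
        if (!out) return 1;
        fputs("## Recent history\n", out);
        append_cmd(out, "git log --oneline -n 20");
        fputs("\n## Uncommitted changes\n", out);
        append_cmd(out, "git diff --stat");
        fclose(out);
        return 0;
    }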

The main output of my work is gaining a better mental model of systems I work with. That's what lets me grow and that's what makes people want to pay me rather than someone else to work on these things. Anything else, including the produced code itself, is secondary to that. In general I find it pretty hard, although not impossible, to use LLMs in a way that doesn't diminish my output, especially with this tooling that seems explicitly designed to make it hard. After all, reviewing things is so much harder than writing them yourself, and you can't feel accomplished by something you haven't done.


I wonder if this is a new thing or if it is a repeat of the past.

Like ...

When I was young, I wrote this REALLY tight assembly code - loops that were measurably better than C or other high-level languages.

Then obviously assembly was minimized, then forgotten.

Then, years later, I found I was happy using interpreted languages, not even using a compiler.

When I first used Perl and a data structure turned out not to be as useful for the final output, I switched to a different data structure in one line of code and sorted the output exactly like I wanted. That would have been too much effort in C, and very much so in assembly language. But I got what I really wanted.

Is AI a repeat of this? Instead of assembly language, instead of C, instead of Python, do we become high-level-English-language tech folks? Will AI just let us hand off our code and physical design to a fab, and will it make us happier?

I also wonder if SoA to you means how it behaves or how it is, and whether it matters if you stop looking at the code, just like I stopped comparing the code the C compiler generated to the assembly language I wrote. And what about years later: will AI have a -O3?


I feel like this is where it's going -- it's not where we are, the tools are not reliable enough that it makes sense to step back quite this far, but it feels like where we are going to arrive really soon.

If you look at agile processes, one of the biggest criticisms is that there's always a magic "customer" role that needs to prioritize existing work, do acceptance testing for completed tasks, and give requirements deep enough to create real specifications. This requires a lot of attention to detail and very fine-grained judgment, typically lacking in those who are eager to have the job title of "customer".

And now if you look at dark software factories, these pieces are also basically everything they're missing. The person/people responsible for this role were never seen as being engineers/programmers in those processes, but I think that's where most SWEs will end up, because as these tools mature to the point they manage the code all on their own that's what's going to be left to the SWE in the chair.

The SWE of course won't be the actual customer/stakeholder, they'll be the proxy, the one that has to navigate meetings in meatspace and make soothing noises to the actual customers. Will they be happy doing this? That's a big group of "they" so some will, sure. But I think a lot of people who got into this career consider this the worst part of it, and it's now going to be the whole job.


> I do not want to rediscover for the hundredth time that, in fact, all this time an agent took shortcuts in acceptance tests I rely upon and I didn’t catch it. Or once again get the agent to understand why and what I want it to do after its context got bloated and it started to drift completely.

100% agree, neither do I, but I see this as an opportunity to ask: "how can we gain trust in the outputs AI produces for us?"

Is it about tests, reviews, some methodology? Better observability? Formal specification? It's really interesting to think how you can relieve this pain. I think the answer to this question will show the path ahead for agentic coding.


> The novelty is gone, dealing with AI now feels frustrating and boring, and I miss engaging deeply with the actual lower-level technical challenges.

Honestly I've had the opposite experience.

If I can leave the boring crap to the LLMs, I can focus more on the deep, important bits. The bits where LLM accuracy is spotty because there are a ton of moving pieces and the "how/what" of the code becomes crucial for auditability and debuggability. The code that I've written bugs in, that Opus has written bugs in, where the design around it to make failures less catastrophic when they happen is often system-specific and unique.

If I can spend 5 minutes delegating all the tedious plumbing updates around it, then I have more time to put towards the core.

The system design challenge becomes making sure that they are well separated.

Managing fleets of agents hasn't entered into the picture because the needle-moving things there tend to be successive and cumulative, not easily parallelizable. (I believe this is true on the product side as well - 10 crappy MVP features in a week would be way less interesting to me as a user than 1 new feature released in a 3x-more-fleshed-out-way than it would've been three years ago.)


I'm also diagnosed and I'm the complete opposite.

For the first time I can not only compete with normal people's workloads; now, with AI, I can exceed them. I've never been more excited.


> I really don’t want to continue. The novelty is gone, dealing with AI now feels frustrating and boring, and I miss engaging deeply with the actual lower-level technical challenges. I do not want to manage fleets of agents.

I've tried to stay away for a variety of reasons (not approving of the way the tech was developed, hoovering up everyone's data for commercial gain, high among them), but the company I'm now part of (due to them buying us) is drinking deep from the GenAI water fountain, so I will very soon have no choice but to engage or be pushed out¹. I get it, I see the benefits, but it feels like turning into a manager (for GenAI agents rather than people, but still…), which is something I've always avoided because I want to tinker. I got into programming and database work because I like to play with the nitty-gritty details, and I'm going to have to let that go.

To be frank, there is a sizable part of me that has wanted to be out of tech for a while² for various reasons³, and that part of me would prefer to go wait tables if that is what it takes to escape! Maybe then I can reclaim tinkering as a hobby.

--------

[1] Redundancy would be nice; with 26 years' service the statutory minimums would be more than enough to tide me over for a while, but I expect they'd not do that. I'd instead be put on a PIP for not performing (assuming they can make a case that not engaging with GenAI makes me less efficient), and if I still don't play ball, that'll be grounds for dismissal.

[2] Or at least take a fairly long sabbatical.

[3] Not liking remote teams being a significant one, and even though I go into the office⁴ I'm still remote because most of everyone else is.

[4] which grants me the home/work separation


People will blame this on 'your org is shaped wrong', but the reality is: until and unless LLMs become shaped to match the human, it'll lead to pear-shaped outcomes.

Are people forgetting that we needed to make PCs more powerful to enable better experiences and interfaces, to make them super intuitive and easy to use for humans...? It's amazing how all these learnings get lost in the midst of disruption.


So Claude is an Adderall substitute, in a way, right?

> ready to go back to the old ways for my next personal project

This stood out to me.

Because you shouldn’t, or can’t, go back in your professional projects?


I have never jumped on the train, but I am writing a project that uses v4l2 or libcamera. I have been experimenting with both and spent 4 hours reading Linux kernel docs and libcamera docs, and not writing any code. I’m okay with that, and the project has still moved ahead even though I have only written v4l2 sample code.
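
For anyone curious, that v4l2 sample-code stage looks roughly like this (a minimal sketch of my own, assuming a capture device at /dev/video0): query the driver before touching formats or buffers.

    /* Minimal v4l2 sketch: open the device and query its capabilities. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int main(void) {
        int fd = open("/dev/video0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        struct v4l2_capability cap;
        if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) {
            perror("VIDIOC_QUERYCAP");
            close(fd);
            return 1;
        }
        printf("card: %s, driver: %s\n",
               (const char *)cap.card, (const char *)cap.driver);
        if (cap.capabilities & V4L2_CAP_VIDEO_CAPTURE)
            puts("supports video capture");
        close(fd);
        return 0;
    }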

I generally agree with your message, but the UK isn’t in the EU anymore, and starting a company in Germany or France is definitely more expensive and involved than in the UK or the US. Regulation also bears some responsibility, but not in the way people on HN generally think. The EU single market is messy; lots of things haven’t actually been consolidated, and you have to take into account all the differences between each member state’s regulations if you want access to the whole EU. If not, you’re limited to mostly one region. If the EU can complete the single market, we should be in a much better position to compete.

I really hope we can see The 28th Regime[0] become reality soon; that would be such an improvement.

[0]: https://www.europarl.europa.eu/thinktank/en/document/EPRS_BR...


That’s something I believed 10 years ago; I honestly don’t see how that position can still be defended. What happened is that fascists benefited so much more from anonymity than any opposition did.

But I also don’t expect that removing anonymity would in itself improve the current world. Things are at a point where people living in democracies are openly advocating for the destruction of every single liberal ideal. Sure, that’s in part astroturfed by anonymous accounts, but way too many people couldn’t care less if their real identity were linked to those claims.


> would fall apart if their real names were shown

I don’t think that’s true, unfortunately. You have lots of cases of major propaganda accounts that were found to be foreign actors, and pretty much nothing happened to them.


I am talking about the psychological effect, not the accounts being banned. Accounts pretending to be e.g. bona-fide Red State MAGA Americans are not going to successfully manipulate the American populace or move MAGA merchandise if the name "Ramesh Sharma" or "Goodluck Ngozi" or whatever is shown on every one of the account's posts.

Wouldn't "Ramesh Sharma" just file a name change form with the government and hence be known as "John Smith" when they create their account?

And even that is assuming they need the same person to be writing the posts as lending their name. They could also pay a homeless person or food service worker in Kentucky to sign up for the account and still have a troll farm in another country writing the posts.


The astroturfing relies mostly on anonymous users. The vast majority of trolling and shilling on Twitter and similar platforms is done with fake identities. So you have a few open shills who are using their real names, with massive campaigns enabled by anonymous/fake users

What part of that requires anonymity? You pay some broke college students or unemployed dog washers to shill (or let someone else shill) for the big accounts under their name.

There is not only a massive supply of such people, they have high turnover as the seniors graduate but the new freshmen are broke again and the unemployment rate is fairly stable but the specific people distressed enough to sign their name for a buck are constantly in flux, so it doesn't even matter if they get banned.


I'm not sure it is too unusual to be honest. I feel that we have that type of content from time to time

In economics, productivity is generally output divided by the inputs used for production. Are you talking specifically about capital productivity?

> If you’re 10x more productive, someone is willing to pay you 10x as much as they were last year, because you’re producing 10x as much value as before. Has your salary increased 10x?

That's too simplistic, because the rest of the economy isn't static. Everyone is getting access to AI tooling; if the whole field gets a productivity increase, the baseline changes, and you don't just become 10x more valuable. The previous work is now worth far less than it was before. It's also not clear to me that productivity gains from AI convert 1:1 into profit gains.
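
As a toy illustration of that baseline shift (my own stylized numbers, assuming roughly unit-elastic demand, i.e. total spend on the output stays fixed): if you produce $q$ units of work sold at price $p$, your output is worth $pq$. If the whole field becomes 10x more productive, supply rises about tenfold and the unit price falls toward $p/10$, so

    p' q' \approx \frac{p}{10} \cdot 10q = pq

and your market value barely moves; the gain shows up as cheaper software, not a 10x salary.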


> but people are very weary of these things because of the (correct) belief that their appropriation is guided by unaccountable bureaucrats.

People believe this because every single member state uses EU institutions as a punching bag whenever they have issues locally. People have no idea how the EU works; they only hear about it when it's used as a bogeyman.

