Hacker News | mc-0's comments

I just moved to a new team in my company that prides itself on being "AI-First". The work is a relatively new project, stood up by a small team of two developers (both of whom seem pretty smart) in the last 4 months. Both acknowledged that there are parts of their tech stack they don't understand at all (the Next.js frontend). The backend is a gigantic monorepo of services glued together.

On my first day, the manager and a senior dev told me, "Don't try to write code yourself, you should be using AI". I was encouraged to use spec-driven development and frameworks like superpowers, gsd, etc.

I'm definitely moving faster using AI this way, but I legitimately have no idea what the fuck I am doing. I'm making PRs I don't know shit about. I don't understand how any of it works because the emphasis is on speed, so instead of ramping up in languages / technologies I've never used, I'm just shipping a ton of code I didn't write and have no real way to vet, unlike someone who has worked with it regularly and actually mastered it.

This time last year, I was still using AI, but as a pair-programming utility: it helped me learn things I didn't know, probe topics / concepts I needed exposure to, and reason through problems that arose.

I can't control how these tools will evolve and be used, but I would love it if someone could explain to me how I can continue to grow if this actually is the future of development. Because while I am faster, the hope seems to be that AI / agents / LLMs will only ever get better and I will never need to have an original thought or use critical thinking.

I have just about 4 years of professional experience. I had about 10-12 months at the start of my career where I used Google to learn things before LLMs became the singular focus.

I wake up every day with existential dread of what the future looks like.


A new way of operating is forced down your throat due to expectations of how the technology will evolve. What actually happens is highly variable - on the spectrum between a huge positive and negative surprise.

The people forcing it on you do not care about the long-term ramifications.


This app sounds destined for total disaster.


I feel like no one talks about how people who are "supposed to be reviewing the LLM outputs, guiding the agents, etc.", actually acquire the knowledge to do a decent job. When I see discussion on LLMs making SWEs more productive, I assume they mean compared to someone who knows significantly less than they do.

Here is a real scenario encountered in corporate America:

A new CS grad in their first job after college is given a task in a domain they're unfamiliar with to solve a problem they've never seen. They ask an agent to implement it in code in a language they've never used, and have it give a breakdown of its process, the tradeoffs encountered, things to consider in the future, etc.

They have no time to actually learn anything, because they should be moving faster and being more productive; AI has already solved all of our problems.

So they submit a 700+ line PR, and whoever reviews it just pushes it through, because they don't have time either: they need to be moving just as fast, and they do not have the cognitive capacity to sit down and comprehend what's actually happening.


I recently read an article acknowledging the issue of this kind of debt, but then suggesting, “you need more review, not less AI”. I laughed at the naïveté of the author. More review is simply not going to happen, and as you’ve pointed out, who is going to do the reviewing?


> It still requires a technical person to use these things effectively.

I feel like few people think critically about how technical skill gets acquired in the age of LLMs. Statements like this ignore that the people who are most productive with these tools already have experience and technical expertise. It's almost as if there's a belief that technical people just grow on trees, or that every LLM response somehow imparts knowledge when you use these things.

I can vibe code things that would otherwise take me a large time investment to learn and build. But I don't know how or why anything works. If I were asked to review it to ensure it's accurate, it would take me so long that it would be easier to just actually learn the thing. Those most adamant about being more productive in the age of AI/LLMs don't seem to consider any of the side effects of its use.


That's not something that will affect the next quarter, so for US companies it might as well be something that happens in Narnia.


My experience working at a large F500 company:

A non-technical PM asked me (an early-career SWE) to develop an agentic pipeline / tool that could ingest 1000+ COBOL programs belonging to a massive 30+ year old legacy system (many of which have multiple interrelated subroutines) and spit out a technical design document that can help modernize the system in the future.

- I have limited experience with architecture & design at this point in my career.

- I do not understand the business context of a system that old or any of the decisions made over that time.

- I have no business stakeholders or people capable of validating the output.

- I am the sole developer being tasked with this initiative.

- My current organization has next to no engineering standards or best practices.

No one in this situation is interested in these problems except me. And my situation isn't unique: everyone is high on AI, looking to cram LLMs and agents into everything without any real explanation of what problem it solves or how to measure the outcome.

I admire you for thinking about this kind of issue; I wish I could work with more individuals who do :(


This resonates a lot, and I think your example actually captures the core failure mode really well.

What your PM asked for isn’t an “agentic pipeline” problem - it’s an organizational knowledge and accountability problem. LLMs are being used as a substitute for missing context, missing ownership, and missing validation paths.

In a system like that (30+ years, COBOL, interdependent routines), the hardest parts are not parsing code — they are understanding why things exist, which constraints were intentional, and which tradeoffs are still valid. None of that lives in the code, and no model can infer it reliably without human anchors.

This is where I have seen LLMs work better as assistive tools rather than autonomous agents: helping summarize, cluster, or surface patterns — but not being expected to produce “the” design document, especially when there is no stakeholder capable of validating it.
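To make the "surface patterns" part concrete: a deterministic first pass can extract the static call relationships between programs before any model is involved, which gives a human reviewer something verifiable to anchor on. This is only an illustrative sketch (the regex and program names are hypothetical, and dynamic CALLs through data items would not be caught):

```python
import re
from collections import defaultdict

# Sketch: surface static CALL relationships between COBOL programs as a
# deterministic pre-pass before any LLM summarization. Only literal calls
# of the form CALL 'PROG-NAME' are detected; calls through data items
# (CALL WS-PROG-NAME) need human follow-up.
CALL_RE = re.compile(r"CALL\s+'([A-Z0-9-]+)'", re.IGNORECASE)

def call_graph(sources: dict[str, str]) -> dict[str, set[str]]:
    """Map each program name to the set of programs it statically calls."""
    graph = defaultdict(set)
    for name, text in sources.items():
        for callee in CALL_RE.findall(text):
            graph[name].add(callee.upper())
    return dict(graph)
```

From a graph like this you can cluster interrelated programs and hand each cluster to a model for summarization separately, instead of expecting one pass over 1000+ files to produce "the" design document.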

Without determinism around inputs, review, and ownership, the output might look impressive but it’s effectively unverifiable. That’s a risky place to be, especially for early-career engineers being asked to carry responsibility without authority.

I don’t think the problem is that LLMs are not powerful enough — it is that they are often being dropped into systems where the surrounding structure (governance, validation, incentives) simply isn’t there.


Your task is certainly doable though.

You can ask the AI to focus on the functional aspects and create a design-only document. It can do that in chunks. You don't need to know about COBOL best practices now; that's an implementation detail. Is the plan to modernize the COBOL codebase or to rewrite it in a different language?
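For the "in chunks" part, a rough sketch of what that could look like: split each program at DIVISION/SECTION headers, then greedily pack adjacent pieces under a token budget so each chunk can be summarized independently. Both the ~4-characters-per-token estimate and the header regex are simplifications, not real COBOL parsing:

```python
import re

# Sketch: split one COBOL program into chunks small enough to summarize
# independently. Assumes ~4 chars per token and that sections start with
# a "<name> DIVISION." or "<name> SECTION." header at the start of a line.
HEADER_RE = re.compile(r"^\s*[\w-]+\s+(?:DIVISION|SECTION)\s*\.",
                       re.IGNORECASE | re.MULTILINE)

def chunk_program(text: str, max_tokens: int = 2000) -> list[str]:
    # Split at each DIVISION/SECTION header, keeping the header with its body.
    starts = [m.start() for m in HEADER_RE.finditer(text)] or [0]
    if starts[0] != 0:
        starts.insert(0, 0)
    pieces = [text[a:b] for a, b in zip(starts, starts[1:] + [len(text)])]
    # Greedily pack adjacent pieces while staying under the token budget.
    chunks, cur = [], ""
    for p in pieces:
        if cur and (len(cur) + len(p)) // 4 > max_tokens:
            chunks.append(cur)
            cur = p
        else:
            cur += p
    if cur:
        chunks.append(cur)
    return chunks
```

Chunking on structural boundaries rather than raw line counts matters here: a summary of half a PERFORM chain is worse than useless for a design document.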

See what this skill does in Claude Code; you want something similar: https://github.com/glittercowboy/get-shit-done/blob/main/get...


First off, what you shared is cool, thank you. Especially considering it captures problems I need to address (token limitations, context transfer, managing how agents interact & execute their respective tasks).

My challenge specifically is that there is no real plan. It feels like a constant push to use these tools without any real clarity or objective. I know a lot of the job is about solving business problems, but no one asking me to do this has any idea, or any defined acceptance criteria, to say the outputs are correct.

I also understand this is an enterprise / company issue, not that the problem is impossible or that the idea itself is bad. It's just a common theme I'm seeing: this stuff fails in enterprises because few people actually think about how to apply it... as evidenced by the fact that I got more from your comment than I otherwise get attempting to collaborate in my own organization.

