ken47's comments

I have to wonder if there's a connection to the rising prevalence of coding LLMs.


Coincidence is God’s way of remaining anonymous.


And don't forget that physical loneliness, that is, actually being alone, eliminates a major source of feedback that something might be wrong with your health, as well as a source of immediate aid if, e.g., you go into cardiac arrest.

Maybe the researcher above touches on these things, but more generally, there should be a standardized probability and statistics exam for ALL aspiring scientific researchers, and a high score should be the minimum cutoff. The influence that a statistically flawed study can have over our collective futures is too dangerous.


> The influence that a statistically flawed study can have over our collective futures is too dangerous.

An even bigger danger: with all of the flawed / p-hacked / over-hyped studies, the public (and the legislature) will start to believe that NO science is real.

It worries me how much argument there is over things I consider to be facts. And how much effort goes into undermining science when its findings run against corporate interests (e.g., cigarette manufacturers funding "inconclusive" studies).
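
To make the "p-hacked" point concrete: test enough hypotheses on pure noise and "significant" results appear by chance alone. A minimal simulation sketch (Python; the group sizes, hypothesis count, and data are illustrative assumptions, not drawn from any particular study):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_hypotheses = 20  # a researcher "tries" 20 outcome measures
    alpha = 0.05

    # Both groups are drawn from the SAME distribution: there is no real effect.
    false_positives = 0
    for _ in range(n_hypotheses):
        control = rng.normal(loc=0.0, scale=1.0, size=30)
        treatment = rng.normal(loc=0.0, scale=1.0, size=30)
        _, p_value = stats.ttest_ind(control, treatment)
        if p_value < alpha:
            false_positives += 1

    print(f"spurious 'significant' findings: {false_positives} of {n_hypotheses}")

With 20 independent tests at alpha = 0.05, the chance of at least one spurious "significant" finding is 1 - 0.95^20, roughly 64% - and reporting only the tests that "worked" is exactly the flaw in question.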


The main premise of this article is that SPA frameworks are primarily about transitions. This is yet another passionate argument built on a false foundation.


The overarching problem is that the signal-to-noise ratio in contemporary discussions of AI is absurdly low. This is thanks in large part to a media complex whose primary goal seems to be pumping AI investments and stocks, and to the associated cacophony of mendacious exaggerations. If only we could have a reasonable discussion without the "contributions" of the non-technical and/or salespeople...


Occam’s Razor applies.


Those wealthy enough to invest or speculate in real estate incur a write-down they can likely afford - as Warren Buffett once said, don’t swim naked. The person of average or below-average means gets a chance at a standard life.


We don't need AGI in order for AI to destroy humanity.


> rise of IDEs, or Google search, or AWS.

None of these things introduced the risk of directly breaking your codebase without very close oversight. If LLMs can clear that hurdle, we'll all be having a different conversation.


A human deftly wielding an LLM can clear that hurdle. I laugh at the idea of telling Claude Code to do the needful and then blindly pushing to prod.


This is not the right way to look at it. You don't have to let LLMs write your code unsupervised to see the enormous power that's there.

And besides, not all LLMs are the same when it comes to breaking existing functionality. I've noticed that Claude 3.7 is far better at not breaking things that already work than whatever model Cursor ships by default, for example.


Literally everything on this list, except AWS, introduces the risk of breaking your codebase without close oversight. The same people who copy-paste LLM code into their IDEs are yesterday's copy-pasters from SO and random Google searches.


Longer.

