Hacker News

As someone who remembers that time very well I disagree.

Smartphones followed a similar path to the internet, i.e. a completely new paradigm that had immediate benefit but needed the technology to improve before it could be widely adopted, and where the roadmap for those improvements was clear and well defined.

AI is far more akin to nuclear fusion, where each step along the way will require major scientific breakthroughs, and where it's not clear how to get to this grand AGI end-game. Especially as OpenAI has said advancements depend on compute capacity that we simply don't have [1].

[1] https://openai.com/index/learning-to-reason-with-llms/



Even with their current capabilities, these AI systems will dramatically improve productivity. We could have a 20-year AI 'winter' with no new advancements, and they'd still be a big deal.

The thing is that it will take years to integrate them into existing domains and workflows. Honestly I think the most relevant comparison is the rise of desktop computers themselves. Suddenly paper-based processes had to become electronic and in some cases that took 40 years.


Except we've had AI systems for a while now and they haven't meaningfully impacted productivity across the economy. Maybe in select pockets, e.g. knowledge workers, and even then it's highly debatable.

Compare this to the internet or smartphones, which have been transformative across every aspect of society.

And as someone who works for a bank which is heavily exploring LLMs, there is no complexity in integrating them into existing workflows. The issue is (a) the risk of privacy/security being compromised through prompt exploits, and (b) the risk of reputational damage if the model's output is biased or hallucinated. Issues that may well be inherent to all transformer-based models.


The other main difference that I can’t get past is that the Internet and cell phone were so obviously useful to a layman that you didn’t really have to explain the value proposition.

The ability to type an email, for instance, and send it instantly to someone on the other side of the planet for free was in such contrast to the tools of the time that it can't really be understood by someone who didn't live through that period.

AI to me is really hyped up by some very highly regarded CEOs with strong track records in other domains and tech enthusiasts who seem hell bent on being able to look back and say they called the next Industrial Revolution. In short everyone thinks they’re the counter to Paul Krugman saying the internet would be as useful as the fax machine. Credulity levels are off the charts. It’s gotten to the point where skeptics are automatically assumed to be wrong.

But what’s missing is the obvious amazement that should come to an ordinary person and frankly I still don’t see most people naturally gravitating to these tools.

Perhaps that could be explained by how amazingly fast technology has advanced in recent years but then that in and of itself seems to call into question whether this technology that’s being called AI is truly revolutionary when compared to what’s already available.


> I still don’t see most people naturally gravitating to these tools.

I barely know anyone who doesn't use ChatGPT frequently, to help word an email or such. I agree though that in instances like this it is not transformative to society, rather one more tool that we use. We will see; IMO the impact of the current AI technology on the world is rather "medium".


ChatGPT feels like a more advanced Grammarly. In our company, they keep creating chatbots tailored to specific domains, but with poor data quality, the ROI remains low. Right now, it's mostly hype, and a true AI revolution seems years off. Even internally, executives are questioning how to measure ROI when they see the costs. I suspect the current hype cycle could lead to the downfall of some companies that focus too heavily on developing AI features for their stakeholders.


I barely know anyone who does.


I think you may have a bit of hindsight bias. The internet was not immediately recognized as useful. One of my first jobs was at an e-commerce startup that folded in the dot-com bust because investors (1) did not believe that our target customers would favor online shopping over ordering from physical catalogs, and (2) did not foresee how online sales might impact business operations or the viability and scalability of business models such as drop shipping.


You don't have to say "regarded" on HN, I don't think there's censorship like reddit.


Ha in this case that’s actually the word I meant but Reddit has basically made that an impossible word to use in its intended form


I don't understand.


That's because the main use case of AI is for businesses (once we solve hallucinations).


Except we will never solve that for LLMs. It's just how they work. LLM output will always be a "best guess"
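The "best guess" point follows from how transformers decode: each step yields a probability distribution over tokens and the model samples (or argmaxes) from it, so some probability mass always remains on wrong answers. A minimal sketch of that sampling step, with a made-up vocabulary and logits (illustrative numbers, not from any real model):

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution over tokens."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores for "The capital of France is"
vocab = ["Paris", "Lyon", "London", "purple"]
logits = [6.0, 2.0, 1.5, -1.0]

probs = softmax(logits)
# "Paris" dominates, but every token keeps nonzero probability:
# the output is a best guess, never a certainty.
next_token = random.choices(vocab, weights=probs, k=1)[0]
```

Even with greedy decoding (always picking the highest-probability token), the model is still reporting the most likely continuation, not a verified fact.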


I think that you are not appreciating how much of a quantum leap this is in technology. This is like a calculator for verbal reasoning. Humanity has little idea of what is even possible with something like that.

Also, I am old enough to remember - and have experienced in my professional career - a time when the mainstream opinion was that the internet was a tiny niche and that things like online shopping or dating would never gain widespread adoption.


I LOVE it when companies replace their standard customer service with AI. It makes it much easier to get ahold of a real person, faster, by just confusing the AI system until it falls back onto a human.


Since you’re saying “for a while” I presume you mean ML. ML systems have had an enormous impact on productivity. There’s a huge volume of decisions that now get made instantly and with far greater precision using models, decisions that were made manually before.


I've worked with ML for a while and I wouldn't say that this is the case at all.

ML also didn't replace manual decision making. A lot of previously automated decision making was done with what were basically encoded rules of thumb, which didn't overfit much worse than "more advanced" ML models did.


Your personal failure to use ML successfully has no bearing on its wider deployment and success. Overfitting strongly implies that you or whoever was doing it just didn’t know what they were doing.

ML is used everywhere.


Wow, salty :)

How many ML projects for large businesses have you been on?


Dozens. I’m a consultant and it’s what I do. The modeling is very easy; the operational changes are the hard part. Basically any sort of maintenance, production, replacement, procurement, or scheduling task can save 10-30% with a simple model. There’s a long way to go, but this stuff is everywhere.

The world hasn’t even fully realized the productivity value of spreadsheets and email.


Email summarizers and such don't count as revolution, sorry.


Where are the billions of dollars in revenue coming from for OpenAI, then?


The numbers aren't public, but it was widely reported earlier this year that their revenue is in the billions. I would assume this is mostly API use by businesses, not individual ChatGPT subscribers.


> Even with their current capabilities, these AI systems will dramatically improve productivity

It's been 24 months, where is it? And even what you can measure doesn't match the valuation of OpenAI at all.


You won’t necessarily notice it as much in end user application. But I already see good adoption for knowledge workers and the next step is in process automation through agentic workflows. Most likely the result will be in the form of efficiency gains, improved profits and worse customer service.


Why hasn't all this productivity boost shown up in corporates' quarterly results?


> Suddenly paper-based processes had to become electronic and in some cases that took 40 years.

"Took", past tense?


I think it's more like nuclear fission, frankly. I don't think the major breakthroughs will be as difficult to achieve, but we'll have a trail of nasty side effects and bad actors that will be left in its wake.

In 2020 I never thought an AI would code for me. And now it turns out, it's pretty good. But AI Slop is pretty much a toxic mess.


> AI is far more akin to nuclear fusion where each step along the way will require major scientific breakthroughs. And where it's not clear how to get this grand AGI end-game.

We have AI now. We've had it for ages. We just stop using that term for things once we've gotten them working.

We also already have artificial entities that are smarter and more capable than a lone human, and have for a long time. Bureaucracy and writing are incredibly powerful technologies, especially when combined.


Have you checked out arcprize.org? If so, what do you think about it? If not, you might find it interesting.


AI is different from statistics and advanced math, though.



