Right, a trendline typically follows a curve before it reaches the apex.
If anything, based on achievements so far, the trendline has sped up. We are seeing acceleration. Predicting a limit, as with Moore's law, means seeing a slowdown before we hit that limit.
You can make analogies, but analogies aren't proof. An analogy to Moore's law ending does not mean the same thing is happening in AI. You need evidence.
I agree that a limit will eventually be hit. That will always be the case, but we haven't hit that limit yet. It's only been roughly a year since the release of ChatGPT.
Additionally, compute isn't the main story here. The main story is the algorithm. Improvements in that dimension are likely not at a limit yet; a more efficient future algorithm will need less compute.
You're begging the question that error rate is a simple metric we can analyze and predict, and that there are no other qualitative factors that can vary independently and be more significant for strategic planning. If there's one trend I recognize, it's the near-tautology that increasingly complex systems become increasingly complex, as do their failure modes. An accurate predictive model has an expanding cone of unknowns and chaotic risk, not some curve that paints a clear target.
Look beyond today's generative AI fabrication or confabulation (hallucination is a misnomer), where naive users are already prone to taking text outputs as factual rather than fictive. To my eye, it's closely linked to the current "disinformation" cultural phenomenon. People are gleefully conflating a flood of low-effort, shallow engagement with real investigation, learning, and knowledge. And tech entrepreneurs have already been exploiting this for decades, pitching products that seem more capable than they are, depending on the charitable interpretation of mass consumers to ignore errors and omissions.
How will human participants react if AI gets more complex and can exhibit more human-like error modes? Imagine future tools capable of gullibility, delusion, or malice. Seeing passionate, blind faith in LLMs today makes me more worried for that future.
I do not expect that AI will effectively replace me in my work. I admit the possibility that the economy could disrupt my employer and hence my career. I worry that our shared socio-technological environment could be poisoned by snake-oil applications of AI beyond its means, where the disruption could be more negative than positive. And that upheaval could extend through too much of my remaining lifetime.
But these worries are too abstract to be actionable. To function, I think we have to assume things will continue much as they are now, with some hedging/insurance for the unpredictable. There could just as easily be a new AI winter as the spring you imagine, if the current hype curve finds its asymptote and there is a funding backlash against the unfulfilled dreams and promises.
You're right. It is unpredictable. The amount of information available is too complex to fully summarize into a clear and accurate prediction.
However, the brute-force simplistic summary that is analyzable is the trendline. If I had to bet on improvement, plateau, or regression, I would bet on improvement.
Think of it like the weather. Yes, the weatherman made a prediction, and yes, the chaos surrounding that prediction makes it highly inaccurate. But even so, that prediction is still the best one we have.
Additionally, your comment about complexity was not fully correct. That was the surprising thing: these LLMs aren't even complex. The model is still a feed-forward network that is fundamentally much simpler than anticipated. Douglas Hofstadter predicted AGI would involve neural networks with tons of feedback and recursion, and the resulting LLMs are much simpler than that. The guy is literally going through a crisis right now because of how wrong he was.
I'd argue complexity also comes from the scale of the matrices, i.e. the number of terms in the linear combinations. The interactions between all those terms also introduce complexity, much like a weather simulation is simple but can reflect chaotic transitions.
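To make that point concrete, here's a minimal sketch (toy dimensions, not any real model's): the feed-forward computation itself is a couple of lines, while the parameter count, and with it the interaction complexity, grows quadratically with the matrix width.

```python
import numpy as np

# A single feed-forward layer: the "algorithm" is just a matrix
# multiply, a bias, and a nonlinearity -- structurally very simple.
def feed_forward(x, W, b):
    return np.maximum(0, W @ x + b)  # ReLU(Wx + b)

rng = np.random.default_rng(0)

# A toy layer: 8 inputs -> 8 outputs.
d = 8
W, b = rng.standard_normal((d, d)), rng.standard_normal(d)
y = feed_forward(rng.standard_normal(d), W, b)

# The code above does not change as d grows, but the parameter
# count grows as d*d + d, so the number of interacting terms
# explodes long before the description of the algorithm does.
def params(d):
    return d * d + d

print(params(8))      # 72
print(params(12288))  # 151007232
```

The simple description stays fixed while the web of term-by-term interactions scales quadratically, which is the sense in which a structurally simple system can still behave like a chaotic weather simulation.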
Of course. The complexity is too massive for us to understand. We just understand the overall algorithm as an abstraction.
You can imagine 2 billion people as an abstraction. But you can't imagine all of their faces and names individually.
We use automated systems to build the LLM simply by describing the abstraction to a machine. The machine takes that description and builds the LLM for us automatically.
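As a toy illustration of "describing the abstraction" (a linear model and gradient descent here, standing in for a real LLM's architecture and training loop): we state only the model shape and the objective, and the optimization loop fills in the actual numbers without us ever reasoning about any individual parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 100 examples, 3 features, generated by a
# hidden target we pretend not to know.
X = rng.standard_normal((100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w

# The "description": model shape (3 weights) and objective
# (squared error). Everything else is automated.
w = np.zeros(3)
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(X)  # gradient of mean squared error
    w -= 0.1 * grad                    # descend

# The machine recovers the hidden target on its own.
print(np.allclose(w, true_w, atol=1e-3))  # True
```

The weights land near the target without us inspecting them, which is the same relationship we have to an LLM's billions of parameters: we understand the training abstraction, not the learned values.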
This abstraction (the "algorithm") is what's on a trendline for improvement based on the past decade.
Understanding of the system below the abstraction, however, has been at an almost standstill for much longer than a decade. The trendline for low-level understanding points to little future improvement.
Sorry for the late response... In short, I think abstraction can leave too much to chance. So much conflict and social damage comes from the different ways humans interpret the same abstract concepts and talk past one another.
Making babies and raising children is another abstract process---with very complex systems under the covers, yet accessible to naive producers. In some sense, our eons of history are a record of learning how to manage the outcome of this natural "technology" put into practice. A lot of effort in civilization goes into risk management: defining responsibilities and limited liabilities for the producers, as well as rules for how these units must behave in a population.
I don't have optimism for this idea of AI as a product with unknowable complexity. I don't think the public as bystanders will (nor should) grant producers the same kind of limited liability for unleashing errant machines as we might to parents of errant offspring. And I don't think the public as consumers should accept products with behaviors that are undefined due to being "too complex to understand". If the risk was understood, such products should be market failures.
My fear is the outcome of greedy producers trying to hide or overlook the risks and scam the public with an appearance of quality that breaks down after the sale. Hence my reference to snake-oil cons of old. The worst danger is in these ignorant consumers deploying AI products into real world scenarios without understanding the risks nor having the capacity to do proper risk mitigation.
But none of it changes the pace of development. It is moving at a breakneck pace, and the trendline points to the worst outcome.
It's similar to global warming. The worst possible outcome is likely inevitable.
The problem is that people can't separate truth from the desire to be optimistic. Can you be optimistic without denying the truth? Probably an impossible endeavor. To be optimistic, one must first lie to oneself.