
You're right. It is unpredictable. There is too much information, and it is too complex, to summarize into a clear and accurate prediction.

However, the brute-force, simplistic summary that is analyzable is the trendline. If I had to bet on one of improvement, plateau, or regression, I would bet on improvement.

Think of it like the weather. Yes, the weatherman made a prediction, and yes, the chaos surrounding that prediction makes it highly inaccurate. But even so, that prediction is still the best one we've got.

Additionally, your comment about complexity was not fully correct. That was the surprising thing: these LLMs weren't even complex. The model is still a feed-forward network that is fundamentally much simpler than anticipated. Douglas Hofstadter predicted AGI would involve neural networks with tons of feedback and recursion, and the resulting LLM is much simpler than that. The guy is literally going through a crisis right now because of how wrong he was.
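As a toy sketch of that contrast (NumPy, made-up sizes and random weights, nothing resembling a real LLM's dimensions): a feed-forward block just maps input to output, while the kind of architecture Hofstadter expected carries a hidden state that feeds back into itself at every step.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 8  # toy hidden size; real models use thousands

    # Feed-forward block: information flows strictly forward, no state is kept.
    W1, W2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))

    def feed_forward(x):
        return W2 @ np.maximum(0, W1 @ x)

    # Recurrent block: each step's output feeds back into the next step,
    # the sort of looping Hofstadter expected AGI to require.
    Wh, Wx = rng.normal(size=(d, d)), rng.normal(size=(d, d))

    def recurrent(xs):
        h = np.zeros(d)
        for x in xs:
            h = np.tanh(Wh @ h + Wx @ x)
        return h

    print(feed_forward(rng.normal(size=d)))
    print(recurrent(rng.normal(size=(5, d))))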



I'd argue complexity also comes from the scale of the matrices, i.e. the number of terms in the linear combinations. The interactions between all those terms also introduce complexity, much like a weather simulation whose rules are simple but whose behavior can turn chaotic.
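A back-of-the-envelope sketch of that scale (the widths below are arbitrary round numbers, not any particular model's): even one dense layer mixes every one of its d inputs into every one of its d outputs, so the number of terms grows quadratically with width.

    # Count the weights in a single d-by-d dense layer for a few widths.
    for d in (8, 512, 4096, 16384):
        print(f"width {d:>6}: {d * d:>12,} weights in one matrix")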


Of course. The complexity is too massive for us to understand. We just understand the overall algorithm as an abstraction.

You can imagine 2 billion people as an abstraction. But you can't imagine all of their faces and names individually.

We use automated systems to build the LLM simply by describing the abstraction to a machine. The machine takes that description and builds the LLM for us automatically.
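A toy sketch of that division of labor (plain NumPy, a made-up linear "architecture" and data, nothing LLM-scale): we write down the form of the model and the objective, and gradient descent fills in the actual numbers, which nobody ever writes by hand.

    import numpy as np

    rng = np.random.default_rng(0)

    # The "description": a linear model and a mean-squared-error objective.
    X = rng.normal(size=(256, 4))             # toy inputs
    y = X @ np.array([1.0, -2.0, 0.5, 3.0])   # toy targets from a hidden rule

    w = np.zeros(4)                            # parameters start empty
    for _ in range(500):
        grad = 2 * X.T @ (X @ w - y) / len(X)  # gradient of the objective
        w -= 0.1 * grad                        # the machine adjusts the weights

    print(w)  # close to the hidden rule, recovered automatically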

This abstraction (the "algorithm") is what the improvement trendline of the past decade describes.

Understanding of the system below the abstraction, however, has been at a near-standstill for much longer than a decade. The trendline for low-level understanding points to little future improvement.


Sorry for the late response... In short, I think abstraction can leave too much to chance. So much conflict and social damage comes from the different ways humans interpret the same abstract concepts and talk past one another.

Making babies and raising children is another abstract process, with very complex systems under the covers, yet accessible to naive producers. In some sense, our eons of history are a record of learning how to manage the outcome of this natural "technology" put into practice. A lot of effort in civilization goes into risk management, defining responsibilities and limited liabilities for the producers, as well as rules for how these units must behave in a population.

I don't have optimism for this idea of AI as a product with unknowable complexity. I don't think the public as bystanders will (or should) grant producers the same kind of limited liability for unleashing errant machines as we might to parents of errant offspring. And I don't think the public as consumers should accept products with behaviors that are undefined because they are "too complex to understand". If the risk were understood, such products should be market failures.

My fear is the outcome of greedy producers trying to hide or overlook the risks and scam the public with an appearance of quality that breaks down after the sale. Hence my reference to snake-oil cons of old. The worst danger is ignorant consumers deploying AI products into real-world scenarios without understanding the risks or having the capacity to do proper risk mitigation.


I don't have optimism for AI either.

But none of it changes the pace of development. It is moving at a breakneck pace, and the trendline points to the worst outcome.

It's similar to global warming. The worst possible outcome is likely inevitable.

The problem is that people can't separate truth from the desire to be optimistic. Can you be optimistic without denying the truth? Probably an impossible endeavor. To be optimistic, one must first lie to oneself.



