Reading the comments, is it safe to say that LLMs are a digest of the internet, something of an upgrade over Google search, with the caveat that you need to double-check the results? I mean, they basically hold a compressed version of almost all written knowledge: they will respond correctly about things that have already been written down, and hallucinate (extrapolate) about things that haven't. Of course, if someone carefully curated the input data to filter out misinformation, it might be even more of an upgrade over Google. Is there a consensus on this?
The author claims to be able to tell AI content apart. I am wondering: is there any test to help me train myself to distinguish AI content? Something like "this paragraph was written by a human, this one by an AI," where we can all see how well we do?
But I think it's not going anywhere fast. After reaching feature stability, there maybe isn't much left to do except the hard work of upgrading dependencies (like Guile and Qt), which perhaps isn't appealing to the devs.
I am wondering what will come to save us from this AI storm. It looks like online search and social networks will be more or less dead. Maybe we will start going out again.
The only thing that will stop AI is AI proving less profitable than the alternative.
I see stories that the market is already starting to doubt the hype. Integrating AI into existing workflows, or replacing those workflows outright, is often more complex and error-prone than simply having human beings do the thing, and the cost-benefit analysis isn't there because the technology just doesn't live up to expectations. So there may be hope.
That is the eventual outcome. The question is how much havoc so-called "tech" companies can wreak before we get there. They have lots of money to burn. This could take a while. The environmental costs are enormous.
We are now at the "They have copied all our output, chopped it into tokens and are regurgitating it back to us" stage.
The internet is more than the web. Perhaps the web must be sacrificed to free ourselves from these cretinous intermediaries.
I think the point is that string theory is a huge model with so many parameters that it can fit everything. That is overfitting: because the model can accommodate any observation, it has no predictive power.
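For anyone who hasn't seen overfitting in the statistical sense, here is a toy sketch in Python (the data and model are made up for illustration and have nothing to do with string theory itself): give a model one free parameter per data point and it reproduces the observations perfectly, yet falls apart just outside them.

    # Toy overfitting demo: 10 noisy samples, a polynomial with 10
    # coefficients. The fit matches every sample, noise included,
    # but becomes wildly wrong just beyond the data.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 10)
    y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, size=10)

    coeffs = np.polyfit(x, y, deg=9)  # 10 parameters for 10 points
    print("max training error:", np.abs(np.polyval(coeffs, x) - y).max())  # near zero

    for x_new in (1.1, 1.2, 1.3):  # just past the observed range
        print(x_new, "predicted:", np.polyval(coeffs, x_new),
              "actual:", np.sin(2 * np.pi * x_new))

A model flexible enough to bend to any data tells you nothing about the next observation, which I take to be the criticism being made here.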
The point about stability is that the curve of stability is not a straight line. That is, as the number of nucleons increases, you need proportionally more and more neutrons to stay stable. So you cannot just smash small nuclei together to form bigger ones; somehow you need to add some extra neutrons.
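To put rough numbers on that, here is a small sketch using the usual beta-stability estimate derived from the semi-empirical mass formula, Z ~ A / (1.98 + 0.0155 * A^(2/3)); the constants are the common textbook approximation, so treat the output as indicative rather than exact:

    # Most stable proton number Z for a given mass number A, from the
    # beta-stability line of the semi-empirical mass formula
    # (textbook constants, approximate).
    def stable_Z(A: int) -> float:
        return A / (1.98 + 0.0155 * A ** (2 / 3))

    for A in (16, 56, 120, 238):
        Z = stable_Z(A)
        N = A - Z
        print(f"A={A:3d}  Z~{Z:5.1f}  N~{N:5.1f}  N/Z~{N / Z:4.2f}")

Light nuclei sit near N/Z ~ 1, while something around A = 238 wants N/Z near 1.6, which is exactly why fusing two light nuclei leaves the product short of neutrons.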