We were using models, and accurate ones at that, for drug discovery, nature simulation, weather forecasting, ecosystem monitoring, and more, well before LLMs arrived with their nondescript chat boxes. The AI was there; the hardware was not. Now we have the hardware, and the AI world is much richer than generative image models and stochastic parrots, but we live in a world that assumes the noisiest thing is the best and the most worthy of our attention. It's not.
The catch is that LLMs look like they can converse, while giving back worse results than the models we already have, or could develop, without any conversation capabilities.
LLMs are just shiny parrots that taunt us from a distance and look charitable while doing it. What they provide is not correct information, but biased Markov chains.
It only proves that humans are as gullible as other animals. We are drawn to shiny things, even if they harm us.