I hear you, and there are even greater concerns than the “you kids get off of my lawn” vibe, e.g. the massive amounts of water required for cooling data centers.
But the “bubble” threat the post mentions is just emotional; things are accelerating so quickly that what is hype one day isn’t hype in a few months. LLM usage is just going to get heavier.
What should get better are filters that prevent bots from easily scraping content. Required auth could work. Yes, it breaks things. Yes, it may kill your business. If you can’t deal with that, find another solution, which might be crowdfunding, subscription, or might be giving up and moving on.
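As a sketch of what "required auth" could mean in practice: content is only served to requests carrying a known token, so anonymous scrapers get rejected before seeing anything. This is a toy gate under made-up names (the token set and handler are hypothetical), not any real service's API:

```python
# Toy "required auth" filter: serve content only to requests with a
# known bearer token; everything else gets a 401 before any content.
# Token values and the serve() shape are illustrative placeholders.

VALID_TOKENS = {"reader-abc123"}  # e.g. issued at login, rotated often

def is_authorized(headers: dict) -> bool:
    """True only if the request carries a recognized bearer token."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    return auth.removeprefix("Bearer ") in VALID_TOKENS

def serve(headers: dict) -> tuple[int, str]:
    """Gate the whole page behind auth; tokenless bots never see content."""
    if not is_authorized(headers):
        return 401, "Authentication required"
    return 200, "<article>actual content</article>"
```

The cost is exactly the trade-off described above: it breaks anonymous reading, RSS, search indexing, and anything else that depends on open access.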
Working towards a solution makes more sense than getting angry and ranting.
The playing field is very asymmetric. Can you negotiate with these guys? No. They're faceless when it comes to their data operations. Secretive, covert even.
Creating a solution requires cooperation. Making these models fit on smaller systems and optimizing them to run on fewer resources needs R&D engineering, which no company wants to spend on right now, because hype trains need to be built, money needs to be made, and moats need to be dug (which is almost impossible, BTW).
My 10-year-old phone can track 8 people in its camera app in real time, with no lag. My A7-III can remember faces, prioritize them, focus on mammals' and humans' eyes, and track them and keep them in focus at 30 FPS with a small DSP.
Building these things is possible, but nobody cares about it right now, because AI datacenters live in a ZIRP-like economy: they're almost free for what they do, even though they harm the environment in irreparable ways just to give you something that often isn't even correct.
People’s basic needs are food and water, followed by safety, belonging, esteem, and finally self-actualization. On top of all that, we prefer convenience over inconvenience.
With LLMs comes a promise of safety: they will solve our problems, potentially curing disease and solving world hunger. They can sound like people who care, making us feel respected and useful, like we belong, boosting our esteem. They do things we can’t do and help us actualize our dreams, in code and elsewhere. We no longer have to work hard to use them, and they’re readily available.
We had been using models, and accurate ones at that, for drug discovery, nature simulation, weather forecasting, ecosystem monitoring, etc. well before LLMs and their nondescript chat boxes arrived. The AI was there; the hardware was not. Now we have the hardware, and the AI world is much richer than generative image models and stochastic parrots, but we live in a world that assumes the noisiest thing is the best and most worthy of our attention. It's not.
The thing is, LLMs look like they can converse, while giving back worse results than the models we already have, or could develop, without conversation capabilities.
LLMs are just shiny parrots that can taunt us from a distance and look charitable while doing it. What they provide is not correct information, but the output of biased Markov chains.
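For anyone unfamiliar with the term, a Markov chain text model samples each next word purely from observed co-occurrence statistics, with no notion of truth. A toy word-level version (the corpus and function names are made up for illustration; real LLMs are vastly larger, but the analogy is about sampling from statistics):

```python
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    """Map each word to the list of words observed immediately after it."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain: dict, start: str, n: int, seed: int = 0) -> str:
    """Sample up to n words, each picked at random from observed successors."""
    random.seed(seed)
    out = [start]
    for _ in range(n - 1):
        successors = chain.get(out[-1])
        if not successors:  # dead end: no observed successor
            break
        out.append(random.choice(successors))
    return " ".join(out)

chain = build_chain("the cat sat on the mat the cat ran")
```

The output is always locally plausible (every transition was seen in the training text) but carries no guarantee of being correct, which is the point of the comparison.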
It only proves that humans are as gullible as other animals. We are drawn to shiny things, even if they harm us.