I remember when bitcoin was taking off, and everyone instantly started saying "bitcoin may not win but one thing is for sure, the **blockchain** is here to stay." Then 15 years went by and the crypto market is basically bitcoin, which won as a store of value, and then a long tail of shitcoins that were "unlocked by the blockchain as a platform."
AI feels to me like it's in a similar state. ChatGPT was a genuinely exciting breakthrough, and because of the previous example of web, everyone instantly wants to see "LLMs as a platform" take off. This has not happened whatsoever. I literally only use ChatGPT. I don't even use Copilot because it's janky and doesn't solve any real problems for me. I guess I sometimes use the RAG-based applications (like docs pages now support a ChatGPT interface), but these are basically ChatGPT with some extra context injected in-- so, ChatGPT. You talk to any of these AI companies and they all admit they're just using the AI label to fundraise and behind the scenes it's either a CRUD app or the thinnest GPT integration in front. I literally don't use any other AI applications. They're all annoying and flooding the web and it is pure clutter everywhere that adds no benefit ever, all because everyone wants to see "LLMs as a platform."
I grew up being a huge fan of YC, and I would respect them so much more if they would take the contrarian (but in my humble opinion correct) view and say actually, judging by the structural evidence and actual results, it's not clear what exactly AI has to offer right now, and we're going to return to PG's founding philosophy and continue funding unsexy and unpopular but ultimately actually important things.
Apple has heavily integrated AI into its systems and apps. Every other major tech company is actively adding it to their apps and systems in many different ways. They aren't talking about future potential. We are years past that already. It's here. That's it. We aren't trying to convince people that AI is here to stay. Rather, we are already talking about a post-AI/LLM world. Crypto never really got to that state.
What was crypto's big thing? Ape pictures.
> I literally don't use any other AI applications.
Linux users will get there too. Especially if you start using ANY tech used by any of the major players. You are already using it, and you don't even realize it.
I suppose we'll see. I use a Mac and iPhone and the internet like everyone else and personally I find almost all the AI stuff they're pushing to be annoying, unnecessary and not solving any actual problems I have. Like believe me, I'm the laziest person alive and if AI is going to make my life easier, it's like, where do I sign up? ChatGPT does make my life better and accordingly I happily pay them the twenty bucks every month. But every time I try one of these AI thingies it ends up being a disappointment. Time will tell if the emperor's clothes come off at some point, or if I'm just a grumpy luddite.
"You're already using it, and you don't even realize it."
With all the hype it would be quite difficult to not "realize" where it is allegedly being used. There is a comical effort to claim that "AI" is being used in everything.
I also use Claude Opus for activities that need a large context window. But the second a better alternative appears, most people will switch to it. Because - why not?
The market has already stabilized somewhat - you can use lower-quality models locally for simpler stuff and paid models (better quality in the case of GPT-4, a larger context window for Claude Opus [0]) when you need something more advanced.
[0] I'm not sure what the current status of Gemini Pro with its 1M context window is, but from what I heard it's too expensive for any practical use.
Dismissing the potential of the current wave of AI as overhyped because "I literally only use ChatGPT" is like someone in 1985 dismissing computers as overhyped because "I literally only use WordPerfect". You're missing out on what all the other computers are being used for.
The current wave of model development, sharing, and fine-tuning is creating a technical ecosystem that supports making computer programs that are able to interact with unstructured data in ways that historically have been impossible.
That most people have only seen that used to make a chatbot that can answer unstructured questions with unstructured and occasionally hallucinated answers says nothing about the profound ways those capabilities will shift what kinds of problems we point computers at in the future.
Let's take a classic cataloging problem like managing a small library inventory. Say you're a software company and you have a few bookshelves of programming books people can borrow. And you want to make it possible for people to search the list of books you have in stock to see if there's something they want to borrow before they walk down. This is classic Web 1.0 stuff - a MySQL database with a books table and an authors table, indulge your third normal form fetish. And maybe you integrate with an ISBN catalog by sending it nicely structured XML queries so you can use a scanner to scan the barcodes, pull down and transform structured data about each book, and use it to populate your database. Make an old school HTML form to search it by title, author, publisher, date, etc...
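A minimal sketch of that classic setup, with sqlite3 standing in for MySQL (the schema and names are just illustration):

```python
import sqlite3

con = sqlite3.connect("library.db")
con.executescript("""
CREATE TABLE IF NOT EXISTS authors (
    author_id INTEGER PRIMARY KEY,
    name      TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS books (
    book_id   INTEGER PRIMARY KEY,
    isbn      TEXT UNIQUE,
    title     TEXT NOT NULL,
    publisher TEXT,
    pub_date  TEXT
);
-- join table, since books can have several authors
CREATE TABLE IF NOT EXISTS book_authors (
    book_id   INTEGER REFERENCES books(book_id),
    author_id INTEGER REFERENCES authors(author_id),
    PRIMARY KEY (book_id, author_id)
);
""")

def search_by_title(fragment: str):
    """The old-school HTML search form maps straight onto a query like this."""
    return con.execute(
        """SELECT b.title, a.name
           FROM books b
           JOIN book_authors ba ON ba.book_id = b.book_id
           JOIN authors a ON a.author_id = ba.author_id
           WHERE b.title LIKE ?""",
        (f"%{fragment}%",),
    ).fetchall()
```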
Nowadays, you can take a picture of the spines on the shelves; ask a multimodal AI to figure out what books that means you've got; feed that, plus plain text search access to an online catalog, to another model to get it to build a nice big document describing all the books; feed that as context into a chatbot librarian and let it help users find books. Set up a webcam pointing at the shelf and periodically take new pictures to keep track of who took what books.
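A sketch of the first couple of steps, assuming the OpenAI Python SDK and a multimodal model (the prompts, model choice and function names here are my own, not a reference implementation):

```python
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def books_on_shelf(photo_path: str) -> str:
    """Ask a vision model to read the spines in a shelf photo."""
    with open(photo_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "List every book you can identify from the spines, "
                         "one 'Title - Author' per line."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content

# The resulting text becomes context for the chatbot librarian.
inventory = books_on_shelf("shelf.jpg")
reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a helpful librarian. The shelf holds:\n" + inventory},
        {"role": "user", "content": "Anything good on distributed systems?"},
    ],
).choices[0].message.content
```

The webcam version is just this function on a timer, diffing successive inventories.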
Think about the massive effort businesses put into creating information technology systems to structure the state of their business, the interactions among their employees, data about their customers and suppliers, and integration with external systems, and to schematize and constrain the processes of their business. And now start to think about how that all changes when I can skip the structure part.
We are not yet used to thinking about how to solve problems with computers that don't need their inputs to be rigorously structured. When we start realizing what that means, we're in a different game entirely.
I get what the optimism is about in theory. Parsing unstructured data is something we can now do that we couldn't before. It adds a whole new vector to the span of software, a big piece of capability that software now has whose emergent, unforeseen consequences we can't predict.
I'm just not quite sure I buy this. It feels to me like there's a light motte-and-bailey going on, where supposedly AI is going to be a paradigm shift that changes the very notion of what's possible, but the actual proposals are mostly about LLMs being a finite-multiplier enhancement for the existing ability of software to model and optimize processes.
In particular, a big fraction of the concrete proposals seem to be about making business processes more efficient. Are businesses generally constrained by this to begin with? Like businesses don't seem to be sprinting at the edge of software technology to get as much efficiency as they can, buying diminishing returns from existing tech and waiting eagerly for the next wave of improvements. Judging by revealed preferences, it just doesn't seem like a very high priority for them.
Taking your automated library example, that sounds very cool from a hacker/tinkerer perspective and I'm sure it would result in some efficiency improvements, but it just doesn't seem like, no offense to anyone, a problem that needs urgent attention. How does this significantly improve the situation for anyone involved?
Of course it's true that we don't know what we don't know, and I don't disagree that often technology changes the world in unpredictable ways or even that current AI could possibly lead to this. At risk of being the dropbox-is-just-rsync guy, I'm just skeptical about the following pattern:
(1) some new tech gets invented that's supposedly the next internet;
(2) no one can quite explain or plausibly hypothesize how; but
(3) in the meantime, a wave of companies start building "platforms" and selling shovels to people who will supposedly later build the actual useful thing.
One of the things I dislike about the get-rich-quick crypto scene so much is that many projects disproportionately punished people for trying out new technology and being early adopters. The scar tissue from that is going to be hard to unlearn. I think that “AI”, as in generative models right now, is under-hyped by the many people who dismiss it as another solution in search of a problem like blockchains, but it's difficult to argue it's under-hyped by the market overall, with these insane valuations assuming realistic % chances of ASI right around the corner.
I think a better analogy might be self driving cars: probably going to be as impactful as early hype people guessed eventually, but on much longer timescales than people originally thought.
It's hyped. It's also legitimately useful. I expect to see many more real use cases and integrations into existing technology and into everyday life. And that's just with the models we have at the moment, who knows if newer models can make even bigger leaps.
I also expect to see markets and hype go up and down because these things go in cycles.
If you expect it to completely change the world in the next couple of years, you're probably wrong. And if you think that it's a useless gimmick with a giant hype bubble around it, and that it will disappear as soon as the hype dies down, you're also very wrong.
Yeah the boring answer is the right answer. It's very clear to see the benefits and I think we're only going to see more use cases emerge outside of the immediately obvious like a coding copilot. That said, there are clearly industries where it won't stick or will be too cost-prohibitive to implement.
My most “bold” take is that it’s going to end up as a loss leader consolidated at big companies. It’s much too expensive to run at scale and we will see a ton of companies dry up who can’t monetize it. I think we’ll see a general inability to monetize it even at the enterprise level.
> I think we’ll see a general inability to monetize it even at the enterprise level.
Well, it depends. If you already have a lot of your own data, can rent powerful hardware to train your model, and have competent people actually set up and manage the whole process, you can get a very powerful engine with interesting insights that will be relevant to you (and possibly to you only), with the effect probably proportional to how specific your data is. After the training phase, actually using the model doesn't need to be that expensive.
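For concreteness, the training phase might look roughly like this with the Hugging Face stack (a hedged sketch: the base model, file names and hyperparameters are placeholders, and a real run needs far more care):

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"  # placeholder open base model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# your proprietary corpus, one example per line (placeholder path)
ds = load_dataset("text", data_files={"train": "company_corpus.txt"})["train"]
ds = ds.map(lambda x: tok(x["text"], truncation=True, max_length=512),
            remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out",
                           per_device_train_batch_size=1,
                           num_train_epochs=1,
                           bf16=True),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()  # the rented-hardware phase; inference afterwards is far cheaper
```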
My opinion is that it's easy to get the impression it's a bubble because of the army of useless stuff that comes out of it (including the horrific cancer chatbots).
But at the same time it is going to drastically change the way people work. In the very near future every single programmer is going to have ChatGPT open. Every single marketer, researcher, lawyer, doctor, etc...
It's a revolution on the same scale as the internet itself. Everybody was on Google every day at some point; everybody will be on an AI at some point (if not most of the time they interact with a device).
> In the very near future every single programmer is going to have ChatGPT open.
I thought that for a little bit, paid for it for several months, but it's not enough better imo - and the hallucination rabbit holes burn harder than the productive chats make up for.
But something like it, probably; it just might be harder to say "oh yeah, LLMs did that, it wasn't overhyped after all." Augmented search, or just improvements to search with a more familiar presentation. Summarise the small amount of information in blogspam and collapse it all, turn an NL question into a few different salient keyword searches, that sort of thing. I haven't tried Kagi's AI yet (I've only used Kagi at all for a few searches while DDG was down recently) but maybe they're doing something interesting or worth watching at least.
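The "NL question into keyword searches" part is genuinely easy to sketch, assuming an OpenAI-style client (the prompt wording and model are my own guesses, not what Kagi actually does):

```python
from openai import OpenAI

client = OpenAI()

def to_keyword_queries(question: str, n: int = 3) -> list[str]:
    """Rewrite one natural-language question as n short search queries."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Rewrite this question as {n} short keyword "
                              f"search queries, one per line, no numbering:\n"
                              f"{question}"}],
    )
    return resp.choices[0].message.content.strip().splitlines()

print(to_keyword_queries("why does my docker build die with exit code 137?"))
```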
If I saw any lawyer or doctor using it in a professional context then I would stop using their services immediately. The amount of bad information it hallucinates is not something that's worth paying someone to be a middleman doing data entry on. Their job is meant to be knowing this stuff, if they need an LLM as a crutch then they're not good at their job.
Precisely! If Google tells people to drink urine for kidney issues or kill themselves, I wouldn't trust any healthcare "professional" that needs an LLM when there are countless resources available for figuring out contraindications et al.
The information resources already exist, if people can't be bothered to access them the old-fashioned way that guarantees accuracy then they're not fit for purpose in their role, and should be fired on the spot.
Also heard about accountants doing maths in ChatGPT—it ain't Wolfram Alpha lol.
you're insane if you think all doctors are so competent that they'd never need a refresher from chatgpt
not to mention rare diseases that are most of the time overlooked by many doctors before one guy finds it, or the sick guy finds it himself by going on the internet
> It's a revolution on the same scale as the internet itself. Everybody was on Google every day at some point; everybody will be on an AI at some point (if not most of the time they interact with a device).
Agreed that chatbots will change things. Hard disagree that it's on the scale of the internet. The internet at large touches way more than the habits of white collar workers
I've said since the launch of ChatGPT that "something has been automated, but it's on us to figure out what". Modern LLMs are really really useful in some domains, fun toys in others, and problem makers in most.
My guess is that we're in for an AI winter in ~12-18mo as most AI startups fail and investors lose their shirts on a bunch of bets, but a few good use cases rise from the ashes.
OpenAI is going to walk away with a tooon of VC money from startups spending on their APIs over the next year though.
Chatbots that try to do too much and do it worse than real human service reps, like the one that wrongly assured a customer that their airline ticket was refundable
Deluge of low-value generated content taking attention and revenue away from high-value content creators
Have you tried writing code or a paper that's actually factual with ChatGPT? The output is so obviously wrong in many cases that it's often a hindrance rather than a help when I've tried to use it.
I do love using ChatGPT for fun stuff like “write me a recipe for enchiladas that’s also a country western song.” My kids and I find it hilarious.
We had a remote workshop with GitHub for Copilot. The example was to have it create various functions for a game of Rock, Paper, Scissors. The extra exercise for afterward was to have it add the "Lizard and Spock" options. When I tried to have it do that, it spun its wheels for a little while and told me the code it generated violated their responsible use guidelines or whatever.
In retrospect it probably detected it generated something it didn't have the IP rights to give me, but ever since then I've described the state of the art as "like talking through the intercom at a McDonald's drive-thru, but every now and then the attendant says 'sorry, can we start over? I got distracted thinking about killing you.'"
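For scale, here is roughly the entire "Lizard and Spock" extension it refused to produce (my own sketch, not the workshop's solution):

```python
# each move beats the moves listed against it
BEATS = {
    "rock":     {"scissors", "lizard"},
    "paper":    {"rock", "spock"},
    "scissors": {"paper", "lizard"},
    "lizard":   {"paper", "spock"},
    "spock":    {"rock", "scissors"},
}

def judge(a: str, b: str) -> str:
    """Return the winning move, or 'draw'."""
    if a == b:
        return "draw"
    return a if b in BEATS[a] else b

assert judge("spock", "scissors") == "spock"
assert judge("lizard", "lizard") == "draw"
```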
I think people mix up "improper/indecent/harmful/... uses of AI" with "troubles made by AI". If we exclude my usage of Copilot in VS Code, my most common exposure to AI is one of my colleagues polluting every Slack thread with low-effort, low-quality content from ChatGPT that he's most probably not even read once.
But Copilot has revolutionized my coding. I have to code in many languages on a daily basis: TypeScript, TSX, CSS, HTML, Dart, config files (like docker[compose], k8s, Ansible, JSON configs), C#, Python. I'm only fluent in C# and TS. The fact that I do not need to remember the syntax for all the others is a big game changer. I was able to be immediately productive in a new language/framework after reading the documents. Previously it took some time before I ramped up, and then it would be lost after some inactivity. I'm not talking about important concepts or CS fundamentals; I'm talking about the specific ways things are done in each language/framework. Copilot makes me 1000x more productive in this part. I'm still limited by my mental bandwidth, so I'm probably 2x more productive on an average day.
I also use ChatGPT, and run some models locally just to play with them, but all of that happens much less frequently than my colleague disrupting discussions with ChatGPT content.
I felt similar at the beginning, but then I realized the suggestions were suboptimal, and it happened like 50% of the time. Usually not completely wrong, just imitating something that was already written, but sometimes introducing subtle bugs. So in the end it actually made me less productive, because I had to stop my flow and start analyzing whether there was a catch in the suggestion. It was a bit tiring, and in the end I decided it's easier for me to stay in the flow.
I'll give it a try next year, maybe it improves to the point where the number of suboptimal suggestions falls to 20% or so, it would be much easier then.
Sure - I guess I should say "domain" is the wrong word, and "use case" is better phrasing.
LLMs have a tendency to hallucinate at a rate that makes them untrustworthy at scale w/o a human in the loop. The more open ended the prompt, the higher the hallucination rate. Here I mean minor things, like swapping a negative, that can fundamentally change a result.
Thus, anywhere we trust a computer to perform reliable logic, we cannot trust an LLM, because its error rate is too high.
Methods such as RAG can box in the LLM to keep it on track, but this error rate means LLMs can never be mission critical, a la business logic, and keeps them a toy.
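For anyone unfamiliar, the RAG pattern being described is roughly this (a minimal sketch: the toy keyword retriever stands in for a real vector index, and the prompt wording is mine):

```python
from openai import OpenAI

client = OpenAI()

def retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
    """Toy retriever: rank documents by naive keyword overlap with the query."""
    words = query.lower().split()
    return sorted(documents,
                  key=lambda d: sum(w in d.lower() for w in words),
                  reverse=True)[:k]

def answer(query: str, documents: list[str]) -> str:
    """Box the model in: it may only answer from the retrieved context."""
    context = "\n---\n".join(retrieve(query, documents))
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer ONLY from the context below. If the answer "
                        "is not there, say you don't know.\n\n" + context},
            {"role": "user", "content": query},
        ],
    )
    return resp.choices[0].message.content
```

Even boxed in like this, the model can still misread the context, which is the point about error rates.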
Where LLMs are game changers is ETL pipelines / data scrapers. I used to work at Clearbit, where we built thousands of lines of code just to extract the address of a company's HQ, or whether a company is owned by another org. LLMs just do that... for free. With LLMs, data extraction from free-form text is now a solved problem, and that's goddamn mindblowing to me.
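That extraction use case in miniature (a hedged sketch, certainly not Clearbit's actual pipeline; the fields and prompt are mine):

```python
import json
from openai import OpenAI

client = OpenAI()

def extract_company_facts(page_text: str) -> dict:
    """Free-form text in, structured JSON out."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},  # request parseable JSON
        messages=[
            {"role": "system",
             "content": 'Extract {"hq_address": string|null, '
                        '"parent_company": string|null} from the text. '
                        "Use null when a field is not stated."},
            {"role": "user", "content": page_text},
        ],
    )
    return json.loads(resp.choices[0].message.content)

print(extract_company_facts(
    "Acme Corp, a wholly owned subsidiary of Globex, is headquartered "
    "at 42 Main St, Springfield."
))
```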
I think your options are a bit of a false choice. The .com bubble was a bubble, but many of the ideas from that era also ended up being successful businesses eventually. Sometimes the market gets out in front of the value - it is accurate in assessing the potential of a technology and wrong about assessing the rate of adoption or who the winners will be.
I'm using AI today to generate real business value in a couple of different industries but also feel that it is simultaneously overhyped for today and underhyped for the future. If the question is just "is it a bubble" the answer is obviously yes, it will come back down. But, how far is down and how long will it take for the world to catch up to the market? Maybe not that long.
LLMs won't lead to "AGI" (which we can't even clearly define). We'll get some amazing tools and automate a lot of low level jobs, but ultimately this round of AI technology will make us cyborgs rather than give us skynet.
When we have developed catalogues of models and other tools and we start training an agent to learn how to create arbitrary graphs of/programs with those tools we might be close. Think of Demis' "play any video game" AI but with successively more complex real world problems and sensor data.
> LLMs won't lead to "AGI" (which we can't even clearly define).
I think one of the major problems is outside technical circles, where I get the impression plenty of people think we're already there. It's not a recent thing either, I remember having this impression last year, and it's just been growing since.
If current LLM were AGI you could just tell it to write a book or make a software project and it would do it end to end on its own, just like a human worker does.
But people tend to conflate knowledge with intelligence; many think that Google search is intelligent as well, etc.
I think it's overhyped in the sense of NVDA's market cap, but not overhyped in the sense of utility. NVDA is useful for the large data centers where companies can be the middleman for queries. I don't think this will be the future of LLMs for very long.
I think as the technology adapts, the utility will be in having chips on your existing hardware that can provide this functionality. Will LLMs always need super-high-end GPUs to process requests, or will the algorithms improve enough to allow quick speeds on lesser hardware? I will tell you one thing for certain: NVDA's stock price is not based on any possibility of algorithmic improvement; it's based on the idea that more and more resources will be needed, that there will always be a need for massive data centers to process these requests.
We are in the middle of dot.com bubble v2. There's a lot of over hyped AI garbage that will deservedly flame out and die, but it's still sowing the seeds of a future revolution.
While I agree we are in a bubble within a bubble, I do not think the future is as rosy. We might have better graphics, more content, and faster internet, but hardly a Jetsons-esque/Fifth Element world.
All in all, looking at the technological innovations of the past century, I can't help but feel the technological novelty of previous growth spurts is just not achievable anymore.
More importantly, we cannot go one day without being told what a happy, advanced, wonderful western enlightened era we've been given: a technological cargo cult with a global serfdom ready to manifest old powers.
It's being hyped by VCs because it's otherwise very hard to raise money in a high-interest-rate environment. Why invest in venture when you can just buy treasury bonds or low-risk corporate debt? Well, because AI FOMO.
It'll be adopted by companies in some way or another and I am very hopeful for things like AI tutors which can actually significantly improve learning but we're definitely going to see a pullback when revenue never materializes.
Your range of "1-10 years" means predictions here are going to be subject to Amara's law: "we overestimate the impact of technology in the short-term and underestimate the effect in the long run."
Prospects in 1 year? Likely overhyped.
Prospects in 10 years? We are unprepared for how much it will change.
I initially voted a realistic reflection, and then reconsidered upon "near future", and then I saw (1-10 years away), and re-voted for a realistic reflection.
The hype _is_ justified. But we will not have AGI tomorrow. However, the pace of advancements has accelerated (in broad terms), and I believe will continue to accelerate. I would be unsure in 1-5 years. But 10 years is not the near future. To say the world today looks nothing like it did 10 years ago is an understatement.
Collectively we will continue to stack (newly) outsized gains every day, assisted by advancements in AI. And 10 years from now we won't recognize the constraints of today and instead will have all new expectations and problems.
What does this mean for markets? Well, it doesn't _really_ matter for NVDA. We're in a gold rush, and NVDA is the only one selling good shovels, plus they have arguably the best plan for quality excavators, trommels, drills, conveyor belts and dump trucks for those who are serious about this long term.
I bought a large stake in NVDA the day ChatGPT was announced and was adding to it until I started training for a triathlon which is now slurping up all my gold. Just gonna hold for the next couple of decades.
This is the issue I would be examining very closely if I ran a company based on subsidized AI models. It's not necessary for companies like OpenAI to go out of business to cause problems: the costs just need to go up to levels necessary to make AI vendors profitable, like what happened with Uber.
I think I see it as similar to the .com bubble, there's certainly useful and interesting tech involved, just the amount of money being thrown around doesn't look sustainable.
Entire companies are putting huge proportions of their total budget into AI hardware, I cannot see the majority of those being profitable at the end of the day as there's simply not the consumer spending on those products to maintain it.
Most current AI projects seem to be at a "value-add" level, meanwhile nvidia's market cap, at $3.2 trillion, is $10k for every man, woman and child in the United States. And that's just one company - even if it may be the largest single representative of the AI-driven market. The other four of the five largest companies in the world (Microsoft, Apple, Google and Amazon) are also likely riding the AI hype valuation train, but maybe to a lesser degree. Summing all five gives ~$14 trillion, or over $40k for every man, woman and child in the United States.
How on earth do we expect the revenue that sort of valuation represents be extracted from the market?
I think it will bring about change, but it will be a slow and long road. I don't think we are going to see instant results or changes in society, but rather something similar to the Internet and mobile, where it takes a few years to be adopted by more and more people and the technology improves over time, albeit a bit more slowly than early adopters are hoping for.
NVDA's market cap for sure is bonkers, but AI is here to stay. Anytime you question the future, think about whether you can see yourself in "that" future. For me, AI is already a part of my workflow and it's beneficial to my daily life: coding, explainers, and random questions in Perplexity.
In coding, it made me a lot more productive. No more do I have to scour Google or Stack Overflow; instead, I get the answer straight from the AI. It might not work at times, but it's definitely better.
Even random questions like "What is the cost to produce the Apple Pro Display XDR?" are answered more promptly by AI.
This is contrary to the crypto craze, where there was expected use but sadly it did not come to fruition. Even now, it's still a speculative industry, a solution looking for a problem.
Massively overhyped. Companies are attaching AI to literally everything they can think of, no matter how useless, and people are already catching on. It'll bubble back down when there are more targeted, successful products like summarizing, enhanced search, etc.
I am not the OP, but I've noticed it stuffed into recent plugin updates for WordPress - WooCommerce, Yoast SEO, WP Bakery, and Elementor, to name just a few.
It's also now relentlessly pushed in platforms like Adobe, iStock and Mailchimp, to the point that it is intrusive. I would love to use OpenAI more, but whenever I've asked ChatGPT about things that I do know a lot about, the answers have been laughably limited or flat-out wrong.
I personally would like a laundry drone that folds and puts away the clean washing, as well as match clean socks. Or a bot that can make calls to utility companies on my behalf.
I voted “overhyped” because some people are out there saying it is more important than fire or whatever. These people are out of their minds (or just performing PR stunts). But I don’t think it’s a bubble.
Not in terms of value (I think it is already delivering a lot of value and will deliver more and more), nor in terms of finance. Some overspending VCs losing their bets does not constitute a bubble. NVidia has somewhat diversified demand across games, crypto mining and AI. And the chance that the next big thing in software also uses GPUs is not negligible.
I don’t think it will end up being more impactful to humanity than the internet itself, but it will be big and useful.
Imagine being able to program every machine by just telling it what you need. For example, even toasters can have LLMs in them. Give it the ability to control temperature and humidity and you have the perfect toaster, one that can toast everyone's bread upon request. Maybe every machine in the future will have AI in it, acting both as controller and UI. Any specialised industry getting wiped out. Doing home security? No more specialised applications or implementations, just tell the door whom it should let in.
And LLMs are just one way of doing computer intelligence.
The way software ate the world, AI will eventually eat software.
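What "controller and UI" might look like in practice is the model translating a request into calls against a small, fixed device API. A toy sketch with OpenAI-style tool calling, where the toaster function is entirely invented:

```python
import json
from openai import OpenAI

client = OpenAI()

# the one "capability" the appliance firmware exposes (invented for this sketch)
TOOLS = [{
    "type": "function",
    "function": {
        "name": "set_toast",
        "description": "Run one toasting cycle.",
        "parameters": {
            "type": "object",
            "properties": {
                "temperature_c": {"type": "number"},
                "seconds":       {"type": "number"},
            },
            "required": ["temperature_c", "seconds"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    tools=TOOLS,
    tool_choice={"type": "function", "function": {"name": "set_toast"}},
    messages=[{"role": "user",
               "content": "Dark and crunchy please, it's a cold sourdough slice."}],
)
call = resp.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))
# real firmware would clamp these values before driving the heating element
```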
> For example, even toasters can have LLMs in them. Give it the ability to control temperature and humidity and you have the perfect toaster, one that can toast everyone's bread upon request.
My toaster has a little dial on the side that lets me do that already. I don't need to have a conversation with my toaster to get it to toast the bread, I just put the bread in it, push the lever down, and then toast happens.
That’s because you don’t know what better toast is. That dial controls just one parameter, and in my experience they suck at consistency between a warm batch and a cold batch.
To make you amazing toasts, of course. Have you heard of the most expensive toaster? It’s a Mitsubishi toaster that can cook your toast just right.
In the future, all toasters will be able to make great toast, once practically all devices come with the same control board (something similar to a Raspberry Pi, but with the ability to run LLMs). Toaster makers will simply connect the heating elements and sensors without needing to understand computers, and the AI will control the toaster to cook just right.
> Have you heard of the most expensive toaster? It’s a Mitsubishi toaster that can cook your toast just right.
That's my point. It does that without a language model. Not only is perfect toast a solved problem, but slathering on a layer of LLM bullshit isn't going to make it more affordable than the Mitsubishi.
My point is that it may be possible to collapse all kinds of electronic control units into a single standard component. The toaster and your dishwasher can have the exact same electronics, since they wouldn't need a specialized computer running specialized software, and wouldn't need specialized engineering to make the software work with the hardware. Just connect the sensors and the other hardware, like the heating unit, to the AI board, tell it what it is, and tell it to act like a toaster.
I'd still rather not introduce non-determinism into the mix. A toaster should be consistent in executing its expected function[1] and also not prone to hallucinations.
1: even if this means the operator needs experience and expertise from trial and error to know if it's cold, already hot, if we're putting bagels in, etc.
LLM to AGI path is overhyped and unlikely. Machine learning is getting the attention it deserves, but there is only a small number of organizations that have enough high quality data to make it useful.
It's the same as the dot com bubble. A lot of hype, a lot of investment in silly things, but also ultimately a huge amount of real value that continues increasing for decades.
Supporting your view is the retrospective claim that Y2K justified huge spending which, along with the resulting dotcom bubble, produced a much-needed economy-wide IT refresh.
FWIW, I'm fine with burning bales of VC cash to upgrade everyone's infra. Which "AI" does in a way that "crypto" didn't.
I voted unsure. If I hadn't had this option, I would have voted overhyped.
Last year I had a subscription for ChatGPT, I had two homegrown apps running on the OpenAI API (which have since moved to Groq with Llama), I had Copilot on VSCode, I had Midjourney, I was playing around locally with LMStudio and Stable Diffusion... and I'm pretty sure I'm forgetting some things.
Now I have nothing left. I use ChatGPT, Claude, and Gemini (the free versions) from time to time, but it's hit or miss, and I'm extremely skeptical of GPT-generated content, which has a smell to it. The only people I know who use this constantly and unapologetically are the ex-blockchain grifters and junior (or wannabe) devs who think generating a small encapsulated function for a well-known problem domain is a step away from AI replacing humans.
So yeah, I'm unsure, but leaning towards overhype based on personal experience. We'll see, I'm too dumb to ride this wave I guess.
Generative AI is a technology that moves the frontier of what is possible. This comes with many opportunities to solve problems that were not previously solvable by a reasonably sized team.
Transformers have some interesting properties like in-context meta learning. This is indeed something to be excited about.
But to say transformers can be AGI by simply increasing context windows is a doubtful proposition.
I think it's valuable in a ton of ways, but I also think we are past the inflection point for growth in novel applications.
But I also think it's being overused or misapplied in so many situations. I've been involved in a couple of projects that were advertised as "AI" which were absolutely not, and I'm suspicious of any company advertising AI products.
AI does have some use-cases already. Low quality text and image generation for example where it is entirely sufficient.
On the other hand, valuations are insane. And I believe many projects are pushing AI for the sake of pushing something - see blockchains previously. So there is some use, but also a lot of wasted time and money.
I've seen some tightly focused, targeted uses of AI to eliminate a piece of drudgery, and I'm hopeful that that area will expand and mature over time. I'm not particularly hopeful about or for that matter interested in trying to replicate creativity via AI.
The generative aspect might be slightly oversold, but I’m interested in the practical applications being targeted by DeepMind and similar labs. AlphaFold’s ability to predict protein structures for disease treatment is very exciting.
Imho there is hype and a bubble, yes, but not at the core: at the core is AGI in 5 years and significant change. The bubble/hype is coming from where it seems to always come from - the non-core camp.
The SLTs of all major tech and software companies are absolutely hyping AI. These people have compensation packages significantly tied to stock prices, and those go up with hype and inflated expectations.
> Overhyped and likely to lead to disappointment in the near future.
Define near future here.
I think we could look back in 3 or 4 years and say that Nvidia overplayed its hand back here in 2024 by trying to extract too many $$$ from the market, thus attracting lots of other players in to try to get a piece of the action. Certainly there are a lot of big players trying to bypass the Nvidia tax: Meta, Microsoft, Google, Amazon, and Intel are all working on their own AI accelerators (Google's TPU has been around for almost 10 years now). CUDA is Nvidia's market moat because most ML/AI code relies on it. But if some other player can come in and undercut Nvidia's prices by a significant amount while offering a workable software stack, that could be really compelling - the market really seems to want something like that.
I think you might be right about NVDA overplaying its hand here. One reason nvidia has managed a monopoly is that the market they were in never appealed to the really big players. With the current pricing that has changed. PyTorch already supports AMD and Arc GPUs... not sure how seamlessly though.
I think SW is hardly ever a strong moat. Given the right incentive, it's a moat that can be bridged. In this case, avoiding the nvda tax is a huge incentive.
Software development. Education. Marketing. Writing. It's already generating millions of dollars of value. Considering the speed things are improving right now, it's only going to go up.
There are a lot of dynamics at play and I'm not sure how it'll turn out overall. I'm especially curious about how it will turn out when a generation of software developers that learned programming/graduated with GPT enters into the workforce.
I expect they will excel at doing the kind of thing your average business can easily replace with a SaaS but nothing of consequence regarding scale, performance or complexity, because they won’t have had to reason those out.
The problem is that everyone is more desperate for money now than ever before. As a result, they indiscriminately associate everything with "AI" because they see it as a quick way to make money. This has led to the term "AI" becoming so overused and diluted that it has lost its meaning. AI toilet. AI coffee machine. AI fridge. AI BBQ etc. it’s just all bullshit which ruins the essence of the actual technology.
We've hit peak consumer/enterprise use with LLMs, and not only is the net adoption underwhelming, the uses are downright malicious across the board, to the point that it's not a stretch to suggest the biggest winners of the AI bubble are criminals, grifters (shady tech demos aimed at raising $$$ with undeliverables), and foreign agents:
- artists/content creators are worried
- enterprise/legal are worried about the implications (legal and production)
- politicians are worried about AI crimes/threats
I feel that we are close to a Minsky moment for this AI space, which is currently distorted by capital gains generated by piggybacking off the hype.
Over time, LLMs will run in our browsers/phones, but with perceptions unchanged: it's a nifty toy, but we need to hire back the people we let go in the initial hype, at reduced wages, since y'all can generate stuff now.
Politicians are concerned because AI has the potential to eradicate middle-class jobs, especially those involving repetitive tasks like data entry. This could create significant socioeconomic issues. Consider how easily AI could replace project managers once integrated with tools like Jira, or how it could automate jobs that involve copying and pasting information between spreadsheets. This level of automation is what many have been anticipating, except for those without marketable skills, who may find themselves at a significant disadvantage.
Yes. It has all the telltale signs of a FOMO-hype bubble across multiple asset classes that's driven by multiple organizations cynically exploiting a combination of Gell-Mann Amnesia effect and human propensity to anthropomorphize things.
Can it be both overhyped and revolutionary? This may actually fit the technical definition of a bubble. There is a good book called "Bubbles and Crashes" that talks about tech bubbles. I think the traditional definition of a bubble is when the price of an asset exceeds its fundamental value. That's effectively impossible to know (during the bubble) because we can't predict the future, but there are other, more practical ways to look at bubbles even if you may be in the middle of one: if an asset is more than 2 standard deviations from the trend of the last 7 years, if there are a lot of novice investors effectively 'speculating', and if there's a 'narrative' that explains the odd 'non-fundamentals based' valuations. At least that's the definition they use in the book, iirc. That's definitely happening right now.
There is also this obnoxious Wall Street/Silicon Valley hype train surrounding all of it-- which is by design. This is exactly what these companies want: a huge digital gold rush so they can try their hand at panning for gold using investor money. They try it with any new tech or buzzword they can, because it brings in money.
It's unfortunate, because this technology feels revolutionary. This feels like the internet did to me as a kid. It wasn't all great. It wasn't perfect. But it was cool, and there was something special about it. There were these new tools and services that all did slightly different things, and when you combined them you could create really amazing things. AI feels that way. I'm not just talking LLMs either. I think those are impressive and useful, but there is so much else going on in the AI/ML space that can already really move the needle--and this is the worst it will ever be.
I hate the comparisons to Bitcoin/blockchain. They are being made because that same obnoxious Wall Street/Silicon Valley hype-model was at play there too, but I think anyone who was honest with themselves knew there were severe limitations to that tech and its usefulness. AI tools actually provide value, right now, and have a lot of room to grow.
But back to the hype and "bubble" question. I think it's overhyped because that's what Silicon Valley is good at, and Wall Street is good at amplifying it (and punishing anyone not riding the wave). We probably are in a bubble, but AI is here to stay. It's a new frontier of computing, and it's going to unlock new and interesting ways to interact with each other and the world around us.
I also want to point out that AR is a perfect complement to all of the AI/ML tools that have been developed. The Meta Ray-Bans are really interesting to me because they are marrying the two techs. I don't trust Facebook with that type of thing, but imagine if a privacy-respecting company tried something like that? Apple is setting themselves up nicely in this space. They could create a new market with iGlass or Apple Vision with "apple intelligence" built in.
This is the most excited I've been about tech in a long while.
I tell people, if you know how this stuff works, you'll understand it's a fugazi. It produces a statistically likely sequence of tokens (words) that resembles the data it was built with. There's nothing intelligent about it; it cannot create anything novel. It is truly nothing more than a language model. Everything that comes out of one of these is a hallucination, in the sense we mean when talking about AI.
I think it's a poorly understood form of compression, a very lossy form of compression.
I think its big corporate adherents know all that. I think, particularly after the internet and the industry it unlocked, and after smartphones and the internet access they made ubiquitous, software developers are just looking for things to sell to people. All the innovations have happened already in the software world, but this big industry was built while that happened and it can't just pack up and go home. The truth is, you can't just sell software to users; you have to sell utility, and most people don't have a real need for software beyond what they already have available, so shiny new trinkets it is. They've got video streaming, communication, information retrieval, report generation, and that's it. You can automate stuff, but then it's not the end user buying software, it's the end user buying a widget that software is a component of, and a part they really don't care about. A symptom of what I'm talking about can be seen in the ever-increasing needless complexity people keep finding in their lives; the software industry, as it applies to direct-to-consumer sales, is very little more than hype and noise and marketing. It's empty calories.
AI is just another thing in a long line of these things; yesterday it was augmented reality, the day before that it was IoT, and the list goes back quite a number of years. The last real big innovations that end consumers bought and that improved their lives were smartphones and video streaming. Everything an end user can benefit from software-wise they already have: easy communication, multimedia content and symmetry of distribution, and near-free information availability. I can't think of anything beyond those three broad categories - money, maybe, if you're into the bitcoin thing. Maybe "AI" as compression can help with local information retrieval when it's more mature, which is very powerful, but the branding and marketing around it as if it were a mind is not conducive to success on that front. Software is well on its way to being a mostly business-to-business thing again, like it once was, which necessarily means a big bubble is going to pop. That's where the innovations are happening, but for us regular people it's all marginal improvement of what we already have, served with a healthy side of hype and disappointment.
There's a possible fourth option: it's under-hyped and under-appreciated.
I'm both highly critical of much of present-day Silicon Valley (as a metonym for the high-tech industry itself, not merely some geographical region centred near Sunnyvale, CA), and of the view that the developments of the current AI boom, even if they fall short of, or manifest in ways far different from, what advocates claim, could be absolutely profound.
There are a couple of items I've been sitting on for a few years which might make interesting HN submissions. One is a Wikipedia list of defunct US automobile companies. There are a lot of them:
Another, which I cannot locate presently, was a list of hundreds of oil companies which had been registered in California.
In both cases:
- A huge number of individual enterprises sprang up.
- A tiny minority of those survived. The rest were acquired or simply went out of business.
- The resulting industries transformed absolutely everything, though not necessarily how early pioneers and backers advertised or anticipated.
The most salient observation though is that these are events from which one can draw a profound boundary, with clear "before" and "after" times. Moreover, the concerns of those in the midst of the transformation often seem quaint because we live in the world shaped by that transition. Thinking from the perspective of those alive at the time, or before, their world DID in fact end.
I'm reasonably certain of a few things:
- AI seems like it will profoundly disrupt much of the present high-tech information technology sector. Particularly programming, data processing, data creation, data interpretation, design, and other elements.
- There will be tremendous business failures. The end result will likely be profoundly monopolistic, with a very few large winners. To the extent that diversity is retained, it's likely to be grounded in legal, cultural, financial, and/or regulatory conditions, and not on the "free market" preserving competition. (The free market rarely preserves competition.)
- There will be many unanticipated, and some anticipated, negative consequences. Growing lack of trustworthiness of information and media, use in mass propaganda and targeted manipulation, general disinformation, etc., are among the more obvious of these (and generally: are anticipated). These are only some of the consequences.
- There will likely be sharp limitations or constraints on certain applications, again, some fairly evident, others not, but which will shape the resulting landscape.
I find the bubble / no-bubble question something of a red herring. Both hugely transformational and stillborn technologies tend to arrive with bubbles. Markets and investment patterns tend to drive this. What really matters is how well the technology ultimately persists after the bodies have fallen.
23% of Americans have used ChatGPT [0], so you’ve got more than a 1 in 5 chance that Joe Public will say he’s used it. Those odds are pretty good for a product that was released less than two years ago. In fact I bet that level of population penetration is higher than even Facebook in its first few years.
> In fact I bet that level of population penetration is higher than even Facebook in its first few years.
If memory serves me correctly, Canadians were the most likely to take an interest in Facebook in the early days. As of 2008, less than two years after being opened to the general public, 32.9% of Canadians were Facebook users. (If my memory fails me, it is possible that some other country had an even higher uptake.)
Only 13.8% of those in the United States were Facebook users at the same time, so if you are referring to the USA specifically then your bet is a pretty safe one. But this also highlights that early interest in technology tends to be highly regional. Relatively high ChatGPT usage in the USA does not imply relatively high usage worldwide.
Which is important, as Joe Public hails from the UK. You might have been thinking of John Q. Public, the member of the Public family who lives in the USA, when you went to US figures. But I can find no ChatGPT usage data for the UK. The chances of what Joe Public will say are unknown. The odds are probably not as good as you suggest.