> When we look back in 3 months we will wonder why we thought LLM/GPT type queries will replace search in all forms.
I really hope so. LLM for search seems like a big leap right now. It isn't even really search but just a UI change for interacting with the underlying search engine. The caveats are that the people who actually think and create don't get the clicks (and the accompanying ad revenue) they would today, and there are no citations.
I really hope more companies realize that an LLM for Wikimedia sites would be a vastly superior application of the technology. Can you imagine the impact this would have, given the sheer amount of knowledge and data on these sites? Learning and teaching would be changed virtually from the ground up. IMO, this is the killer app, not general search. Given the extreme verbosity and general unreadability of many technical pages on Wikipedia, an LLM that can summarize and answer questions correctly is a huge paradigm shift. Oh, don't know what the Second Law of Thermodynamics is? Here you go, in whatever length of text you want. Want to know how it relates to Information Theory? Okay, here's a primer on that. The internet could once more become a place people come to for learning rather than being fed total crap by an algorithm.
I do have to commend Satya Nadella, though. He and Microsoft know exactly what they are doing. They know Google and Sundar Pichai are on the back foot, and they really are making them "dance". Bing + ChatGPT isn't really scalable right now. Riding the hype wave and putting pressure on Google, hoping it makes poor decisions based on short-sighted thinking, is the best thing they can do right now. It looks like it's working out well for them.
> I really hope so. LLM for search seems like a big leap right now. It isn't even really search but just a UI change for interacting with the underlying search engine.
To question the underlying premise: what makes you so sure that an improved interface for searching _isn't_ meaningful? From what I can tell, we're a lot closer to optimal collection/categorization of data in search engines than we are to optimal interfaces for searching that data. We've all seen stories like the grandmother typing questions into Google with "please" and "thank you" (https://www.theguardian.com/uk-news/2016/jun/16/grandmother-...), and the concept of having good "google-fu" shows that right now, being able to find the answer to your question is influenced not just by whether the answer exists but by whether you're skilled at _asking_ the question.
I don't think this improvement is limited to non-technical folks either. As an example from just the past couple of days: I've been running into issues with Linux gaming on my laptop due to abysmal power management, and from doing some research, it's a known problem with my laptop brand and model. I decided to research which laptops are known to be good for Linux gaming and also fit my specific preferences (at least 1440p, 16 GB or more of RAM, and an AMD CPU/GPU for good measure, since my issues were related to Nvidia's weirdness on Linux).

I spent a good hour or two finding specific models that seemed promising, searching for mentions of them in places like /r/linux_gaming, and looking up availability and prices. While I found a few potentially decent options, I didn't have much confidence that I was finding all of them. I found some laptop-specific search sites that purportedly let me select on whatever criteria I wanted (e.g. noteb.com, notebookcheck.net), but none of them let me pick the _exact_ criteria I wanted. Some were too granular (e.g. making me search for and check off exact GPU models instead of letting me say something like "discrete AMD GPU from 2021 or later", or giving me a list of 30 or so resolutions and making me check off the ones I wanted manually, with no shift-click bulk selection) and some were not granular enough (e.g. only letting me select a single resolution to search for at a time, or allowing me to require a discrete GPU but not specify the vendor).

On a whim, I decided to open a session with ChatGPT and present it with these criteria to see what it came up with. I needed to nudge it to prune a bit (occasionally it would give me a clearly incorrect option, e.g. one with an Nvidia GPU or only 1080p), but within a few messages I was able to get it to generate dozens of options.
Unfortunately, it only had knowledge up through 2021, and despite trying various roleplaying methods to circumvent the "I can't search the internet" policy, based on things I saw back when it first became available, I wasn't able to get it to completely finish the job. I could only use its suggestions as a guide for looking up newer models and then finding reviews from people who had used them for Linux gaming. If/when a language model like that with access to current data becomes generally available, it genuinely seems like it would be a game-changer.
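As an aside, part of what made this so frustrating is that the exact filter those laptop sites couldn't express is trivial to state in code. Here's a minimal sketch of that predicate; all field names and sample entries below are hypothetical, not from any real site's API:

```python
# Hypothetical sketch of the laptop filter described above.
# Field names and sample data are made up for illustration.

def matches(laptop):
    """True if the laptop meets the stated criteria: at least 1440p,
    16 GB or more RAM, an AMD CPU, and a discrete AMD GPU from 2021+."""
    return (
        laptop["resolution_height"] >= 1440
        and laptop["ram_gb"] >= 16
        and laptop["cpu_vendor"] == "AMD"
        and laptop["gpu_vendor"] == "AMD"
        and laptop["gpu_discrete"]
        and laptop["gpu_year"] >= 2021
    )

laptops = [
    {"model": "A", "resolution_height": 1440, "ram_gb": 16,
     "cpu_vendor": "AMD", "gpu_vendor": "AMD",
     "gpu_discrete": True, "gpu_year": 2022},
    {"model": "B", "resolution_height": 1080, "ram_gb": 32,
     "cpu_vendor": "AMD", "gpu_vendor": "NVIDIA",
     "gpu_discrete": True, "gpu_year": 2021},
]

print([l["model"] for l in laptops if matches(l)])  # prints ['A']
```

Ten lines of predicate logic, yet none of the search UIs exposed anything equivalent, while the chatbot could at least approximate it from a plain-English description.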
I'm not downplaying the improved interface at all. I think it's an extremely impactful change. But it isn't fully sorted out yet, and the Google search box can't simply be replaced by an LLM chatbot. I totally relate to your laptop example, since I've experienced something similar myself. The main advantage of LLMs/GPT is distilling something complex into a much simpler form, with the ability to ask questions and maintain context.
In fact, your laptop example is a perfect illustration of my point. Finding a laptop for Linux gaming is extremely complex; let's not kid ourselves. The number of things that can go wrong (especially with an Nvidia GPU) is bonkers - my machine completely nukes the display manager every time I update Debian, forcing me to reinstall SDDM. But the problem here isn't fundamentally search. We know what we're looking for and the exact criteria. Like you said, the problem is the collection and categorization of data, and presenting it to the user in a helpful manner. This is a digital version of a computer salesman who actually knows their job. LLMs are just salesmen who know about a lot of things. I'm just extending this to teaching and the knowledge industries and saying, "Look, if you can present information about computers this well, you can tell me about heat death a lot better."
I agree broadly with your point but the subtleties have to be kept in mind.
The main difference between the Chrome/Windows situation and the current Apple one is quite significant, IMO. Even though Microsoft tried their best to keep user share with IE, e.g. by sneakily changing defaults, downloading and installing Chrome was always easy and, more importantly, didn't require Microsoft's approval. That's not the case with iOS/iPadOS right now. A Chrome-like app can't simply be downloaded, because Apple won't approve it for the App Store.
Also, from a user perspective, Safari is a good browser which manages to keep up with both Firefox and Chrome. This combination means that users don't really have that much of an incentive to switch.
To use your toast analogy, IE used to burn my toast to a sooty, carbonized block. Chrome used to do it perfectly. That's a pretty big differential in terms of performance. Now Safari isn't as bad as IE was in that era. So if it burns the edges every now and then, it's not really that big a deal simply because the user experience is more or less comparable to Chrome.
Users just want toast, but the difference isn't between great and mediocre anymore like it was with Chrome and IE. It's between very good and very good-ish.
This is precisely why I'm okay with Apple not allowing side-loading even though I would prefer greater choice. Apple's ecosystem isn't fundamentally as rotten as the Windows platform was in terms of performance. In fact, it's actually pretty good. Could things change? Sure. But as long as the user experience isn't shitty, I'm okay with the status quo.
No, they just focus on power consumption more than the other browsers do (they also have the advantage of not needing to be cross-platform, which makes this easier). This is often why Safari is behind other browsers on features: a focus on quality of implementation over quantity of features.
I also would want a source for that. At least in the mobile dev space, it has always been common practice for them to leverage private APIs to do things regular app devs can't.
> At least in the mobile dev space, it has always been common practice for them to leverage private APIs to do things regular app devs can't.
On mobile (iOS), yes. But power consumption comparisons tend to be done on desktop (macOS), because the other browsers' engines can't run on iOS at all. And I don't think Apple can enforce that API privacy on macOS, because unlike iOS, macOS allows apps to be installed from outside the App Store.