We used to have this in the form of a pair of HTML tags: <frameset> and <frame> (not to be confused with the totally separate <iframe>!). <frameset> provided the scaffolding with slots for multiple frames, letting you easily create a page made up entirely of subpages. It was once popular and, in many ways, worked quite neatly. It let you define static elements once entirely client-side (and without JS!), and reload only the necessary parts of the page - long before AJAX was a thing. You could even update multiple frames at once when needed.
From what I remember, the main problem was that it broke URLs: you could only link to the initial state of the page, and navigating around the site wouldn't update the address bar - so deep linking wasn’t possible (early JavaScript SPA frameworks had the same issue, BTW). Another related problem was that each subframe had to be a full HTML document, so they did have their own individual URLs. These would get indexed by search engines, and users could end up on isolated subframe documents without the surrounding context the site creator intended - like just the footer, or the article content without any navigation.
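For anyone who never saw this in the wild, here's a minimal sketch of the classic layout (the file names are just placeholders I made up for illustration):

    <!-- index.html: the scaffolding; no <body>, just slots for the frames -->
    <frameset cols="200,*">
      <frame name="nav" src="nav.html">
      <frame name="content" src="article1.html">
    </frameset>

    <!-- nav.html: the link targets the "content" frame by name, so clicking
         it reloads only that frame - the address bar stays on index.html -->
    <a href="article2.html" target="content">Article 2</a>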
Neither seems impressive anymore, but only because we've gotten used to them. Both were mind-blowing back in the day.
When it comes to AI - and LLMs in particular - there’s a large cohort of people who seem determined to jump straight from "impossible and will never happen in our lifetime" to "obvious and not impressive", without leaving any time to actually be impressed by the technological achievement. I find that pretty baffling.
I agree, but without removing search you cannot decouple the two. Has it embedded a regex method and is just leveraging that, or is it doing something more? Yes, even the regex would still be impressive, but it is less impressive than doing something more complicated and understanding context in more depth.
I don't think they mean "knowledge" when they talk about "intelligence." LLMs are definitely not knowledge bases. They can transform information given to them in impressive ways, but asking a raw (non-RAG-enabled) LLM to provide its own information will probably always be a mistake.
They kind of are knowledge bases, just not in the usual way. The knowledge is encoded in the words they were trained on. They weren't trained on words chosen at random; they were trained on words written by humans to encode some information. In fact, that's the only thing that makes LLMs somewhat useful.
Not parent, but "Google AI" is overloaded - Google has a many AI products that won't be "Google AI". "Gemini" refers to a specific set of capabilities, which are a subset of Google AI efforts[1]. Imagine Apple developing a new, non-iPad slate and branding it the "Apple Tablet".
Granted, Google's AI strategy is still muddled (e.g. Gemini may be replacing Google Assistant in some scenarios), but I'm able to express my meaning clearly with "Gemini" in the preceding sentence, as opposed to "Google AI is replacing Google Assistant - which is Google's AI assistant".
[1] Gemma, Flash, and anything Google DeepMind develops would be Google AI products that won't fall under the "Google AI" branding.
Gemini has already replaced Assistant for Pixel users and on modern Nest devices. It has also replaced it in the current Android Auto beta.
The thing that confuses me, though, is that they use the Gemini branding both for the dev-oriented products you can license via Google Cloud and for the consumer-facing AI interfaces, and then also for the ties into Workspace products. ... but then there are standalone AI products (or is it a feature?) like NotebookLM that aren't associated with Gemini.
It's a great name. "G" matches the company, it's easy to say, it's a known word, it sounds good spoken, and the word itself has many subjective interpretations as to what it might mean (e.g. gemini = twins = you and AI).
Same reason that it's Alexa and not Amazon Assistant, Siri and not Apple Assistant, etc.
Google Pay/Android Pay/Google Wallet/Android Wallet/Pay Pay/Yap Yap should be the focus of our ire.
Sure, but we have clear evidence that generating this pseudo-reasoning text helps the model make better decisions afterwards, which means it not only looks like reasoning but also effectively serves the same purpose.
Additionally, the new "reasoning" models don't just train on human text - they also undergo a Reinforcement Learning training step, where they are trained to produce whatever kinds of "reasoning" text help them "reason" best (i.e., leading to correct decisions based on that reasoning). This further complicates things and makes it harder to say "this is one thing and one thing only".
I have often heard it used to create a (false) impression that the choice of tools does not affect things that matter to customers - effectively silencing valid concerns about the consequences of a particular technical choice. It is often framed in the way you suggest, but the actual context and the intended effect of the phrase are very different from that framing.