I'm probably way too late for this thought to get any traction / discussion - but I have this weird feeling that OpenAI screwed up and showed its "cool new thing" too early, and publicly.
As much as it pains me to say this, I don't think the real money is in making this a service, or "the product." I think the real money is in using AI internally as a puzzle piece of your backend - i.e. the secret sauce behind xyz product.
I'm being very narrow here, but you can only do so much integrating what OpenAI has built into your products - eventually, "everything" pulling from the same model brings "everything" to the same level. In contrast, if you train and create your own models to make xyz do something specific, nobody knows how it was done, which surely makes it a lot harder to kang.
I have zero proof, but I suspect Google, for instance, has models that would literally obliterate what OpenAI has shown, capability-wise. They're not necessarily language models, though. Again, nothing to stand on here, but I doubt their search and analytics, for example, are driven by hard-coded algorithms these days.
Bard may have been released sort of as a "psh, we've been there, done that" when in reality they hadn't, because they never planned to make the models they were/are working on publicly available to use. It makes me wonder if this is how Google has led for so long in some areas - now OpenAI has sort of screwed it up for everyone by making it a service that can be integrated/adopted by nearly anyone.
The only people who are really going to know, I guess, are the devs working for these big orgs, and I'm sure that's lock-and-key knowledge.
> I have zero proof, but I suspect Google, for instance, has models that would literally obliterate what OpenAI has shown, capability-wise. They're not necessarily language models, though. Again, nothing to stand on here, but I doubt their search and analytics, for example, are driven by hard-coded algorithms these days.
Then why is Bard so bad? Bard feels like GPT-2 or LLaMA 7B with no finetuning most of the time. (I tried it two or three times over the course of a week and went back to ChatGPT.)
From my perspective, Bard went from "literally didn't exist" to "released" over the course of about a month. GP seems correct in that it very much felt like something picked up off the shelf, slightly dusted off, and released. Is it as good as ChatGPT? From my testing, no. Is it the pinnacle of what Google can create, given motivation? I'm pretty sure also no. Compared to the state of all the research papers Google and DeepMind release, it definitely feels rushed. So I'd suggest not judging Google on its initial fast-follow project: either Google will come out with something compelling in the next 6 months or so, or we can conclude it really was leapfrogged and has fallen behind. But judging it now seems a bit too conveniently pessimistic, IMO.
(There's a legit chance Google will flub this, don't get me wrong. It's just too early to properly conclude one way or the other.)
Google is in a weird spot. I suspect they are capable of doing so much more, but there is a serious risk of cannibalizing their 99% revenue model (search), which they probably don't yet know if they can monetize in the same way.
Unfortunately for them, OpenAI has shoved the question down their throat, which I think is exactly what OpenAI intended, or at least hoped for.
Well, my thought is that they whipped it up real quick to attempt to downplay it a bit. Did that backfire? Yeah, I think so. But personally, I think people are missing where the real money is. OpenAI will do great for a while, until every damn product and service is using it, and then it's a race to the bottom. But that's just, like, my opinion...
I dunno, the stock price hasn't really tanked, so it didn't really backfire that much and it allowed them to get a bit of feedback as they try to figure out how to maintain revenue.
What if they put out this insanely great model and people just stopped using search? That would most likely be a backfire.
You mean OpenAI put out a model and people stopped using search? Or Google?
What OpenAI offers currently can't really compete with search - I understand the data it's being fed gets newer and newer, but it's not really real-time like the search engines are. Indexing and presenting data is so different from running a neural language model. Even if it is fed data that's new, it's going to have to infer a lot because of a lack of history. It might be able to summarize recent events, I guess. Way dumbed down here, but I consider ChatGPT like a really smart encyclopedia that can search fast and stay in context across "searches."
If you meant Google, that's sort of what I'm saying - they wouldn't release something that could blow OpenAI out of the water. But I suspect what OpenAI offers as a product is something Google could've built long ago, or maybe did build and couldn't figure out how to monetize. They've instead invested in AI to make their products and services better, not as much to offer AI as a service.
What I meant by Bard not working out so great was that Google quickly dusted off or slammed together some shenanigans to stay relevant, even though what OpenAI is doing doesn't appear to be part of Google's master plan.
It can compete with search by integrating search as part of its internal workflow when answering a question, which is exactly what Bing and ChatGPT's web browsing mode already do.
And yeah, this means that it still needs the search engine. But it also means that ads are out of the picture for the user.
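To make that concrete, the workflow is basically retrieve-then-answer: the search index supplies the fresh facts, the model does the synthesis. A minimal sketch of the pattern - the web_search and llm_complete helpers are made-up stand-ins, not Bing's or OpenAI's actual internals:

    # Retrieve-then-answer sketch; both helpers are hypothetical stubs.

    def web_search(query: str, top_k: int = 3) -> list[str]:
        """Stand-in for a real search API; returns result snippets."""
        raise NotImplementedError("plug in an actual search backend here")

    def llm_complete(prompt: str) -> str:
        """Stand-in for a chat-completion call to whatever model you use."""
        raise NotImplementedError("plug in an actual model endpoint here")

    def answer(question: str) -> str:
        # 1. The search engine handles freshness and real-time data.
        snippets = web_search(question)
        # 2. The model handles what it's good at: synthesis in context.
        context = "\n".join(f"- {s}" for s in snippets)
        prompt = (
            f"Using only these search results:\n{context}\n\n"
            f"Answer the question: {question}"
        )
        return llm_complete(prompt)

Note the division of labor: the model alone can't replace the index, but the combination sidesteps the staleness problem entirely.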
OpenAI couldn't do a Google-style, many-endpoints product diversification. OpenAI wants to make big models that do stuff no one else's can, period. They aren't going to waste their time making, for example, a therapy chatbot, a code helper, a semantic search platform, etc., and then curate, market, and bottle them all separately. If they wanted to do that and focused their business on it, Google would easily beat them, because Google has much better inertia with all its existing services. It makes much more sense to let other companies just use their simple API, so they can focus on being the very best at the core models they offer.
The issue with language models, for the companies that create them, is that they are so general. If a company builds a language model into their backend, and another one comes out from a different company that's 1.3x as good, it would be trivial to switch to the better one. It's not like a company being so tied to AWS that they can't even fathom switching to another cloud platform because all their internal shit is built into the thousand specific ways AWS works. As a result, to be competitive you need to be the very best in that realm.
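And that's exactly why switching is trivial: the whole integration surface is "text in, text out." A hedged sketch of how thin the seam is (VendorA and VendorB are invented names, not real client libraries):

    from typing import Protocol

    class TextModel(Protocol):
        """The entire integration surface for a general language model."""
        def complete(self, prompt: str) -> str: ...

    class VendorA:
        def complete(self, prompt: str) -> str:
            raise NotImplementedError("call vendor A's completion API here")

    class VendorB:
        def complete(self, prompt: str) -> str:
            raise NotImplementedError("call vendor B's completion API here")

    # Swapping to the 1.3x-better model is a one-line change,
    # not an AWS-scale migration:
    model: TextModel = VendorB()  # was: VendorA()
    summary = model.complete("Summarize this support ticket: ...")

There's no stored state, no proprietary data formats, no IAM sprawl pulling you back - none of the gravity that makes leaving a cloud provider unthinkable.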
They are a weird company, and their core motivations aren't money, though obviously they have to make money if they want to keep doing what they want to do.
Yes, the release was wild and disruptive, but when your core motivation isn't greed, you can do some seriously wild and disruptive things.