I have commented this many times on such articles, and will say it again:
Google still thinks of AI as a research project, or at best a way to produce better search results. They essentially created the entire current generation of the AI space and then... gave it away, because no one on the product side understood what they had actually built. Handing the reins to the DeepMind team – who have never launched a single product in their history – seems to be a doubling down on that same failed strategy.
Google doesn't need more smart AI researchers, academics or ethicists. They need product managers who understand the underlying technology and can commercialize it. They need pragmatic engineers who can execute, launch and maintain services. That has always been their problem as a company.
As someone who's been at Google Research ~5 years, this nails it 100%.
I was in the non-Brain part of Research, and Google Brain was seen as the "cool", pure-research one, dealing with some abstract future AI and not caring about products, feasibility, or even whether the research "could" be made practical any day.
DeepMind was an "extreme" version of that, with some animosity and politics between the two, which I didn't follow too closely.
There were attempts at making DeepMind useful, called "DeepMind for Google", but the people there were... clueless. Though one really cool thing came out of it (XManager).
(I was at a closer to the product part, "Perception", which I loved. And still got to publish, explore, pursue my own research goals, etc.)
Ilya _was_ at Google Brain, so something doesn’t add up there. I believe people wanted to launch things, but higher management stopped it.
I was next to the team that created Allo’s chat bot, but they said that they had to take out most cool stuff because legal didn’t allow it to launch, so they had to dumb it down totally.
I believe the main problem was all the ethics/safety teams that just hired a lot of non-programmers, while OpenAI management treated safety as an engineering problem that has to be solved with a technical solution.
OpenAI isn’t a profitable multibillion-dollar public company with many prior lost (some won) lawsuits. To some extent they didn’t have anything to lose by cutting corners.
This is one of startups' greatest advantages over established players.
Microsoft is a profitable multibillion-dollar public company and they managed to extend their search product in no time. Google is not as effective as it could be. They have great engineering and they proved that many times, but something is not working on the product design side.
This is such a lousy excuse on Google's part. We see multi-billion public companies bringing bold innovative products to extremely regulated sectors, like pharmaceuticals and medical devices. Google can't deploy a chatbot because of lawsuits? Oh, c'mon...
Hindsight is 20/20. Before ChatGPT, chatbots had a lousy track record. Pretty much any such project run by bigtech had been cancelled due to PR issues ("Tay" etc).
That's what's great about competition. It kicks you in the pants and reminds you that you need to try.
iPhone scared the shit out of the phone market and today we have great phones from Samsung and Google which dominate the market. If everyone was trying to predict the smartphone market in 2007 they'd be talking about how Nokia missed the boat but be excited to see its response (or Motorola/Sony/Blackberry, etc.). The market today won't necessarily be the market 10 years from now. It might be Google, they have a solid head start to be #2 and a future #1, but who knows what will happen and whether that talent/advantage stays at Google.
It could just as easily be other companies we don't even consider serious players today.
I bought an S7 or S10 which was waterproof and took photos underwater. IIRC the iPhone was rumored to also be waterproof at the time, but it wasn't advertised as such.
I don’t know how true these things were. Did anyone else get this perception?
This is a great analogy, and it touches upon why I try to avoid Google services. The company that once championed "don't be evil" now cannot think outside of the system of perverse incentives it has created (by that I mean ad-based revenue).
IE was already dominant by 2002. Microsoft didn't ignore the Internet. They went hell-bent on it during the Gates era and won decisively. It's only when they had no competition that IE stagnated and was then surpassed.
If talking about bundling a web browser inside their OS, sure. It was more that Microsoft missed the entire potential of the internet as a whole and how fundamentally transformational it could be. They had no presence in online services, e-commerce, search and more until they saw competitors eating their lunch, and have been lagging behind ever since.
> AOL was already dominant by the time MSN launched.
And never got off the pre-web walled garden end-to-end business model, despite connecting to the web, and died because of it. Not exactly the best example to use to argue Microsoft missed the boat on Web-era online services.
Number one browser aside, they had the number one web mail app, the number one chat app, the number one voice chat app, the most web native news service.
Yes Ballmer threw everything away and IE, Hotmail, MSN Messenger, Skype and MSNBC are jokes now, but that doesn’t mean they weren’t a dominant force in the Bill Gates era.
> Yes Ballmer threw everything away and IE, Hotmail, MSN Messenger, Skype and MSNBC are jokes now, but that doesn’t mean they weren’t a dominant force in the Bill Gates era.
At least we got Xbox and Xbox Live… which seem to have barely survived on a lifeline. A confusing mess with Xbox vs. Windows vs. Media Center PC.
I still think they could have been much more successful with the Kinect. The Wii was loved for its ability to bowl and play tennis and so on. Nobody I know really wants to strap a sensory deprivation device onto their face. I don’t see VR working out, but a highly evolved sensor bar that allows you to interact with humans physically present and online seems an easier pill to swallow. Never owned one, but it seems like the Kinect is still fondly remembered in some applications.
All that said I wouldn't buy one today because I don’t need x amount of cameras and lidars and y microphones recording inside my house 24/7 and going to MSFT and whoever else.
> They had a presence in online services, MSN is older than even IE.
No, it isn’t. IE, Microsoft Internet Start (Microsoft’s original web portal), and MSN (originally a separate subscription-based dialup online service that later merged with the main portal) all launched simultaneously in August 1995.
IE was not a terrible browser, and Microsoft did really go hell bent on it in terms of dedicating resources to the IE development. I am not sure why you call IE a terrible browser here. (IE has to be compared to Netscape for any such assessment to make sense). IE development stopped once Microsoft had won, not before.
It’s hard to overstate how far ahead of everyone else DeepMind was as far back as 7-8 years ago. When AlphaGo burst onto the scene it blew everyone’s mind and redefined what was thought possible for an AI agent. I thought back then that surely Google would figure out a way to convert those research breakthroughs into consumer products.
And yet here we are. If anything it’s a great example of how a “money printer” business like Google’s ads makes an org lazy and risk-averse, and accumulates an army of empire builders who collectively breed promo culture.
Combine that with a decade of 0% interest rates which also boosted its stock price and it’s no wonder that Google is struggling on so many fronts.
Is there anyone who has successfully productized AI? ChatGPT isn’t a profitable product, at least not yet. Google Photos and Spotify recommendations are the best AI products I can think of with clear revenue, and in these examples AI is just a cherry on top of a product people would use anyway.
I’d be astonished if they’re even close to breaking even on copilot. In its current incarnation it wouldn’t even lace the boots of what’s coming out of OpenAI.
CopilotX with its OpenAI collab will be the real winner - if it ever gets released to those on the waitlist. I’m not aware of anyone who got in yet, which leads me to believe it doesn’t yet exist.
I got access to the Copilot CLI, which is supposed to be part of the Copilot X package eventually. Dunno anyone who has gotten access to Copilot Chat yet, which I expect is what everyone really wants.
The question there is how many of the estimated 100,000 software engineers at Microsoft are using Copilot, and what has been their (I'm sure measured) productivity boost. Microsoft does derive some benefit from Copilot being accessible to the paying subscribers of the world from having them give feedback and in free (and paid) press. But the internal use numbers probably easily justify its initial cost to train.
Say Copilot makes an engineer 2x as productive, their all-in salary is $500k to make the math easy, 1,000 MS sw engineers are using it, and Copilot took $5mm to train (GPT-3 took $4.6mm). Those 1k MS employees now being twice as productive are doing the work of an extra 1k people at $500k, or $500 million's worth in a year. That means $5 million in Copilot training costs are paid for in... 4 days. I have no inside information, so those numbers are all made up, but I'm pretty sure the initial training costs have already been paid off internally.
We also don't know how many multiples of $5mm it took to produce the initial version of Copilot, nor how many subsequent training runs there have been.
Point is, any significant productivity gains made, across an organization the size of Microsoft engineering, easily pays for big expensive training runs.
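For concreteness, here's that back-of-envelope as a quick calculation. All inputs are the made-up numbers from this comment, not real Microsoft figures:

    # Assumed: 2x productivity, $500k all-in salary, 1,000 engineers, $5M training cost.
    engineers = 1_000
    all_in_salary = 500_000              # $/year per engineer (assumed)
    extra_output_per_engineer = 1.0      # "2x as productive" = one extra engineer's worth
    training_cost = 5_000_000            # $ (assumed; GPT-3 reportedly ~$4.6M)

    extra_value_per_year = engineers * all_in_salary * extra_output_per_engineer
    days_to_pay_back = training_cost / (extra_value_per_year / 365)
    print(extra_value_per_year)          # 500000000 -> "$500 million's worth in a year"
    print(round(days_to_pay_back, 1))    # 3.7 -> "paid for in... 4 days"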
SOME parts of an engineer's work are 2x faster, but not all. Generating code - yes, writing tests and docs - yes, designing the system - no, debugging - no, attending meetings - no, getting your data faster, or moving the other team to finish integration sooner - no help. So it's going to be a 10% boost overall, not 100%.
The nice effect is that the AI makes people more confident to try things and go out of their comfort zone. Maybe the quality of the end product will be higher.
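A rough sketch of where a ~10% overall figure comes from; the 20/80 task split below is an illustrative assumption, not measured data:

    # Only code/test/doc writing gets the 2x speedup; design, debugging,
    # meetings and waiting on other teams are assumed unchanged.
    boosted_share, boosted_speedup = 0.20, 2.0
    other_share, other_speedup = 0.80, 1.0

    new_time = boosted_share / boosted_speedup + other_share / other_speedup
    print(round(1 / new_time, 2))   # 1.11 -> roughly a 10% overall boost, not 100%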
It's a good question, but also helpful to point out that one of the beauties of these models is that you can train them once and deploy to many use cases. The same model can be used by Github, Bing, Office 365, Azure and so on.
And as for the big multi-billion investment in OpenAI, they may have more than made that back on their valuation already. Plus, the deal was structured so that OpenAI would pay its revenues to Microsoft until the investment was paid back, and MS would still end up with a 49% stake.
All in all, it sounds like a smart investment from MS and, cherry on top, they managed to majorly embarrass a main rival.
How do you want to define AI? Google's been using ML models to power various Google products for years. Translate was a big switch over, back in 2016, and it powers the Google Search answer box. I have no idea how profitable that answer box is, but I'm fairly certain that Google Search is profitable. The Google Home speaker voice recognition is also powered by ML models.
It is currently April 2023. “~0 in 2022” is the only part of that that seems credible. I'm not convinced by OpenAI’s rosy predictions of future explosive growth.
The fact that Microsoft is baking GPT into all of their products guarantees explosive growth.
ChatGPT is also one of the fastest growing consumer products in history by number of users. At $20 a month for plus, it could be a significant revenue stream.
Then add all the companies like Duolingo and Snapchat that are using GPT as well.
If you don’t see this as explosive growth, then I don’t know what to tell you.
Because Microsoft is the largest software company in the world. Their products are in wide use by virtually all businesses and schools in the country. Judging by the popularity of ChatGPT, these features will be very popular and heavily used.
"Hopes to grow" revenues. Current estimates put hardware costs alone at $700k/day, so even if they hit $300M in 2023 that won't make them profitable. This isn't even counting the people costs and other operation costs required to run a company.
edit: order of magnitude was wrong on costs per day.
And yet, $300M is only 1.25M subscribers at the current $20/mo rate. If we say that they need a $1B / year to be comfortably profitable, that's ~4.2M subscribers. A good rule of thumb is that you can hope to convert about 10% of your free user base to paid; one random source says they have 100M monthly active users - which at 10% conversion, is $2.4B / year. I think they'll be fine.
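The same math as a quick sketch; the price, revenue target, MAU figure, and 10% conversion are all the assumptions stated above, not reported numbers:

    price_per_year = 20 * 12                        # $20/month Plus price, annualized

    print(round(300e6 / price_per_year / 1e6, 2))   # 1.25 -> ~1.25M subs for $300M/yr
    print(round(1e9 / price_per_year / 1e6, 1))     # 4.2  -> ~4.2M subs for $1B/yr

    mau = 100e6                                     # one unverified public estimate
    conversion = 0.10                               # freemium rule-of-thumb assumption
    print(mau * conversion * price_per_year)        # 2.4e9 -> ~$2.4B/yr implied revenue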
You are talking about a revolutionary product that literally dominates mindshare from consumers to students to CEOs to governments. It's a pure monopoly that has insanely wide utility and an instantly obvious value proposition.
Do you know anything about B2C freemium products? 10% conversion over 10s-100s of millions of users is literally insane.
It doesn't matter how revolutionary GPT-4 is, people's willingness to pay for anything is generally very low. ChatGPT premium is also a very expensive subscription for a consumer product!
People are willing to pay $10k a year for college, very high conversion rate.
People are willing to pay for tutoring, at a rate that's probably about 10% of the student base, despite its very high cost ($50/h, not $20/month).
At the very minimum, I can already see every university and high school student paying for GPT-4. It's a way way way more powerful essay writer and personal tutor than GPT-3.5; that alone is incentive to upgrade. For only $20 a month.
You know what, GPT-4 is currently insanely capacity-limited. So far, the limitation is not on the demand side, but the supply side.
Getting 100 million users in 3 months with 0 marketing or network effects already annihilates existing records, getting 10% conversion is nothing special.
You could have saved yourself and everyone else the time by instead writing “I have no idea what I’m talking about and I don’t understand constraints”.
The cheating sites I see on the first page of Google seem to charge around $10-$30/page. GPT is cheaper, but lower quality. I don't think the market for tens of dollars cheaper but shittier quality than buyessayfriend.com is anywhere near every highschool and college student.
The GPT API is a successful product. All those startups that are just a thin layer over GPT and funded by Y Combinator are paying for API use, and that's profitable for OpenAI.
No, I would like facts, not assumptions. It's definitely not safe to assume they are making a profit, as a whole, or per transaction. It's more complicated than that.
Profit has a strict definition of $revenue - $cost, for a business operation as a whole, which leaves money in the bank at the end of the month.
They could be making more money for a single query than the cost of compute time for that single query, but that may not cover the engineering and idle servers. They could be running at a loss with the assumption that they can improve efficiency per transaction soon. They could be running at a "loss" because they're giving some of the compute away for free right now, to improve the training with the user responses. Or maybe they are making fistfuls of money. "Profitable" has a strict meaning, shouldn't be assumed, and definitely isn't required, at this point in their operation.
I'm very interested to know if they are profitable, at the moment, but I don't think that's been publicly disclosed yet, and I can't find anything. A reference is required.
I don't have a reference. I'm taking the very reasonable assumption that OpenAI is making money on API calls based on how much they charge compared to others in this space, the favorable pricing they receive from Microsoft, their ability to constantly bring down costs and pass the savings on to their consumers, and their unwillingness to lower the cost of DALL-E even though it's more expensive than its competitors.
Very reasonable assumptions. You will never get certainty, even if they say they're profitable maybe they're just lying for investors. If you see their bank account total go up every month maybe it's a ponzi scheme.
By my heuristics: if not profitable, then at least close, and definitely a major success in acquiring market share and customer mindshare.
It was not a petty insult. The options you gave are "fraud" or "unfounded speculation." Literally lacks any kind of nuance. What sort of nuance would you say you contributed?
I think it would be better to make it clear that something is an assumption rather than stating it as fact, to not add to the noise. In the world of tech (and any R&D heavy group), initially running at a loss is the norm, not the exception.
TikTok uses a bunch of AI: from their algorithm for the FYP, to vision models for classifying videos, to sound processing for bucketing sounds/music. This feeds into their rec engine as well as their safety engine.
If you consider Spotify recommendations AI, then you should also consider YouTube and every social network with a non-chronological timeline and ads, no?
I completely agree. I was involved as a tech executive at a large medical center trying to get collaborative work with both Google per se, and DeepMind, to a usable or product stage, and it was essentially impossible. DeepMind in particular was more interested in pushing the research envelope, and getting more papers in Nature, than in building products.
I wouldn't underestimate the degree to which this is by design, from the very top of Google. Different Google and other Alphabet companies' executives more than once told me they just weren't interested in products that didn't have an obvious path to more than 1 billion users. The companies don't have a clue how to make money retail. If they can't print money with an idea, they don't have the tools and skills to bring it to market.
I would go farther still and say Google doesn’t need PMs — it needs a singularly bold visionary — a Satya Nadella or John Legere or maybe someone else — not a good CEO or a bad CEO; nor a polite CEO or a trash-talking tough CEO — they just need someone to yank the empire builders out and go all in on product building. Not product managing.
The deeper problem is the money people who lead Google have no imagination whatsoever and literally can’t figure out how to make money in anything that isn’t mining user data for ads. Cloud is a money pit, and will NEVER be #1 in the space.
They literally put more effort and resources into rigging ads auctions than trying to solve real user problems.
They're just making a correct calculation. Do we increase our $70 billion / quarter by 2%, or do we put resources towards doubling the income of a $1 million / year product?
Yes, the answer isn't what you'd like, but let's not pretend it's not rational.
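For scale, a quick comparison of those two options, using the hypothetical figures above:

    ads_gain = 70e9 * 4 * 0.02           # +2% on ~$70B/quarter of ads, annualized
    new_product_gain = 1e6               # doubling a $1M/yr product adds ~$1M/yr
    print(ads_gain, new_product_gain)    # ~5.6e9 vs 1e6: a ~5,600x difference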
The cat's out of the bag now; it doesn't take a genius PM to work it out. Maybe a genius PM could've worked out how revolutionary generative AI was going to be pre-ChatGPT release, but I really doubt that a random MBA who knows nothing about AI could do that. Every single day there's a cool new AI application. The problem space is fairly fleshed out. It's a matter of executing.
How do you make enterprise tools better? (Photoshop + AI, Code + AI etc.)
How do you make consumer tools better? (YT tools + AI)
How do you make search better?
> They need pragmatic engineers who can execute, launch and maintain services
Not the job description Google has for engineers. Their hiring process effectively eliminates anyone who matches that description and bothers to put themselves through it.
To me, you've described a situation that's infinitely more a feature than a bug.
They don't pay me, so I don't care about their profits, but the stuff they've more-or-less given away and people don't think much about is their best stuff. Colab, Docs, Scholar, etc...
> Google doesn't need more smart AI researchers, academics or ethicists. They need product managers who understand the underlying technology and can commercialize it.
Google's revenue is equal to the GDP of New Zealand. Google's cash reserves are sufficient to sustain the company through half a century of bad quarters. Not that they have had any bad quarters, ever.
They don't need anything.
Their ad business is a donkey that shits gold, with Chrome and Android keeping the competition locked out, and everything else doesn't matter for Google as a company. They've done little more than play around for the last decade, with an endless procession of hyped-then-canceled "products", and it hasn't affected their market dominance in the slightest.
They can keep fucking up for the foreseeable future, and it won't really matter. If a startup emerges that appears to have the right approach to AI, Google can simply buy it. Power is power, and everything else is nothing.
Google controls 90% of the browser market, and 70% of the mobile OS market. They can force anything they want down the throats of billions of people, the vast majority of which aren't even aware that there are alternatives. And even those that are aware are in many cases unable to switch, as for example most phones will only run their manufacturer's version of Android, from which Google features usually cannot be completely removed.
This isn't even remotely comparable to the AltaVista situation. Google has a planet-scale stranglehold on how half the world's population accesses information. This arguably makes Google more powerful than most nation states. I can guarantee they won't be dislodged by some search startup with a cool idea.
> Google has a planet-scale stranglehold on how half the world's population accesses information
…currently.
I think OP’s point is that once some better product comes out and e.g. normal people hear about it on tiktok and start switching, it doesn’t seem like Google is institutionally capable of competing. Honestly, considering how much good research they put out, I really hope they don’t.
> it doesn’t seem like Google is institutionally capable of competing
They don't have to compete. They can just buy any startup that's a potential threat to them, or lobby for laws that would effectively make them illegal. That's what power is, and it's all that matters.
You could have said the same about Compuserve or AOL or IBM or Xerox or HP at some point in their history. Where are all of them now?
Every untouchable business inevitably gets disrupted once the company can no longer stay competitive and get ahead of new trends. Same is happening with Google today.
IBM bought Red Hat. And the Watson demo is very similar to the Waymo situation: it works, but no one can actually use it, though IBM pushed it ruthlessly for years.
Netscape was a company during a time when using the Internet was a niche activity.
Google is one of the most powerful entities in the history of mankind, with read/write access to the private information of the majority of humans alive.
The two aren't comparable. Indeed, there are few entities that are comparable to Google. Google won't use their technological edge to keep potential competitors out, they'll use their financial, social, and political power.
I don’t think they will. And I think like Lotus Notes running groupware, like Windows running the desktop, like Novell running the LAN, Google will always rule search, and never make the transition into the next thing.
The company hasn’t released a significant product in 15 years; despite the huge amount of money they have, that will not be enough to change the culture.
> they need product managers who understand the underlying technology and can commercialize it.
I would say they more need engineers who care about and can make good products. In my limited experience, it takes time to turn a research-focused group into a product-oriented team. Research vs production requires different skill sets.
So Google did a Xerox PARC or an IBM PC (to a lesser extent)?
It’s curious how predictable it is that a major player is going to fall into this trap at any point in time. There’s probably some way you could measure its likelihood if you were tracking internal comms, org charts and maybe some finely designed survey data.
Well, I'm ok with Google having this "problem". Hopefully, they will produce a couple more useful things, until it somehow kills Google or whatever.
For a business to survive it needs competition. You could argue it was necessary to create competition in the AI space before business needs arise that Google could provide. Without competition, there's no way for the market to rise to demand. Google didn't give away stuff, they created demand.
Isn't the exact opposite reading of this just as likely to be true from what we know so far? That rather than handing over the reins to DeepMind, they're actually reining them in and forcing them to work on productization?
- This does not seem unexpected. Google is panicked about losing the AI race and pushing resources into DeepMind is a logical step to mitigating those fears.
- Google has currently given ~$300M to Anthropic and has a partnership with them. I assume Google continues to see potential in both avenues and won't neglect one AI team for the other. I'm guessing that DeepMind will be their primary focus because of the numerous real-world applications already at play.
- It's tough for me to compare Google DeepMind to OpenAI GPT-4. They seem to be very different approaches. Yet, they both have support for language and imagery. So, perhaps they aren't that different after all?
- Still waiting to hear more from Google on how they plan to leverage their novel PaLM architecture. The API for it was released a month ago, but, to my awareness, has yet to take the world by storm. (Q: Bard isn't powered by PaLM, right?)
Overall, I am not convinced this will be massively beneficial. I don't trust Google's ability to execute at scale in this area. I trust DeepMind's team and I trust Google's research teams, but Google's ability to execute and take products to market has been quite weak thus far. My gut says this action will hamstring DeepMind in bureaucracy.
>> Overall, I am not convinced this will be massively beneficial. I don't trust Google's ability to execute at scale in this area.
Yes, the team which literally created the transformer and almost all the important open research including BERT, T5, Imagen, RLHF, and ViT doesn't have the ability to execute on AI /s. Tell me one innovation OpenAI brought into the field. They are good at execution, but I haven't seen anything novel coming out of them.
> Yes, the team which literally created the transformer and almost all the important open research including BERT, T5, Imagen, RLHF, and ViT doesn't have the ability to execute on AI /s.
This, but non-sarcastically. Google has spectacularly, so far, failed to execute on products (even of the “selling shovels” kind, much less end-user products) for generative AI, despite both having lots of consumer products to which it is naturally adaptable and a lot of the fundamental research work in generative AI.
The best explanation is that they actually are, institutionally and structurally, bad at execution in this domain, because they have all the pieces and incentives that rule out most of the other potential explanations for that.
> OpenAI brought into the field. They are good at execution, but I haven't seen anything novel coming out of them.
Right, OpenAI is good at execution (at least, when it comes to selling-shovels tools, I don’t see a lot of evidence beyond that yet), whereas Google is, to all current evidence, not good at execution in this space.
They're getting Innovator's Dilemma'd, the same way that Bell Labs, DEC, and Xerox did. When you have an exceptionally profitable monopoly, it biases every executive's decision-making toward caution. Things are good; you don't want to upset the golden goose by making any radical moves; and so when your researchers come out with something revolutionary and different you bury it, maybe let them publish a few papers, but certainly don't let it go to market.
Then somebody else reads the papers, decides to execute on it, and hires all the researchers who are frustrated at discovering all this cool stuff but never seeing it launch.
The typical solution to this (assuming there is one internally) is setting up a sub-company and keeping the team isolated from the parent company, aka "intrapreneurship", while also keeping them well resourced by the parent.
It seems like that's what they were doing with DeepMind for the last decade. But it's also possible DeepMind as an institution lacked the pressure/product sense/leadership to produce consumable products/services. Maybe their instincts were more centered around R&D and being isolated left them somewhat directionless?
So now that AI suddenly really matters as a business, not just some indefinite future potential, Google wants to bring them inside.
They could have created a 3rd entity, their own version of OpenAI, combining DeepMind with some Google management/teams and other acquisitions and spinning it off semi-independently. But this play basically has to be from Google itself for their own reputation's sake - maybe not for practicality's sake but politically/image-wise.
Yeah. It doesn't really work all that well. Xerox tried it with Xerox PARC, Digital with Western Digital, AT&T with Bell Labs, Yahoo with Yahoo Brickhouse, IBM with their PC division, Google with Google X & Alphabet & DeepMind, etc.
Being hungry and scrappy seems to be a necessary precondition for bringing innovative products to market. If you don't naturally come from hungry & scrappy conditions (eg. Gates, Zuckerberg, Bezos, PG), being in an environment where you're surrounded by hungry & scrappy people seems to be necessary.
For that matter, a number of extremely well-resourced startups (eg Color, Juicero, WebVan, Secret, Pets.com, Theranos, WeWork) have failed in spectacular ways. Being well-resourced seems to be an anti-success criterion even for independent companies.
That may have been true in the 70's and 80's. However, I worked for a 2000 person (startup) software company in the 90's that was acquired at 1.8B, another 4000 person (startup) software company in the 90's that was acquired at 3.4B, and then a few years ago, the acquirer of both was itself acquired for 18B.
I survived ALL the layoffs somehow. Boots on the ground agrees with "doesn't really work all that well", but the people collecting rents keep collecting. Given the size, all of these received significant DOJ reviews, though the only detail I remember is basketball-court-sized rooms filled with printed paper for the depositions. I'm sure they burned down the Amazon to print all that legalese, speaking of scaling problems.
edit: i take it all back! my memory is not as good as i thought it was re: software companies. i will leave up my sorry list as penance for my crappy recent tech history skills.
Thanks for the comment. Chortle. That's hilarious.
Indeed, you are right on: Legent, Platinum, CA, and Broadcom in order from little fish to big. CA was the second largest software company in the world behind Microsoft then.
The weird part you couldn't see from this telling is that I worked in the Legent office in Pittsburgh, moved to Boston post-CA acquisition and worked in the CA office in Andover. Resigned and went to Platinum in Burlington. Moved to Seattle. Second CA acquisition in 5 years. I should have quit while I was ahead. Moved back to Pittsburgh. Worked in the exact same office I'd worked in 5 years earlier with the same crew. Weird feeling is a mild understatement. I still know people who work for Broadcom now. I should reach out.
i used to read BYTE mag over in the UK in the early 90s before i moved to USA; CA was such a heavy hitter in the early 90s!! i guess it never really was the same in the post-Wang era(s).
The problem with the intrapreneurship idea is that it's really hard to beat desperation as a motivator. I have seen people behave very differently in the context of a startup vs a corporate research lab thanks to this dynamic. Some people thrive in the corporate R&D environment, but the innovator's dilemma eventually gets to their managers.
Cisco has done a great job balancing this, actually - they keep contact with engineers who leave to do startups, and then acquire their companies if they become successful enough to prove the product.
After a bunch of ex-Cisco people ate Cisco’s core router lunch at Juniper, Cisco vowed it would never happen again. Until a bunch of ex-Cisco people ate WebEx’s lunch at Zoom.
Getting a big seed round once makes you want that next round to keep going (and take even more money off the table).
Getting a X-million-per-year budget from a parent company gives you a very different sort of situation. IME this results in less urge to get something out the door and more urge to get "the best thing" built. Shipping early risks your budget in a way that "look at all this cool theoretical progress" doesn't, because the public and press can critique you more directly.
Lack of major owner equity basically means few intrapreneur efforts will succeed unless the 'founder' really couldn't succeed without the daddy company
> But it's also possible DeepMind as an institution lacked the pressure/product sense/leadership to produce consumable products/services. Maybe their instincts were more centered around R&D and being isolated left them somewhat directionless?
It seems like this is more a Google problem than a DeepMind problem though, no? Google created one of the most successful R&D labs for ML/AI research the world has ever known, then failed to have their other business units capitalize on that success. OpenAI observed this gap and swooped in to profit off all of their research outputs (with backing from Microsoft).
IMO what they’re doing here is doubling down on their mistakes: instead of disciplining their other business units for failing to take advantage of this research, they’re forcing their most productive research team to assume responsibility and correct for those failures. I expect this will go about as well as any other instance of subjecting a bunch of research scientists to internal political struggles and market discipline, i.e. very poorly.
They're also paying for their product managers' cancellation culture. (Sorry.) I'm seeing a lot of AI pitch decks; none suggest trusting Google. That saps not only network effects, but what I'll term earned research: work done by others on your product. Google pays for all its research and promotion. OpenAI does not.
Are researchers actually frustrated to never see it launch, or are they mostly focused on publishing papers?
I thought OpenAI’s unique advantage over many big tech companies is that they’ve somehow figured out how to fast track research into product, or have researchers much more willing to worry about “production”.
I’m puzzled that stuff like AlphaFold counts for nothing in this discussion (having just browsed through most of it).
I saw quotes from independent scientists referring to it as the greatest breakthrough of their lifetime, and I saw similarly strong language used in regard to AlphaFold's potential for good as a product.
So they gave it away, but it is still a product they followed through on and continue to.
Was it wrong of them that they gave it away, and right that Microsoft’s primary intent with their OpenAI technology seems to be to provoke an arms race with Google?
AlphaFold is a game changer, but nowhere near the game changer ChatGPT(4) is, even if ChatGPT was only available for the subset of scientists that benefit from AlphaFold. We are literally arguing semantics if this is AGI, and you're comparing it to a bespoke ML model that solves a highly specific domain problem (as unsolvable and impressive as it was).
> We are literally arguing semantics if this is AGI,
And if it isn't? Literally every single argument I've seen towards this being AGI is "We don't know at all how intelligence works, so let's say that this is it!!!!!"
> nowhere near the game changer ChatGPT(4) is, even if ChatGPT was only available for the subset of scientists that benefit from AlphaFold
This is utter nonsense. For anyone who actually knows a field, ChatGPT generates unhelpful, plausible-looking nonsense. Conferences are putting up ChatGPT answers about their fields to laugh at because of how misleadingly wrong they are.
This is absolutely okay, because it can be a useful tool without being the singularity. I'm sure that in a couple of years' time, most of what ChatGPT achieves will be in line with most of the tech industry's advances of the past decade - pushing the bottom out of the labor market and actively making the lives of the poorest worse in order to line their own pockets.
I really wish people would stop projecting hopes and wishes on top of breathless marketing.
I asked GPT-4 to give me a POSIX compliant C port of dirbuster. It spit one out with instructions for compiling it.
I asked it to make it more aggressive at scanning and it updated it to be multi-threaded.
I asked it for a word list, and it gave me the git command to clone one from GitHub and the command to compile the program and run the output with the word list.
I then told it that the HTTP service I was scanning always returned 200 status=ok instead of a 404 and asked it for a patch file. It generated that and gave me the instructions for applying it to the program.
There was a bug I had to fix: word lists aren’t prefixed with /. Other than that one character fix, GPT-4 wrote a C program that used an open source word list to scan the HTTP service running on the television in my living room for routes, and found the /pong route.
This week it’s written 100% of the API code that takes a CRUD based REST API and maps it to and from SQL queries for me on a cloudflare worker. I give it the method signature and the problem statement, it gives me the code, and I copy and paste.
If you’re laughing this thing off as generating unhelpful nonsense, you’re going to get blindsided in the next few years as GPT gets wired into the workflows at every layer of your stack.
> pushing the bottom out of the labor market and actively making the lives of the poorest worse in order to line their own pockets.
I’m in a BNI group and a majority of these blue collar workers have very little to worry about with GPT right now. Until Boston Dynamics gets its stuff together and the robots can do drywalling and plumbing, I’m not sure I agree with your take. This isn’t coming for the “poorest” among us. This is coming for the middle class. From brand consultants and accountants to software engineers and advertisers.
Software engineers with GPT are about to replace software engineers without GPT. Accountants with GPT are about to replace accountants without GPT.
> Literally every single argument I've seen towards this being AGI is
Here is one: it can simultaneously pass the bar exam, port dirbuster to POSIX compliant C, give me a list of competing brands for conducting a market analysis, get into deep philosophical debates, and help me file my taxes.
It can do all of this simultaneously. I can't find a human capable of the simultaneous breadth and depth of intelligence that ChatGPT exhibits. You can find someone in the upper 90th percentile of any profession and show that they can outcompete GPT-4. But you can't take that same person and ask them to outcompete someone in the bottom 50th percentile of 4 other fields with much success.
Artificial = machine, check.
Intelligence = exhibits Nth percentile intelligence in a single field, check
General = exhibits Nth percentile intelligence in more than one field, check
Maybe it's heavily biased towards programming and computing questions? I've tested GPT-4 on numerous physics stuff and it fails spectacularly at almost all of them. It starts to hallucinate egregious stuff that's completely false, misrepresents articles it tries to quote as references etc. It's impressive as a glorified search engine in those cases but can't at all be trusted to explain most things unless they're the most canonical curriculum questions.
This extreme difficulty in discerning what it hallucinates and what is "true" is its most obvious problem. I guess it can be fixed somehow, but right now it has to be heavily fact-checked manually.
It does this for computing questions as well, but there is some selection bias, so people tend to post the success stories and not the failures. However, it's less dangerous in computing, as you'll notice it immediately, so it may require less manual labour to keep it in check.
Hahaha, if you want nit-picking: all the language tasks ChatGPT is good at are strictly human tasks, not general tasks. Human tasks are all related to keeping humans alive and making more of us; they don't span the whole spectrum of possible tasks where intelligence could exist.
Of course inside language tasks it is as general as can be, yet still needs to be placed inside a more complex system with tools to improve accuracy, LLM alone is like brain alone - not that great at everything.
On the other hand, if you browse around the web you will find various implementations of dirbuster, probably in C and certainly in C++, which are multi-threaded. It's not to take away from your experience, but without knowing what's in the training set, it may have already been exposed to what you asked for, even several times over.
I have a feeling they had access to a lot of code on GH, who knows how much code they actually accessed. Copilot for a long time said it would use your code as training data, including context, if you didn’t opt out explicitly, so that’s already millions maybe hundreds of millions of lines of code scraped.
The conspiracy theorist in me wonders if MS didn't just provide access to public and private code to train on; they wouldn't have even told OpenAI, just said, "here's some nice data". It's all secret and we can't see the model's inputs, so I'll leave it at that. I mean, they've obviously prepared the data for Copilot, so it was there waiting to be trained on.
So yeah, I feel your enthusiasm, but if you think about it a little more, maybe it's not so hard to imagine that what you saw was actually rather simple? Every time I write code I feel kind of depressed, because I know almost certainly someone has already written the same thing and that it's sitting on GitHub or somewhere else and I'm wasting my time.
ChatGPT just takes away the knowing where to find something (it's already seen almost everything the average person can think of) and gives it to you directly. Have you never thought of this already? Like, you knew all the code you wanted was already there somewhere, but you just didn't have an interface to get to it? I've thought about this for quite a while, and I knew there would be big-data people doing experiments who could see that probably 80-90% of code on GitHub is pretty much identical.
> If you’re laughing this thing off as generating unhelpful nonsense, you’re going to get blindsided in the next few years as GPT gets wired into the workflows at every layer of your stack.
Okay, now try being a scientist in a scientific field that isn't basic coding.
It's not people laughing at pretences, it's people who know even basic facts about their field literally looking at the output today and finding it deeply, fundamentally incorrect.
I do not believe that is a reasonable threshold for AGI. If it were, I believe a significant % of humans would individually fail to meet the threshold of AGI.
I wonder what your personal success rate would be if we did a Turing test with the “people” who “know basic facts about their field.” If they sat at a computer and asked you all these questions, would you get them right? Or would you end up in slide decks being held up as a reason why misnome doesn’t qualify as AGI?
I find comfort in knowing that it can’t “do science.” There is a massive amount of stuff it can do. I’m hopeful there will be stuff left for humans.
Maybe we’ll all be scientists in 10 years and I won’t have to waste my life on all this “basic coding” stuff.
Absolutely not! I created a powershell script for converting one ASM label format to another for retro game development and i used ChatGPT to write it. Now, it fumbled some of the basic program logic, however, it absolutely nailed all of the specific regex and obtuse powershell commands that i needed and that i merely described to it in plain English.
It essentially aced the "hard parts" of the script and i was able to take what it generated and make it fit my needs perfectly with some minor tweaking. The end result was far cleaner and far beyond what i would have been able to write myself, all in a fraction of the time. This ain't no breathless marketing dude: this thing is the real deal.
ChatGPT is an extremely powerful tool and an absolute game changer for development. Just because it is imperfect and needs a bit of hand holding (which it may not soon), do not underestimate it, and do not discount the idea that it may become an absolute industry disrupter in the painfully near future. I'm excited ...and scared
It does, quite often. Not only that, as you describe. But it does.
For example, I asked it what my most cited paper is, and it made up a plausible-sounding but non-existent paper, along with fabricated Google Scholar citation counts. Totally unhelpful.
Right, i think it's a question of how to use this tool in its current state, including prompting practice and learning its strengths. It can certainly be wrong sometimes, but man, it is already a game changer for writing, coding, and i'm sure other disciplines.
If you're a robotresearcher, maybe try getting it to whip up some ...verilog circuits or something? I don't know much about your field or what you do specifically, but tasks like regular expressions or specific code syntax it is absolutely brilliant at, whatever the equivalent to that is in hardware. ...I've only ever replaced capacitors and wired some guitar pickups.
> it made up a plausible-sounding but non-existent paper, along with fabricated Google Scholar citation counts
I ran into a similar issue: I asked it for codebases of similar romhacks to a project i'm doing, and it provided made up Github repos with completely unrelated authors for romhacks that do actually exist: non-existent hyperlinks and everything.
Now, studying the difference between GPT generations, it seems like more horsepower and more data solve a lot of GPT problems and produce emergent capabilities with the same or similar architecture and code. The current data points to this trend continuing. I find it both super exciting and super ...concerning.
This seems like the perfect test, because it's something that does have information on the internet - but not infinite information, and you know precisely what is wrong about the answer.
> I'm sure that in a couple of years' time, most of what ChatGPT achieves will be in line with most of the tech industry's advances of the past decade - pushing the bottom out of the labor market and actively making the lives of the poorest worse in order to line their own pockets
This is not what any of the US economic stats have looked like in the last decade.
Especially since 2019, the poorest Americans are the only people whose incomes have gone up!
I use ChatGPT daily to generate code in multiple languages. Not only does it generate complex code, but it can explain it and improve it when prompted to do so. It's mind blowing.
FWIW, as a non-pathologist with a pathologist for a father, I can almost pass the pathology boards when taken as a test in isolation. Most of these tests are very easy for professionals in their fields, and are just a Jacksonian barrier to entry. Being allowed to sit for the test is the hard part, not the test itself.
As far as I know, the exception to this is the bar exam, which GPT-4 can also pass, but that exam plays into GPT-4's strengths much more than other professional exams.
What is a Jacksonian barrier to entry? I can't find the phrase "Jacksonian barrier" anywhere else on the internet except in one journal article that talks about barriers against women's participation in the public sphere in Columbia County NY during Andrew Jackson's presidency.
I may have gotten the president wrong (I was 95% sure it's named after Jackson until I Googled it), but the word "Jacksonian" was meant to refer to the addition of bureaucracy to a process to make it cost more to do it, and thus discourage people. I guess I should have said "red tape" instead...
Either it's a really obscure usage of the word or I got the president wrong.
"It's difficult to attribute the addition of bureaucracy or increased costs to a specific U.S. president, as many presidents have overseen the growth of the federal government and its bureaucracy throughout American history. However, it is worth mentioning that Lyndon B. Johnson's administration, during the 1960s, saw a significant expansion of the federal government and the creation of many new agencies and programs as part of his "Great Society" initiative. This expansion led to increased bureaucracy, which some argue made certain processes more expensive and inefficient. But it's important to note that the intentions of these initiatives were to address issues such as poverty, education, and civil rights, rather than to intentionally make processes more costly or discourage people.
Exams are designed to be challenging to humans because most of us don’t have photographic memories or RAM based memory, so passing the test is a good predictor of knowing your stuff, i.e. deep comprehension.
Making GPT sit it is like getting someone with no knowledge, but a computer full of past questions and answers and a search button, to sit the exam. It has metaphorically written its answers on its arm.
This is essentially true. I explained it to my friends like this:
It knows a lot of stuff, but it can't do much thinking, so the minute your problem and its solution are far enough off the well-trodden path, its logic falls apart. Likewise, it's not especially good at math. It's great at understanding your question and replying with a good plain-English answer, but it's not actually thinking.
That's a disservice to your friends, unless you spend a bunch of time defining thinking first, and even then, it's not clear that it, with what it knows and the computing power it has access to, doesn't "think". It totally does a bunch of problem solving; fails on some, succeeds on others (just like a human that thinks); GPT-4 is better than GPT-3. It's quite successful at simple reasoning (eg https://sharegpt.com/c/SCeRkT7) and moderately successful at difficult reasoning (eg getting a solution to the puzzle question about the man, the fox, the chicken, and the grain trying to cross the river; GPT-3 fails if you substitute in different animals, but GPT-4 seems to be able to handle that). GPT-4 has passed the bar exam, which has a whole section on logic puzzles (sample test questions from '07: https://www.trainertestprep.com/lsat/blog/sample-lsat-logic-... ).
It's able to define new concepts and new words. Its masters have gone to great lengths to prevent it from writing out particular types of judgements (eg https://sharegpt.com/c/uPztFv1). Hell, it's got a great imagination if you look at all the hallucinations it produces.
All of that adds up to many thinking-adjacent things, if not actual thinking! It all really hinges on your definition of thinking.
exactly. it's almost like saying dictionaries are better at spelling bees, hence smarter than humans, or that computers can easily beat humans at Tetris and are smarter because of that.
That's not a response from someone who wrote the answers on the inside of their elbow before coming to class. That's genuine inductive reasoning at a level you wouldn't get from quite a few real, live human students. GPT4 is using its general knowledge to speculate on the answer to a specific question that has possibly never been asked before, certainly not in those particular words.
It is hard to tell what is really happening. At some level though, it is deep reasoning by humans, turned into intelligent text, and run through a language model. If you fed the model garbage it would spit out garbage. Unlike a human child who tends to know when you are lying to them.
> If you fed the model garbage it would spit out garbage.
(Shrug) Exactly the same as with a human child.
> Unlike a human child who tends to know when you are lying to them.
LOL. If that were true, it might have saved Fox News $800 million. Nobody would bother lying, either to children or to adults, if it didn't work as well as it does.
>We are literally arguing semantics if this is AGI
It isn't and nobody with any experience in the field believes this. This is the Alexa / IBM Watson syndrome all over again, people are obsessed with natural language because it's relatable and it grabs the attention of laypeople.
Protein folding is a major scientific breakthrough with big implications in biology. People pay attention to ChatGPT because it recites the constitution in pirate English.
This is like all the other rocket companies dismissing what SpaceX is doing as no big deal. You can keep arguing semantics while they keep putting actual satellites and people into orbit every month.
I use ChatGPT every day to solve real problems as if it’s my assistant, and most people with actual intelligence I know do as well. People with “experience in the field”, in my opinion, can often get a case of sour grapes that they internalize and project with their seeming expertise, and go blind in order to preserve some sense of calm and avoid reality.
ChatGPT cannot reason from or apply its knowledge - it is nowhere near AGI.
For example, it can describe concepts like risk neutral pricing and replication of derivatives but it cannot apply that logic to show how to replicate something non-trivial (i.e., not repeating well published things).
The domain is the domain of protein structure, something which potentially has gigantic applications to life. Predicting proteins may yet prove more useful than predicting text.
“Predicting proteins”? I’m a biologist and I can assure you knowing the rough structure of a protein from sequence is nowhere near as important to biology as everyone makes it out to be. It is Nobel prize worthy to be sure but Nobel prizes are awarded once a year not once a century.
Except it's not, because they gave it away without any kind of commercialization. It's possible to give something away for free in some context and still have it be a product (Stable Diffusion is doing quite a bit of that, though it's very unclear if they’ll be able to do it sustainably), but AlphaFold doesn’t seem to be an example. It seems to be an example of something cool they did that they had no desire to make into a product. Which is great! But it isn’t the same as executing on product in a space.
This is hacker news, AlphaFold doesn’t have an app, some obscure GitHub repo, a hyped up website or a bunch of VC backing, so it’s basically a waste of time.
Numerous individuals have since transitioned away from Google, with reports suggesting their growing dissatisfaction as the company appeared indecisive about utilizing their technological innovations effectively.
Moreover, it has been quite some time since Google successfully developed and sustained a high-quality product without ultimately discontinuing it. The organizational structure at Google seems to inadvertently hinder the creation of exceptional products, exemplifying Conway's Law in practice.
Generative AI in its current state is still a very new area of research with many issues, including hallucination, bias and legal baggage. So for the first few versions we are looking at many new startups like OpenAI, Stability, Anthropic, etc. It is yet to be seen if any of the new breed of startups actually starts to make sizeable revenue. But again, there is nothing defensible here unless all the major labs stop publishing papers.
Uh, you snipped in the middle of a clause so you could argue against something it didn’t say.
Here’s the whole thing (leaving out a parenthetical that isn’t important here):
“Google has spectacularly, so far, failed to execute on products […] for generative AI”
You listed a bunch of products in other domains, some of which are the reasons why it has institutional incentives not to push generative AI forward, even if it also stands to lose more if someone else wins in it.
When did anyone realize that generative AI was actually a product with wide consumer appeal? Or how many use cases there were for it as an API service? I'd say it wasn't really obvious until around Q4 last year, maybe Q3 at the earliest.
That's a pretty short time ago. So it seems that so far it hasn't really been a failure to execute, but more a problem of product vision, or of reading the market right, that led to not even attempting actual products in this space. That's definitely a problem, but not one that's particularly predictive of how well they'll be able to execute now that they're actually working on products.
The hardware costs alone of running something like GPT 3.5 for real time results is 6-7 figures a year. By the time you scale for user numbers and add redundancy... The infra needs to be doing useful work 24/7 to pay for itself.
It's more than possible Google knows exactly what it can do, but was waiting for it to be financially viable before acting on that. Meanwhile Microsoft has decided to throw money at it like no tomorrow - if they corner the market and it becomes financially viable before they lose that it could pay off. That is a major gamble...
> The hardware costs alone of running something like GPT 3.5 for real time results is 6-7 figures a year.
Can you unpack your thinking there? Even at 5% interest for ownership costs to be six figures a year you're talking about millions of dollars in hardware. Inference is just not that expensive, not even with gigantic models.
To the extent that there is operating cost (e.g. energy)-- that isn't generated when the system is offline.
I don't know how big GPT 3.5 is, but I can _train_ LLaMA 65B on hardware at home and it is nowhere near that expensive.
That's 8 × $200k GPUs, plus all the other hardware and power consumption, for one instance. You could run it on cheaper hardware, but then you'd get nowhere near realtime output, which is required for the majority of use cases not already handled well by much smaller models.
Even if Google/Microsoft are getting the hardware at a 50% reduction (bearing in mind these are already not consumer prices) it gets to $1mn in hardware alone - again for a single instance that can handle one user interacting with it at a time.
It makes a lot of the bespoke usecases people are getting excited about (i.e. anything with data privacy concerns) far from financially viable.
If you want a dedicated instance of full-capability ChatGPT, for example (32K context), OpenAI are charging $468k for a 3-month commitment / $1,584k for a year.
You can purchase 80GB A100s right now for about $12.5k on the open market. I think the list price is $16k. I don't know what discount the big purchasers see, but 30% should be table stakes (which probably explains that $12.5k price); 50% for the big boys wouldn't be at all surprising to me based on my experience with other computing hardware.
So under the assumption that 8 80GB gpus are required, we're talking about a somewhat more than $100k one time cost (for 8x 80gb A100 plus the host) plus power, not 6-7 figures annually. Huge difference!
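For whatever it's worth, here is that back-of-envelope arithmetic written out; every price and power figure below is an assumption picked for illustration, not a quote:

    # Back-of-envelope version of the cost comparison above.
    # Every number here is an assumption for illustration, not a quote.

    GPU_PRICE = 12_500          # assumed open-market price per 80GB A100, USD
    NUM_GPUS = 8                # assumed count needed to serve one large model
    HOST_AND_MISC = 25_000      # assumed chassis, CPUs, RAM, networking
    NODE_POWER_KW = 6.5         # assumed draw of an 8-GPU node under load
    ELECTRICITY_USD_KWH = 0.10  # assumed industrial electricity rate

    capex = NUM_GPUS * GPU_PRICE + HOST_AND_MISC
    annual_power = NODE_POWER_KW * 24 * 365 * ELECTRICITY_USD_KWH

    print(f"one-time hardware: ~${capex:,.0f}")            # ~$125k, one time
    print(f"power if run 24/7: ~${annual_power:,.0f}/yr")   # ~$5.7k/yr

Under these assumptions you end up with a one-time six-figure cost plus a small power bill, not six to seven figures every year, unless you're paying cloud rental rates instead of buying.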
Evaluating it in a latency limited regime but without enough workload to enable meaningful batching is truly a worst case. I admit that there are applications where you're stuck with that, but there are plenty that aren't.
Anyone in that regime should try to figure out how to get out of it. E.g. concurrently generating multiple completions can sometimes help you hide latency, at least to the extent that you're regenerating outputs because you were unhappy with the first sample.
> that can handle one user interacting with it at a time.
That bit I don't follow. The argument given there is without batching. You can do N samples concurrently at far less than N times the cost.
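A toy cost model (numbers made up) of why N concurrent sequences cost far less than N times as much: low-batch decoding is dominated by streaming the weights through memory once per step, a cost that every sequence in the batch shares.

    # Toy model: per decoding step, the weights are read once regardless of
    # batch size; each extra sequence only adds a small amount of compute.
    WEIGHT_STREAM_MS = 40.0    # assumed fixed cost per step (memory-bound)
    PER_SEQ_MS = 1.0           # assumed marginal cost per sequence per step

    def ms_per_token_per_sequence(batch: int) -> float:
        return (WEIGHT_STREAM_MS + PER_SEQ_MS * batch) / batch

    for b in (1, 4, 16, 64):
        print(b, round(ms_per_token_per_sequence(b), 2))
    # 1 -> 41.0, 4 -> 11.0, 16 -> 3.5, 64 -> 1.62
    # i.e. 64 concurrent sequences for ~2.5x the per-step cost of one, not 64x.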
There's definitely something amiss. Maybe we're just not seeing the whole picture, but Google still has the best potential out there. Not only did vast, fundamental research come out their door (presumably there's more), but they also have their own compute resources and an up-to-date copy of internet.zip, gmail.zip and youtube.zip to train on, versus the comparatively small and stale stuff OpenAI trained on (like Common Crawl etc.). What gives, Google? Get on it!
edit: I forgot all about google_maps.zip / waze.gz and all the juicy traffic data coming from android.. which probably already relies heavily on AI
The difference between OpenAI and Google is that the latter's ethical concerns with AI are more deeply held. Google gave us the Stochastic Parrots paper[0] - effectively a very long argument as to why they shouldn't build their own ChatGPT. OpenAI uses ethics as a handwave to justify becoming a for-profit business selling access to proprietary models through an API, citing the ability to implement user-hostile antifeatures as a deliberate prosocial benefit.
To be clear, Google does use AI. They use it so heavily that they've designed four generations of training accelerators. All the fancy knowledge graph features used to keep you from clicking anything on the SERP are powered by large language models. The only thing they didn't do is turn Google Search into a chatbot, at least not until Microsoft and OpenAI one-upped them and Google felt competitive pressure to build what they thought was garbage.
And yes, Google's customers share that belief. Remember that when Google Bard gets a fact about exoplanets wrong, it's a scandal. When Bing tries to gaslight its users into thinking that time stopped at the same time GPT-4's training did, it's funny. Bing can afford to make mistakes that Google can't, because nobody uses Bing if they want good search results. They use Bing if they can't be arsed to change the defaults[1].
[0] Or at least they did, then they fired the woman who wrote it
[1] And yes that is why Microsoft really pushes Bing and Edge hard in Windows.
It was not some anecdotal fact that Bard got wrong; it happened during their official public demo. It was a "scandal" because it showed Google was indeed unprepared and had no better product; not even bothering to fact-check their own demo beforehand was the cherry on top.
Ethics is a false excuse, because rushing that out shows they never really cared either. It was just PR, and their bluff was called.
Also, I skimmed the Stochastic Parrots paper and I'm unimpressed. I'm unfamiliar with the subject, but many points seem unproven/political rather than scientific, with a fixation on training data instead of studying the emergent properties, and many opinions, notably regarding social activism. But maybe it was already discussed here on HN. Edit: found here: https://news.ycombinator.com/item?id=34382901
> I'm unfamiliar with the subject, but many points seem unproven/political rather than scientific
You're exactly the kind of person Stochastic Parrots was trying to warn us about - you bought into the AI hype.
AI models are extremely sensitive to the initial statistical conditions of their dataset. A good example of this is image regurgitation in diffusion models: if you include the same image n times in the data set, it effectively gets n times as many training updates, and is far more likely to be memorized. Stable Diffusion's propensity to draw bad copies of the Getty Images logo is another example; there's so many watermarks and signatures in the training data that learning how to draw them measurably reduces loss. In my own AI training adventures[0], the image generator I trained loves to draw maps all the time, no matter what the prompt is, because Wikimedia Commons hosts an absolutely unconscionable number of them.
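A tiny simulation of that duplication effect (entirely made-up data): an image that appears n times in the dataset gets roughly n times as many gradient updates per epoch, which is exactly the memorization risk described above.

    # Made-up dataset: 1,000 unique images plus one image duplicated 50 times.
    import random
    from collections import Counter

    dataset = [f"unique_{i}" for i in range(1000)] + ["duplicated"] * 50
    updates = Counter()

    random.seed(0)
    for epoch in range(10):
        random.shuffle(dataset)
        for example in dataset:      # one gradient update per example seen
            updates[example] += 1

    print(updates["unique_0"], updates["duplicated"])   # 10 vs 500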
Stochastic Parrots is arguing that we can't effectively filter five terabytes[1] of training set text for every statistical bias. Since HN is allergic to social justice language, I'll put it in terms that are more politically correct here: gradient descent is vulnerable to Sybil attacks. Because you can only scrape content written by people who are online, the terminally online will decide what the model thinks, filtered through the underpaid moderators who are censoring your political opinions on TwitBook.
Of course, OpenAI will try anyway[2]. The best they've come up with is to use RLHF to deliberately encode a center-left bias into a language model that otherwise would be about as far-right as your average /pol/ user. This has helped ChatGPT avoid the fate of, say, Microsoft's Tay; but it is just sweeping the problem under the rug.
The other main prong of Stochastic Parrots is energy usage. The reason why OpenAI hasn't been outcompeted by actual open AI models is because it takes shittons of electricity and hardware to train these things. Stable Diffusion and BLOOM are the biggest open competitors to OpenAI, but they're being funded purely through burning venture capital. FOSS is sustainable because software development is cheap enough that people can do it as volunteer work. AI training is almost the opposite: extremely large capital costs that can only be recouped by the worst abuses of proprietary software.
[0] I am specifically trying to build a diffusion model trained purely on public domain images, called PD-Diffusion.
[1] No problem. We are Google. Five terabytes is so little that I've forgotten how to count that low.
[2] When filtering the dataset for DALL-E 2, OpenAI found that removing porn from the training set made the image generator's biases far worse. i.e. if you asked for a stock photo of a CEO, pre-filter DALL-E would give about 60% male, 40% female examples; post-filter DALL-E would only ever draw male CEOs.
>> To be clear, Google does use AI. They use it so heavily that they've designed four generations of training accelerators.
This +100
Somehow there is a perception that chatbots are the only example of AI research or product that matters, and that every AI organisation's ability will be judged by its ability to create chatbots.
LLMs are the end-game for almost all NLP and CV tasks. You can freely specify the task description, input and output formats, unlike with discriminative models. You don't need to retrain, don't need many examples, and most importantly, they work on tasks the developers of the LLM were not aware of at design time ("developer-aware generalisation"). LLMs are more like new programming languages than applications; pre-2020 neural nets were mostly applications.
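A sketch of that difference; llm() below is just a stand-in for whichever completion API or local model you use, and the task lives entirely in the prompt:

    # With an LLM, the task description, input format and output format are all
    # part of the prompt, so a new task needs a new prompt, not a new model.
    def llm(prompt: str) -> str:
        raise NotImplementedError("plug in any completion endpoint or local model")

    def sentiment(review: str) -> str:
        return llm("Label this review as positive or negative.\n"
                   f"Review: {review}\nLabel:")

    def extract_dates(text: str) -> str:
        return llm("List every date below, one per line, as YYYY-MM-DD.\n"
                   f"Text: {text}\nDates:")

    # A pre-2020 discriminative model is the opposite: a fixed mapping to a
    # fixed label set chosen at training time, which can't be retargeted by
    # changing the instructions at inference time.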
> ...nobody uses Bing if they want good search results.
Sadly, I think I'd argue that nobody has good search results anymore. Google's results have been SEO'd to the hilt and most of the results are blog spam garbage nowadays.
> The only thing they didn't do is turn Google Search into a chatbot,
No, they turned google search into what it is now.
For me, trying google bard was an instant reminder of the change in behavior in google search from 15 years ago to today.
We used to have a search where you could give obscure flags to Linux commands and find their documentation or source code. Today we have a Google search that often only tells you how some Kardashian or recent political drama sounds like the technical term you were searching for.
GPT4 has some of the same "excessively smart" failure modes, but it (and GPT3.5 for that matter) is so much more useful than bard (which hits the user with "I can't do that dave" 100x more often than chatgpt's already excessive behavior) that they're a useful addition to the toolbox. Too bad the toolbox hardly includes plain search anymore.
OpenAI releasing imperfect products is exactly what they said they would do. We need society to understand what the state and risks are. The 6-month-wait shitstorm is what happens when society gets the merest glimmer of the potential. I applaud them for this, rather than focusing on protecting their brand.
Despite what people often write and believe here, the access controls on PII data at Google are incredibly strict. You can't just arbitrarily train on people's personal data. I know, because when I was there, working on search backend data mining, in order to get access to anonymized search and web logs, I had to sign paperwork that essentially said I'd be taken to the cleaners if I abused the access.
> What gives, Google? Get on it
It's a very difficult decision to intentionally destabilize the space you are the leader in, for all the reasons you can imagine. In a sense, Google needed someone else with nothing to lose to shake up the space. How they execute in the new reality is yet to be seen. The biggest challenge they may have right now isn't technological, but that "ChatGPT" has become a sort of brand, like Kleenex and well, Google.
Meh, people would much prefer to be typing their prompts into a Google search box than opening a separate GPT app. I doubt the real issue here is a marketing one. Despite ChatGPT's massive growth numbers the market is pretty immature; it's still very much open and not yet decided.
Many markets had early leaders who got stomped by later entrants.
Social space vs enterprise space. How many companies would want llms integrated with their corporate data, but need trust about the data not being leaked?
Microsoft and Google both have the capability and the trust to make this available. When corporates start paying for LLMs, per user or per application, Google and Microsoft are the two companies in the best position.
All other industries will be users paying for LLM model access.
1. LLMs don't have a lucrative business model that Google needs.
2. The quality of their language model is really lacking as of now.
You fix 1 and 2, ChatGPT's branding is nothing. Google is the biggest advertisement machine in the world and they can market the hell out of their product. Just see how Chrome gained ground on Firefox for example.
Google is still used several times more than ChatGPT, and if you resolve 1 and 2, Google will make its money and its users will have no incentive to go to ChatGPT.
> Despite what people often write and believe here, the access controls on PII data at Google are incredibly strict. You can't just arbitrarily train on people's personal data.
And yet Google is the largest online advertiser in the world. And yet, GMail used to (I don't know if it still does) push ads into people's inboxes.
I have as much belief in their PII controls as in their "Don't be evil" motto.
--- quote ---

When you open Gmail, you'll see ads that were selected to show you the most useful and relevant ads. The process of selecting and showing personalized ads in Gmail is fully automated. These ads are shown to you based on your online activity while you're signed into Google. We will not scan or read your Gmail messages to show you ads.
...
To opt-out of the use of personal information for personalized Gmail ads, go to the Ads Settings page
--- end quote ---
They literally train their datasets on people's personal data.
Google could stop sending traffic to webmasters and pivot to directly providing answers based on scraped data long, long ago, but Google knew webmasters would be up in arms over such a blatant bait and switch taking away their traffic and revenue.
OpenAI subverted this by riding on the “open” part of their name at first—before doing a 180-degree turn and selling out to Microsoft.
They could just as easily show ads in answers, the advertisers wouldn’t care. In fact, I can see how a major advertiser would rather prefer that an ad is shown in Google’s own trusted UI rather than on some random website next to who knows what sort of content (that motivation is behind YouTube’s “demonetization”).
I was referring to Google's own UI, but I agree with you. I am wondering whether all the going back and forth when you don't find what you're looking for increased the ad numbers (even if only short term).
The announcement felt cautious and political, like they are running for technological ruler of the world and not a company trying to make money. This is probably why they are not going to get very far against their competitors, despite having so much potential. They care too much about what the EU and governments everywhere think of them now. They are no longer a profit-making entity that disrupts and pushes the rules. They are part of maintaining the status quo.
7/8 of the transformer authors are gone, BERT author is at OAI, two first authors of T5 are gone, imagen team left to make their own startup, etc. etc.
95% of those people have left Google because the ethics and safety teams prevented them from releasing any products based on their research. We have those ex-Googlers to thank for ChatGPT, Character.ai, Inceptive, ... which you'll notice are not Google products but rather competitors.
I, for one, appreciate a megacorp purposely sacrificing revenue when they're not confident that the negative externalities of that revenue would be minimized.
Google could have built a search engine where paid results were indistinguishable from organic results, but the negative externalities of that were too great.
Google could have remained in China, but the negative externalities of developing and managing a censorship engine were too great.
Google could have productized AI before the risks were controlled, but they sacrificed revenue and first-mover advantage to be more responsible, and to protect their reputation.
This behavior is so rare, it's hard to think of another megacorp that would do that.
Google's far from perfect, they've made ethical lapses, which their competitors love to yell and scream about, but their competitors wouldn't hold up well under the same scrutiny.
> Google could have built a search engine where paid results were indistinguishable from organic results, but the negative externalities of that were too great.
Have you not used Google search in the past 5 years?
It sounds like you were either not around or didn't use Google in the early 00s. Back then, there was a very clear, bright color difference between ads and organic search results: a yellow bar at the top with at most two ads, and a side bar. But organic results were easy to identify and took up the majority of screen real estate.
Now, when I search any even slightly commercial term on mobile, about the entire first page and a half of results are ads. Yes, they're identified with a "Sponsored" message, but as you can see from the "evolution" link the other commenter replied with, this was obviously done to make the visual distinction between ads and organic results less clear.
The reason I'm thrilled about Google finally getting competition in their bread-and-butter is not because I want them to fail, but I want them to stop sucking so bad. For about the past 10 or so years Google has gotten so comfy with their monopoly position that the vast majority of their main search updates have been extremely hostile to both end users and their advertisers as Google continually demands more and more of "the Google tax" by pushing organic results down the page.
In the meantime I've switched to Bing, not because I think Microsoft is so much better, because I desperately want multiple search alternatives.
To quote from the above, here's what they said in the beginning: "we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers"
The tweet you linked was from an outage that lasted 30 minutes, it's pretty disingenuous of you to try and pass that off as status quo.
I do agree, however, that the labeling has gotten less prominent over time. I don't, however, agree that it has become subtle enough to be considered indistinguishable from search results.
> I don't, however, agree that it has become subtle enough to be considered indistinguishable from search results.
This is what it looks like on mobile. A tiny "sponsored" text is the only thing that distinguishes ads from search results: https://imgur.com/a/WOk4NdR
A product based on no fundamental innovation is also a path to irrelevance. If there were anything even remotely defensible in GPTs, OpenAI would not have sold 50% of the company for $10B. It is only a matter of time, plus a large amount of human-in-the-loop effort, before any large transformer model reaches the same space, as recent models like Alpaca and Vicuna are showing. The only thing this whole episode has done is ensure no lab will open source any major breakthroughs anymore.
> If there were anything even remotely defensible in GPTs, OpenAI would not have sold 50% of the company for $10B.
If you need 10B dollar to develop your product you have to find it from somewhere. Training an LLM is not something you can do in a garage, bootstrapped.
This is not a typical VC-backed company. According to the HN crowd, this is the one company that can execute on AI and challenge Google and all the other trillion-dollar AI labs. In my opinion, they themselves are aware that they are a one-trick pony. Given how astute a VC Sam Altman is, if there were anything remotely innovative and defensible about the product, they would never have done that.
Raising money is OK. But selling 50% of your company for $10B when you apparently have a trillion-dollar defensible business doesn't make any business sense.
You need money to turn it into a trillion dollar business... Selling equity is how you raise that money.
If you were sure it will be a $1T business you have more reason to sell equity and accelerate the growth of your company, because you know the remaining 50% is going to be so valuable.
These are capitalist enterprises here. I'd argue that product is almost all that matters. Sure someone has to innovate but the final product that can be sold is what keeps people and companies relevant.
I suspect recency-bias may be tripping people up: LLMs and ChatGPT are not the final word in AI, and there is no reason for Google to bet the farm on them.
I wouldn't bet against Google DeepMind originating the next big thing; at the very least, their odds are higher than OpenAI's.
Edit: this may yet turn out to be a Google+ moment, where an upstart spooks Google into thinking it is fighting an existential battle but winds up okay after some major missteps that take years to fix (YouTube comments as a real-name social network. Yuck)
You could say the same about Xerox in the late 70s. And they conclusively showed that they couldn’t execute and squandered all of their amazing original research. Looking at how laughably bad Bard is, Google has a long way to prove they aren’t Xerox 2.0 at this point. I’m amazed that Sundar hasn’t been pushed out yet by Larry and Sergey.
This thread is full of people saying that what Xerox did was some terrible mistake, but I think that it was much better that they could afford to do all this research which spawned a massive industry as a result than had they become this massive monopoly which controlled everything.
If Google spends billions of its ad money doing original research that spawns a new industry with thousands of companies, that would seem to be a great result to me.
That might be true on a societal level, but is small solace to XRX shareholders, not to mention the many researchers who contributed these brilliant creations only to see them exploited by others while their own company just ignored them and let them die on the vine.
You're reiterating their point. Yeah, Google has competent AI people but that means nothing for their own success if they can't execute. OpenAI has proven that.
> Yes the team which literally created transformer and almost all the important open research including Bert, T5, imagen, RLHF, ViT don't have the ability to execute on AI /s
Yet Google does not have a slam-dunk product despite so many great research results. This looks like a gross failure of the CEO, especially given that he's been chanting AI First for the past few years.
I would go on about how much execution matters, but it's not just about execution, cause ChatGPT is actually a better AI than anything Google has put out so far. So unless Google is hiding something amazing...
I'm sure you're right but the only note attached to the author list is in the opposite direction – Tom B Brown has an asterisk with "Work done while at OpenAI".
The original transformer team very much has executed on making successful implementations of transformers ... just not for Google. Clearly something went a bit wrong at Google Brain in 2017.
Every single member of the research team that invented the transformer architecture has left Google, either to go to OpenAI or to start their own companies (Character.ai, Anthropic, Cohere).
It might help to reflect on what the upsides of this have been for OpenAI, re execution.
On the face of it, execution is often all that matters. FB v myspace, AMD v Intel (eventually), Uber v Lyft, MS v Apple (pre 2001), Apple v MS (post 2001) etc.
I think in this context “execute” implies “create traction with a real-world product”. Given that even politicians and comedy shows are talking about ChatGPT, I think it’s fair to acknowledge that Google is lacking in this area.
"Sun Microsystems literally invented Java and has done a ton of open research on RISC, how are they not able to execute as those technologies are exploding"
The outside world is looking only at recent AI innovation, forgetting the entire journey of the last decade. If there were any remotely defensible technology in OpenAI, they wouldn't have sold 50% of their company for $10B.
Yes, we saw your other comment stating the same thing.
You are doing a whole lot of tea-leaf reading with basically zero visibility, which I can't really reconcile with how absolute you're being with your language.
You think the researchers who created transformers are going to become a commercial product team and be good at execution?
Google is great at research, one of the best companies in the world. They are also not very good at product. It will not be possible for Google to research their way out of the business problems they’re facing. They may win, but if so it will be because they get good at product, not because the transformers research team comes up with something even more amazing.
PPO for RL. Plus, a lot of the people behind the innovations you mention are now at OAI - Lukas, who was one of the Transformer authors, the BERT paper author, the chain-of-thought prompting author, etc.
But yeah, their strong point is execution and doing the hundreds of little things that make the model do well, and it turns out that's more important than "novel" ideas.
Even in ‘07, Apple had a track record for doing things right, not doing things first.
Current-day Google churns out sterile, uninspiring products, and kills them.
If your argument is “this company is going to act out of character and do something innovative!” then…yeah, sure. That’s a good way to be right, sometimes. Just don’t let everyone see the majority of the time where you’ve been wrong.
Because of course Xerox PARC, which literally invented the GUI, desktop computer, the mouse, freaking Ethernet, etc executed the commercialization of all their innovation flawlessly....
Being able to produce research is a very different skill from being able to produce a very successful product. We have not seen google do that very successfully for over a decade.
DeepMind clearly is a household name. Think of AlphaGo or AlphaFold, those were legendary. Google Brain as well is a household name. Think of the Transformer, or BERT. Those are legendary, as well.
RLHF wasn't introduced by OpenAI. And GPT is a pretty standard transformer, no? Yes, they did it at scale, and that speaks volumes about their production skills, but the OP was asking about research.
GPT is a specific instantiation of a transformer, and doing next-token prediction was an OpenAI thing. Transformers were a big part of it, but GPT was definitely proposed by OpenAI.
There are plenty of products that launch on top of tools and frameworks and end up worth far more than the underlying technology will ever be. OpenAI is creating products; DeepMind was creating tools.
It's not a matter of skill as much as objective. And DeepMind would still be starting from zero if they decided to pivot to products.
"this action will hamstring DeepMind in bureaucracy."
I'm sorry but I fail to see the problem with this. DeepMind has made very impressive demos and papers, but they have yet to add one dollar of revenue to Google's bottom line. Further they have drained billions from Google.
Google has to, somehow, get completely out of the research paper game and into the product game.
Papers have to have little/no impact on perf going forward. Other than a small windfall to goodwill they are a misalignment between the company's goals and those of the employees.
Products Google, products. Unless Larry and Sergey want to turn Google into a non-profit research tank. Which would be fine, but likely with substantially lower headcount. Even they aren't that wealthy.
LOL, if you look at the amount of money Google has poured into DeepMind and how much they got back for their investment, it's laughable.
Things like the Wavenet "contributions" are just Demis paying lip service to the fact that once in a while Google was nudging them to produce something, anything really that was actually useful.
Google putting the extreme amounts of easy dollars they have into things that aren't instantly profitable is very much what the founders said they'd do though
This was paid by Google for unspecified research services. But the way it’s accounted for it’s likely that it was based on some legitimate contribution. It is unlikely it would be structured this way if it was just corporate support.
DeepMind has public financial filings and you can go read the exact language they use to describe the revenue they generate.
> DeepMind has made very impressive demos and papers, but they have yet to add one dollar of revenue to Google's bottom line. Further they have drained billions from Google.
You could say the same about OpenAI and Microsoft, they drained money for years until about 6 months ago when suddenly the partnership started to pay back big style.
OpenAI is still massively unprofitable and MSFT is (rightly IMO) going to invest way more money in them so it’s definitely still a drain. A modest drain relative to MSFTs overall resources
As much as I'd love an OpenAI-style API from Google, I'm not expecting that. It will probably be "profitable" to them in the unseen backend making Search, Google Assistant, etc better. I've been playing with Bard a lot and it's pretty good, but OpenAI's API offering just makes them so much more useful to me since I can use whatever app I want (or even write my own) to consume the product, and it's easy for me to see the value for my dime.
"Papers have little/no impact on perf" - this is a ridiculous and false claim.
Almost every single advancement in any field has come from academia. Sure, it may not be recognized as such by the general public because they aren't experts in the area, but the fact remains that academia is pretty much the only way to progress as a society. Companies just take what academia gives them and make products out of it for their own profit (not to completely trivialize that; it still comes with its own set of challenges), but the private sector is completely misaligned with making real progress towards hard problems. DeepMind is one of the examples that continues to show this despite being a 'corporate entity': its largest advancements come from employing (i.e., directing its excess capital toward) professors at universities who focus on their research.
> Almost every single advancement in any field has come from Academia
This sounds like you need far more evidence. If you say academia as the institution where you share papers, sure but then that’s just a sharing mechanism. Almost like saying all advancements came out of Internet because arxiv is where research is shared.
If you want to say professors and universities have been heralding AI advancement, that has not been true for at least 10 years, possibly more. The moment industry started getting into academia, academia couldn't compete and died out. Even Transformers, the founding paper of the modern GPT architectures, came out of Google Research. In vision, everything from ResNet and Mask R-CNN to Segment Anything came out of Meta / Microsoft. The last great academic invention might have been dropout, and even that involved Apple. After that I fail to see academia coming up with a single invention in ML that the rest of the community instantly adopted because of how good it was.
Huh? None of this is true for a lot of core recent work. A very obvious example is transformers, which did not come out of academic research (or DeepMind for that matter) at all.
I feel like Google crossed some point about a decade ago where they stopped making innovative stuff and started focusing on squeezing revenue out of everything else. A bit like when Carly turned HP into a printing/ink racket. Both the decline of Google Maps and the inability of Google to filter noise out of their search results are strong indicators of this for me. Scrambling to field a competing product to maintain relevance in this emerging market would be consistent with this assessment as well. The old Google would have fielded the product first because it was useful, but the current Google seems to do it because they don't want to lose revenue.
I don't know if said bureaucracy is a blessing or a curse given Google's track record in product management. If pressed I would bet towards the curse option.
Different people excel at different types of work (particularly where deep experience is the most significant contributor to performance). Tasking academic researchers with building product is the pathway to hell.
The existing, top-performing product teams at Google should be taking that research and building products around it. If Google has any top-performing product teams left, that is...
Is this an influx of resources, or consolidation and cutbacks? I read it as: Google used to have two different AI research teams, and now they have one fewer than they used to.
This reminds me of Nest. When it was separate, it was shipping great hardware and OK software. Then Google appended "Google" in front of it, creating "Google Nest" and kicked off the slow Google Hug of Death™.
The first casualty was Nest shutting down its APIs, cutting off an ecosystem of third party integrations.
The next casualty was replacing the Nest app with the Google Home app. I stopped following Nest after that because I sold all the Nest stuff I owned and replaced it all with HomeKit.
It's astounding how Google keeps doing this, and its shareholders seem to go along with it. I agree, given their track record, it's hard to be optimistic about anything Google slaps their name in front of.
What a shortsighted statement for a race that has barely gotten out of the gate. But if any one company should be panicking, it's OpenAI, at the thought of losing their minimal lead and getting crushed once the company that invented most of the technology they use puts significant resources behind its AI initiatives.
Google Search had an outage yesterday. Google just underwent its first round of layoffs ever which definitely affects internal morale and makes all employees aware of their company's mortality. Google's CEO was in the news last week for hiding communications while under a legal hold. Google stock tanked with the rushed demo of Bard. And, even if all those things weren't true, Google has continually failed to establish revenue streams independent from ads and continually abandons products that don't meet their expectations. Consumer confidence in new Google product announcements is lower than any other major tech company - the default assumption is that the product will be pulled months/years later.
Microsoft is giving their full support to OpenAI through their 49% partnership. $13B investment compared to Google buying DeepMind for $500M and investing $300M in Anthropic. Microsoft has good working agreements with the US government, a long history of unreasonable support for their flagship products, clawed their way back to being one of the most valuable companies in the world by finding diverse revenue streams, and, frankly, comes across as the wise adult in the room given they already had their day in the sun with legal battles.
I agree completely that if there continue to be marked revolutions in AI that invalidate current SOTA then those innovations are likely to arise from Google's research labs, but from an execution standpoint I have nothing but concerns for Google. It's crazy that I feel they need a second chance in the AI revolution when LLMs originated from inside their org just a few years ago. And it's not like they don't feel similarly - there've been countless articles about "Code Red" at Google as they try to rapidly adjust their strategy around AI.
I think OpenAI has a wider lead than people are acknowledging. It's like everyone was forced to show their AI hand the last couple of months, in an attempt to appease shareholders, and it seemed like a fair fight until GPT-4 hit the ground running. Now we're looking at agents and multi-modal support on top of $200M/yr revenue, while everyone else has no business plan and has yet to announce any looming upgrades. At a certain point, first-mover advantage compounds, the foremost AI app store becomes established, and people building commercial products become entrenched.
Yeah, fair, the way I expressed myself sounded stupid. What I meant to say was something like: "I don't believe that DeepMind is openly making use of LLM technologies. They're known for their neural networks operating at a pixel-level rather than a token-level. I don't know which of these approaches has more long-term commercial viability."
> This does not seem unexpected. Google is panicked about losing the AI race and pushing resources into DeepMind is a logical step to mitigating those fears.
Google trying to "win" the super-human AGI race is even more flawed than a nation trying to "win" the nuclear arms race.
At least with a nuclear arms race we all die quickly. Super-human AGI will probably just bring about unthinkable levels of suffering before finally killing us all.
And here I thought that Google would achieve AI supremacy because of all the data they have been vacuuming for decades, turns out they haven't even thought to utilize it?
How did they drop the ball so hard? OpenAI has been around for less than a decade and as a smaller team with less resources was able to make a better product.
Though this is usually how it goes - big successful companies begin to bend towards regulatory capture after having their period of upstart growth and disruption. They make as much money as possible for shareholders on their cash cow, and the management culture's primary objective is to make sure this is not disturbed.
Think about how many decades head start IBM had to perfect search, but search wasn't their core competency.
Delivering advertisements is Google's core competency.
Sundar's email mentions something critical - Jeff Dean is going to be the Chief Scientist in DeepMind, and coordinate back to Sundar. This is a big deal; that move tells you that Google is taking being behind on public-facing AI seriously, Dean is an incredibly valuable, incredibly scarce resource.
If we wind way back to Google Docs, Gmail and Android strategy, they took market share from leaders by giving away high quality products. If I were in charge of strategy there, I would double down on the Stability / Facebook plan, and open source PaLM architecture Chinchilla-optimal foundation models stat. Then I'd build tooling to run and customize the models over GCP, so open + cloud. I'd probably start selling TPUv4 racks immediately as well. I don't believe they can win on a direct API business model this cycle. But, I think they could do a form of embrace and extend by going radically open and leveraging their research + deployment skills.
Jeff Dean is clearly one of the greatest software developers/engineers ever but there isn’t much evidence that he is a brilliant ML researcher
And indeed Google AI has achieved very little product-wise during his time leading it. That kind of suggests he is a big part of the bureaucratic challenges they have faced.
Of course he is someone any technology organisation would want to have as a resource. But probably not as chief scientist or CEO of an ML company, based on the available evidence.
>Jeff Dean is clearly one of the greatest software developers/engineers ever but there isn’t much evidence that he is a brilliant ML researcher
Google has an oversupply of brilliant ML researchers. What they need is an engineer who sees the applications of the technology so it can be turned into a product; someone who can bridge the gap between the R&D team and the bureaucracy.
Want an idea for a stupid product? Input: a description of a girl, her hobbies, some minor flaws. Output: a poem. I have been using Vicuna quite successfully for that purpose.
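Roughly what the prompt looks like; generate() stands in for a call to a local Vicuna (or any instruction-tuned model), and the template wording is just my guess at it:

    def generate(prompt: str) -> str:
        raise NotImplementedError("wire this up to your local model of choice")

    def poem_prompt(name: str, hobbies: list[str], flaws: list[str]) -> str:
        return ("Write a short, affectionate poem of four stanzas.\n"
                f"It is about {name}, who loves {', '.join(hobbies)} "
                f"and is endearingly {', '.join(flaws)}.\n"
                "Keep it playful rather than saccharine.\nPoem:")

    print(generate(poem_prompt("Ada", ["bouldering", "baking"], ["always late"])))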
He had the perfect balance of being a legendary engineer and an ML researcher.
Can't emphasize enough how much rigorous engineering practice can accelerate research delivery. It is THE key to a productive research-oriented team.
Good research engineers are underrated, and very difficult to find.
Furthermore, Jeff is not a great (or even good...) manager/director/leader.
There were a lot of internal and external dramas because of his leadership, that he failed to address.
How often do you hear about dramas involving other Chief Scientists at other, comparably sized companies?
He should stay a Fellow, in a "brilliant consultant" role.
He absolutely should have gotten rid of the troublemaker. Many folks used this publicity to leave for higher positions or higher pay (which is very common at google) but made it look like Jeff Dean was the problem.
That's my bias as well. To me, it seems like every day someone releases a new AI toy, but the thing you would actually want is for a real software engineer to take the LLM or whatever, put it inside a black box, and then write actually useful software around it. Like, off the top of my head, LLM + Google Calendar = useful product for managing schedules and emailing people. You could make it in a day of tinkering as a LangChain demo, but actually making a real product that is useful and doesn't suck will require good old-fashioned software engineering.
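Something like the following, where the LLM only plans and plain code executes; every function here is a hypothetical stub rather than a real Google Calendar or email API:

    import json

    def llm(prompt: str) -> str:
        raise NotImplementedError("your model endpoint of choice")

    # Hypothetical stubs -- a real product would wrap actual calendar/email APIs.
    def list_events(day: str) -> list: ...
    def create_event(title: str, start: str, end: str) -> None: ...
    def send_email(to: str, subject: str, body: str) -> None: ...

    TOOLS = {"list_events": list_events,
             "create_event": create_event,
             "send_email": send_email}

    def handle(request: str) -> None:
        plan = llm(
            "You manage a calendar. Reply ONLY with JSON of the form "
            '{"tool": "<name>", "args": {...}}, choosing a tool from '
            f"{sorted(TOOLS)}.\nUser request: {request}"
        )
        call = json.loads(plan)          # a real product validates this carefully
        TOOLS[call["tool"]](**call["args"])

The demo and the real product differ mostly in everything around that last line: validation, auth, retries, and not silently emailing the wrong person.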
Based on the multitask generalisation capabilities LLMs have shown so far, I'm kinda in the opposite camp: if we can figure out more data-efficient and reliable architectures, base language models will likely be enough to do just about anything and take general instructions. You can just tell the language model to operate directly on Google Calendar, with suitably supplied permissions, and it can do it with no integration needed.
Exactly this. There is a reasonable chance the GUI goes the way of the dodo and some large (75% or something) percentage of tasks are done just by typing (or speaking) in natural language and the response is words and very simple visual elements.
People are building toy demos in a day that are not actual useable products. It’s cool, but it’s the difference between “I made a Twitter clone in a weekend” and real Twitter.
1 - companies are deploying real products internally for productivity, especially technical and customer support, and in data science to enable internal people to query their data warehouse in natural language. I know of 2 very large companies with the first in production and 1 with the second, and those are just ones I'm aware of.
2 - you are conflating the problems of engineering a system to do a thing for billions of users (an incredibly rare situation requiring herculean effort regardless of the underlying product) with the ability of a technology to do a thing. The above mentioned systems couldn't handle billions of users. So what? The vast majority of useful enterprise saas could not handle a billion users.
From a research point of view, OpenAI hasn't really had any "big innovations". At least, I struggle to think of any published research they have done that would qualify in that category. Probably they keep the good stuff for themselves.
But Ilya definitely had some big papers before and he is widely acknowledged as a top researcher in the field.
I think the fact that there are no other publicly available systems comparable to GPT-4 (and I don't think Bard is as good) points to innovation they haven't released.
> Jeff Dean is clearly one of the greatest software developers/engineers ever
Based on what? I've heard all the Chuck Norris type jokes, but what has Jeff Dean actually accomplished that is so legendary as a software developer (or as a leader) ?
Per his Google bio/CV his main claims to fame seem to have been work on large scale infrastructure projects such as BigTable, MapReduce, Protobuf and TensorFlow, which seem more like solid engineering accomplishments rather than the stuff of legend.
Seems like he's perhaps being rewarded with the title of "Chief Scientist" rather than necessarily suited to it, but I guess that depends on what Sundar is expecting out of him.
When I joined Brain in 2016, I had thought the idea of training billion/trillion-parameter sparsely gated mixtures of experts was a huge waste of resources, and that the idea was incredibly naive. But it turns out he was right, and it would take ~6 more years before that was abundantly obvious to the rest of the research community.
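For readers unfamiliar with the term, here is a minimal sketch of what sparsely gated mixture-of-experts routing looks like; toy shapes, no load-balancing loss, purely illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    d_model, n_experts, top_k = 64, 8, 2

    W_gate = rng.normal(size=(d_model, n_experts)) * 0.02
    experts = [rng.normal(size=(d_model, d_model)) * 0.02 for _ in range(n_experts)]

    def moe_layer(x):
        """x: one token's activation, shape (d_model,)."""
        logits = x @ W_gate                      # score all experts
        chosen = np.argsort(logits)[-top_k:]     # route to the top-k only
        weights = np.exp(logits[chosen])
        weights /= weights.sum()                 # softmax over the chosen experts
        return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

    out = moe_layer(rng.normal(size=d_model))
    # Parameter count scales with n_experts, but each token only pays for top_k
    # of them, which is why total size can grow to billions/trillions of weights
    # without a matching increase in per-token compute.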
As a leader, he also managed the development of TensorFlow and TPU. Consider the context / time frame - the year is 2014/2015 and a lot of academics still don't believe deep learning works. Jeff pivots a >100-person org to go all-in on deep learning, invest in an upgraded version of Theano (TF) and then give it away to the community for free, and develop Google's own training chip to compete with Nvidia. These are highly non-obvious ideas that show much more spine & vision than most tech leaders. Not to mention he designed & coded large parts of TF himself!
And before that, he was doing systems engineering on non-ML stuff. It's rare to pivot as a very senior-level engineer to a completely new field and then do what he did.
Jeff certainly has made mistakes as a leader (failing to translate Google Brain's numerous fundamental breakthroughs to more ambitious AI products, and consolidating the redundant big model efforts in google research) but I would consider his high level directional bets to be incredibly prescient.
OK - I can see the early ML push as obviously massively impactful, although by 2014/2015 we were already a couple of years after AlexNet, and other frameworks such as Theano and Torch (already 10+ years old at that point) existed, so the idea of another ML framework wasn't exactly revolutionary. I'm not sure how you'd characterize Jeff Dean's role in TensorFlow given that you're saying he led a 100-person org, yet coded much of it himself... a hands-on technical lead, perhaps?
I wonder if you know any of the history of exactly how TF's predecessor DistBelief came into being, given that this was during Andrew Ng's time at Google - whose idea was it?
The Pathways architecture is very interesting... what is the current status of this project? Is it still going to be a focus after the reorg, or too early to tell ?
Jeff was the first author on the DistBelief paper - he's always been big on model-parallelism + distributing neural network knowledge on many computers https://research.google/pubs/pub40565/ . I really have to emphasize that model-parallelism of a big network sounds obvious today, but it was totally non-obvious in 2011 when they were building it out.
DistBelief was tricky to program because it was written all in C++ and Protobufs IIRC. The development of TFv1 preceded my time at Google, so I can't comment on who contributed what.
1. What was the reasoning behind thinking billion/trillion parameters would be naive and wasteful? Perhaps parts of it were right and could inform improvements today.
2. Can you elaborate on the failure to translate research breakthroughs, of which there are many, into ambitious AI products? Do you mean commercialize them, or pursue something like AlphaFold? This question is especially relevant; everyone is watching to see if recent changes can bring Google to its rightful place at the forefront of applied AI.
> large scale infrastructure projects such as BigTable, MapReduce, Protobuf and TensorFlow
If you initiated and successfully landed large-scale engineering projects and products that have transformed the entire industry more than 10 times over, that qualifies you as a "legend".
Only if you did it at a company like Google where it's being talked about and you've got that large a user base. Inside most of corporate America internal infrastructure / modernization efforts get little recognition.
I wrote an entire (Torch-like - pre PyTorch) C++-based NN framework myself, just as a hobbyist effort. Ran on CPU as well as GPU (CUDA). For sure it didn't compete with TensorFlow in terms of features, but was complete enough to build and train things like ResNet. A lot of work to be sure, but hardly legendary.
> Only if you did it at a company like Google where it's being talked about and you've got that large a user base
Google has lots of folks who had access to a similar level of resources, and no one but Jeff and Sanjay pulled it off. Large-scale engineering is not just about writing some fancy infra code; it's a very rigorous project to convince thousands of people to onboard, which typically requires them to rewrite a significant fraction of their production code, often referred to as "replacing the wheels on a running train". You need lots of evidence, credibility and vision to make them move.
Yeah - just finished migrating a system of 100+ Linux processes all inter-communicating via CORBA to use RabbitMQ instead. Production system with 24x7 uptime and migration spread over more than a year with ongoing functional releases at the same time. I prefer to call it changing the wheels on a moving car.
No doubt it's worse at Google, but these type of infrastructure projects are going on everywhere, and nobody is getting medals.
This announcement, including the leadership changes, sounds more like they've shut down DeepMind and moved everyone over to Google Brain. Keeping the DeepMind name for the new team is a clever trick to make it look like more positive news than it actually is.
He is key to defining the culture (secrecy etc). There is a huge culture difference between Brain and DM and, with Demis at the helm, I'm concerned that it'll be Brain moving towards DM culture, not vice versa.
ehhhhhhh sure but for how long? these statements stick out to me:
- "I’m sure you will have lots of questions about what this new unit will look like" aka we're not going to talk about specifics in public comms
- "Jeff Dean will take on the elevated role ... reporting to me. ... Working alongside Demis, Jeff will help set the future direction of our AI research" aka Demis isn't the only Big Dog in the room anymore
It seems that DeepMind has now gone from what had appeared to be a blue sky research org to almost a product group, with Google Research now being the primary research group.
Jeff Dean's reputation has always been as an uber-engineer, not any kind of visionary or great leader, so it's not obvious how well suited he's going to be to this somewhat odd role of Chief Scientist both to Google DeepMind and Google Research.
How things have changed since OpenAI was founded on the fear that Google was becoming an unbeatable powerhouse in AI!
I'd hope that a direct report to the top also has the strength of character to call for shutting it down, if it doesn't fly. Or, to respond appropriately to requests to commercialise it, with huge latent risks.
Rushing it out the door in a space race model is probably not a good idea. At this point, the value proposition is moot.
As a client and even paying customer of Google, I don't want this AI intruding into my product experiences without a big fat OFF switch. Not because of some Terminator/Skynet fantasy: I want the opportunity to distinguish between reality as projected by PageRank and classic NLP algorithms, and the synthetic responses of a model.
That sounds like a fairly brilliant counter to "Open"AI. Something tells me that Google is still too scared of this tech in the hands of the public to go there though.
Piles of previous statements talking about safety and a general unwillingness to put any of its models in the hands of anyone not under NDA. Bard is the first counterexample I can think of and that was forced by OpenAI.
Anecdotally, Bard is much better at guiding the responses than ChatGPT. They're slowly releasing more and more "features" as they feel comfortable. You can see what they're up to with the AI Test Kitchen app.
I kinda liked how open ChatGPT was before the heavy filtering, but I see why we need to rein in the chaos overall.
The AI primitives are pretty basic. The real brains are in figuring out how to make the best model. The engineering integration is pretty straightforward.
I'm worried that OpenAI has started a trend of these AI companies being a lot more secretive about their research in the future. I mean basically OpenAI took Deepmind's/Google's public research on transformers and ran with it, not publishing back the results of improving it.
This probably sent a bad message with consequences for the whole public research field.
I agree with this. The last decade was a golden age of AI where all the major players, including Google Brain, DeepMind, FAIR and Microsoft Research, contributed a lot. To be honest, OpenAI had the least intellectual contribution of them all, save for a few pieces of marketing material masquerading as papers. From now on we can expect all labs to be secretive and not publish anything. This is really bad, considering all these models are black boxes and research is needed to understand them better. I hope government comes into the picture and forces these labs to explain the details of each model.
FAIR will continue to publish. Nvidia and Uber also. Then you have open source oriented labs who should continue publishing. Google is the big one. They have made more research contributions than all other labs combined basically.
No, not really. It was never really as large, and most of their output was in probabilistic programming (e.g. Pyro) and work that was relevant to self-driving cars (point cloud compression, etc.). But they shut down Uber AI in the layoffs last year.
This comment reminds me of what Facebook did to the open web in 2005.
Before Facebook, the web was more open, with websites being more accessible to each other. Google scraped resources like Wikipedia and Twitter, and augmented the results into their search page.
When Facebook appeared, Google tried to integrate Facebook data into their search page. But Facebook, then an up-and-coming internet company, wanted to protect their data as a competitive moat. With this seeming to set an example, each platform started to hoard their data on their website. The web no longer interoperated with their data, and all data began to be siloed in their own platforms.
Because they shipped a product that everyone loves, which is a credit to them. At this point nobody cares whether they made AI open, or whether they really care about the safety and negative aspects of LLMs.
I don't think this is true. Facebook has released both LLaMA and Segment Anything after ChatGPT took off, with LLaMA being half-open and Segment Anything fully open. I think that, apart from Google, which worries about OpenAI producing a Google killer, the show will go on. Facebook doesn't sell language models or image models; it's not a threat to their business.
That trend started at about the moment when Llama was leaked. We didn't really take the good-faith limited access in good faith ourselves, and as a result lost trust.
I disagree. Meta released a research paper and the model (the latter only to researchers). OpenAI won't even release an actual paper detailing the specifics of their research. That's a much lower bar, and I highly doubt those two incidents are really related.
While ultimately I think this is probably a very good organizational change, to have similar teams working on similar projects under the same leadership, it does seem to spell trouble in the short term.
I can read between the lines that Google is done having Deepmind floating out there independently creating foundational research and not products. Sounds like this is a sign that they've internally recognized they are behind and need all their resources pulling in the same directions towards responding to the OpenAI/Microsoft threat.
It also seems to signal that they won't have their answer to Bing in the short term. As they say, nine women can't make a baby in a month and adding people to a late project makes it later.
This sounds about right. I think it's acknowledged that OpenAI's strength has been product rather than just pure research - google and facebook both have way more publications and deeper benches, but aren't really commercializing anything.
The shift to commercialization (by companies) was inevitable. It's also a bit sad though. Somebody still has to do the fundamental stuff, and Google (along with Facebook) has been amazing for the ecosystem, especially open source. If everyone is going the OpenAI route, the golden age of AI is going to be over as we move into the profit extraction phase.
I see your point, though I think it's ultimately going to be good for AI progress. So far the research has been mostly a vanity project for these companies. Who knew if there was really any gold at the end of those rainbows. Eventually the appetite for participating in the research paper olympics was going to run out, probably right at the same time that monetary policy stayed tight for too long.
The possibility of building a trillion dollar company on this tech means a whole lot more investment, more people entering the field. More people excited to tinker in their spare time and more practical knowledge gained. More GPUs in more data centers. Eventually things will loop back around to pure research with that many more resources applied.
It sure beats an AI winter, which probably would have been the alternative had LLMs not taken off.
> google and facebook both have way more publications and deeper benches, but aren't really commercializing anything.
I am absolutely certain that Google and Facebook are productizing their AI research, integrating it with their money-making products, and measurably earning more money from the effort. Perhaps what you mean by "commercializing" is packaging AI in direct-to-consumer APIs? IMO, that market is not currently large enough to be worth the effort, but it is almost certain that GCloud will continue to expand its ML support.
> Somebody still has to do the fundamental stuff, and Google (along with Facebook) have been amazing for the ecosystem, especially open source
A new golden age for university research? It has been made almost completely irrelevant in the last 3 years, and now it has the chance to capture fundamental research back. Let corporations worry about products, as it has always been.
> When Shane Legg and I launched DeepMind back in 2010, many people thought general AI was a farfetched science fiction technology that was decades away from being a reality.
For context: this is pretty surprising, given the significant amount of independence Deepmind had within Google. So much so, in fact, that they tried for a long time to be spun off from Google: https://www.wsj.com/articles/google-unit-deepmind-triedand-f...
Yeah. DeepMind has sought more independence than they had, but now they have lost it completely. It seems there was an internal power struggle and Google won.
The people who want to do R&D disconnected from product dev should be leaving Google DeepMind. There's still a valuable role for them elsewhere and there will always be money to fund them. That doesn't mean Google will suddenly be at a disadvantage as a result, though.
One of the biggest liberties DeepMind had was a completely separate hiring process and pipeline than Google.
Their open positions mysteriously disappeared in November last year and they are still closed, apart from specific senior roles and a very open-ended "register your interest if you have a PhD".
Big loss for DeepMind if the separate pipeline is lost. Being able to hire for their priorities instead of whatever Google's hiring for is one of the reasons it was so successful.
I'm surprised to see so many comments in this thread criticizing Google for not milking more money out of their AI research sooner. Not being a shareholder, I'm pretty happy with how they catalyzed the modern AI revolution and have worked on very hard and meaningful problems like protein folding.
People can be negative and critical for no reason. In this case, I think criticism is due because Google's failure to productize has led them to a potential existential disaster. Most of their revenue depends on search and on there being an ecosystem of websites to link to that display even more of their ads. Generative AI is an existential risk to their current search interface, to the ability to insert ads into that experience, and to there even being any ad-supported websites with free content to link to.
With the reports that Samsung may switch to Bing, you could quickly see an exodus in users over to chat search. It wouldn't take much lost revenue to implode Google's business model and the business model of every ad-supported site on the internet.
> and there being an ecosystem of websites to link to and display even more of their ads.
Regardless of the change in the interface, the websites aren't going anywhere. Maybe they'd be more tailored for LLMs to parse than humans.
> Generative AI is an existential risk to their current search interface,
I don't think so. It'd be easy to pivot to a different interface if it gains popularity after the initial hype. There are still a lot of monetizable queries that people use Search for, and won't use LLMs.
> the ability to insert ads into that experience and there even being any ad-supported websites with free content to link to.
It'd be easy to insert ads in LLM responses. I think LLMs will be a good thing for Google Ads. Right now, people hate ads on web because they are obnoxious and are competing for attention. Inserting ads in LLM responses will much less distracting and valuable.
Ask Google Search chat which TV should I buy based on my criteria, and Google could suggest the top brands, and "ads" for where you could buy it. For example: "there is a promotion going on at your local BestBuy for this TV" or whatever. They have this info in the Shopping tab.
Fair. ChatGPT is definitely the most serious event in Google's long and utterly dominant history of web search. Samsung is definitely using ChatGPT as a negotiating tactic with Google. I assume that ad revenue share is part of the arrangement. There's no way that Bing Ads could match the amount that Google Ads pays out to Samsung. This whole space is moving excitingly fast, but it's still too early to claim that anyone has "won" the space. If I had to bet who had the most advanced and most used AI service 5 years from now, I would definitely bet on Google.
Looks like DeepMind will no longer be able to pursue academic research freely, given the pressure to monetize. A talent exodus could happen, similar to what happened at Google AI, where many prominent researchers either went to OpenAI or started their own companies.
Agreed. I'm seeing multiple other comments here suggesting DeepMind was somehow a waste when they have done a lot of very impressive research. "Solving" protein folding. Retrieval transformers. Novel solutions to math problems using ML? What about beating fucking Lee Sedol in go? No? None of that matters? C'mon.
Those are towering achievements that don't add to Google as a business. It was an important part of Google's reputation, but now that reputation is in the mud as Microsoft and OpenAI become king in the eyes of the public.
Arguably none of that made a comparable difference to the life of a common man as ChatGPT did. It's necessary to do fundamental research, but imperative to maintain focus on delivering real world value to real world people.
Do you know if there's a list of such companies floating around? Really curious to see where the research talent in the space is heading, especially if they're leaving the warm embrace of their BigCo...
I feel like this is a bad sign. What this announcement reads like is "Hey! I won this internal political struggle!". OK, sure, but I'm not sure why anyone outside the company should take this as good news. This announcement either means the AI effort outside of Demis' team has been neutered, or they're lining Demis up to be the scapegoat for missing AI. Remember - what this announcement means is that Demis now has a load of people reporting to him who previously were rooting for his failure. Trying to synthesize those two separate teams (half of which wanted you to fail) into one productive and world-leading team is a hell of an ask.
One of HN's failure modes is that inaccuracies get voted to the top if they seem correct to the majority of voters whose biases resonate with the poster's.
Aside: thank you for asking. When I previously encountered incorrect top-level comments that I knew to be wrong (insider information), I'd simply ignore and move on. You've inspired me to push back more often.
But not always! There are those among us who like nothing better than to double down in a flame war. One nice thing about having visited often over the past 7 years is that I know whom to avoid responding to (for the most part).
If you've worked at a large organization, you'll know the news can paint a cartoonishly distorted picture largely informed by the perspective of the anonymous sources, journalist and news organization.
The WSJ article expressly considers that factor and goes into detail on what's under the surface.
> The end of the long-running negotiations, which hasn’t previously been reported, is the latest example of how Google and other tech giants are trying to strengthen their control over the study and advancement of artificial intelligence.
Fighting between Deepmind and Google leadership over autonomy doesn't really directly support that Google Brain employees and Deepmind had infighting. They seem to me to be quite different things.
It seems like a big leap to take these articles as support the statement:
> Demis now has a load of people reporting to him who previously were rooting for his failure
It certainly might be true, but I'm missing the connection between these articles and the statement.
How are "Google Brain employees" distinct from "Google leadership with Google Brain personnel in their respective reporting line?" What is the criteria for that distinction?
Good managers insulate reports from the politics; if you weren't plugged into it, either your manager did a good job or it's the only part of Google that isn't 90% politics.
Signed, “didn’t work at brain or dm but was involved in a lot of alphabet level decision making”.
I never like the word “politics.” It carries the association of a bunch of people just playing backstabbing games to further themselves.
While this does occur, in general what I see is that with any large-enough group of people, there will be strong differences of opinions on how to steer the project to success.
In fact, I don’t think I can remember a single “political battle” that didn’t stem from a legitimate concern in how some project was being run and what they had decided to focus on.
> This announcement either means the AI outside of Demis' team has been neutered, or they're lining Demis up to be the scape goat for missing AI.
I read that to mean the party is over, we are treating that as a strategic subject and are streamlining our organisation. As you rightfully pointed Google basically had two competing organisations with all the complexity associated with that. That’s now over. From now on, there is only one captain steering the ship.
My outsider perspective is that DeepMind was the research arm and Brain was specifically tasked with making the company money through AI/ML applications. This appears to me that Google is combining the two to make sure that DeepMind starts turning a profit by adopting Brain's mission. Of course, DeepMind's brand is orders of magnitude more valuable so it makes sense to keep the name around. Would be happy to hear more knowledgeable takes on if this is an incorrect reading of the tea leaves.
This is not correct. Both DeepMind and Brain had/have separate applied groups. A lot of Brain research was/is not product focused at all. Transformers I'd say are more impactful than any other research innovation in the current AI boom and came from Brain not DM. DM does do great PR.
Google has (or until very recently had) some of the best researchers in the industry. Google's problem isn't developing new stuff, it's turning stuff they come up with into a viable product and then developing a market around it. All of their most successful products (search, gmail, maps, youtube) were developed at least a decade ago. They've come up with decent technology since then, but they seem to have developed a catastrophic inability to actually build a business around any of it. The failures of Google+, Duo/Allo, Inbox, Stadia have nothing to do with technology and everything to do with managerial incompetence.
Google could be sitting on the most advanced AI on the planet, but none of that matters as long as they're under the current leadership.
They made the enormous mistake of releasing bard with a lightweight model. It instantly made google look like they were way behind, and made bard largely irrelevant.
Bard should have been limited access and used the absolute most powerful model they had.
Now everyone is questioning if Google actually can compete with OpenAI at all, despite decades headstart and far more research and funding.
As much as I would like Google to compete strongly with OpenAI(i.e., ClosedAI) I somehow have this feeling that they are going to end up like IBM Watson.
So sick of this line of corporate worship. "Google" didn't invent anything, the employees did, and the top talent are leaving. Ilya Sutskever was a very key person at Google before he left to start OpenAI, and at most one person from the "Attention Is All You Need" paper still even works there.
The view from outside Google of how great they are has zero bearing on the realities inside the company. The company is its people - and Google is no longer the place to be if you have talent. Simple as that.
> "Google" didn't invent anything, the employees did
No shit, way to be pedantic over a common simple abstraction. Do you want a list of every author for every thing that is ever invented when someone refers to something?
It’s not pedantry, people keep saying “google” “invented” this stuff and a) no, they paid the people who did, and b) those people are no longer there.
“Google” is the wrong model of abstraction to deal with as with the founders long gone there is no intrinsically stable mapping between “Google” and “ai talent”.
This isn’t PageRank where the founder invented the original tech. Sundar going on 60 minutes saying vapid nonsense like “ai is fire” is the furthest extent of his abilities. There is nobody in power at google who is capable of leading in this moment.
Do you have any prior knowledge of these teams? They weren't working against each other. One group focused on research and the other focused on products.
Not to be snarky but do you realize that what you have stated is the definition of working against each other? Research teams are about getting to the paper and a deeper understanding, product teams are about getting something out the door that helps you capture value whether you understand it or not. Engineering research teams are notorious for being both ungovernable and spending so much time "understanding" their ideas that they miss the market window. The canonical book on the subject for me was "Fumbling the Future" which talked about Xerox PARC, I worked in Sun Labs ("where good ideas go to die"), hired people out of Microsoft's BARC (Bay Area Research Center), and worked in IBM's Watson group which pulled a bunch of people out of research to "make a product out of AI".
It is a really hard problem to "commercialize" imagination or innovation. Two very different mindsets between "doing product" and "doing research." DOW Chemical did a pretty good job of it, but they have always been more "components of the solution" rather than the full solution.
It wasn't engineering research, it was pure computer science. They published papers, attended conferences, etc. The other team, whom I personally interacted with more were engaged in solution design. They would have a goal (e.g. alpha go) and architect a solution for that specific problem. The two teams were somewhat orthogonal from what I recall.
Yeah, absolutely lining him up to be the scapegoat. His chance for success seems severely compromised and his mission was always to invent AGI, not create some kind of lowly ad-search product. The party sounds over for him and I bet he will be out and off to the next research think-tank soon.
Exciting time for AI. A little competition from OpenAI is finally forcing google AI researchers to actually focus on real world applications instead of just publishing papers and patting themselves on the back.
What do you mean? The attention-based transformer architecture was created at Google. AlphaFold took the biotech world by storm. TensorFlow is a significant platform for AI developers. Chinchilla pioneered a new method to improve LLMs.
"Just publishing paper" is such an ignorant and dismissive attitude to one of the most significant contributors to AI development in the world. Without Google research and publication, OpenAI would not have the foundation to build its GPT to the current level.
>Without Google research and publication, OpenAI would not have the foundation to build its GPT to the current level.
Right, and shareholders are asking Sundar "Why is OpenAI launching our product and taking our (massive) commercial success?"
Honestly I think Sundar should be let go over this, he should have been let go years ago, but now I definitely don't see what leadership sees in him. The dude is a better fit for running General Mills than a tech company. No innovation, just sell the same thing over and over.
The whole point is that all the progress you mention is worth peanuts to Google's shareholders. Hence this decision and blog post about it.
If anything it allowed competitors to raise above Google.
PS: Not saying Deepmind's research is not worthy, nor that this is fair. Just that it appears that Alphabet/Google (and by extension Deepmind) is being reminded that its main goal is making money.
Search is being ruined by the pursuit of maximizing ad revenue but AI research is being wasted because it's not used in pursuit of maximizing revenue. Can't really win, huh? There should be nothing but gratitude that Google uses its ad revenue to pay for research that greatly benefits everyone.
I'm not judging. I'm grateful of Alphabet/Google and that its research is being extremely useful in AI/ML, just saying shareholders may not think that way.
> Sundar, Jeff Dean, James Manyika, and I have built a fantastic partnership as we’ve worked to coordinate our efforts over recent months. [...] We’re also creating a new Scientific Board for Google DeepMind to oversee research progress and direction of the unit, which will be led by Koray and will have representatives from across the orgs. Jeff, Koray, Zoubin, Shane and myself will be finalising the composition of this board together in the coming days.
How is it different from Google's structure of having reviewing committees over everything? I hope that this is not yet another layer of gatekeepers. In a large enough organization, the high-level leads have such fragmented attention and such ingrained tendency towards avoiding political mistakes that they mainly contribute concerns instead of ideas, especially product ideas. As a result, they become gatekeepers and projects slow down. The larger an oversight committee is, the more concerns a project will receive, and the more mediocre the project will be because the team will focus on making the committee happy instead of making hard trade-offs with fast iterations. Of course, the Scientific Board consists of people way over my caliber, so they may well do a fantastic job for Google.
In reality, this is just Sundar looking through the org chart and saying: wow, these things seem related. Let's combine them because surely that will mean that it starts working. Just so that he can announce "something" as a growing army of sharks are snapping at his feet.
Two specifics here seem problematic to me, and I am curious about them:
1) DeepMind was given very significant autonomy since day 1 it was acquired. I find it very hard to believe that any attempt to take that away won't result in huge internal problems and / or attrition
2) Sundar Pichai has been coming in for a lot of criticism in general because he seems to be constantly out-maneuvered by Microsoft and we have seen very little new emerge from Google under his watch. Putting himself at the helm of this is going to really accentuate this and actually seems high risk - if he is the the reason Google is struggling to deliver elsewhere then positioning himself at the apex of an existentially important effort could be lethal.
Added together, there seems like a high risk this could go catastrophically wrong for Google, and Pichai in particular. Maybe it will work, but the downside is enormous.
> I’m sure you will have lots of questions about what this new unit will look like for you.
Can any HN Googlers comment on what this announcement means? Is this announcement just a PR move to get people to pay attention to upcoming announcements? Or does it actually have deeper impact to the way Google functions with internal teams?
DeepMind is going to stop making models to do MineCraft speedruns, and instead start making models to improve search results and ad click through rates.
Bingo. This feels like Google is _trying to get serious_ about leveraging DeepMind to create better products right now (and generate more revenue) instead of: "Look at this robot play soccer. Cool, huh?"
Lol. Makes sense. Any idea why they're queuing this up for a multiple announcement PR stunt? Seems a bit out of character for Google to tease out announcements like this.
My guess is they have a bigger announcement coming next week. Otherwise, it seems like a bad PR move... it positions Google as playing catchup in AI... which is accurate, but strange PR.
PR blitz because it's going to look good to investors. Google is killing the vanity projects and moonshots and rolling those resources into teams that aim to launch products in the next 6-12 months.
In some sense, it's PR, but not in the typical gimmicky way. Alphabet has had DeepMind for a while, and at this point with all of the competition in AI, it doesn't make sense to keep DeepMind at arm's length. I personally think it's a good move and gives me more confidence, but it doesn't affect me directly. I do worry what redundancies this causes with Brain and Research though.
There's an interesting history behind the RCA CED (Capacitance Electronic Disc), an attempt to put video on vinyl. While the full history behind its failure is complicated, a large factor was the differing priorities between research and other departments, which delayed the product by several years.
Considering some of the other comments about merging two AI departments together (DeepMind and Brain) and injecting more bureaucracy into DeepMind, it seems to have some parallels with the story of the RCA CED. You can't just let researchers do research. There needs to be a clear goal/priority that this research can eventually be converted into a profitable product or service. Otherwise, the researchers will continue to work on "cool projects" and publishing papers with their name on them, with little consideration given to how to monetize this research.
Personally, I'm not a fan of this AI gold rush trying to inject AI into everything. It's just interesting to ponder.
End of the day, the best product innovation has come from hungry passionate and capable founders with a solid mix of science, engineering and product.
As we are now seeing before our eyes, Google has aged. Big tech's cushy culture no longer creates an environment that yields innovation.
The MSFT move was probably brilliant most for this reason. They saw the writing on the wall. ChatGPT would never have been invented at any big tech co.
Google's investment in Anthropic is just taking MSFT's sloppy seconds, a kind of copycat play. Who knows, maybe Anthropic will make a happy mistake and create something surprising.
You are likely reading the result of a lot of corporate reorg that was a big political battle and the victors are now patting themselves on the back.
That said, a reorg can be good to refocus the company, but when you're bleeding out massively while the infection spreads, putting on a little bandaid is no reason to celebrate.
Anyways wish them the best of luck. As a kid it was always one of those companies we all dreamed to work for. Now it is like an aged grandparent who needs a cane to walk and encouragement when they are able to walk by themselves.
This demonstrates to shareholders of Alphabet that Sundar is actually not a good CEO. The focus is not on the product or quality but organising resources. The resources are already the best at Google but led by a moron.
Microsoft was smart on letting OpenAI keep doing their thing. Pichai seems to have chosen to micromanage DeepMind. The board should find an actual CEO ASAP.
I hear that Demis had been fighting this for a while. I guess he lost.
Which..... of course he did. They don't make any money. That's ultimately how these decisions are made.
I talked to one of their in-house recruiters (or HR or whatever) some 5-6(?) years ago. I asked them how they make money, they gave me a really muddled answer. It had the word "clients" in there. I didn't understand, so I tried to clarify, I said "oh, you make revenue from consulting for your clients?". Then they gave me a crystal clear answer, they said: "No, we're a lab". I noped outta there really fast.
In retrospect, I was right that I wouldn't have made any money, but might've been a good boost for my CV to do for a couple of years.
Google has to be freaked out at the rapidity with which OpenAI and Microsoft are taking their generative language models into various markets. Look at the way Microsoft is (fairly successfully) grabbing attention-share through the efforts of Peter Lee and others in healthcare with GPT-4, e.g. - Google is floundering in comparison (despite having a huge head start, particularly through DeepMind). I don't know that I'm convinced Microsoft can actually make good on the promises they are suggesting, but it'd be a daft bet on Google's part to assume they can't.
I agree. The only reasonable explanation (from the outside) is that a bunch of Legal AI doomers put guardrails and red tape everywhere inside DeepMind, preventing anything that’d look like a go-to-product.
It's an embarrassment to Google to have two independent AI research teams. It looks like a failure of management and oversight. I'm very surprised it took this long for them to be merged.
Google politics and history aside, it's much better to link research with products for software. Unlike physics and biology, software is basically what we say it is, so there isn't a natural ordering to research (and it can wander forever, all too much like literary criticism).
What both Google research and product missed, and ChatGPT provided almost accidentally, is that people need a way to answer ill-formed questions, and iteratively refine those questions. (The results are hit-or-miss, but far better than traditional search.)
What OpenAI, Bing, and now Google all realize is that the race is not to a bigger model but to capturing the feedback loop of users querying your model, so you can learn how to better understand their queries. If Microsoft gets all that traffic, Google never even gets the opportunity to catch up.
If Google were really smart, they would take another step: to break the mold of harvesting free users and instead pay representative users to interact with their stuff, in order to catch up. Just the process of operationalizing the notion of "representative" will vastly improve both product and research, and it would build goodwill in communities everywhere - goodwill they'll need to remain the default.
Progressive queries are just the leading edge of entire worlds of behavior that are as yet ill-fitted to computers, but could be accommodated via AI. And if your engineers consider the problem as "fuzzy" search or "prompt engineering" or realism, you need to get people with more empathy, a minimal understanding of phenomenology, and enough experience with multiple cultures and discourses to be able to relate and translate.
This move makes sense from the perspective that DeepMind has some street cred in their ability to produce novel models that solve interesting problems. The only issue is that DeepMind has also suffered from the same problems that the mothership has: an inability to execute. Are there any documented success stories of DeepMind making serious money off their models? They've been great at producing interesting and valuable research, but all of their partnerships have failed as far as I know.
Google's screwed because LLMs offer us a fundamentally different business model for search, and I'm not convinced that you can actually make a company out of LLMs that is as wildly profitable as Google was during its heyday. If that's true, then I just don't see how any CEO could go to the shareholders and say: "in order for us to survive, we have to accept that we're going to be a much smaller company in 5 years, both in terms of head count and profit." Sundar would be overthrown in a matter of days.
Transformers were fundamental research that ended up having huge side benefits. (Google has plenty of money to keep spending on fundamental research, especially focused on smart things like ML.)
> DeepMind and Google Research's Brain team are merging to form a new unit called Google DeepMind, which will combine their talents and resources to accelerate progress towards building ever more capable and general AI, safely and responsibly. This will create the next wave of world-changing breakthroughs and AI products across Google and Alphabet, while transforming industries, advancing science, and serving diverse communities. The new unit will be led by DeepMind CEO Demis Hassabis, with Eli Collins joining the leads team as VP of Product, and Zoubin Ghahramani joining the research leadership team reporting to Koray Kavukcuoglu. A new Scientific Board for Google DeepMind will also be created to oversee research progress and direction.
Accelerating AI development and improving safety are inherently contradictory. It's pretty annoying and disingenuous when someone says "this move will speed us up and make us safer".
One important signal here is that Jeff is now freed from his daily managerial work and can focus on Pathways. I think the overall vision of Pathways is the right approach, but my impression is that the current direction is focused more on scaling out the model than on making inference more efficient. Now the project is going to get much stronger technical leadership, so I expect some interesting developments on the productionization story in the foreseeable future.
From this blog post, I could already feel the bureaucratic nature of their org. My money's still on OpenAI. I think their motivation is more pure, their objectives more focused, and their org more simple. I usually think of product dominance in two vectors: first to market and benchmarks.
Google took over the world as something like the 11th search engine to hit the market, but some of their benchmarks were 10x better.
OpenAI has both going for them right now and I don't think that's going to change.
About time and finally for some very serious competition against O̶p̶e̶n̶AI.com but unsurprising that DeepMind would be directly involved [0] and merged with Brain.
Now let's get on with accelerating the real AI race to zero and the big fight against O̶p̶e̶n̶AI.com, X.AI and the other stragglers.
Releases like this are more about stock price and investment than anything else.
I’m glad we’ve put more investment into this area as ultimately AGI will be able to uplift a large sector of the population that historically went underserved, or at least level the playing field.
I wonder if this change has any implications for the TensorFlow vs. JAX situation/transition.
IIRC I read that DeepMind mainly used JAX, but I'm not sure about Brain.
Any insights from the people in the know? It seems JAX is the future, but TF dominates current production stacks.
Flax, a big neural network JAX library, is developed and used by Brain. TF will stay around for boilerplate / data-loading, but JAX support probably isn’t going anywhere.
I think they'll google-news the name "Bard" due to bad reception caused by unrealistic expectations (and a market primed by vastly superior alternatives).
Does anyone else feel some sort of "corporate-speak blindness" when reading these statements from Google? They are just informing us that some orgs are being rearranged, but for some reason they had to make the text have super low information density.
1. All fundamental AI research now falls under Demis. So basically what was Brain is now Deep Brain.
2. Jeff will lead the product build out of a multi-modal AI (LLM).
3. Google research under James will continue with everything else not directly AI related.
I'm curious what this will mean for DeepMind's work in medical and bioscience applications, versus what may now be more aligned with Google products and Anthropic, which seem to be prioritizing commercializing consumer applications over science.
OpenAI, Musk's whatever-the-name, Google Mind, the other dozen projects that spring up every single day. -- I just read Scott Alexander's Meditations on Moloch for the first time, and this mad rush to monetize AI seems to be right on track.
Am I the only one who thinks this is Sundar screwing up big time? If there's only one AI team, and it fails, then you blame whoever is leading it. If there are multiple teams, and they all fail, there's only Sundar to blame.
So Google's response to OpenAI is... bureaucratic? Creating and rearranging organizations and committees, and making big announcements is how the EU responds to challenges. The actual results often underwhelm.
While AI ethicists and safety researchers are urging for a pause to understand the implications of what we have already built, Google is announcing they will invest more in the acceleration of Artificial Intelligence.
This sheer panic does not look good for Google, because OpenAI does not have a technological advantage but a marketing advantage - and that is an area where Google does not have the upper hand compared to Microsoft.
I couldn't believe it when I heard this news of Google lagging in AI development despite being a front runner in tech for so long and with all that talent under the hood.
How times change - or is it true that nothing good lasts long?
I suppose with an org like Google, some overhaul is necessary. But I feel they are still playing politics while OpenAI/MS et al. are murdering their profit margin.
While the other cool kids are building stuff, Google goes through bureaucracy and power struggles to decide who will be in control and receive the raises and bonuses...
Both DeepMind and Brain use Jax, so they will definitely use Jax. However, they use different high level frameworks: All of DeepMind uses Haiku, while on the Brain side there are competing frameworks, with flax currently being the most often used one AFAIK. I'm not aware of anyone using trax there, and I would not expect it to get more adoption, on the contrary.
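For anyone not familiar with the libraries being named here, a minimal sketch (mine, not from either team) of the same tiny two-layer model written once with Haiku, DeepMind's JAX module library, and once with Flax linen, the framework most used on the Brain side; the layer sizes and input shape are arbitrary placeholders.

    import jax
    import jax.numpy as jnp
    import haiku as hk        # DeepMind's JAX module library
    import flax.linen as nn   # Flax "linen", common on the Brain side

    # Haiku: modules are built inside a function, which is then "transformed"
    # into a pair of pure functions (init, apply).
    def haiku_forward(x):
        return hk.nets.MLP([128, 10])(x)   # Dense(128) -> ReLU -> Dense(10)

    haiku_model = hk.without_apply_rng(hk.transform(haiku_forward))

    # Flax linen: modules are dataclasses with a __call__ method.
    class FlaxMLP(nn.Module):
        @nn.compact
        def __call__(self, x):
            x = nn.relu(nn.Dense(128)(x))
            return nn.Dense(10)(x)

    flax_model = FlaxMLP()

    # Both keep parameters outside the model object as plain pytrees,
    # so jax.jit / jax.grad compose with either framework in the same way.
    x = jnp.ones([1, 784])                                # placeholder input
    hk_params = haiku_model.init(jax.random.PRNGKey(0), x)
    hk_logits = haiku_model.apply(hk_params, x)

    fl_params = flax_model.init(jax.random.PRNGKey(0), x)
    fl_logits = flax_model.apply(fl_params, x)

Whichever side "wins", the surface area is similar enough that consolidating on one of them looks more like a migration chore than a rewrite, which fits the guess above that TF sticks around mainly for boilerplate and data loading.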
> Now, we live in a time in which AI research and technology is advancing exponentially. In the coming years, AI - and ultimately AGI - has the potential to drive one of the greatest social, economic and scientific transformations in history.
I'm not an AI Doomer, but is there some kind of scenario where the coming of AGI doesn't trigger a communist revolution and a lot of death and destruction along the way? I dunno, maybe it could be a Fabian revolution, but seems pretty unlikely. Seems more like AGI → everyone is pissed off that they still have to work for a living → a lot of rich people with heads on pikes. Is there some other scenario that's more likely? Doesn't feel that way to me. Then again, I'm the creator of https://bellriots.netlify.app/, so maybe I'm a Revolution Doomer.
• DeepMind and Google Research's Brain team merging into single unit: Google DeepMind
• Goal: accelerate progress in AI and AGI development safely and responsibly
• Demis Hassabis leading the new unit
• Close collaboration with Google Product Areas
• Aim: improve lives of billions, transform industries, advance science, serve diverse communities
• Greater speed, collaboration, and execution needed for biggest impact
• Combining world-class AI talent with resources and infrastructure
• DeepMind and Brain teams' research laid foundations for current AI industry
• New Scientific Board for Google DeepMind overseeing research progress and direction
• Upcoming town hall meeting for further information and clarity
If Google were to go on a startup acquisition spree in this hot new competitive space in a further attempt to catch up - how would they locate and assess potential companies?
I am a big fan of Alphabet as a company, but this is how I read the first two paragraphs...
> When Shane Legg and I launched DeepMind back in 2010, many people thought general AI was a farfetched science fiction technology that was decades away from being a reality.
Translation: "We were not able to see what the founders of OpenAI saw back in 2015".
> Now, we live in a time in which AI research and technology is advancing exponentially. In the coming years, AI - and ultimately AGI - has the potential to drive one of the greatest social, economic and scientific transformations in history.
Translation: "Now we live in a time in which AI research and technology has advanced exponentially thanks to the great achievements by our competitors – and we clearly feel left behind."
Personally, I feel AlphaGo was the biggest deal ever. It put AI truly on the map. OpenAI is just a corollary to that, and would not exist without DeepMind in the first place.
What has OpenAI accomplished other than a lot more publicity for AI? If I've been following the story right, Google and a few others have created all the tech breakthroughs and OpenAI just created a way for common folk to play with it then sold out to Microsoft.