I think it's more about transitioning from the founder-type CEO driven by vision to the maintainer-type CEO who only cares about pleasing the board to keep his position as long as possible.
Tbf, while Cook doesn't have the product vision of Jobs, he does have a clear strategic vision of what he wants Apple to be. He's a post-founder CEO for sure, but one of the best, and in a different league from Pichai by all accounts.
Ballmer was employee #30 at Microsoft.
While not technically a founder, he had been with the company for 20 of its 25 years of existence before becoming MSFT's CEO.
+1. It takes a certain kind of person to rise through the ranks like that, so you automatically filter out people who might be more akin to the original founders. Vision _might_ be an asset when rising through the ranks, but pleasing superiors, hitting metrics, politics, and luck probably overwhelm it.
All corporations grow, decay, and rot along similar trajectories, so in this sense it's similar.
I worked at MS at that time, and I think BigG now is like MS in 2003, roughly speaking. I'm waiting for miniggl to appear, then the circle will be complete. :)
Having worked at both: Google is far, far ahead. Any young dev reading this - you will get good engineering foundations at Google that will stand you in good stead for the future.
People said the same thing about Microsoft in the late 1990s - The Windows NT kernel was supposedly a thing of beauty, while Cairo was way ahead of its time. Problem is, computer science advances, so the type of programming that was state-of-the-art in the 1990s paled in comparison with what Google developed 10 years later. And the Google stack of 2003 is remarkably outdated by 2018 standards.
As a Xoogler I use & value the engineering chops I learned at Google every day, but I'm not naive enough to think that'll last forever. There are some really exciting developments in multiple areas of computer science - notably blockchains, Rust, GPGPU, serverless - that Google is poorly positioned to take advantage of, as well as others (machine learning, search, big data, distributed systems, capability security) that Google has historically been the market leader at but that are rapidly being commoditized by very high quality open-source projects.
Microsoft's hiring pretty much went down the drain because there was no common hiring bar; I could point to different orgs with very different levels of devs. That's not the case at Google - no matter how much HN complains about it, Google consistently hires smart, above-average engineers. Google is growing, and it will meet its challenges. Wait and watch.
Right, so is Google losing in ML, or just not caught up to speed?
I don't know when you left, but even outside Google it's pretty well known that ML innovation is Google's strength.
Two big teams use Rust at Google in production. I guess Google didn't make the TPU or TensorFlow either. Take your pitchforks out, but once you're done, have a look at some facts.
Not losing, but commoditized - in those fields, there are now perfectly good open-source alternatives.
I was at Google from 2009-2014. When I joined, Google was literally the only place you could work if you wanted to do data science on web-scale data sets. Nobody else had the infrastructure or the data.

Now if you want to do Search, ElasticSearch has basically the same algorithms & data structures as Google's search server, with Dremel + some extra features thrown in. (The default ranking algorithm continues to suck, though.) If you want to do deep learning, you reach for Keras, and it'll use TensorFlow behind the scenes but with a much more fluent API (see the sketch below). Hadoop was a major PITA to use when I joined Google; now in many ways it's easier & more robust than MapReduce, and the ecosystem has many more tools. Spark compares well with Flume. Zookeeper stands in for Chubby. There are a number of NoSQL databases that operate on the same principle as BigTable, though I'd pick BigTable over them for robustness. Take your pick of HTML5 parsers (I even wrote and open-sourced one while I was at Google). Google was struggling mightily with headless WebKit for JS execution when I left; now you can stand up a Splash proxy in minutes or use one of the many SaaS versions. Protobufs, Bazel, gRPC, and LevelDB have all been open-sourced, as have many other projects.
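To make the Keras point concrete, here's roughly what that "fluent API" looks like - a minimal sketch of a tiny classifier, where the layer sizes are arbitrary and the data loading is omitted:

```python
from tensorflow import keras

# A tiny feed-forward classifier; TensorFlow does the heavy lifting
# underneath, but the model definition reads almost like prose.
model = keras.Sequential([
    keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(x_train, y_train, epochs=5)  # training data omitted here
```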
The big advantage of big companies like Google is that they have lots (and I mean lots) of data, and for them that data is comparatively cheap to store and manipulate.
I mean, I first wrote a text-categorization system using a k-NN algorithm about 12-13 years ago, and to make it run with acceptable results I only needed to manually categorize about 200 articles for each category's training set. That was very doable, both in terms of time spent constructing the training set and in terms of storage costs.

Now, I've been thinking for some time about writing an ML algorithm that would automatically identify forests in present-day satellite images, or in some 1950s Soviet maps (which are very good on detail). I'm pretty sure there's already some OS code that does this, but the training set requirements would, I think, "kill" the project for me. I read a couple of days ago (the article was shared here on HN) about some people at Stanford implementing an ML algorithm for identifying ships in satellite images, and I remember they used 1 million high-res images as a training set. For me as a hobbyist, or even for a small-ish company, there's no cheap way to store that training set, never mind the cost of labeling those 1 million training images.

Otherwise I totally agree with you: we live in a golden age of AI/ML code being made available to the general public, but unfortunately it's the data that makes all the difference.
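(For comparison, that kind of k-NN text categorization is a few lines with today's open-source tools - a minimal sketch using scikit-learn, with placeholder documents and labels standing in for the ~200 labeled articles per category:)

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Placeholder training data; in practice, ~200 labeled articles per category.
train_docs = ["...article text...", "...more article text..."]
train_labels = ["sports", "politics"]

# TF-IDF features + k-NN classifier, the classic setup described above.
# (k=1 only because of the two-doc placeholder set; use a larger k for real data.)
clf = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=1))
clf.fit(train_docs, train_labels)
print(clf.predict(["some unseen article text"]))
```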
They kicked off the deep learning trend when they bought DeepMind, I guess. Otherwise, what innovation are you talking about?
Switching from KNN to DL in machine translation is impressive as a technical achievement ... but not really an innovation, and I doubt all this "innovation" impacts their bottom line in any way.
> Otherwise what innovation are you talking about?
Quantity: Google has the highest number of deep learning papers accepted into top conferences among all institutions, even when papers from DeepMind are not counted in Google's.
Quality: Transformer and the recent BERT have, pun intended, transformed the entire NLP field. Batch normalization is now a staple of all neural networks, as are its descendants instance normalization, group normalization, etc.
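(For the unfamiliar, batch normalization just normalizes each feature over the mini-batch and then rescales it with two learned parameters - a minimal NumPy sketch of the forward pass, training-time statistics only, running averages omitted:)

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: (batch, features); gamma, beta: learned per-feature parameters
    mean = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                      # per-feature batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)  # normalize each feature
    return gamma * x_hat + beta              # rescale and shift
```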
These are just off the top of my head. Google may have done many things wrong these days, but it definitely has not lost any edge in machine learning.
While I don't know the real numbers, back-of-the-envelope estimates for hardware costs alone (based on GCP TPU/GPU pricing) give on the order of hundreds of thousands of dollars for BERT, and tens of millions for AlphaGo and friends. Notice how very few organizations in the world are in a position to commit that kind of resources to AI problems, and that only Google and China are choosing to do so.
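(The shape of that estimate, with every number below an illustrative placeholder rather than an actual GCP list price:)

```python
# Back-of-the-envelope accelerator cost for one training run.
def training_cost(num_chips, hours, usd_per_chip_hour):
    return num_chips * hours * usd_per_chip_hour

# A hypothetical large pretraining run: 256 chips, 4 days, $8/chip-hour.
print(training_cost(num_chips=256, hours=4 * 24, usd_per_chip_hour=8.0))
# -> 196608.0, i.e. order of hundreds of thousands of dollars
```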
And? Most scientific breakthroughs after World War II require expensive equipment and materials. That fact doesn't make the achievements from Google, from Bell Labs, from CERN, from Fermilab any less innovative.
There's a lot of interesting stuff going on with computational blockchain platforms (Ethereum, Stellar, EOS). Basically they make it possible to write and deploy code - with nobody's permission, no approvals or policies or corporation necessary - that can ensure that when a user performs an action, they receive something of value. And they can do this without the user needing to trust that the terms of the transaction won't change later.
One of the hardest parts in many software markets is designing incentives - making sure that the user has a reason to perform the action you want them to perform. And for startups, there's the added problem of getting users to trust that the incentives you advertise will actually hold. I might trust Stripe or Google to actually deliver the money they say they're collecting on my behalf, because they're big established companies, but I'm certainly not going to trust a random payment processor who just started up and is advertising on a forum somewhere. But once platforms like Ethereum actually have decent UIs and reasonable transaction processing rates, you can just inspect the code of the smart contract that collects Ether (or Dai, which is the new payment hotness now) from users and disburses it to the parties involved in producing whatever service they use.
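(To illustrate the "inspect the contract" part - a minimal sketch using web3.py, where the RPC endpoint, contract address, ABI fragment, and function name are all hypothetical placeholders, not a real deployment:)

```python
from web3 import Web3

# Placeholder RPC endpoint; any Ethereum node or provider works here.
w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/<project-id>"))

# ABI fragment for a hypothetical payout contract: one read-only function
# exposing the address the collected funds will be disbursed to.
PAYOUT_ABI = [{
    "name": "beneficiary",
    "type": "function",
    "stateMutability": "view",
    "inputs": [],
    "outputs": [{"name": "", "type": "address"}],
}]
CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder

contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=PAYOUT_ABI)

# Anyone can read this straight off the chain, and the deployed bytecode
# guarantees the disbursement rule can't be quietly changed later.
print(contract.functions.beneficiary().call())
```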
The permissionless aspect of this whole system is very similar to the early WWW, where you could just stand up a website to do something useful and if it was good users would flock to it. That's why I'm excited. The cryptocurrency world gets a lot of bad press because a lot of the early users were quite gullible and a lot of the early use cases were in finding better ways to scam them, but there's real, fundamental technological innovation behind it.