I don't understand your argument - why is one-trick-pony revenue generation incompatible with high-quality systems and processes for writing code?
Both can be true at the same time, and my feeling is that they are, having worked there. The business is not very diversified, and the tools are great.
> I don't understand your argument - why does the lack of a diversified business imply a lack of quality in systems and processes for writing code?
That wasn't the argument. The argument was simply that having a profitable monopoly doesn't imply the presence of quality.
Google's massive internal processes and systems are the result of having already been huge for many years. Google needed the time and resources to build them up. They're not the cause of Google's becoming huge in the first place.
Of course the quality of Google's external search results helped Google achieve a monopoly. Ironically, many people say that Google's search results have been getting worse, and I'd have to agree with that sentiment.
I think you hit on a key part of the debate in this thread. There clearly is some sort of functional quality floor, in the sense that poor enough quality means something doesn't actually work to do its job. But beyond that, what is "quality"? Some would consider it to mean elegance of algorithmic and architectural design, or readability of code, or some other more abstract measure. Some consider it suitability to purpose, with a low bug count.
I'll give one example I observed commonly early in my career: the unexplained memory leak. Some process is running and its memory usage keeps growing. Eventually it will consume the entire memory of its machine and die. You have a few options: 1) debug the issue and address the root cause, 2) debug the issue and work around it in some way, 3) give up and rewrite the code using some other kind of tooling, 4) wake people up when the process dies and have them restart it, or 5) write a cron job that restarts the process periodically.
What is the right answer? The best from a QA perspective is probably to identify the root cause and fix the underlying issue. Tools like valgrind have made this much easier in recent years, but it can still be a challenge. Pragmatically, my own answer (speaking generally; different contexts call for different answers) would be to time-box an investigation and fix, and if that wasn't achievable in reasonable time, just write the cron job (or a small watchdog like the sketch below) and move on to the next problem. You can imagine very successful operations filled with kludges like that. Is that low quality?
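To make that concrete, here's a minimal sketch of what that kind of kludge looks like as a small watchdog, assuming a Linux host and a hypothetical ./leaky_worker binary (in real life it's more often literally a cron entry plus a shell script, or a supervisor like systemd with a memory limit):

    #!/usr/bin/env python3
    """Kludge watchdog: restart a process when its resident memory grows too large.
    A sketch of option 5 above (really a hybrid of 4 and 5); Linux-only."""
    import os
    import subprocess
    import time

    COMMAND = ["./leaky_worker"]           # hypothetical leaky binary
    RSS_LIMIT_BYTES = 2 * 1024**3          # restart once RSS passes ~2 GiB
    CHECK_INTERVAL_SECONDS = 60

    def resident_bytes(pid: int) -> int:
        """Resident set size, read from /proc/<pid>/statm (second field is pages)."""
        with open(f"/proc/{pid}/statm") as f:
            resident_pages = int(f.read().split()[1])
        return resident_pages * os.sysconf("SC_PAGE_SIZE")

    def main() -> None:
        proc = subprocess.Popen(COMMAND)
        while True:
            time.sleep(CHECK_INTERVAL_SECONDS)
            if proc.poll() is not None or resident_bytes(proc.pid) > RSS_LIMIT_BYTES:
                proc.terminate()           # kludge: kill the leaky process and relaunch it
                try:
                    proc.wait(timeout=30)
                except subprocess.TimeoutExpired:
                    proc.kill()
                proc = subprocess.Popen(COMMAND)

    if __name__ == "__main__":
        main()

The process stays up and nobody gets paged at 3am, but the leak is still there, which is exactly the "is that low quality?" question.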
It means that you wouldn't expect them to have products that fail because of unresolvable tech problems. You can see plenty of cases like Stadia where the tech was solid but the product strategy and leadership follow-through were garbage.
I had a front-row seat to some really revolutionary ideas in Google getting to the prototype stage before being squashed in the gears of "We're chasing a market and that's not how users see this product working." Stuff where, if it caught on, it'd be a paradigm shift... But it turns out users don't want every paradigm shift that comes down the lane.
Because Google has (traditionally; this has changed in recent years) a real push-pull in authority between management and engineering leadership, the company can't commit fully to building a quality implementation of the status quo. Nor can it commit fully to chasing entirely new ways of doing things that could shake up an established market. In general, this... Actually kind of works out fine for them, more fine than critics often realize, because neither of those answers is always correct. Sometimes you get Gmail. Sometimes you get Google Drive. And sometimes you get iGoogle or Wave. And sometimes you get the stuff in between, like Reader or App Engine (really popular among its users, but those users don't have the money to make committing to it profitable).
> If Google's code writing process really was superior, you'd expect them to consistently produce killer products in other fields.
This is a very simplistic notion of what enables the creation of killer products. That has much more to do with understanding users' needs and identifying market opportunities. Good code writing processes are about code maintenance and scaling engineering effort, not dreaming up the next killer app.
I was there for well over a decade, and my read on it is that the tools are great for building solutions for all sorts of problems, including web applications and big data analysis systems. The biggest issues with the ability to launch a killer app are IMO:
- The risk aversion to anything that threatens a big existing successful product.
- The product/feature approval processes that implement the above.
- The concern about launching something experimental or half-baked under the brand (vs. a startup, which does that by default).
Personally, I saw a lot of product and engineering creativity, but it was often stifled or watered down by the above.
Don't mistake publicly visible products for internal ones. There is a lot of amazing infrastructure internally that has no public presence at all.