
Cool. Could someone, maybe an ex-googler, comment on which parts of these work well and which don't?

A lot of other companies get into trouble trying to cargo-cult what Google does when they are operating in very different environments wherein those practices aren't optimal. E.g. different levels of scale.

Additionally, critics of Google may point out that its engineering culture may not be great even on its own terms -- every time Google launches a new feature, people post links to the Google product graveyard.



In my ex-Google experience, here are the stages of denial about something that Google does which is good but the industry doesn't yet embrace.

Stage 1: "We're not Google, we don't need [[whatever]]";

Stage 2: The foreseeable disaster that [[whatever]] was intended to address happens;

Stage 3: Giant circus of P0/SEV0 action items while everyone assiduously ignores the [[whatever]];

Stage 4: Quiet accretion, over several years, of the [[whatever]] by people who understand it.

And the [[whatever]] ranges from things that are obviously beneficial like pre-commit code review to other clear winners like multi-tenant machines, unit testing, user data encryption, etc etc. It is an extremely strange industry that fails to study and adopt the ways of their extremely successful competitors.


Strong disagree. In my experience, this is not commonly why competitors don't adopt Google's practices. The main reasons I've seen are:

1. Money. Google essentially has a giant, gargantuan, enormous, bottomless pit of money to build a lot of this tooling (and also to take the risk if something ends up not working out). I think you might be able to say that other companies are just being short sighted if they don't implement some of these things up front, and that may be true, but (a) that's pretty much human nature, and (b) given that very few other companies have a bottomless pit of money like Google, that may just end up being the right decision (i.e. survive now and deal with the pain later).

2. Talent. This is closely related to #1, but few other companies have the engineering talent that Google does. If there is one thing I've seen in my experience with ex-Googlers, it's that most of them are fast coders. So when you go to your boss and say "I'd like to implement engineering/tech-debt improvement XYZ", at other companies it's a harder decision if (on average) it would take 9 months to implement vs. 2 or 3.

3. Related to both of the above, but your 4th bullet point, "Quiet accretion, over several years, of the [[whatever]] by people who understand it.", is actually other companies just waiting for more evidence to see what "shakes out" as the industry-standard, optimal way to do things.

4. Finally, your stage 1, "We're not Google, we don't need [[whatever]]" is actually true in tons of cases. Many of Google's processes are there to handle enormous scale, both in terms of their application/data capacity, as well as the sheer number of engineers they need to coordinate. Very, very, very few companies will ever hit Google's scale.


I will add one more category - companies that have or need scale but nowhere near the technical talent that Google has.

E.g., most telco companies in the US run at a scale similar to Google. They need most of the software engineering best practices, internal tools teams, etc. They used to have all that during Ma Bell times when they had a cash cow. That cash cow no longer exists, and they're left with the scale and the points 1-4 described above.

This in general leads them to outsource to lowest bidder contracting firms that compound the shitty software problem. In the end it’s a miracle that all of it works together :)


Eh, money is only a factor when it comes to scale. That is, Google can afford to hire 30 engineers to support their CI infra; you can't.

Everything else isn't. Unit tests aren't a luxury that Google's infinite riches allow it to have - they pay dividends whenever code exists for more than a few weeks.

You can bet your ass Google engineers don't write unit tests for throwaway code.

CI saves time, and while Google can maintain a dedicated team, you can afford to pay for Jenkins or GitHub Actions, because not paying for them is more expensive - if your company is to survive for more than 3 months.


CI can totally cost time, especially if it doesn't have a team of good engineers keeping it running sanely and ensuring it tells you useful things on failures.

A CI bot which waits 24 hours then says "no", with a text file that crashes your browser and ultimately only contains the information 'exit code nonzero', and which fails for reasons totally unrelated to your code change, is dubious as a value-add system.

If that bot is also a non-negotiable gate on shipping things you get a bunch of other antipatterns, like massive code patches to decrease how often you have to roll the die and a tendency to hit retry every day or so until the probability that it's actually your patch that's broken gets high enough that you try to debug it locally, at which point you may be unable to reproduce the blocking error anyway.

The real question is whether that pathologically rubbish implementation is still better than shipping without CI, which rather depends on whether your engineers ship code that works without the guide rails, which to a fair approximation they do not.

Thus it might still be a net win for product quality but saving time is harder to see.


Setting up a decent CI pipeline on GitHub with GitHub Actions is super, super easy - less than a day's work for a basic initial implementation.

Of course, the difficult part about managing a CI pipeline is writing quality tests, ensuring your tests don't take forever, deciding the right balance between mocking out 3rd party runtime service dependencies vs. calling their dev versions in your tests, etc.

But this is why I argue that the bare minimum should just be to have the CI pipeline created. If you don't have that, you are definitely going to screw over future you. Once that's there you can balance the cost/benefits of how much to invest in your test suite and test coverage.
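
To make that mocking-vs-dev-dependencies tradeoff concrete, here's a rough Python sketch of the mocking side; the third-party client and its charge() call are made up for illustration, not any real library's API:

  # Hypothetical example: charge_customer wraps a made-up third-party client.
  from unittest import TestCase, mock

  def charge_customer(client, customer_id, cents):
      # Thin wrapper around a third-party service call.
      response = client.charge(customer_id=customer_id, amount_cents=cents)
      return response["status"] == "ok"

  class ChargeCustomerTest(TestCase):
      def test_successful_charge_with_mocked_dependency(self):
          # Mocking the dependency keeps the test fast and hermetic, at the
          # cost of drifting from reality if the real API changes.
          fake_client = mock.Mock()
          fake_client.charge.return_value = {"status": "ok"}
          self.assertTrue(charge_customer(fake_client, "cust-123", 500))
          fake_client.charge.assert_called_once_with(
              customer_id="cust-123", amount_cents=500)

The alternative is constructing a real client pointed at the vendor's dev/sandbox environment: slower and flakier, but it exercises real behavior.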


As I said in another comment, I think folks are just disagreeing on terms. I absolutely don't consider things like unit tests or CI to be any kind of "Google-specific" engineering advice - they're just standard good engineering practices.


True, but what I was trying to introduce into the discussion was what another sibling commenter astutely labeled the anti-cargo-cult: the industry feeling that anything done at Google is an anti-pattern, even when that thing has been firmly established among other successful software developers for a long time. And in my experience comprehensive unit testing is one of those things that I have sometimes heard waved away.


I don't think Google is the only one writing unit tests.


Just discussing your point #1, I hear this, but what I see is that the companies I have direct experience with spend much more and move more slowly with their we-are-not-Google hacks. People move fast and break things into a corner where their entire project is a haunted graveyard with no tests and no comments, that has never been reviewed, and at that point nobody is allowed to change anything.


Perhaps, but to me everything you put in your comment above just sounds like bad engineering practices in general and not something particularly related to Google processes.

E.g. things like "do feature work on a branch and then code review/run PR checks before merge", "have unit test coverage (being a hard balance to judge what is sufficient coverage)", "have useful comments" - absolutely none of these things I associate with "Google engineering practices", and many of them definitely predate things that were specifically done at Google.

Things I think of when I think about Google practices are things like ensuring data is infinitely horizontally scalable, monorepos, etc. Those things are all scale-specific.


Monorepos actually work better at small scale than at Google scale. I think it's nuts that individual startup founders actually consider microservices; if you are validating a software product idea, write a computer program, the very simplest one possible, to prove that you can do it and get the general shape of the architecture before you start dividing it into microservices.

I usually see the pressure to split into microservices appear around 20 engineers, just as your single repo is starting to get unwieldy. Knowing that the big companies use a monorepo is pretty important information here, because it may prompt you to invest in tooling to make that one repo less unwieldy rather than splitting into many small repos that will be very difficult if not impossible to merge back together again.

Google doesn't actually plan for infinite horizontal scalability in data. The framing I've found most useful is [Jeff] Dean's Law: "Plan for a 10x increase in scale, but accept that sometime before you reach 100x, you will have to rewrite the system with a different architecture." The reason for this is shifting bottlenecks: as the system gets larger, different aspects become the bottleneck to future scalability, and each time the bottleneck changes you usually need a different architecture. But by planning for an order of magnitude growth, you ensure that you're not artificially introducing bottlenecks, and that you have enough headroom to actually discover the new bottleneck.


Re: monorepos, I think we're talking about 2 different things. I usually hear the term "monorepo" discussed in the context of how it is practiced at places like Google and Facebook: having the code for all the company's services (micro or not) stored in a single source control repository.

A monorepo really doesn't have anything to do with how code components are deployed - your comment seemed to be contrasting a monolith architecture with a microservices one.


I was fast & loose with terminology, but I'm thinking of the organizations where every binary and every library is its own Github repository, and you make copious use of git submodule to build anything. I think that's the same thing you're talking about, right?

It's impractical (particularly when the project is young) for the same reason having separate binaries is impractical: it makes it very difficult to do refactorings that cut across repos while still keeping atomic checkouts and rollbacks.


Yep, we're talking about similar things but in slightly different ways:

1. When you're small enough, and only have a few teams and/or deployed binaries, you keep everything in a single repo.

2. As you grow with more teams and more products/binaries, often companies will split into having separate repos per team/product, and then use some sort of dependency management tooling (either something like the git submodules you discuss, or a private package repository, e.g GitHub's Package Registry, Nexus, or Artifactory). I totally agree with you that a lot of companies do this prematurely.

3. What distinguishes "monorepos", in my opinion, is that this was an innovation at either Google or Facebook I think (not sure who was first), where they realized exactly what you point out - it makes it really hard to do refactorings across lots of dependent projects in different repos. So they decided to keep everything in a single source control repository, with a single commit history. But in order to do that and have things be sane with lots of teams and thousands of developers, they needed to invest a ton in custom tooling to make these giant monorepos usable by individual developers and teams. E.g Facebook has Sapling, Google has Piper, and there are open source tools like Lerna for JS monorepos.

So, in my experience, just having everything in a single repository but without any special tooling (because you're small enough to not need it yet) is just a repo. Monorepo IMO implies that you've grown to the point where it's difficult to keep everything in a single repo without special developer workflow support tools.

All that said, I definitely agree with your main point - a lot of companies can just keep everything in a single repository a lot longer than they think they can, even without special tooling beyond some separate build targets.


Yeah, I've found it surprisingly difficult to get plain vanilla medium-sized companies to adopt obvious, time-tested best practices. I'm not talking about "Google engineering practices" but basic table-stakes practices like using source control and a bug tracker. So-called "Joel Test"[1] items. The most common excuses are: "We don't have time/money to do infrastructure/process, we need to write shipping code!" and the usual "We've always done it this way".

1: https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-s...


It is honestly baffling, totally baffling to me that in this day and age there are companies that don't use source control or a bug tracker (but trust me, I believe you).

With these companies I think it's just best to walk the other way. Some of them may state that "yeah, we know we need to improve our engineering practices, so we're willing to learn" but they usually just have so much of a different mindset of what it takes to actually run a software company that it's just a waste of time. There are many more companies that have a modicum of understanding about what they're doing.


These rarely are "software companies". They're companies in other industries, that happen to need some software. Sometimes it's a pretty plumb gig: Good, but not great, pay, often in relatively lower cost-of-living areas, a relatively light workload, a good amount of autonomy as one of a maybe a handful of software devs, and in their blindness to good practices (like source control), they're also untouched by common bad practices in software, like whatever bastardized version of agile/scrum that your bosses heard through an extremely lossy telephone game.

But there's also the bad: Software isn't the company's focus, so you aren't the company's focus. That means no "Senior FAANG" salaries, no best practices to keep things sane, and often you find yourself working on a codebase that was originally hacked together in a week by a chemist who may or may not have been deliberately huffing reagent fumes.


Tests, comments and code reviews are not something unique to Google. They're commonly accepted practices. There might be some dark corners, just like there are people who do version control with the ctrl+c, ctrl+v technique, but that's not the norm. I don't think that many people would argue against basic software development rules. However, being Google is much more than writing tests and doing code reviews.

Being Google means having a team which writes source control management system for you.


One conspicuous omission among the ex-Googlers here is reflection on killed products like Google Wave, Plus, Glass, etc. For many of those, the [whatever] was the gross imbalance of Eng owning the product while ignoring the userbase.

What ex-googlers often fail to grapple with is the product lifecycle (how short it may be) and the value of having diversity in the loop of product testing. Google is designed to be a safe place to focus, and that’s not what the real world is like outside the plex.


Actually I think it is never-Googlers who have the wrong perspective here. The fact that Google constantly produces and destroys products demonstrates that it is extremely easy for that company to churn out code, and validates their software development methodology. It's incredibly easy to just dash off a product building on their gigantic foundation of source code, infrastructure, and launch process.

The fact that Plus and Glass got canceled and Wallet has been canceled sixteen different times is merely a consequence of the fact that leadership and product are often led by imbeciles. That's an organizational problem, and I hope nobody is out there cargo-culting Google's org (even though I know they are, with OKRs and Perf being widely copied).


Exactly, Google engineers deflect user issues and product failures to the leadership and non-engineers. That happens in any large team, but Google has sweetened the situation for engineers to keep them focused on engineering rather than the larger consequences. E.g. credit cards are just fine, nobody actually wants to see ads, etc. It’s the user’s fault for failing to see the esoteric details behind the thing.


Google's penchant for killing promising products is 100% the result of poor incentives. People are incentivized to launch challenging projects, but they are generally not responsible for the bottom line (which is going to be dwarfed by Search Ads revenue anyway) or for user happiness & brand loyalty (which is challenging to measure). As a result, lots of promising and exciting products are brought to market and then killed, as the easiest way to bring new products to market is to cannibalize the stuff your predecessors did and show how great your alternative is instead.


I'm not sure why your comment was previously downvoted. I've often heard, and it's not hard to find these comments from ex-Googlers on HN, that Google's "promotion-oriented development" is one of their biggest factors in some of their cultural shortcomings. That is, launching a big new product is seen as one of the best ways to get promoted, while working on the little nits (which in my experience, especially with some of Google's enterprise products, can languish for years, even though they can be really important but "boring" issues to fix) is not seen as high-value work.


Clearly there is a missing link in your experience: companies where you don't have control over a lot of the things you think are needed, and where the company wants to push a product as fast as possible and cut as many corners as possible.

More and more companies will be like this to cut costs.


I think it's also the ability to have a deep bench of coding talent who just get to work on the toolchain. Most companies ration that talent to the product, shipping features that drive revenue.


Off-topic for this thread, but one of the most poignant quips I remember about Google culture was that the performance-review process was really good at rewarding hard, challenging work that didn't produce much value and not very good at recognizing work that produced lots of value but was not astoundingly difficult. I think you were the one who first noted this.


I recently explained Google's perf process to an employee of the US federal government, and was told that the performance review and promotion processes in the government were simpler and less wasteful.


Is the government getting good results out of their process? Remember when a bunch of ex-Google engineers had to step in and save healthcare.gov? If their simple promotion process works, why didn't they curate that talent in-house?

People at L4 might not really like Google's process, but if some Distinguished Engineer shows up at your design review you're pretty much guaranteed to get some sort of valuable feedback. That is not a given in other organizations.


Good people don't work for the government because it pays poorly, not because its promotion process is wonky. The usual strategy in any high-paying field for government-track people is to work there for a few years to build credibility, then transition to being a consultant for a 100+% raise (or move to private industry completely).


Wasn't healthcare.gov outsourced to contractors in the first place? I'm not sure that government actively maintains much in the way of dedicated employees or teams to build stuff like this.

The problem lay with picking the cheapest contractor bids. If anything, the FAANGs should establish consulting arms for this kind of work.


The performance review process has a small impact on salary.

The promo process is not based on value or difficulty, but on the size of the organization that one is running. This is also true for higher level ICs, except they do not manage people, but rather manage/lead projects (which then have a certain amount of people involved).

Here's a rough breakdown:

  - L4 -> 1 person
  - L5 -> 1-3 people
  - L6 -> ~7 people
  - L7 -> ~25 people
  - L8 -> ~70 people
The approximate 3x difference between the levels is also found in other organizations, for example in armies: division ~= 3 brigades, brigade ~= 3 battalions, battalion ~= 3 companies, company ~= 3 platoons.

Misunderstanding this is the source of almost all frustration with the promo process. This process is designed to build and expand the organization, not reward awesomeness. There are of course deviations from the simple schema I listed here, but this is the hard reference point.


Perf feeds into promotions, which are the real way to raise your long-term salary (both inside and outside of Google).


This is a terrible process, unfortunately. Raising the salary should be related to the usefulness of the person to the company, and not the breadth/impact of their work. This leads to terrible things like gaming the system to get high-impact/leadership projects in order to get raises, which comes with huge side effects, like projects getting abandoned fast and being deprecated in favor of new shiny promo-bringing things.

But this is not just a Google specific issue, and it is quite widespread in the industry. Google however suffered from this especially due to its obsessive culture of pay-for-perf and by ignoring simple facts:

- inflation means that your salaries should rise regardless of performance. If you only trail the market by adjusting salaries when the market changes, then you are 1 year late (at least). This isn't a problem in an economy with low inflation, but is a huge problem in one with much higher inflation.

- there is a significant number of people needed to maintain projects that won't show large impact. Those people need to be at the very least recognized and compensated.

- making new products is great, but it requires huge amounts of resources to do at Google scale from the get-go. A wider strategy is much needed, which Google obviously lacked for almost a decade.


Perf is almost irrelevant for promotions beyond level 5.


That was not my experience. A long run of good perf scores is clearly not sufficient for a promotion at that level, but it is necessary.


There's a lot of external complaining about perf at Google. My experience is that most of these complaints are wrong. I've personally had two reports fail to get promoted to L6 off of projects that were very difficult and executed well but for various reasons did not have the impact that we expected.


The problem is at the VP level, not the L6 level. It's not that impact isn't considered, it's that impact is relative to the current org goals of the moment, because it's evaluated by your peers that have presumably all bought into the current org goals of the moment (if they haven't, they will probably be fired or sidelined soon). However, there's very little feedback between things users care a lot about and things executives care about. You're rewarded for doing things that your VP deems important. Your VP is almost certainly not going to sweat the small stuff (although I do know a couple that try). It's impractical for someone with 2000 reports to keep up with every little bug in their product area, and they would be a terrible micromanager if they did. So what usually happens is that they call out the few annoyances they happen to see when using the product, everybody in their org jumps on fixing those because that will get them promoted, and everything not noticed or not specifically called out by a VP languishes.

How would I fix it? Not get to the point where the shots are being called by people with 2000 direct reports, for one. Software has distinct diseconomies of scale, where you have a much smaller loop between "Notice a problem. Identify who can solve the problem. Solve the problem" in a small organization than a big one. But that ship has sailed.

Failing that, I think orgs need to adopt quantitative measures of impact (e.g. X tickets closed, X customers helped, X new sales generated) along with backpressure mechanisms to ensure that those metrics are legitimate (e.g. you can't just create new bugs to solve them; you can't just help customers only for them to need to return tomorrow; you can't generate new sales that are unprofitable).


I do agree with this criticism. There are impactful things that go unrewarded because of org mismatch and that's dumb. My team has been affected by this very problem and it is very frustrating to look at a list of things that I know to be very valuable and be told "don't do that, it is misaligned with org priorities."

But this is "some impactful stuff is unrewarded" rather that "hard stuff that isn't impactful is rewarded", which was the complaint in the post above mine.


The "hard stuff that isn't impactful is rewarded" is relative to the overall marketplace, including conditions outside of Google. Starting a new chat client that replaces Google's other half dozen chat clients may be challenging, but it has little benefit to either users or Google.

Good entrepreneurs go into markets where the existing alternatives suck, and make them radically better. Most Google executives and PMs (most executives, period) go into markets that their company and all of their competitors already have pretty decent alternatives for, so the true impact is pretty limited.


I miss you too, raldi.


The tools, design and manpower needed to build a skyscraper are different from those needed to build a 1-story wood house. It's not that the ones that build the wood house are failing to study and adopt the ways of their extremely successful competitors.

Now, some of the things you say, like unit testing and user data encryption, are ones that I've never seen associated with the "We're not Google" mindset, so maybe people have started using that phrase for anything now.


"were not google" is usually good for things where people are using cargo cult. I saw at one company that went open floor plan because google did it. No one was happy about that. Retention became very low and everyone bailed out. Emulating google does not fix process and management issues. As what may be at google for a good reason may be an utter failure at another company. There are things all shops can adopt that google does that would help them. But many of the ones I have seen adopted were little more than showy garbage instead of the things that would actually help.

Also sometimes you just need a simple tool to get something done. As engineers we like to build things, so sometimes we make it way more complex than it really needs to be. For someone like Google that may just be fine to do. For others a minimum viable product may be in order. Do not worry about optimizing for the 3-million-users-per-day case when you have 10 total users a month. Add logging and keep an eye on it. Then worry about whether you need to scale. Building good scale takes time and thought, and many times you do not need that at all.

As your company/group grows you will take on more and more of the things 'google does' because you will need to, or you will go nuts chasing everything. You could probably even make stages out of the different times to do/evaluate things. To do it early could actually harm what you are doing. You need to evaluate what you are doing and why. Just copying someone else does not always lead to a good outcome and you could be wasting effort when you could be making product.


> "were [sic] not google" is usually good for things where people are using cargo cult

It's no different to cargo culting. That should not be the reason for not doing something, any more than the opposite should be a reason for doing it. Just see if the practice makes sense in your context and decide that way.


"We are not Google" is the answer to "I saw on that blog here that we should do X", "everybody is doing X, we should too", or "you have to follow this good practice here" where the practice is only "good" because it's hyped.

Those kinds of demands happen exactly because they saw them at Google, and an outright refusal is exactly how they should be dealt with. Once the unreasonable person is cut out, you can look at your context and decide what's the best way to solve the problem.


I think you wanted to reply to the same person I replied to? Since I'm saying basically the same thing you do I believe


I'm not really talking about artisanal 3-man software shops, I'm talking about mid-sized companies with thousands of engineers, who don't realize they are already larger than and facing the same problems as Google was when they started adopting these practices. And to be clear, rejecting something as proven as pre-commit code review is not only to reject the example of Google and many other very successful enterprises, but also to ignore decades of developer productivity knowledge before Google existed. It's almost like the fact that Google adopted a long-standing best practice makes modern engineers reflexively revolt against those best practices. This can only be seen as a structural advantage for Google.


> I'm talking about mid-sized companies with thousands of engineers

Can you name such mid-sized companies with thousands of engineers? If you hit a headcount of thousands of engineers, you are not mid-sized anymore.

Theoretically, Google does 100 things right and pays for those 100 things, but Google also has tons of cash; if Google doesn't release a product in Q1, no worries, they will release in Q3.

Now consider a startup with 50 engineers: if you don't release a feature in Q1, you might need to stop the project, because the customer with whom you signed the contract just goes away and you will be laying off 5 people.


mid-size with thousands of engineers? Wow, mid-size for me is around 100-200 people :)


Curious how pre-commit code review worked, could you please elaborate a bit?


Every change is reviewed by someone other than the author before it lands in the repo. At google they take this a bit further. Every change has to have been either written or reviewed by a designated owner of the code (designated by subdirectory) and one of the participants must be a qualified user of the languages used in the change ("readability"). And they have technical measures in place to ensure that programs running in production descend exclusively from reviewed and committed code.
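
As a rough illustration of the owners mechanic (a hypothetical Python sketch, not Google's actual tooling; the file layout and helper names are invented):

  # Every changed file must be written or approved by someone listed in the
  # nearest OWNERS file at or above it in the directory tree.
  from pathlib import Path

  def nearest_owners(path, repo_root):
      """Collect usernames from the closest OWNERS file at or above path."""
      for directory in [path.parent, *path.parent.parents]:
          owners_file = directory / "OWNERS"
          if owners_file.exists():
              return {line.strip() for line in owners_file.read_text().splitlines()
                      if line.strip() and not line.startswith("#")}
          if directory == repo_root:
              break
      return set()

  def change_is_approved(changed_files, author, approvers, repo_root=Path(".")):
      """True if every changed file is covered by the author or an approver who owns it."""
      for f in changed_files:
          owners = nearest_owners(repo_root / f, repo_root)
          if author not in owners and not (owners & set(approvers)):
              return False
      return True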

Pre-commit review is common but not universal in the industry. Some shops practice post-commit review or no review. Some believe review consists entirely of quibbling.


Oh, I see - by "pre-commit" it doesn't really imply it in "Git commit" sense - the change is still propagated to others (presumably by committing it as a sort of "draft"), it's just not committed to the mainline - is that correct?

I'm very familiar with CR at other companies, but tbh since most use Git, I wouldn't call that "pre-commit" but "pre-merge", if you will - unless I misunderstood and it really is pre-commit at Google (i.e. the changes are not even _committed_ to the repository - and then I'm confused once again at what exactly that means..)


Google (for $reasons) doesn't do long-lived code branches and doesn't use git, at least for the main repo. So in that context every commit is reviewed pre-commit, but you'd get the same workflow elsewhere with trunk-based development, small pull requests, CI, and automated and human review of all PRs before they're merged.


Right, I know it’s not on Git and hence was my question - and it sounds like this is more about terminology and less about technology. I.e. what Google does in this case is not that different from just “Code Review” in the traditional sense, as most other companies (with good engineering practices) do - reviewing code before it enters production (+CI/CD, as you mentioned).

Edit: as OP mentioned, it does seem to differ in technical sense from traditional CR, in that the changes live only on developer machine, not in source control.


> Edit: as OP mentioned, it does seem to differ in technical sense from traditional CR, in that the changes live only on developer machine, not in source control.

Yes and no. There's a decent whitepaper on Piper and CitC you can find by searching for it (or actually I will :P [0]). As far as Piper is concerned they aren't checked into source control, but the vast majority of development happens in "CitC" workspaces, which aren't source control, but also are source control in the sense that every save is snapshotted and you can do point-in-time or point-in-history recovery, and I can open up a readonly view of someone else's workspace if I know the right incantations, and most of the developer tools treat the local CitC workspace enough like a commit that it's transparent-ish.

[0]: https://cacm.acm.org/magazines/2016/7/204032-why-google-stor...


OK, thanks for these details and the link! Somehow I remember the title, but not the content :-) That'll be an interesting read.


> and I can open up a readonly view of someone else's workspace if I know the right incantations

So that's not just a basic feature?


It's the equivalent of me being able to view the local, unpushed changes you have in whatever directory you git cloned into.

If that sounds somewhat magical, yes, correct.


The normal workflow is to create a changelist and then have somebody patch that changelist into their client rather than accessing their client directly.


> as OP mentioned, it does seem to differ in technical sense from traditional CR, in that the changes live only on developer machine, not in source control.

Which seems worse.


"On the developer's machine" isn't correct.

They are actually saved within a special file-system called "citc" (Clients-in-the-cloud). It saves literally every single revision of the file written during development. If you hit save, it is saved in perpetuity, which has saved me a bunch of times. Every single one. No need for any kind of commit or anything else.

Further, these saved revisions are all fully accessible to every engineer within the company, any time they want.


Yeah, there are no direct translations between git and perforce concepts. The right term within Google would in fact be "pre-submit" not "pre-commit". Before a change is submitted in the Perforce-derived flow it exists only in the author's client and isn't really part of source control in the way that git users are accustomed to pushing their branch to origin.

NB: At that company there are also users of git-compatible and hg-compatible tools, but I am discussing the main perforce-derived flow.


Oh I see, thanks! I was under impression Google has migrated away from Perforce towards an in-house system a while ago, but looks like I was mistaken (or do you mean that system is derived from Perforce?). Edit: I guess its name is Piper..

It’s quite interesting/mind-bending to think of work-in-progress that’s still somehow synced between peers (in fact this is one of those “missing nice-to-haves” I wish Git had, and can only be approximated with wip branches..)


The synced-between-peers features are built atop a thing call CitC, or Client in the Cloud. An author's client isn't on their machine, it's hosted in prod.


OK got it, thanks for clarifying it


The real problem with code review is that if people don't do it/just hit sign off, it's worthless. Your whole company has to believe.


Pre-commit means before committed to the canonical repo, not before commit locally.

The SPDK project has an elaborate pre-commit review and test system all in public. See https://spdk.io/development . I wouldn't want to work on a project that doesn't have infrastructure like this.

Even mailing lists with patches are really a pre-commit review system, as are GitHub pull requests. Pre-commit testing seems more elusive though.


Or, alternately phrased:

As a company grows and matures, their software development processes evolve to meet the business needs.


I just wish more places would adopt `third_party`; I would also love reproducible builds, but I'll settle for third_party.


There is a risk of selection bias here. The companies that run into [[whatever]] are the ones that made it far enough to have run into it. What you're not seeing are all the companies that tried to do what Google does at scale, built a complex code base that doesn't serve its customers' needs and can't innovate fast enough, and are now dead.


> pre-commit code review

Unless you're referring to automated precommit hooks, this sounds baffling. What's wrong with reviewing pull requests? What if I want to push a WIP while I switch to another branch, I still need a review? Is the final PR reviewed again at the end?


What they mean is code review prior to merging into what you'd call main or trunk or master or release, not for committing your WIP changes or whatever (unless you want those merged at that stage).


That's a git user's perspective and Google doesn't use git or anything analogous. Under their system, and generally under Perforce, it is never necessary to "push a WIP" because your client just contains whatever edits it contains. You never need to manually checkpoint, stash, or commit. People with multiple changes in flight will usually use two different clients, one for each change, although that is not strictly mandatory and in the perforce model you can have disjoint sets of files in multiple changes in the same client.

Anyway, TL;DR, the problems you suggest are git-specific and one solution to them is not using git.


Readability doesn't help Google or anyone else, it's a pure "inmates running the asylum" artifact.


I'm sorry are you saying Google invented multi-tenant services, unit testing, or user data encryption?

I'll give you "pushed the WEB industry to have transport-layer encryption for the entire industry by default".

I'll even give you "code reviews".

But not the first 3.


label the tradeoffs?


"Readability" works terribly when your company is acquired and your team enters all at the same time.

Google has (or had ~10 years ago) a thing called "readability" for each language, where in order to be allowed to commit code to the central "google3" repo, you needed to have written some large amount of code in that language and needed to have a readability reviewer sign off on your code. The process is designed for slowly on-boarding junior people into a team, and introducing them to Google coding style and practices. E.g., the senior, mentoring folks on the team do the reviews and bring the new person up to speed. I imagine it must work well in that context.

However, this breaks down when your entire team is new. How do you find somebody to review the code? All several million lines of the product that was acquired? Especially when it is written in multiple languages.

So we were basically locked out of the main corporate repo, unable to do anything productive. We finally figured out that there was a paved path with a git repo used by the kernel team (and android?) that had none of these hurdles, where we could put our code and get productive immediately.


"Readability" is very much still a thing. It's a mess and would be one of the worst things to take from Google. If you can't enforce the code style you like through autoformatters and linting, it's not worth enforcing.


I kind of disagree in the sense that readability indirectly forces someone who has been at Google for a while/ is more experienced to have to sign off on new people’s code. Without it, you could have some very junior members with OWNERS reviewing other very junior members’ code.

And there is more to style than just linting, IMO. For example in C++ there are some complex macro-based test predicates that are hard to learn and use but which greatly simplify/improve on naive testing. Part of the point of C++ readability is that people who understand this stuff teach new people how to use them, or at least introduce them to concept, during code review


> I kind of disagree in the sense that readability indirectly forces someone who has been at Google for a while/ is more experienced to have to sign off on new people’s code. Without it, you could have some very junior members with OWNERS reviewing other very junior members’ code.

Exactly. It is very likely that a lot of junior engineers will be working with other junior engineers, and they will in fact have the most specific knowledge of the part of the project that they are implementing. And human nature makes it so people are afraid of being judged by their "superiors". Readability breaks that barrier, guarantees that a more senior engineer will be involved, and teaches Nooglers the ropes of writing readable, maintainable code.


I dunno, I am pretty sure I got Java readability the second month I was at Google and was already in the OWNERS file.

I was a readability reviewer and most of the readability CLs were the first project a person worked on at Google, often rather unnecessary but redone strictly to meet readability requirements (largely new code, more than X lines, etc.). I would go back and forth for quite a while to turn 1000 careless lines of throwaway code into 50 lines that were actually good, but I basically had to grant readability after that one interaction, and it never felt great to me.

The most hated readability process at Google was Go's process (at least in the early days; k8s is obviously not using it), but I think it was actually one of the best. It took me a long time to get Go readability, but after going through the process I feel like I'd write the same Go code as anyone on the Go team. When I look at people's open source projects I think to myself "don't they know that that Simply Isn't Done?" But of course they don't; Go readability can only be experienced, not explained. People didn't like that process, and I am sure I said nasty things about it at the time, but in retrospect I really like it.


As someone who quit Google in large part because of all the stuff like readability that I ran into there (red tape everywhere in sight, low productivity due to process, zero urgency bc everyone is fat off the money-printer, no deadlines for the same reasons, etc.), I was about to strongly disagree with you, and write yet another excoriating take on why readability is AlwaysBad(tm) and etc, etc. I did already snark elsewhere here...

After taking a walk and reflecting, though, I'm remembering something that my manager said to me when I gave notice. Google is not for everyone, for a lot of reasons, and especially a lot of people who came up in startups really have problems (which is ironic since so many startups come out of people leaving Google). How you feel about readability may actually be a pretty good test of whether you will fit in at Google in the current era: it's not a small, scrappy company anymore that gets shit done quickly using whatever tool is most efficient RIGHT NOW and ships it as fast as possible to see if it gets product/market fit. It's a behemoth that runs one of the most prolific money-printing machines that has ever been built, and fucking that up would be a DISASTER. It'd be better to have half the engineers at the company do literally nothing for 10 out of 12 months in the year than to let someone accidentally break the money-printing machine for a day while they figure out how to fix it.

And obviously, it's better if everyone is productive even as they're shuffled around from project to project (which they will be, a lot), which means that you want as little "voice" as possible in their code. At a lot of companies you can tell exactly who wrote a line of code just by the style (naming, patterns, etc.), without even checking git blame, but at a place like Google individual styles cause problems. So the goal is to erase as much individual voice/style/preference as possible, and make sure that anyone can slot in and take over at any point, without having to bother the person that originally wrote the code to explain it (they might be at another project, another division, another company, and even if they're still at Google there is a very strong sense that once a handoff is complete, you should not be bugging people to provide support for stuff they've moved on from).

In that sense coding at Google is a lot closer to singing in a choir than being the frontman in a band: you need to commit to and be happy with minimizing what makes you unique or quirky, rather than trying to accentuate it and stand out. Some top-tier singers just can't force their vibratos down, or hide their distinctive timbres, or learn to blend with a group, and are absolute trash in a choir; it's not their fault or some ego failure, it's just that there are some voice types that don't work in groups, and that's fine, you just don't add those people to a choir.

At least below director-level (or maybe L7 equivalent on an IC track), Google doesn't need individuals to come in and shake things up, bust apart processes and "10x" a codebase. That's startup shit, and even if it might sometimes be worth some risk for the high payoff, it's too dangerous for them to allow for the thousands upon thousands of (still quite senior, sometimes 15+ years of experience) L4 or L5s at the company. The same process that prevents that from happening also makes sure that the entire machine keeps humming along smoothly. If being a part of that smoothly functioning machine while painting within the lines is exciting, then Google can be one of the best places on the planet to work; if you would be driven crazy because you can't flex and YOLO a prototype out to users in a couple days, then it's really not going to be for you.

I'm in the latter camp, I couldn't handle almost anything about the process and was so desperate to move quickly that I started talking to investors to line up my own funding a few months after I joined, but even as a quick-quit (<1 year), I have the utmost respect for the company and the people, and highly recommend it to almost everyone who applies there (the exception being people like me who TBH should just be doing their own startups). Everything they do has a pretty well thought out reason, even if I don't like following those rules myself.


Readability is far, far more than formatting and linting. I hate the current system a lot, but no linter or autoformatter knows if an identifier is appropriately named or if a function is properly decomposed.


Autoformatters and linter presubmit checks are used extensively at Google. Readability has nothing to do with those. It exists for everything else - ensuring that code is structured properly and idiomatically. Readability talks about structuring code, using the proper tools and containers where possible, and more. Everything from "that method should be named differently" to "you can use this function to do that thing you just wrote code for" to "this could be done with Immutable containers if you A, B, and C" and so much more.


The way it's supposed to work is that acquired teams get lots of support on integration, including readability. This helps your team get integrated into writing Google-style code. Not sure why that didn't work out in this case?

(Left Google a year ago)


This. Someone dropped the ball.

There's a form to get your corner of the codebase exempted from readability temporarily. This gives your team a quarter or two to build up readability.


A quarter or two isn't going to be enough for a drastic realignment of a large codebase. It's a start, but only a start.


Readability is usually applied in an incremental way. You don't have to fix all the code to make it conformant. If there are concerns about consistency, the style guide actually encourages people to prefer consistency over its own rule.


The 1-2 quarters is just to "realign" the SWEs, not the codebase.

Old code can usually be left as-is unless there some particularly egregious security hole or the like.


"Readability" requirement is still a thing, but it isn't for every single piece of code in G3, and I haven't worked close enough to it to think about the exact mechanism of how it applies.

My previous team - pretty much any python submission was hitting me with a python "readability" requirement, and it was a bit painful, because only a single person in my entire group of teams (roughly 15 people total) had the "python readability expert" status. My current team - already submitted quite a few significant C++/TS/Java pieces of code to G3, and not a single "readability" requirement triggered.


There’s now an explicit safeguard against that.


In my opinion, the monorepo, global presubmit, testing culture and the Beyoncé rule (if you liked it then you should have put a test on it) are basically a superpower for infrastructure teams. Without these things it'd be utterly impossible for certain kinds of infra refactors to be done, and many more would be very very painful.

In the open source world I see a fair amount of "tests are always red, don't worry" and "we can never edit this interface because who knows who it breaks." These problems aren't intractable at Google.

This approach does have its own set of challenges and I do suspect that the monorepo has contributed in some ways to Google's inability or refusal to maintain some older products. But holy cow the ability to do something like move everybody in the company to different vocabulary types is powerful.


On the other hand, most weeks someone else breaks my system and I have to track down the culprit.

Google's emphasis has always been to make things easy for library developers, at the expense of library clients. For people who value backwards compatibility over long timespans, Google's practices could be better.


I don't think it's fair to classify code review and test coverage as "the Google way." We should evaluate more by the unique things Google does or the things they specifically invented (not code review and testing).

And of course volunteers working on open source projects have lower standards. Let's instead compare Google to companies which say "we aren't Google."


What I am describing is not code review and test coverage. What I am describing is the ability to run all of the tests for the entire company in one go so you can safely make absolutely massive changes to the codebase.


So having a monorepo?


A monorepo and TGP plus a culture where changes are okay when they don't break tests.


Engineering culture has somewhat collapsed at Google. The things that made engineering great didn't really survive the last couple rounds of internal coups.


Interesting -- having not had any experience inside Google I'm having difficulty painting a picture, could you give an example or two of some of these internal coups?


I thought so too, but since then I moved over to Cloud and things are a LOT better.


I always hear about Cloud having the worst culture though? Has that not been the case for you?


People work hard in cloud but there are no MBAs in sight. It’s all very technical work, often very bottom up driven.

A lot of the overall goals of cloud are more ambitious than AWS offerings. Reliability is prized more than it is in other areas of Google as well, because customers are so technical and often notice.

Not a place to coast, but I'd say most people who want to get good reviews and a fat bonus do a solid 45 hours a week.


> A lot of the overall goals of cloud are more ambitious than AWS offerings.

In what sense?


Not at all, it's a breath of fresh air. It's much less of a check-Buganizer, check-email, write-code, push-CL cycle. Work is very project focused with high flexibility; my current team isn't even pushing to g3 and is using different build systems entirely, just because we wanted some more flexibility, for example, and it doesn't matter as long as we are getting results.

The problems are a lot more technical though and I don't see a lot of L3s being able to work in the environment as it requires a lot more intuition and experience.

I usually work 45 hours a week, but I don't mind it. Plus I'm 100% WFH here because my management isn't dealing with in-office BS.


Some teams in Cloud suck but the core engineering teams have some top talent and solve some very hard problems. Keep in mind Borg and Spanner are both “cloud”, but so are many field sales teams with an average tenure <2y


Nice bait


The “policies don’t scale well” section is inaccurate.

There are plenty of policies floating around that don’t scale well, and plenty of migrations that are still forced on internal users rather than handled magically by the migrating team. The reality is that Google is such a big company most of these fly under the radar of whichever person actually enforces these policies, and it becomes a whole thing to escalate up to whoever enforced them, and then there’s potentially a political battle between whatever director or VP is in charge of the bad actors and the enforcer (ideally they get away with not allocating HC to the internal migration and amortize it across all their users, so that HC can work on flashier stuff).

I think one reason Google has a proliferation of bureaucracy and red tape is that they do not “review” postmortem action items very formally. They are only reviewed as part of the larger incident postmortem review process and the tooling is way overengineered such that performing that review beyond a perfunctory once over isn’t easy to do. So you end up in a situation where “we need to do something” and whichever person handled the incident has to suggest a way to make sure it doesn’t happen again - the easiest of which is to introduce some CYA process. The other reason is that non-coding EMs introduce processes to show some kind of impact on their team.

Also, the existence of the monorepo, global test runs, forced migrations, etc makes it so maintaining a mothballed project incurs some inherent engineering costs - IMO it’s a non-negligible reason Google kills products that could instead simply exist without changes. It also makes it so Google doesn’t really “version” software generally speaking.


DISCLAIMER 1: Current Googler here, but opinions are my own.

DISCLAIMER 2: I think from a hands-on-keyboard SWE perspective there is a lot of useful stuff here. What you mentioned about Google's culture of killing products and such I am not gonna talk about.

I recommend the chapters about testing first and foremost. Among all the codebases I have seen (both open source and proprietary), Google's tests are the most comprehensive and reliable. However, if you are in a startup-like environment you should pick and choose and not try to follow every single principle listed, as they could sink your velocity drastically in the short run.

Other interesting points (IMHO) are Monorepo, Build System, and Code Reviews.

As for the monorepo, I discovered I'm a huge fan of it, although I was skeptical at first. The sad thing is that it's a rather niche practice and tools like Git don't play ball very well (i.e. each time you pull you have to retrieve changes for the whole codebase, even files you never saw or heard of, managed by another team). I think there's no nice off-the-shelf offering for running monorepos out there. However, not having to fight with git submodules, library versions, ... is great. If the change I am submitting breaks something else in the company, I am immediately aware and so can act accordingly (e.g. keep the old implementation alongside the new one and mark it as deprecated so the other team will get a warning next time they do anything).
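
For example, a minimal made-up Python sketch of that keep-the-old-path pattern (function names are illustrative):

  import warnings

  def fetch_user_profile_v2(user_id):
      """New implementation that callers should migrate to."""
      return {"id": user_id, "source": "v2"}

  def fetch_user_profile(user_id):
      """Old entry point, kept alongside the new one during the migration."""
      # Teams still calling this see a warning the next time they run their
      # code or tests, instead of a surprise breakage.
      warnings.warn(
          "fetch_user_profile is deprecated; use fetch_user_profile_v2",
          DeprecationWarning,
          stacklevel=2)
      return fetch_user_profile_v2(user_id)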

The build system is a bit more controversial. I learned to love blaze/bazel, but admittedly, the open-source version is a bit messy to set up. Additionally, being so rigorous about the build rules felt like a massive chore at the beginning, but now I appreciate it a lot. I can instantly find the contacts of all the teams that use a build rule I declared, and hence warn them about bugs, etc. I can create something experimental with private visibility so only my team can use it, and only later expose it to the wider world with just a one-liner.
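
A hypothetical BUILD snippet for that visibility flow (Starlark, with made-up package paths and target names, assuming the open-source rules_python rules):

  # Start private to the owning team; widening access later is a one-line
  # change, e.g. switching the list to ["//visibility:public"].
  load("@rules_python//python:defs.bzl", "py_library")

  py_library(
      name = "experimental_ranker",
      srcs = ["experimental_ranker.py"],
      visibility = ["//myteam:__subpackages__"],
  )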

Finally, code review, AKA Critique. Google has the best review tool I've had the joy to use, hands down. It's clear about what is happening and what stage the review of a particular section/file is at, and it is focused on discussion. The evolution of each change is easy to follow along. These are things I really miss when using the GitHub/GitLab PR view; that tooling is incredibly confusing to me. Luckily (I am not affiliated in any way) an ex-Googler (I believe) is working on an alternative that works with GitHub (https://codeapprove.com/).


I am a big fan of Anki, and for various reasons I wanted to build it on a machine I have with an uncommon architecture (it has a graphical desktop). I have all of the components (Rust, TypeScript, QtWebEngine, etc.) installed and working. I invested some time in trying to convince Bazel that the required dependencies existed, to no avail. Rules broke left, right and centre, and every time I found a solution, other things broke. I think it insisted on pulling stuff from the internet, including definitions of other stuff I needed to change. I can't remember much more than that, as I gave up and haven't thought about it much since.

Thing is, pkg-config would've picked up the dependencies just fine - they were literally all there. I even built Rust from source on my weird machine with the musl variant, before realising musl has some issues on my architecture.

I suspect Bazel may work well inside of Google for infra/server-side stuff (I never worked there). I'm a lot more skeptical about more complex builds, like desktop applications across various platforms. Chrome still uses "gn" to generate Ninja files, and then Ninja to build. For my own stuff, I won't touch it.

I probably wouldn't have commented, except that, to my surprise, it seems the Anki developers have also decided life is too short: https://github.com/ankitects/anki/commit/5e0a761b875fff4c9e4...


Anki's build seems particularly problematic for some reason - both Arch and NixOS have given up on updating their from-source builds and just repackage the first-party builds.


Sadly, first-party builds aren't available for my platform. I suspect the complexity might be in the number of languages they're trying to use simultaneously, plus the complexity of QtWebEngine, which requires a working Chromium port - and outside of {x86-64, arm64} that's not a given.


As I mentioned, Bazel is a bit messy to set up. I think it's still doable and it brings a lot to the table, but it requires an upfront investment in learning Skylark and/or fighting with the various rules provided. Blaze (the internal version) is just great, unless you want to do shady things that you most likely shouldn't do anyway.
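For a flavour of what the Skylark/Starlark investment looks like: most of it is writing small macros and rules in .bzl files. Here's a minimal, hypothetical sketch that declares one py_test per test file (recent open-source Bazel expects py_test to be loaded from rules_python rather than used as a native rule):

    # defs.bzl -- hypothetical helper macro
    def py_tests(srcs, deps = []):
        for src in srcs:
            native.py_test(
                name = src[:-len(".py")],
                srcs = [src],
                deps = deps,
            )

    # BUILD
    load(":defs.bzl", "py_tests")
    py_tests(
        srcs = glob(["*_test.py"]),
        deps = ["//myproject:lib"],
    )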


> I think there's no nice off-the-shelf offering for running monorepos out there.

I think Git works perfectly well for 99% of monorepos, though. It just doesn't work for the massive ones. I think it's a perfect example of something most codebases shouldn't follow Google on.


Maybe if you only allow shallow clones/pulls. I'm not sold on vanilla Git handling a monorepo well. If anybody in the company pushes a huge blob, it messes things up for everyone else, and so on. Git plus some modifications, perhaps.


> If anybody in the company pushes a huge blob you mess up everyone else and so on

Again, this is really only a problem at larger scale. The segment of people that pull this large blob at Google may be bigger than the entire engineering team at other companies.


Thanks for the CodeApprove shoutout! If anyone here in the comments wants to try it out, just let me know!


No need to thank me. I haven't tried it yet, but we chatted months ago in another HN post, so once in a while I check it out. Any plans to include GitLab as a first-class citizen?


No plans for Gitlab at the moment, but maybe one day!


Personal story from ex-Googler: after getting exasperated at yet one more internal tool launched with great fanfare and almost no testing, let alone documentation, I suggested to the Internal Tools group that we have a contest for BEST internal tool.

Not "worst" since that would be too hurtful. The hope was to recognize excellence, motivate people to be better, and maybe shame the people whose tools received no votes. This suggestion was summarily dismissed.

There were, indeed, some truly excellent tools: Dremel comes to mind. And lots of tools that were nearly unusable.


I left Google around six months ago. I worked in medium and small companies, currently at a startup with ~30 devs.

I would say the vast majority of it works well, some you just don't need until you hit scale (here, scale in the number of developers).

For example, policies work if you have <20 engineers, probably don't really work otherwise.

Blaze/Bazel I miss a lot. Just wrangling the dependencies between shared packages is a mess (though we might just suck at configuring Poetry - at any rate it's not intuitive). Building and deploying is much more involved.

Another thing I miss is code review the Google way. Google asks that you review within 24 hours, reviews are done per (the equivalent of a) commit rather than per PR, and it strongly advises keeping commits small. The GitHub PR workflow is terrible in comparison:

1) It nudges you into batching commits into large PRs

2) Is the PR message informative? Is each commit's? What about squash and merge - how many people edit that message? At Google, part of the code review is reviewing the commit message. When you squash and merge, that happens post-approval, so you can't even do that.

3) Hidden conversations? What the actual fudge

4) How many comments have I not addressed yet? For that matter, how many PRs are waiting for my attention and when were they sent?


Of all the Google dev tools, I miss Critique the most. GitHub is terrible at giving enough context to efficiently review a PR on a second or third pass.

I think coupling commits with review progress was a mistake.


Shameless plug but if you're missing Critique and working on GitHub, try CodeApprove (https://codeapprove.com) which brings as much of the Critique magic to GitHub as possible.


Off-topic, but aren't you scared that GitHub improves its PR workflow and puts your product out of business?


Yes, that's a real risk! CodeApprove is not (yet) anyone's full-time job, so it's also an acceptable risk.

However I think the biggest issue with the landscape for code review tools is that 99% of developers use the default system that ships with their VCS. So on most teams, that's GitHub. People should be actively choosing their code review tools just like they choose their VCS, IDE, CI/CD platform, Issue Trackers, etc. It's one of many tools that makes up your SDLC "Stack".


They haven't done it yet. What's been stopping them?

As long as CodeApprove and Reviewable remain niche they've got nothing to worry about.


This seems very similar to GitLab's MRs. Does anyone know all of them well enough to highlight their strengths and drawbacks?


https://reviewable.io is the earliest full-powered Critique alternative for GitHub.

It supports some cool things Critique doesn't/didn't, such as reviewing multi-commit branches (also across history-rewriting force-push cleanups), and indicating exactly the nature of your comment (just FYI, or you want this to be changed before you'll give your approval).

(I was an intern in the initial making of Critique, and subsequently got interested in finding an out-of-Google alternative. I contributed a bit to other review tools such as ReviewBoard, and actively used Gerrit, Crucible, Phabricator, and GitLab reviews.)

When I was looking for a Critique alternative for my startup, reviewable.io had just appeared and ticked all the boxes, and we've used it successfully for many years. The drawbacks are that it's GitHub-only and isn't free software.


Small commit reviews sound miserable. You have no context of the rest of the branch unless you look for it, and no idea what's in the author's head for future commits (I can imagine some devout YAGNI follower rejecting a commit because a function argument is unused, which the author planned to use tomorrow). As opposed to a whole-branch PR, where I can see the entire feature at once and how it comes together.


For a large feature you work off a design doc or an issue tracker issue (Jira ticket equivalent). If you're going to call the function tomorrow tell your reviewer you're going to call it tomorrow. We're all adults.

Re: not that much to review, it's the exact opposite. When people get 400-line code reviews they tend to nitpick on style. When they get a 100-line CR, they critique naming, organization, and consistency.


> comment on which parts of these work well and which don't?

I don't think there's a visible distinction between the parts that work and the parts that don't. In fact, in most cases each practice has pretty strong rationales. The problem is that when you take everything as a whole, the cumulative complexity and cognitive overhead tend to go wild, and almost no one can understand the whole stack once its original writers/maintainers leave the team.

In fact, this might play a certain role in the Google graveyard narrative; it's not that its engineering culture is bad, but sometimes its standard is too high for many cases, so it's nearly impossible for newcomers to keep it up, especially when you have external pressures that you cannot ignore. Even if you make an eng team of 3-4 people for a small product, they'll likely suffer through tens of migrations/deprecations/mandates over the years.


Readability is hit and miss. It's very nice to have everything written to the same standard; it makes it much easier to navigate through any project. The downside is it's pretty rough for more peripheral teams, or teams working in a language that's a small component of their product. I remember for one of DeepMind's big launches the interface was all in files ending in .notjs, presumably because they didn't have anyone on the team with JavaScript readability. This was 5+ years ago, though, so some of the downsides may have been mitigated.


> Cool. Could someone, maybe an ex-googler, comment on which parts of these work well and which don't?

TBH most of this stuff is transferable and even "common sense" at most of the companies you've worked for. Similarly, Google's SRE book is actually a very good collection of battle-won experience on how ops can keep systems reliable and running.

The book is written in a way that you can easily throw away advice that you don't think useful.


> Additionally, critics of Google may point out that their engineering culture may not be great on its own terms -- every time Google launches a new feature, people post links to the Google product graveyard.

It is personally scary when they develop new products. What if it is a brilliant idea, one I cannot live without? If Google develops it, then I am looking at this stillborn thing, mewling for life when I know its horrible fate.

The trouble here is that Google employees (and perhaps even its upper management) want to believe that they are a company which is an inventor of things. But they are not this at all. They are an advertisement company. Advertisement companies should not and do not want to invent things... inventions are worse than burdens, inventions are these weird alien objects that appear valuable but are quite expensive and do not help to sell ads at all.

So they hawk the inventions like they were freaks in some carnival sideshow to move traffic past their billboards. Until the traffic dwindles (or until they get tired of it). And then they take it out back behind the woodshed and put an end to it.


The inventions are to keep the talent stream coming ... to work on ads.

The inventions are the small tax they pay to pretend to candidates that they could work on inventions when the vast majority of them will be "allocated" to ads.


It's also Google licking the cookie. They maintain a moat around ads by doing just enough to threaten to destroy anyone that gets close to their ecosystem.

Facebook survived because the G+ product vision was so out of touch with reality, and because FB was not part of the anti-compete hiring nonsense, so they managed to poach a lot of good people.


You're telling me this stupid, bizarre thing: that Google's major innovation was an HR process.

That's fucked-in-the-head just enough that you've made me wonder if it's true.


"The best minds of my generation are thinking about how to make people click ads"

https://quoteinvestigator.com/2017/06/12/click/


I doubt they did anything like that intentionally. My impression in my time there was they were constantly cargo-culting themselves. X obviously works, so keep doing X, even if it doesn't look like it makes any sense. And X was absolutely everything.


Mind me asking why you moved on?


You need Shiny Inventions so you can divert the talent stream away from your competitors, more than to actually work on the ads.

I'm sure there's some Shiny Invention Corner in the ads business -- let's call it "AI" -- and some of the top people can be motivated to work there.

But isn't the ads business by its very firehose-of-money nature something that will get on fine with that average level of talent that is sufficiently motivated by cash and doesn't need Inventions?

And isn't the top talent able to make the same money doing interesting things elsewhere? (I keep hearing this is happening with AI, but I hear it on Xwitter so who knows.)


>A lot of other companies get into trouble trying to cargo-cult what Google does when they are operating in very different environments wherein those practices aren't optimal. E.g. different levels of scale.

Any prominent examples?


I could not care less about what Google practices. They operate on enormous scale and have vastly different goals and values.



