I agree, but I might phrase it a little differently. I recommend thinking about corporate stances as actions and interests, not moral intentions. Don't expect a corporation to do things for moral reasons. Trust them only to the extent that their actions align with their self-interest. To be fair, some organizations do have charters and interests that make them more palatable than others.
One takeaway for startups that hope to stand for something even after tremendous growth and leadership changes: you have to build governance and accountability structures into your organizational DNA if you truly want specific values to persist over the long run.
This is probably a good thing -- faith in such structures was never justified.
Any relationship with a corporate entity is transactional in nature. A great deal of effort is often expended to manipulate us into feeling otherwise, but that is all it is.
Companies don't have feelings. They aren't conscious entities with a capacity for guilt or morality. They are, in essence, software designed to run on systems composed of human employees. In a sense they are the original AI agents.
Yes, OpenAI demonstrated one way not-for-profits can be commandeered. Altman appears to be quite astute at gaining power.
Every organizational design and structure has the potential to be subverted. As in cybersecurity, there are many tradeoffs to consider: continuity, adaptability, mission flexibility, and more. And they don't exist in isolation. People are often going to seek influence and power one way or another.
One more thing: the fact that it is hard doesn't mean we should stop trying to build organizations with durable values.
I don't think there are any companies that care one way or the other about taking away your freedom.
Companies are revenue maximizers, period. The ones that aren't quickly get displaced by ones that are.
The simpler test is to stay away from any company that has anything to gain by taking away your freedom. THAT, unfortunately, is most of them.
The depressing reality in consumer tech is that anything with a CPU doesn't belong to you, doesn't work for you, and will never do more than pretend to act in your best interest.
This model explains a lot of what companies do, but not all of it. It is a useful first approximation for many firms.
Still, the conceit of modeling an organization as a rational individual only gets you so far. It works at certain levels of analysis, I will grant. But more detailed predictive models need more complexity. Organizational inertia is real, for one, and one would be wise to factor in some mechanism for bounded rationality and/or "irrational" deviations: CEOs often move in herds, for example.
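To make that concrete, here's a minimal toy sketch (everything in it -- the strategies, the payoff landscape, the inertia and herding parameters -- is hypothetical, chosen only to illustrate the point): a population of pure maximizers snaps instantly to whatever is most profitable each quarter, while a boundedly rational population with switching costs and herding lags behind and stays dispersed.

```python
import random

# Toy agent-based sketch (illustrative only): firms pick a "strategy"
# each quarter. A pure revenue maximizer always switches to the most
# profitable strategy it can see; a boundedly rational firm sticks with
# its current strategy unless the payoff gap exceeds a switching cost
# (inertia), and sometimes simply copies the most common choice (herding).

random.seed(42)

STRATEGIES = ["A", "B", "C"]

def payoff(strategy, quarter):
    # Hypothetical payoff landscape that drifts over time, plus noise.
    base = {"A": 1.0, "B": 1.1, "C": 0.9}[strategy]
    drift = 0.3 * ((quarter % 7) / 7.0) * (1 if strategy == "C" else -1)
    return base + drift + random.gauss(0, 0.05)

def simulate(n_firms=20, quarters=40, inertia=0.15, herd_prob=0.3, rational=False):
    choices = [random.choice(STRATEGIES) for _ in range(n_firms)]
    for q in range(quarters):
        payoffs = {s: payoff(s, q) for s in STRATEGIES}
        best = max(payoffs, key=payoffs.get)
        mode = max(STRATEGIES, key=choices.count)  # most common current choice
        for i, current in enumerate(choices):
            if rational:
                choices[i] = best  # pure maximizer: switch instantly
            elif random.random() < herd_prob:
                choices[i] = mode  # herding: copy the crowd
            elif payoffs[best] - payoffs[current] > inertia:
                choices[i] = best  # switch only when the gap beats inertia
    return choices

print("rational:", sorted(simulate(rational=True)))
print("bounded: ", sorted(simulate(rational=False)))
```

Run it and the "rational" population collapses onto a single strategy while the bounded one stays spread across all three, which is exactly the kind of deviation a pure revenue-maximizer model misses.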
> The ones that aren't quickly get displaced by ones that are.
Theory, meet history. But more seriously, will you lay out what you mean by "quickly"? What does market data show? Has this been studied empirically? (I'm aware that it is a theoretical consequence of some particular market theories -- but I don't think successful financial modelers would make that claim without getting much more specific.)