
It's interesting he decided to go this way rather than put it into a sustainable trust and just trickle money out indefinitely.

I suspect he believes that these causes need shock therapy. To eradicate a disease, you are better off doing it all in one go.

I also wonder if he looks at something like the Ford Foundation and realizes that, in the long run, any charitable trust will just turn into an overstuffed political advocacy group that does little to advance his causes or even his legacy.



Was just talking with some folks last weekend about this in a different context. Open-ended foundations can easily have their missions drift and also become essentially sinecures for an executive director.

Ford Foundation is a great example of what can happen. Olin is a good example of a foundation that was set up to dissolve after some length of time.


Mission drift can sometimes go in a positive direction. The Howard Hughes Medical Institute, for example, functioned primarily as a tax evasion vehicle while Hughes was alive. After his death, the HHMI was in deep trouble with the IRS and sitting on an endowment of ~$5 billion. So it appointed former NIH director Donald Fredrickson to turn it into an actual research funding organization and mend relations with the tax authorities and research community.

https://www.latimes.com/archives/la-xpm-1986-08-11-fi-2620-s...


You may know this already, but both Olin foundations are actually good examples. I believe the John M. Olin Foundation's dissolution plans were specifically a response to the Ford Foundation's drift. The F. W. Olin Foundation (established by John's father) coincidentally dissolved in the same year, but that was largely because it had accomplished its original goal of endowing engineering buildings at colleges and was pivoting to founding a new engineering college entirely.


I don't really know the details, but an organization I was/am involved with did get money from the "Olin Foundation", though I didn't know specifics beyond that. Yeah, one of my fellow board members observed that Olin was pretty much the canonical example of a foundation that set itself up to be dissolved.


I've always wondered about Gates's and Buffett's commitment to giving away their wealth at death. It assumes that the people of the future are more worthy of it than the people of now. Whatever poverty will exist in the future also exists now. I suspect they've thought about this too, hence the acceleration. If anything, addressing the issues now has a chance of reducing the issues in the future.

There's always something to learn from everyone. Elon reiterated one thing frequently - "We have to get to Mars soon because I don't want to be dead before it happens" (paraphrasing). If this philosophy is used for the right purpose, we can get some cool things happening sooner. Recent events also show that there are people who are not interested in being charitable at all, so it's even more of an imperative.


> It assumes that the people of the future are more worthy of it than the people of now

I don't think that is the assumption. The assumption is that people will treat them well for planning to give away their money, without them ever actually having to live without their precious wealth.


This is a weirdly conspiratorial idea to my eyes. Not because people don't have unsavory hidden motivations that they give good excuses for, but because this doesn't really seem to confer any benefit.

The benefit of wealth is your capacity to spend it. If they don't spend it in order to give it away on a future date, they have lived their lives without it.

You can say that they are selfishly maintaining optionality while they are alive, but that's a less biting critique, I guess.


I might spend millions easily enough, but I might struggle to spend millions more on top of that. Bill Gates has a hundred thousand millions. Personally, I probably wouldn't spend it, because figuring out how to spend it sounds like hard work. I think this idea was first explored in the trashy 1902 comedy novel Brewster's Millions, but it is somewhat true.


Yeah, multiple opulent houses seem like a lot of work. Even if I hire people to manage them, it seems like a lot of head-space. I can stay in really nice hotels and eat in nice restaurants for a lot less money and effort. OK, maybe a private jet, or at least NetJets or whatever the current thing is.

At a much smaller scale I've thought about a small city place and concluded it just wasn't worth the effort vs. renting at a nice time of the year.


Not hard at all. His goal is to eliminate unnecessary deaths. 1) stack rank causes of unnecessary deaths (malaria, malnutrition, etc) 2) identify places where people die those deaths because nobody gives a shit about them 3) start giving a shit

In fact, the acceleration seems to be happening because it has suddenly become far easier to identify unnecessary deaths: someone who shall remain nameless here lied about fraud in USAID and cancelled all its funding, leaving millions of people to die unnecessary deaths.

Gates decided to step up and prevent as many of those deaths as possible in his lifetime.


Many things take time. And it may be good to fund them. But outsized contributions don't necessarily make them happen that much faster.


I think the phrase is something like you can’t get a baby in a month if you have nine women.


> There's always something to learn from everyone. Elon reiterated one thing frequently - "We have to get to Mars soon because I don't want to be dead before it happens"

I understand the idea of learning from everyone, even those whose values I strongly disagree with.

But after learning about everything Elon has done in the public sphere, isn't this statement more likely just narcissism than a deep and inspiring virtue?


That doesn’t really matter though? The end result is we’ll get to Mars faster than we’d otherwise do.


I think it matters a great deal! Suppose that result comes at the expense of, say, progress fighting climate change, because the same instincts that led Elon to proclaim his mission of going to Mars also led him to defund and destabilize a lot of science. Then it is not obvious that we ought to focus only on the end result, or, even if we do, that only one set of end results deserves focus.

We may be closer to going to Mars, an extremely inhospitable place, at far greater expense than simply making the world we evolved to survive in a little better.


The way things are going here on Earth (thanks in part to Musk and "Glorious Leader" Trump), we ain't makin' it to Mars before global resource wars completely cripple all our scientific aspirations, including becoming a "space-faring" species. It's all wasted effort at this point, because nobody seems willing to do what it's gonna take to ensure that humanity has any future at all.


I very much would like to see a human on Mars very, very soon, only because I really hope it will be Elon.


He can have other motivations. Between 2020, 2024, Mackenzie Bezos & Laurene Powell Jobs, the deeply unimpressive philanthropy of the Buffett children, and his own divorce, a very rich philanthropist has excellent reasons to aim for the foundation being liquidated in his lifetime, and not handed off to administrators like, yes, the Ford Foundation or Harvard...

(And then, of course, given his enthusiasm for AI, there is a major question of whether 'keeping your powder dry' is a huge mistake - one way or the other.)


I'm an AI skeptic when it comes to business cases. I think AI is great at getting to average and the whole point of a business is that you're paying them to do better than average.

But I think current AI (not where it might be in a few months or years) is absolutely amazing for disadvantaged people. Access to someone who's average is so freaking cool if you don't already have it. Used correctly it's a free math tutor, a free editor for any papers you write, a free advice nurse.

This sucks in a business setting but I could see it being incredible in a charitable setting. When businesses try to replace someone great with something average it sucks. But if you're replacing something non-existent with something average, that can be life changing.

I'm an AI skeptic and I can empathize with his AI enthusiasm given the problems he's trying to address (or at least professes to be trying to address).


> But I think current AI (not where it might be in a few months or years) is absolutely amazing for disadvantaged people. Access to someone who's average is so freaking cool if you don't already have it. Used correctly it's a free math tutor, a free editor for any papers you write, a free advice nurse.

Interestingly, I think AI, if its biggest boosters are correct, will end up being an absolute disaster for disadvantaged people.

The fact is that the vast majority of people in the current world are able to survive by selling their labor. If AI makes it so that, say, 50% of the world's population is no longer able to survive by selling their labor, that leads to massive serfdom, not some sort of Star Trek utopia.

And the thing that is shocking to me is that I haven't seen any (like, absolutely zero) credible explanation from AI boosters of how this dystopian end state is avoidable. I've either heard misdirection (e.g. yes, I agree AI is amazing at what it can do, but that doesn't explain how people will eat if they don't have jobs), vague handwavy-ness, or "kumbaya talk" about stuff like basic income that seems to completely ignore human nature.

I would absolutely love to be convinced I'm wrong, but that would need to start with at least something approaching a rational argument as to how the benefits of AI will be more equally distributed, and I have yet to hear that.


I know a few of the leaders designing and developing Microsoft’s AI applications for the Gates Foundation.

I think you’re on the right track, and, alongside the scale of service (reaching more people and more topics with an average level of advice or recognition), there’s a second component to it: scale of analysis. The newly possible solutions that AI advances have created include more than those famous models that answer broad prompts with art, copy, or code.

They also include focused, sometimes incomprehensible tasks which can only be done at an impactful scale due to the creation of deep learning and advances in compute-inexpensive language understanding, computer vision, and audio analysis:

A network of affordable, durable, solar powered, LoRa meshed audio sensors analyzed by a model to diagnose changes in the biodiversity of the Amazon and other rainforests (via ambient bird and animal calls across thousands of species). Visual analysis done on a cheap camera network estimates herd sizes of larger, silent animals.

A model that analyzes satellite imagery to evaluate major shifts in the industrial use of land, including tracking the national development of solar farms to evaluate nations receiving new energy grants.

A social analysis bot that tracks the rapid introduction of propaganda narratives or intentional agitation by foreign state actors (Russian bot farms), including building a map of associated IPs. Sadly, the social networks basically shrugged when given this data, so Msft gave it to LEAs.

These things are being done at a scale that would be incomprehensible to an organization of people.

Scale-of-analysis tasks are still, IMO, the smartest use of AI today, despite the fashionable trend of GPT and the promise of AGI. A few patterns to spark ideas:

Recognition tasks with a dictionary too deep for human experts to grok when scaled up - like identifying thousands of wildlife species

Recognition tasks with a timescale too rapid or sudden for human attention - Amazon Prime Vision predicting a QB sack in a football game before it happens

Recognition tasks when human vigilance or sensitivity would miss an occasional or slight occurrence - measuring eccentricities in electrical signals, vibrations, etc. to predict the failure of industrial equipment
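As a toy illustration of that last pattern, a trailing-window z-score over a sensor stream is enough to flag a slight deviation that human vigilance would likely miss. Everything here (the window size, the threshold, the synthetic signal) is an illustrative assumption on my part, not drawn from any real monitoring system:

```python
# Sketch: flag readings that deviate sharply from a trailing baseline,
# the kind of "occasional or slight occurrence" a human monitor misses.
from statistics import mean, stdev

def find_anomalies(readings, window=20, threshold=3.0):
    """Return indices whose reading lies more than `threshold`
    standard deviations from the trailing window's mean."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# A mostly steady synthetic signal with one subtle spike at index 60.
signal = [1.0 + 0.01 * ((i * 7) % 5) for i in range(100)]
signal[60] = 1.5
print(find_anomalies(signal))  # [60]
```

A real deployment would of course use a learned model rather than a z-score, but the point is the same: the value comes from running a cheap, tireless check over millions of readings.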


This is a wonderful and detailed response. Thank you!


> a free advice nurse.

It is good for the other use cases, but it is the worst possible source of advice on subjects where the user has no expertise, and where there are serious health or safety consequences for getting it wrong.


I totally agree in situations where folks have access to an advice nurse. Always prioritize an expert over an llm.

Same goes for the others, if you have the means I think you should get a tutor or an editor as well.

However, if you’re choosing between nothing and an llm then the llm starts to become a great option.

I used to work rescue and if I had caught one of my coworkers asking an llm how to treat a patient I would have flipped.

But if I had rolled up to an incident and some Good Samaritan was trying to help out by using an llm that would be awesome!


Most first aid can be broken down very basically.

Call a professional for help. Are they breathing, is their heart beating, are they bleeding.

If you haven’t called someone who can actually save the person’s life, no amount of first aid will help.

Unfortunately, unless something is obviously preventing breathing, as someone untrained there’s not a lot you can do if they aren’t breathing.

If the heart isn’t beating, it’s pretty simple: chest compressions…

Bleeding: again, pressure, and a lot of it, to try to stop the bleeding.

I would want to check what an AI response is to some situations, but as long as it just tackles those cases it can probably only do more good than harm.

I’d be more worried some good samaritan would start cutting people to try to “get an airway” or some nonsense. That would significantly increase mortality rates…


My time in rescue gave me a ton of faith in good samaritans. Trying to do something in an emergency is productive 99% of the time (imo).

The only case I've experienced where it wasn’t was when someone in our area was actively listening in on emergency channels and trying to preempt ambulances. The issue was that they had training in the basics but often went past that in the care they provided, something I believe is not covered by Good Samaritan laws.

I’m much more worried about folks like that than people who find themselves in an emergency and are trying to help.


I'd rather see a Good Samaritan being talked through CPR or whatever by a dispatcher who's trained to give that advice over the phone, rather than having a hallucinating LLM tell them to do something deadly.


I believe the situation here is more a matter of they don’t have a dispatcher to guide them.

In some rural area of Africa they came across a car crash. Two people hop out and assist while a third drives off to notify someone to send emergency help.

An on device LLM might be very useful there depending on what it says…


If the LLM tells the untrained passer-by to do a tracheotomy, it's not going to go well.


Emergencies can freak people out, but not once in my eight years in rescue have I encountered a scenario where a random bystander attempted as drastic an intervention as a tracheotomy.

I have shown up at scenes where people have googled what to do though and, you know what, it was super helpful.

If someone is dumb enough to perform a tracheotomy because an llm, google, or a passerby told them to, the issue isn’t any of those factors. That person is just so incredibly dumb as to be a danger to everyone around them.


I've been a firefighter for 22 years. I'm sure neither of us will ever cease to be amazed at what otherwise intelligent people will do when they're in a panic.

People also do amazingly dumb things because a piece of software with a tone of authority told them to do it, even when they're not under duress. Look at the number of people who find themselves stranded or dead because they uncritically followed the directions of a navigation app, and who weren't in a panic state when they did it.


Me too!


> The whole point of a business is that you're paying them to do better than average.

...this is a really interesting idea, but I'm not sure if it's entirely true?

If we're talking about a business's core competency, I think the assertion makes sense. You need to be better than your competition.

But businesses also need a whole lot of people to work in human resources, file taxes, and so on. (Not to mention clean bathrooms, but that's less relevant to the generative AI discussion.) I can certainly imagine how having a world-class human resource team could provide a tire manufacturer with a competitive advantage. However, if those incredible HR employees are also more expensive, it might make more sense to hire below-average people to do your HR and invest more in tire R&D.


Completely agree with this.

I think my sense is that the zeitgeist around AI (at least in business circles) is much more “The only way to ensure our continued survival is by embracing ai in all our core competencies” than “your tire company is going to have some adequate hr for a great price.”

An example that springs to mind is the arms race between tech CEOs over who can have more of their code base written by llms.

It’s amazing tech, and it seems like it’s being marketed for all the wrong things based on some future promise of superintelligence.

I really liked the article posted here a week or two back arguing that AI is a normal technology. Imo, it's the most sane narrative I’ve read about where this tech is at.


Right up until those below-average HR people break the law, or let managers break the law, and the company gets in trouble and no scientists or other R&D people want to work there.


I would really hope "not breaking the law" doesn't require an "above average" HR team. As long as it isn't bottom of the barrel you should be fine.

...if I was really cynical, I might say that one of the reasons you might want a "world class" HR team is in order to break the law, or come really close to the line, without getting caught, in a way that increases profits.


If only so many companies didn’t have below average executives…


His strategy also may have changed due to recent events affecting foreign aid...


True. But remember that Gates said he could not fill the hole left by funding cuts to USAID: https://www.reuters.com/business/healthcare-pharmaceuticals/....


To be clear, I'm speculating; it does seem plausible that he is trying to slow the bleeding even if he knows full well it won't stop it.


If you have more money than anyone else on earth, the highest leverage use of that money is going to be to fund projects that require more capital than anyone else can afford to fund and that governments are unwilling to fund. That way you know you are actually adding to the opportunity set and not just displacing someone else. The difficult part is, of course, deciding which of those projects that only you can fund will actually be a good bet, but that doesn't change the fundamental calculation. Not sure if that's Gates' strategy, but it would make sense if it was.


Actually the highest leverage would be to bribe electable politicians to get governments to be willing to fund your projects. It’s remarkably cheap apparently to do.


Numbers:

> but I expect the [Gates] foundation will spend more than $200 billion between now and 2045

The top 10 individuals in the world are worth $1.77 trillion. The U.S. government has spent $3.57 trillion in fiscal year 2025.

So Gates would spend about $10 billion a year, an astonishing ~0.3% of that federal spending figure.
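The figures can be sanity-checked in a few lines. (Quick sketch; both inputs are simply the amounts quoted in this thread, not independently verified.)

```python
# Back-of-the-envelope check, all amounts in billions of dollars.
total_pledged = 200          # "more than $200 billion" through 2045
years = 2045 - 2025          # the 20-year window
per_year = total_pledged / years
print(per_year)              # 10.0 -> about $10 billion/year

us_fy2025_spend = 3570       # the quoted $3.57 trillion FY2025 figure
share = per_year / us_fy2025_spend
print(f"{share:.1%}")        # 0.3%
```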


Economies of scale could vastly benefit a lot of charity work, but few charities can attain sufficient scale to achieve that. There is an unfortunate amount of overhead and administration in charities that do not directly benefit the cause.

In that sense, I suspect targeted and planned large investments into charities with scalable plans is a lot more efficient than years of trickle donations.


Who manages that trust? There is no shortage of short-term needs, and short-term value added can compound over time. I think this is a fine approach. He's Bill Gates; his legacy is ensured regardless.


The decision can be read in the larger political context. There was some controversy a while back on certain directions the Foundation took like the project on sanitation (aka the toilet challenge) and the backing of charter schools. Regardless of one's opinion of those, he is taking a stand and drawing a line in the sand.


Isn't the Gates Foundation effectively a trust in itself? I'm no economist and don't know the exact definitions, but the projects they run aren't overnight, one-off donations; they need long-term (financial) support and guidance. Vaccine development takes years, and vaccination programs intended to eradicate diseases like polio take generations: the polio vaccine was developed in the 1950s, and it took ~70 years to mostly eradicate the virus in humans (only 30 known new cases in 3 countries in 2022).


Please... Gates has been giving interviews about giving away all his money for 24 years, while topping the list of the richest people on earth every year.

Gates moved $50 billion into a tax-exempt entity he controls, avoided all capital gains tax, secured over $11 billion in total tax benefits, and only needs to distribute 5% of the foundation’s assets annually, all while retaining effective control and reaping massive reputational returns.

This is nothing more than tax optimization for billionaires, and I can guarantee you the private-plane bills, security costs, and hotel suites are invoiced to the foundation...

People who believe this also believe Warren Buffett makes money by value investing and picking stocks. Warren Buffett, who, by the way, also used the Gates Foundation for tax optimization, avoiding some $20 billion in capital gains tax.

If politicians wanted, they could set a 95% tax on billionaires tomorrow, just like that, and none of these 250 individuals would suffer even a minor inconvenience to their lifestyle. It seems to be possible for tariffs, which happen overnight; but those are a tax on the other 99% of taxpayers.

Such a smart guy, but not smart enough to stay away from Epstein...


I’m not sure it matters.

Many will say Bill is attempting to reshape his legacy and narrative post-Epstein.

The most important thing is that money is being returned to society on an accelerated timeline.

Without this redistribution of wealth from billionaires back to society, especially the middle and lower classes, we are headed for violent revolution on a massive scale.

Hopefully other billionaires redistribute during their lifetimes to address housing, education and health issues as well.


You could eradicate a disease by killing all the hosts. I worry that the people who want to "eradicate disease" don't actually care about long term outcomes, they just want to have their likeness cast in bronze, with a nice plaque beneath it, lauding their "oversized" achievements in life.

Anyways, the type of person who can earn a lot of money in this economy and the type of person who can best decide how to spend it altruistically are almost certainly not the same person. The person who earned the money certainly understands this. Yet here we are.


> The person who earned the money certainly understands this

Your cynicism is failing you.

The psychopaths who have accumulated all the money in the world are certain that they are the type of person who can best decide everything in the world on any topic, especially when it comes to people poorer than them, which is of course everyone.



