Yeah, this is something I am thinking a lot about. Companies won't be able to sustain this level of spending forever, and one of two things will need to happen:
1. Models become commodities and immensely cheaper to operate for inference as a result of some future innovation. This would presumably be very bad for the handful of companies who have invested that $1T and want to recoup that, but great for those of us who love cheap inference.
2. #1 doesn't happen, and the model providers begin to feel empowered to pass the true cost of training + inference down to the model consumer. We start paying thousands of dollars per month for model usage, and the price gate blocks most people from reaping the benefits of bleeding-edge AI; instead they're locked into cheaper models that exist mainly to extract cash by selling them things.
Personally I'm leaning toward #1. Future models nearly as good as the absolute best will get far cheaper to train, and new techniques and specialized inference chips will make them much cheaper to use. It isn't hard for me to imagine another DeepSeek moment in the not-so-distant future. Perhaps Anthropic is thinking the same thing, given the rumors that they are pushing toward an IPO as early as this year.
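The "pass the true cost down" scenario in #2 can be sanity-checked with back-of-envelope arithmetic. Every number below (total capex, payback window, subscriber count) is a purely illustrative assumption, not a sourced figure:

```python
# Back-of-envelope sketch: what monthly price per user would it take to
# recoup a hypothetical $1T industry-wide investment?
# All inputs are illustrative assumptions.

investment = 1e12       # assumed total capex to recoup, in USD
payback_years = 5       # assumed payback window
paying_users = 100e6    # assumed number of paying subscribers

# Revenue needed per month over the payback window
monthly_revenue_needed = investment / (payback_years * 12)

# Spread evenly across all paying users
price_per_user = monthly_revenue_needed / paying_users

print(f"Required revenue: ${monthly_revenue_needed / 1e9:.1f}B/month")
print(f"Implied price:    ${price_per_user:.2f}/user/month")
```

Even with a generous 100M paying subscribers, recouping $1T in five years works out to well over $100 per user per month, which is roughly the "thousands of dollars per month" direction once you account for the users who never pay at all.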
Maybe. I don't think we yet have a good understanding of how many deaths he will have caused as a result of DOGE so abruptly cutting off assistance to so many vulnerable people around the world, but I've heard estimates hover around 600,000.
Assuming that number turns out to be close to reality, how do you weigh so many unnecessary deaths against VTVL rockets and electric cars?
Perhaps a practitioner of Effective Altruism could better answer that question.
> I don't think we yet have a good understanding of how many deaths he will have caused as a result of DOGE so abruptly cutting off assistance to so many vulnerable people around the world
Nor how many deaths will be caused by his support for far right parties across Europe, when they start ethnic cleansings.
There is corruption everywhere. But do you deny that these organizations by and large provided aid, and therefore saved the lives of folks who might otherwise have died from illness?
This doesn't make corruption OK. But he tore out a lifeline for some people without giving them an alternative way to get aid.
> The US taxpayer has no moral obligation to send welfare "around the world".
I mean, by way of the atrocities we've committed around the world, we kinda do.
Even if we buy your thesis, forgoing morals, geopolitics, and history, it's a useful soft power strategy...
I'm not saying fund USAID before healthcare for all in america. I'm saying of all the insane things our government wastes money on, USAID was far down on the list of most egregious.
>I mean, by way of the atrocities we've committed around the world, we kinda do.
I've committed no atrocities. Going to guess that you've committed no atrocities. Of the atrocities that did occur, most of those who committed them are dead, and the rest are senile in nursing homes. I have no guilt and certainly feel no guilt for those events.
>it's a useful soft power strategy.
Sure, if you're some sort of tyrant. I thought the left was against colonialism... but you guys really just want a more clever, subtle colonialism, eh? Figures.
>I'm saying of all the insane things our government wastes money on, USAID was far down on the list of most egregious.
What you're saying is that no cuts can or should be made, unless they are your favorite cuts first. And maybe after you get those, no others need be made at all.
>Sure, if you're some sort of tyrant. I thought the left was against colonialism... but you guys really just want a more clever, subtle colonialism, eh? Figures
Drastic misrepresentation. I made no value judgements. I simply offered reasons, from different points of view, why the above commenter may be wrong. You misunderstand, or are naive to, the spectrum of how parasitic to symbiotic those soft-power relationships can be.
> What you're saying is that no cuts can or should be made, unless they are your favorite cuts first. And maybe after you get those, no others need be made at all.
Nope, just saying there's pretty clear science behind where money could be better spent besides billions in forever wars. Maybe start there? The $9 trillion on pointless wars in the Middle East comes to mind. Google a map of countries whose democratically elected leaders we've overthrown if you want more examples. All the Shah's Men is useful too. I could go on.
> I've committed no atrocities. Going to guess that you've committed no atrocities. What atrocities did occur, most of those who committed those are dead, the rest are senile in nursing homes. I have no guilt and certainly feel no guilt for those events
It's not about that.
Someone simply had to pay that debt. Sorry to tell you, but the bills they racked up to accumulate wealth are coming due for the rest of us, right or not.
Honestly my real fear is ICE agents at polling places on Election Day harassing would-be voters with citizenship checks and aggressive behavior, slowing things down and maybe causing some people to leave.
Regarding voter data, though: if it becomes known that registering to vote as a minority will get you extra scrutiny from ICE, and perhaps a visit to your home, that would probably cause some citizens to avoid voting altogether, especially if they are associated with people who are not here legally.
Either way, the federal government really has no right to that data or legitimate use for it, so hopefully they don't manage to get their hands on it.
You mean the latest masterpiece of fantasy storytelling from Lucasfilms™ Brian Moriarty™? Why it's an extraordinary adventure with an interface of magic, stunning high-resolution, 3D landscapes, sophisticated score and musical effects. Not to mention the detailed animation and special effects, elegant point 'n' click control of characters, objects, and magic spells. Beat the rush! Go out and buy Loom™ today!
> I recently used a coding agent on a project where I was using an unfamiliar language, framework, API, and protocol.
You didn’t find that to be a little too much unfamiliarity? With the couple of projects that I’ve worked on that were developed using an “agent first” approach I found that if I added too many new things at once it would put me in a difficult space where I didn’t feel confident enough to evaluate what the agent was doing, and when it seemed to go off the rails I would have to do a bunch of research to figure out how to steer it.
Now, none of that was bad, because I learned a lot, and I think it is a great way to familiarize oneself with a new stack, but if I want to move really fast, I still pick mostly familiar stuff.
I'm assuming this is the case where they are working in an existing codebase written by other humans. I've been in this situation a lot recently, and Copilot is a pretty big help for figuring out particularly fiddly bits of syntax - but it can also be really stupid, suggesting a lot of stuff that doesn't work at all.
Swift, Kotlin, Dart, and Go all blur together by now. That's too many languages, but what are you gonna do?
I was ready to find it a bit much. The combination of ATProto and Dart almost pushed the coding agent past the point of being useful, but in the end it was OK.
I went from "wow that flutter code looks weird" to enjoying it pretty quickly.
Wondering if anyone here has a good answer to this:
what protection does user data typically have during legal discovery in a civil suit like this where the defendant is a service provider but relevant evidence is likely present in user data?
Does a judge have to weigh a user's expectation of privacy against the request? Do terms of service come into play here (who actually owns the data? what privacy guarantees does the company make)?
I'm assuming in this case that the request itself isn't overly broad and seems like a legitimate use of the discovery process.
Larry Ellison and A16Z will invest in Truth Social, buy Trump Coin, help his family, (or something) just asynchronously enough that it can’t be directly connected to the TikTok deal.
Doubt they feel the need to wait. Nobody is going to do anything about it, just like nobody has yet done anything effective about the other blatant corruption. Look at the Saudi “investment” in Trump’s crypto.