Low latency is desirable for stock traders, but most of the data center growth isn't driven by that; it's driven by non-latency-critical workloads such as AI.
The reason data centers choose to be near London is that there is no pricing advantage to going up north, even though energy there is plentiful, readily accessible, and often curtailed when there's too much of it. If there were a pricing difference, you'd see a lot more economic activity up north.
Basically, the physical advantage is there, but the economics don't reflect it, which wipes the advantage out.
Zig is distributed under the MIT License. MS is completely within their rights to clone the git repository from Codeberg and do whatever they want with the source code, including feeding it to their AI algorithms. Moving it to Codeberg doesn't really fix that. I get that some people want to restrict what people can do with source code (including using it for capitalist purposes or indeed AI/machine learning). But the whole point of many open source licenses (and especially the MIT license) is actually the opposite: allowing people to do whatever they want with the source code.
The Zig attitude towards AI usage is a bit odd in my view. I don't think it's that widely shared. But good for them if they feel strongly about that.
I'm kind of intrigued by Codeberg. I had never heard of it until a few days ago, and it seems it's based in Berlin, where I live. I don't think I would want to use it for commercial projects, but it looks fine for open source things. Though I do have questions about the funding model: running all this on donations seems like it could have issues long term for more serious projects, and moving OSS communities around can be kind of disruptive. It probably rules out commercial usage too.
This whole "GitHub is evil" anti-capitalist stance is IMHO a bit out of place. I'm fine with diversity and having more players in the market, though; that's a good thing. But many of the replacements are also for-profit companies, which is probably why many people are a bit disillusioned with e.g. Gitlab. Codeberg seems structured to be more resilient against that.
Otherwise, GitHub remains good value, and I'm getting a lot of value out of for-profit AI companies providing me with stuff that was clearly trained on the body of work stored inside of it. I'm even paying for that. I think it's cool that this is now possible.
> Zig is distributed under the MIT License. MS is completely within their rights to clone the git repository from Codeberg and do whatever they want with the source code, including feeding it to their AI algorithms.
MIT license requires attribution, which AI algorithms don’t provide AFAIK. So either (a) it’s fair use and MS can do that regardless of the license or (b) MS can’t do that. In any case, yeah, that’s not the issue Zig folks have with GitHub.
> Zig is distributed under the MIT License. MS is completely within their rights to clone the git repository from Codeberg and do whatever they want with the source code, including feeding it to their AI algorithms. Moving it to Codeberg doesn't really fix that. I get that some people want to restrict what people can do with source code (including using it for capitalist purposes or indeed AI/machine learning). But the whole point of many open source licenses (and especially the MIT license) is actually the opposite: allowing people to do whatever they want with the source code.
MS training AIs on Zig isn't their complaint here. They're saying that GitHub has become a worse service because MS isn't working on the fundamentals any more, just chasing the AI dream, and trying to get AI to write code for it is having bad results.
Trade wars work both ways. So far the US export market is not doing so great. All those tariffs raise the cost of exported goods as well, and those were already too expensive before the tariffs. If the US wants more US cars on EU roads, it needs to start making better cars. It's that simple. But in the EU, cars have to compete with cheap domestic cars and imported Korean and Chinese cars. It's a level playing field. Hence not a lot of US cars on the roads: a few Teslas (made in the EU mostly), a few Fords (some made on the VW platform), and a sprinkling of niche imports like muscle cars and pickup trucks. You see one once in a while, but they are quite rare.
Communication overhead is a big thing in teams. If you have a struggling team, halve the size. It's crazy how well that works. It's not the people but the number of them. Once your people are consumed by the day-to-day frustrations of having to communicate with everyone else, and with all the infighting, posturing, etc. that comes with that, they'll get nothing done. Splitting teams is an easy-to-implement fix. Minimize the communication paths between the two (or more) teams, carve up what they work on, and suddenly shit gets done.
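The arithmetic behind this is quadratic: with n people there are n(n-1)/2 pairwise communication paths, so halving a team roughly quarters the overhead. A hypothetical sketch (the liaison-path model between the split teams is my own simplification, not from the comment above):

```python
def comm_paths(n: int) -> int:
    """Number of distinct person-to-person communication paths
    in a team of n people: n choose 2."""
    return n * (n - 1) // 2

# One team of 12 people:
print(comm_paths(12))  # -> 66 paths

# Split into two teams of 6, with a single liaison path between them:
split = 2 * comm_paths(6) + 1
print(split)           # -> 31 paths, less than half the overhead
```

Even if the inter-team coordination costs more than one path, the quadratic term dominates quickly as teams grow.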
In this case, they probably were trying to not just rewrite but improve the engine at the same time. That's a much more complicated thing to achieve. Especially when the original is a heavily optimized and probably somewhat hard to reason about blob of assembly. I'm guessing that even wrapping your head around that would be a significant job.
Amazingly enjoyable game btw. Killed quite a few hours with that one around 2000.
>Communication overhead is a big thing in teams. If you have a struggling team, halve the size. It's crazy how well that works.
I wish my managers would get this. Currently our product shit the fan because we were understaffed and badly managed by clueless managers, and what they did was add two more managers to the team, creating more meetings and micromanaging everything.
I'm not confused about the acquisition but about the investment. What were the investors thinking? This is an open source development tool with (to date) $0 of revenue and not even the beginnings of a plan for generating any.
The acquisition makes more sense. A few observations:
- No acquisition amount was announced. That indicates some kind of share swap, where the investors exchange shares in one company for shares in another. Presumably the founder now has some shares in Anthropic and a nice salary and vesting structure that will keep him on board for a while.
- The main investor was Kleiner Perkins. They are also an investor in Anthropic. 100M in the last round, apparently.
Everything else is a loosely buzzword-compatible thingy for Anthropic's AI coding thingy, plus some fresh talent for their team. All good. But it's beside the point. This was an investor bailout. They put quite a bit of money into Bun, with exactly zero remaining chance of it turning into the next unicorn. Whatever flaky plan for revenue might once have prompted the investment clearly wasn't happening. So they liquidated their investment through an acquihire via one of their other investments.
Kind of shocking how easy it was to raise that kind of money with essentially no plan whatsoever for revenue. Where I live (Berlin), you get laughed away by investors (in a quite smug way typically) unless you have a solid plan for making them money. This wouldn't survive initial contact with due diligence. Apparently money still grows on trees in Silicon Valley.
I like Bun and have used it but from where I'm sitting there was no unicorn lurking there, ever.
They don't need Bun to make revenue, but they need Bun to continue existing and growing for their products to make revenue. Now they can ensure its survival, push for growth, and provide resources so that Bun can build the best product rather than focus on making money.
It will be interesting to see how this evolves. It used to be that game developers could safely ignore Linux. But with a growing number of people gaming on SteamOS, the Steam Deck, and Linux + Steam, telling those users "our game only works on Windows" is going to get increasingly painful in terms of revenue: you miss out on the sales and have to deal with angry users and forums full of complaints that the game doesn't work.
It might only be a few percent of overall users. But a few percent of a billion $ is a couple of tens of millions. That's a steep price to pay for anti-cheat code.
Most game devs can continue to ignore Linux and trust that Proton will work it out.
It's only the highly competitive online games that have this issue. While they make up a lot of playtime, they're worked on by a tiny minority of developers.
They are still ignoring Linux, hence why Valve is using Proton: it's the validation that Valve failed to convince studios to care, studios that happily target systems like the Android NDK or use platform-agnostic engines.
> They are still ignoring Linux, hence why Valve is using Proton...
Eh, maybe?
I'd put forth the notion that game devs might be caring how their game works on both Windows and Proton. That is, that they're still using the Microsoft-provided APIs to build their game, but care about how it runs on Linux just as much as how it runs on Windows.
Not really, otherwise you would be getting SteamOS native builds.
It is up to Valve to sort it out; they are the ones that care. Otherwise they will need to pay for Windows licenses, which is really what this is all about, while pretending to be some kind of white knights.
This is the type of business that's going to be hit hard by AI. And the businesses that survive will be the ones that integrate AI most successfully. It's an enabler, a multiplier. It's just another tool, and those wielding the tools best tend to do well.
Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.
The expertise and skill still matter. But customers are going to get a lot further without such a studio and the remaining market is going to be smaller and much more competitive.
There's a lot of other work emerging though. IMHO the software integration market is where the action is going to be for the next decade or so. Legacy ERP systems, finance, insurance, medical software, etc. None of that stuff is going away or at risk of being replaced with some vibe coded thing. There are decades worth of still widely used and critically important software that can be integrated, adapted, etc. for the modern era. That work can be partly AI assisted of course. But you need to deeply understand the current market to be credible there. For any new things, the ambition level is just going to be much higher and require more skill.
Arguing against progress as it is happening is as old as the tech industry. It never works. There's a generation of new programmers coming into the market and they are not going to hold back.
> Taking a moral stance against AI might make you feel good but doesn't serve the customer in the end. They need value for money. And you can get a lot of value from AI these days; especially if you are doing marketing, frontend design, etc. and all the other stuff a studio like this would be doing.
So let's all just give zero fucks about our moral values and just multiply monetary ones.
>So let's all just give zero fucks about our moral values and just multiply monetary ones.
You are misconstruing the original point. They are simply suggesting that the moral qualms about using AI are not that high, neither to the vast majority of consumers nor to the government. There are a few people who might exaggerate these moral issues for self-serving reasons, but they won't matter in the long term.
That is not to suggest there are absolutely no legitimate moral problems with AI but they will pale in comparison to what the market needs.
If AI can make things 1000x more efficient, humanity will collectively agree in one way or the other to ignore or work around the "moral hazards" for the greater good.
You can start by explaining what your specific moral value is that goes against AI use? It might bring to clarity whether these values are that important at all to begin with.
Is that the promise of the faustian bargain we're signing?
Once the ink is dry, should I expect to be living in a 900,000 sq ft apartment, or be spending $20/year on healthcare? Or be working only an hour a week?
While humans have historically only mildly reduced their working time, down to today's 40-hour workweek, their consumption has gone up enormously, and whole new categories of consumption have opened up. So my prediction is that while you'll never live in a 900,000 sq ft apartment (unless we get O'Neill cylinders from our budding space industry), you'll probably consume a lot more, while still working a full week.
I don't want to "consume a lot more". I want to work less, and for the work I do to be valuable, and to be able to spend my remaining time on other valuable things.
You can consume a lot less on a surprisingly small salary, at least in the U.S.
But it requires giving up things a lot of people don't want to, because consuming less once you are used to consuming more sucks. Here is a list of things people can cut from their life that are part of the "consumption has gone up" and "new categories of consumption were opened" that ovi256 was talking about:
- One can give up cell phones, headphones/earbuds, mobile phone plans, mobile data plans, tablets, ereaders, and paid apps/services. That can save $100/mo in bills and amortized hardware. These were a luxury 20 years ago.
- One can give up laptops, desktops, gaming consoles, internet service, and paid apps/services. That can save another $100/mo in bills and amortized hardware. These were a luxury 30 years ago.
- One can give up imported produce and year-round availability of fresh foods. Depending on your family size and eating habits, that could save almost nothing, or up to hundreds of dollars every month. This was a luxury 50 years ago.
- One can give up restaurant, take-out, and home pre-packaged foods. Again depending on your family size and eating habits, that could save nothing-to-hundreds every month. This was a luxury 70 years ago.
- One can give up car ownership, car rentals, car insurance, car maintenance, and gasoline. In urban areas, walking and public transit are much cheaper options. In rural areas, walking, bicycling, and getting rides from shuttle services and/or friends are much cheaper options. That could save over a thousand dollars a month per 15,000 miles. This was a luxury 80 years ago.
I could keep going, but by this point I've likely suggested cutting something you now consider necessary consumption. If you thought one "can't just give that up nowadays," I'm not saying you're right or wrong. I'm just hoping you acknowledge that what people consider optional consumption has changed, which means people consume a lot more.
> - One can give up cell phones, headphones/earbuds, mobile phone plans, mobile data plans, tablets, ereaders, and paid apps/services. That can save $100/mo in bills and amortized hardware. These were a luxury 20 years ago.
It's not clear that it's still possible to function in society today without a cell phone and a cell phone plan. Many things that were possible to do without one before now require it.
> - One can give up laptops, desktops, gaming consoles, internet service, and paid apps/services. That can save another $100/months in bills and amortized hardware. These were a luxury 30 years ago.
Maybe you can replace these with the cell phone + plan.
> - One can give up imported produce and year-round availability of fresh foods. Depending on your family size and eating habits, that could save almost nothing, or up to hundreds of dollars every month. This was a luxury 50 years ago.
It's not clear that imported food is cheaper than locally grown food. Also I'm not sure you have the right time frame. I'm pretty sure my parents were buying imported produce in the winter when I was a kid 50 years ago.
> - One can give up restaurant, take-out, and home pre-packaged foods. Again depending on your family size and eating habits, that could save nothing-to-hundreds every month. This was a luxury 70 years ago.
Agreed.
> - One can give up car ownership, car rentals, car insurance, car maintenance, and gasoline. In urban areas, walking and public transit are much cheaper options. In rural areas, walking, bicycling, and getting rides from shuttle services and/or friends are much cheaper options. That could save over a thousand dollars a month per 15,000 miles. This was a luxury 80 years ago.
Yes but in urban areas whatever you're saving on cars you are probably spending on higher rent and mortgage costs compared to rural areas where cars are a necessity. And if we're talking USA, many urban areas have terrible public transportation and you probably still need Uber or the equivalent some of the time, depending on just how walkable/bike-able your neighborhood is.
> It's not clear that it's still possible to function in society today with out a cell phone
Like I said... I've likely suggested cutting something you now consider necessary consumption. If you thought one "can't just give that up nowadays," I'm not saying you're right or wrong. I'm just hoping you acknowledge that what people consider optional consumption has changed, which means people consume a lot more.
---
As an aside, I live in a rural area. The population of my county is about 17,000 and the population of its county seat is about 3,000. We're a good 40 minutes away from the city that centers the Metropolitan Statistical Area. A 1 bedroom apartment is $400/mo and a 2 bedroom apartment is $600/mo. In one month, minimum wage will be $15/hr.
Some folks here do live without a car. It is possible. They get by in exactly the ways I described (except some of the Amish/Mennonites, who also use horses). It's not preferred (except by some of the Amish/Mennonites), but one can make it work.
I've been alive slightly longer than that, and I can't say life today is definitively better than 50 years ago in the USA.
It was the tail end of one income affording a house and groceries for a family. So to afford the same things, for many families requires almost double the labor.
There are a lot of new medical treatments, less smoking and drinking, and overall longer life spans. But more recently, increases in longevity have plateaued, and an epidemic of obesity has mitigated a lot of the health care improvements. And the astronomical increase in health care costs means improvements in health care capabilities are not available to a lot of people, at least not without greatly reducing their standard of living elsewhere.
College and university costs have grown exponentially, with no discernible increase in the quality of learning.
Housing prices have far outpaced inflation of other goods and services.
Fewer intact families, fewer in-person interactions, and the heroin-like addictiveness of screens have ushered in an epidemic of mental illness that might be unprecedented.
Now AI is scaring the shit out of everyone: no matter how hard you study, how disciplined and responsible you are, there's a good chance you will not be gainfully employed.
I frankly think the quality of life in the world I grew up in is better than the one my kids have today.
But if we take "surprisingly small salary" to literally mean salary, most (... all?) salaried jobs require you to work full time, 40 hours a week. Unless we consider cushy remote tech jobs, but those are an odd case and likely to go away if we assume AI is taking over there.
Part time / hourly work is largely less skilled and much lower paid, and you'll want to take all the hours you can get to be able to afford outright necessities like rent. (Unless you're considering rent as consumption/luxury, which is fair)
It does seem like there's a gap in terms of skilled/highly paid but hourly/part time work.
(Not disagreeing with the rest of your post though)
You aren't wrong and I agree up to a point. But I've watched a couple of people try to get by on just "cutting" rather than growing their incomes and it doesn't work out for them. A former neighbor was a real Dave Ramsey acolyte and even did things like not have trash service (used dumpsters and threw trash out at his mother's house). His driveway was crumbling but instead of getting new asphalt he just dug it all up himself and dumped it...somewhere, and then filled it in with gravel. He drives junker cars that are always breaking down. I helped him replace a timing chain on a Chrysler convertible that wasn't in awful shape, but the repairs were getting intense. This guy had an average job at a replacement window company but had zero upward mobility. He was and I assume is, happy enough, with a roof over his head and so forth, but our property taxes keep rising, insurance costs keep rising, there's only so much you can cut. My take is that you have to find more income and being looked upon as "tight with a buck" or even "cheap" is unfavorable.
Ouch! Man, this is a terrible take on the world. I know you mean well and that the majority of the world agrees with this, but to be honest, I have been having real thoughts about letting the make-it-till-you-break-it mentality go myself. Things are getting more expensive and I don't think I'm willing to live a life running from paycheck to paycheck... Not sure what I am going to do about it, but I know that feeling is there.
I've given up pretty much all of that out of necessity, yes. Insurance and rent still go up, so I'm spending almost as much as I was at my peak, though.
>I'm just hoping you acknowledge that what people consider optional consumption has changed, which means people consume a lot more.
Of course it's changed. The point is that
1. The necessities haven't changed and have gotten more expensive. People need healthcare, housing, food, and transport. All are up.
2. Modern-day expectations mean the necessities change. We can't walk into a business and shake someone's hand to get a job, so you "need" internet access to get a job. Recruiters also expect a consistent phone number to call, so good luck skipping a phone line (maybe VoIP can get around this).
These are society's fault, as it shifted to pleasing shareholders and outsourcing entire industries (and of course submitted to lobbying). So I don't like this blame being shifted to the individual for daring to consume to survive.
What would help is voting in people who can actually recognize the problem and make sure corporations can't ship all of America's labor overseas. Blaming ourselves for society's woes only pushes the burden further onto the people, instead of having them collectively gather and push back against those at fault.
I suppose so, but that takes decades of change. I don't see any solution right now though which is what matters to many.
As an aside, every thread I see here has a comment by you lol, that's some good effort but maybe take a break from such strenuous commenting, I say this sincerely as I also used to get into all these back and forths on HN and then realized, much of the time, it's a waste of my own time.
So you are agreeing with the parent? If consumption has gone up a lot and input hours has gone down or stayed flat, that means you are able to work less.
But that's not what they said, they said they want to work less. As the GP post said, they'd still be working a full week.
I do think this is an interesting point. The trend for most of history seems to have been vastly increasing consumption/luxury while work hours somewhat decrease. But have we reached the point where that's not what people want? I'd wager most people in rich developed countries don't particularly want more clothes, gadgets, cars, or fast food. If they can get the current typical middle class share of those things (which to be fair is a big share, and not environmentally sustainable), along with a modest place to live, they (we) mainly want to work less.
>If you want to live in a high cost of living area, that's a form of consumption.
Not really a "want" as much as "move where the jobs are". Remote jobs are shakey now and being in the middle of nowhere only worsens your compensation aspects.
Being able to live wherever you please is indeed a luxury. The suburb structure already sacrificed the aspect of high CoL for increase commute time to work.
I also think that dismissing aspects of humanity like family, community, and a sense of purpose as "luxuries" is an extremely dangerous line of thinking.
In most places (SF may be somewhat of an exception in terms of relatively unaffordable housing in both the city and any accessible suburbs) 30-60 minute commutes are pretty normal. At least a lot of the companies are probably in the suburbs/exurbs anyway. I'm not suggesting living in the middle of nowhere but, in a lot of places, urban vs. exurban living is a choice especially with companies that are often exurban.
I mean, yeah? Does any market work like that? If you want an apple, you pay the person who has the apple to take it from them, you don't pay the other people who want apples. Not really following where this is going
I think FIRE was basically just a fad for awhile. I say this as a 52 year old "retiree" who isn't working right now and living off investment income. It takes a shitload of wealth to not have to work and I'm borderline not real comfortable with the whole situation. I live in a fairly HCoL area and can't up and move right now (wife has medical needs, son in high school, daughter in college). I'd be freaking out if I didn't have a nest egg, we would be trying to sell our house in a crap market. As it stands, I don't really want to go on like I am, my life is a total waste right now.
It's not a "fad," it's a mathematical observation that investing more generates more returns. Maybe the media was covering it more at some point but the concept itself is sound. You are in fact FIREd by the same definition, it's just that in your case it seems you would need more money than you have currently due to the factors you stated, but that's not the fault of the concept of FIRE in general. And anyway, there are lots of stories of people doing regular or leanFIRE too, it doesn't require so much wealth as to be unreachable if you have a middle class job. For example, https://www.reddit.com/r/leanfire/s/67adPxZeDU
If you think your life is a waste right now, do something with it. That's actually the number one thing people don't expect from being retired, how bored they get. They say in FIRE communities that all the money and time in the world won't help if you don't actually utilize it.
Let's kill this myth that people were lounging around before the Industrial Revolution. Serfs for example were working both their own land as well as their lord's land, as well as doing domestic duties in the middle. They really didn't have as much free time as we do today, plus their work was way more backbreaking, literally, than most's cushy sedentary office jobs.
We could probably argue to the end of time about quality of life then versus now. In general, the metrics of consumption, and of time spent acquiring that consumption, have improved over time.
I don't think general sentiment matters much here when the important necessities are out of reach. The hierarchy of needs is outdated, but the inversion of it is very concerning.
We can live without a flat screen TV (which has gotten dirt cheap). We can't live without a decent house. Or worse: while we can live in some 500 sq ft shack, we can't truly "live" if there are no public amenities to gather and socialize at that don't nickel-and-dime us.
pre-industrial? Lots of tending to the farm, caring for family, and managing slaves I suppose. Had some free time between that to work with your community for bonding or business dealings or whatnot.
Quite the leap to go from "pre-industrial people" to "Antebellum US Southerners", and even then the majority of that (hyperspecific) group did not own slaves.
>you'll probably consume a lot more, while still working a full week
There's more to consume than 50 years ago, but I don't see that trend continuing. We shifted phone bills to cell phone bills and added internet bills and a myriad of subscriptions. But that's really it; everything was "turn one-time purchases into subscriptions".
I don't see what will fundamentally shift that current consumption for the next 20-30 years. Just more conversion of ownership to renting. In entertainment we're already seeing revolts against this as piracy surges. I don't know how we're going to "consume a lot more" in this case.
Boomers in a nutshell. Do a bunch of stuff to keep from building more housing to prop up housing prices (which is much of their net worth), and then spend until you're forced to spend the last bit to keep yourselves alive.
Then the hospital takes the house to pay off the rest of the debts. Everybody wins!
>They are simply suggesting that the moral qualms of using AI are simply not that high - neither to vast majority of consumers, neither to the government.
And I believe they (and I) are suggesting that this is just a bad-faith spin on the market, if you look at actual AI confidence and sentiment and don't dismiss it as "ehh, just the internet whining". Consumers having less money to spend doesn't mean they are adopting AI en masse, nor that they are happy about it.
I don't think using the 2025 US government for a moral compass is helping your case either.
>If AI can make things 1000x more efficient
Exhibit A. My observations suggest that consumers are beyond tired of talking about the "what ifs" while they struggle to afford rent or get a job in this economy, right now. All the current gains are for corporate billionaires, why would they think that suddenly changes here and now?
AI is just a tool, like most other technologies, it can be used for good and bad.
Where are you going to draw the line? Only if it affects you? Or maybe we should go back to using coal for everything, so the mineworkers have their old life back? Or maybe follow the Amish guidelines and ban all technology that threatens the sense of community?
If you are going to draw a line, you'll probably have to start living in small communities, as AI as a technology is almost impossible to stop. There will be people and companies using it to its fullest, and even if you have laws to ban it, other countries will allow it.
The goal of AI is NOT to be a tool. It's to replace human labor completely.
This means 100% of economic value goes to capital, instead of labor. Which means anyone that doesn't have sufficient capital to live off the returns just starves to death.
Avoiding that outcome requires a complete rethinking of our economic system. And I don't think our institutions are remotely prepared for that, assuming the people running them care at all.
The Amish don’t ban all tech that can threaten community. They will typically have a phone or computer in a public communications house. It’s being a slave to the tech that they oppose (such as carrying that tech with you all the time because you “need” it).
I was told that Amish (elders) ban technology that separates you from God. Maybe we should consider that? (depending on your personal take on what God is)
> AI is just a tool, like most other technologies, it can be used for good and bad.
The same could be said of social media for which I think the aggregate bad has been far greater than the aggregate good (though there has certainly been some good sprinkled in there).
I think the same is likely to be true of "AI" in terms of the negative impact it will have on the humanistic side of people and society over the next decade or so.
However like social media before it I don't know how useful it will be to try to avoid it. We'll all be drastically impacted by it through network effects whether we individually choose to participate or not and practically speaking those of us who still need to participate in society and commerce are going to have to deal with it, though that doesn't mean we have to be happy about it.
A crowd of people continually rooting against their best interests isn't exactly what's needed for the solidarity that people claim is a boon of social media. It's not as bad as other websites out there, but I've seen these flags several times on older forums.
It won't be as hard as you think for HN to slip into the very thing they mock Instagram of today for being.
Uh huh, that's always how it starts. "Well you're in the minority, majority prevails".
Yup, story of my life. I have in fact had a dozen different times where I chose not to jump off the cliff with my peers. How little I realized back then how rare that quality is.
But you got your answer, feel free to follow the crowd. I already have migrations ready. Again, not my first time.
How about we start with "commercial LLMs cannot give Legal, Medical, or Financial advice" and go from there? LLMs for those businesses need to be handled by those who can be held accountable (be it the expert or the CEO of that expert).
I'd go so far as to try to prevent the obvious and say "LLMs cannot be used to advertise products". But baby steps.
>AI as a technology is almost impossible to stop.
Not really a fan of defeatist talk. Tech isn't as powerful as billionaires want you to believe it is. It can indeed be regulated; we just need to first use our civic channels instead of fighting amongst ourselves.
Of course, if you are profiting off of AI, I get it. Gotta defend your paycheck.
What makes you think that in the world where only the wealthy can afford legal, medical, and financial advice from human beings, the same will be automatically affordable from AI?
It will be, of course, but only until all human competition in those fields is eliminated. And after that, all those billions invested must be recouped back by making the prices skyrocket. Didn't we see that with e.g. Uber?
If you're going to approach this in such bad faith, then I'll simply say "yes" and move on. People can make bad decisions, but that shouldn't be a profitable business.
If it is just a tool, it isn't AI. ML algorithms are tools that are ultimately as good or bad as the person using them and how they are used.
AI wouldn't fall into that bucket, it wouldn't be driven entirely by the human at the wheel.
I'm not sold yet whether LLMs are AI, my gut says no and I haven't been convinced yet. We can't lose the distinction between ML and AI though, its extremely important when it comes to risk considerations.
Machine learning isn't about developing anything intelligent at all, its about optimizing well defined problem spaces for algorithms defined by humans. Intelligence is much more self guided and has almost nothing to do with finding the best approximate solution to a specific problem.
> Machine learning (ML) is a field of study in _artificial intelligence_ concerned with the development and study of statistical algorithms that can learn from data and generalise to unseen data, and thus perform tasks without explicit instructions.
The definition there is correct. ML is a field of study in AI; that does not make it AI. Thermodynamics is a field of study in physics; that does not mean that thermodynamics is physics.
What parent is saying is that what works is what will matter in the end. That which works better than something else will become the method that survives in competition.
You not liking something on purportedly "moral" grounds doesn't matter if it works better than something else.
Oxycontin certainly worked, and the markets demanded more and more of it. Who are we to take a moral stand and limit everyone's access to opiates? We should just focus on making a profit since we're filling a "need"
Guess you missed the post where lawyers were submitting legal documents generated by LLMs. Or people taking medical advice and ending up with bromide poisoning. Or the lawsuits around LLMs softly encouraging suicide. Or the general AI psychosis being studied.
Besides the suicide one, I don't know of any examples where that has actually killed someone. Someone could search on Google just the same and ignore their symptoms.
>I don't know of any examples where that has actually killed someone.
You don't see how a botched law case could cost someone their life? Let's not wait until more people die to rein this in.
>Someone could search on Google just the same and ignore their symptoms.
Yes, and it's not uncommon for websites or search engines to be sued. Millennia of laws exist for this exact purpose, so companies can't deflect bad things back onto the people.
If you want the benefits, you accept the consequences. Especially when you fail to put up guard rails.
That argument is rather naive, given that millennia of law are meant to regulate and disincentivize behavior. "If people didn't get mad they wouldn't murder!"
We've regulated public messages for decades, and for good reason. I'm not absolving them this time because they want to hide behind a chatbot. They have blood on their hands.
If you were offended by that comment, I apologize. You're 99.99% likely not the problem, and infighting gets us nowhere.
But you may indeed be vying against your best interests. Hope you can take some time to understand where you stand in life and whether your society is really benefiting you.
I am not offended. And I'll be the one to judge my own best interests. (back to: "personal responsibility"). e.g. I have more information about my own life than you or anyone else, and so am best situated to make decisions for myself about my own beliefs.
For instance I work for one of the companies that produces some of the most popular LLMs in use today. And I certainly have a stake in them performing well and being useful.
But your line of reasoning would have us believe that Henry Ford is a mass murderer due to the number of vehicular deaths each year, or that the Wright brothers bear some responsibility for 9/11. They should have foreseen that people would fly their planes into buildings, of course.
If you want to blame someone for LLMs hurting people, we really need to go all the way back to Alan Turing -- without him these people would still be alive!
>And I'll be the one to judge my own best interests thank you.
Okay, cool. Note that I never asked for your opinion; you decided to pop up in this chain as I was talking to someone else. Go about your day or be curious, but don't butt in and then pretend "well, I don't care what you say" when you get a response back.
Nothing you said contradicted my main point. So this isn't really a conversation but simply more useless defense. Good day.
Not yet maybe... Once we factor in the environmental damage that generative AI, and all the data centers being built to power it, will inevitably cause - I think it will become increasingly difficult to make the assertion you just did.
You're entering a bridge and there's a road sign before it with a pictogram of a truck and a plaque below that reads "10t max".
According to the logic of your argument, it's perfectly okay to drive a 360t BelAZ 75710 loaded to its full 450t capacity over that bridge just because it's a truck too.
That's how it works. You can be morally righteous all you want, but this isn't a movie. Morality is a luxury for the rich. Conspicuous consumption. The morally righteous poor people just generally end up righteously starving.
This seems rather black and white.
Defining the morals probably makes sense, then evaluating whether they can be lived by or whether we compromise in the face of other priorities?
> when the market is telling you loud and clear they want X
Does it tho? Articles like [1] or [2] seem to be at odds with this interpretation. If it were any different we wouldn't be talking about the "AI bubble" after all.
"Jeez, there are so many cynics! It cracks me up when I hear people call AI underwhelming."
ChatGPT can listen to you in real time, understands multiple languages very well, and responds in a very natural way. This is breathtaking and wasn't on the horizon just a few years ago.
AI Transcription of Videos is now a really cool and helpful feature in MS Teams.
Segment Anything literally leapfrogged progress on image segmentation.
You can generate any image you want in high quality in just a few seconds.
There are already human beings being shittier at their daily job than an LLM is.
2) if you had read the paper you wouldn’t use it as an example here.
Good faith discussion on what the market feels about LLMs would include Gemini, ChatGPT numbers. Overall market cap of these companies. And not cherry picked misunderstood articles.
No, I picked those specifically. When Pets.com[1] went down in early 2000 it was neither the idea nor the tech stack that brought the company down; it was the speculative business dynamics that caused its collapse. The fact that we've swapped the technology underneath doesn't mean we're not basically falling into ".com Bubble - Remastered HD Edition".
I bet a few Pets.com execs were also wondering why people weren't impressed with their website.
Do you actually want to get into the details of how frequently markets get things right vs. get things wrong? It would make the priors a bit more lucid so we can be on the same page.
This is a YC forum. That guy is giving pretty honest feedback about a business decision in the context of what the market is looking for. The most unkind thing you can do to a founder is tell them they’re right when you see something they might be wrong about.
What you (and others in this thread) are also doing is a sort of maximalist dismissal of AI itself as if it is everything that is evil and to be on the right side of things, one must fight against AI.
This might sound a bit ridiculous but this is what I think a lot of people's real positions on AI are.
800 million weekly active users for ChatGPT. My position on things like this is that if enough people use a service, I must defer to their judgement that they benefit from it. To do the contrary would be highly egoistic and suggest that I am somehow more intelligent than all those people and I know more about what they want for themselves.
I could obviously give you examples where LLMs have concrete usecases but that's besides the larger point.
> 1B people in the world smoke. The fact something is wildly popular doesn’t make it good or valuable. Human brains are very easily manipulated, that should be obvious at this point.
You should be. You should be equally suspicious of everything. That's the whole point. You wrote:
> My position on things like this is that if enough people use a service, I must defer to their judgement that they benefit from it.
Enough people doing something doesn't make that something good or desirable from a societal standpoint. You can find examples of things that go in both directions. You mentioned gaming, social media, movies, carnivals, travel, but you can just as easily ask the same question for gambling or heavy drugs use.
Just saying "I defer to their judgment" is a cop-out.
> The point is that people FEEL they benefit. THAT’S the market for many things.
I don't disagree, but this also doesn't mean that those things are intrinsically good and that we should all pursue them because that's what the market wants. And that was what I was pushing against: this idea that since 800M people are using GPT, we should all be OK doing AI work because that's what the market is demanding.
It's not that it is intrinsically good, but that a lot of people consuming things of their own agency has to mean something. You coming in the middle and suggesting you know better than them is strange.
When billions of people watch football, my first instinct is not to decry football as a problem in society. I acknowledge with humility that though I don't enjoy it, there is something to the activity that makes people watch it.
> a lot of people consuming things from their own agency has to mean something.
Agree. And that something could be a positive or a negative thing. And I'm not suggesting I know better than them. I'm suggesting that humans are not perfect machines and our brains are very easy to manipulate.
Because there are plenty of examples of things enjoyed by a lot of people who are, as a whole, bad. And they might not be bad for the individuals who are doing them, they might enjoy them, and find pleasure in them. But that doesn't make them desirable and also doesn't mean we should see them as market opportunities.
Drugs and alcohol are the easy example:
> A new report from the World Health Organization (WHO) highlights that 2.6 million deaths per year were attributable to alcohol consumption, accounting for 4.7% of all deaths, and 0.6 million deaths to psychoactive drug use. [...] The report shows an estimated 400 million people lived with alcohol use disorders globally. Of this, 209 million people lived with alcohol dependence. (https://www.who.int/news/item/25-06-2024-over-3-million-annu...)
Can we agree that 3 million people dying as a result of something is not a good outcome? If the reports were saying that 3 million people a year are dying as a result of LLM chats we'd all be freaking out.
–––
> my first instinct is not to decry football as a problem in society.
My first instinct is neither to decry something as a problem nor to hail it as a positive. My first instinct is to give ourselves time to figure out which of the two it is before jumping in head first. Which is definitely not what's happening with LLMs.
As someone else said, we don't know for sure. But it's not like there aren't some at-least-kinda-plausible candidate harms. Here are a few off the top of my head.
(By way of reminder, the question here is about the harms of LLMs specifically to the people using them, so I'm going to ignore e.g. people losing their jobs because their bosses thought an LLM could replace them, possible environmental costs, having the world eaten by superintelligent AI systems that don't need humans any more, use of LLMs to autogenerate terrorist propaganda or scam emails, etc.)
People become like those they spend time with. If a lot of people are spending a lot of time with LLMs, they are going to become more like those LLMs. Maybe only in superficial ways (perhaps they increase their use of the word "delve" or the em-dash or "it's not just X, it's Y" constructions), maybe in deeper ways (perhaps they adapt their _personalities_ to be more like the ones presented by the LLMs). In an individual isolated case, this might be good or bad. When it happens to _everyone_ it makes everyone just a bit more similar to one another, which feels like probably a bad thing.
Much of the point of an LLM as opposed to, say, a search engine is that you're outsourcing not just some of your remembering but some of your thinking. Perhaps widespread use of LLMs will make people mentally lazier. People are already mostly very lazy mentally. This might be bad for society.
People tend to believe what LLMs tell them. LLMs are not perfectly reliable. Again, in isolation this isn't particularly alarming. (People aren't perfectly reliable either. I'm sure everyone reading this believes at least one untrue thing that they believe because some other person said it confidently.) But, again, when large swathes of the population are talking to the same LLMs which make the same mistakes, that could be pretty bad.
Everything in the universe tends to turn into advertising under the influence of present-day market forces. There are less-alarming ways for that to happen with LLMs (maybe they start serving ads in a sidebar or something) and more-alarming ways: maybe companies start paying OpenAI to manipulate their models' output in ways favourable to them. I believe that in many jurisdictions "subliminal advertising" in movies and television is illegal; I believe it's controversial whether it actually works. But I suspect something similar could be done with LLMs: find things associated with your company and train the LLM to mention them more often and with more positive associations. If it can be done, there's a good chance that eventually it will be. Ewww.
All the most capable LLMs run in the cloud. Perhaps people will grow dependent on them, and then the companies providing them -- which are, after all, mostly highly unprofitable right now -- decide to raise their prices massively, to a point at which no one would have chosen to use them so much at the outset. (But at which, having grown dependent on the LLMs, they continue using them.)
I don't agree with most of these points, I think the points about atrophy, trust, etc will have a brief period of adjustment, and then we'll manage. For atrophy, specifically, the world didn't end when our math skills atrophied with calculators, it won't end with LLMs, and maybe we'll learn things much more easily now.
I do agree about ads, it will be extremely worrying if ads bias the LLM. I don't agree about the monopoly part, we already have ways of dealing with monopolies.
In general, I think the "AI is the worst thing ever" concerns are overblown. There are some valid reasons to worry, but overall I think LLMs are a massively beneficial technology.
For the avoidance of doubt, I was not claiming that AI is the worst thing ever. I too think that complaints about that are generally overblown. (Unless it turns out to kill us all or something of the kind, which feels to me like it's unlikely but not nearly as close to impossible as I would be comfortable with[1].) I was offering examples of ways in which LLMs could plausibly turn out to do harm, not examples of ways in which LLMs will definitely make the world end.
Getting worse at mental arithmetic because of having calculators didn't matter much because calculators are just unambiguously better at arithmetic than we are, and if you always have one handy (which these days you effectively do) then overall you're better at arithmetic than if you were better at doing it in your head but didn't have a calculator. (Though, actually, calculators aren't quite unambiguously better because it takes a little bit of extra time and effort to use one, and if you can't do easy arithmetic in your head then arguably you have lost something.)
If thinking-atrophy due to LLMs turns out to be OK in the same way as arithmetic-atrophy due to calculators has, it will be because LLMs are just unambiguously better at thinking than we are. That seems to me (a) to be a scenario in which those exotic doomy risks become much more salient and (b) like a bigger thing to be losing from our lives than arithmetic. Compare "we will have lost an important part of what it is to be human if we never do arithmetic any more" (absurd) with "we will have lost an important part of what it is to be human if we never think any more" (plausible, at least to me).
[1] I don't see how one can reasonably put less than 50% probability on AI getting to clearly-as-smart-as-humans-overall level in the next decade, or less than 10% probability on AI getting clearly-much-smarter-than-humans-overall soon after if it does, or less than 10% probability on having things much smarter than humans around not causing some sort of catastrophe, all of which means a minimum 0.5% chance of AI-induced catastrophe in the not-too-distant future. And those estimates look to me like they're on the low side.
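For what it's worth, the footnote's compounded estimate is just the product of its three stated probabilities; a quick sketch (the numbers are the footnote's own, not independent estimates):

```python
# Chained probability estimates from the footnote above.
p_human_level = 0.5   # AI clearly as smart as humans within a decade
p_superhuman = 0.1    # much smarter than humans soon after, given the above
p_catastrophe = 0.1   # catastrophe, given much-smarter-than-human AI

p_total = p_human_level * p_superhuman * p_catastrophe
print(f"{p_total:.3%}")  # → 0.500%
```

So the "minimum 0.5%" figure follows directly from multiplying the three conditional estimates.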
Any sort of atrophy of anything is because you don't need the skill any more. If you need the skill, it won't atrophy. It doesn't matter if it's LLMs or calculators or what, atrophy is always a non-issue, provided the technology won't go away (you don't want to have forgotten how to forage for food if civilization collapses).
Right. But (1) no longer needing the skill of thinking seems not obviously a good thing, and (2) in scenarios where in fact there is no need for humans to think any more I would be seriously worried about doomy outcomes.
(Maybe no longer needing the skill of thinking would be fine! Maybe what happens then is that people who like thinking can go on thinking, and people who don't like thinking and were already pretty bad at it outsource their thinking to AI systems that do it better, and everything's OK. But don't you think it sounds like the sort of transformation where if someone described it and said "... what could possibly go wrong?" you would interpret that as sarcasm? It doesn't seem like the sort of future where we could confidently expect that it would all be fine.)
We don't know yet? And that's how things usually go. It's rare to have an immediate sense of how something might be harmful 5, 10, or 50 years in the future. Social media was likely considered all fun and good in 2005 and I doubt people were envisioning all the harmful consequences.
Yet social media started as individualized “web pages” and journals on myspace. It was a natural outgrowth of the internet at the time, a way for your average person to put a little content on the interwebules.
What became toxic was, arguably, the way in which it was monetized and never really regulated.
I don't disagree with your point and the thing you're saying doesn't contradict the point I was making. The reason why it became toxic is not relevant. The fact that wasn't predicted 20 years ago is what matters in this context.
I don’t do zero sum games, you can normalize every bad thing that ever happened with that rhetoric.
Also, someone benefiting from something doesn’t make it good. Weapons smuggling is also extremely beneficial to the people involved.
Yes, but if I go with your priors then all of these are similarly suspect:
- gaming
- netflix
- television
- social media
- hacker news
- music in general
- carnivals
A priori, all of these are equally suspicious as to whether they provide value or not.
My point is that unless you have reason to suspect, people engaging in consumption through their own agency is in general preferable. You can of course bring counter examples but they are more of caveats against my larger truer point.
Social media for sure, and television and Netflix in general, absolutely.
But again, providing value is not the same as something being good. A lot of people take inaccuracies by LLMs to be of high value because they come in nice wrappings along with the idea that you're always right.
This line of thinking led many Germans, who thought they were on the right side of history simply by virtue of joining the crowd, to learn the hard way in 1945.
And today's "adapt or die" doesn't sound any less fascist than it did in the 1930s.
You mean, when evaluating suppliers, do I push for those who don't use AI?
Yes.
I'm not going to be childish and dunk on you for having to update your priors now, but this is exactly the problem with speaking in aphorisms and glib dismissals. You don't know anyone here, you speak in an authoritative tone for others, and you redefine what "matters" and what is worthy of conversation as if this were up to you.
> Don’t write a blog post whining about your morals,
why on earth not?
I wrote a blog post about a toilet brush. Can the man write a blog post about his struggle with morality and a changing market?
Some people maintain that JavaScript is evil too, and make a big deal out of telling everyone they avoid it on moral grounds as often as they can work it into the conversation, as if they were vegans who wanted everyone to know that and respect them for it.
So is it rational for a web design company to take a moral stance that they won't use JavaScript?
Is there a market for that, with enough clients who want their JavaScript-free work?
Are there really enough companies that morally hate JavaScript enough to hire them, at the expense of their web site's usability and functionality, and their own users who aren't as laser focused on performatively not using JavaScript and letting everyone know about it as they are?
I understand that website studios have been hit hard, given how easy it is to generate good enough websites with AI tools. I don't think human potential is best utilised when dealing with CSS complexities. In the long term, I think this is a positive.
However, what I don't like is how little the authors are respected in this process. Everything that the AI generates is based on human labour, but we don't see the authors getting the recognition.
Website building started dying off when Squarespace launched and Wix came around. WordPress copied that, and it's been building blocks for the most part since then. There are few unique sites around these days.
Only in exactly the same sense that portrait painters were robbed of their income by the invention of photography. In the end people adapted and some people still paint. Just not a whole lot of portraits. Because people now take selfies.
Authors still get recognition, if they are decent authors producing original, literary work. But the type of author that fills page five of your local newspaper has not been valued for decades; that was filler content long before AI showed up. Same for the people that do the subtitles on soap operas, or the people that create the commercials that show at 4am on your TV. All fair game for AI.
It's not a heist, just progress. People having to adapt and struggling with that happens with most changes. That doesn't mean the change is bad. Projecting your rage, moralism, etc. onto agents of change is also a constant. People don't like change. The reason we still talk about Luddites is that they overreacted a bit.
People might feel that time is treating them unfairly. But the reality is that sometimes things just change and then some people adapt and others don't. If your party trick is stuff AIs do well (e.g. translating text, coming up with generic copy text, adding some illustrations to articles, etc.), then yes AI is robbing you of your job and there will be a lot less demand for doing these things manually. And maybe you were really good at it even. That really sucks. But it happened. That cat isn't going back in the bag. So, deal with it. There are plenty of other things people can still do.
You are no different than that portrait painter in the 1800s that suddenly saw their market for portraits evaporate because they were being replaced by a few seconds exposure in front of a camera. A lot of very decent art work was created after that. It did not kill art. But it did change what some artists did for a living. In the same way, the gramophone did not kill music. The TV did not kill theater. Etc.
Getting robbed implies a sense of entitlement to something. Did you own what you lost to begin with?
The claim of theft is simple: the AI companies stole intellectual property without attribution. Knowing how AIs are trained and seeing the content they produce, I'm not sure how you can dispute that.
Statistics are not theft. Judges have written over and over again that training a neural network (which is just fitting a high-dimensional function to a dataset) is transformative and therefore fair use. Putting it another way, me summarizing an MLB baseball game by saying the Cubs lost 7-0 does not infringe on MLB's ownership of the copyright of the filmed game.
People claiming that backpropagation "steals" your material don't understand math or copyright.
You can hate generative tools all you want -- opinions are free -- but you're fundamentally wrong about the legality or morality at play.
False equivalence - a random person can't go to a museum and then immediately go and paint exactly like another artist, but that's what the current LLM offerings allow
See Studio Ghibli's art style being ripped off, Disney suing Midjourney, etc
That's not exactly how LLMs learn either, they require huge amounts of training data to be able to imitate a style. And lots of human artists are able to imitate the style of one another as well, so I'm not sure what makes LLMs so different.
Regardless of whether you think IP laws should prevent LLMs from training on works under copyright, I hardly think the situation is beyond dispute. Whether copyright itself should even exist is something many dispute.
But DID the Luddites overreact?
They sought to have machines serve people instead of the other way around.
If they had succeeded in regulation over machines and seeing wealth back into the average factory worker’s hands, of artisans integrated into the workforce instead of shut out, would so much of the bloodshed and mayhem to form unions and regulations have been needed?
Broadly, it seems to me that most technological change could use some consideration of people.
It's not the "exact same sense". If an AI-generated website is based on a real website, it's not like photography and painting; it is the same craft being compared.
It's also important that most AI-generated content is slop. On this website most people stand against AI-generated writing slop. Also, trust me, you don't want a world where most music is AI generated; it will drive you crazy. So it's not like photography and painting, it's like comparing good-quality and shitty-quality content.
Photography takes pictures of objects, not of paintings. By shifting the frame to "robbed of their income", you completely miss the point of the criticism you're responding to… but I suspect that's deliberate.
Robbing implies theft. The word heist was used here to imply that some crime is happening. I don't think there is such a crime and disagree with the framing. Which is what this is, and which is also very deliberate. Luddites used a similar kind of framing to justify their actions back in the day. Which is why I'm using it as an analogy. I believe a lot of the anti AI sentiment is rooted in very similar sentiments.
I'm not missing the point but making one. Clearly it's a sensitive topic to a lot of people here.
Portrait photography works whether or not there is a painting of the subject... LLMs cannot exist unless specifically consuming previous works! The authors of those works have every right to be upset about not being financially compensated, unlike painters.
I think it's just as likely that business who have gone all-in on AI are going to be the ones that get burned. When that hose-pipe of free compute gets turned off (as it surely must), then any business that relies on it is going to be left high and dry. It's going to be a massacre.
The latest DeepSeek and Kimi open weight models are competitive with GPT-5.
If every AI lab were to go bust tomorrow, we could still hire expensive GPU servers (there would suddenly be a glut of those!) and use them to run those open weight models and continue as we do today.
Sure, the models wouldn't ever get any better in the future - but existing teams that rely on them would be able to keep on working with surprisingly little disruption.
Do you remember the times when "cargo cult programming" was something negative? Now we're all writing incantations to the great AI, hoping that it will drop a useful nugget of knowledge in our lap...
Hot takes from 2023, great. Work with AIs has changed since then, maybe catch up? Look up how agentic systems work, how to keep them on task, how they can validate their work etc. Or don't.
I don't know about you, but I would rather pay some money for a course written thoughtfully by an actual human than waste my time trying to process AI-generated slop, even if it's free. Of course, programming language courses might seem outdated if you can just "fake it till you make it" by asking an LLM every time you face a problem, but doing that won't actually lead to "making it", i.e. developing a deeper understanding of the programming environment you're working with.
Actually, I already prefer AI to static training materials these days. But instead of looking for static training material, I treated the AI like a coach.
Recently I had to learn SPARQL. What I did was create an MCP server to connect the AI to a graph database with SPARQL support, and then I asked it: "Can you teach me how to do this? How would I do this in SQL? How would I do it with SPARQL?" And then it would show me.
With examples of how to use something, it really helps that you can ask questions about what you want to know at that moment, instead of just following a static tutorial.
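To give a flavor of the kind of side-by-side answer that approach produces, here is a made-up example (the table names, prefixes, and properties are all hypothetical, purely for illustration):

```sparql
# Hypothetical question: "find each person and the city they live in"
#
# SQL version, assuming tables person(name, city_id) and city(id, name):
#   SELECT person.name, city.name
#   FROM person JOIN city ON person.city_id = city.id;
#
# SPARQL version over a graph using a made-up ex: vocabulary:
PREFIX ex: <http://example.org/>

SELECT ?personName ?cityName
WHERE {
  ?person ex:name    ?personName ;
          ex:livesIn ?city .
  ?city   ex:name    ?cityName .
}
```

The join in SQL becomes a shared variable (`?city`) between triple patterns in SPARQL, which is exactly the kind of mapping it helps to have an AI restate for whatever query you're stuck on.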
There's a simple and effective escape hatch: study abroad. Europe, Australia, South America, Canada even. Some countries are more affordable than others but the most expensive (by far) option is staying in the US.
From the point of view of developing your brain, leaving your country is a free education in itself. There is also the effect of embedding yourself in a network of expats made up of the best and brightest from countries all over the world. That all comes on top of the education you receive. And if you are less in it for the intellectual stuff and are more into drinking and partying, college life in the US is pretty lame compared to some university towns across the world. Cheaper, wilder, better.
There is an actually easy and effective escape hatch right here in the US:
Community college to state school path.
You can get a full bachelor's degree for ~$35k. All four years, $35k. Not per year. Full degree. $35k.
And that's before any scholarships or grants.
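As a rough sanity check on that figure (the tuition numbers below are illustrative ballpark assumptions, not quotes from any particular school):

```python
# Sketch of the community-college-to-state-school path.
# Both per-year figures are hypothetical ballpark numbers.
cc_per_year = 4_000      # two years of community college tuition
state_per_year = 13_500  # two years of in-state public university tuition

total = 2 * cc_per_year + 2 * state_per_year
print(total)  # 35000 -- in line with the ~$35k claim
```

Actual numbers vary a lot by state, but the structure of the saving is the same: the first two years are the cheap ones.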
Kids and parents are just insane though, and want to flex about the college they're going to from day one. It's become a ritualistic practice, with social shame attached to going to community college.
A degree from the "right" college surely helps at certain firms? Though it must be a small number of top ones, as most can't afford to be that choosy about their candidates.
Whether that is a sensible strategy for the firm (it biases the candidate pool towards those who can pay top college fees) is another question.
In those cases the sensible strategy for students/parents is to get most of the degree at a local college and then transfer to a prestigious college at the very end of one's studies.
If I remember right, it is not that easy to get into the state school in our state. UW engineering departments required a 4.0 GPA last year. Kids with a GPA of 3.9 or less had zero chance of getting into the UW engineering schools.
That is probably not true if you are transferring from a community college after two years. It's entering as a freshman direct from high school that has all the barriers.
I can vouch for studying abroad. But can you get loans and scholarships for it as easily as for studying at home? Even if the university is free, you must pay for food and housing.
Studying abroad in Canada is not nearly as affordable. Tuition alone for international students here is exorbitant ($40,000/year and up). We don’t give any subsidies whatsoever for international students. Instead, we use their tuition fees to subsidize the tuition of our domestic students.
Yeah, Germany must be one of the few still attracting foreign students with no/low fees? I know a lot of courses are taught in English; landing a job afterwards needs fluent German though.
I wonder how long it will last? UK Universities are now for rich foreigners only. It does mean great options for Chinese food near student halls though.
That can still be too much. Someone studying abroad usually isn't allowed to work, so they're making zero income. If they come from a poor family, they have little if any reserves. So everything must be either provided by the college or covered by grants/loans.
Except that some universities allow foreign students to take on-campus jobs, which would probably pay enough. Or, for a PhD, the university usually pays you.
> Someone studying abroad usually isn’t allowed to work
Citation needed, because I'm almost certain that not being allowed to work as a foreign student is the exception, not the rule. A surface-level Google search for Western European countries (BE/NL/FR/DE, typical places to study abroad) shows that all of them allow non-EU students to get a job. You'll typically see these student workers in bars, restaurants, grocery stores, ...
Re the parent comment stating that 500 EUR rent is potentially too much for a foreign student to afford: I can imagine it might be. But it's also too much for plenty of native students, and a large share of them take these student jobs to be able to afford their student housing and the like.
The university is the signal. Studying at Stanford or MIT gives you a better (professional) future in the US, while the average American doesn't know that universities such as the École polytechnique, UNAM, or UBA exist. They will clearly hire from the top US-ranked ones.
> There's a simple and effective escape hatch: study abroad. Europe, Australia, South America, Canada even. Some countries are more affordable than others but the most expensive (by far) option is staying in the US.
I mean, good luck finding a job in the US when your degree is not from the US (or maybe Canada). Most industries don't hire folks with overseas degrees.
There are lots of reasons why keeping data centers on the ground might be cheaper, but the article seems to be skipping over a few things.
1) The ISS is about 30 years old. It's hardly the state of the art in solar technology. Also, in space it's much easier to get light onto solar panels for a larger part of the time; permanently, in some orbits. And of course there is a 0% chance of clouds or other obstructions.
2) We'll soon have Starship and New Glenn. Launching a lot of mass to orbit is a lot cheaper than it was when the Space Station was launched.
3) The article complains about lack of bandwidth. Starlink serves millions of customers with high-speed, low-latency internet via thousands of satellites.
4) There have been plans for large-scale solar panels in space for the purpose of beaming energy down in some form. This is no longer as much science fiction as it used to be.
5) Learning effects are a thing. Based on thirty years ago, this is a bad idea. Based on today, it's still not great. But if things continue to improve, some things become doable. Starlink works today, and in terms of investment it's not much worse than a lot of the terrestrial communication networks it replaces. The notion would have been ridiculous a few decades ago, but it no longer is.
In short, counter arguments to articles like this almost write themselves.
Solar panel performance is not the limiting factor in space; thermal management is. Better solar panels don't help you there. Neither does permanent sunshine -- without the capability to radiate more heat at night, you've made the thermal management problem immensely worse.
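A back-of-the-envelope Stefan-Boltzmann calculation shows the scale of the problem (the emissivity and radiator temperature below are assumed values, and sunlight absorbed by the radiator is ignored, which only makes things worse):

```python
# Radiator area needed to reject 1 MW of waste heat purely by radiation.
# Assumptions: emissivity 0.9, radiator at 300 K, no absorbed sunlight.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

power_w = 1e6     # 1 MW of waste heat
emissivity = 0.9
temp_k = 300.0

area_m2 = power_w / (emissivity * SIGMA * temp_k**4)
print(round(area_m2))  # roughly 2400 m^2 of radiator per megawatt
```

That is on the order of half a football field of radiator per megawatt, which has to be launched, deployed, and kept pointed away from the sun -- and a multi-megawatt data center needs several of them.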
Rockets: Launching no mass to orbit is even cheaper still.
Bandwidth: You do realize that even Starlink speeds are crazy slow and high-latency compared to data center optical connections? Fiber and copper always win out over wifi. With space, you are stuck with wifi. (Oversimplified, but accurate.)
Space solar power: there has been talk of this for half a century, yes. It never materialized because, like space data centers, it doesn't make economic sense.
The thermal budget is impossible to escape. Maybe in an asteroid it could be possible: the whole surface becomes a thermal radiator and the whole asteroid a thermal mass. But still no convection.
> 1) ISS is about 30 years old. It's hardly the state of the art in solar technology.
Domestic solar panels are heavy, and don't need to deal with hypersonic sandblasting. Even at that height, you are in shadow every 90 minutes.
> 3) The article complains about lack of bandwidth. Star Link serves millions of customers with high speed, low latency internet via thousands of satellites.
Right. First, power and heat are a massive pain to deal with. You need megawatts to run a datacentre. A full rack of GPUs (48U, 96 GPUs) draws around 40-70 kW. It also weighs a literal ton.
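That per-rack figure is consistent with simple per-GPU arithmetic (the ~700 W per-GPU number below is an assumption, roughly an H100-class accelerator):

```python
# Sanity check on the 40-70 kW per-rack claim.
gpus_per_rack = 96
watts_per_gpu = 700  # assumed board power for an H100-class GPU

rack_power_w = gpus_per_rack * watts_per_gpu
print(rack_power_w / 1000)  # 67.2 kW, at the top of the quoted range
```

And every one of those watts has to come from solar panels and leave through radiators.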
You also need to be able to power that during the time you are in darkness. BUT! When you are zooming around the earth every 90 minutes, you can't maintain a low-latency connection, because of the constantly changing distance between you and the datacentre.
That means geostationary, as that solves most of your power issues, but now you have latency and bandwidth issues. (Oh, and power, the inverse square law, and bandwidth are related.)
> Special cases of the Sun-synchronous orbit are the noon/midnight orbit, where the local mean solar time of passage for equatorial latitudes is around noon or midnight, and the dawn/dusk orbit, where the local mean solar time of passage for equatorial latitudes is around sunrise or sunset, so that the satellite rides the terminator between day and night.
The dawn dusk orbit is in constant sunlight. The noon-midnight orbit isn't.
Those orbits (and their corresponding constellations) lack 100% availability to a ground station.
Furthermore, a polar orbit launch is quite a bit more expensive, since it forgoes the boost from Earth's rotation and any plane change to reach it later is costly.
It’s not about things improving. This isn’t a great idea that’s not yet feasible, the way ubiquitous satellite communication was. This is a fundamentally bad idea based on the physics, not the technology.
Satellites are so much more expensive than just running a wire, so why is satellite communication desirable? Because one satellite can serve many remote places for less than it costs to run a wire to all of them, it can serve the middle of the ocean, it can serve moving vehicles. These are fundamental advantages that make it worthwhile to figure out how to make satellite communication viable.
Data centers in space offer no fundamental advantages. They have some minor advantages. Solar power is somewhat more available. They can reach a larger area of ground with radio or laser communication. And that's about it. Stack those advantages against the massive disadvantages in cooling, construction, and maintenance. Absent breakthroughs in physics that allow antigravity tech or something like that, these disadvantages are fundamental, not merely a matter of insufficient technology.
It's less about who is right and more about economic interests and lobbying power. There's a vocal minority that is just dead set against AI, using all sorts of arguments related to religion, morality, fears about mass unemployment, all sorts of doom scenarios, etc. However, this is ultimately a minority without a lot of lobbying power. And the louder they are and the less of this stuff actually materializes, the easier it becomes to dismiss a lot of the arguments. Despite the loudness of the debate, the consensus is nowhere near as broad on this as it may seem to some.
And the quality of the debate remains very low as well. Most people barely understand the issues. That includes many journalists who are still mostly hung up on the whole "hallucinations can be funny" thing. There are a lot of confused people spouting nonsense on this topic.
There are special interest groups with lobbying power: media companies with intellectual property, actors worried about being impersonated, etc. Those have some ability to lobby for changes. And then you have the wider public, which isn't that well informed and has sort of caught on to the notion that ChatGPT is now definitely a thing that is sometimes mildly useful.
And there are the AI companies, which are definitely very well funded and have an enormous amount of lobbying power. They can move whole economies with their spending, so they are getting relatively little pushback from politicians. Political Washington and California run on obscene amounts of lobbying money, and the AI companies can provide a lot of it.
A vocal minority led to the French Revolution, the Bolshevik Revolution, the Nazi party and the modern climate change movement. Vocal minorities can be powerful.