Hacker News | xiphias2's comments

The number of times ChatGPT o3 has helped me with medical issues makes me think it has already saved many more lives.

Of course I'm not trying to suggest that these deaths are not tragedies, but the help it gives is so much greater.


The original code is really nice:

  // golfed minslides, 173 bytes
  let a=document.getElementsByClassName("slide"),b=0,c=a.length-1;
  document.addEventListener("keypress",({key:d})=>{b+=("j"==d)-("k"==d),b=0>b?0:b>c?c:b,a[b].scrollIntoView()})


Even for non-developer use cases, o3 is a much better model for me than GPT-5 on any setting.

30 seconds to 1 minute is just about as long as I'm patient enough to wait, since that's roughly the time I spend writing the question.

Faster models just make too many mistakes / don't understand the question.


Completely agree. This is why they brought back the “legacy models” option.

GPT-$ is the money GPT, in my opinion: the one where they were able to maximise benchmarks while being very low-compute to run, but which is absolutely garbage in the real world.


An important advantage of aliases was not mentioned: I see everything in one place and can easily build aliases on top of other aliases without much thinking.

Anyways, my favourite alias that I use all the time is this:

    alias a='nvim ~/.zshrc && . ~/.zshrc'
It solves the "not loaded automatically" part, at least for the current terminal.
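A sketch of the "aliases on top of aliases" part (the alias names here are made up for illustration): because the shell re-expands the first word of a command, alias definitions can chain:

```shell
# bash needs alias expansion enabled in scripts; zsh expands by default.
shopt -s expand_aliases 2>/dev/null || true

alias l='ls -l'
alias la='l -A'     # expands to: ls -l -A
alias lah='la -h'   # expands to: ls -l -A -h

lah /tmp            # runs: ls -l -A -h /tmp
```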


I'm sure he said bad things about Danya, and even now he's protecting himself when he should simply be sorry for the terrible thing that has happened. But Danya was generally better liked than Kramnik, so Kramnik can't be held responsible or accountable for something he had no power over (Danya's life).

If you are famous, you will have haters, it's part of the deal.

I agree with the crowd who say that mental health issues should be taken seriously instead of swept under the rug.


There was a very clear turning point: when Amit Singhal was pushed out for sexual harassment in the MeToo era. He was the heart of search quality, but he went too far when he was drinking.


I'm not sure it has changed.

Maybe autovectorization works, but can I just write a few ARM64 instructions on my Mac in stable Rust (not experimental/nightly), as I can in C/C++ by just including a few ARM-specific header files?
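For what it's worth, my understanding is that stable Rust has exposed the NEON intrinsics through `std::arch::aarch64` since around 1.59, roughly the counterpart of `<arm_neon.h>`. A minimal sketch under that assumption (the `add4` wrapper and the scalar fallback are made up; `vld1q_f32`/`vaddq_f32`/`vst1q_f32` are the usual NEON names):

```rust
// On aarch64 targets, stable Rust exposes NEON intrinsics through
// std::arch::aarch64, roughly the counterpart of <arm_neon.h> in C/C++.
#[cfg(target_arch = "aarch64")]
fn add4(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
    use std::arch::aarch64::*;
    // NEON is a baseline feature on aarch64-apple-darwin, so these
    // intrinsics are callable; they are still `unsafe` fns.
    unsafe {
        let va = vld1q_f32(a.as_ptr()); // load 4 lanes
        let vb = vld1q_f32(b.as_ptr());
        let vc = vaddq_f32(va, vb);     // lane-wise add
        let mut out = [0.0f32; 4];
        vst1q_f32(out.as_mut_ptr(), vc);
        out
    }
}

// Scalar fallback so the example also compiles off-ARM.
#[cfg(not(target_arch = "aarch64"))]
fn add4(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
    [a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3]]
}

fn main() {
    println!("{:?}", add4([1.0, 2.0, 3.0, 4.0], [10.0, 20.0, 30.0, 40.0]));
    // [11.0, 22.0, 33.0, 44.0]
}
```

So it's no longer a nightly-only thing for aarch64, though other architectures' intrinsics stabilized on different timelines.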


My guess is that they moved the systolic arrays inside the GPU cores just like how it's done in modern NVIDIA chips.

That's the only way to speed up MLX 4x compared to M4.


While it's all true, I think it's exciting that there's a country that's not afraid of betting when all the signs point to AI being accelerated.

USA already bet on software when it let China overtake manufacturing.

Now the only thing worse than betting on AI would be slowing it down inside the USA.


> all the signs point to AI being accelerated

Feels like a feedback loop. It's exciting that we're all in on AI because we're all in on AI!


What is teamwork if not a feedback loop?


What signs point to any sort of acceleration besides in spending?

Most major companies have released new versions in the past year, but if people were asked to blindly determine whether they were using the newer or older version, I suspect the results would be close to random. It seems to me that the difference between versions is sharply decreasing in a way that seems to be asymptotic, similar to what happens in literally every other domain with neural networks.

I also think it's clear that the difference between the various choices is also diminishing. Aside from certain manually designed idiosyncrasies (like ChatGPT's obsequiousness), I think people assessing which model they're using would also be mostly random. Somewhat surprisingly, even in the 'LLM arena' [1], where you get to compare output side by side, the difference between models is approaching statistical 0!

[1] - https://huggingface.co/spaces/lmarena-ai/lmarena-leaderboard


Mainly growing energy demand of AI deployments leading to capacity increases becoming the bottleneck. The key challenge is going to be scaling up energy production in the US.


China's also betting on AI. There's massive domestic effort towards model parity and custom ICs.


Additionally, to my knowledge, China is also doing significantly better than the US on energy production, and clean energy at that.

In the US, we are terrified of nuclear and the administration is trying to make economically worse energy production the norm because they are stuck in the past.

If there is an AI race, I have zero doubt that China WILL win, simply because of energy production and the government's willingness to pour money in. It's a foregone conclusion at this point, in my opinion, because the time to build new energy sources was yesterday. The only ways I see China losing are a major debt crisis or getting into a war.

There is no chance, in my opinion, that the US federal government will get off their butts suddenly to fund AI or infrastructure because they are so busy worrying about less than 1% of the population who don't affect them but that they find 'icky'.

Woke isn't destroying the US; it's the people who are busy "judging their neighbors' porches" while the neighborhood is burning down.


> If there is an AI race, I have zero doubt that China WILL win.

A race to what? What does the winner get? What does the loser get? What does 'winning' even fucking mean? My statistical next token predictor is better than your statistical next token predictor?


Using AI in commercial applications to gain a strategic advantage over your competitors. China will use AI to do things that the USA will shudder at, they already have automated ports that are 10 times more efficient than ours, and we can’t even think about upgrading because our longshoremen unions are too strong.


As long as you realize a strong Chinese dollar is the worst thing that could happen to China right now, and as long as we just excuse away the whole “LLMs make more money than they cost” (which they absolutely do not) sure thing.


Strong Chinese yuan isn’t that much of a concern, only 15% of the economy is exports.


This is something I don't get. The general cost of living is skyrocketing, largely due to this push, and some of the population thinks there is prosperity for all on the other side of the meat-processing machine we're pushing everyone toward.

Groceries are going to get more expensive. Every dollar we spend on over building AI infrastructure is a dollar we never get back.

We could use this money for other things, such as healthcare and fixing the broken parts of our education system. Instead people are getting antsy to chat with a summarized version of all the garbage on the internet.


the idea is to use AI to build super productive farms and greenhouses, improve the capability to do that in urban areas, automated and super efficient transportation. but it's not just AI doing all that, it's someone who wants to start a business using AI himself to figure out how to best start up a greenhouse in his community and set up the tech infra needed, including the API for people to be able to view available produce, estimates on availability, initiate trades, etc. (this greenhouse thing is just one example).

another example could be someone wants to build an ecosystem monitoring station to monitor the nearby ravine (pollution levels with rainfall and other events etc.) and air quality over time. this is just a small datapoint, but if people all over the place build their own ecosystem/weather monitoring things using basic electronics ordered from the internet and all plug them into a standard observability software system, then that could provide some pretty awesome outcomes, including figuring out the best way to clean polluted water (because some of the places will surely have implemented varying methods of sanitizing their own water).


Okay this is even more pie-in-the-sky. You can build productive greenhouses today, we don’t need AI to summarize the internet to figure out how to do it. There are no secrets hidden in the generative token tea-leaves that reveals better greenhouses. Urban farming will never be profitable or sustainable in a dense urban center. It’s been done.

“Standard observability software” whatever that is also does not require AI to build. We need 10GW to calculate rainfall for who? What benefit over how we currently calculate rainfall? This rainfall is hallucinated through summarization?


i can't come up with all the examples. i'm not a farming or ecology expert. so thanks for the information. do some thinking


> the idea is to use AI to build super productive farms and greenhouses

How? What are the mechanisms in which AI will lead to farms and greenhouses being more productive? How will AI improve the existing automation that already exists for the farming sector, and has existed for a hundred years?


fully automated with robots. the AI designs thousands of experiments and deploys them at scale. idk, i'm not an agriculture expert. it was just one example. what other possibilities are there?

electronics recycling, disassembling old computers to get the raw materials into a form that can be used again. we'll need programs to automate the production and testing and analysis of the robots that will recycle the components.


> idk, i'm not an agriculture expert.

That much is obvious. The fact that you’re straining so hard to come up with these bongcloud “ideas” should clue you in that maybe this isn’t the revolutionary tech that the suits are selling it as


> The general cost of living is skyrocketing, largely due to this push

How are cost of living increases tied to AI investment?


There are only two ways it could: either through power cost increases, which make up only a small portion of the cost of living, or by somehow believing it responsible for inflation. But the latter doesn't make any sense, especially as they believe the money spent to be 'lost'.


How does that not make sense? Inflation is lost money from a consumer viewpoint.


Money spent on AI is taking GPUs and software engineers off the market, as well as the energy needed to drive AI data centers. AI doesn’t need food, well, beyond the SWEs, it shouldn’t be inflating food at all. The only thing I could think of is AI investment drawing away agriculture investment or something. Or maybe AI data centers replacing farmland?


This is an intentionally disingenuous question right?

Where do you think the tax money being given away to these huge datacenters is going?

What downstream effects does a major increase in demand for limited consumer services and goods have across the board?


No, it was a genuine question. Sorry if I’m asking basic questions, but I actually don’t find your questions as responses help make things any clearer.


The collective West enjoyed a period of prosperity because it had a massive technological advantage. The Spanish had guns, Native Americans had sticks. The British had steam engines, the Indians had cows. The Americans had computers, the Soviets had the abacus. Now the Chinese have AI, we have if statements. Losing the AI race is an existential threat to our civilization.


Yet no one can detail how it's an existential threat. No real use cases for larger society from AI models, but we must make everyone's life worse so we don't lose the race to nowhere.


> The collective west enjoyed a period of prosperity because it had massive technological advantage. The Spanish had guns,

as Hilaire Belloc wrote in 1898: "Whatever happens, we have got / The Maxim gun, and they have not."

In that case it was the British who had just slaughtered a lot of Matabele in Africa.

> Losing the AI race is an existential threat to our civilization.

Lol. If winning it means a sea of hallucinated "factual" text and deepfake videos, then that death of truth is also a threat. Being rid of that is no threat at all.


> The general cost of living is skyrocketing

there is no evidence of this


What's not to get? Groceries, healthcare, and education aren't hard for them to afford, so why would they want to invest their money and time in making those things more affordable for poor people? Better to invest all their resources making a robot slave army. Then it doesn't matter how unaffordable anything is to the poors; they can just die. Win win.


Just you switching away from Google is already justifying 1T infrastructure spend.

Just think about how much more effective advertisements are going to be when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.


> Just think about how much more effective advertisements are going to be when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.

Optimistic view: maybe product quality becomes an actually good metric again as the LLM will care about giving good products.

Yea, I know, I said it's an optimistic view.


Has a tech company ever taken 10s or 100s of billions of dollars from investors and not tried to optimize revenue at the expense of users? Maybe it's happened, but I literally can't think of a single one.

Given that the people and companies funding the current AI hype so heavily overlap with the same people who created the current crop of unpleasant money printing machines I have zero faith this time will be different.


What does it mean for the language model to "care" about something?

How would that matter against the operator selling advertisers the right to instruct it about what the relevant facts are?


I think it might be like when Grok was programmed to talk about white genocide and to support Musk's views. It always shoehorned that stuff in, but when you asked about it, it readily explained that it seemed like disinformation and openly admitted that Musk had a history of using his business to exert political sway.

It's maybe not really "caring" but they are harder to cajole than just "advertise this for us."


For now anyways. There's a lot of effort being placed into putting up guardrails to make the model respond based on instructions and not deviate. I remember the crazy agents.md files that came out, from Anthropic I believe, with repeated instructions on how to respond. Clearly it's a pain point they want to fix.

Once that is resolved then guiding the model to only recommend or mention specific brands will flow right in.


Golden Gate Claude suggests they know how to do that already.

https://www.anthropic.com/news/golden-gate-claude


Optimistic view #1: we'll have AI butlers between the pane of glass to filter all ads and negativity.

Optimistic view #2: there is no moat, and AI is "P=NP". Everything can be disrupted.


large language models don't "care" about anything, but the humans operating openai definitely care a lot about you making them affiliate marketing money


1 Trillion US dollars?

1 trillion dollars is justified because people sometimes use ChatGPT instead of Google?


Yes. Google Search on its own generates about $200b/y, so capturing Google Search's market would be worth $1t based on a 5x multiple.

GPT is more valuable than search because GPT has more control over the content than Search has.


Why is a less reliable service more valuable?


It doesn't matter if it's reliable.


Google search won’t exist in the medium term. Why use a list of static links you have to look through manually if you can just ask AI what the answer is? Ai tools like chatgpt are what Google wanted search to be in the first place.


Because you cannot trust the answers AI gives. It presents hallucinated answers with the same confidence as true answers (e.g. see https://news.ycombinator.com/item?id=45322413 )


Aren't blogspam/link farms the equivalent in traditional search? It's not like Google gives 100% accurate links today.


exactly. AI is inherently more useful in its form.


for now


Google's search engine is the single most profitable product in the history of civilization.


In terms of profit given to its creators, “money” has to be number one.


ChatGPT will have access to a tool that uses real-time bidding to determine what product it should instruct the LLM to shill. It's the same shit as Google but with an LLM which people want to use more than Google.


> Just think about how much more effective advertisements are going to be when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.

This has been the selling point of ML based recommendation systems as well. This story from 2012: https://www.forbes.com/sites/kashmirhill/2012/02/16/how-targ...

But can we really say that advertisements are more effective today?

From what little I know about SEO it seems nowadays high intent keywords are more important than ever. LLMs might not do any better than Google because without the intent to purchase pushing ads are just going to rack up impression costs.


> Just you switching away from Google is already justifying 1T infrastructure spend.

How? OpenAI are LOSING money on every query. Beating Google by losing money isn't really beating Google.


How do we know this?


Many of the companies (including OpenAI) have even claimed the opposite. Inference is profitable; it's R&D and training that's not.


It's not reasonable to claim inference is profitable when they've never released those numbers. The price they charge for inference is not indicative of the price they pay to provide it. And at least in OpenAI's case, they're getting a fantastic deal on compute from Microsoft, so even if the price they charge reflected what they pay, it still wouldn't reflect a market rate.


OpenAI hasn't released their training cost numbers but DeepSeek has, and there's dozens of companies offering inference hosting of open weight models for the very large models that keep up with OpenAI and Anthropic, so we can see what market rates are shaking out to be for companies that have even less economies of scale. You can also make some extrapolations from AWS Bedrock pricing and can also investigate inference costs yourself on local hardware. Then look at quality measures of quantizations that hosting providers do and you get a feel for what hosting providers are doing to manage costs.

We can't pinpoint the exact dollar amount OpenAI categorically spends but we can make a lot of reasonable and safe guesses, and all signs points to inference hosting being a profitable venture by itself, with training profitability being less certain or being a pursuit of a winner-takes-all strategy.


DeepSeek on GPUs is like 5x cheaper than GPT.

And TPUs are like 5x cheaper than GPUs, per token.

Inference is very much profitable.


You can do most anything profitably if you ignore the vast majority of your input costs.


Statistically this is obvious: most people use the free tier, their total losses are enormous, and their revenue is not great.


No, it’s not obvious. You can’t do this calculation without having numbers, and they need to come from somewhere.


Sam has claimed that they are profitable on inference. Maybe he is lying but I don't think speaking so absolutely about them losing money on that is something you can throw around so matter of fact. They lose money because they dump an enormous amount of money on R&D.


> when LLMs start to tell you what to buy based on which advertiser gave the most money to the company.

isn't that quite difficult to do consistently? I'd imagine it would be relatively easy to take the same LLM and get it to shit talk the product whose owners had paid the AI corp to shill. That doesn't seem particularly ideal.


I mean, I think ads will be about as effective as they are now. People need to actually buy more, and if you fill LLMs with ad generation, the results will just get shitty the same way Google's search results did. It's not a trillion-dollar return plus 20%, like you'd want out of that investment.

