Hacker News | letmevoteplease's comments

   rmdir /s /q Z:\ETSY 2025\Antigravity Projects\Image Selector\client\node_modules.vite
Running this command in cmd attempts to delete (I ran without /q to check):

Z:\ETSY (-> Deletes if it exists.)

"2025\Antigravity" (-> The system cannot find the path specified.)

"Projects\Image" (-> The system cannot find the path specified.)

"Selector\client\node_modules.vite" (-> The system cannot find the path specified.)

It does not delete the Z:\ drive.
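The behavior above follows from cmd splitting the unquoted path at spaces, so rmdir receives four separate arguments. A rough Python sketch of that splitting (purely illustrative; cmd's real parser is more involved):

```python
# Illustrative only: with no quotes, cmd treats each space-separated
# token of the path as a separate argument to rmdir.
cmd_line = r"Z:\ETSY 2025\Antigravity Projects\Image Selector\client\node_modules.vite"
args = cmd_line.split(" ")  # naive space splitting
print(args)
# Four targets: Z:\ETSY, 2025\Antigravity, Projects\Image,
# Selector\client\node_modules.vite -- matching the errors above.
```

Quoting the whole path, i.e. rmdir /s /q "Z:\ETSY 2025\Antigravity Projects\Image Selector\client\node_modules.vite", keeps it a single argument.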


"The latest survey data available, compiled by researchers at CAS, revealed that most experts hold negative attitudes towards LLM development"

Let's check the source...[1]

"The survey was conducted in 2021 from May to July."

...

[1] https://web.archive.org/web/20250903025427/https:/long-term-...


I'd love to be greeted by a robot when arriving at a hotel. Of course there's the novelty factor, but even without that, self-checkouts show that many people prefer interacting with a machine over a human for service.

More importantly, who wants to stand behind a desk 8 hours a day and handle fussy customers? Probably some people, but the main motivation for the average hotel clerk is receiving money. Can we reorganize the economy so robots perform this kind of mundane work, while humans still receive money but can spend their time on more meaningful activities than standing behind a desk? I think a future like that is possible although it remains to be seen whether we will get it.


Maybe we live in different worlds, but I don’t know anyone who is especially happy when they get AI support rather than a human.

Service roles that are high-contact, with lots of human interaction, should have a human to deal with.


The source of this claim is a tweet.[1] The tweet screencaps a mathematician who says they talked to an IMO board member who told them "it was the general sense of the Jury and Coordinators that it's rude and inappropriate for AI developers to make announcements about their IMO performances too close to the IMO." This has now morphed into "OpenAI deliberately ignored the requests of IMO organizers to not publish AI results for some time."

[1] https://x.com/Mihonarium/status/1946880931723194389


The very tweet you're referencing: "Still, the IMO organizers directly asked OpenAI not to announce their results immediately after the olympiad."

(Also, here is the source of the screencap: https://leanprover.zulipchat.com/#narrow/channel/219941-Mach... )


The tweet is not an accurate summary of the original post. The person who said they talked to the organizer did not say that. And now we are relying on a tweet from a person who said they talked to a person who said they talked to an organizer. Quite a game of telephone, and yet you're presenting it as some established truth.


"According to a friend, the IMO asked AI companies not to steal the spotlight from kids and to wait a week after the closing ceremony to announce results." I don't see much reason for the poster to lie here. It also aligns with what the people on the leanprover forum are saying, and, most importantly, with DeepMind not announcing their results yet. Edit: multiple other AI research labs have also claimed that IMO asked them to not announce their results for some time (e.g. https://x.com/HarmonicMath/status/1947023450578763991 )


I don't agree that textual, fictional explicit content involving minors is "fairly universally considered harmful". Such content is allowed on large platforms like Archive of Our Own or Japan's Shosetsuka ni Naro. I think "don't think it's harmful, but not willing to defend" is a pretty typical attitude.


The article says "expert testers."

"Evaluations by expert testers showed that o3-mini produces more accurate and clearer answers, with stronger reasoning abilities, than OpenAI o1-mini. Testers preferred o3-mini's responses to o1-mini 56% of the time and observed a 39% reduction in major errors on difficult real-world questions."


Those are two different sentences. The second sentence doesn't refer to experts explicitly.


ChatGPT, Claude and Gemini get everything correct except who is holding the bucket. Even the QvQ attempt in your screenshot would have seemed like complete magic a couple years ago.


The o1 release blog post contains 8 full examples of o1 chains of thought (not the summarized versions visible to users). They're English.

https://openai.com/index/learning-to-reason-with-llms/#chain...

I have seen the summaries dip into completely random languages like Thai, so it might switch between languages occasionally.


The video (https://www.youtube.com/watch?v=uCvKPBebNTk) was taken down 33 minutes before it was set to premiere, and now the account (https://www.youtube.com/@PepMangione) is deleted. The thumbnail had "The Truth will set me free" written in binary. Screenshot: https://i.imgur.com/ovODSvx.png

Here is a mirror of the short video linked in the OP (no real content, just a countdown and date): https://files.catbox.moe/jxtf97.mkv
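For reference, a minimal Python sketch of decoding that kind of binary-encoded text (the string below is a made-up example, not the actual thumbnail contents):

```python
def decode_binary(bits: str) -> str:
    """Decode space-separated 8-bit binary groups as ASCII text."""
    return "".join(chr(int(group, 2)) for group in bits.split())

print(decode_binary("01001000 01101001"))  # prints "Hi"
```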


Probably was fake, but if it was real, not very smart of him to have it take so long. Should have been an immediate upload so people could download it.

Maybe he was thinking they would leave it up so police could use it as potential evidence.


It would have been a thumbnail of a magnet link to a torrent if anything

I'd be shocked if someone used normal social media to distribute something like that. Meanwhile, drugs and warez and pirated content have developed a well known, well supported, internationally censorship resistant ecosystem.

Why would you not use that?


From the font and awkward positioning, I'm guessing the date was overlaid on the video with a script or something (i.e. "two days after whenever this deadman's switch gets triggered").

