SubiculumCode's comments | Hacker News

I suppose it's exciting, but whether that is a good thing depends entirely on how much you think AI technologies pose existential threats to human survival. This may sound hyperbolic, but serious people are seriously thinking about this, and they are seriously afraid.

I was having a discussion with Gemini. It claimed that because it, as a large language model, cannot experience emotion, its output is less likely to be emotionally motivated. I countered that the experience of emotion is irrelevant: Gemini was trained on data written by humans who do experience emotion, and who often wrote to express that emotion, so Gemini's output can be emotionally motivated by proxy.

I definitely would be okay if we hit an AI winter; our culture and world cannot adapt fast enough for the change we are experiencing. In the meantime, the current level of AI is just good enough to make us more productive, but not so good as to make us irrelevant.

I hope this will happen, too. I think it might, as soon as investors realize that LLMs will not become the AGI they were sold.

Please show the evidence that we are more productive. How did you measure it?

I think negative feedback loops from AIs trained on AI-generated data might lead to a point where AI quality peaks and then slides backwards.

I would not bet against synthetic data. AlphaZero was trained only on synthetic data, it's better than any human, and it keeps getting better with more training compute. There was no negative feedback loop in the narrow cases we have tried so far. There may be trade-offs, but on net we are moving forward.

There's a pretty big difference between AlphaZero and a "generative AI" program: AlphaZero has access to an oracle that can tell it whether it's making valid moves and winning games.

By comparison, getting accurate feedback on whether facts are correct in a piece of text (for example) is much more difficult and expensive. At least, presumably that's why AI companies publish staged demo videos where the AI still makes factual errors half the time.


Automatic verification (an oracle) is being used today to create synthetic data for LLMs, so I don't see a big difference versus AlphaZero. While there's no way to ensure that any single synthetic reasoning trace is correct, as long as it leads to the correct answer according to the verifier, the law of large numbers should take care of that.
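
As a minimal sketch of that filtering (hypothetical function names; real pipelines are far more involved): sample many candidate traces per problem, keep only those whose final answer passes the verifier, and train on the survivors.

    def generate_verified_traces(sample_trace, verify_answer, problems,
                                 samples_per_problem=16):
        # sample_trace(problem) -> (reasoning_trace, final_answer)
        # verify_answer(problem, answer) -> bool  -- the "oracle"
        dataset = []
        for problem in problems:
            for _ in range(samples_per_problem):
                trace, answer = sample_trace(problem)
                # Keep a trace only if its final answer passes the
                # verifier; individual reasoning steps are never
                # checked directly. Over many samples, traces that
                # reach correct answers dominate the training set.
                if verify_answer(problem, answer):
                    dataset.append((problem, trace))
        return dataset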

The problem is that it's difficult to create verifiers for many things we care about, like architectural taste. So I expect to see superhuman capabilities on the things we can build verifiers for; for everything else it's harder to predict. We may see transfer learning or we may see collapse. My money would be more on transfer learning.


Transfer learning is one of the biggest unsolved problems in AI. And we are nowhere near solving it or even understanding how to go about it from an algorithmic perspective. We will definitely see collapse of the current hype train before we understand and employ effective transfer learning.

AI will radically leap forward in specialized function over the next decade. That's what everybody should be focusing on. It'll rapidly splinter and acquire dominance over the vast minutiae. The intricacy of the endeavor will be led by the AI itself, as it'll flywheel itself into becoming an expert at every little thing far faster than we can. We're just seeding that possibility now. Not only will it not slide backwards, it'll leap a great distance forward from where it is now.

Mainframes -> desktop computers -> a computer in every hand

Obese LLMs you visit -> agents riding with you wherever you are, integrated into your life and things -> everything everywhere, max specialization and distribution into every crevice, dominance over most tasks whether you're there and active or not

They haven't even really started working together yet. They're still largely living in sandboxes. We're barely out of the first inning. Pick a field you can name and it's likely hardly even at the first pitch, e.g. aircraft/flight.

In hindsight people will (jokingly?) wonder whether AI self-selected software development as one of its first conquests, as the ultimate foot in the door so it could pursue dominion over everything else (of course it had to happen in that progression; it'll prompt some chicken or the egg debates 30-50 years out).


Ah, the old "everything I'm into is the Model T/iPhone" argument, which is why I'm programming in my metaverse home on the blockchain.

Thank goodness we have version control systems then.

"Version control systems", in case of AI, mean that their knowledge will stay frozen in time, and so their usefulness will diminish. You need fresh data to train AI systems on, and since contemporary data is contaminated with generative AI, it will inevitably lead to inbreeding and eventual model collapse.

We are just at the beginning of integrating external tools into the process and developing complex cognitive structures. The LLM is just one part of it. Until now it was cheaper and easier to improve that part, especially when other work would be rendered obsolete by LLM improvements.

The amount of human suffering and death that could be massively mitigated by advanced AI is overwhelmingly worth the unknown risk, in my opinion. If people close to you had died from something where medicine or healthcare resources were close, but not quite there, to allowing them to survive, you might feel the same.

I hate this argument, because all you have to do is look around the world today to see that massively powerful technology controlled by only a few sure isn't leading to the "think of all the diseases we can cure!" utopia you describe.

Many, many people around the world die all the time from easily curable and preventable diseases; we just choose not to prevent them. This is largely not a technology problem. Just look at PEPFAR, which saved tens of millions of lives from HIV/AIDS. We simply decided to stop funding it: https://en.wikipedia.org/wiki/President%27s_Emergency_Plan_f...


Yeah, I'm pretty sure someone could make money by building a cult following around a live-streamed AI spouting spiritual nuttery with a synced avatar and voice, even if it hooks only one obsessed follower per million impressions. The OnlyFans-type industries already depend on gaining just a few "whales".

This is part of it, something I am sure most celebrities face. However, I also think that the article isn't reporting/doesn't know the full story, e.g. mental illness or loneliness/depression in these individuals.

If this is pointed at home, we have a problem. If this is about using the best in tech, from information harvesting to artificial intelligence, against foreign adversaries, it is maybe less worrying for U.S. citizens.

I read a copyrighted book, and it is lossily encoded into the weights of my brain. Am I a derivative work now? No. If that book inspires me to write another book in its genre, that will also not be a derivative work, unless it adheres too closely to the original.

Yes, if I read a book, memorize some passages, and use those memorized passages in a work without citation, it is plagiarism. I don't see how this is any different, without relying on arbitrary, human-centric distinctions.

More to the point, if you steal the book and never even read it, you are still guilty of a crime.

So that is a good point. Demand is related to price: lower the price and demand increases. Perhaps we've reached the point where everyone who wants and can afford an apartment priced at $x or above already has one, but there are no apartments priced at $x - $z available.

So we will just have to see if prices continue to drop.


I don't know, but I just wanted to say that my son got a job there as a mechanical engineer, and I couldn't be more proud. He can't tell me much because of its classified status, but I can tell he loves his job and the people he works with. Just sending praise to Sandia.

