
Tao and Aaronson are optimistic about LLMs. What are they telling their students? That math and science degrees will soon have the same value as a degree in medieval dance theory?

If they are overly optimistic, perhaps it would be good to hear the opinions of Wiles and Perelman.



Tao isn't that optimistic. His opinion on LLMs is rather conservative.

https://www.scientificamerican.com/article/ai-will-become-ma...

> If you want to prove an unsolved conjecture, one of the first things you need to do is to break it up into smaller pieces, each of which has a better chance of being proven. But you will often break up a problem into harder problems. It’s very easy to transform a problem into one that’s harder than into one that’s simpler. And AI has not demonstrated any ability to be any better than humans in this regard.

Not sure if o1 changed his mind, though.


If you look at a lot of people's PhDs, we now teach those things to first-years. PhDs today do incredibly deep work, and the edge of science will just move further out.


What does this mean? Of course math AI will take over top research in the next ten years, but usefulness to society has never been a goal of pure mathematics. I don't know if you understand the motivation for studying pure math. Personally, I think it will mostly be good for research math.


The "value of a degree" means the employment prospects for the degree holder.

Which is going to zero if the optimistic predictions are correct, so the optimistic professors should warn their students.

I understand the motivation for pure math quite well. It is about beauty, understanding things and discovering things for oneself. If computers do the work, the discovery part is gone and pure math is ruined.

For the non-research part, the AI zealots will want to replace all human labor with software.


Why are you saying this as if it were a bad thing? Just because software becomes better than us at something doesn't mean we can't do it for fun (see the chess community, for example).


do you also value your personal relationships based on employment prospects?


not fully related to what the parent is saying, but I need to get this off my chest:

isn't this development obviously going to depreciate the value of human intellect to near zero? which is the thing that virtually all people on this platform base their livelihood on?

there's such a deafening silence around this topic on the internet, where there should be - I don't know what, but not this silence. we don't know what to do, right? and we're avoiding the topic.

with this version they broke the assumed wall in LLM development that was the last copium we could believe in. the wall is broken, and now it's just a matter of time until your capacity to think is completely unneeded. the machine will do it more accurately and more quickly, by orders of magnitude.

am I a doomer? I was in my parents' home country recently, which is completely dysfunctional and on the verge of war. what I learned there is that humans stay ignorant of great dangers until the very moment they are affected. this must've been the case with all the great wars we've had. the water is rising, but until I start to suffocate I refuse to see it. I make up copes, or I think some are going to drown but I'm safe, or I distract myself.

what are all the software engineers here thinking? what's your cope for this? or are we all frozen in shock right now? this o1 is solving problems that I know many of my colleagues could never solve. what are we hoping for, I wonder? I don't have a future, because my future was the image I had of it, and no image of the future that would be nice to keep around seems plausible at this point.


I wouldn't say there is a silence (as in avoidance); there are just folks convinced it's going to completely replace people (with two main subgroups: utopia and dystopia), folks convinced it's a parlor trick that will never result in more, and folks convinced it's just the next efficiency-increasing tool, where some of the busywork we have will go away, but only to make room for increased total output, not for replacing everyone wholesale.

Generally these folks have all said their piece and are tired of talking about it every time LLMs come up -> silence (as in nothing more to say), since each group is self-convinced and most don't feel the need to get 100% of folks on board with their view. The dystopia or "doomer" group are the main ones left feeling like they need more of an answer; the rest move on quietly in either excitement or disinterest.



