
And here we once again see an example of misaligned incentives baked into another one of our most hallowed institutions.

The problem is that what the “hallowed institutions” are trying to do is extremely ridiculous: turn the kind of work that scientific geniuses did into something that can be replicated by following a formula.

It’s as if a committee of middle managers got together and said, “how can we replicate and scale the work of people like Einstein?”


> The problem is that what the “hallowed institutions” are trying to do is extremely ridiculous: turn the kind of work that scientific geniuses did into something that can be replicated by following a formula.

> It’s as if a committee of middle managers got together and said, “how can we replicate and scale the work of people like Einstein?”

Or are they trying to require enough rigor and discipline so that, out of 100,000 people who want to be the next Einstein, the process washes out the 99,000 who aren't willing or able to do more than throw out half-baked 'creative' ideas and expect the world to pick them up and run with them?

There's only finite attention and money for funding research, so you gotta do SOMETHING to filter out the larpers who want to take it and faff around.

I think at this point the system has eaten its own tail a bit, but there's good reason to require some level of "show me" before getting given the money to run your own research.


The opposite, actually. They hardly want to give away tokens for free!

They want the grand total of humanity's knowledge, from which they create tokens, to be given to them for free, though.

For the tech bros, the tokens are the actions and the prompts are the words.

> the hallmark of LLM style

That's just because LLMs were likely trained on a decade-plus of human-generated Medium, Substack, Quora, and LinkedIn post slop.


Heck, they might even use AI to do it.

At this point, it's going to be some sort of post-Christian, culturally Christian social media influencer-driven, conspiracy theory-laden melange that incorporates everything from Tartarian giants to simulation hypothesis to Flat Earth. Q Gospel indeed.


How do non-LLM based World Models behave?


Not sure, can you tell? I feel like you're saying that they may be able to move, etc.


I don’t think there’s an inherent modern bias against the laconic traditional style. It actually sounds more in line with the simple sentences children learn in grade school. Really, that ‘traditional’ version is only missing a noun for the second part and then that’s sufficient for modern use. Could remove the last character, even.


I wonder how this accounts for regionalisms, let alone different Chinese dialects. Taiwanese Mandarin uses 研究 as a verb easily enough.


Entropyless? So you’re saying they’re highly efficient?


Rather unfortunate timing that the original Apollo moon landing also happened in the middle of the Vietnam War.


Well, when you zoom out a bit, it’s not a stretch to say that both Apollo and Vietnam shared the same goal of countering the USSR.


The Vietnam War was us violating Vietnamese sovereignty and self-determination and losing.


…and why did the United States feel the need to do so?


Honestly, that coincidence was NOT lost on me.

Part of me finds it inappropriate to do the two things at once; advancement in scientific knowledge is somewhat at odds with blowing up one of the oldest civilizations in the world.


Your life must pass by really slowly with a lot of waiting if you don’t do more than one thing at a time.


It's a game of priorities I guess when resources are limited. And no: I can't do everything, everywhere, all at once. Can you?

Big rocks in the pickle jar first. For you, does that include wars even when talking was working?

