
I’m guessing what was meant is that the price of things that are to be invested in is growing relative to the price of things that are to be consumed. Which naively makes sense to me in an economy based on growth where total consumption starts to stagnate—the surplus still has to go somewhere. Is it so, or is reality more complicated than that?

I think this is a key observation.

Apparently consumables have become incredibly cheap.

But then again, consumables will likely start to rise in price now that people need more money to buy a house, etc.

You could also say that real salaries have gone down a lot, which is probably also true.

These effects have to go through very complex value chains.


I think it should have been “If you need oxygen and have a CNS, then you need sleep.” Other tissues can take an oxidative break during wakefulness, but since the CNS is _generating_ wakefulness, if it takes a break, there is, by construction, sleep.


I think it comes down to how much society is an entity in its own right vs. just a collection of individuals. Proportionally, saving society may be worth restricting the absolute freedom of individuals to some degree.


> useless ritual fluff

I believe that LLMs can be very useful for identifying this stuff in our processes. The solution then shouldn't be to fill those processes with LLM output but to strip the useless parts away entirely. I tend to think the same about everyone freaking out about LLM misuse in education.


Of course, but identification has never been the problem. You don't need LLMs for that; you could just ask the scientists themselves, and I'm sure over 90% of us would agree on the parts I mentioned being useless.

The problem is the bureaucracy. And if it asks for useless fluff, I'm happy to feed it with LLMs.


I want to ask about the bureaucracy aspect. I have never written a science grant application, but I expect that some of it comes about because funders want to ensure good governance around the proposals. Do you agree? For the fluff that genuinely has no productive value, do you have any explanation for why it is there?

Could LLM participation be blowing holes in good-governance measures that were only weakly effective anyway, and therefore be a good thing in the long term? Could the rise of the practice drive grant arrangements toward better governance?


These are very good questions, and I only have vague answers, because it's not easy to understand how bureaucratic systems come to be, grow, and work (and it's not my speciality), but I'll do my best.

Indeed, some of the fluff is due to the first reason. For example, the data management plan (where you specify how you're going to handle data) has good intentions: it's there so that you explain how you will make your data findable, interoperable, etc., which is a legitimately positive thing, as opposed to, e.g., never releasing the research software you produce and making your results unreproducible.

But the result is still fluff. I (well, Gemini and I) wrote one last week; it's 6 pages, and what it says could be said in 2-3 lines: we use a few standard data formats, we will publish all papers on arXiv and/or our institutional repository, software on GitHub, data on GitHub or data repositories, and all the relevant URLs or handles will be linked from the papers. That's pretty much all, but of course you have to put it into a document with various sections and all sorts of unnecessary detail.

Why? I suppose in part due to requirements of some disciplines "leaking" into others (I can imagine that for people who work with medical data, it's important to specify in fine detail how they're going to handle it; not so for my projects, where I never touch sensitive data at all). And in part due to the tendency of bureaucracies to grow: someone adds something, and then it's difficult to remove it because "hey, what if for some project it's relevant?", etc.

Then there are things that are outright useless, like the Gantt chart. At least in my area (CS), you can't really Gantt-chart what you're going to do in a 5-year project, because it's research. Any nontrivial research should be unexpected, so beyond the first year you don't know what exactly you'll be doing.

Why is that there? I suppose it can be a mix of several factors:

- Maybe, again, spillover from other disciplines: I suppose in some particular disciplines a Gantt chart might be useful. Perhaps if you're a historian and you're going to spend one year at a given archive, another year at a second archive, etc. But in CS it's useless.

- Scientists who end up in bureaucratic roles are those who don't actually like doing science that much, so they tend to focus on the fluff rather than on actual research.

- Research is unpredictable, but funding agencies want to believe they're funding something predictable. So they make you plan the project, and then write a final report on how everything turned out just as planned (even if this requires contorting facts) to make them happy.



> The solution then shouldn't be to fill those processes with LLM output but to strip the useless parts away entirely.

You don't need language models to identify useless processes. The problem, however, is that people tend to be more comfortable with a process whose product is ignored than with no process at all.

For example, in the case of the grants here, it's easier to imagine giving money to someone with a Gantt chart – even if that chart will never really represent reality – than to someone who says 'trust us to use the money effectively.'

As an alternative view: a lot of the information supplied in such processes isn't about the happy path; rather, it creates a paper trail for assigning blame when things go wrong.

> I tend to think the same about everyone freaking out about LLM misuse in education.

The difference for education is that students need to practice, so the repetition is the point. The AI might ultimately be better at writing the book report, particularly compared to a student in 6th grade, but there are few other ways to train skills of reading comprehension and analysis.


When we write source code for compilers and interpreters, we “engineer context” for them.


Is this essentially the same difference as between vanilla regular-grid and importance-sampling Monte Carlo integration?
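To make the contrast concrete, here is a minimal Python sketch (my own illustration; the integrand and the sampling density are arbitrary choices, and I use uniform random sampling as the "vanilla" baseline, since a regular grid would be deterministic quadrature rather than Monte Carlo):

    import math, random

    # Estimate I = integral of cos(x)*exp(-10x) over [0, 1] (exact ~ 0.0990).
    random.seed(0)
    N = 100_000

    def f(x):
        return math.cos(x) * math.exp(-10 * x)

    # Vanilla Monte Carlo: x uniform on [0, 1]. Most samples land where
    # f is nearly zero, so the estimate is comparatively noisy.
    vanilla = sum(f(random.random()) for _ in range(N)) / N

    # Importance sampling: draw x from p(x) = exp(-10x)/Z on [0, 1] via
    # inverse-transform sampling, then average f(x)/p(x). Samples now
    # concentrate where f is large, which shrinks the variance.
    Z = (1 - math.exp(-10)) / 10  # normalization of exp(-10x) on [0, 1]

    def draw():
        return -math.log(1 - random.random() * (1 - math.exp(-10))) / 10

    importance = sum(f(x) * Z / math.exp(-10 * x)
                     for x in (draw() for _ in range(N))) / N

    print(vanilla, importance)  # both ~ 0.099; the second is far less noisy

Both estimators are unbiased for the integral; importance sampling just reweights each sample by f/p, so its variance depends on how well p tracks f.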


This is how language develops, I’m afraid. But imagine that the age is 10^k where k is something like “age class”. Then indeed the age grows exponentially :)


It still doesn’t grow exponentially; it is just orders of magnitude older.

Possibly, because if I read between the lines, their answer is “huh I dunno”.


Orders of magnitude is an exponential measure.

1*10^n
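To spell that out (my own gloss): the estimate sits at 1*10^n, so equal steps in n multiply the age by 10:

    10^3 years =   1,000 years   (n = 3)
    10^5 years = 100,000 years   (n = 5)
    step of 2 in n  ->  factor of 10^2 = 100

The scale is linear in n but exponential in the age itself; that is the sense in which "orders of magnitude" is an exponential measure.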


Yes, but where is the growth? They just said that the age of the iceberg is 1,000 years, or maybe older: 100,000.

There is no exponential growth there, just someone who has no clue about the iceberg wanting to sound knowledgeable about the subject.


so then every change can be called exponential


> This chair is 4 years old. Or, maybe 5 years old.

Yeah, exponential growth!!!


One thing that'd be hard to give up is the camera. I have two small kids. I captured so many beautiful moments only thanks to a camera always being around.


I just have cheap digital cameras lying around the house that anyone can use. If anything, you may get -more- pictures this way.


Crucially, intuitive thinking can often be wrong, and that’s precisely what science aims to avoid with all the extra effort.


I have a theory that this focus on ideas vs. solutions also divides individual researchers, in terms of what drives them. Agreed that academia celebrates and rewards ideas, not solutions. And maybe that’s OK and how it should be; solutions can be done in industry? But the signal-to-noise ratio of ideas feels too low at this point.

