
As a scientist in academic research, I can only see this as a bad thing. The #1 valued thing in science is trust. At the end of the day (until things change in how we handle research data, code, etc.), all papers rest on the reviewers' trust in the authors: that their data is what they say it is, and that the code they submit does what it says it does.

Allowing an AI agent to automate code, data, or analysis necessitates that a human thoroughly check it for errors. As anyone who has ever written code or a paper knows, this takes as long as or longer than the initial creation itself, and takes even longer if you were not the one who wrote it.

Perhaps I am naive and missing something. I see the paper-writing aspect as quite valuable as a drafting system (an assistive tool), but I am heavily sceptical of the code/data/analysis part.

Furthermore, this seems like it will merely encourage academic spam, which already wastes the valuable time of volunteer (unpaid) reviewers, editors, and chairs.




Maybe the #1 valued thing in "capital S Science" -- the institutional bureaucracy of academia -- is trust. Trust that the bureaucracy will be preserved, funded, defended... so long as the dogma is followed. The politics of Science.

The #1 valued thing in science is the method of doing science: reason, insight, objectivity, evidence, reproducibility. If the method can be automated, then great!


> The #1 valued thing in science is [...] reproducibility.

If only. Papers rarely describe their methods properly, and reproduction papers have a hard time getting published, which makes it hard to justify the time they take. If reproducibility were valued, things like Retraction Watch wouldn't need to exist.


Well, agreed! I'd say that's good evidence the political bureaucracy of big-Science has substantially corrupted that (your?) culture.

It's not the only way, though. There's a bright light coming from open source. Stay close to the people in AI saying "code, weights, and methods or it didn't happen".

The code that runs most of the net ships with lengthy how-to guides that are kept up to date, thorough automated testing in support of changes and experimentation, etc. Experienced programmers who run across a project without these downgrade their valuation of it accordingly.

It doesn't solve all problems, but it does show there's a way, one that's being actively cultivated by a culture that is changing the world.


Trust is the primary value, because it covers everything you listed.

Most people who read research papers only skim through the paper to get the big picture. They trust that the authors and the publication system did a good-faith effort to advance science. If they can't trust that, they almost certainly won't read the paper, because they don't have the time and the interest to go through the details. Only a few people read the technical parts with the intent to understand them. Even fewer go through supplementary materials and external reproducibility instructions with a similar attention to detail.


Also, there's quite a lot of value in figuring out what doesn't work. A colleague of mine says that his entire PhD could have been completed in about 4 months of work if only he had known what to do to begin with. Perhaps some AI system can try a bunch of different pathways and explain why they went wrong. Perhaps that's educational for human scientists. I dunno.




