Hacker News | statuslover9000's comments

Great job giving the government a live dossier of all the political volunteers canvassing out there. This makes me feel so much safer!


This makes sense. Israel seems to have used WhatsApp metadata to target Palestinians in Gaza: https://www.972mag.com/lavender-ai-israeli-army-gaza/

> The solution to this problem, he says, is artificial intelligence. The book offers a short guide to building a “target machine,” similar in description to Lavender, based on AI and machine-learning algorithms. Included in this guide are several examples of the “hundreds and thousands” of features that can increase an individual’s rating, such as being in a Whatsapp group with a known militant, changing cell phone every few months, and changing addresses frequently.


The short-term effect is a harbinger of the long-term risk, since capitalism doesn’t inherently care for people who don’t provide economic value. Once superintelligent AI arises, none of us will have value within this system. Even the largest current capital holders will have a hard time holding on to it with an enormous intelligence disadvantage. The logical endpoint is the subjugation or elimination of our species, unless we find a new economic system with human value at its core.


There are a lot of assumptions going on here. One of them is that superintelligent AI will arise. We have no reason to believe this will happen in our lifetimes. I posit that we are about as close to superintelligent AI as James Watt was to nuclear fusion.

The other assumption is that wealth and power are distributed according to intelligence. This is obviously false: wealth and power are largely distributed according to who you or your father plays golf with. As long as AIs don't play golf and don't have fathers, we are quite safe.


> There are a lot of assumptions going on here. One of them is that superintelligent AI will arise. We have no reason to believe this will happen in our lifetimes. I posit that we are about as close to superintelligent AI as James Watt was to nuclear fusion.

This is a perfectly reasonable response if nobody is trying to build it.

Given people are trying to build it, what's the expected value from ignoring the problem? E($Damage_i) = P(BadOutcome_i) * $Damage_i.

$Damage can be huge (there are many possible bad outcomes of varying severity and probability, hence the subscript), which means that at the very least we should try to get a good estimate for P(…) so we know which problems are most important. In addition to it being bad to ignore real problems, it is also bad to do a Pascal's Mugging on ourselves just because we accidentally slipped a few decimal points in our initial best-guess, especially as we have finite capacity ourselves to solve problems.
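
To make the triage point concrete, here's a toy back-of-the-envelope calculation. Every probability and dollar figure below is invented purely for illustration (none of them is a real estimate); the only point is that a small P(…) times a huge $Damage can still dominate the ranking, which is why the estimate of P(…) matters so much.

    # Toy expected-damage triage. All numbers are invented for illustration only.
    scenarios = {
        "mass job displacement": (0.30, 1e13),        # (assumed probability, assumed damage in $)
        "autonomous-weapon accident": (0.05, 1e12),
        "existential catastrophe": (0.001, 1e17),
    }
    # Rank scenarios by expected damage E = P * Damage, largest first.
    for name, (p, damage) in sorted(scenarios.items(), key=lambda kv: -kv[1][0] * kv[1][1]):
        print(f"{name}: expected damage ~ ${p * damage:,.0f}")

Even with these made-up numbers, the low-probability scenario tops the list, and slipping a few decimal points in P(…) is exactly what reshuffles it.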

Finally, let's assume you're right: that we're centuries off at least, and that all the superintelligent narrow AI we've already got some examples of involves things that can't be replicated in any area that poses any threat. How long would it take to solve alignment? Is that also centuries off? We've been trying to align each other at least since laws like 𒌷𒅗𒄀𒈾 were written, and the only reason I'm not giving an even older example is that this is the oldest known written form to have survived, not that we weren't doing it before then.

> The other assumption is that wealth and power are distributed according to intelligence. This is obviously false: wealth and power are largely distributed according to who you or your father plays golf with. As long as AIs don't play golf and don't have fathers, we are quite safe.

Nepotism helps, but… huh, TIL that nobody knows who was the grandfather of one of the world's most famous dictators.

Cronyism is a viable alternative for a lot of power-seekers.


So I propose the Musk supremacy criterion to be the following.

Suppose that a wealthy and powerful human (such as Elon Musk) were to suddenly obtain the exact same sinister goals as the hypothetical superintelligent AI in question. Suppose further that this human was able to convince/coerce/bribe another N (say 1000) humans to do his bidding.

A BadOutcome is said to be MuskSupreme if it could be accomplished by the superintelligent AI, but not by the suddenly-evil Musk and his accomplices.

Obviously[citation needed] it is only the MuskSupreme BadOutcomes we care about. Do there exist any?


Not sufficiently detailed to be answerable.

For example 1000 people — but only if you get to choose who — is sufficient to take absolute control of both the US congress and the Russian State Duma (or a supermajority of those two plus the Russian Federation Council), which gives them the freedom to pass arbitrary constitutional amendments… so your scenario includes "gets crowned King of the USA and Russia, 90% of the global nuclear arsenal is now their personal property" as something we don't care about.


> As long as AIs don't play golf and don't have fathers, we are quite safe.

Until it becomes "who you exchange bytes most efficiently with", and all humans are at a disadvantage against a swarm of even below-average-intelligence AGI agents.


But why would it become this?


Because, as unlikely as it is, if we're discussing risk scenarios for AI getting out of hand, then a monolithic superintelligence is just one of the possibilities. What about a swarm of dumb AIs that are nonetheless capable of reasoning and decision-making and become a threat?

That's pretty much what we did. There's no superintelligent monkey in charge, as much as some have tried to pretend, material or otherwise. There are just billions of average-intelligence monkeys, and we overran all of Earth's ecosystems in a matter of centuries, which is neither trivial nor fully explained yet.


Hey some of us are below average.

But yes, the corporations and the swarm behavior are already harming our biosphere.


The difference is that we have 100% complete control of these AIs. We can just go into the power grid substation next to the data center and throw the big breaker, and the AI ceases to exist.

When humans developed, we did not displace an external entity that had created us and that had complete power to kill us all in an instant.


If shutting down datacenters is your solution to AI getting out of hand, it isn't so much a solution as fallout.


Look at the measures that were implemented during covid. Many of them were a lot more extreme than shutting down datacentres, yet they were aimed to mitigate a risk far less than "existential".


That's a post hoc conclusion. At the time we were all uncertain.


Friedrich Nietzsche might disagree



That data is in fact orthogonal to my point, for two reasons:

1. When we are talking about wealth and power that can actually influence the quality of the lives of many other people, we are talking about way less than 0.01% of the population. Those people aren't covered in this survey, and even if they were, it would be impossible to identify them on an axis spanning 0-100%.

2. Your linked article talks about income. People with significant wealth and power frequently have ordinary or below-ordinary income, for tax reasons.


Or higher income leads to higher IQs because of better education, nutrition and opportunities.


Actually, it will have the opposite effect, at least in the short term.

People who own high value assets (everything from land to the AI) will continue to own them and there will be no opportunities for people to earn their way up (because they can be replaced by AI).

"The logical endpoint is the subjugation or elimination of our species"

Possibly, but it would be by our species (those who own and control the AI) rather than by the AI.


I would venture to say that transhumanism will be the path and goal of the capital class, as that will be a tangible advantage potentially within their grasp.

I suppose then that they would become “homo sapiens sapiens sapiens” or some other similarly hubris-laden label, and go on to abandon, dominate, or subjugate the filthy hordes of mere Homo sapiens sapiens.


Yes, a theme in cyberpunk SF, but also in much older works, such as CS Lewis's That Hideous Strength.


As Antonio Gramsci said: “Pessimism of the intellect, optimism of the will.”

The forces of blind or cynical techno-optimism accelerating capitalism may feel insurmountable, but the future is not set in stone. Every day around the world, people in seemingly hopeless circumstances nevertheless devote their lives to fighting for what they believe in, and sometimes enough people do this over years or even decades that there’s a rupture in oppressive systems and humanity is forever changed for the better. We can only strive to live by our most deeply held values, within the circumstances in which we were placed, so that when we look back at the end of our lives we can take comfort in the fact that we did the best we could, and just maybe this will be enough to avert the seemingly inevitable.


> The expert believes that “asking for regulations because of fear of superhuman intelligence is like asking for regulation of transatlantic flights at near the speed of sound in 1925.”

This assessment of the timeline is quite telling. If supersonic flight posed an existential threat to humanity, we certainly should have been thinking about how to mitigate it in 1925.


1925, of course, would have been a great time to put limits on fossil fuel use in aviation, along with the rest of the fossil fuel applications, to manage the biggest current threat to human civilization. (Arrhenius did the science showing global warming in 1896 or so.)


Given the dual use of fossil fuels between military and civilian purposes, I wonder whether any state that deliberately handicapped car/aero/petrochemicals would’ve been able to survive the early twentieth century.

Both the USA and Nazi Germany benefited massively from having a civilian industrial base that was complementary to military production.


Of course, you could also argue that Germany wouldn't have had its early successes in the war (if it had even started it), or, at a third juncture, would have fared worse against the USSR.


There's a book called Freedom's Forge that I'm a fan of; it makes the argument that the auto industry (and assembly lines and mechanization in general) was the single most important reason the Allies won WWII. In fact, all the big auto manufacturers of the time retooled their assembly lines to build tanks and airplanes. It's conceivable that if we had never mass-produced cars, the US wouldn't have had the capability to win the war.


Miami is still above water.

Would you shut down the powerhouse of our economy -- travel, transportation, energy -- for something hypothetical that hasn't even happened and doesn't appear to be close to happening?

I'm pro-clean energy, but you can't do without fossil fuels. Not if you want society to keep climbing up and up and up.


Before you jump to policy making, you should get the implications right.

"The current rate of sea level rise at Pensacola Bay has accelerated rapidly since 2010."

"The difference in sea level rise over the last 100 years has been approximately 10 inches—but in the next 75-100 years, the increase in sea level rise could be close to 48 inches." https://blogs.ifas.ufl.edu/escambiaco/2023/04/12/weekly-what...


That sounds like a great way to lose an upcoming world war to some people who DGAF about pollution, climate, or other people in general.


Well, it could be argued that it does; what about supersonic nuclear missiles?


But AI doesn't pose an existential threat to humanity, so we're all good.


I really can't grasp how people think that a system that doesn't have a need to preserve itself will somehow start thinking for itself.

AI is quite troublesome for privacy though. How much privacy humans need is a question we'll probably have answered the hard way.


Who said anything about thinking for itself?

A thing does not require intent or consciousness to be dangerous. How many chemists have blown themselves up because they didn't realize an experiment was dangerous? How many production systems have crashed because the developer didn't accurately predict what the code they wrote would do?

Alkali metals and C++ code do not require ill intent, but they will still obliterate your limbs / revenue if you build and use them wrong.

One of my more tangible hypotheses is a sort of runaway effect. Economic, geopolitical, and military competitive pressures will quickly push out anyone and anything that still relies on last era human-in-the-loop processes, the same way any organization that doesn't utilize artificial lighting, electricity, and instant communication will obviously be left far behind. You have to just trust that the machine running stock market transactions will do its math right.

But unlike transaction software failure modes, which quickly result in outright crashes or verifiably incorrect errors, the failure modes of non-Bayesian decision-making software probably look something like what happens when existing economic, geopolitical, and military decision-makers make decisions that are harmful, unethical, or otherwise undesirable for humanity. This time augmented with, if not superhuman intelligence, at least superhuman speed and superhuman knowledge breadth.


Love that observation on C++. That's the reason I love C++: it's a language for those who need, nay crave, absolute raw performance. No training wheels. Short of assembly, it's as close to the machine as you can get.


> No training wheels.

Very cute for hobby projects, a huge liability for commercial projects.

Use as many training wheels there as humanly possible, please.


Sure. I use Java, Python and Javascript all the time. But when I need the performance, for demanding VR/graphics work, nothing comes close to C++'s combination of speed and expressive power of abstraction.


C++ has many things, but expressive power is not one of them.

But Stockholm syndrome I guess. :P


C++ has enough expressive power to make you wish it had less.


Haha. My point exactly.


Rust?


Does a prion have a need to preserve itself?

If you make enough varied AIs, some will have self-replicating behavior, just like if you make enough random proteins, some will self-replicate.


Does a prion think for itself? Who said self-replication is sufficient for AGI?


Oh yeah, it will replicate after the computer is shut down and then reinstalled from scratch. Especially when it's much simpler than that i.e. the whole thing lives in a throwaway container.


> I really can't grasp how people think that a system that doesn't have a need to preserve itself will somehow start thinking for itself.

Society exists because cooperation outperforms the alternatives. If you have human-level AI, at some point there is no benefit to cooperation, and there is a major incentive to prevent anyone else from gaining access to equal or better AI.

AI itself does not need to have any motivation - people in charge have plenty of incentives to eliminate the rest once they don't need them anymore.


Sure, in the prisoner's dilemma we could trust that all other parties will do the right thing, but that seems very unlikely.


That’s precisely the point: the technical geniuses lack the creativity to predict how things can go wrong in a thousand different ways.


What makes you think they're predicting the apocalypse correctly, then?

Another thing the technical geniuses tend to be good at is exploiting the power they suddenly obtain in their own interest, either directly or with regulations and collusion with those who hold actual hard power.

Evil AI owners seem to be much closer and far more material than an evil AI, and coincidentally it's something that is almost entirely lacking from the discourse, as public attention is too focused on sci-fi hypotheticals.


The bar is different: saying "there is no risk of apocalypse" requires you to be ~100% certain, because if you're saying "I'm 99% certain that there won't be an apocalypse" then you're on the side of the AI-risk people; a low-probability extinction event does justify action. The risk argument isn't that apocalypse is certain but rather that it is sufficiently plausible to take preventive measures.


I am only 99% certain that we won't be invaded by hostile aliens. Therefore we should take measures like building a giant space laser to prevent that apocalypse.


It is somewhat similar, but substantially different - we can make a solid argument that the likelihood of getting invaded by hostile aliens in the nearest century is far lower than 1%, and also if such an invasion does happen, then building a giant space laser won't make any difference at all.

The key difference between powerful alien invaders and us creating a powerful alien entity that we can't control is that the former either will or won't happen due to external circumstances, but the latter is something we would be doing to ourselves and can avoid if we choose to.


Bullshit. You can't presume to quantify the probability of either event. You're just making things up. All of the arguments are built on a foundation of sand. This stuff falls in the realm of religion and philosophy, not hard science and math.


The issue is that the doomsday scenario is extremely vague. The actual mechanism of action of a hypothetical rogue AGI is usually handwaved away as "it will be self-improving, superhumanly persuasive, and far smarter than us, so it will somehow figure out how to do something, or convince us to do it". What exactly will happen? How exactly will it happen? Will the world do nothing until that moment? How do society, politics, military fit into that scenario? All that rationalist navel gazing I've seen so far is either hilariously unaware of the existence of the outside world or assumes it won't change in the process.

You can't fight what you can't even see, let alone something you're not sure exists at all. You don't invent a pair of wings because the 1900s version of you thinks that "the scientists will invent an anti-aging cure in the next decade, and surely personal flight will be ubiquitous in the 2000s". You don't design a plasma gun for your Mars landing just in case you land in a city between Martian canals and see an army of little green men there. The world doesn't work like that; by the time you reach the Martian surface the context will be wildly different. You get burned and then put up guardrails, maybe. Not the other way around. Nobody can see through higher-order effects, no matter how smart they are. And as the threat becomes progressively clearer there will be more caution, if needed. Premature optimization yada yada.

What actually happens right now is everybody and my aunt seriously discussing the evil robots that will come and kill us. That's pure mass hysteria, caused by the scaremongering and the cult-like beliefs of very smart people with disproportional influence who can't contain their own conjectures and bullshit in the realm of science fiction.

On the other hand, the end goal of OpenAI is major job replacement, according to their current charter. [1] "Broadly distributed"... will they distribute their utopia to North Korea? Not happening, is it? I think it's obvious that if the actual job replacement rate ever gets anywhere close to the levels of late 19th and early 20th century industrialization, this will produce major societal shifts and struggles, wealth and power redistribution, and a lot of blood and wars, because the dependence on your job is the only ephemeral influence you (as a worker) have on this world. And of course, the companies that control the AI will be gatekeepers, and they will be more than happy to close off the open research and open-source models, pull up the regulation ladder, and get in bed with politicians and the military, as OpenAI has already been doing for years. Of course they realize that, and their utopian, self-contradicting "charter" is nothing more than marketing hogwash that they have already changed and will change again in the future.

This is far more realistic and will happen much earlier than the rogue-AI science fiction, if that ever happens at all. In fact it's slowly happening now, and it's not talked about nearly enough, because the attention is mostly misdirected onto the vague superhuman-AI red herring.

[1] https://openai.com/charter


The actual mechanism of action is handwaved away because there are many options, we don't expect to ever have an exhaustive list and specifics of those are largely irrelevant with respect to preventing them, so IMHO it's not worth spending time and effort analyzing specific scenarios as long as we assume that there exists at least one plausible (even if unlikely) scenario. A hypothetical specific scenario of a rogue AI engineering and launching a deadly supervirus is effectively equivalent to a specific scenario resulting in a world consumed by 'grey goo' nanobots - you don't (can't) fix the former by implementing some resilience or detection for diseases, you don't (can't) fix the latter by doing extra research on nanorobotics, you approach both (and any others) by tackling the core issues of, for example, ensuring that you can control what goals artificial agents have even if they are self-modifying in some aspects.

Like, "What exactly will happen? How exactly will it happen?" is worth discussing if and only if one party seriously believes they can convince the other that none of the imaginable scenarios are even remotely plausible; and if we assume that there is at least one scenario where we can say "I'm 99% certain it won't happen and 1% it could" then that discussion is pretty much over, the existential risk is plausible (and the consequences of that are so much incomparably larger than e.g. major job displacement that it justifies attention even if it's many orders of magnitude less likely) and we should instead talk about how to prevent it.

I'm not making the argument that the existence of stronger-than-human general AI will result in a catastrophe, but I am asserting that the mere existence of a stronger-than-human general AI (without some controls we currently can't figure out how to make, or even whether they are possible) carries at least some plausible chance of existential risk - for the sake of argument, let's say at least 1%; and I am asserting that a 1% chance of existential risk is a totally, absolutely unacceptably high risk that must not be allowed to happen, because it is far more important[1] than e.g. a 100% certainty of major job displacement and social unrest.

"Will the world do nothing until that moment?" I think that what we saw from the global reaction to things like start of Covid-19 or climate change is completely sufficient to assume that we can't rely on the world stopping a major-but-stoppable issue in a timely manner, so "surely the world will do something" is not a sufficiently convincing argument to discount the risk; I don't think you can plausibly deny that even for a clearly catastrophic problem there is at least a 10% chance that the world could still delay sufficient action until it's too late; and this means that it doesn't really matter what the exact likelihood of that is based on society, politics, military aspects, we should work with the assumption that the world actually might do nothing to prevent any specific scenario from unfolding, and we should de-risk it in other ways.

[1] Looking at other posts, perhaps this is where we'd disagree, and in that case it's probably the core of the discussion which also doesn't really depend on any details of specific scenarios.


So...who isn't lacking the creativity? In fact, I would say techies are unreasonably gloomy because they grew up obsessing over sci-fi.


Which also means they can't predict the apocalyptic scenarios.

Q.E.D.


but all the thoughts about it in 1925 would have been way off from how it actually turned out


I was thinking one reason Yann LeCun would make such a terrible analogy is because he knows something the rest of us don't.


The thing that he knows that (most of) the rest of us don’t is quite a lot about AI.


If you read what Yann writes you'll pretty quickly see that he's rather ignorant about AI. His opinion is probably worse on average than the typical technical generalist's


You'll have to be way more convincing than this if you want anyone to believe that about Yann haha.


This is ignorant.

He won a Turing award for his work on deep learning.

Lots of people reasonably disagree with him about the future of AI/ML, but he's the opposite of ignorant.


That’s hilarious. I have read a few things he has written, which suggest he’s definitely better than the average technical generalist. I haven’t read everything, obviously, but he has written quite a lot: https://scholar.google.com/citations?user=WLN3QrAAAAAJ


What an absurd idea to say this about a leading AI researcher


And, what, you think he's working against humanity's interest in service of the secret AI overlords?


You're missing the point and context. The person above me says this assessment is quite telling, then points out the counterfactual historical hypothetical, which makes no sense. Yann thinks supersonic flight is not worthy of precautionary-principle ethics in 1925? I'm saying the same thing--Yann's terrible, nonsense analogy is indeed poorly argued, but plausibly would make sense as a Freudian-slip inconsistency of some sort. Ergo, "it is quite telling". As to what's in his mind, or his motivations, I don't care to speculate.

The fact that you make insinuations about what I think is similarly aggressive and terrible; this forum ought to have better manners than that when writing replies to complete strangers. Not everyone who has a different opinion is some crypto conspiracy theorist, and you are wrong to jump to such a suggestion.


His funny insinuation pushed you to write a nice long, well-argued explanation; it worked great.


> I guess things like recursive self-improvement. You wouldn’t want to let your little AI go off and update its own code without you having oversight. Maybe that should even be a licensed activity—you know, just like for handling anthrax or nuclear materials.

He’s clearly aware of the risks of runaway, self-improving AI, and the idea that we can prevent this with regulation is laughable. The car is barreling towards the edge of the cliff, and many of our best and brightest have decided to just put a blindfold on and keep flooring it.


Lots of people I know dislike the taste of Beyond Meat specifically, but like Impossible and other fake meats. This article seems to be sensationalizing and over-generalizing from a single company’s failure.


They also used massive doses of LSD combined with electroshocks and managed to entirely wipe some patients’ memories; this person describing their experience is pretty mind-blowing:

- relevant section ends at 1:01:15: https://youtu.be/4-DMH_myil8?t=1h20s

- continues for another crazy 20s: https://youtu.be/4-DMH_myil8?t=1h3m30s


GDP has been growing fine, but life expectancy in the US has been in historic decline for the past decade, largely driven by alcoholism, drug overdose, and suicide. Meanwhile over 2 billion people worldwide are food insecure, and that number is rising.

Technology has certainly improved the standard of living of many many people over the last couple centuries, but there is no guarantee that it will continue to do so. Perhaps instead of a blind focus on “growing the pie”, at this point in history we should be asking ourselves “what pie?” and “did everyone get a slice?”

Edit: Updated “malnourished” to “food insecure”, my mistake for using the wrong terminology. According to the UN Food and Agriculture Organization, there were 2.37 billion food insecure people in 2020, a number which has been steadily rising since 2014 with a larger jump since the start of the pandemic: https://www.fao.org/3/cb4474en/online/cb4474en.html


I think your data is a little old. The number of undernourished people, defined as those getting fewer than 1800 calories per day, steadily declined from 2000 until 2019, and then rose very slightly as a result of Covid. About 660 million people meet this threshold. Global malnutrition really has gotten significantly better over the past few decades. https://ourworldindata.org/hunger-and-undernourishment


Sadly, in the next few years we will learn how fragile the current food production system is. A highly centralised food system based mainly on fossil fuels gives high yields until it doesn't.

Every time I listen to Steven Pinker I think that he would learn a lot from Nassim Taleb's books.


What do food systems have to do with fossil fuels?


Food production uses fertilizers (from fossil fuels) and pesticides (from fossil fuels) and machinery (running on fossil fuels) to grow foods which are then processed (by machines running on fossil fuels) and shipped around the world (using fossil fuels).


Sure, energy. I just don't get the point.

We are talking about long-run trends in food insecurity. Russia's invasion of Ukraine will probably (and hopefully) be a short, temporary turn in the wrong direction.



Fertilizer production uses a great deal of natural gas.

The claims of fragility are unfounded though as natural gas supply isn't going anywhere and the use of natural gas in the process could be trivially (at great expense) replaced entirely by [nuclear powered] water electrolysis.

We also grossly overproduce calories, both as a policy decision for anti-fragility and for raising meat. Both of these leave a lot of slack before starvation levels kick in, at least in the US; other countries may be closer to the limit.


Your 2 billion number is incorrect. It was 500 million until 2020 and then slightly rose given the pandemic. That number has been dropping precipitously for decades. At this rate (ignoring 2021), there should be almost no malnutrition by 2030.



[flagged]


> and why must a normal person give a damn about the misfortunes of junkies and alcoholics?

You don't have to give a damn about anyone, but I'm friends and family with junkies and alcoholics, so I care. And even if you don't know any junkies or alcoholics and never will, can you not have sympathy for a person you've never met, even if some of their problems are to some degree their own fault?

I know alcoholics who were binge-drinking at the age of 13 because their parents were alcoholics too and encouraged it. I have cousins who often threw up when they were toddlers because their parents gave them too much to drink. If you were in these circumstances, how sure are you that you wouldn't end up a junkie or an alcoholic yourself?


>and why must a normal person give a damn about the misfortunes of junkies and alcoholics?

For starters, because they're also normal people. If you think there are no drug users or alcoholics among all the people you know, you'd be very surprised. It takes a lot of callousness to think that people who struggle with addiction, or really with anything else, aren't exactly what 'normal' people are like.

What I wonder about is when it became normal to put that much lack of empathy and compassion proudly on public display.


> and why must a normal person give a damn about the misfortunes of junkies and alcoholics?

I give a damn about living in a society suffering from a surfeit of junkies and alcoholics. That affects me.


> and why must a normal person give a damn about the misfortunes of junkies and alcoholics?

Try giving up your smartphone for whole one day, and then get back to us on "junkies". :)


So your theory is that poor people in poor countries just keep having babies when they have less and less to feed them. And that’s it. What a genius you are.


I have a very grim outlook on the future.

> “did everyone get a slice?”

I don't believe so, but it makes sense: we've always been at (economic) war with Eastasia/Oceania


I'm from Australia and I agree! give me pie!


For chemical reaction prediction, see the Open Reaction Database, a collaboration including the Coley lab at MIT (surprisingly not cited by OP):

Paper: https://pubs.acs.org/doi/10.1021/jacs.1c09820

Docs: https://docs.open-reaction-database.org/en/latest/overview.h...

It’s an incredible effort to collate and clean this data, and even then a substantial portion of it will not be reproducible due to experimental variability or outright errors.

For computational methods development it’s extremely useful, maybe even necessary, to have a substantial amount of money and one’s own lab space to collect new data and experimentally test prospective predictions under tightly controlled conditions. The historical data is certainly useful but is not a panacea.
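
As a tangent for anyone curious what "cleaning" means in practice here: one of the many steps is simply validating and canonicalizing reaction SMILES so that duplicates and unparsable entries don't leak into a training set. Below is a minimal sketch of that one step using RDKit; it is not the ORD's own tooling, the example reaction strings are invented, and it ignores agents, conditions, and yields entirely.

    # Minimal sketch of one cleaning step: canonicalize "reactants>>products" SMILES.
    # Not the Open Reaction Database's tooling; example reactions are made up.
    from typing import Optional
    from rdkit import Chem

    def canonicalize_reaction(rxn_smiles: str) -> Optional[str]:
        """Return a canonical reaction string, or None if anything fails to parse."""
        parts = rxn_smiles.split(">>")
        if len(parts) != 2:
            return None  # malformed, or has agents ("A>agent>B"), which we skip here
        canon_sides = []
        for side in parts:
            mols = [Chem.MolFromSmiles(s) for s in side.split(".") if s]
            if not mols or any(m is None for m in mols):
                return None  # at least one unparsable structure
            canon_sides.append(".".join(sorted(Chem.MolToSmiles(m) for m in mols)))
        return ">>".join(canon_sides)

    # The first and third entries are the same esterification written two ways;
    # canonicalization collapses them, and the junk entry is dropped.
    raw = [
        "OC(=O)c1ccccc1.OCC>>CCOC(=O)c1ccccc1",
        "not a reaction",
        "C(C)O.c1ccccc1C(O)=O>>CCOC(=O)c1ccccc1",
    ]
    cleaned = {canonicalize_reaction(r) for r in raw} - {None}
    print(cleaned)

Even this toy version hints at why the effort is so large: it says nothing about conditions, stoichiometry, or whether the reported reaction actually worked, which is exactly where the experimental variability and outright errors come in.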


Relatedly (and also not citing it), from a couple of weeks ago: https://news.ycombinator.com/item?id=31566200 "Call for a Public Open Database of All Chemical Reactions"

