The urgency was faked and less true of the Manhattan Project than it is of AGI safety. There was no nuclear weapons race; once it became clear that Germany had no chance of building atomic bombs, several scientists left the Manhattan Project in protest, saying the work was unnecessary and dangerous. However, the race to develop AGI is very real, and we also have no way of knowing how close anyone is to reaching it.
Likewise, the target dates were pretty meaningless. There was no race, and the atomic bombs weren't necessary to end the war with Japan either. (It can't be said with certainty one way or the other, but there's pretty strong evidence that their existence was not the decisive factor in surrender.)
Public ownership and accountability are also pretty odd things to claim! Congress didn't even know about the Manhattan Project. Even Truman didn't know for a long time. Sure, it was run by employees of the government and funded by the government, but it was a secret project with far less public input than any US-based private AI company has today.
> However, the race to develop AGI is very real, and we also have no way of knowing how close anyone is to reaching it.
It seems pretty irresponsible for AI boosters to say it’ll happen within 5 years then.
There’s a pretty important engineering distinction between the Manhattan Project and current research towards AGI. At the time of the Manhattan Project, scientists already had a pretty good idea of how to build the weapon. The fundamental research had already been done. Most of the budget was actually just spent enriching uranium. Of course there were details to figure out, like the specific design of the detonator, but the mechanism of a runaway chain reaction was understood. This is much more concrete than building AGI.
For AGI nobody knows how to do it in detail. There are proposals for building trillion dollar clusters but we don’t have any theoretical basis for believing we’ll get AGI afterwards. The “scaling laws” people talk about are not actual laws but just empirical observations of trends in flawed metrics.
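To make that point concrete, here is a rough sketch (with made-up numbers) of what a "scaling law" actually is under the hood: a power-law curve fit to a few empirical loss-vs-compute points, which you can extrapolate only by assuming the trend keeps holding. Nothing in the fit says what capabilities you get at the extrapolated point, let alone AGI.

```python
import numpy as np

# Hypothetical (made-up) observations: training compute in FLOPs vs. eval loss.
compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])
loss    = np.array([3.10, 2.75, 2.46, 2.21, 2.00])

# Assume, as the scaling-law papers do, an irreducible loss term (guessed here).
irreducible = 1.5

# Fit log10(loss - irreducible) = log10(a) - b * log10(compute): a straight line.
slope, intercept = np.polyfit(np.log10(compute), np.log10(loss - irreducible), 1)
a, b = 10**intercept, -slope

def predicted_loss(c):
    # The fitted "law": L(C) = a * C^(-b) + irreducible
    return a * c**(-b) + irreducible

print(f"empirical fit: L(C) ~ {a:.3g} * C^(-{b:.3g}) + {irreducible}")
# Extrapolation only predicts the metric the curve was fit to, assuming the
# trend holds; it says nothing about what the resulting model can do.
print("extrapolated loss at 1e25 FLOPs:", predicted_loss(1e25))
```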
Matt Garman said 2 years for all programming jobs.
And most relevant to this article: since SSI says they won’t release a product until they have superintelligence, I think the fact that VCs are giving them money means they’ve been pretty optimistic in statements about their timelines.
> There was no nuclear weapons race; once it became clear that Germany had no chance of building atomic bombs, several scientists left the Manhattan Project in protest
You are forgetting Japan in WWII. Given the casualty numbers from island hopping, an invasion was going to mean an absolutely huge casualty count for US troops, probably something on the order of England's losses during WW1. Those losses sent Britain on a downward trajectory, with essentially an entire generation dying or being severely traumatized. If the US did not have Nagasaki and Hiroshima, we would probably not have the space program and US technical prowess post-WWII, so a totally different reality from where we are today.
I'll try to argue his point. The idea that Japan would have resisted to the last man and that a massive amphibious invasion would have been required is kind of a myth. The US pacific submarine fleet had sunk the majority of the Japanese merchant marine to the point that Japan was critically low on war materiel and food. The Japanese navy had lost all of its capital ships and there was a critical shortage of personnel like pilots. The Soviets also invaded and overran Manchuria over a span of weeks. The military wing of the Japanese government certainly wanted to continue fighting but the writing was on the wall. The nuclear bombing of Japanese cities certainly pressed the issue but much of the American Military command in the Pacific thought it was unnecessarily brutal, and Japanese cities had already been devastated by a bombing campaign that included firebombing. I'm not sure that completely aligns with my own views but that's basically the argument, and there are compelling points.
Nimitz wanted to embargo Japan and starve them out.
The big problem that MacArthur and others pointed out is that all the Japanese forces on the Asian mainland and left behind in the island-hopping campaign through the Pacific were unlikely to surrender unless Japan itself was definitively defeated, with the central government capitulating and aiding in the demobilization.
From their perspective the options were to either invade Japan and force a capitulation, or go back and keep fighting it out with every island citadel and throughout China, Indochina, Formosa, Korea, and Manchuria.
I am looking at the numbers from Operation Downfall that Truman and senior members of the administration looked at, which projected between 500,000 and 1,000,000 lives lost on the US side for an invasion and defeat of Japan. About 406,000 US soldiers lost their lives in WW2, so that would have at least doubled, and at the high end more than tripled, the death toll. And as for the WWI British casualties I mentioned earlier: the British lost around 885,000 troops during WWI, so the US would have exceeded that number even at the low end of the estimates.
Yeah it would have been a bloody invasion. I'm saying it probably would not have been necessary since Japan was under siege and basically out of food already.
> the atomic bombs weren't necessary to end the war with Japan either. (It can't be said with certainty one way or the other, but there's pretty strong evidence that their existence was not the decisive factor in surrender.)
Well, you didn't provide any evidence. Island hopping in the Pacific theater itself took thousands of lives; imagine what a headlong strike into a revanchist country of citizens determined to fight to the last man, woman, and child would have looked like. We also don't know how effective a hypothetical Soviet assault on the home islands would have been, since they had only attacked sparsely populated Sakhalin. What the atom bomb did accomplish was convincing Emperor Hirohito that continuing the war would be pointlessly destructive.
WW1 practically destroyed the British Empire. WW2 would have done the same to the US in your hypothetical scenario, but much worse.
> The urgency was faked and less true of the Manhattan Project than it is of AGI safety.
I'd say they were equal. We were worried about Russia getting nuclear capability once we knew Germany was out of the race. Russia was at best our frenemy. The enemy of my enemy is my friend kind of thing.
Pretty sure the military made it clear they aren’t launching any nukes, despite what the last President said publicly. They also made it clear they weren’t invading China.
Well, not exactly “we all”, just the citizens of the country in possession of the kill switch. And in some countries, the person in question was either not elected or elections are a farce to keep appearances.
The President of the United States has sole nuclear launch authority. To stop him would either take the cabinet and VP invoking the 25th amendment and removing him from office, or a military officer to disobey direct orders.
Are you under the impression the president can actually do it? It's not true: someone else needs to at least push another button. I'm 100% sure of what I said in regard to the USA, though not about hidden nuke programs I wouldn't know about. No person in the USA can single-handedly trigger a nuclear weapon launch. What he has the authority to do is order someone else to launch a nuke, and that person then needs to decide to do it.
Even the president needs someone else to push a button (and in those rooms there's also more than one person). There's literally no human that can do it alone without convincing at least 1 or 2 other people, depending on who it is.
The fact that the world hasn't ended and no nuke has been launched since the 1940s shows that the system is working. Give the button to a random billionaire and half of us will be dead by next week to improve profit margins.
Bikini Atoll and the islanders who no longer live there due to nuclear contamination would like a word with you. Split hairs however you like over the definition of "launch", but those tests went on well through the 1950s.
Well-defined goal is the big one. We wanted a big bomb.
What does AGI do? AGI is up against a philosophical barrier, not a technical one. We'll continue improving AI's ability to automate and assist human decisions, but how does it become something more? Something more "general"?
"General" is every activity a human can do or learn to do. It was coined along with "narrow" to contrast with the then decidedly non-general AI systems. This was generally conceived of as a strict binary - every AI we've made is narrow, whereas humans are general, able to do a wide variety of tasks and do things like transfer learning, and the thinking was that we were missing some grand learning algorithm that would create a protointelligence which would be "general at birth" like a human baby, able to learn anything & everything in theory. An example of an AI system that is considered narrow is a calculator, or a chess engine - these are already superhuman in intelligence, in that they can perform their tasks better than any human ever possibly could, but a calculator or a chess engine is so narrow that it seems absurd to think of asking a calculator for an example of a healthy meal plan, or asking a chess engine to make sense of an expense report, or asking anything to write a memoir. Even in more modern times, with AlexNet we had a very impressive image recognition AI system, but it couldn't calculate large numbers or win a game of chess or write poetry - it was impressive, but still narrow.
With transformers, demonstrated first by LLMs, I think we've shown that the narrow-general divide as a strict binary is the wrong way to think about AI. LLMs are obviously more general than any previous AI system, in that they can do math or play chess or write a poem, all using the same system. They aren't as good as our existing superhuman computer systems at these tasks (aside from language processing, where they are SOTA), and not even as good as humans, but they're obviously much better than chance.

With training to use tools (like calculators and chess engines) you can easily make an AI system with an LLM component that's superhuman in those fields, but there are still things that LLMs cannot do as well as humans, even when using tools, so they are not fully general. One example is making tools for themselves to use: they can do a lot of parts of that work, but I haven't seen an example yet of an LLM actually making a tool for itself that it can then use to solve a problem it otherwise couldn't. This is a subproblem of the larger "LLMs don't have long-term memory and long-term planning abilities" problem: you can ask an LLM to use Python to make a little tool for itself to do one specific task, but it's not yet capable of adding that tool to its general toolset to enhance its capabilities going forward.

LLMs also can't write a memoir, or a book that people want to read, because they are poor at planning and at refining drafts, and they have limited creativity because they're typically a blank slate in terms of explicit memory before they're asked to write. They have a gargantuan amount of implicitly remembered material from training, which is where what creativity they do have comes from, but they don't yet have a way to accrue and benefit from experience.
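To illustrate the tool-use point, here is a minimal sketch of the pattern (all names hypothetical; the llm() function is a stub standing in for any chat-completion API): the generality lives in the routing, while a narrow tool does the superhuman part. What LLMs can't yet do reliably is write a new tool like this for themselves and keep it in their toolset for later problems.

```python
def calculator(expression: str) -> str:
    # Narrow tool: superhuman at arithmetic, useless at everything else.
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def llm(messages):
    # Placeholder: a real system would call a model here. For illustration we
    # pretend the model chose to delegate the arithmetic to the calculator.
    return {"tool": "calculator", "args": {"expression": "123456789 * 987654321"}}

def answer(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    reply = llm(messages)
    if "tool" in reply:
        # Run the narrow tool the model asked for and hand back its result.
        result = TOOLS[reply["tool"]](**reply["args"])
        # A real loop would feed the tool output back to the model so it can
        # phrase a final answer; here we just return the raw result.
        return result
    return reply["content"]

print(answer("What is 123456789 times 987654321?"))
```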
A thought exercise I think is helpful for understanding what the "AGI" benchmark should mean is: can this AI system be a drop-in substitute for a remote worker? As in, any labour that can be accomplished by a remote worker can be performed by it, including learning on the job to do different or new tasks, and including "designing and building AI systems". Such a system would be extremely economically valuable, and I think it should meet the bar of "AGI".
>But they can't, they still fail at arithmetic and still fail at counting syllables.
You are incorrect. These services are free; you can go and try it out for yourself. LLMs are perfectly capable of simple arithmetic, better than many humans and worse than some. They can also play chess and write poetry, and I made zero claims about "counting syllables", but it seems perfectly capable of doing that too. See for yourself, this was my first attempt, no cherry-picking: https://chatgpt.com/share/ea1ee11e-9926-4139-89f9-6496e3bdee...
I asked it a multiplication question, and it used a calculator to complete the task correctly; I asked it to play chess, and it did well; I asked it to write me a poem about the game, and it did that well too. It did everything I said it could, which is significantly more than a narrow AI system like a calculator, a chess engine, or an image recognition algorithm could do. The point is that it can do reasonably well at a broad range of tasks, even if it isn't superhuman (or even average-human) at any given one of them.
>I think that LLMs are really impressive but they are the perfect example of a narrow intelligence.
This doesn't make any sense at all. You think an AI artifact that can write poetry, code, play chess, control a robot, recommend a clutch to go with your dress, compute sums, etc. is "the perfect example of a narrow intelligence" while a chess engine like Stockfish or an average calculator exists? There are AI models that specifically and only recognise faces, but the LLM multitool is "the perfect example of a narrow intelligence"? Come on.
>I think they don't blur the lines between narrow and general, they just show a different dimension of narrowness.
You haven't provided an example of what "dimension of narrowness" LLMs show. I don't think you can reasonably describe an LLM as narrow without redefining the word - just because something is not fully general doesn't mean that it's narrow.
This argument generalises to all possible AI systems and thus proves way too much.
>[AI system]s are not general, but they show that a specific specialization ("[process sequential computational operations]") can solve a lot more problems than we thought it could.
Or if you really want:
>Humans are not general, but they show that a specific specialization ("neuron fires when enough connected neurons fire into it") can solve a lot more problems than we thought it could.
This is just sophistry - the method by which some entity is achieving things doesn't matter, what matters is whether or not it achieves them. If it can achieve multiple tasks across multiple domains it's more general than a single-domain model.
Still, you’d have to be quite an idiot to wait for the third time to listen, eh?
Besides, the winners get to decide what’s a war crime or not.
And when the US started mass firebombing civilian Tokyo, it’s not like they were going to be able to just ‘meh, we’re good’ on that front. Compared to that hell, being nuked was humane.
By that point, Japan was already on its way out and had resorted to flying manned bombs and airplanes into American warships. Nuking Japan wasn't for Japan; it was a show of force for the Soviets, who were developing their own nukes.
Neutralizing Japan the rest of the way would have cost millions of additional American lives, at a minimum. Japan was never going to surrender unless they saw the axe swinging for their neck, and knew they couldn’t dodge. They didn’t care about their own civilians.
As made quite apparent by, as you note, kamikaze tactics and more.
The Bomb was a cleaner, sharper, and faster Axe than invading the main island.
That it also sent a message to the rest of the world was a bonus. But do you think they would have not used it, if for example the USSR wasn’t waiting?
Of course not, they’d still have nuked the hell out of the Japanese.
Minus the urgency, scientific process, well-defined goals, target dates, public ownership, accountability...