The phrase "grift behind AI doomerism" suggests that either the book author or the reviewer (or both) don't have a clue. AI will cause real and huge problems.
Cars have killed millions of people. Add to that the consequences of electricity, industrialization, urbanization, and even capitalism itself. But billions and billions of people are not only better off -- living lives of outrageous luxury when measured against recent history -- they wouldn't have existed at all without those things.
Everything good comes with tradeoffs. AI will likely also kill millions but will create and support and improve the lives of billions (if not trillions on a long enough time scale).
That's one vision of how things play out. But I do think it's possible that AI ends up killing every last person, in which case I think "everything good comes with tradeoffs" is a bit too much of an understatement.
Even if AI doesn't kill every last person, I think it will almost certainly increase the wealth gap. I agree that the tradeoffs will most likely not be worth it.
But the main figures behind AI doomerism are nutjobs: either people applying Bayesian math badly, or right-wing extremists who believe that black people are genetically inferior (I know that's an overreach that doesn't describe the whole population of AI doomers, but the most prominent people in that sphere fit what I said).
Furthermore, they're people without a background in academia or any particular grounding in philosophy. I do agree that AI dangers should be investigated, but in an academic context.
I wasn't explicitly referring to the more "sane" people expressing doubts regarding AI.
Hinton at least says that other issues in AI should be dealt with. Rather than being an AI doomer who only fears AI takeover, he actually recognizes that there are other, current issues as well.
At this point, how many times should we already have been dead, according to Eliezer?
Like almost all the other doomers, Eliezer never claimed to know which generation of AIs would undergo a sudden increase in capability resulting in our extinction or some other doom, not with any specificity beyond saying that it would probably happen some time in the next few decades unless the AI project is stopped.
I don't know; a few years ago, when ChatGPT came out, he was saying things like "if we're still alive in 3 years (...)", when ChatGPT 3.5 was still a glorified transformer. And modern LLMs still are. It's the constant fear-mongering that grates on my nerves.
And well, I'm not surprised nobody knows which generation of AI could undergo an increase in capability that causes our extinction; it's not even certain that such a thing can exist, let alone which generation it would be.
He has been saying for a couple of years that it is possible any day now, but in the same breath he has always added that it is more likely 10 or 20 or 30 years from now than it is today.
It is not a contradiction to be confident of the outcome while remaining very uncertain of the timing.
If an AI is created and deployed that is clearly much "better at reality" than people are (and human organizations are, e.g., the FBI), that can discover new scientific laws and invent new technologies and be very persuasive, and we survive, then he will have been proved wrong.
OK, I think I might have a heart attack sooner or later; it's a possibility, although not a very likely one.
If I said that, you might ask whether I had seen a doctor or had some other reason to suspect it, and that's my issue with him. He's a sci-fi writer who is scared of technology without a grasp of how it works, and that's OK. He can talk about what he fears, and that's OK. It still doesn't mean we should take him seriously just because.
My pet peeve is that when laws regarding AI were being made - at least in Europe - some consideration was given to how it works, what it is (...), and how it's discussed in the academic literature. I had a lawyer explain that process in a course, and while it's not perfect, you eventually settle on something that is more or less reasonable. With Yudkowsky, you have a guy who is scared of nanotech and so on. Sure, he might be right. But if I had to act on something, it would look a lot more like the EU lawmaking process and a lot less like "AI will totally kill us sometime in the next 30 years, trust me". Perhaps now I'm being clearer.
And don't get me started on the rationalist stuff that just assumes pain is linear, and so on.
Eliezer has written extensively on why he thinks AI research is going to kill us all. He has also done 3-hour-long interviews on the subject which are published on YouTube.
And perhaps YouTube is the appropriate place to talk about the probabilities he pulls out of thin air, such as ChatGPT 5 having a less-than-50% chance of killing us, the bad math he showed us a few years ago on Reddit, and his proposal to trust Bayes rather than the scientific method.
At least he could learn to use some confidence intervals to make everything appear more serious /s
I'm very much in favor of research into AI safety, just done with less scaremongering and fewer threats of striking countries outside the GPU-limit agreement (and less Bayes, God).
Putting aside the nebulous notion of "contribution to hard science"...
She became famous for adopting a strain of strident and problematic activism, using it to attack her colleagues and to make claims just as wild as some of the ones she cherry-picks to critique.
It's not at all surprising that she ended up an extremely divisive figure. And meanwhile, the state of the art sped far ahead of where she drew her line in the sand.
It's hard to find discussion of her that isn't strongly biased in one direction or another (surely, my own comment included). In my experience (sample size 1), when she gets brought up (or involved), the quality of the discussion usually plummets.
I'm looking to do exactly this. I'm currently doing embedded software for aerospace, so somewhat similar with regards to documentation and standards (it's critical software, just not human safety related). Can you share how you found the work?
I'm also curious to know how you can do small quantities of hourly work in this field rather than take on projects that would require much larger time commitments. Are you just consulting as a domain expert or something?
He presents a compelling case for why you should use automated tools to check compliance against coding standards (the standards don't do you any good otherwise).
As far as style goes, pipe your code through indent if you have hangups about formatting. That way you can spend your time on issues that are known to be correlated with risk.
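To make that concrete, here's a minimal sketch (my own, not from his talk) of what running GNU indent over a deliberately messy C function might look like; the flags and file names are illustrative assumptions, not a mandated style:

    /* Hypothetical invocation, assuming GNU indent is installed:
     *     indent -kr -nut -l80 clamp.c -o clamp_formatted.c
     * -kr = K&R style, -nut = spaces instead of tabs, -l80 = 80-column lines.
     */
    #include <stdio.h>

    /* Intentionally inconsistent formatting, as it might look before indent runs. */
    int clamp(int v,int lo,int hi){if(v<lo){return lo;}
        if (v>hi) {return hi;} return v;}

    int main(void)
    {
        printf("%d\n", clamp(42, 0, 10));   /* prints 10 */
        return 0;
    }

The point stands either way: let a tool settle formatting arguments mechanically, so review time goes to the issues that actually correlate with risk.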
With respect to his point that Uber requires passengers to identify themselves, I think a completely anonymous transportation system is not what is needed. Why do conventional taxi services and public transit systems have cameras? The answer is ostensibly to provide safety not only to the rider but also to the driver and other passengers [1]. But I'm sure Stallman would say "ban the cameras everywhere, Big Brother has no business tracking us!" If I were a cab driver, though, I would feel safer knowing that a video of the transaction was at least being captured. And as a passenger I would feel less violated if I knew that the video was stored offline, took some amount of effort to retrieve, could only be accessed by certain individuals, and even then only with proper oversight.
What is needed is a balance between privacy and safety. Uber violates users' privacy, but fully anonymous transportation, especially in a one-on-one ridesharing situation, is unsafe for both parties. Law enforcement should be hard, not impossible.
Follow the link to the description of the New York City surveillance system. There you will see that he actually shares your viewpoint:
"New York City has a long history of oppressive surveillance. Taxicabs in New York transmit the passengers' photos by radio to the thugs, so I never take taxicabs there. By contrast, car service cars only store passengers' photos; that system is tolerable since, if you don't attack the driver (something I never do), the photos are ignored."
Relevant in the context of the recent FBI / iPhone case.
Where I live in California, these boxes are everywhere.
One interesting aspect is that they can be alarmed, so that you can know when someone has accessed the key. It would be interesting to at least know when someone has accessed your encryption keys.
I hope the Riverside fire marshal (in the video) isn't holding up the real key on camera.