> but it feels like AI is an especially strong instance of this. It seems like a lot of folks just want to pooh pooh any possible outcome. I'm not sure why this is.
I presume the amount of hype AI research has received over the past four decades is at least part of the reason. I also think AI is terribly named. We are assigning “intelligence” to what is basically a statistical inference model before philosophers and psychologists have even figured out what “intelligence” is (at least in a non-racist way).
I know that both the quality and (especially) the quantity of inference done with machine learning algorithms are genuinely impressive. But when people advocate AI research as a step towards some “artificial general intelligence”, others will (rightly) raise questions and start pooh-poohing it.
The naming does indeed matter here. The concept of general intelligence is filled with pseudo-science and has a history of racism (see The Mismeasure of Man by Stephen Jay Gould). Non-linear statistical inference with very large matrices could be called a kind of intelligence in the sense that it is very useful, but it is by no means the same type of intelligence we ascribe to humans (or even dogs, for that matter).
If your plant actually looks like a moss, you probably shouldn’t call it a rose (even though your moss is quite amazing).