>If generative AI can test physics theories far faster than humans can, we may see rapid progress in physics: AI could generate thousands of candidate theories and run the experiments to test them, possibly leading to new physics models. However, I am uncertain whether this will be achievable soon, particularly for theories that require costly experiments.
I've long felt that this may be the strongest argument against an AI singularity.
The technical ability to emulate the minds of the world's theoretical physicists and run accelerated simulations of their thought processes may well be developed. But generating valid new insights in physics seems to depend strongly on observations and experiments conducted in the physical world, as it has historically, and the virtual equivalents of those experiments may prove inadequate or impractical to implement.
Steven Pinker made a similar argument in a 2018 discussion with Sam Harris (the remarks begin at 65m03s in this recording [1]; the full context begins at around 50m36s [2]). Harris is concerned about existential risks posed by advances in artificial intelligence, whereas Pinker is less so, in part for this reason. I agree with Harris that artificial general intelligence carries risks, but I agree with Pinker and the parent comment that the scientific process depends on experiment, and that an inability to conduct accelerated experiments in the physical world may undermine the standard argument for the inevitability of an AI singularity.
An AI capable of interfacing with the physical world might develop the ability to conduct accelerated physical experiments, but it would presumably face the same fundamental and contingent limits as human researchers, and the history of human science suggests those limits may impede exponential progress.
[1] https://www.youtube.com/watch?v=hofkL0RJJtM&t=3903s
[2] https://www.youtube.com/watch?v=hofkL0RJJtM&t=3036s