Schmidhuber says that his task is "to create an automatic scientist, and then retire."
Not long ago it was mildly insulting to suggest that someone's writing sounded like the output of GPT; already (for most of us) it has become mildly complimentary. GPT may be hallucinating, but it writes well.
So what if you connect it to empirical feedback? Make it a scientist. I don't think it will be long before these machines think better than us (at least most of us).
The thing is, they don't have glands. They don't have historical trauma in their flesh. What I'm getting at is that these machines won't have human hangups. They won't be neurotic. They won't be HAL. They will be sane.
The interesting thing here is that life on Earth is actually pretty straightforward. In video game terms Earth is very easy and all the walkthroughs and cheats are known and available. The only reason it seems hard is that people are kinda messed up. Once we have sane intelligent machines to make our decisions for us things should get better rapidly.
Your comment reminded me of a discussion G. K. Chesterton has in "Orthodoxy" on madness. My take is that when facts and information are divorced from experience, we risk becoming unmoored from reality:
> If you argue with a madman, it is extremely probable that you will get the worst of it; for in many ways his mind moves all the quicker for not being delayed by the things that go with good judgment. He is not hampered by a sense of humour or by charity, or by the dumb certainties of experience. He is the more logical for losing certain sane affections. Indeed, the common phrase for insanity is in this respect a misleading one. The madman is not the man who has lost his reason. The madman is the man who has lost everything except his reason.
ChatGPT is a terrible writer. At least, every example I have seen has had poor information density and was generally worse than the prompts people fed it when they wanted to 'fluff up' a statement or opinion. I would be genuinely interested in a counterexample.
Side note - your main point seems overly optimistic. How would we recognize, value, or design a 'sane' machine when, by your own argument, we don't have access to sanity? It seems far more likely to produce a distillation of our neuroses.