That's a position I do share, but let me play devil's advocate for a moment:
What if an AI research company doesn't want to be a supervillain or do bad things, but they (or their investors) decide it's safer to just do the research in a jurisdiction where they face less risk of being subjected to regulatory burden, especially during the early exploratory stage? Surely they'll care about abiding by the rules of one of the world's major markets, but they'll do it after they have a mature product.
At least that's how I interpreted GP's comment. The EU will eventually get a product that conforms to the safety specifications we all agree it should, but it will be a foreign product.
"What if an oil research company doesn't want to be a supervillain or do bad things but they (or their investors) decide it's safer to just do the research in a jurisdiction where they face less risk of being subjected to regulatory burden especially during the early exploratory stage."
Edit: In case anyone is confused: post-1880, land in modern-day Oklahoma was considered "unassigned" by the genocidal government, which led to a "land run" in which vigilantes, crooks, scammers, and generally unsavory people flooded the area, creating the "Boomers" and "Sooners" who were given license to go and steal land from Native peoples. There were no government regulations, and they basically just started drilling for oil immediately.
The idea of searching for places to do product development that have fewer structural protections for residents is unconscionable.
This kind of "externality washing" is talked about as though it were some kind of amoral, practical business matter separate from ethical decisions, rather than an explicit attempt to avoid responsibility to the community you are moving into.
That is to say, organizations look for economic areas that lack power structures to scrutinize them, push back, or otherwise enforce community social standards on them. Capital will always find a downtrodden population to exploit for its own development BECAUSE that population lacks other protections.