* systems which establish priority in the dispatching of emergency services
* systems determining access to or assigning people to educational institutes
* recruitment algorithms
* those that evaluate credit worthiness
* those for making individual risk assessments
* crime-predicting algorithms
While I'd also like to see autonomous military devices banned, banning AI that makes opaque life-changing decisions about individuals seems reasonable. We already say that these shouldn't discriminate, and we've seen ways AI can allow discrimination through the back door.
I think the tradeoff is that at least the AI discrimination is systemized, and there's one place you can manipulate to reduce that discrimination, while with pre-AI human discrimination, it's not at one place, so it's harder to eliminate.
As an example, it's the difference between being rejected by a central agent for a loan, versus going to your local branch, and being rejected by a random employee at the local branch. It's obviously much easier to change the central agent than it is to change every distributed employee.
Now, whether this is actually the case in practice, and whether this is a good or bad thing is open to interpretation.
"I think the tradeoff is that at least the AI discrimination is systemized"
What does systematized mean in this context? The specific problem is that modern deep learning systems are unsystematic - they heuristically determine a result-procedure based on some goodness measure and this result-procedure is a black box.
You already have criteria-based algorithms for things like loans - the individual employees aren't making arbitrary decisions or just pen-and-paper calculations. You have a central algorithm now in a given bank, one that can be looked at and understood. The question is whether to go from that to an opaque, "trained" algorithm whose criteria can't be analyzed directly.
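To make the contrast concrete, here's a minimal sketch (entirely made-up thresholds and field names) of the kind of explicit, criteria-based check that can be read and audited line by line:

```python
def loan_decision(applicant):
    """Transparent, criteria-based check: every rule is explicit, so a
    rejection can be explained and audited line by line.
    (Thresholds and field names are invented for illustration.)"""
    reasons = []
    if applicant["credit_score"] < 620:
        reasons.append("credit score below 620")
    if applicant["debt_to_income"] > 0.43:
        reasons.append("debt-to-income ratio above 43%")
    if applicant["months_employed"] < 24:
        reasons.append("less than two years of employment history")
    return {"approved": not reasons, "reasons": reasons}

print(loan_decision({"credit_score": 700, "debt_to_income": 0.30, "months_employed": 36}))
# {'approved': True, 'reasons': []}
```

Swap those three if-statements for a trained model and the reasons list - and with it the ability to explain a rejection - goes away.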
As far as I can tell the law does not prohibit algorithm-assisted decision making. So as long as there is a human rendering the final decision, we are good. Which seems to be a reasonable balance IMO.
Good? This list is basically a rehash of popular sci-fi dystopias. I bet setting it up did not involve much thought beyond "Oh yeah, I remember this was the premise behind Minority Report. Let's ban it"
> An AI program called COMPAS has been used by a Wisconsin court to predict the likelihood that convicts will reoffend. An investigative piece by ProPublica last year found that this risk assessment system was biased against black prisoners, incorrectly flagging them as being more likely to reoffend than white prisoners (45% to 24% respectively). These predictions have led to defendants being handed longer sentences, as in the case of Wisconsin v. Loomis.
Also, I'd dispute your injection of the "more racist than humans" framing (which is also moving the goalposts a bit). The problem with racist algorithms isn't necessarily their "degree of racism" but the fact that they mask very real racism behind a veneer of false computerized "objectivity."
One I can think of off the top of my head (statistics, not AI, although AI would also allow it) is that the actuarial calculations for home/car insurance quotes rely on risk data by zip code, education level, income, and any and all other socioeconomic variables not including protected class, but which often correlate/group by protected class, and which are also reliable indicators of risk.
Depending on who you talk to these algorithms either are or are not discriminating against protected classes "through the back door".
Sure but my point is that, while you could argue that decisions about some topics could be discriminatory by definition, that has nothing to do with AI (and saying that AI is at fault is pure anti-AI FUD).
Parent mentioned that AI is used to sneak in discrimination through the back door, implying that discrimination wouldn’t be there (or there would be less) without AI.
Here's an example: mortgages (in the USA) used to be approved or denied by humans, but there were certain neighborhoods where only white people were allowed.
Now, there's a law against that.
In the future, there will be an AI system to approve or deny mortgages, based off of historical training data. Since that data includes the redlining era, the AI will learn to make racist decisions.
Most people do not understand how it is possible for a computer to be racist. (Other than against all humans like in Terminator 2.) This is why it's "through the back door", because it's not obvious how it's possible or where it's coming from.
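A toy sketch of that back door, with synthetic data and invented column names: race never appears as a feature, but a model fit on redlining-era outcomes still penalizes the affected zip codes because they act as a proxy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "historical" mortgage decisions. Race is never recorded as a
# feature, but applicants from historically redlined zip codes (coded 1)
# were denied far more often at the same income level.
zip_code = rng.integers(0, 2, n)    # 0 = non-redlined, 1 = redlined
income = rng.normal(50, 10, n)      # identical income distribution in both areas
p_approve = 1 / (1 + np.exp(-(income - 50) / 5))
p_approve = np.where(zip_code == 1, p_approve * 0.3, p_approve)
approved = rng.random(n) < p_approve

# The model only ever sees income and zip code...
X = np.column_stack([income, zip_code])
model = LogisticRegression().fit(X, approved)

# ...yet two otherwise identical applicants get very different approval odds,
# because zip code proxies for the historical discrimination in the labels.
print(model.predict_proba([[50, 0], [50, 1]])[:, 1])
```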
"Since that data includes the redlining era, the AI will learn to make racist decisions."
This is a crude assumption.
AI researchers are well aware of these potentialities, and you (or the government) would have to provide evidence that these systems are racist before banning them.
The basic premise you're making is: "The world is unfair -> AI uses data from the real world -> the AI is racist".
Insurance fiduciaries already use an incredible amount of 'training data' in their work, and we don't have hugely material problems there.
The OP mentioned "AI that makes opaque life-changing decisions". In that context, "through the back door" was more likely meant in the sense of "without anyone noticing".
It doesn't really matter if there is "less" discrimination without AI. While AI is not there, there is no discrimination from AI. If there is some after introducing AI, then it's a problem with AI.
Bonus points if they name the overseeing organization the Turing Police. And if Switzerland goes its own way and allows citizenship for AIs.
In all seriousness, I’m not sure if these legal restrictions will actually be effective. They are too broad, vague, and will likely just result in technological stagnation.
My perspective is that our technological advancement has well outpaced our ability to adapt to the changes or bring our legal and social tools effectively to bear on them.
A decade or two of stagnation would be frustrating for those in the field but probably overall a good thing. Plus I don't think this would affect research at all so not even.
> A decade or two of stagnation would be frustrating for those in the field but probably overall a good thing.
Is a decade or two of a head start given to high-tech totalitarian regimes like China overall a good thing?
> Plus I don't think this would affect research at all so not even.
Limiting use of AI reduces interest of the public and young researchers and engineers, contributes to brain drain and limits availability of large datasets that are an important asset for AI development.
>> Limiting use of AI reduces interest of the public and young researchers and engineers, contributes to brain drain and limits availability of large datasets that are an important asset for AI development.
I disagree. As a senior-year PhD student, I am relieved that the EU is taking a stance on this matter and hope that others in the West will follow suit (it's probably too late for China). I am relieved because I personally have grave concerns about the uses of AI in society and have thought for some time that some kind of formal and official framework is needed. AI researchers haven't yet managed to establish such a framework, so legislators have stepped in. The framework still seems pretty "green" and like it will take a lot of development and improvement, but that a first step was made is important.
So in fact you might say that having a legal framework in place makes AI research more attractive, because the student is not left to wonder about the ethics of her research on her own.
As to the availability of large datasets: how do you see that this would be affected by the legislation being considered?
I should also point out that the reliance on large datasets is a bug, not a feature, of the currently dominant AI techniques and that an alternative is sorely needed. If large datasets became less easily available, that would give a good incentive to researchers to go do something new, rather than throw a ton of data to an old benchmark to improve it by 0.2%.
Agreed 100%. I appreciate the hacker mindset, but when things approach society-altering scale, "just because you can doesn't mean you should" should be the mantra, not "move fast and break things".
That being said... I don't trust the gov to decide what should be on the internet. I have to resist all attempts to do so, while acknowledging some gov suppression is probably beneficial. It's a duality, and despite being on one side of it you can acknowledge the importance of the other side.
And who decides when that "should" condition is met? Over the past few years, I've seen far too many activists react to algorithms that notice true but politically inconvenient things by trying to shut down the algorithms, to pull wool over our eyes, to continue the illusion that things are other than what they are. Why should we keep doing that?
I have zero faith in the ability of activists or states to decide when it's safe to deploy some new technology. Only the people can decide that.
> to algorithms that notice true but politically inconvenient
I don't know that I agree that there exists an algorithm that can determine what is factually true, so I'm not sure I agree that an algorithm can "notice a true thing".
Do you have an example of when an algorithm noticed something that was objectively true but was shut down? Can you explain how the algorithm took notice of the fact that was objectively true (in such a way that all parties agree with the truth of the fact)?
I can't think of a single example of an algorithm determining or taking notice of an objective fact that was rejected in this way. But there are lots of controversies I'm not aware of, so it could have slipped by me.
For example gender stereotyping for jobs or personal traits, that is politically incorrect but nevertheless reflects the corpus of training data. (He is smart. She is beautiful. He is a doctor. She is a homemaker.)
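As a minimal illustration (toy co-occurrence counts, not a real language model), this is roughly how those associations are absorbed straight from whatever text you train on:

```python
from collections import Counter

# Tiny invented corpus; real training sets show the same skew at scale.
corpus = [
    "he is a doctor", "he is smart", "he is an engineer",
    "she is a homemaker", "she is beautiful", "she is a nurse",
    "he is a doctor", "she is a homemaker",
]

counts = {"he": Counter(), "she": Counter()}
for sentence in corpus:
    words = sentence.split()
    counts[words[0]][words[-1]] += 1   # pronoun -> final word co-occurrence

# The "model" here is just conditional frequency, but any statistical
# learner trained on this text absorbs the same associations.
print(counts["he"].most_common(3))   # [('doctor', 2), ('smart', 1), ('engineer', 1)]
print(counts["she"].most_common(3))  # [('homemaker', 2), ('beautiful', 1), ('nurse', 1)]
```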
> nevertheless reflects the corpus of training data
I don't think "reflects the corpus of training data" entails "is therefore an objectively true fact". In fact, I think a lot of people who complain about AI gender stereotyping for jobs or personal traits would _exactly describe the problem_ as the AI "reflecting the corpus of training data".
I don't think anyone disagrees that the AI/ML learns things reflected in the corpus of training data.
My request for an example is one where the AI/ML is reflecting an "objective truth" about reality and people then object to that output. But "the AI/ML is reflecting an objective truth about the training data" fails to satisfy my request for an example, because it falls short of demonstrating that the training data was an accurate and objective reflection of reality.
I think you're assuming that if it's in the data, it's "factually true" as the OP puts it. It doesn't work that way. There is such a thing as sampling error, for example.
Decisions like these need to be made slowly and societally and over time.
Tension between small-c conservatism that resists change and innovators who push for it before the results can be known is very important!
No one person or group needs to or will decide. Definitely not states. "Activists" both in favor of and opposed to changes will be part of it. The last few decades in tech the conservative impulse has been mostly missing (at least in terms of the application of technology to our society lol) and look where we are. A techno-dystopian greed powered corporate surveillance state.
We're not going to vote on it. Arguments like the one happening in this comments section _is_ the process for better or worse.
We also don't have to make the same decision for all use of AI.
For example, we should be much more cautious about using AI to decide "who should get pulled over for a traffic stop" or "how long a sentence should someone get after a conviction". Many government uses of AI are deeply concerning and absolutely should move more slowly. And government uses of AI should absolutely be a society-level decision.
For uses of AI that select between people (e.g. hiring mechanisms), even outside of government applications, we already have regulations in that area, regarding discrimination. We don't need anything new there, we just need to make it explicitly clear that using an opaque AI does not absolve you from non-discrimination regulations.
To pick a random example, if you used AI to determine "which service phonecalls should we answer quicker", and the net effect of that AI results in systematically longer/shorter hold times that correlate with a protected class, that's absolutely a problem that should be handled by existing non-discrimination regulations, just as if you had an in-person queue and systematically waved members of one group to the front.
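A rough sketch of how such an effect could be audited, with made-up numbers; the 25% threshold below is arbitrary and not any legal standard:

```python
import statistics

# Hypothetical call records: (group, hold_time_seconds). In a real audit the
# group label would come from a separate compliance dataset, not from the
# routing model itself.
calls = [
    ("group_a", 120), ("group_a", 95), ("group_a", 140), ("group_a", 110),
    ("group_b", 210), ("group_b", 185), ("group_b", 240), ("group_b", 200),
]

by_group = {}
for group, seconds in calls:
    by_group.setdefault(group, []).append(seconds)

means = {g: statistics.mean(ts) for g, ts in by_group.items()}
print(means)  # {'group_a': 116.25, 'group_b': 208.75}

ratio = max(means.values()) / min(means.values())
if ratio > 1.25:  # arbitrary illustrative threshold, not a legal standard
    print(f"potential disparate impact: mean hold times differ by a factor of {ratio:.2f}")
```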
We don't need to be nearly as cautious about AIs doing more innocuous things, where consequences and stakes are much lower, and where a protected class isn't involved. And in particular, non-government uses of AI shouldn't necessarily be society-level decisions. If you don't like how one product or service uses AI, you can use a different one. You don't have that choice when it comes to hiring mechanisms, or interactions with government services or officials.
Reading the article, it sounds like many of the proposals under consideration are consistent with that: they're looking closely at potentially problematic uses of AI, not restricting usage of AI in general.
Not 100% sure what the parent is talking about, but my first thought is the predictive policing algorithms used in some jurisdictions to set bail and make parole decisions. My hazy understanding of the controversy is that these algorithms have "correctly" deduced that people of color are more likely to reoffend, thus they set bail higher or refuse release on parole disproportionately. At one fairly low level this algorithm has noticed something "true but politically inconvenient", but at a higher level, it is completely blind to the larger societal context and the structural racism that contributes to the racial makeup of convicted criminals. I'd argue that calling this simply "true" is neglecting a lot of important discussion.
Of course, perhaps the parent is referring to something else. I'd also like to see some examples.
> My perspective is that our technological advancement has well outpaced our ability to adapt to the changes or bring our legal and social tools effectively to bear on them.
> A decade or two of stagnation would be frustrating for those in the field but probably overall a good thing. Plus I don't think this would affect research at all so not even.
I agree. And frankly, technological "progress" for its own sake reeks of technopoly [1]. Technology should serve society, not the other way around.
> Postman defines technopoly as a "totalitarian technocracy", which demands the "submission of all forms of cultural life to the sovereignty of technique and technology".
In an arms race it's not the best strategy to take a step back and look at angles. In AI there's an arms race going on and one likely outcome of this might be increased brain drain and even less likelihood of great companies rising in the EU.
I mean I don't think the damn NSA or whatever the EU has for that is going to stop doing whatever they're already planning to do with AI.
And I absolutely could not care less about "great companies rising" either now or hypothetically in the future.
Some applications of AI tech are very clearly MORALLY WRONG and cause harm. Currently that harm is limited because the reach of these tools is limited and that is the only thing holding them back from doing worse.
If companies need that dynamic to rise and be great then they can just not as far as I'm concerned.
Laws aren’t immutable and right now this is just a proposal. We’ll have to see what its final form ends up being.
> will likely just result in technological stagnation.
Nothing in this stops any kind of research nor does it ban its use. It just limits how effortlessly you can invade people’s privacy and discriminate against them. It may very well help research in underinvested areas of AI, and it’s ethical consequences.
You could've fooled me. About the only thing that seems to change laws in places like the EU is when the courts decide to strike something down. Everything else seems to just go ahead the way the EU politicians envisioned. Consequences be damned.
It's a good thing that it is brought into the light and discussed though, because most people probably don't realize how much power "automated decision making" already has over their lives, and it will only get worse if tech giants and oppressive governments have their way.
But, what good is technology for the sake of technology if it’s detrimental to people’s well-being? Technology to conquer ills, yes. Technology which alienates people, no.
It’s kind of like the goths vs the romans or the inuit vs europeans. Is the value of progress more than the value of self-worth?
I'm more concerned about these two aspects of it from the reporting:
> Experts said the rules were vague and contained loopholes.
> The use of AI in the military is exempt, as are systems used by authorities in order to safeguard public security.
Sounds like it's keyed to build stagnation in public tools, but state-actor tools can go right on ahead becoming more sophisticated (and harder to understand or predict).
There are almost always these exemptions for military/law enforcement use cases in EU Directives and Regulations, because while the constituent countries in the EU have miltary and law enforcement co-operation, they would veto new legislation that impacts their independence in those areas.
That’s a different question. Even if you think certain technologies are overall negative, you may be forced to adopt them in order to remain competitive with other nation states. Nuclear weapons are probably the classic example here. AI seems similar to me.
Nuclear weapons are a huge net positive for humanity. Have we had another big industrial war since WWII? No. Why not? Because the combatants understand that nowadays total war means total annihilation. Now we resolve our disputes in other ways. Nuclear weapons gave us an age of peace.
Their point is that anyone who can develop a nuclear weapon also is aware of this. It might be a survivorship bias, but it is also self-fulfilling.
The biggest threat, as I can see it, is a truly irrational actor. Luckily they're hard enough to build that this prerequisite has so far filtered out anyone truly irrational.
Progress brings good: vaccines, decreased infant mortality, etc. But we also get to live in cities, disconnected from some realities, like food and self sufficiency. In civilization you are counting on other people doing things for you. Farming, housing, transportation, care, education, etc. Yes it’s fancy and advanced and most of us will choose modern life, but not everyone has gone that route (goths in roman times, inuit in present times). Some people see value in being connected to nature, not necessarily in some artificial romantic way, but more visceral ways and forgoing modern progressive life.
How do you know that these technologies are detrimental to people's well-being?
Some activists claim that things like facial recognition, ad targeting, and personalized risk scoring are detrimental, but are these activists correct? I don't think so! All these technologies give us new capabilities and allow us to more precisely understand and shape the world.
Every single time humanity has gained new abilities --- from the Acheulean stone ax to the modern deep neural network --- humans in general have benefited and prospered, because any increase in our ability to understand and manipulate the world is a boon.
There is no such thing as a net-negative technology.
> There is no such thing as a net-negative technology
Explain to me the benefits of a gatling gun, other than being a more effective tool for killing humans. Is all of humanity really better off for all those that have been killed by this invention?
That's a lot of deaths that start out as a massive negative balance against. Tell me the overall improvement to society that the gatling gun brought us that was "worth" those deaths.
Lethality of weaponry has a significant impact on how battles are fought, where increasing lethality generally means fewer participants and, counterintuitively, fewer deaths: https://acoup.blog/2021/02/26/fireside-friday-february-26-20... is a decent discussion.
There's obviously some lag (the bloodiness of WWI) but overall yes, in a weird way, the Gatling gun and other weapons like it are part of why you're a lot less likely to die as a draftee today than in the Napoleonic era.
The obvious difference between current draftee and Napoleon one is that Napoleon set up to conquer other coutries. The peace time draftee is going to have lower mortality.
WWII's civilian/military casualty ratio was far higher than WWI's (~2:1 vs. very roughly 1:1 or lower), which matters for how the lethality hypothesis plays out in the casualty figures. When more of the dead are civilians, and civilians predominantly die from famine and disease, higher overall death counts in a more global conflict don't necessarily mean that conflict killed more soldiers per capita, though some nations definitely suffered higher per-capita military losses due to factors beyond increasing weapon lethality - mostly thinking of Russia there. For instance (crunching Wikipedia numbers), in the UK it looks like WWI had a higher proportion of military deaths to population (~2% vs. ~0.8%) even though the UK also suffered significantly more civilian deaths in WWII, though not enough to outweigh the decrease in military losses as a share of total population.
There might be an argument that increasing weapon lethality can decrease the number of battlefield combatant deaths but also could increase the likelihood of mass civilian atrocity. That said, high-lethality weapons definitely aren't necessary for mass civilian atrocities either.
And sure, I intended a wartime/wartime comparison regarding draftees: even if we go back to WWII as the last real great-power hot war, a French Napoleonic draftee (looking at France as Britain apparently didn't actually conscript in the Napoleonic wars) is significantly more likely to die in battle than a French WWII (or even WWI!) draftee: https://en.wikipedia.org/wiki/Napoleonic_Wars_casualties#cit...
The mere threat of its awesome killing power made an adversary think twice. In one of the most bizarre episodes Ms. Keller recounts, on July 17, 1863, during the draft riots, the New York Times (which supported conscription) mounted three Gatling guns on the roof of its headquarters, with the editor in chief at the trigger, and successfully cowed an angry mob without firing a single shot.
> All these technologies give us new capabilities and allow us to more precisely understand and shape the world.
Allow who to shape the world, exactly? Because it's not me, and it's probably not you. Technology gives power to those who control it, and control over face-recognition tech, personalized risk-scoring and ad tech is in the best cases behind several layers of bureaucratic abstraction. Our world is being shaped by megacorporations and governments, not those whose lives these technologies have the potential of having the most negative impact on.
Our lives are being shaped by powerful organizations so we should shun progress because it helps them too! Let's all burn our phones and dismantle the internet, it's the root of all evil, I tell you.
After we destroy AI we should make sure nobody does any data analysis by hand or other means. Just to be sure. Because there are people who would justify the exact same decisions even without AI. They just use data to do what they want. So let's destroy data too, and math so nobody can do anything biased or wrong.
> There is no such thing as a net-negative technology.
OK, but no one is actually arguing that. The problem starts when the technology gets abused. We need safeguards against abuse of AI in much the same way as we need it for nuclear weapons and energy and, more recently, social media (eg GDPR protections).
> will likely just result in technological stagnation.
I believe that is the goal of the legislation. By stagnating the field of AI within the EU one can encourage any negative effects to happen in other countries, so they can suffer and discover the potential downsides.
The limits are a convenient way to escape the challenge. By opting out, nobody can ask why European companies don't have state of the art AI technology.
If Europe cannot offer more than €6.7 billion to create an alternative infrastructure to AWS, GCP and Azure then they better prepare an excuse for why they haven't managed to create AI.
Am I the only one who finds it odd how the British government brags about Alan Turing after what they did to him?
The man saved women, men, and children of all races and orientations from a horrible end. I wish the British government had extended the favor to Turing himself.
Everyone is caught up in ridiculous AI mythology, and misunderstanding the nature of the tech.
AI is just one approach to solving a problem, and will invariably make up just a small part of more complex systems involving mostly classical approaches.
Not only is AI not hugely special, nothing we do or use is mostly 'AI' to begin with.
From the article:
"AI systems used for indiscriminate surveillance applied in a generalised manner"
So does this mean as long as we're not using Deep Learning, we can indiscriminately surveil?
And what if the 'surveillance system' doesn't use AI, but the cameras themselves have AI embedded within to adjust focus? Does that count?
What if the system doesn't use AI, but the supporting services do?
It's basically ridiculous.
If the government wants to regulate 'mass surveillance' - that sounds like a good thing so do that.
If they want to ensure privacy in certain domains - great - but it has nothing to do with 'AI'.
Edit:
Furthermore:
"Mr Leufer added that the proposals should "be expanded to include all public sector AI systems, regardless of their assigned risk level".
"This is because people typically do not have a choice about whether or not to interact with an AI system in the public sector.""
This is laughably bad, because again, there is no such thing as an 'AI system'.
A broad ban on AI in the public sector would almost guarantee European stagnation in every sector, for no good reason at all.
Will they ban Google Search in public service? Google assistant? Google navigation? Those use AI.
Will they ban AI signal processing for anything related to government?
They'll have to ban Tesla as well, there's a ton of AI in every unit.
Will there be a single automobile in 10 years that won't have AI components? The EU is going to ban all of them from use in public service?
Even today, AI is almost universal in every day systems, that is only going to increase quite a lot.
In 5 years, you literally won't be able to use any tech without it touching some form of AI.
Mr Leufer has no understanding of what he is talking about.
Does anybody have a link to the actual draft this article is based on?
In my limited experience the proposals by the EU commission are often readable and interesting. I might not agree with them, but I do appreciate that the thought process is made public years before ideas become laws. (As the article states, that is also very much the expectation here.)
I thought this article would be about software in heavy machinery like self driving cars but it's more aimed at applications of AI that are incompatible with human rights: social scores, surveillance, crime-prediction, etc.
This is the problem: they don't really understand what they're trying to regulate. A lot of it is (a) data-privacy issues or (b) using data to make automated decisions. The "AI" part is superfluous, as far as I can see.
Lawmakers appear as caught up in labeling stuff "AI" as investors. It's going to make them less effective by letting them avoid actually defining what they're trying to prevent.
Consider:
"those (AIs) designed or used in a manner that manipulates human behaviour, opinions or decisions ...causing a person to behave, form an opinion or take a decision to their detriment"
It's clearly about advertising and social media. If you want regulation to be effective, specifics are good. Platitudes don't make good regulations.
> No one actually wants a society being ruled by computers, except maybe the people running those computers.
Humans in charge are known for injustice, kickbacks, favor trading, selective enforcement and other forms of corruption and abuse. Properly engineered and regularly reviewed open source systems with balance checks might just get us closer to a rules-based system that provides a level playing field for everyone. Given all the known biases of current AI systems, we are certainly far from ready for it, but the prospect of transforming large parts of government into an open source "social operating system" that automatically and fairly offers basic services according to clearly coded and broadly enforced rules looks like a desirable goal in the (very) long term.
Many laws can be expressed as computer code. Where they cannot is often due to deliberate vagueness built in to leave scope for future interpretation as new cases arise. This suggests that we could express laws in computer code that raises a HumanInputRequiredException in the cases currently handled with deliberate vagueness. The resulting reduction in vagueness would remove a huge amount of discretion that currently facilitates corruption and abuse of power while ensuring ultimate human control and human-directed evolution of the law.
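A minimal sketch of what that might look like, using an invented ordinance and exception class purely for illustration:

```python
class HumanInputRequiredException(Exception):
    """Raised where the codified rule deliberately defers to human judgement."""

def noise_ordinance_violated(decibels: float, hour: int) -> bool:
    """Toy 'executable statute' (invented for illustration): quiet hours 22:00-07:00.

    The bright-line parts are plain code; the deliberately vague part
    ("unreasonable" daytime noise) is surfaced explicitly instead of being
    left to discretionary, and potentially inconsistent, enforcement."""
    quiet_hours = hour >= 22 or hour < 7
    if quiet_hours:
        return decibels > 55                      # mechanically checkable threshold
    if decibels > 70:
        # The statute is intentionally vague here: escalate rather than guess.
        raise HumanInputRequiredException("daytime noise: reasonableness needs human review")
    return False

print(noise_ordinance_violated(60, hour=23))      # True: clear-cut nighttime violation
try:
    noise_ordinance_violated(75, hour=14)
except HumanInputRequiredException as e:
    print("escalated to a human:", e)
```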
I want to add a historical remark. Very early forms of human government, such as ancient kingdoms in various parts of the world, had one or a few prominent members of society hold full discretion and decision-making power. Later, we codified rules and decided that even monarchs are not above the law. Endowing the written laws with additional power by making them executable seems like a natural next step.
You can't be serious. EU leadership has always been a bit weak, with a lot of compromises, and not a hard stance on foreign policy, but to prefer an unknown oracle? The current state of AI would even have it overfitted on a small relevant corpus padded with arbitrary other material. How can you expect that to produce better government?
Let me try to summarise. Tell me if I've got this roughly right.
A journalist has waffled about someone tweeting that an out-of-date 80-page draft from January of ... something ... the journalist can't be bothered to tell us its proper title ... has been "leaked" but not actually published for us to read. However, an up-to-date version of whatever it is will be officially published next week.
So perhaps I'll just wait till then before trying to form an opinion.
> those designed or used in a manner that manipulates human behaviour, opinions or decisions ...causing a person to behave, form an opinion or take a decision to their detriment
This appears to obviously apply to the Facebook wall. You can find a high-profile example of this in [0], but [1] explains how this manipulation, which optimizes "engagement", is built deep into Facebook's design. I think the case that it causes users to form opinions and take decisions to their detriment is obvious, so these new laws should apply. Am I wrong?
I'll vote for any party that will make tracking illegal. I am sick and tired of being stalked by all those multibillion corporations who don't even give anything back. They are the cancer on the society.
We will have to face everything everybody thinks up. Legislation is whistling in the dark (pissing in the wind; closing the barn door...). It's too easy to create and deploy these things. Anybody who finds a reason to do so, will.
It will take social changes of some kind, to adapt to this new reality. Not draconian laws.
It's been in Irish law for some time (which makes it applicable to the majority of US tech giants). Unfortunately Ireland isn't hq for a lot of the big finance and insurance firms which would probably be more useful in this situation. Anyway, here is the latest version of the law: TLDR; if it impacts you it should require a human decision and be appeal-able. Also there is a right to review code but I'm not sure how that works:
the right of a data subject not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her shall, in addition to the grounds identified in Article 22(2)(a) and (c), not apply where—
(a) the decision is authorised or required by or under an enactment, and
(b) either—
(i) the effect of that decision is to grant a request of the data subject, or
(ii) in all other cases (where subparagraph (i) is not applicable), adequate steps have been taken by the controller to safeguard the legitimate interests of the data subject which steps shall include the making of arrangements to enable him or her to—
(I) make representations to the controller in relation to the decision,
(II) request human intervention in the decision-making process,
These issues should be solved by making problematic AI use a breach of individual rights (e.g. privacy), rather than by regulating the technology itself (with govt. exceptions), which increases the power imbalance between the state and the citizen.
Government (ab)use of AI is of course a serious threat. But I'd say big corporations' abuse of AI is even worse.
Assuming we are talking about a democratic state, at least there are some checks and balances on governments whereas people cannot elect a FAANG CEO or go to a ‘.gov’ website to read a transcript of board meetings.
Edit: I am by no means advocating for government’s use of AI in any form.
A business can't lock you in a prison cell. I'd argue that the checks and balances at this point are little more than a mirage. Ruling by fiat is becoming more common, accountability less so. Government use of AI is far more menacing to me than a business using it.
> Government use of AI is far more menacing to me than a business using it.
I think both are equally menacing. The problem with business usage is that we're "trusting" them to be good stewards of that capability.
There's not much preventing a business from abusing such power in a covert anti-competitive anti-consumer fashion, or worse, selling access to that power to the highest bidder (as a service!).
Perhaps not, but companies can deny you access to fundamental electronic infrastructure, use of which is increasingly essential in a cashless society where services are online or non-existent. With no right of appeal.
> Ruling by fiat is becoming more common, accountability less so.
Well, PG&E cut down a whole bunch of trees in my town just recently, against the objections of the locals and the city council. And Judge Alsop, who supervises their bankruptcy, chided them for this slapdash, crude effort to show they were doing something (it didn't stop them, darn it).
So you see a bunch of actions that look like the state or industry acting by fiat. But it only looks that way. The many institutions of this society are at loggerheads with each other, the parties are in gridlock, etc. The main thing is they've shut out the average person from their debates - which is a bit different.
The propensity of leaders to rule by executive order, or cabinet bill is what I'm getting at. Those sorts of actions are most certainly ruling by fiat.
Although private prisons exist (in strikingly small numbers[1]), private corporations can't just decide to throw you into a private prison on a whim; the government decides that.
They're not presently a thing; it's just a scandal that happened at one time in history. It's also corruption, and corruption exists in both the private as well as public sectors. In the case of the Kids for Cash scandal, there have since been lawsuits, overturned adjudications, and commissions to ensure that it doesn't happen again.
Solving corruption is orthogonal to the question of whether private corporations can perform extra-judicial imprisonment with impunity. That really just doesn't happen at scale, because it can't.
Oh, I absolutely do not doubt that it could happen again more often, but at least we have Racketeering laws and a system to essentially minimize the degree to which it happens with impunity.
Of course, but the person who assigns you to that prison is still acting on behalf of the state. It would be correct to say that a business can keep you in a prison though!
Note that these checks and balances don't apply to non-citizens of the country, who are the people affected by the use of AI in the military (one of the exemptions listed above). If an EU member state abuses AI in the military against a non-European, what direct recourse do they have?
Any governing body seems to leave a lot of room for "except for pigs" when they write down their rules. The bigger the body, the more potential pigs there are.
Government does regulate itself, sometimes. The proposal seems to be concerned with regulating the government, for example limiting "crime prediction". Also, the private institutions it's talking about are things like credit bureaus and employers large enough to use AI for screening employees.
I saw that too, and ended up wondering if it'll be ignored if it's seen that crime prediction falls under the "public safety" exception, from time to time (or eventually, altogether). That's the problem with vague things like "public safety" being tied to regulation, imo.
The AI race is in an unstable equilibrium. Slight perturbations of the initial conditions (slight advantage) will have consequential, exponential and final implications years later.
I beg to differ. "AI" is already rejecting CVs and untransparently keeping people unemployed. People do care about how to use this tool responsibly.
Since there have been and will be exactly zero useful AI applications anytime soon other than bias laundering (aka "systematic discrimination is ok when a computer does it"), I think it's ok.
While I don’t necessarily disagree, can you elaborate a bit more? AI is likely to make some significant impacts especially in computer vision applications among others.
>those designed or used in a manner that manipulates human behaviour, opinions or decisions ...causing a person to behave, form an opinion or take a decision to their detriment
Would this mean that A/B testing of news article headlines would be banned if it was powered by software?
A restriction on computation and processing of information seems like a restriction on speech, expression, and thought. The list named in this article is just bizarre. For example it mentions that the following would be covered by the proposed policy:
> those designed or used in a manner that manipulates human behaviour, opinions or decisions ...causing a person to behave, form an opinion or take a decision to their detriment
Can't all of marketing, politics, and activism be construed to fall under this broad statement? It feels to me like this unfairly allows only certain means to achieving the same ends, which ends up favoring certain segments of society at the expense of others. As an example, what makes shaping political opinions using AI inappropriate but shaping it via disruptive protesting appropriate? A person with few responsibilities and enough time to spend protesting is allowed to influence society, and someone who wants to do the same through a different means that makes more sense for them isn't permitted to do so? Similarly, credit worthiness and crime risk assessment are plainly logical ways for individuals, corporations, and governments to contain risks, incentivize the correct behavior, and make smart decisions for themselves. Getting rid of credit scoring is equivalent to income redistribution, since less risky individuals will be forced to subsidize others.
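To put rough numbers on that subsidy claim, here's a back-of-the-envelope sketch (all figures invented):

```python
# All figures invented. Two equal-sized borrower pools differing only in default risk.
low_risk_default = 0.02
high_risk_default = 0.10
funding_cost = 0.03          # lender's cost of funds plus margin

# Risk-based pricing: each pool roughly covers its own expected losses.
rate_low = funding_cost + low_risk_default       # ~5%
rate_high = funding_cost + high_risk_default     # ~13%

# No scoring allowed: one pooled rate has to cover the average loss.
pooled_rate = funding_cost + (low_risk_default + high_risk_default) / 2   # ~9%

print(f"risk-based: {rate_low:.0%} / {rate_high:.0%}, pooled: {pooled_rate:.0%}")
# Low-risk borrowers pay ~4 points more than their own risk warrants,
# high-risk borrowers ~4 points less, i.e. a cross-subsidy.
```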
I don't think blanket regulation like this is the answer. The answer lies in ensuring healthy markets with sufficient competition (enforcing anti-trust law), in relying on federalism so that local governments can decide which technologies they want to use or not use, and in privacy controls for users to retain control of their data. Not in restricting math.
If you look at the course of history, every attempt to slow the adoption of new technology has been a disaster for human welfare. This policy move is just like trying to ban the telephone or the printing press.
It is humorous that you bring up Luddites, as they were one of the first 19th century movements (among many) that sought to fight the mechanization and industrialization of their labor, and were repressed by military might because god forbid someone sabotage a machine.
Many "innovations" were bought at the beginning not for real use, but in order to threaten the working class with this new tool, so that they do not ask for more job stability, higher wages, or reduced working hours (e.g. grain harvesters, industrial looms, etc).
And the new technology when used often resulted in a net negative in terms of life expectancy, environmental pollution, danger or physical load, which was then disregarded as a necessary sacrifice that the poor need to make in the inexorable march of progress.
This is more like banning the production of chlorine gas because someone might put it in an artillery shell --- never mind all the other useful things you can do with chlorine gas. If you want to regulate externalities, regulate externalities: don't ban technology itself.
...which is basically banning people from measuring risk for themselves and deciding how to react to that risk themselves. How is this not authoritarian?
Authoritarian can mean so many things; could you define it further? Because, the way I read your comment, I could call seatbelt laws authoritarian, and I guess you didn't mean that.