But all that is very controversial and imbued with political bias. This is why CS ethics courses and the 'liberal arts' have a bad reputation: they claim to be teaching some sort of universal ethics but are actually just teaching left-wing propaganda.
For instance, from a libertarian perspective I'd argue:
- Social media makes democracy stronger because people can communicate more. There's little to say beyond that.
- AI surveillance is no different to surveillance powered by people, except cheaper. It has no real impact on anything beyond scale, which may make a practical difference but makes no fundamental difference to any arguments about surveillance. A much better question would be how cryptography affects surveillance and human rights, but very few ask that except the politicians and investigators it actually affects. This is because questions in the media about "AI" are almost always standard leftist anti-capitalism targeting tech firms whilst posing as something new. AI is merely a prop to start tired old conversations, not a real interest.
- "The emergence of tech firms with the resources of nation states" being the very next bullet after "AI" is just reinforcing my point. You aren't interested in tech ethics. You want to debate capitalism itself, which universities and academics have wasted far too much time on over the centuries. The resulting "debates" have yielded no insight but lots of violence, subterfuge and general chaos.
It's also - like the other question - based on false premises. There are no tech firms with the resources of nation states. A very few might appear that way if you compare arbitrary and incomparable statistics like GDP vs market cap, but those don't measure the same thing. As a simple example showing how false this is, tech firms can't make law or raise an army, which even the tiniest and most useless nation states can and do.
However, good luck arguing any of these points in a paper required by such a course and not getting a fail grade. Universities are the absolute last places that should be trying to teach ethics.
> It has no real impact on anything beyond scale, which may make a practical difference but makes no fundamental difference to any arguments about surveillance.
Scale makes a huge difference. The obvious example, given we're discussing this on HN, is computers. Fundamentally, the massive network of computers making up the internet is no different from Zuse's original machine: both are just number-crunching machines. But the difference that scale makes is so huge that the two aren't even remotely comparable, and in the same way, AI surveillance will allow you to control everyone, not just a few individuals.
Yes, it makes a practical difference, but in an academic debate setting, arguments for or against pervasive surveillance are usually scale invariant. If they weren't, you could end up with arguments like this one:
"The USSR compiling detailed files on millions of citizens was not unethical because it was small scale, but Google compiling detailed files on billions of citizens is, because it's large scale."
or alternatively
"Surveilling terrorists is fine because there are very few of them, but CCTV to catch thieves in public places isn't because that's much larger scale."
These would be very odd and brittle arguments, though. In fact, I've never seen anyone make an argument like that. The tolerability of surveillance depends very much on context, like who is doing it to whom and why, with scale being a fairly minor aspect. When people debate surveillance, it's always in the context of identities and purposes. But the notion that AI+surveillance is special ignores who the watcher is, who the watched are, and what the purpose of it all is, to focus on the least important part of all: the concrete cost to the watchers. After all, communist regimes already demonstrated that you can scale mass surveillance up to huge levels without needing any technology at all, just ideology.
That's why I'm suggesting that debates about AI in left-wing media are not actually about AI at all: they're just a way to restart the discussion of standard left-wing ideas and topics, like capitalism or sometimes identity politics. This article is an exemplar:
In fairness, it does start by recognising that "big brother" is the primary user. But the thrust of the article is about private (ab)usage. It ends by saying:
> Imagine a surveillance system that falsely identifies a black man from his face and then just as falsely attributes to him aggressive intent from his expression and the way he is standing.
Yes, if social media displaces other forms of communication that were previously uncensored.
Don't get me wrong. I think Twitter's turn from "free speech makes us stronger" to censorious partisan prop is terrible. But it could only be making people weaker if Twitter had completely replaced other, less biased forms of communication of the same scale, and I don't think that's true. Before Twitter there were blogs, TV, radio, newspapers etc. All still exist and are doing fine. Twitter and other sites have been additive.