How many Clearview AIs exist that we don't know about? This is something I could create as a hobby project in my free time, so what would someone with real resources be able to do?
You could start with companies that Peter Thiel invests in and then start building a network of known associates. He seems keen on investing in this sort of company, maybe to keep an eye on them, since Palantir/Facebook can't pull this off without the media attention.
FOIA requests are your friend. Muckrock.com has done some work in this space. Also, don’t give up on legislating and regulating this sort of tech away.
There are probably several dark versions of this out there that have access to this type of technology already. It's just too tempting for maleficent organizations around the world to ignore.
Just think about the investigators around the world (FBI, DEA, etc.) who are combating evil organizations but are not careful with the information they post online; those people are screwed. Time for a new job.
I'm glad that this one was done in the USA so we have a way to bring more light to these issues.
I suspect (and hope) that Clearview AI will not last long here in its present form.
Okay, say they take the code and set it up on servers outside the US, maybe incorporating under a new name in a non-US country as well. What implications would that have for US law enforcement agencies in particular, and US-based companies in general, using this new foreign company?
Not to insult you, but... could you? If what the company claims is true, and the NYT says that it is, you can build a fast search over 3 billion distractors? Yourself? Without relying on some random outside service, which a national security or police actor would not allow you to use in order to maintain SCI/SCIF compliance?
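For a sense of what that claim entails: fast search over billions of faces is typically an approximate-nearest-neighbour index over face embeddings. Here's a minimal sketch using FAISS; the embedding dimension, partition count, and random stand-in data are illustrative assumptions, not Clearview's actual setup:

```python
# Minimal ANN search sketch with FAISS; all parameters are toy assumptions.
import numpy as np
import faiss

dim = 128                                   # embedding size (model-dependent)
nlist = 1024                                # number of IVF partitions
quantizer = faiss.IndexFlatL2(dim)
index = faiss.IndexIVFPQ(quantizer, dim, nlist, 16, 8)  # PQ: 16 bytes/vector

vecs = np.random.rand(100_000, dim).astype("float32")   # stand-in embeddings
index.train(vecs)                           # learn partitions and codebooks
index.add(vecs)                             # at 3B vectors you'd shard this

index.nprobe = 32                           # partitions probed per query
query = np.random.rand(1, dim).astype("float32")
distances, ids = index.search(query, 5)     # top-5 nearest candidates
print(ids)
```

Getting that both fast and accurate at 3 billion entries, sharded across machines you fully control, is exactly the kind of engineering effort the parent is pointing at.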
Even if Clearview somehow shuts down... this sort of thing is basically inevitable without laws to protect our privacy. And honestly, all of these companies that might at least complain when Clearview does it would gladly share this data with the federal government. So I'd be surprised if at least some federal law enforcement agencies haven't had access to equivalent (and probably better) technology for quite a while.
Scott Adams had a Clearview investor on his podcast (ep. 798). "Tone deaf" really doesn't adequately describe this investor's level of concern about critics. When I was a teenager I once asked a musician what he thought of a certain store for musical instruments; his response was that he supposed someone has to sell Peavey gear, but he certainly wouldn't want to see the special room in hell reserved for such a person. That's kind of how I feel about involuntary biometrics.
You're right, I shouldn't pick on a company like that, but the conversation would have been meaningless if I had just said he'd been referring to a US musical-instrument company without musician endorsements.
If you wish to avoid vigilante justice, you need to provide something better. The government is meant to handle matters like this to protect anybody who might be wrongly accused, but the government is abdicating their duty to do so. When the government abdicates their duty to the common people, the common people have a moral and ethical right to take matters into their own hands.
Some ads now have links to sites where you can do mass opt-outs, like this one: https://optout.networkadvertising.org/?c=1 I have no clue if they're legit, but they certainly look useful.
Now that the CCPA is in full effect, can't somebody just build a mass opt-out service to get out of Sift, Clearview, etc.? Sift et al. already have to hire a third party to handle ID verification (yes, it's weird that you have to send them your ID to get your own data, which is supposed to define your identity...). No idea if it would be profitable, but if you've got the ID verification bit solved, then launching an opt-out service might be the way to make your product go viral.
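If someone did build that, the piece besides ID verification is mostly mundane request templating. A hypothetical sketch, where the broker contact addresses are made-up placeholders (this isn't legal advice; the deletion right cited is CCPA §1798.105):

```python
# Hypothetical core of a mass opt-out service: one templated CCPA deletion
# request per data broker. Broker emails below are placeholders, not real.
BROKERS = {
    "Clearview AI": "privacy@example-clearview.invalid",
    "Sift": "privacy@example-sift.invalid",
}

TEMPLATE = """To {broker}:

Under the California Consumer Privacy Act (Cal. Civ. Code § 1798.105),
I request deletion of all personal information you hold about me.

Name: {name}
Verification: see attached identity documents.
"""

def draft_requests(name):
    """Yield (recipient, body) pairs, one per registered broker."""
    for broker, email in BROKERS.items():
        yield email, TEMPLATE.format(broker=broker, name=name)

for to, body in draft_requests("Jane Doe"):
    print(to, body, sep="\n")
```

The hard parts are the ones the sketch omits: verifying identity without becoming yet another honeypot of IDs, and chasing brokers who simply ignore the request.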
> To process a request, however, Clearview is requesting more personal information: “Please submit name, a headshot and a photo of a government-issued ID to facilitate the processing of your request.”
ROFL. They will use your request to add to and extract value from your information. It also clearly and uniquely identifies individuals who can be considered "troublemakers".
But unfortunately, that data is already out there. If Clearview could collect and scrape it, so can anyone else. And if Clearview could hire people to build AIs that identify and match people, so can anyone else.
And this kind of database, I imagine, already exists at the NSA or whatever intelligence agency China or Russia runs.
Kind of surprised they pulled this off. Facebook, Instagram, YouTube, Twitter, etc. all have hard rate limiting and crazy scraping prevention. I wonder how hard they had to work on their scrapers.
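Whatever else their scrapers do, they at least need throttling and retry logic to survive those defenses. A minimal sketch of the backoff part, with all constants being illustrative guesses:

```python
# Minimal retry-with-exponential-backoff fetch; constants are illustrative.
import time
import requests

def fetch_with_backoff(url, max_retries=5, base_delay=1.0):
    """GET a URL, backing off exponentially on 429/5xx responses."""
    for attempt in range(max_retries):
        resp = requests.get(url, timeout=10)
        if resp.status_code == 429 or resp.status_code >= 500:
            time.sleep(base_delay * (2 ** attempt))  # wait 1s, 2s, 4s, ...
            continue
        resp.raise_for_status()
        return resp.text
    raise RuntimeError(f"gave up on {url} after {max_retries} attempts")
```

At their claimed scale you'd also expect distributed workers and IP rotation, but that's speculation; none of the reporting says how they actually did it.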
There’s going to be legislation. That’s more or less a given at this point. The question is: can we actually get good legislation? My fear is that we’ll get something more along the lines of the EU cookie directive than a full-on GDPR.
So instead of worrying that your biometric data could be captured anywhere and used for anything without your knowledge, you now have to click “I accept” before engaging in any activity with a business or government entity, which then establishes your consent to your biometric data being captured anywhere and used for anything without your knowledge.
> Clearview AI, the facial recognition company that claims to have amassed a database of more than 3 billion photos scraped from Facebook, YouTube, and millions of other websites [...]
Let me guess: neither FB nor YouTube will go after them legally, because it's probably a waste of money anyway, since their clients did not experience immediate/direct harm?
If you keep reading: Twitter, another of Clearview's scrape targets, did send them a cease-and-desist letter, asking them to “cease scraping and delete all data collected from Twitter”. Not that that guarantees Twitter will actually follow through and file a lawsuit if Clearview doesn't comply.
Hot take: fighting this is a losing battle, because you’re fighting against technology. Can you think of a single technology that society has successfully rejected? It’s pretty hard.
In general, once the tech to do X exists, then X will find a way to exist.
Instead of crucifying Clearview, it might be better to open a dialogue about regulation in this space. But even that has problems: Bitcoin, for instance, was eventually accepted no matter what regulators did. Yet ICO regulation shows that it's still worth trying.
The point is, this kind of tech can be helpful. Feynman once cited a Buddhist proverb along the lines of "To every man is given the key to the gates of heaven; the same key opens the gates of hell." It seems relevant here.
Fun fact: the 1995 Protocol on Blinding Laser Weapons is "the first time since 1868, when the use of exploding bullets was banned, that a weapon of military interest has been banned before its use on the battlefield and before a stream of victims gave visible proof of its tragic effects."
Now that high-energy lasers are finally becoming practical for military use, we will start seeing blinded casualties in major conflicts. Lasers will be used to attack vehicles, and if one of the occupants happens to be looking toward the emitter, then it's tough luck.
The ban on laser blinding weapons was precipitated by the 1995 public announcement of a practical laser blinding weapon created by the Chinese arms manufacturer Norinco: https://en.wikipedia.org/wiki/ZM-87
Is eugenics a technology, or just the targeted application of one? We openly run breeding programs with plants and animals; we just don't like doing it to humans.
I mean, I'm pretty interested in doing embryo selection on my future children. Eugenics on my own family, if you will. I agree that it's wrong to apply eugenics to others without their consent!
Crucifying Clearview is HOW we open a dialogue about regulation. We have to attack the clearest misuses of the technology instead of the technology in the abstract.
As always, the public debate (and witch hunt) focuses on the tool, the symptom, the observation, and never on the underlying cause.
How are these technologies being abused, and why are the perpetrators allowed to do so?
The surveillance tech already exists, as a component of personalised advertising, because it makes lots of money. It's all any government needs as well. It has flown under the radar as far as the masses are concerned, because most people have absolutely no comprehension of how that kind of thing can work, and they will ignore anyone talking about it because it's an SEP (somebody else's problem).
But facial recognition is something which the masses can recognise, if not understand, and therefore react to as a threat.
Personally, I think that as long as we haven't solved the first problem, we should avoid piling more possible problems on top of it, so roadblocking facial recognition tech is not something I would object to. But it does miss the point by rather a large margin.
I don't believe that it'll be as 'awareness-raising' as some people seem to believe either, for the reason implied above: people only care about the comprehensible threat.
I don't believe this tech is going to be a net good to society, but I do agree that (like the fight against piracy) technology is just going to win here.
I don't have a lot of hope that in the next 50 years we'll be able to prevent the ubiquity of (perhaps 3D-printed and untraceable) drones broadcasting HD video to AI-curated repositories that track your identity and every word (and bowel movement), if only for the lulz. Your only hope is that you'll be able to afford a home (and work in an office) with all the good countermeasures.
There are lots of examples. You are right that technology itself is usually value-neutral; what matters is how it is applied.
But as you suggest, the usual answer to this has always been regulation, which is only effective with teeth.
Nobody can stop you from learning how to build an RF transmitter, or conversely a jammer. But the FCC (this is US-specific) has very particular ideas about what you are and are not allowed to do in that space, and if you step too far out of bounds you will be shut down.
The trick is to make the penalties both likely and severe enough to outweigh any benefit you'd see. That will shut down most (ab)uses of the technology. Note that regulation can include "you cannot do this, as an individual or corporate entity, at all".
There are legal/regulatory limits placed on technologies that can
- create radio signals and/or interference
- reproduce some things (e.g. financial instruments)
- produce certain types of substances (e.g. nuclear, but also chemical)
- be used primarily for producing restricted things (e.g. some arms and armament)
- explode particularly well
- etc.
In some cases this will limit your ability to purchase or create them; in some cases, just to use them. Many of these things you can do as a corporation, but only if you subject yourself to scrutiny that you are playing by the rules, i.e. you will be audited.
There is no way facial recognition is going away. Sensationalist outlets like Buzzfeed were, in their day, also against Caller ID, and against "databases." Those never went away either. Turning faces into polygons exists and will never unexist.
Companies like this are evil, plain and simple. If your business model involves distributing and processing personal information you had no consent to take, and sharing it with third parties, then that's evil.
Cynical take: I'm pretty sure we've all agreed to ToSes that have allowed our faces to end up online. Clearview AI is just doing what a detective with an eye for faces and unlimited time would do.
To be clear, I'm really concerned about the way the surveillance state is creeping in. But it kind of feels like the cat is out of the bag, and moralizing like this is useless.
> Clearview AI is just doing what a detective with an eye for faces and unlimited time would do.
This has always been a bullshit argument for this type of automation, whether it be facial recognition, license plate readers, recording phone conversations, whatever.
It pretends that scale doesn't matter, when in reality scale and network effects are the most important features of many of these "advances".
By "bullshit argument" I mean specifically that this argument is not presented in good faith. Rather it is used to obfuscate or avoid engaging with the issues at hand. At least, that is how it comes from corporate entities etc. From individuals I expect it is more a category error.
A few days ago there was a front-page post about Clearview AI where the original post title was, as the rules on this forum require, the same as the article's title:
"The Secretive Company That Might End Privacy as We Know It"
A few hours in the post title was changed to:
"Clearview AI helps law enforcement match photos of people to their online images"
The difference to me was night and day, and super sketchy. Why was this change made? Is Y Combinator invested in Clearview?
Well, if the headline is clickbait, the policy here is to change it to something that reflects the actual content of the article, rather than just parroting the clickbait. Whether that is what was done in the case of this particular article, I can't say.