
Imagine that you run a business where it is important to properly check the ID of your customer. You need to know two things: that the photographic ID matches the person, and that a real living person is present, not just some scammer who stole the ID.

In real life this is a simple task. You look at the photo on the ID, then you look at the person and check that they match. You would also immediately notice if something is weird, like they are wearing a lifelike mask, or they are a plastic doll.

Why does this not work for businesses then? There can be two reasons: either they want to check the customer's ID remotely over the internet, or the customer is present in their office but the business does not trust its own employees to do the check.

Why would a business not trust its own employees? Because the company's employees are in the best position to perpetrate fraud against the business. Clerks often receive direct compensation or a bonus based on how many new customers they sign up, and if not properly checked this can incentivise employees to invent fake customers.

So you want to check the ID of a person in some way you can conduct remotely, without trusting anyone physically present. Simple! You ask for an image of the ID and a photo of the customer, then a remote employee or a machine learning model can decide if they match up.
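The machine-learning side of that decision usually boils down to comparing face embeddings. Here is a minimal sketch of the final matching step, assuming some model has already turned the ID photo and the selfie into embedding vectors; `same_person`, the vectors, and the threshold value are all illustrative, not any particular vendor's API:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def same_person(id_embedding, selfie_embedding, threshold=0.8):
    """Accept the match when the two face embeddings are close enough.
    The threshold trades false accepts against false rejects and would
    be tuned on labelled data in a real system."""
    return cosine_similarity(id_embedding, selfie_embedding) >= threshold
```

The whole "does the ID match the person" question collapses into a single threshold comparison; everything hard lives inside the embedding model that produces the vectors.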

But there is a problem with that: anyone sophisticated enough to scam you with a fake ID will also be able to give you a matching photo. Uh oh.

How do you solve this? You ask for the ID as before, but instead of a photo you ask for a video of the customer. Maybe you even flash the phone's screen with random colours while recording the video and ask the customer to read out randomly selected digits. This way you can be reasonably sure the video is not some pre-recorded footage or a plastic doll. This is what is referred to as "app-based liveness verification".
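The key property of such a challenge is unpredictability: it must be generated server-side right before recording, so no pre-made video can satisfy it. A minimal sketch of that generate-and-verify loop, with all function names and the colour list being hypothetical choices of mine rather than any real product's protocol:

```python
import secrets

# Colours the app would flash on the phone's screen during recording.
COLOURS = ["red", "green", "blue", "yellow", "magenta", "cyan"]

def make_challenge(n_colours=4, n_digits=6):
    """Server generates an unpredictable challenge just before the
    video is recorded, and stores it to check the response against."""
    colours = [secrets.choice(COLOURS) for _ in range(n_colours)]
    digits = "".join(secrets.choice("0123456789") for _ in range(n_digits))
    return {"colours": colours, "digits": digits}

def verify_response(challenge, spoken_digits, observed_colours):
    """Naive server-side check: the digits the customer read out and
    the colour reflections detected in the video must match the issued
    challenge exactly. A real system would extract both from the video
    with speech and image models, with tolerance for noise."""
    return (spoken_digits == challenge["digits"]
            and observed_colours == challenge["colours"])
```

Because the challenge did not exist before the session started, any footage recorded earlier cannot contain the right digits or colour reflections.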

Now of course you might notice that this is an arms race. You make a better verification tool, and the scammers make a better scam. I bet there is already someone out there training a neural network to create an animated deepfake that is convincing, reflects the flashing colours appropriately, and can read out the digits too. Likewise there is someone working on detecting exactly that. What matters is that the business keeps the cost of deception high enough that it is not worth doing at scale.
