
Since the site seems to be down, I'll mention that there was an article on Medium covering this artist's work a few days ago: https://medium.com/matter/this-is-what-your-face-looks-like-...

Seems like what he did was take UMass's Labeled Faces in the Wild (LFW) dataset, combine all the faces together, and generate a composite rendering. Hardly a representation of "What your face looks like to Facebook".

Articles like this do little to advance the discussion of the real concerns around facial recognition; they mostly scare folks into an Orwellian vision of the future a la Hollywood and Minority Report.

In addition to the work the NTIA is doing on privacy standards for facial recognition applications (http://www.ntia.doc.gov/other-publication/2014/privacy-multi...), there are a lot of companies using the technology for good, but it is very difficult for those kinds of positive stories to get as much press and attention.

For example, my own company (http://www.kairos.com) is working with some very brave and passionate guys at HelpingFaceless.com, who are using facial recognition to help combat the growing problem of child trafficking and enslavement in India.

See more about their story here: http://social.yourstory.com/2014/07/helping-faceless/




That's not what I did.

I used a genetic algorithm with facial detection/recognition algorithms as fitness functions.

Recognition algorithms like Eigenfaces and Fisherfaces will reproduce one individual.
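(For readers unfamiliar with the technique: an Eigenfaces-style recognizer is just PCA on aligned face images, with recognition done by nearest neighbour in the reduced coefficient space. The sketch below is illustrative only, not the artist's code, and uses random noise in place of real LFW face crops.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "face" dataset: 100 flattened 16x16 images. Random noise here;
# in practice these would be aligned grayscale face crops (e.g. LFW).
faces = rng.random((100, 256))

# Eigenfaces: PCA on mean-centred images, computed via SVD.
mean_face = faces.mean(axis=0)
centred = faces - mean_face
_, _, vt = np.linalg.svd(centred, full_matrices=False)
eigenfaces = vt[:20]  # top 20 principal components ("eigenfaces")

def project(img):
    """Represent an image by its eigenface coefficients."""
    return eigenfaces @ (img - mean_face)

def distance(a, b):
    # Recognition = nearest neighbour in coefficient space.
    return np.linalg.norm(project(a) - project(b))
```

Because such a recognizer models one person's appearance as a point in coefficient space, using its match score as a GA fitness function drives the population toward that one individual.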

More general detection algorithms, like YEF realtime object detection, used as a fitness function will result in more general faces that represent some of the learned features from the LFW database. I'm not just combining images; there's a lot of machine learning involved.
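(To make the "genetic algorithm with a detector as fitness function" idea concrete, here is a minimal sketch. The fitness function below is a stand-in: it rewards brightness in a central region purely for illustration, where the artist's setup would use a face detector's confidence score. Population size, mutation rate, and generations are arbitrary choices, not the project's parameters.)

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(image):
    """Placeholder fitness. In the real project this would be a face
    detector's score; here we just reward a bright central region."""
    h, w = image.shape
    return image[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4].mean()

def evolve(pop_size=20, shape=(32, 32), generations=200, mutation=0.1):
    # Start from random noise images with pixel values in [0, 1].
    pop = rng.random((pop_size, *shape))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Selection: keep the top half as parents.
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        # Crossover: average random parent pairs; then mutate with noise.
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        children = (parents[idx[:, 0]] + parents[idx[:, 1]]) / 2
        children += mutation * rng.standard_normal(children.shape)
        pop = np.clip(children, 0.0, 1.0)
    return max(pop, key=fitness)

best = evolve()
```

The point of the exercise: the evolved image maximizes whatever the fitness function measures, so with a face detector as fitness you recover an image of what the detector has learned to call a face.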

Recheck my site in two weeks or so; I'll be posting my MS thesis on the subject, with more technical details.

(Wrote this on my phone; excuse the brevity.)


You might want to summarize that at the top of your post, then, if you're going to submit it to HN. When it contains such nuggets as:

These masks are shadows of human beings as seen by the minds-eye of the machine-organism. These DATA-MASKS are animistic deities brought out of the algorithmic-spirit-world of the machine and into our material world, ready to tell us their secrets, or warn us of what’s to come.

It's not surprising that HN viewers are confused about what you actually did.

In any case, seems like a cool project, now that I know what it is :-).


Thanks, but btw, someone else submitted this to HN. I hadn't planned on fully publishing the project until a few weeks from now, when my thesis is due, but it was posted to medium.com, so I quickly put some documentation together.

There are more technical details at the bottom of that page, in the form of diagrams.


For some reason, I thought that green usernames (like yours) meant you were the poster. Sorry! My mistake; you can't control when someone else posts your work.


Green usernames indicate new, or "green," accounts.


"Think of the children!" Whether it's for backdoors to your data in the cloud or facial recognition, protecting the children is ALWAYS what governments claim they need the tech for. Their track record is terrible, almost always using it for control and surveillance.

I want to see cameras on every police officer in every city before even considering giving them more surveillance technology.


I don't think you read the article very far. Yes, he started with an averaged face (which then matched nothing, by the way). Then he added (I'll simplify for you) random dots to it until it matched something. Thus it is a reverse lookup compared to the way facial recognition is usually used. Usually you give it a face and see if it has a match. In this case you basically feed it random face-like data until you trigger it, and then you have some insight into what it is looking for.

The fact that the results look very little like human faces does provide insight most people don't have into how the algorithms work. The algorithms basically work by measuring distances between dots in the end anyway (distance between eyes, etc.), and that's very unlike how humans recognize faces, which has more depth to it.
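(A toy illustration of the "measuring distances between dots" style of matching, not any real system: reduce landmarks to scale-invariant pairwise distance ratios and compare by Euclidean distance. The landmark layout and tolerance are invented for the example.)

```python
import numpy as np

def landmark_signature(landmarks):
    """Reduce (x, y) facial landmarks to a scale-invariant vector of
    normalized pairwise distances. Landmark order is assumed fixed
    (e.g. left eye, right eye, nose, mouth)."""
    pts = np.asarray(landmarks, dtype=float)
    # All pairwise distances between landmarks.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    upper = d[np.triu_indices(len(pts), k=1)]
    return upper / upper.max()  # normalize so overall scale drops out

def is_match(sig_a, sig_b, tol=0.05):
    return np.linalg.norm(sig_a - sig_b) < tol

face = [(30, 40), (70, 40), (50, 60), (50, 85)]            # eyes, nose, mouth
same_face_scaled = [(60, 80), (140, 80), (100, 120), (100, 170)]  # 2x scale
other_face = [(30, 40), (90, 40), (60, 70), (60, 95)]      # wider-set eyes
```

A matcher like this fires on any dot pattern with the right proportions, which is why an evolved image can trigger it without looking remotely human to us.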


Bit of self promotion, but related due to the dataset:

http://deeplearning4j.org/facial-reconstruction-tutorial.htm...

It's pretty neat seeing what a "general" face looks like.



