I used images of a conference audience because it's the best way I could think of to get a sample of what the hacker community looks like. If you can think of a better one, let me know.
(The point of that link, for anyone who didn't get it, is that the lack of diversity Eric Ries perceives in the output of our filter is also present in the input, which implies it's not caused by bias in the filter.)
I think there is an error in logic here (and I think Ries touched on it): a bias in the input of a filter does not preclude a bias in the filter itself.
A bias in the input may exist because of a bias -- perceived or actual -- in the filter.
Sure, that's possible, but first we have to look at the skewed input before we look at any problems in later selection stages. Hell, I even see this working shlubbo programming jobs here in Minneapolis: there simply are not many female programmers, or African-American programmers. Until we look at problems in K-12 and college, I don't see that it's meaningful to talk about racism and sexism in the Valley.
Hmm. I'm not sure I agree, but I'm still thinking about it. It smells like this approach presupposes that the problem of diversity in tech -- assuming that such a problem exists, which isn't universally agreed-upon -- is rooted in problems in education.
Again, I think one of the points of the article is that post-input selection problems can cause selection problems in the input.
As a thought experiment: if a particular minority believed that, even after following the same rules as their majority peers, they would still be selected against, we might expect that, as a consequence, they would actually select against themselves.
(For example: I'm terrible at casual party-like get-togethers, so I tend not to go to them, which in turn prevents me from getting any better at them; I'm selecting against myself in this situation because of an expected problem. Likewise, if I were a woman and an entrepreneur, I might not attend certain events because I believed in advance that I wouldn't do well there.)
I'm not at all arguing that this is actually the case. But, I don't think that limiting ourselves to looking at problems in education is taking a complete enough approach to the overall problem.
This "error in logic" is also known as "Occam's razor."
Entities should not be multiplied beyond necessity. A sufficient explanation for the fact that Y Combinator winners are mostly white is the fact that YC applicants are mostly white -- as shown by the similar Ruby-enthusiast pool, which has clearly been filtered by the same phenomenon (i.e., the vast racist conspiracy, known to some as "The Plan").
Therefore, it is unnecessary to postulate the additional cause: that YC itself is in on "The Plan." Alas, when tinfoil hats go mainstream, Occam himself is powerless.
Why not describe YC as more of a nonlinear amplifier? In this case nonlinear effects (feedback loops, etc.) may have to be considered. (For example, potential applicants see that they are very unlike successful YC applicants, and so decide not to apply.) A related thought: what if trustworthiness is inherently genetic? We know that YC uses videos to screen for trustworthiness, and research suggests that genetically influenced trustworthiness can be read from behavioral and physical cues. http://blog.united-academics.org/2710/do-you-have-the-trustw... http://www.ncbi.nlm.nih.gov/pubmed/19344725
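The self-selection feedback loop described above can be sketched in a few lines. This is a toy model with made-up numbers, not data about YC: assume a perfectly unbiased filter (the accepted batch exactly mirrors the applicant pool), and assume that a minority group's application rate shrinks each round in proportion to how unlike the previous batch they looked. Even with zero filter bias, the minority's share of each batch ratchets downward.

```python
def simulate(rounds=10, pool_minority_share=0.2, sensitivity=0.5):
    """Toy feedback-loop model (hypothetical parameters).

    Each round the filter is unbiased: the accepted batch mirrors the
    applicant pool. But the minority's relative application rate decays
    in proportion to how under-represented they see themselves in the
    previous batch, so the batch share shrinks round over round.
    Returns the minority's share of each accepted batch.
    """
    apply_rate = 1.0  # minority application rate relative to the majority
    history = []
    for _ in range(rounds):
        # Applicant pool: majority applies at rate 1, minority at apply_rate.
        minority_applicants = pool_minority_share * apply_rate
        majority_applicants = 1.0 - pool_minority_share
        # Unbiased filter: batch composition equals pool composition.
        batch_share = minority_applicants / (minority_applicants + majority_applicants)
        history.append(batch_share)
        # Feedback: the more the batch looks unlike them, the fewer apply
        # next round ("I wouldn't do well there, so I won't go").
        apply_rate *= 1.0 - sensitivity * (1.0 - batch_share)
    return history
```

Running it, the minority batch share starts at the pool share (20%) and declines monotonically, which is the point: a biased-looking output can be produced by an unbiased filter plus expectation-driven input, so the input skew alone doesn't settle where the bias lives.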
Another point worth mentioning: if you feel you already know what the hacker community looks like, you may fail to recognize hackers who look or act differently.