Hacker News

Most people will go out of their way to invent reasons why a disadvantageous evaluation system is fundamentally broken. For example, people who don't do well on standardized tests (e.g. SATs) typically say "I'm bad at test-taking" or "SATs are stupid" despite overwhelming evidence that SATs are a good predictor of general intelligence.

It's worth considering the possibility that while you may be a great developer, you're not as good as you think you are with respect to the caliber of people that work at Google.

(Note -- I don't work at Google and didn't do spectacularly well on standardized tests; but after working with many algorithms whizzes over the years I've learned that I'm not nearly as good a programmer as I once thought).




If Google primarily worked on developing novel sorting algorithms, I'd agree that this is a great process for them. Heck, maybe it is a great process for them; they just do an astoundingly poor job of explaining whom they want to work at Google.

(Also, please recall that literally the only time I personally applied to Google was during my freshman year of college. This isn't a case of personal sour grapes.)

By the way, I also don't think their hiring process is fundamentally broken. Just pointing out that this is the reputation it's acquired.

Since you brought up the SAT, it's an absolutely perfect and effective system with zero flaws—which I did spectacularly well on.


> despite overwhelming evidence that SATs are a good predictor of general intelligence

I'd never heard this before. I thought that SATs and other standardized tests correlate heavily with background and race, which to me means they're not a good indicator of intelligence but rather of education.


That depends on whether or not you think intelligence is a heritable trait.


Who cares how well it does at predicting general intelligence? If we wanted to measure people's intelligence, we would give them intelligence tests. If we're going to use the SAT for college admissions, it should predict how well one would do in college, and it fails mightily at that job (a correlation coefficient of .2 against first-year grades).

http://www.fairtest.org/sat-i-faulty-instrument-predicting-c...
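For some intuition about what a correlation of .2 means (a standard statistics aside, not a figure from the linked report): under a linear model, the variance explained is the square of the correlation, so r = .2 explains only about 4% of the variance in first-year grades.

```python
# Rough intuition for a correlation coefficient of r = 0.2: the
# coefficient of determination r**2 is the fraction of variance in
# first-year grades "explained" by SAT scores under a linear model.
r = 0.2
variance_explained = r ** 2
print(round(variance_explained, 2))  # about 0.04, i.e. 4% of the variance
```

In other words, even taking the .2 figure at face value, SAT scores would leave roughly 96% of the variation in first-year grades unaccounted for.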


If the SAT were some sort of general intelligence assessment, it would be unlikely that a $500 Kaplan course could significantly increase your score (which it does).


That depends on whether you consider 30 points significant or not. [0]

[0] http://www.wsj.com/articles/SB124278685697537839


As with any other study, you need to look at the source of that data. I'd trust a study re: SATs sponsored by the "National Association for College Admission Counseling" about as much as a study about pollution sponsored by the "American Petroleum Institute".

It's a small anecdote, but my high school got a grant to do a pilot program to incorporate SAT test prep into the school program back in the 90s. IIRC, the average score went up 100 points vs. the PSAT. With the old version of the test, I went from the 1200s (80th percentile) to the 1400s (95th). Writing was an optional test then, and the test prep didn't cover it, but I was already familiar with the writing process from AP courses.

30 points IMO would represent prep focused exclusively on test strategy. For example, on tests like the SAT, answering a question wrong carries a higher penalty than not answering it at all.

If you drill on vocabulary, tune your writing to line up with the scoring methodology and are familiar with the structure of math problems asked, you're golden. But knowing those things doesn't grant you greater general intelligence.


I don't think NACAC is equivalent to the API. It's not like college counselors are the ETS—in fact, many counselors complain about how much some students focus on the SAT.

As for your anecdotal evidence, comparing results on the PSAT directly to the SAT seems flawed: I also got a much higher score on the SAT without doing any studying at all, probably because the two tests are scored and weighted entirely differently. Moreover, if this comparison was done over a year (e.g. sophomore to junior year), the results are likely even more flawed, since there's too much confounding development in that year to attribute the increase to SAT prep.

Anecdotally, I know I did much better than all of my friends who spent months studying for the SAT and drilling on vocabulary, math, and strategy. If the SAT were so game-able, they should have overtaken me.



