Hacker News | chandler's comments

Consider smartness as "what you can do with what you know," and experience as "what you know"--then it follows that having more experience can make up for not being as clever.

    Able to make good choices = Intellect * Experience
Moreover, employers can _try_ to assess intelligence with whiteboard interviews...but the easier factor to evaluate is experience. It's right there on the resume!


Yeah, except 10 years at certain companies could teach you less than 6 months to a year at one company that's actually doing it right.

I hesitate to prescribe an answer to the "how should employers find good employees" question, because it's so difficult, but I imagine the ideal hiring process would:

- Let candidates choose how they want to be tested (take-home project, online code test, whiteboard, whatever else)

- Ask better questions at the abstraction level that would reveal experience. For example, if you want to know whether someone has experience, give them an architectural diagram (or process diagram) of your current project/workflow/whatever and ask how they would improve it (and whether they've ever been through the steps they suggest themselves, and how they would roll them out).

There was an excellent post on HN a while back detailing what a specific company (whose job is doing interviews--I can't for the life of me remember which company it was) learned from doing thousands and thousands of interviews.

Also, I'm not super good at math, but in the equation you posted, wouldn't the intellect variable be pretty much interchangeable with experience, so you could make up for one with the other? Or maybe you're implying that the scales for each multiplier are different (like intelligence might only go 1 to 10 but experience might go 1 to 100)? I agree with what you're getting at--basically that someone who has seen a lot but isn't as clever will often make better choices than someone who's very clever but completely green.


> Yeah, except 10 years at certain companies could teach you less than 6 months to a year at one company that's actually doing it right.

Is it common for programmers who have been on a job to not realize the ways in which their software sucks? Knowing what to avoid, and the consequences of doing things wrong, is an important part of experience; additionally, maintaining a bad system gives insight into mitigating problems.

That said, I do agree that experience isn't just a passive function of time (as in "interest earned"). Instead, it's more like a landscape that needs to be explored (and to your point, some companies will allow a programmer to explore more productive areas).

> ...the equation you posted wouldn't the intellect variable be pretty much equal to experience, and you could make up for one with the other?

I suppose the answer to your question would depend on just how much variance you place on intellect. Is it:

- 0.0 (rock) to 1.0 (cleverest human on earth)

- 0.0 (rock) to 1.0 (cleverest intelligence across time and space)

Either way, I wouldn't take the equation too seriously--it was just a model to show a trend :/
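Just to show the trend concretely, here's a toy sketch of the multiplicative model (the scales and numbers are entirely invented for illustration):

```python
# Toy sketch of "Able to make good choices = Intellect * Experience".
# Invented scales: both factors run from 0 to 10.

def decision_quality(intellect: float, experience: float) -> float:
    """Multiplicative model: either factor can compensate for the other."""
    return intellect * experience

# A very clever but green programmer...
green_genius = decision_quality(intellect=9, experience=1)

# ...versus a merely decent programmer with lots of experience.
seasoned = decision_quality(intellect=4, experience=7)

assert seasoned > green_genius  # here, experience makes up for cleverness
```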


This is usually delineated by wisdom vs intelligence. Intelligence is raw horsepower, and wisdom is the ability to make good decisions.

In the context of older developers, I don't think you can really assume wisdom has been an outcome of all of those years of experience. Wisdom grows with a certain affinity for introspection and self-correction, and not everyone has those traits.


Agreed. I remember being in my early 20s, watching my more experienced colleagues, and realizing this. I had had a different (incorrect) mental model: I had thought intellect was as good as experience, so Ability = Intellect + Experience. I was wrong.


Agreed--being able to define functions for control flow & data transformation seemed functional enough (compared to having to rely on primitive reserved words).

Of course, even then, C was functional by that definition (as long as you could remember where the parens & asterisk went when defining your function!)...so what do I know.

On the other hand, if you have the attention span and interest, Li Haoyi has an interesting (and, in its conclusion, useful) take on this topic:

http://www.lihaoyi.com/post/WhatsFunctionalProgrammingAllAbo...


A simple thing is to figure out whether you can restrict the runtime environment during development (available memory, clock speed, network speed, etc.), and develop to that. The nice side effect is that there's more of your computer available for IDEs and browser tabs.

Secondly, abstraction doesn't necessarily track with efficiency/inefficiency.


Well, Publix is employee owned, does that count?

Comparing the Wikipedia sidebars shows Publix with more stores, more revenue, and over twice the number of employees.

https://en.wikipedia.org/wiki/Publix https://en.wikipedia.org/wiki/Whole_Foods_Market

I'm not going near Amazon, though.


Employee-owned means the owners retired by selling the company through an employee stock ownership plan. Publix, Woodman's, etc. are good places to work, but they aren't communes or cooperatives.


Fun fact: Publix is the largest employee-owned grocery chain in the US.


> ...but why did they do it like this?

...because people are smart. People have _always_ been smart, it's one of the defining attributes of humanity.

It strikes me as odd (not your comment specifically, but the general color accompanying these kinds of articles)--there's a kind of ground assumption that by looking into the past, one sees nothing but a gradually descending IQ.


> people are smart. People have _always_ been smart

I don't disagree, but I think there's more to it than just that.

Our raw mental capabilities may have always been the same, but how smart we are also depends on our learning. Learning gives us extra leverage. If you have two smart people, and one of them was trapped alone all their life on a desert island while the other learnt a lot about (say) maths and science, then the latter could in practical terms have greater intellectual capabilities.

Over the centuries we've made great gains in mathematical tools, scientific knowledge, in democratizing education and in disseminating knowledge. And this means over the centuries we've (as a species) obtained more leverage that we can apply to our raw mental capabilities, giving us (overall) greater intellectual capabilities.


> Our raw mental capabilities may have always been the same, but how smart we are also depends on our learning.

FWIW, I tend to equate "smart" with "cleverness," as a separate measure distinct from "experience," for the same reason you describe--so what I usually go by is something along the lines of:

Cleverness is a measure of "what can you do with what you have," whereas experience is a measure of "what do you have?"

> Over the centuries we've made great gains in mathematical tools, scientific knowledge, in democratizing education and in disseminating knowledge. And this means over the centuries we've (as a species) obtained more leverage that we can apply to our raw mental capabilities, giving us (overall) greater intellectual capabilities.

I don't think we've gained greater intellectual capabilities--our intellectual capabilities are the same; we just operate in a completely different mental environment than our predecessors did.

Moreover, having a different view of the world allows for different connections to be made, and different potentials to be expressed (irrespective of one's individual level of cleverness).

So, for example, by placing a priority on stories & views that encourage greater investigation of the physical world, we get to where we are today. And we can teach the next generation slightly different stories that optimize for different kinds of usefulness.

To bring it to the HN contingent--if I learn a new programming language, I've gained experience in different ideas and operate in a different mental landscape. But I'm not smarter afterwards, and I wasn't dumber before.


> looking into the past, one sees nothing but a gradually descending IQ

Actually, it quite literally might be so: https://en.wikipedia.org/wiki/Flynn_effect

Though it's questionable how IQ relates to actual intelligence over time.


Agree with that too. Maybe IQ is out of date? Also, the huge effect that all these new technologies must be having surely isn't going to be factored into it.


I think the parent was referring to the fact that the numbering doesn't resemble any modern definition of binary--there are 64 ways to arrange six two-state things, but the I Ching numbering doesn't have any clear 32-16-8-4-2-1 (or other set of) weights for each "bit".
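To make the contrast concrete, here's how a modern binary weighting would number six two-state lines (a sketch; the I Ching's traditional King Wen ordering does not follow this):

```python
def binary_index(lines):
    """Read six broken(0)/solid(1) lines as a 6-bit number,
    first line most significant: weights 32-16-8-4-2-1."""
    assert len(lines) == 6
    n = 0
    for bit in lines:
        n = n * 2 + bit  # shift left, append the next bit
    return n

print(binary_index([1, 1, 1, 1, 1, 1]))  # 63
print(binary_index([0, 0, 0, 0, 0, 1]))  # 1
```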


I agree. People are also arrogant. We like to look back, think "how quaint," and believe we are better than them. It's humbling to realize that we actually weren't. But it's probably not very adaptive to think that way, since believing we are better now (by falsely diminishing whatever metric of the past) probably helps us keep moving forward to create the bright future.

Hopefully without repeating the mistakes of the past tho!

Even that phrase "mistakes of the past" is telling, right? I mean, IMHO, you don't hear a similar amount of talk about "the brilliance of the past," except in a sort of quaint, dismissive way: "oh, look, plumbing in ancient Rome, weren't they sort of clever!"


I think that it's fair to say that, in the past, people were (overall) a lot more ignorant about the character of the world and the universe, and that this led them to (overall) believe a lot more incorrect things.

This is not to beat our chests in a "we're better" kind of fashion, but just to acknowledge that we have the benefits of the knowledge that people who came before us built.


As time goes on, you have more giants standing on the shoulders of other giants, and so on. Our feeble wetware is going to look pretty inferior to the digital cognitive systems of the next millennium, so we would do well to stay humble :)


> My brain doesn't "point" at something in Vi motions...

In emacs there's a mode called ace-jump. Essentially, it lets you pinpoint exactly where on screen you want to move the cursor in ~3 keypresses.

Similar to how the vimium extension for Chrome lets you jump to any link/button/control with two keypresses.

These aren't motion commands as you'd typically get in vim/emacs; instead it's a different interface to allow directly jumping to an arbitrary position (more of a warp than a movement).

http://emacsrocks.com/e10.html

Anyways, the point I wanted to make is that even some instances where the mouse is considered superior can be negated by a different UX.

So consider the idea that FPS games are more effective with a mouse. I believe this is a UX choice to reward developing mouse skills, not a fundamental limitation of the keyboard. As such, consider the following levels of keyboard FPS controls--

Option 1: Aim by directional arrows. Will be beaten out by a mouse user for sure.

Option 2: Aim by movement commands (e.g. B skips and tracks the target on the left, F skips and tracks the target on the right, etc). Might be on par? Definitely has a different learning curve than using a mouse.

Option 3: Aim by label. Every target has a letter hovering over its head, and pressing that key will aim at & track that target. I suspect this would be a much more _efficient_ control scheme than using a mouse.

Option 1 is akin to navigating text with arrows, Option 2 is akin to navigating with jump commands, and Option 3 is akin to navigating with an ace-jump-type interface.

So although #3 would be considered unfair in a typical FPS context, as a programmer I just want to reduce effort as much as possible.
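A minimal sketch of the Option 3 "aim by label" idea (everything here is hypothetical--target names, keys, the lot):

```python
# Hypothetical sketch of Option 3: every visible target gets a home-row
# label, and a single keypress snaps the aim to that target (ace-jump style).

HOME_ROW = "asdfjkl;"

def label_targets(targets):
    """Assign one home-row key per visible target."""
    return {key: t for key, t in zip(HOME_ROW, targets)}

def aim(labels, key):
    """Return the target the pressed key refers to, or None."""
    return labels.get(key)

# Three targets currently on screen:
labels = label_targets(["grunt", "sniper", "boss"])
print(aim(labels, "s"))  # sniper
```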


#3 is basically how Typing of the Dead works, and when you reach high WPM it is indeed faster than using a mouse.


AHH, TYPING OF THE DEAD! Best typing game ever (it helps that it's just a rework of an actual rail shooter). From a gameplay-simulation perspective, though, the mouse is the only method that I feel provides any relatable idea of aiming. Just being able to directly jump between targets would definitely be "more efficient," but so is an aimbot--both sort of defeat the general gameplay design.


> Just being able to directly jump between targets would definitely be "more efficient", but so is an aimbot, both sort of defeat the general gameplay design.

Of course - in the context of an FPS, where the basic expectation is for it to be "skill-based".

But this makes the notion a truly great metaphor, IMHO: The goal behind a text editor is for the user to get stuff done. "Fairness" is not a constraint; everything is allowed.

I'd even go so far as to say that if you've developed some AI that lets me quickly do what I want without getting in my way, I'm in! I don't think this will be the case anytime soon (at least in a usable manner), but having it would certainly be great.

In fact, I've already pondered whether it might be doable to use the new, cheap eye-tracking hardware one can buy nowadays (e.g. Tobii devices) to develop something like ace jumping with...


> I imagine piping in bash feels similar...

It is. Piping is akin to an untyped subset of working with Scala's collections, and there's a similar workflow between the two, e.g.:

In bash, you build up a command line with something like (as a stupid example, the unique file sizes in a directory):

    Step 1) $ ls -l
    Step 2) $ ls -l | awk '{print $5}'
    Step 3) $ ls -l | awk '$5 ~ /^[0-9]+$/ {print $5}'
    Step 4) $ ls -l | awk '$5 ~ /^[0-9]+$/ {print $5}' | sort -n
    Step 5) $ ls -l | awk '$5 ~ /^[0-9]+$/ {print $5}' | sort -n | uniq
You can see the correspondence with collections--typically building up in the editor in a similar staged manner ending with something like:

    import scala.sys.process._ // for Process

    Process("ls -l").lines     // ls -l part
      .map(_.split("\\s+"))    // awk part {
      .filter(_.length > 4)    //   ...
      .map(_.apply(4).toInt)   // }
      .sorted                  // sort -n part
      .distinct                // uniq part
The tradeoffs between the two are essentially verbosity versus data safety (perhaps not important with a throwaway inspection, but more important when working on larger programs...).

However, the building process is the same--you don't write the latter all at once, you gradually mold the pipeline until you get the output you want.
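For comparison, the same staged molding works in Python too--here's a rough sketch that queries the filesystem directly with os.scandir instead of parsing ls output:

```python
import os

def unique_file_sizes(path="."):
    """Unique sizes of the regular files in a directory, ascending --
    roughly the ls | awk | sort | uniq pipeline in one expression."""
    sizes = (e.stat().st_size for e in os.scandir(path) if e.is_file())
    return sorted(set(sizes))

print(unique_file_sizes())
```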


Yeah, but where are you going to get the skills of interacting with strangers?

Eliminating baseball practice doesn't seem like a way to encourage more games :/


You're going to get practice where you seek it out--when you're open to and ready for interacting with other people.

I can go to baseball practice, work on my game, and go home when it's over. Or I can ask if someone wants to stay and go for drinks after. Everyone's got the choice of what they want to do. If I love the game but not the people then I'll maybe hang out with a different group instead, or with my partner.

The point behind eliminating these side-effect interactions is that now you can decide when and with whom to interact, rather than being forced into it when you just wanted to get something else done.


Baseball is a team game, so interaction is not a side effect, it's part of the activity.


> So while our system would be able to say "I have seen person 1234 at locations 4,7,9,11 on dates x,y,x" we had absolutely no way of knowing who 1234 was or anything about them...

Minor nitpick, but giving someone a nickname isn't the same as anonymization.

"Hey Bob, thanks for logging on. Did you know we've been calling you 1234 these past five years!"

When a passive recognition system _uniquely_ tracks & identifies a person, it just takes time before that gets cross-referenced.

(different story if the data gets aggregated, or you scrub the uid completely after some window)


>> It seems like there's a fine line between being clever and being morally bankrupt.

Actions can only be evaluated as moral/immoral/amoral when you've taken the time to define a system of morals.

It's the difference between a requirements document and an implementation--just as you can't determine the suitability of software without knowing its requirements, you can't determine the suitability of an action without knowing your morals.

Suppose you've decided you want to live in a society that respects the "don't lie, steal, or cheat" maxims (for whatever reason--I've chosen these here because they're short to state!):

Then determining what's moral/immoral/amoral becomes an application of:

1) Does this encourage me (or someone else) to intentionally misrepresent the truth?

2) Does this encourage me (or someone else) to take something that hasn't been given?

3) Does this encourage me (or someone else) to misrepresent the truth for gain?

On the other hand, consider a company's system of "maximize profit while minimizing legal liability":

1) Does this encourage me (or someone else) to maximize our company's profit?

2) Does this encourage me (or someone else) to expose our company to legal liability?

Notice that these two systems don't address the same concerns.

So the fine line exists because what's moral for the company MAY conflict with what's moral for the individual, and many of us don't like to think about /any/ ethical system (after all, we're being paid to work under the latter!).
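The "requirements vs. implementation" framing can be sketched as predicates over a proposed action (a toy model--every name and flag here is invented for illustration):

```python
# Toy model: a moral system is a set of named predicates over an action,
# and evaluation is just checking each one.

def violates(system, action):
    """Names of the rules in `system` that the action runs afoul of."""
    return [name for name, check in system.items() if check(action)]

personal = {
    "don't lie":   lambda a: a.get("misrepresents_truth", False),
    "don't steal": lambda a: a.get("takes_ungiven", False),
    "don't cheat": lambda a: a.get("misrepresents_for_gain", False),
}

corporate = {
    "maximize profit": lambda a: a.get("reduces_profit", False),
    "avoid liability": lambda a: a.get("adds_liability", False),
}

# An action that misrepresents the truth for gain: flagged by the
# personal system, invisible to the corporate one.
action = {"misrepresents_truth": True, "misrepresents_for_gain": True}
print(violates(personal, action))   # ['don't lie', "don't cheat"]
print(violates(corporate, action))  # []
```

The point of the sketch is just that the two systems check disjoint concerns, so the same action can pass one and fail the other.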

