The difference between doctors and programmers is that programmers get to build and leverage tools that are abstractions of other tools, hence you can have orders-of-magnitude differences in productivity between programmers who use the best tools and those who don't. I assume there's not as much variation in the way two different doctors carry out the same task as there is between programmers.


Actually, doctors do have order-of-magnitude tools for improving certain metrics, such as recovery and infection rates: http://www.newyorker.com/reporting/2007/12/10/071210fa_fact_...


Probably the availability of people with 5+ years of programming experience. It was 1959! :)


I don't know. At essentially every place I've contacted, I've asked about that clause. Not one actually expected to find a candidate who met it, but they all listed it.

5+ years seems to me to be a strange expectation. That would mean either you've had multiple failed jobs, which might make you undesirable, or you've been somewhere for 5+ years. If you've been there for 5+ years, what's motivating you to leave? Where do people expect to find these vast pools of highly-skilled jobless people who have experience with <software stack X> in <field which employs a couple thousand people nationally> within <narrow time window>? That they never find any seems to underscore how irrational the 'requirement' is in the first place, but I see it everywhere.


I remember reading a job ad back in 2003 that wanted someone with 5+ years of C# and .NET experience. I assume the position was filled before 2006. (It's possible but unlikely that they were solely looking to poach Microsoft employees.)


This happens a lot. HR ends up being in charge and comes up with requirements that don't make much of any sense.


I think it breaks down like this: no experience = SDE 1

2 years = SDE 2

5+ = SDE 3, regardless of language.


Basically, in a client-server relationship, use persistent buffering on the client side so that the client can tolerate server downtime.

I like the simplicity of this approach. It's best to keep the low-level stuff as simple as possible when building distributed systems. It will get complicated soon enough at the higher levels.
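
For concreteness, here's a minimal sketch in Java of the idea as I understand it (my own illustration; the Server interface is hypothetical and stands in for whatever RPC the client actually makes). Messages are journaled to local disk before any delivery attempt, so they survive client restarts as well as server downtime, and are only dropped once delivery succeeds:

    // A rough sketch, not production code: the Server interface below is
    // hypothetical and stands in for whatever call the client makes.
    import java.io.IOException;
    import java.nio.file.*;
    import java.util.*;

    public class BufferedClient {
        private final Path journal;

        public BufferedClient(Path journal) {
            this.journal = journal;
        }

        // Durably record the message before any delivery attempt.
        public void send(String message) throws IOException {
            Files.write(journal, List.of(message),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }

        // Called periodically: try to deliver everything in the journal.
        // If the server is down, the journal stays intact for next time.
        // Note this gives at-least-once delivery: a crash between a
        // successful send and the journal rewrite causes a redelivery.
        public void flush(Server server) throws IOException {
            if (!Files.exists(journal)) return;
            List<String> undelivered = new ArrayList<>();
            for (String msg : Files.readAllLines(journal)) {
                try {
                    server.deliver(msg);  // may throw while the server is down
                } catch (IOException e) {
                    undelivered.add(msg); // keep it for the next attempt
                }
            }
            Files.write(journal, undelivered, StandardOpenOption.CREATE,
                    StandardOpenOption.TRUNCATE_EXISTING);
        }

        public interface Server {
            void deliver(String message) throws IOException;
        }
    }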


If an ISP starts openly inspecting content like this, will it become liable for all content that passes through its network?

Edit: I'm surprised to see this downvoted. Care to comment why?


ISPs aren't common carriers.


Just curious, what specifically is the indicator that a person doesn't care for the company? I didn't get that impression from reading the GP.

Is it not paying attention to the news cycle? Or knowing that the company is on a bad trajectory but still working there? Surely it's not the focus on work-life balance.

I could see myself working at Yahoo. I bet they still have interesting problems to work on. Yahoo Research does awesome stuff with computational advertising. Hypothetically speaking, would this opinion disqualify me from your hiring consideration?


It's not this particular issue. I don't care all that much about patents. It's mostly that Yahoo has been a zombie corporation for a long while and shows key symptoms of being unable to foster creativity or innovation.

There may be interesting things going on inside Yahoo, but I'm not aware of them. Every single thing that I found interesting about Yahoo has withered or completely died in the last 5+ years.


Is C++ usage on the decline? I've used a lot of different languages, but I've never had the occasion to learn C++. I work in web search, and we tend to use managed languages. The argument is that Java or C# is much easier to write and maintain, and that we can make up for performance shortcomings by scaling out.


> Is C++ usage on the decline?

No, it is not. C++11 is a modern programming language that lets you harness every bit of power from your computer.

From a productivity point of view, I think Java and C# are about as verbose as modern C++. In the end it really depends on what kind of application you develop; for web development, you can be a few orders of magnitude more productive than in C++ if you use Python or Ruby. For number crunching, nothing beats Fortran.

You should choose your tools (programming language, compiler, OS, etc.) based on what you know and on the application domain.


I wasn't suggesting that C++ is somehow outdated or inferior. I'm just wondering whether C++'s share of usage is declining relative to other languages.

I'm trying to estimate the chance that not knowing C++ will pose a problem later in my career. I normally pick up languages quickly, but the C++ learning curve seems a bit steeper, both in terms of language nuances and libraries. So my normal approach of "wait until I need it" may not work that well.


This is a difficult question. If the number of C++ developers is growing, but at a slower rate than the number of total developers, is that a decline? (If, say, the C++ population grows 5% a year while developers overall grow 15%, C++'s share shrinks even as its absolute numbers rise.)


What are the classes of problems that interest you? What is the standing of C++ in the communities involved in solving these problems? For example, if you are interested in HPC, graphics programming or systems programming, and would like to pursue these areas professionally, then not knowing C++ may pose a problem.


I work in HPC. Every application I've seen has been written in C++, with the exception of proof-of-concept Matlab code hacked together by the researchers. That code is then contracted out to be rewritten in C++.


Pangool actually seems like a generalization of Hadoop. This doesn't necessarily make it more complex. If a problem maps exactly to the Hadoop API, then it should also map exactly to the Pangool API by setting m=2 (in the extended map reduce model described at http://www.datasalt.com/2012/02/tuple-mapreduce-beyond-the-c...).
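
To make the m=2 point concrete, here's a toy sketch in Java of the grouping step (my own illustration, not Pangool's actual API, and the field names are made up): mappers emit m-field tuples, and the framework groups them by their first g fields before reducing. Classic Hadoop-style MapReduce falls out as the special case m=2, g=1, i.e. (key, value) pairs grouped by key.

    // A toy illustration of tuple MapReduce's shuffle step; not
    // Pangool's real API.
    import java.util.*;
    import java.util.stream.*;

    public class TupleShuffle {
        // Group m-field tuples by their first g fields.
        static Map<List<String>, List<List<String>>> groupBy(
                List<List<String>> tuples, int g) {
            return tuples.stream()
                    .collect(Collectors.groupingBy(t -> t.subList(0, g)));
        }

        public static void main(String[] args) {
            // m=2, g=1: ordinary (key, value) pairs, as in word count.
            List<List<String>> pairs = List.of(
                    List.of("apple", "1"),
                    List.of("pear", "1"),
                    List.of("apple", "1"));
            // {[pear]=[[pear, 1]], [apple]=[[apple, 1], [apple, 1]]}
            // (map iteration order may vary)
            System.out.println(groupBy(pairs, 1));

            // m=3, g=2: e.g. (url, date, count) grouped by (url, date),
            // which plain key-value MapReduce would force you to encode
            // into a composite key by hand.
            List<List<String>> visits = List.of(
                    List.of("a.com", "2012-03-01", "5"),
                    List.of("a.com", "2012-03-01", "2"),
                    List.of("a.com", "2012-03-02", "7"));
            System.out.println(groupBy(visits, 2));
        }
    }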


I agree with your first sentence, but disagree with the second. That you can find an exact mapping does not prevent the underlying API from being more complex than what you need. That you had to realize "Oh, m=2" is more complexity.

I'm not arguing this is a terrible thing. In fact, I think this is an acceptable level of additional complexity for the power it buys you. But if we're going to make an honest evaluation of the trade-offs, I think we must mention this.

It may be relevant to the discussion to point out that I work on a tuple-based streaming system. Product: http://www-01.ibm.com/software/data/infosphere/streams/ Academic: http://dl.acm.org/citation.cfm?id=1890754.1890761, http://dl.acm.org/citation.cfm?id=1645953.1646061


The right balance between technical and pragmatic is project specific. Here are a couple of heuristics I use to find it.

1) Minimize the overall time investment required to have an acceptable solution to the problem. Time investment includes maintenance and debugging.

2) Make sure the marginal benefit of additional time spent in one area of the project exceeds the opportunity cost of focusing on other areas. E.g., it's good to spend a lot of time on getting the overall architecture right, but it matters far less to optimize the business logic (for one thing, it tends to change more often; for another, it's hardly ever the performance bottleneck).


In software engineering the "thinking beforehand" approach is probably riskier than the "think as you go" approach. For example, often the requirements aren't well-defined when the project starts.

Contrast this with physical-stuff engineering (which I think is what you're referring to by saying "engineering background"), where the requirements tend to be better-defined and the cost of experimentation/refactoring is a lot higher.


Well, electrical engineering turned out to be 50% software (much to my dismay), so I'd say it sat right between tangible and intangible engineering.


This resonates:

> The most important characteristic of a suffering-oriented programmer is a relentless focus on refactoring. This is critical to prevent accidental complexity from sabotaging the codebase.

I can't tell you how many times I've seen accidental complexity creep in because someone adds a new feature without taking the time to refactor to the simplest set of abstractions. But - and here's the flaw in the approach - you have to be an expert in the code to produce such a set of abstractions, which is a potential bottleneck when your codebase is big enough to require multiple developers. Not everyone has the time/capability to be an expert.


What's a good solution then? Have code reviews where the newer developers have to get their changes approved by older developers?


"Flaw" seems strong. If you have a small enough and strong enough team, the approach makes a lot of sense to me. I wonder how big the original Storm team was.

