
Am I the only one who thinks pg's viewpoints appear to be getting more and more extreme, and in some sense rather biased, compared to his previous essays?

Zynga is definitely all about growth. It is fiercely focused on metrics, fiercely focused on growth. But as someone from the game industry, I cannot agree that this model is THE model that gives the world and everyone value. If the game industry had worked the way pg describes in the essay decades ago, we would never have had Diablo, Baldur's Gate, Grim Fandango or Minecraft. We would all be left with choices like Farmville, Monsterville, Mineville, forever and ever.

"Growth drives everything in this world."? Does it? All fads grow like wildfire too, but does that drive everything in the world? Or a better question would be: should we allow it to?


I don't think he's trying to make any moral judgements. He's just making observations about what actually works within the context of our capitalist system. Capitalism has produced this period of explosive growth centered around technology in the USA and Silicon Valley in particular. And if you are trying to participate in that ecosystem, then you should understand what he says (IMO).

There wasn't any part of the essay which says you should start a startup, or that it is a morally valuable thing to do.

I somewhat agree with you that capitalism doesn't produce optimum value for society. Zynga's maybe an example of that -- I'm sure there are worse ones. But as the saying goes, we have the worst system except for all the other ones that have been tried. For all the Zyngas there are some pretty good companies too.

Also, I think your question is essentially hypothetical or philosophical: "should we allow it to?" Who's "we"? Short of an overthrow of the US government, I think this segment of the economy will exist for a long time.

If you want to have an interesting reflection on capitalism, read "The Idea Factory", about Bell Labs. That is the other end of a spectrum -- a single company holding a monopoly for 50+ years. But it actually produced immeasurable value. It's interesting to think on which model produces more value -- a monopoly where people are free from competitive pressures, or an intensely competitive market.


You made great points, but I disagree about the morality part.

Wall Street has also produced explosive growth in our economy. It was also an ingenious system, with the participation of lots of hackers and talent. But I think most will agree now that when Wall Street operates without considering its own morality, it is, by itself, immoral.

In other words, I believe the essay's lack of reflection on the morality issue, which is definitely not a small one (e.g. the Zynga example), is what makes it biased, and partly, immoral.


OK, but you're making an observation without a solution... that kind of thing is pretty much irrelevant to people like Paul Graham and CEOs of startups, who have to make decisions about what things to do. You can call people immoral from the sidelines but it will have zero effect.

My opinion is that corporations are essentially "amoral" -- not immoral. Morality simply doesn't enter into any substantive decision. Google's founders often invoke the self-interest argument: "people don't have to trust us to be moral, because if we acted against our users, they would leave us, and we wouldn't make any money". This is what I call an amoral argument -- with no negative connotation to "amoral". PG almost invoked a version of this in footnote 8.

You might think that being amoral is equivalent to being immoral, but morality isn't as well-formed a concept as people think it is. Namely, the most common use of the concept of immorality is to label "stuff I don't like". I mean, what's wrong with millions of people playing Zynga games all day? Would people be curing cancer if they weren't playing Zynga games?

Another issue is that a person isn't moral or immoral; it's well known that the same person will act morally or immorally according to their environment. Paul Graham even said that about HN (in terms of trollish behavior). (See http://en.wikipedia.org/wiki/Fundamental_attribution_error)

In the case of Wall St, there's no possibility of it "considering its own morality". No amount of goading or convincing will make an ounce of difference. The only way I can think of is for voters to make it clear to elected officials that they won't tolerate the status quo, but so far that hasn't happened. Even after the 2008 crash.

(And btw I didn't make any assertion about morality in my original message, other than to say "I somewhat agree" about Zynga, so not sure what you are disagreeing with.)


I appreciate your response, and I fully understand the realistic angle that you have provided.

But I have to clarify that I am not calling pg immoral. I am suggesting the essay could be. People outside the game industry may not get the Zynga problem, but you can also look at, say, Groupon's controversies. "Immoral" could indeed be too strong a word, but I believe few will disagree that aggressive growth strategies have some inevitable side effects, and for this no amount of footnotes is sufficient.

But again, call me naive: "people like Paul Graham and CEOs of startups" should, contrary to what you claim, care MORE about these problems, because they can certainly afford to, and when they do, it will matter. :)


OK... well I think your point is that PG's essay is "amoral", which is true. It doesn't say anything about whether hyper-growth is a thing we should value (as human beings, not as money making machines).

Actually ALL his essays are amoral. PG is very precise. He doesn't advocate specific things; he lays out a set of deductions. You will come to the same conclusions IF you have the values he supposes. IF you value this, then you should believe that. Which is a true statement regardless of what you believe.

My point is that amoral != immoral. But I think you are saying they're the same -- that all decisions must have a moral component or they are immoral.

I agree that hyper aggressive growth doesn't always produce the kinds of companies that society "should" want... but sometimes it does! It's probably impossible to separate the two, not least because everyone has different opinions on what's valuable.


You're reading too much of your own POV into another person.

Compare this essay with a random earlier one I selected (I just scrolled down and clicked a title that seemed ripe to disprove you):

http://paulgraham.com/opensource.html

Morality is rife within it: justice, monopolies, boss-employee relations.

He may have changed, but not all of his essays are amoral.


I don't read that essay as having much morality. I think YOU are reading your POV into it.

There is an assumption that a monopoly by MS would be dangerous. That's not a particularly judgmental stance. I could imagine someone having a different belief system about monopolies, but it hardly seems like a moral claim.

Then he is saying that he prefers to work with an economic partner rather than under the employer-employee system. He doesn't say it is morally right. He says that business can learn from this, because it would make the business more productive. That's an amoral argument. He's invoking economics to justify a way that people should interact.

It's basically a libertarian argument, and in general this type of argument is agnostic about morals.


to make what is clearly a generalization, though I think an accurate one:

Silicon Valley = makers

Wall Street = takers (sometimes enablers or rewarders for the makers)


This is not true. Minecraft is the best example; it had an insane growth rate both in percent and in absolute numbers, exactly what PG is talking about. The other games you mention are also good examples of startups in high-growth terms. And when talking about markets, most people who enjoy Diablo, Baldur's Gate and Minecraft _don't_ enjoy free-to-play games, since those games have no element of art or story to them.

And PG even explicitly said that not all companies should be startups. You are reading things into this essay that aren't there.


Somehow labeling Minecraft, Diablo, BG as "startup"s or "startup-like" things feels wrong to me.

Minecraft was one guy (probably) enjoying himself while putting together something creative. It then exploded. Did he really target 10% growth per week?

Diablo, BG, etc. were all calculated bets by people with lots of experience in the industry. They weren't building a business model, a new organization, etc. They were doing "projects" they felt would promise high returns. That's it.


Minecraft maybe didn't set out to be a startup, but it achieved all the growth goals of one, and it's certainly fair to call it one now, isn't it?

Games like Diablo and BG are actually run in a fashion closer to movies than startups. A game publisher finances games that it expects will succeed. The publisher's competitive advantage comes largely from brand equity and high capital cost of creating a new game. Thanks to brand equity, they are primarily expanding or upselling into a market where they already have a lot of reach. Of course, there is some risk of failure in each project, but it's closer to that of making a blockbuster sequel than that of solving a novel problem with technology.


Growth is the litmus test, but the article seems agnostic about how you achieve it. Do you go full on psychological predation like Zynga? Or do you make a tool that is undeniably better than the competition by orders of magnitude such as Google? The article seems non-prescriptive on this point. But, if you do not achieve growth by any means, then your company is dead by definition, so you should probably be measuring it.


For modern startups, growth can be optimized on an ongoing basis, but only if you're building something that people love. There's a reason why Zynga games' growth isn't sustainable and they need to rely on the novelty of new games to grow.

I'm sure even Blizzard and Steam teams are very focused on growth as a metric. It's just that unlike Zynga, they believe that creating fun mechanics in their games and improving on them through patches make for a great way to move their needle. Therein lies the difference. I guess the focus shouldn't just be growth at all cost, but sustainable growth.


"Growth drives everything in this world."

I think you're choosing one single sentence and taking it out of context, in other words, misunderstanding. By "this world", I took pg to mean the high-tech area of economic activity, with silicon valley serving as a figurative stand-in. I don't think pg means that every single thing in the world is driven by growth. The essay is about what makes a company a startup, and how a startup can be successful. The essay is not about what is most important in the world.


Growth is for startups; it determines the value of a startup. Startups are only a minority, but they get a lot of attention. This is logical, since to achieve their growth goal they need the publicity.

People have different understandings of success. Make your pick and don't worry about the others.

Some people get confused about their goal or success target.


This happens all the time in machine learning applications -- or in many other engineering disciplines, I dare say. If theorems and laws never needed some tweaks here and there in the real world, what would we need hackers and engineers for?


Often the tweaks are then used to inspire more solid mathematical footing. An interesting example of this is going on with the recent surge of interest in neural networks and deep learning at machine learning conferences. What used to be hacks and heuristics are being given a more rigorous narrative. Of course, as soon as we have a better model for neural networks, someone immediately finds a non-rigorous tweak to improve its performance. And the cycle goes on...


A good example is regularization. You have nice proofs saying that your classifier is optimal, then you tack on a regularization term to it, which breaks your optimality proof but improves your classification accuracy. It seems unexpected, but it's not really all that surprising when you get down to the details of it.


Oops hit the down-arrow without intending to, my bad, hope someone will fix that.

There is nothing tacked on about a regularizer though; it is very sound even in theory. There are several ways to look at it. One way is to see it as a natural consequence of Bayes' law: it is just the log of the prior probability. There are certain things we know or assume about the model even before looking at the data -- for example, we expect the predictions to have a certain smoothness -- and all this knowledge can be incorporated into the prior model; that is what the regularizer is. Another way is to look at it from the stability of the estimates of the parameters. I find the former more convincing.
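That Bayesian reading fits in one line: under a Gaussian prior on the weights, maximum a posteriori estimation is exactly the penalized objective (a standard derivation, not specific to any one model):

```latex
\hat{w}_{\mathrm{MAP}}
  = \arg\max_w \,\bigl[\log p(D \mid w) + \log p(w)\bigr]
  = \arg\min_w \,\bigl[-\log p(D \mid w) + \lambda \lVert w \rVert^2\bigr],
```

where the second equality uses p(w) = N(0, sigma^2 I), so log p(w) = -||w||^2 / (2 sigma^2) + const, and lambda = 1 / (2 sigma^2).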


Absolutely, there's a pretty clear mathematical justification for regularization. However, it is very literally tacked on at the end. Take logistic regression: if you minimize the cost function without regularization, you get a maximum-likelihood estimate of the regression parameters. But what we do is add a regularization term to that cost function. Minimizing that cost function will no longer give the MLE solution, but it will (likely) give a better one. It all comes down to understanding that the MLE property is an asymptotic result. The same goes for covariance matrix estimates, where you have regularization procedures that are guaranteed to never be worse than the plain MLE solution.
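As a toy illustration of that "tack on a term" step (my own sketch, not anyone's production code): in plain gradient descent on the logistic loss, the only change ridge-style regularization makes is one extra `lam * w` term in the gradient. On linearly separable data the unregularized MLE weights grow without bound, while the penalized ones stay finite.

```python
import numpy as np

def fit_logistic(X, y, lam=0.0, lr=0.1, steps=2000):
    """Gradient descent on the (optionally L2-regularized) logistic loss.

    lam = 0 recovers the plain maximum-likelihood estimate; lam > 0
    adds the "tacked on" ridge penalty lam * ||w||^2 / 2.
    """
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))         # predicted probabilities
        grad = X.T @ (p - y) / len(y) + lam * w  # penalty adds lam * w
        w -= lr * grad
    return w

# Tiny separable dataset (intercept column + one feature): with lam = 0
# the MLE weights drift toward infinity; the penalty keeps them bounded.
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w_mle = fit_logistic(X, y, lam=0.0)
w_reg = fit_logistic(X, y, lam=0.1)
```

Run longer and the gap widens: `w_mle` keeps growing with the step count, while `w_reg` converges to a fixed point.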


As an engineer, you should also be aware of when discovering the basis of the tweak is crucial. Discovering that tweaking the beam bending equations gives a much better fit to your test results on the beams you would like to use for your building is one example.

In some cases, these tweaks provide better results for a small range of conditions. That small range may be big enough for you (given your task at hand), but without understanding the tweak, you can't actually know. So care must be taken.


This kind of thing is awesome in a way. I get the sense that machine learning really feeds on people attacking problems from both ends, the elegant probabilistic side and the practical optimisation hacks both inspire each other.

Some potential downsides for a hack which isn't backed with any theory though, just to demonstrate why it might be worth trying to do some theory after spotting one of these hacks, from a practical not just an aesthetic perspective:

- It may have an impact on convergence properties and numerical stability of any optimisation algorithm you're using to fit the model. Convergence speed, quality of local maxima attained, whether it even converges to a local minimum of your cost function at all, whether there are any guarantees that it doesn't sometimes blow up numerically in a horrible way...

- In general it may be brittle, with the circumstances under which it works well poorly understood. Will it break as your dataset grows? will it work on slightly different kinds of datasets?

- Too many arbitrary parameters to tweak can be expensive unless you have a smart way to optimise them (smarter than grid search + cross-validation)

- Maintainability. It can be frustrating trying to re-use work when people have been less than completely honest in documenting things like "this term/factor/constant was pulled out of my arse and seems to work well on this one dataset, caveat emptor".


One should often consider applying tweaks to a final system. If there's an obvious place to introduce a free parameter, it seems silly not to do so and cross-validate the parameter against application performance.

Things get out of hand if there are many such possible tweaks, or multiple components are combined, each with interacting tweaks. Then some principles behind the tweaks need identifying. Or at least a differentiable cost function to target.
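For the single-tweak case, that cross-validate-the-free-parameter loop is only a few lines. A self-contained sketch (toy ridge-regression setup; the data and the grid are invented for illustration):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: solve (X'X + lam*I) w = X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def cv_error(X, y, lam, k=5):
    """Mean squared validation error over k contiguous folds."""
    folds = np.array_split(np.arange(len(y)), k)
    errs = []
    for val in folds:
        train = np.setdiff1d(np.arange(len(y)), val)
        w = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((X[val] @ w - y[val]) ** 2))
    return float(np.mean(errs))

# Synthetic linear data with noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = X @ rng.normal(size=10) + 0.5 * rng.normal(size=100)

# Grid search over the single free parameter.
grid = [0.0, 0.01, 0.1, 1.0, 10.0]
best = min(grid, key=lambda lam: cv_error(X, y, lam))
```

As the comment notes, this stops scaling once several interacting tweaks each need their own grid; then you want either a principled model of the tweaks or a differentiable objective you can optimize directly.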


"If you're a good hacker in your mid twenties, you can get a job paying about $80,000 per year."

Wow, things change fast. Wasn't there a thread lately about certain mid-twenties engineers at Google making $250k per year? This was written only 6 years ago.


Wouldn't the equivalent non-top-of-the-range amount be around $120k now? And it seems the dev market is a fair bit tighter than 6 years ago.


What do you mean by "tighter"?


A good programmer who can find a job in an urban market can earn $120K a year. But there are a lot of programmers who can't find jobs, and that puts significant downward pressure on salaries; and there are a lot of programmers in more rural markets who are willing to telecommute, and that puts significant downward pressure on salaries as well.


I too recently lost a co-founder to a Valley company of similar size. Considering that he is very green, not exactly the best in our startup, and that we are based somewhere with rather low pay for engineers in general, the offer he received was very hard for everyone to believe. It's hard to convince people to stay when work in the Valley could be just as interesting, without the stress, and comes with a big fat cheque.


Can't help casual users, but for power users, this is a very handy tool to inspect the source on-the-fly:

https://chrome.google.com/webstore/detail/bbamfloeabgknfklmg...


Ah yes, but how can I trust that "Extension Gallery and Web Store Inspector" is safe to install?



I'm not sure how you can download an extension before installing it, but on Ubuntu, after you've installed it, you can look at the source under ~/.config/google-chrome/Default/Extensions/ (or on Windows under C:\Users\You\AppData\Local\Google\Chrome\User)


You can inspect it too


The misspelling in the blurb -- "Crack open any extenstion or web app in a gallery and see what it actually does before installing" -- doesn't inspire a lot of trust.

s/extenstion/extension/


I doubt using typos to discredit other people's work or opinions is rational or polite. I didn't write that extension, but he/she could just be a non-native speaker like me. What's your problem with people who don't speak and write your native language? Can you write in mine, 100% error free?

*fixed typo, thx


I have absolutely no problems with people who don't speak and write in my native language. You're misinterpreting my comment, which was intended to be constructive, and responding with an ad hominem attack by suggesting that I am bigoted.

I think it's quite rational to suggest that work is of dubious quality if something as important as the tagline or elevator pitch contains errors that should be caught by proofreading or a spell-checker. The author spelled it correctly in the title and elsewhere in the text, so it's a simple error, not a lack of knowledge. It would be impolite not to suggest improvements, because the author could use my help.

A rational response to my comment would be exactly as you have done with drv's comment: "Fixed typo, thanks!"


I don't want to be too pedantic, but I thought you might want to know "discrete" is not a verb.


No, but I'd certainly have found a spell checker before trying.


Do you generally measure code quality using typos in unrelated (to the quality, that is) copy? I've seen worse errors in the MySQL docs, but I still trust it to hold data.


Meta: Wow, I didn't expect to be downvoted for this. I wasn't trying to discredit the extension, I just wanted to be helpful. The attempt at humor probably didn't improve my case any, but I can't edit it anymore.

How can I submit comments like this in the future? I didn't see an easy way to submit feedback to serg472 (the author's name isn't clickable, and there's no email as described in http://www.google.com/support/chrome_webstore/bin/answer.py?...), so I dropped it here in case the parent, who was obviously supportive of the extension, wanted to forward it to someone who could take action.


Threatening to go after other Android partners makes little sense for Motorola. It's not something Motorola would do to get the most profit. If Motorola threatened Google with something, it would be selling to Microsoft. That would make much more sense.


I would bring up IP as very important for differentiation (among Android vendors). We have a very large IP portfolio, and I think in the long term, as things settle down, you will see a meaningful difference in positions of many different Android players. Both in terms of avoidance of royalties, as well as potentially being able to collect royalties. And that will make a big difference to people who have very strong IP positions.

http://www.unwiredview.com/2011/08/11/motorolas-sanjay-jha-o...


That doesn't mean the action makes sense. My point is that it wouldn't make sense for Motorola to do that, not whether they did or did not do it. Google can easily tell them that if they do that prematurely, the Android ecosystem will fail, rocking the same boat they are both in. Sensibly, you can't get much in royalties from a failed platform, and Apple/Microsoft won't give Motorola any medals for going down that path.


Doubt it. The dumbest thing would have been sitting there and doing nothing.


We're comparing things they did do, though, so that's not an option.


Another scenario: you wish to tell your web app's users that there are some issues on certain Firefox versions. You write "There are some issues for Firefox 7.xx". Users scratch their heads and ask: what version? Nightmare follows.


If it's your webapp, you check the User-Agent header (which still contains the browser version) and then, if they have the problematic version, tell the user to make sure that they have the latest version of Firefox installed (which they can still check in the about box).

If it's somebody else's webapp, you still tell the user to check the about box to make sure it's the latest version of Firefox.

For you as a web author and supporter, this process makes sure you only ever have to test with one version of Firefox: the latest (minus the few percent of <= 3.6 installs still around from before the fast release cycle went into effect).
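A minimal sketch of that server-side check (hypothetical helper and version numbers; feature detection remains preferable where it applies):

```python
import re

def firefox_version(user_agent):
    """Extract the Firefox major version from a User-Agent header,
    or return None if the browser isn't Firefox."""
    m = re.search(r"Firefox/(\d+)", user_agent)
    return int(m.group(1)) if m else None

# Warn only the users actually running the problematic release.
ua = "Mozilla/5.0 (X11; Linux x86_64; rv:7.0.1) Gecko/20100101 Firefox/7.0.1"
if firefox_version(ua) == 7:
    print("Known issue with Firefox 7 -- please update to the latest version.")
```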


You mean like http://fafsa.gov ? They tell users to get the latest browser version, and then block them if they're running anything higher than Firefox v3.6. (Complaints are met with "install 3.6" or crickets. Same problem with running any browser from Mac's 10.7 Lion, as they also do OS sniffing.)

On the bright side, removing version numbers may help push ignorant web devs away from version-based browser sniffing toward the more palatable/usable/progress-friendly functionality-based sniffing.


"If it's your webapp, you check the User-Agent header"

That's fine if your communication is only via your website. But consider if you have to send an email bulletin and explain this, or write it up in a technical support answer.

"If it's somebody else's webapp, you still tell the user to check the about box to make sure it's the latest version of Firefox."

Sometimes there are problems in the latest version.


> Thats fine if your communication is only via your website. Consider if you have to send an email bulletin and explain this. Or write it up in a technical support answer.

The people who visit your page with the problematic version of Firefox will get to see the message. The others don't need to know.

> Sometimes there are problems in the latest version.

As you only have to deal with one version of Firefox, work around the problem in your web app code.


> The people who visit your page with the problematic version of Firefox will get to see the message. The others don't need to know.

Obviously, this is not true when you're providing a web application as a service to a large corporation with big IT departments, business analysts, armies of project managers and complex contracts with all sorts of requirements.


> as you only have to deal with one version of Firefox, work around the problem in your web app code.

Ah, of course, wish I'd thought of that.

And what do you tell them in the interim, while the problem is being debugged, fixed, tested and rolled out?

I guess now it's just "doesn't work in Firefox", not "doesn't work in Firefox 7, keep using 6 for now".


Why is this a problem in Firefox but not in Chrome? Chrome updates itself all the time and it's really hard to make it stop doing so.

Now Firefox is moving in the same direction and this behavior suddenly gets to be a problem.

Firefox releases new major releases every six weeks, but there is a period of 12 weeks for it to move from alpha to final. That should be enough time to debug, fix, test and roll out.

And if you are targeting corporate installations, you will probably disable Firefox auto-updating itself, but again, it's irrelevant to ask "which version of Firefox are you running?" as the answer will always be: "the latest possible" which is the latest stable in non-managed cases and the latest deployed in managed cases.

Hence I think removing the version number from the about box is no big deal and will help against all these "omg! They are crazy to release so many major versions in so little time" posts we are seeing all over the place.


No. You tell the users that the web app works fine on Firefox, and they can check in the about window to make sure they're up to date.


Could anyone explain to me why the HN community seems to have a particular interest in AI compared to other, more "academic" areas? I have seen quite a number of AI resources here, but at the same time most startups discussed here aren't particularly related to AI. Just curious.


I'm going to guess that you might be surprised at the number of startups that actually do rely on some aspect of AI, if you were to dig deep enough. Even if they're not "an AI startup", lots of companies these days are using some sort of machine learning, data mining, collaborative filtering, etc., which are aspects of AI. Look at any company that's using a recommender system of some sort... that stuff is one element of AI.

Aside from that, I think it's just that hackers have always been fascinated with AI. And a lot of what we know as "hacker culture" to this day dates back to people and events in the AI research group at MIT, decades ago[1].

[1]: http://en.wikipedia.org/wiki/Hacker_(programmer_subculture)#...


My interest in AI is purely practical. The first reason (very generic) is that I am (nearly) a lone contributor on a project that grew large. I have been using every trick in the book to make my code more abstract and manageable by one person. The next step in code reduction requires declarative programming (a la Prolog, though it doesn't have to be), which requires deductive reasoning, which is a major topic of AI.

The second reason (more specific) is that I need to use an ontology (stored as a database), which is basically another topic of AI.


One possible explanation is in the syllabus:

AI has emerged as one of the most impactful disciplines in science and technology. Google, for example, is massively run on AI.


Google is massively run on databases, but that topic isn't as popular in the media (or on HN). I think a big part is that AI is a highly visible topic that is also very romanticized, partially because it's a field where serious inroads have been made (at least compared to expectations and the last generation's science fiction).


Databases are very "old world" -- very much part of the modern enterprise. Case in point: Oracle. As a result, they're not very exciting to people.

I expect that when there is an Oracle of AI, it too will lose its lustre.


Machine learning in particular is hugely important for startups -- for example, given a rich training dataset of deals you've bought from a deal site (with predictors such as type of deal, location, price, datetime, number of friends who've bought them, etc.), can we train a model that'll rank deals by the likelihood that you'll purchase them?


Many technologists make money when they automate tedious processes executed by humans.


The author's view is strange indeed. To be a "rounded, open and engaged intellectual citizen" we might as well just get out of the room, bring curiosity and action into people's lives and matters, and reflect on what we see and do. In my experience that works much better.

