Hacker News | ten_fingers's comments

Ah, how to use the law of large numbers and utility functions to make money!!!


You seem to be new here. Let me give you a tip: you will be hellbanned very shortly if you continue commenting on the site the way that you have been doing so far. Please read the guidelines:

http://ycombinator.com/newsguidelines.html


You responded to my post about the law of large numbers and utility functions?

That is actually a good and appropriate observation.

Of course, some mods came along and downvoted it. I doubt that they understand either the law of large numbers or utility functions. Of course, the law of large numbers is a crown jewel of 20th century probability theory, and utility functions are one of the better contributions of von Neumann.
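
The law of large numbers claim above is easy to illustrate with a small simulation (a toy example, not part of the original comment):

```python
# Toy illustration of the (strong) law of large numbers: the sample mean
# of i.i.d. draws approaches the true expectation as the sample grows.
import random

random.seed(0)
true_mean = 3.5  # expected value of a fair six-sided die
for n in [10, 1_000, 100_000]:
    sample_mean = sum(random.randint(1, 6) for _ in range(n)) / n
    print(n, round(sample_mean, 3))  # approaches 3.5 for large n
```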

For being HELL BANNED, sure, that is the shame of HN. I no longer care about HN. I've had it.

I long ago read the 'guidelines', and I've done nothing wrong. But HN is run by some arrogant, nasty people. To HELL with HN.


I responded to your post about large numbers and utility functions only because it was your most recent one. I would've sent you a private message had the site supported PM functionality or if you had some e-mail address in your profile.

The point that I am making is about your general behavior in this thread. You have broken multiple guidelines:

'Be civil. Don't say things you wouldn't say in a face to face conversation.'

'When disagreeing, please reply to the argument instead of calling names. E.g. "That is an idiotic thing to say; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."'

'Please don't use uppercase for emphasis. If you want to emphasize a word or phrase, put asterisks around it and it will get italicized.'

'Please don't bait other users by inviting them to downmod you.'


I was attacked strongly for no good reason. Basically this UID was ruined because I am now an enemy of the HN mods. In the future, they wouldn't let me comment in positive terms about apple pie.

The main reason for the attack was clear: I was commenting on how venture capital could do better, and that is a sensitive subject on HN. Further, the VC community is wildly arrogant and just insists on pretending to be the smartest guys in the room. So, no way do they want public comments on their work, comments that presume to tell them better ways to do their work. And, since the venture partners are rarely significantly technical, they are insecure and defensive about their qualifications and, thus, especially fear and resent a technical Ph.D. commenting on their work. Especially the VCs don't want comments that go around them to their LPs, who now are unhappy with VC returns. So, in the end the issue is VC and HN ego.

The HN mods are hot on slapping down any commenter who fails to bow deeply enough before the VCs. So, they slapped me down. In the end, that is the shame of HN and its mods and a display of their absurd ego. I did nothing wrong but defend myself.

The world is what it is. It's too bad that HN is run by some nasty people. But, so be it. Users can read this thread and draw their own conclusions, at least while my posts are still available and not yet 'hell banned'.

I've got nothing to lose here. But HN and PG have lost, and by attacking me for no good reason they deserve to lose.


0) It's very nice to see you change your position. First, "you did nothing wrong". Then, after I pointed out that you did, indeed, do something wrong, it turns out you did it because "you were attacked strongly". This is common behavior in elementary school, and not an acceptable mode of conduct in adulthood, especially from a technical Ph.D.

1) There are plenty of people on HN who don't bow down before VCs. See patio11, mechanical_fish, tptacek etc. They are not voted down into oblivion, so it is likely that the reason why you are being downvoted has nothing to do with your particular opinion of VCs.

2) I looked through your comment history on the site. You tend to write long, rambling, stream-of-consciousness walls of text - this is true not just of this thread, but of many others over the last two months. Different commenters have pointed out that they do not enjoy reading your posts precisely because of that reason. Frankly, I don't either. So, I'm pretty sure that's why you were, as you say, "strongly attacked". It is perfectly acceptable to downvote comments for presentation rather than substance, and that happens on HN quite often. If you think that the substance of your comments is so important that presentation doesn't matter, you and HN are both better off if you leave.


My first comment on this thread was down voted to -4 quickly. That post did nothing seriously wrong under the HN rules. Since only a few users are able to down vote, that down voting had to be heavily or entirely from HN mods.

For more evidence, the down voting was well before any responding comments. That is chicken sh!t behavior from the mods.

So I was attacked, and not for anything I did wrong. Then I responded and defended myself and called names, and that was justified.

For your claim that later I violated the HN rules and, thus, engaged in childish behavior, here is a close analogy: It's against the law to hit someone on the street. But if you do hit someone, then they may hit you back just in self defense, and then they are not violating the law. All I did was to defend myself against a wildly unjustified attack.

It was the HN mods who misbehaved and started the fight, not me.

For my writing, it's clear enough and well organized, for a technical Ph.D. or anyone else. But most blog comments are just really short with little content. Many of my posts to HN have had some content.

And my main post here was comparable in length to the PG post I was responding to.

For making my posts shorter, responses on this thread have shown that even when I explain carefully, number points, give headings, give examples, provide summaries, etc., still many readers don't get it. In part the problem was mentioned by PG in his post -- people can willfully respond critically. Then, as PG mentioned, it can be good to have written enough to be able to point to the part of an original post they just didn't read.

In being so critical of me, you are just doing a playground thing of joining with the majority to form a gang to attack me as a group. It's mob behavior. The posts have not been at all thoughtful about what PG talked about and I responded about on evaluating projects and, instead, have just been gang hostility.

So why was I down voted? Not for length, some use of all caps, or some use of sarcasm to try to raise interest and avoid being boring. No, I was down voted because I presumed to mention research to venture capital and, thus, rubbed the VC community's ego the wrong way, and the HN mods' along with it.

If you don't want to read what I write, then don't.

But HN and I are done. HN is run by some nasty people, and I've had enough. PG has already indicated that he believes that HN has become too big to be easy to manage.

In particular, my UID is dead: The HN mods are angry with me, hostile, making me a target of gang hostility, and down voting just anything, e.g., my little line on the strong law of large numbers and utility functions you responded to.

The shame here is HN's. I'm leaving nothing of value.

But on leaving HN, sure, it's run by some nasty people. That's the shame of HN, YC, and PG.


> Since only a few users are able to down vote, that down voting had to be heavily or entirely from HN mods.

This site has been running for more than five years. Trust me, the overwhelming majority of users who can downvote aren't mods, they're not even regular participants in the conversation; they're probably mostly lurkers who submit decent articles.

Your comments are mostly flip, and provide little value. That's why they get downvoted; you're trying to be funny and by the community standards you're not.


PG,

In part you are correct, but you are making some huge mistakes and, in a saying from my past, are "straining over gnats and forgetting elephants".

> That's made harder by the fact that the best startup ideas seem at first like bad ideas. I've written about this before: if a good idea were obviously good, someone else would already have done it.

That's likely true from what you and Silicon Valley see, but that's just a gnat.

Basically you are saying that success is a shot in the dark and that it is impossible for an entrepreneur and their financiers to design success with good reliability. Given the enormous amount of data available, that is obviously nonsense.

Paul, instead of shooting in the dark, you need to turn on a light!

I claim that we can design success, on just clean sheets of paper, with high reliability, and that how to do so is well known, with a long and fantastic track record right in front of us.

Also, we need to address your:

> if a good idea were obviously good, someone else would already have done it.

Nonsense. Paul, where do you get such stuff? You have one of the best backgrounds of anyone in Silicon Valley entrepreneurship, and yet you still fall for that nonsense that fills the trash and the offices on Sand Hill Road.

What you mean by "a good idea" is just a good 'business idea', e.g., a very short, nearly superficial, description of the business to, say, an early customer. But, Paul, that's nearly irrelevant.

Want a good business idea? Make a billion dollars quickly? A sure fire winner? Okay, this is your lucky day. May I have the envelope, please (drum roll). Yes, here it is: One pill to take once that provides a safe and effective cure for any cancer. That's your guaranteed, 100% true, dyed in the wool, sure fire, billion dollar "good business idea". And it's obvious, and no one has done it. Done.

Since the 'idea' is obvious, why has no one done it? Sure: No one knows HOW to do it.

Are you beginning to understand?

So, for a hand up, in case you were up all night drinking beer with the boys, here is a 'generic method' (ALL TRUE hackers just LOVE such verbiage):

Step 1. Think of a big, unmet need, something a billion people will pay a little for or thousands of people will pay a lot for. E.g., the one pill cure for any cancer. That is, think of an important, unsolved problem.

Step 2. Find a way to meet this need. E.g., for the cancer pill, have to do some quite good biomedical research. Uh, did I say that it was all easy? I don't remember saying it was all easy. If Step 2 is too difficult, then return to Step 1 (we're writing this in the form of an 'algorithm' since ALL TRUE hackers just LOVE algorithms). Else proceed to Step 3.

Step 3. Sell the solution to the customers and take the money to the bank.

There it is, in just three steps. Of course, likely the bigger the unmet need from Step 1, the more difficult will be the research in Step 2. Whatever, the key to the 'algorithm' is the research in Step 2.
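
The three steps read naturally as a loop. Here is a minimal sketch in Python, where `attempt_research` and `sell` are hypothetical placeholders for the real (hard) work, not real library calls:

```python
# A minimal sketch of the three-step 'generic method' above. The helper
# functions are invented placeholders, purely for illustration.

def attempt_research(need):
    """Step 2: try to find a powerful solution; None means 'too difficult'."""
    # Placeholder: pretend only the cancer pill admits a solution.
    return f"solution for {need}" if need == "cancer pill" else None

def sell(need, solution):
    """Step 3: sell the solution and take the money to the bank."""
    return (need, solution)

def generic_method(candidate_needs):
    for need in candidate_needs:           # Step 1: pick a big unmet need
        solution = attempt_research(need)  # Step 2: do the research
        if solution is None:
            continue                       # too difficult: back to Step 1
        return sell(need, solution)        # Step 3: cash in
    return None

print(generic_method(["flying car", "cancer pill"]))
# -> ('cancer pill', 'solution for cancer pill')
```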

Did I mention 'research'? Gee, does anyone on Sand Hill Road consider research? Well, Paul, you and Y Combinator have high qualifications in evaluating research, but a cursory look at several of the best known venture capital firms in Silicon Valley, Boston, Winter Street, and NYC shows little to no ability or willingness to evaluate research. Uh, to evaluate research, as usual, we want at least a relevant Ph.D., a tenure track faculty position in a research university, and some relevant peer-reviewed publications of original research.

So, research is being ignored! Opportunity knocks!

Now, should we have any faith in the promise of research to get solutions to practical problems, or is all of research just ivory tower intellectual self-abuse that has yet to be proven totally useless forever?

Let's see: It turns out that there is in all of the world in all of history exactly one, unchallenged, unique grand champion of doing research to get powerful solutions to important practical problems. And the track record is much better and much longer than that of Silicon Valley.

Without further ceremony, here's the envelope. Yes, the answer is: the US DoD. They got going about 70 years ago and did little projects like the bomb, the hydrogen bomb, radar, synthetic aperture radar, spread spectrum radar encoded with shift register sequences, adaptive beam forming passive sonar, inertial navigation, GPS, and on and on.

So, for the question, is it possible to do research that yields powerful solutions to important practical problems? Sure. Done.

So, should Silicon Valley fund research? Well, perhaps not. But, if an entrepreneur has selected a good "unmet need" in Step 1 and done some good research to get a powerful solution in Step 2, should Silicon Valley consider the research?

Hmm ...? But, such research is rare! Right! Did I notice that you have already noticed that big winners are rare, 'black swans', outliers, "few"? Yup.

How many $200 billion winners does a $100 million venture fund need to give good returns to its limited partners? Many? Several? A few? How about just one?

Silicon Valley is shooting in the dark into a pond with nearly only small fish. Silicon Valley needs to turn on some lights, look, pay attention to powerful research for big unmet needs, and then evaluate pulling the trigger.

My guess is that the real problem is the limited partners (LPs) who really prefer to look at reports from accountants. So, the LPs tell their venture funds to do all evaluations as close to accounting as possible. Since the usual accounting metrics are not yet available, the venture firms use surrogates, and their favorite is 'traction'. They want their coveted 'traction' to be high and growing rapidly. E.g., in the case of the bomb, they would have said "You build and test one and get one ready for delivery, and we will chip in for the gas for the Enola Gay."


I'm sympathetic to what you are saying, but the DoD bit is the worst part of your argument. The DoD wasn't trying to profitably conduct (or fund) research.


You deliberately misread or need a remedial reading course: I was clear, totally clear, crystal clear: My point was that the DoD research shows that research can be powerful for finding solutions to practical problems.

Your point does not contradict what I wrote and is hardly even relevant.

Also I was totally clear that venture capital might not want to fund research. Instead my point, my recommendation, was that venture capital should consider and evaluate research that has already been done. But in information technology, Sand Hill Road will NOT do that. A-H won't do it. Menlo won't do it. KP won't do it. Sequoia won't do it. Founders Fund won't do it. Silicon Valley will NOT evaluate research. PERIOD. No wonder their returns suck.

For the DoD, actually, if you do some arithmetic on something like ROI, their deployments and even their research look good, MUCH better than Silicon Valley's.

Let's take a simple example: If you read Richard Rhodes, the atomic bomb project, the Manhattan Project, cost about $3 billion. And there was a LOT of duplication and failed directions. But apparently the Bomb saved about 1 million US casualties by avoiding invading the islands of Japan. So, we're talking $3000 per casualty. We're talking a financial bargain.
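
The back-of-envelope arithmetic, using only the figures quoted in the comment (about $3 billion in cost and about 1 million casualties avoided), checks out:

```python
# Back-of-envelope check using only the figures quoted above (the
# comment's numbers, not audited ones).
cost_usd = 3_000_000_000        # quoted Manhattan Project cost
casualties_avoided = 1_000_000  # quoted US casualties avoided

cost_per_casualty = cost_usd / casualties_avoided
print(cost_per_casualty)  # 3000.0 dollars per casualty avoided
```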

Want to do some ROI calculation on, say, GPS? How about the ROI of GPS and laser guided bombs? So, roughly get one bomb to do the work of some power of 10 bombs without GPS and laser guidance.

Want to do some ROI on, say, packet communications networks, i.e., the Internet, and just for the DoD uses?

Again, yet again, my point was that DoD has shown in rock solid terms for over 70 years that research can provide astoundingly powerful solutions to important practical problems. So, in my 'algorithm', the key in Step 2 is just such research, and the DoD has shown that such research is possible.

Just what part of this simple argument is too difficult to understand?

Look, guys, I know that it's possible to dream of typing in some Java, Python, C++, etc. for a social, mobile, sharing, app and hope to get rich. But, PG's essay explained how tough it is to evaluate such work early on, and the averages along Sand Hill Road just SUCK.

But the DoD has done well in essentially my three steps for 70+ years.

Gotta tell you, in broad parts of our economy and technology, what YC and SV use as project evaluations won't pass either the giggle or sniff tests. Instead, projects in applied science and engineering receive careful, detailed evaluation early on and, when a passing grade is given, execute with high success. We're talking dams, bridges, tall buildings, airplanes, and much more. Net, the evaluation techniques of SV are back in the paper airplane days.


Why's your post blurred?


It's a downvoted post. (There are a few shades of gray.)

Note that you can't downvote until you have enough karma, which I believe is 500 right now. The threshold goes up as the site grows and karma becomes easier to acquire.


You overestimate the value of academic research and of PhDs


No, I never claimed that academic research and Ph.D.s have high average value.

The "value" of these two in nearly all cases is low. But, the value in particular cases is astoundingly high, totally blows away nearly everything else. What is crucial, then, as I explained very clearly, is EVALUATING the research. Then what the "value" is in most cases is irrelevant. Instead, what is relevant and crucial is the value after good results from a careful evaluation.

I didn't suggest investing in research and only suggested evaluating research that has already been done.

The US DoD has a fantastic track record both in evaluating projects to do research and in evaluating research already done.

The US research universities are also good at evaluating projects to do research and also research already done. Believe me, in technical fields, the US research community is quite good at evaluating both research proposals and completed research.

The information technology part of Silicon Valley won't even attempt to evaluate research, even research that has already been done.

Does Silicon Valley get a lot of project proposals with such research? Likely not. But, as PG's essay explained, so far Silicon Valley nearly never gets project proposals for projects that are "big wins" and that SV can tell with good accuracy are "big wins" early on.

So, since, as in PG's essay, SV is struggling, especially in project evaluation and refuses to evaluate research, I mentioned, as in my Step 2, that research can yield powerful solutions to important problems from my Step 1 and that the US DoD does well in evaluating research. So do many other parts of our economy, e.g., essentially all of aerospace and huge fractions of all the parts of our economy involved with challenging engineering, and in these cases the evaluations are for high financial ROI.

In simplest terms, the ROI averaged across Sand Hill Road and Winter Street sucks; as PG's essay explained, early project evaluations are a shot in the dark; research can provide powerful solutions for important problems; research really can be evaluated with good accuracy; but SV refuses to evaluate research. As PG's essay explained, to get SV's returns up takes only a tiny number of "big wins"; well, the US DoD has done well evaluating "big wins" with high accuracy for over 70 years. So, I made a contribution, but I got attacked with heavy down voting. There are people on HN with a lot of power who don't want to hear about different ways to operate that promise to solve the problems they are struggling with. Piss poor.


You're not making any sense whatsoever and your points are far from clear.

For starters, as an asset class, sand hill returns are probably as high as it gets so "sucks" is definitely the wrong description.

The DoD comparisons are just weird.


You are not reading and are just angry for no good reason.

And what you are saying is total nonsense.

For the venture capital asset class, average returns over the past 10 years have been poor, including on Sand Hill Road. For the returns, there are many good sources; one of these I mentioned here is an old post of Mark Suster.

Some of Suster's data shows that in the last ten years roughly half of venture partners are no longer in the business.

Common statements are that limited partners are disappointed in the returns and that many are stopping investing in venture funds.

A few venture funds have done well and have been able to raise funds recently. The total number of venture funds able to raise a round easily now may be under 20.

For information technology venture returns, "sucks" is a fully appropriate word. But if you are a venture partner, then you and your LPs know these facts of life very well already. Also there is enough data so that most entrepreneurs know the truth, also.

The DoD evidence is rock solid and right on target: If you suppress your anger and actually read and pay attention, as I have now explained in this thread in overwhelming clarity at the level of about the fourth grade over and over and over, my point is that the DoD shows that research can yield powerful solutions to practical problems. Apparently this very simple, rock solid, overwhelmingly well supported point is just too difficult for you to understand.

To hell with this HN user ID. Mods: Vote it down to zero. Your site and your mod work SUCK. To hell with you. Use your HELL BAN tricks, etc.

Your down voting mods are chicken sh!ts who won't engage in rational discussion but just sit back, silently, and attack carefully written, helpful posts with down voting. HN is arrogant and just SUCKS.

Hell, just go ahead and delete the UID.

Users be warned: HN is run by nasty people.


You're calling _me_ angry?


Okay, you dirty, filthy, rotten chicken bastards, ANSWER with THOUGHTS or STOP the voting attacks, you dirt bags.


More attacks. No thoughts. WHAT a flock of bird brains.


I think you might not have been down voted as much if you had written less sarcastically and with more practical solutions (versus ideological ones). The solutions you are offering are so general that by following this high-level point of view we can solve world peace, world energy and world poverty and still make the evening tea.

You see, HN is mostly a community of "doers", and though the ideas you presented might have merit among circles of people who... discuss big ideas (for lack of a better description), a community of doers says and does things they can actually attempt doing today, tomorrow, or in a week with a reasonable probability of success. I am not attacking you, just trying to explain why I think you're being down-voted.

These might be helpful, specifically the comments section: http://ycombinator.com/newsguidelines.html


> The solutions you are offering are so general that by following this high-level point of view we can solve world peace, world energy and world poverty and still make the evening tea.

Nonsense. I'm talking applied research. There's rock solid, highly developed, high quality education for that called a Ph.D. in applied physics, applied science, applied math, the mathematical sciences, and many fields of engineering. The education is for 'doing' and is fully 'practical'. The shelves of the research libraries are stuffed with peer-reviewed journals of original research with 'applied' and synonyms in the titles. These journals state strongly that they like papers with actual applications. There is nothing too "general" about what I described.

It is true that the HN community and Silicon Valley are short on Ph.D. holders in applied physics, applied science, applied math, engineering, etc. But, as I stated up front, YC is unusually well qualified in these directions.

My post made some rock solid points but was written in a way to raise 'attention' although attacked no one. Instead, I was attacked. Likely the attackers were mostly just HN mods. I've seen such before at HN: There are some strong, secret PC norms sometimes enforced here. Thus HN is nothing like free and open discussion of IDEAS.

PG's essay raised some questions about some pressing issues, and I gave some rock solid answers, and was, thus, attacked. Piss poor.


Then why aren't you a billionaire yet and why do you need Paul?


If my project is successful, I will be a billionaire, many times over. I have no desire to do that, and really don't want the down side of being that wealthy, and didn't try to pick a project that would make so darned much money, but that's just the way my project looks.

I picked a project using my Steps 1 and 2. So, in Step 1 I picked a big unsolved problem, one that nearly every Internet user, desktop to mobile, wants to have solved and that so far is at best poorly solved. Then I executed my Step 2 and drew from my background in pure and applied math, had some new ideas, wrote out some new theorems and proofs, as my education taught me very well how to do, and then wrote the corresponding software.

At this point what is left to do is not very much quite routine Web site construction and some initial data collection. The rest of the software is ready for at least initial production. I've written successful production software before and for this project had no desire to write 'prototype' software.

But what's crucial about my project is the research, just the research, or Step 2 in my post. All the rest is routine.

The main business risk is whether users will like my solution. Why is there a question? Mostly because the UI and UX are different. The UI is much easier to use than anything in, say, Office, but there is still a little for users to do.

Can the solution 'scale'? Apparently. From how my software works, my software timings, and some fairly simple estimating, it appears that my software could serve the world from just 2000 square feet of standard rack space in a room of, say, 20,000 square feet. So, my software is relatively efficient. The needed scaling techniques are just the simplest ones -- lots of parallelism and redundancy and processing mostly read only data with good locality of reference.

For 'needing' Paul, really I'm not trying: I've never applied to a YC 'class' and wouldn't want to be part of one. E.g., I don't have a Mac laptop! And I'm building on Microsoft instead of Linux. And I'm writing in Visual Basic .NET instead of C#! So, my software writing doesn't 'fit in' with the YC or HN 'norms'! And, more importantly, I'm not writing just demo or prototype software. Also, I'm a one-person effort: As founder, I insist on knowing all the early software, and the way for me to do that is just to write it. Besides, I enjoy writing software.

The 'business idea', the research, and the corresponding software were all fast, fun, and easy for me. But learning enough about .NET and SQL Server administration has been a self-inflicted root canal of a bottleneck -- one that maybe by now I'm mostly through.

When I get some revenue or equity funding, then for the more obscure details about Microsoft's software, e.g., when I get to be a big user of Windows Server and SQL Server, I will just pick up a phone, call a Microsoft expert, and pay. My patience for working through MSDN Web pages is drawing to a close. Similarly for the boxes I get from Cisco.

So far I am 100% owner. Some venture funding would have helped me a little, mostly just because I could have called Microsoft instead of working through thousands of MSDN Web pages. Also a LOT of venture funding would have let me hire people for all the routine software. Net, so far being 100% owner has likely been for the best.

But in the future there may be a role for some venture funding. But it looks like the 'window' will be short: By the time I qualify for such funding, I should be close to no longer needing or willing to accept it.

But YC doesn't really do venture funding. So, I would not be looking to YC for venture funding. So, my post was not to try to get YC funding.

Instead my post was to try to help Paul with the struggles in project evaluation in his essay. Also, since SV has similar struggles, I was writing to help SV. If someone in SV wants to discuss venture funding soon, then okay, but I doubt they will.

For SV funding my project, from all I can tell there will be no problem if and only if my project is nearly far enough along that I no longer need or will accept funding!

My guess, from contacts with VCs I have had, is that to fund my project now, VCs would have to evaluate my research, which they won't do and would have a tough time doing, and then violate some rules from their limited partners.

So, really my post was to tell PG, YC, SV, VCs, and the LPs that for the few "big wins" they want, they should learn to evaluate research and, then, should do that.

Of course, the SV answer, should they ever actually think that far, would be, if the rest of the software is just a little, routine Web site construction, then that is not too much to ask before looking for equity funding. My response would be, okay, but then you risk trying to get on my airplane after it has already left the ground.

Net, then, my post was really to try to shock SV enough to get them to pay enough attention that maybe I could do them some good on one of their worst problems and not really to get funding for my project.

But I should be worth about $500 million: I helped start FedEx and saved it twice. My offer letter said I'd get stock. Later Fred Smith told me, with Mike Basch, that the amount would be $500,000, and that would be worth ballpark $500 million now. That FedEx wouldn't do what they promised in my offer letter is my loss but their shame. Ah, what the heck: If my project works, then I'll be worth more than Fred Smith anyway.


This is more venting, or a manic episode (I am not being facetious; I am reading it that way), without any sort of specifics besides the merits of applied research you are expounding, written in an attack-like style. It's very interesting for me to read as a stream of consciousness, but not practical in any aspect, nor open to any debate.

Your "doing them some good" had nothing specific to merit attention towards fixing a... valuation or funding problem?


My point is clear, simple, rock solid, quite explicit, and very well supported: Again, yet again, this time just for you, my point from DoD research is that DoD research shows that research can find powerful solutions to important practical problems. Examples include a long list of astounding military technology: the atomic bomb, the hydrogen bomb, sonar in all its forms, radar, now a deep and astounding field, laser guided bombs, GPS, stealth, high bypass turbofan engines, e.g., first for the C-5A, carbon fiber materials, CAD, originally heavily for aerospace, etc.

What is "practical" is the unique power of research to find powerful solutions to important real problems.

I omitted all my peer-reviewed, published research results. But if you want some examples of research, try the shelves of any research library.

My post, if actually read, directly addressed the main problem in PG's essay, how to evaluate projects. My solution was my three steps. That is, start with an important problem and do some research to get a powerful solution. That solution addressed PG's issue.

For how to evaluate some routine application of software for a simple case of some social, sharing, mobile app, my solution is don't try and, instead, go with projects that use original research to get powerful solutions for important problems.

If you want to know what research is, get a Ph.D. in a technical field from a good research university. If you want to know how DoD does research and applies it, then get a job in a DoD laboratory that does such work -- the DC area is surrounded by such labs from NRL, NSRDC, JHU/APL, and many more, and there are many more such labs all across the US.


Come on bird brains, cough up what passes for thinking in your flock of losers.

"Losers"? Sure: As the limited partners know all too well, the returns over the past 10 years from the 'information technology' venture capital 'asset class', in highly technical terminology, just SUCK. And PG's essay provides more evidence of the struggles to make money.

Bluntly, on average, even including the big winners, the HN flock is losing. So how to pick winners is a pressing issue with good answers not yet widely implemented or known.

So, I give some answers with rock solid foundations, and you HN bird brain chickens attack with votes but no thoughts. WHAT a flock of losers.

And since only a tiny fraction of users can downvote, no doubt the down votes are from mods. WHAT a bunch of yellow, brain-dead, bird-brain, head in the sand, chickens.


Ad-hominems will get you nowhere.

Also, you appear not to have any idea how much waste exists in Pentagon contracting. Look up the writings of Robert Higgs, Winslow Wheeler, Dina Rasor, etc. When DoD is throwing billions of dollars around with merry abandon, of course some of it ends up used for useful projects. They use up all their money and then get their budgets bumped up automatically by the politicians.

And, if YC starts acting like DoD, who exactly is going to provide follow-on financing? None of the money men, not Sand Hill Road, not Wall Street, are willing to put money in the same way and with the same scale that DoD is.

What's even worse is academic research. The professors don't care about product development, they care about publish or perish. And the output from PhDs is predictably a lot of useless greek squiggles and not very much product actually usable by Grandma.

There is no fabulous opportunity because the incentives are out of whack, and your exhortations won't change that.


You talked yourself into nonsense.

On waste in DoD, sure, but not the parts I was talking about. Some of the big waste was, say, getting AC working in Iraq and now getting Diesel fuel and jet fuel to Akrapistan. And the cost of the black oil to send a destroyer across the Pacific would really set one back.

> Also you appear not to have any idea how much waste exists in Pentagon contracting.

Nonsense. All my early career was in DoD work, mostly in research, especially in applied math. E.g., I saved a project to improve the system that keeps an SSBN at the right depth in rough seas (and, thus, got my company a nice development contract). I found a solution to a problem of global nuclear war limited to sea, apparently later sold to a place near Langley, VA. I reduced to simple Lagrangian relaxation and the Kuhn-Tucker conditions a problem in non-linear, integer, max-min for evaluating the SSBN fleet. I know my way around the DC beltway, out to Vienna, VA, down Shirley Highway, up to Howard County, etc. very well, thank you.

What I saw in DoD spending was quite efficient, amazingly so.

For academic research, it takes some effort to understand it and how it can connect with entrepreneurship. Much of academic research is too far from real products for my tastes, and I complained about that when I was a grad student and, by accident for a while, a prof.

Still, the best of academic research is by far the best stuff, including for entrepreneurship. But it is crucial to pick and choose. And how to connect from academics to entrepreneurship is not so easy to see at first glance, varies across subjects, easy in some fields of engineering, super tough in some other fields, has varied across time, especially in recent years, but, net, in many cases can be quite easy, efficient, and effective.

The idea that the academic research is just generalized abstract nonsense and Greek chicken tracks is dangerously misinformed: In simple, blunt terms, for the technical areas of research, it is funded for essentially one purpose from essentially one source. The source is Congress, and the purpose is US national security. E.g., that's where Silicon Valley came from. Along with microelectronics. And the Internet. And the last I heard, the Department of Energy funded the BSD effort at Berkeley, and Congress funds that department also mostly just for national security.

In funding this research, Congress is quite correct, amazingly so. The beginning of the movie on Nash, 'A Beautiful Mind', was correct: Essentially math research won WWII. No joke.

You are correct that SV would not be able to deploy the results of the research the way the DoD often does, but you are wrong to assume that there is nothing that SV could do. First, notice that much of the best research, especially for 'information technology', is done dirt cheap, typically by one person with paper and pencil. Second, notice that now with current computing, in a major fraction of cases, such research can be deployed at shockingly low cost. Not all research has to be as expensive to deploy as the B-2 bomber or GPS satellites.

There are many ways to waste money, and for SV to waste money. And PG's essay indicated some serious struggles in project evaluation. And there are plenty of posts, e.g., by Mark Suster, that on average VC ROI over the past 10 years sucks. So, SV is wasting money now.

But there are also some ways to make money, and I gave some rock solid ideas for how, based on research.

Long ago I guessed that no one would believe me short of my having a 300 foot yacht in Long Island Sound. By then I'm not sure I'll still give a sh!t about telling people things they so much don't want to hear.


But there are also some ways to make money, and I gave some rock solid ideas for how, based on research.

There has been nothing concrete out of your posts in improving evaluation of ideas / founders / research.


"Nothing concrete"? SURE there is. Just read what I wrote.

I said to pick an important, unsolved problem and then to do some research to find a powerful solution.

For what is 'research', get a Ph.D. from a good research university. For how to do research that yields a solution powerful for an important problem, get a job in a lab, e.g., a DoD lab, that does such work. For how to take a powerful solution to an important problem and make money with it, be an entrepreneur -- that is, write some corresponding software and start a business.

If you will lower your ego and read, you might learn something, e.g., a solution to what PG, VCs, SV, and LPs are struggling with, i.e., how to get "big wins" and how to know early on that a project has high promise of a big win.

A war story: The SSBNs were well on the way to sea, and the question of navigation was noticed. An SSBN didn't want to have to surface for navigation. So, there was inertial navigation, but something more accurate was desired. So some physics guys worked out navigation satellites. Their derivations and proposal were short. The Navy evaluated their proposal, approved it, and the project was 100% wildly successful. The lab where the work was done navigated its position to within 1 foot. The GPS system was later, by the Air Force, and better, but the original Navy system was quite good. At one point, the Navy system, to have a better means of measuring the gravitational field of the earth, wanted a satellite with no drag. Some research found one. I will leave for you just how to do that!


What is your business idea?


This is so silly. You don't have a startup idea until you have a product that you can sell to Grandma, that will work for her on a standalone basis. It can't require an Act of God to take it market. It has to be cheap enough that Grandma can buy it, and it has to be simple enough that it can be built by 3 guys in a shack in Palo Alto.

Often this means taking stuff that exists already and chopping off the most expensive features, even if they are the best features, to make it cheap enough for Grandma. Whole books have been published on this; look up "The Innovator's Dilemma".

This is a completely different goal from the goals of research, which has to be new to be published. Taking an old idea, even which was impractical to build/deploy, and productizing it, is a very hard sell to the professors and very difficult to get published papers out of.

(Some academics just do not care about practicalities, no matter how hard one tries to persuade them.)

Both these situations are also completely different from the military, who care about maximum effectiveness even if it means throwing money at problems and even if it means redeploying the very oldest ideas. If a ceramic capacitor works, instead of a funky DSP algorithm, they'll use the ceramic capacitor. If it somehow proves necessary to defend against a nuclear attack, they'll switch the ceramic capacitor for a solid gold ingot in a heartbeat.

The incentives and goals are not aligned.


Hmm, let's see: One of the algorithms is the fast Fourier transform (FFT). It is sometimes described as a form of matrix factorization and multiplication.

Another algorithm is Dantzig's simplex algorithm for linear programming: It is basically just elementary row operations much as in Gauss elimination on a, usually, vastly 'under determined' system of linear equations. The math for the simplex algorithm is mostly nicely presented via matrix theory.

A third algorithm in the list is the QR algorithm for finding eigenvalues.

So, of the 10 algorithms, at least three are closely related to just matrix theory. Amazing.
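To make the matrix-theory connection concrete, here is a minimal sketch (mine, not from the thread) comparing the O(n^2) matrix-product definition of the DFT with a radix-2 Cooley-Tukey FFT; the divide-and-conquer recursion is exactly the sparse factorization of the dense Fourier matrix:

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    # Twiddle factors: the sparse-matrix half of the factorization.
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + tw[k] for k in range(n // 2)] + \
           [even[k] - tw[k] for k in range(n // 2)]

def dft(x):
    """O(n^2) definition: multiply by the dense n x n Fourier matrix."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

x = [1.0, 2.0, 3.0, 4.0]
assert all(abs(a - b) < 1e-9 for a, b in zip(fft(x), dft(x)))
```

The point of the comparison: both routines compute the same linear map, and the only reason to trust the recursive version is the prior algebraic identity that splits the Fourier matrix into even- and odd-indexed halves.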

Then, for computer science I would add an observation: A few days ago a venture firm principal asked me about my project, "So you have an algorithm?". I had to respond, "Well, yes, but by itself an algorithm doesn't have much to recommend it and, thus, doesn't mean much.".

Well, this list of 10 algorithms supports my observation: For a problem as complicated as those solved by the 10 algorithms in the list, an algorithm by itself doesn't mean much, really doesn't mean anything. Instead, to take any such algorithm seriously, we need something logically prior to the algorithm that we can take seriously.

Well, as in the 10 algorithms, what is prior is just some applied math, typically with theorems and proofs. Then due to the theorems and proofs, we take the applied math seriously. Then we take the algorithm seriously because and only because it is a careful implementation of the data manipulations in the applied math. So, net, what is really crucial for such an algorithm is the logically prior applied math.

Of course, at times we can proceed without any prior applied math and work just with an algorithm that we got, say, from just heuristics. Then we may be able to take the algorithms seriously after a lot of empirical testing.

So, here's my point: For reasonably complicated problems, the key is some applied math, and we take a corresponding algorithm seriously only because of the logically prior applied math.

Or, computer science: The most important work with algorithms is work with logically prior applied math complete with theorems and proofs. Work with algorithms without such prior applied math is close to hopeless.

Venture firms and limited partners: If an entrepreneur has some crucial, core 'secret sauce' in running code that can be called an 'algorithm', what is crucial (prior to already having a financially successful company based on that code) is the corresponding applied math, not the 'algorithm' and not just the code.

Information technology entrepreneurs: If your business is trying to solve a serious problem with an algorithms and some corresponding code, then don't start with the algorithm and, instead, start with some applied math.

Computer science students: If you want to do good work with algorithms, study appropriate topics in a math department, not 'algorithms' in a computer science department.

Computer science professors: Algorithms are crucial to your field, but your approach to algorithms skipping prior applied math complete with theorems and proofs is bankrupt, with no good reason to take any such algorithm seriously, and will lead to a long walk on a short pier and blocked progress for your field. What's just crucial for your interest in algorithms is in the math department and not in your department. Sorry 'bout that!


It's an article from SIAM. Of course it's heavily biased towards applied mathematics. It's not only biased against computer science, it's biased against pure mathematics. Do you doubt that Buchberger's algorithm will reverberate down through the millennia? Even within applied mathematics, the list leaves out multigrid methods, the only linear-time algorithms in their class, which seem a shoo-in given their criteria for inclusion.

It would hardly be difficult to make a very long list of pure CS algorithms and data structures that could stand head to head against the likes of the multipole method in both industrial application and scientific value.

Your knowledge of computer science is evidently shallow. Educate yourself and you might think twice before making such ignorant pronouncements.


I wonder what proportion of "pure" CS algorithms can be conceptualized as an application of one of the fixed point theorems and/or properties of monotone operators. There has got to be _something_ useful about the functional analysis perspective and people doing serious work in algorithms are certainly familiar with it as a group.


> I wonder what proportion of "pure" CS algorithms can be conceptualized as an application of one of the fixed point theorems and/or properties of monotone operators.

You might enjoy http://www.amazon.com/Graphs-Dioids-Semirings-Algorithms-Ope.... It has some cool ideas, but you'll first have to wade through a sea of abstract nonsense a la Bourbaki.


You make several points. It appears that you don't like my main point but have no good argument against it or replacement for it; I will respond and try to explain my main point again.

One of your points seems to be that the selection of 10 best algorithms is not very good. I would agree: I would have selected heap sort instead of quicksort because, given a positive integer n, the execution time of heap sort in sorting n items is proportional to n ln(n) in both the average case and the worst case and, thus, heap sort meets the Gleason bound for the fastest possible sort by comparing pairs of keys. The average-case execution time of quicksort is also proportional to n ln(n), and in practice quicksort is faster than heap sort, but its worst case runs in time proportional to n^2. Quicksort does better on locality of reference for a virtual memory system, but there are ways to improve the locality of reference of heap sort.
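As a small illustration of the heap sort idea (my sketch, using Python's standard heapq module): build a heap in O(n), then pop the minimum n times at O(log n) each, giving n ln(n) in the worst case as well as on average:

```python
import heapq

def heapsort(items):
    """O(n log n) worst-case comparison sort: heapify, then pop n minima."""
    heap = list(items)
    heapq.heapify(heap)   # bottom-up heap construction, O(n)
    # Each pop restores the heap invariant in O(log n).
    return [heapq.heappop(heap) for _ in range(len(heap))]

print(heapsort([5, 1, 4, 1, 3]))  # -> [1, 1, 3, 4, 5]
```

(An in-place heap sort would sift down within the original array instead of copying; the asymptotics are the same.)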

For my main point, that the list of 10 best algorithms was well chosen is not very important.

Here is my main point again:

"For reasonably complicated problems, the key is some applied math, and we take a corresponding algorithm seriously only because of the logically prior applied math."

So, since you don't like this point, I will try to explain in more detail:

First, we are considering 'algorithms'. So, let's agree on what we mean by an algorithm: I will accept running code in a common programming language -- C/C++, Fortran, PL/I, etc. -- or something similar in 'pseudo-code'.

So, the issue is, given an algorithm, what do we need to take it "seriously", that is, be sure it does what it is intended to do?

Second, briefly let's return to sorting: Quicksort and heap sort are darned clever. But, given the algorithms, in the form I just mentioned, it's easy enough to trace the logic informally and confirm that the algorithms actually do sort. For the running times, finding those is more work but not very notable math.

So, net, for these sorting algorithms, it is easy enough for us to take them seriously for their ability to do what is promised -- sort or sort in n ln(n).

You also mentioned data structures. Well we can say much the same for AVL trees or the data structures used in a fast implementation of the network simplex algorithm for least cost capacitated network flows, etc. For some of the data structures used in, say, dynamic programming, more is needed to take the algorithms for those data structures seriously. Similarly for some uses of k-D trees.

So, for some algorithms and data structures, we can take them seriously just 'by inspection'.

Third, consider, as in the list of 10 algorithms, trying to solve a fairly complicated problem. Examples could include the discrete Fourier transform, finding eigenvalues and eigenvectors of a symmetric, positive definite matrix, least cost network flows, matching to minimize the most expensive single match used, linear programming, quadratic programming, iterative solution of large systems of linear equations (e.g., via Gauss-Seidel). Maybe we are trying to solve a problem in non-linear programming and want an algorithm to achieve the Kuhn-Tucker conditions.

Given an algorithm for one of these problems, to take the algorithm seriously we need more than just 'inspection'. So, since just 'inspection' no longer works, the question is, how can we take such an algorithm seriously?

Actually, there is some fairly tricky math needed for us to take seriously the simplex algorithm for linear programming: For the set of real numbers R, a positive integer n, and Euclidean R^n with the usual topology, let F be the intersection of finitely many closed half spaces of R^n, and let linear function z: R^n --> R. Claim: If z is bounded above on F, then z achieves its least upper bound. For us to take the simplex algorithm seriously, we need to know that this claim is true. Note: The proof is not just usual analysis with converging sequences from Rudin's 'Principles'.

For more, given a linear programming problem, it may be feasible or infeasible. If the problem is feasible, then it may be bounded or unbounded. If the problem is feasible and bounded, then we want to know that there is an optimal solution and that the simplex algorithm can find one in finitely many iterations. Since the simplex algorithm considers only extreme point solutions, we would like to know that, if there is an optimal solution, then there is an optimal extreme point solution. In the simplex algorithm, there is a sufficient condition for optimality, but there can be optimal solutions and optimal extreme point solutions without this sufficient condition. So, we need to know that the simplex algorithm can achieve the sufficient condition. The nicest solution to these issues I know of is via Bland's rule by R. Bland, long at SUNY. It's not obvious.
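For concreteness, here is a hypothetical minimal Phase-II tableau simplex with Bland's smallest-index rule (my sketch, not production code): it handles max c@x subject to Ax <= b, x >= 0 with b >= 0 so the slack basis is feasible, detects unboundedness, and relies on Bland's rule for termination; Phase I and numerical safeguards are omitted:

```python
def simplex_bland(c, A, b):
    """Maximize c@x s.t. A@x <= b, x >= 0, assuming b >= 0 (Phase II only)."""
    m, n = len(A), len(c)
    # Tableau rows [A | I | b]; objective row holds -c, so optimal when all >= 0.
    T = [list(map(float, A[i])) +
         [1.0 if j == i else 0.0 for j in range(m)] + [float(b[i])]
         for i in range(m)]
    z = [-float(ci) for ci in c] + [0.0] * (m + 1)
    basis = list(range(n, n + m))            # slacks start in the basis
    while True:
        # Bland's rule: entering variable = smallest improving index.
        enter = next((j for j in range(n + m) if z[j] < -1e-9), None)
        if enter is None:                    # reduced costs nonnegative: optimal
            x = [0.0] * n
            for i, bi in enumerate(basis):
                if bi < n:
                    x[bi] = T[i][-1]
            return x
        rows = [i for i in range(m) if T[i][enter] > 1e-9]
        if not rows:                         # entire column <= 0: unbounded
            raise ValueError("unbounded")
        # Ratio test; ties broken by smallest basis index (Bland again).
        leave = min(rows, key=lambda i: (T[i][-1] / T[i][enter], basis[i]))
        p = T[leave][enter]                  # pivot element
        T[leave] = [v / p for v in T[leave]]
        for i in range(m):
            if i != leave:
                f = T[i][enter]
                T[i] = [a - f * r for a, r in zip(T[i], T[leave])]
        f = z[enter]
        z = [a - f * r for a, r in zip(z, T[leave])]
        basis[leave] = enter

# Maximize 3x + 2y s.t. x + y <= 4, x + 3y <= 6: optimum at the vertex (4, 0).
print(simplex_bland([3, 2], [[1, 1], [1, 3]], [4, 6]))  # -> [4.0, 0.0]
```

Even in this toy form, every step (the reduced-cost test, the ratio test, the claim that Bland's rule terminates) is justified only by the prior math, which is exactly the point.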

Again, let's be more clear: Given an algorithm as above, that is, just code or pseudo-code, for the simplex algorithm with Bland's rule, solution of linear equations with double precision inner product accumulation and iterative improvement (Forsythe and Moler), detecting and handling degeneracy (basic variables with value 0), detecting infeasibility, detecting unboundedness, considering the reduced costs as a sufficient condition for optimality, the code will look like just so much gibberish with no good reason to take it seriously. To take the code seriously, we need some prior math where the algorithm just implements the manipulations specified by the math.

Yes, apparently computer science regards the simplex algorithm as an algorithm in computer science: E.g., the algorithm is discussed in Chapter 29 of

Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein, 'Introduction to Algorithms, Second Edition', The MIT Press, Cambridge, MA,

commonly called 'CLRS'.

E.g., recently I encountered a question: For some positive integer n, given n client computers and one frequency band with bandwidth x Mbps, design a wireless network to provide all x Mbps to all n client computers simultaneously. Is such possible? Yes. Does this violate Shannon's information theory? No. So, how to take any such proposal seriously? Sure, some math. Basically the solution is one heck of a strange 'antenna' pattern with n nodes with each client getting just the node with just their data. It all boils down just to linear algebra after taking some Fourier transforms.

So, again, given a fairly complicated problem, when trying to evaluate an algorithm to solve this problem where just 'inspection' no longer works, the question is, how can we take such an algorithm seriously?

Well, to take the algorithm seriously, we need more than just the algorithm, that is, more than just the code or pseudo-code.

As I mentioned, one way to take the algorithm seriously is a lot of empirical evidence. Alas, this can be slow and is not very satisfactory.

So, what is left? You mentioned nothing. And computer science has nothing.

I gave a solution: Start with the real problem. Get a solution via some applied math complete with theorems and proofs. That is, properties of the real problem provide assumptions for the theorems, and the conclusions of the theorems provide the desired solution. Then the software, that is, the 'algorithm', implements the data manipulations specified by the math. We take the math seriously because of the theorems and proofs. Then we take the algorithm seriously because, and only because, we took the math seriously.

Net, without the math, there is little reason to take the algorithm seriously. With the math, to evaluate the algorithm, we should evaluate the math and then confirm that the algorithm does what the math specifies.

So, for your

"Your knowledge of computer science is evidently shallow. Educate yourself and you might think twice before making such ignorant pronouncements."

What is relevant here is what I wrote; whether my "knowledge of computer science" is "shallow" or not is irrelevant.

I said nothing "ignorant"; you gave no solution to the question of how to take an algorithm seriously when 'inspection' is not sufficient; and I gave apparently the only good solution we have.


I hope you don't expect me to respond to every statement of yours in that mountain of text. This response of mine has already grown too long.

You don't have to convince me that mathematics is important--my background is in pure mathematics. I just don't think CS people are hapless fools who need applied math people like you from the outside to help them out. In my book, the non-systems part of CS is already a part of mathematics.

Since you bring it up, I've always considered numerical methods as firmly a part of applied mathematics. Combinatorial search and optimization problems are in the heartland of CS. The simplex method straddles numerical optimization and combinatorial optimization and is a bit of a hermaphrodite. Low-dimensional LPs are important in computational geometry and that field has produced some beautiful algorithms. Here's Seidel's randomized algorithm: Throw out a random hyperplane and solve the smaller problem recursively. If this optimum satisfies the excluded constraint then we are done. Otherwise the true optimum must lie on the hyperplane, so we can reduce the dimension of the problem by 1.

> Net, without the math, there is little reason to take the algorithm seriously.

An algorithm _is_ mathematics. The reason I said you seemed ignorant of computer science is this apparent belief that computer science is a random bunch of algorithms without any supporting theory.

> Well, to take the algorithm seriously, we need more than just the algorithm

Your premise is based on a strawman. How do you think algorithms are designed? They aren't generated randomly. They are generated based on exactly the kind of insight that generates theorems and proofs. There may not be tight proofs of every property we'd like to know for certain. That is certainly true for the simplex method! Its worst-case running time is exponential (e.g. Klee-Minty), and there is a small cottage industry devoted to proving its running time is polynomial in the average case (for various definitions of 'average case'). People have been using the simplex method very effectively since its inception despite knowing that its pathological behavior can be very bad indeed. Had you wanted to make your point more effectively, you should have picked interior point methods, which have the additional benefit of working for a much wider class of convex problems like SDPs.

> Claim: If z is bounded above on F, then z achieves its least upper bound. Note: The proof is not just usual analysis with converging sequences from Rudin's 'Principles'.

This has nothing to do with linearity or convexity. It's true for any continuous function f : X -> R on a closed subset F of a complete topological space X. If f is bounded above on F then its supremum on F is achieved at some x in X. By the supremum property and continuity of f, you can find a net in F which accumulates around x. This Cauchy net converges to x since X is complete. Because F is closed it contains its own accumulation points. Hence x is in F.

This proof is exactly at the level of Baby Rudin if you drop back the generality a bit and work with metric spaces.

> For us to take the simplex algorithm seriously, we need to know that this claim is true.

Nonsense. Any implementation of the simplex method is already going to introduce round-off error, so the theoretical difference between optimizing on a set and its closure has no practical consequences. If your point is that the achievement of the supremum ensures termination of the simplex method, that is false. The simplex method is hill climbing on a polytope's vertex graph. Whenever there is a local tie between neighboring vertices you need a tie-breaking pivoting rule. Bad pivoting rules (e.g. Dantzig's largest-coefficient rule) can lead to cycling and non-termination. In the absence of such local ties, all you need to know to prove termination is that the number of vertices is finite and that the objective function strictly improves each step. Even the proof of termination when pivoting with Bland's rule is purely combinatorial.

An example of something that's actually important in practice is how strong Lagrange duality supplies non-heuristic stopping criteria for convex programs via dual feasible points. That way we know what we're giving up by being suboptimal. Duality theory is also much richer and subtler than the kind of mindless definition-chasing freshman homework problem you brought up above.


> You don't have to convince me that mathematics is important--my background is in pure mathematics. I just don't think CS people are hapless fools who need applied math people like you from the outside to help them out. In my book, the non-systems part of CS is already a part of mathematics.

Would, could, and should be, but so far in practice in the universities and elsewhere very much is not.

> Combinatorial search and optimization problems are in the heartland of CS.

No: They are in the "heartland" of optimization, 'operations research', and applied math. The CS people aren't good enough with the theorems and proofs to make progress with the math. E.g., not nearly enough CS profs went through a careful theorem proving course in abstract algebra or through Rudin's 'Principles'.

> An algorithm _is_ mathematics.

Well, an algorithm is necessarily mathematically something, but by itself what it is mathematically is unknown and, thus, not in any very meaningful sense mathematics.

I gave a definition of an 'algorithm': Again, yet again, just look at the code or pseudo-code, and you don't have much. To take such code seriously, need some logically prior math, some actual math, as in a math department and in the tradition of von Neumann, Halmos, Rudin, Birkhoff, Bourbaki, etc. CS tries hard to avoid such math and, thus, is stuck in making progress in algorithms.

> Your premise is based on a strawman. How do you think algorithms are designed? They aren't generated randomly. They are generated based on exactly the kind of insight that generates theorems and proofs.

No: Big themes now in CS are to come up with algorithms by whatever means -- genetic, intuitive, heuristic, 'clustering', neural network fitting, 'rules', 'machine learning', 'artificial intelligence', etc. -- often with no "insight" at all and, in particular, and most significantly, with no reason to take the algorithm at all seriously.

> there is a small cottage industry devoted to proving its running time is polynomial in the average case (for various definitions of 'average case').

K. H. Borgwardt.

Empirically the running time on usual problems has long been known to be about 3m iterations for a problem with m constraints.

> Had you wanted to make your point more effectively, you should have picked interior point methods, which have the additional benefit of working for a much wider class of convex problems like SDPs.

You are missing my point: I'm using the problems in the top 10 list, optimization, digital filtering, etc. just as sources of examples of my point. Again, yet again, just for you, one more time, please actually read it this time, my point, already repeated over and over and over, and here repeated yet again, about algorithms, and having nothing to do with optimization, is:

"For reasonably complicated problems, the key is some applied math, and we take a corresponding algorithm seriously only because of the logically prior applied math."

Here I said "For reasonably complicated problems" but said nothing, zip, zilch, zero, about optimization or digital filtering. My claim holds for algorithms, all algorithms, ALL of them, for whatever purposes or problems, in the full generality of algorithms "for reasonably complicated problems".

Why? Again, yet again, with just an algorithm, all we have is just the code, and for any very complicated algorithm that means next to nothing down to nothing at all. This point is the same for simplex for linear programming as it is for interior point methods for achieving the Kuhn-Tucker conditions or for anything else complicated.

So, again, yet again, given the algorithm, just the code, how to take it "seriously"? There ain't but just two ways: First, can evaluate the algorithm just empirically by running it on a lot of data. This way was long ago adopted essentially in whole by the entire CS 'artificial intelligence' community. Second, can have something 'logically prior' that do take seriously because it has theorems and proofs. For this second way, CS is stuck-o because they are not good enough with theorems and proofs because they didn't take the right courses in the math department in grad school.

You find another way, another means, another 'paradigm', for taking an algorithm seriously, and I will jump for joy, but I'm not holding my breath until then.

> This has nothing to do with linearity or convexity. It's true for any continuous function f : X -> R on a closed subset F of a complete topological space X. If f is bounded above on F then its supremum on F is achieved at some x in X. By the supremum property and continuity of f, you can find a net in F which accumulates around x. This Cauchy net converges to x since X is complete. Because F is closed it contains its own accumulation points. Hence x is in F.

You typed too fast, way too fast. You are flatly, badly, easily, trivially, clearly wrong!

Counterexample: Let R denote the set of real numbers and let x, y be in R and x, y > 0. Then the problem is to find x to minimize y subject to y >= 1/x.

Then y is bounded below but does not achieve its greatest lower bound, which is 0. Done.
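A quick numerical probe of the counterexample (my illustration): for any feasible point, y >= 1/x > 0, so the infimum 0 is approached as x grows but never attained.

```python
# Feasible set { (x, y) : x, y > 0 and y >= 1/x } is closed but unbounded.
# The best attainable y for a given x is 1/x, which is always strictly positive.
best = min(1.0 / x for x in [10, 100, 1000, 10**6])
print(best)   # 1e-06 -- small and positive, never 0
assert best > 0
```

No finite sample of feasible points ever reaches the infimum, which is exactly the failure of attainment that compactness (or, for linear programs, linearity) is needed to rule out.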

Why? There is no compactness. In your argument, you omitted the assumption of compactness. And as we know for positive integer n, in Euclidean R^n, a subset is compact if and only if it is closed and bounded. Yes, I made As in the course in Rudin's 'Principles'!

The set I illustrated

{ (x, y) | x, y > 0 and y >= 1/x }

is closed in the usual topology for R^2 but not bounded.

So, your argument based on Rudin-style convergence does not establish my linear programming

Claim: If z is bounded above on F, then z achieves its least upper bound.

Since F is an intersection of finitely many closed half-spaces, F is closed. But F need not be bounded. Still, my claim holds even when the feasible region F is unbounded. So, proofs need to make some crucial use of linearity, that is, of the facts that the function to be maximized is linear and the half-spaces come from linear functions.

Beyond Bland's proof (which really came from some of his work in matroid theory), one can also establish the results for linear programming and the simplex algorithm by some careful use of 'separation theorems' or, if you will, theorems of the alternative. But, again, linearity is crucial.

We very much do need to know the properties I listed of the simplex algorithm and linear programming.

If you are concerned about floating point, then just do the arithmetic exactly. For this, consider M. Newman's approach to solving a system of linear equations: multiply through by suitable powers of 10 to convert all the data to integers, solve in exact single-precision machine arithmetic in the field of integers modulo each prime in a sufficiently large set of primes, and construct the multiple-precision numerators and denominators via the Chinese remainder theorem. Besides, in practice, the floating point issue is usually not very serious.
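The full Newman scheme (residues modulo many primes, recombined via the Chinese remainder theorem) is more machinery than fits in a comment. As a minimal sketch of the same "do the arithmetic exactly" idea, here is Gaussian elimination over Python's exact rationals; the function name and data are mine, purely illustrative:

```python
from fractions import Fraction

def solve_exact(A, b):
    """Gaussian elimination over the rationals: no floating point, no rounding."""
    n = len(A)
    # Build an augmented matrix of exact rationals.
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        # Find a row at or below 'col' with a nonzero pivot and swap it up.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate the column entries above and below the pivot.
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col] / M[col][col]
                M[r] = [a - factor * p for a, p in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# x + 2y = 5 and 3x + 4y = 6 give x = -4, y = 9/2, exactly.
print(solve_exact([[1, 2], [3, 4]], [5, 6]))
```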

What you meant about Lagrangians is not so clear, but, sure, there is some nonlinear duality theory where the dual of a nonlinear minimization problem is a maximization of a concave function which, then, can be approximated by supporting hyperplanes found from efforts with the primal problem. The proof is short and easy. To write out the proof, I'd want TeX but don't have that on HN.

Uses of this result are commonly called 'Lagrangian relaxation'. Once I had a 0-1 integer linear program with 40,000 constraints and 600,000 variables and got a feasible solution within 0.025% of optimality in 905 seconds on a 90 MHz computer from 500 iterations of Lagrangian relaxation. To maximize the approximations to the concave function, I used the old Watcom Fortran version of the IBM Optimization Subroutine Library (OSL).

This work with Lagrangian relaxation is a multidimensional generalization of H. Everett's single dimensional 'generalized Lagrange multipliers' that he used on DoD resource allocation problems at his Lambda Corporation (after he did his 'many worlds' version of quantum mechanics).
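To make the idea concrete, here is a hedged sketch of Lagrangian relaxation on a toy 0-1 knapsack; the data are made up and have nothing to do with the 40,000-constraint problem above:

```python
# Toy problem: maximize c.x subject to a.x <= b, x in {0,1}^n.
# Relaxing the constraint with multiplier lam >= 0 gives the bound
#   L(lam) = max_x [ c.x - lam*(a.x - b) ],
# separable in x, so the inner maximum is solved term by term.

def relaxed(c, a, b, lam):
    # Each x_i is 1 exactly when its relaxed profit c_i - lam*a_i is positive.
    x = [1 if ci - lam * ai > 0 else 0 for ci, ai in zip(c, a)]
    value = sum(ci * xi for ci, xi in zip(c, x)) \
        - lam * (sum(ai * xi for ai, xi in zip(a, x)) - b)
    return value, x

c, a, b = [10, 7, 4], [5, 4, 3], 8      # true optimum is 14, at x = (1, 0, 1)
lam, step = 0.0, 0.5
best_bound = float("inf")
for _ in range(100):
    value, x = relaxed(c, a, b, lam)
    best_bound = min(best_bound, value)             # weak duality: L(lam) >= optimum
    violation = sum(ai * xi for ai, xi in zip(a, x)) - b
    lam = max(0.0, lam + step * violation)          # subgradient step on lam
    step *= 0.95
print(best_bound)   # an upper bound on the true optimum 14
```

Each L(lam) is an upper bound on the optimum by weak duality; the subgradient steps tighten the bound without ever solving the hard integer problem directly.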

Again, such an example neither adds to nor detracts from my basic point: we have no good way to take an algorithm by itself seriously, and, thus, for an algorithm to be taken seriously we need some logically prior work that we do take seriously, which is forced to be math with theorems and proofs, work which the algorithm merely implements.

Again, find another way to take an algorithm seriously and I will jump for joy. So, in particular, as long as the CS people pursue algorithms without careful, powerful, logically prior work with math theorems and proofs, they will be stuck-o for progress. Sorry 'bout that.


> So, of the 10 algorithms, at least three are closely related to just matrix theory. Amazing.

"Mathematics is the art of reducing any problem to linear algebra." - William Stein


I hope most of this post is wildly wrong. To decide is for now heavily a judgment call. However, it may be crucial that we decide quite soon. In case of doubt, for something like domain name seizures, there is a two word solution -- 'due process'. More generally there is a one word solution -- 'vote'.

Yes, a good candidate for the most important problem facing the US and civilization is the citizens having too little information to monitor their government as well as is crucial. The good news is that we are now at the beginnings of by far the best solution so far in history -- the free and open Internet.

We're beginning to understand: for the most important players and purposes, PIPA and SOPA were actually not about 'protecting content'. Instead, PIPA and SOPA were to be new laws that could be selectively enforced to build a political machine and get power. So, if you make your campaign contributions "on time", then you are free to do business. Else, the FBI may knock down your door, trash your offices, and take your computers.

PIPA and SOPA are just small potatoes: So, make your campaign contributions on time, and continue to operate your coal fired electric generating plant. Else the EPA may shut you down. In recent years coal has been the source of about 49% of US electric power. So, the EPA is a means of a shakedown of essentially all of the US energy industry.

Look, to the important players and purposes, reducing CO2 emissions to stop 'global warming' is just an excuse to execute a shakedown and build a political machine and get power. Those players care about 'climate change', 'global warming', 'rising sea levels', 'more frequent hurricanes', etc. less than a spit to windward.

We've already seen the shakedown of the Internet. The "National Broadband Plan" would give more such power over the Internet for more shakedowns and power.

Having the DHS run 'Internet security' would be another case -- more opportunities for shakedowns, building a political machine, and getting power.

Then there's the takeover and shakedown opportunities of 17% of the US economy, i.e., all of US health care. Believe me, to the important players and purposes, health care is just an excuse.

Then with control over all of US health care, essentially dictatorial control directly by appointed bureaucrats in the Executive Branch, unionize the 21 million health care workers and have them as a source of 'Brown Shirts'. Did I mention, it's not about health care. Instead it's about building a political machine and getting power.

Then there's the same for transportation, i.e., play nice and get help with your hybrid electric car project to try to satisfy the 50 MPG standard; not play nice and go broke.

Also, if you go broke but play nice, then you get a bailout like GM did and, presto, the US Treasury holds some of your preferred stock with someone appointed by the Executive Branch on your Board. Play nice and some 'stimulus' money can go to local governments to buy your cars, and now you have money enough to pay your unions what they want. Don't play nice and you're out'a business.

Besides, another proposal is much more in passenger trains instead of private cars and, thus, more Federal Government control and more opportunities for shakedowns, political machine building, and power.

Another proposal is to do for all of US manufacturing what was done for GM -- under the control of the Executive Branch.

Look, the real objectives are not to do good things about energy, health care, finance, manufacturing, communications, transportation, the environment, or the economy, all of which are being used just as excuses. Instead, there are other objectives. And the step now is toward just one word, power, based on essentially a political machine based on new laws, regulations and, then, selective enforcement, shakedowns, payoffs, and kickbacks.

Then for the real objectives, that subject needs a revolution, and a standard prerequisite is a 'rotten door' so that the revolution is kicking in the rotten door. So, on the way to revolution, work to make the door rotten. So, the goal for now is not to make things better but to make them worse.

Then with the power of a political machine and a rotten door, kick in the rotten door and have the revolution.

What will be the goals of the revolution? We have to guess, but we have a lot of hints.

Look, guys, the risk is not just to your domain name.

But there is a solution: Become informed and, then, just one word. May I have the envelope, please (drum roll): Yes, here it is, "vote".


But, but, but, just think of the progress! How much progress we've made! I mean, in Germany in the 1930s, when one newspaper wanted another newspaper shut down, the solution was Brown Shirts with clubs! Since the police leadership was all Nazi, the police looked away.

Crude! SO crude! Now we have progress! A MUCH better way!!!

Just call your local campaign bundler, pay the price, and then get fast, effective action via the DoJ, DHS, FBI, etc.! Progress!!!


The answer is simple, dirt simple: If you have to ask the question, then DON'T!

That is, we factor algebraic expressions if and only if (iff) we have a good reason to do so. If we don't have a good reason to factor, then there is no need to bother.

Yes, in high school, 'factoring' is seen as an important algebraic manipulation. It is. Then high school continues on and wants to factor whenever possible and for no reason other than it is possible. This is dumb.

Also, commonly there is more than one way to factor. Then high school gets all in a tizzy over which way is 'best'. Nonsense. Again, we factor for a reason we have in mind, and of several possible ways to factor we select the one for the reason we have in mind. Simple.
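For instance (a toy example of a 'reason we have in mind'): factoring a quadratic to read off its roots. A small Python check:

```python
# One concrete reason to factor: x**2 - 5*x + 6 == (x - 2)*(x - 3),
# and a product is zero exactly when some factor is zero, so the
# roots 2 and 3 are read off the factors rather than computed.
def p(x):
    return x * x - 5 * x + 6

def p_factored(x):
    return (x - 2) * (x - 3)

# The two forms agree everywhere; the factored form exposes the roots.
assert all(p(x) == p_factored(x) for x in range(-10, 11))
assert p(2) == 0 and p(3) == 0
```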

We factor when we have a reason to do so. Otherwise, f'get about it! High school teachers: Understand that now?

My authority: I hold a Ph.D. in the applied math of stochastic optimal control. I've taught math in college and graduate school. I've published peer-reviewed original research in applied math and mathematical statistics.


The obvious counterpoint here is that we're trying to teach skills before they're needed. After all, it would suck to have to rediscover Calculus on your own in the middle of your Physics II exam just because you didn't see a point to it at the time.


Yes, and that's the way I learned in high school. At the time it appeared that we factored to achieve some 'artistic' goals of making the algebraic expressions 'look nice'. When later I concluded that we factored for some serious purposes and that the artistic goals of looking nice were silly, I resented some of what I had been taught.

But the question on this thread is appropriate: "Why" do we factor? Sure, the reason in some of high school is just to learn how to factor so that we will be able to when we need to, say, when working with integration by parts in calculus. But likely this thread and the students want a reason more substantial than just learning for later. So, my answer was (beyond just learning) to factor when there is a good reason and otherwise just f'get about it, and basically that's the correct answer.


>My authority: I hold a Ph.D. in the applied math of stochastic optimal control. I've taught math in college and graduate school. I've published peer-reviewed original research in applied math and mathematical statistics.

With all those one would expect better reasoning and/or better wording.


No, my reasoning and wording are fine: Instead, "Things should be as simple as possible and not simpler"!


Let's see:

(1) See a market opportunity in yachts 55 feet long. Need to hire a yacht designer to get the engines, hull shape, hull construction, safety, other engineering details right and supervise the construction including selecting the people for the interior design and finishing the interior. Want (A) someone who has done such work with high success for two dozen yachts from length 30 feet to 150 feet or (B) someone with the potential?

(2) Have a small but rapidly growing Web site and need to hire someone to get the server farm going for scaling the site. They need to design the hardware and software architecture, select the means of system real time instrumentation, monitoring, and management, work with the software team to make needed changes in the software, design the means of reliability, performance, and security, get the backup and recovery going, design the server farm bridge and the internal network operations center (NOC), write the job descriptions for the staff, select and train the staff, etc. Now, want someone who has recently "been there, done that, gotten the T-shirt" or someone with the 'potential' of doing that?

(3) Need heart bypass surgery. Now, want someone who has done an average of eight heart bypass operations a week for the past two years with no patient deaths or repeat operations or someone with that 'potential'?

(4) Similarly for putting a new roof on a house, fixing a bad problem with the plumbing, installing a new furnace and hot water heater, installing a high end HVAC system, etc.?

War Story: My wife and I were in graduate school getting our Ph.D. degrees and ran out of money. I took a part time job in applied math and computing on some US DoD problems -- hush, hush stuff. We had two Fortran programmers using IBM's MVS TSO, and in the past 12 months they had spent $80 K. We wanted to save money and also do much more computing. We went shopping and bought a $120 K Prime (really, essentially a baby Multics).

Soon I inherited the system and ran it in addition to programming it, doing applied math, etc. Soon after I got my Ph.D., I was a prof in a B-school. They had an MVS system with punched cards and a new MBA program, and they wanted better computing for the MBA program. I wanted TeX or at least something to drive a daisy wheel printer. Bummer.

At a faculty meeting the college computing committee gave a sad report on options for better computing. I stood and said: "Why don't we get a machine such as can be had for about $5000 a month, put it in a room in the basement, and do it ourselves?" Soon the operational Dean wanted more info, and I led a one-person selection committee. I looked at a DG as in 'The Soul of a New Machine', a DEC VAX-11/780, and a Prime.

The long-sitting head of the central university computer center went to the Dean and said that my proposal would not work. I got a sudden call to come to the Dean's office and met the critic. I happened to bring a cubic foot or so of technical papers related to my computer shopping. I'd specified enough ordinary, inexpensive 'comfort' A/C to handle the heat, but the critic claimed that the hard disk drives needed tight temperature and humidity control or would fail. I said: "These disk drives are sold by Prime, but they are actually manufactured by Control Data. I happen to have with me the official engineering specifications for these drives directly from Control Data." So I read them the temperature and humidity specifications, which we could easily meet. The critic still claimed the disks would fail. Then I explained that at my earlier site we had no A/C at all. By summer the room got too warm for humans, so we put an electric fan in the doorway. Later we had an A/C evaporator hung from the ceiling. It worked fine for three years. The Dean sided with me.

In the end we got a Prime. What we got was a near exact copy of what I had run in grad school, down to the terminals and the Belden general purpose 5 conductor signal cable used to connect the terminals at 9600 bps. The system became the world site for TeX on Prime, lasted 15 years, and was a great success. The system was running one year after that faculty meeting. I was made Chair of the college computer committee.

That faculty meeting had been only two weeks after I had arrived on campus. There was one big, huge reason my planning was accepted: I'd been there, done that, and gotten the T-shirt. That is, in contradiction to the article, what mattered was actual, prior accomplishment, not 'potential'.

Why the industrial psychological researchers came to their conclusions I don't know, but I don't believe their conclusions.


You're talking more about one-time jobs; the article focuses on hiring people for long term employment or in places where a bad experience is only mildly annoying (restaurants, comedians). In both cases, the risk of a bad decision isn't as high as the (perceived) possible reward from a good one.

When you're hiring a plumber, the best possible outcome is not very different from an average outcome with an average plumber, and the worst is significantly worse.


That is the key - with low downside, you can afford to take a risk on the untried "potential" - payoff could be huge.

With high downside, you want to minimise that risk.

So, no brain tricks, just risk/reward behaviour


Nicely condensed


The claim in the article is tough to swallow and seems to have been written to get attention.

I can't believe the claim as stated, but there might be a way to restate their claim and get something believable in some narrow cases.

First, however, about your "one-time jobs": I was appointed Chair of the college computing committee and held that position until I left the college after five years. I was also appointed to other computing committees in the university. And I gave a graduate course on computer system selection and management. So my role was 'long-term' and not just 'one-time'.

Indeed, my next job was at Yorktown Heights in using artificial intelligence for monitoring and management of server farms and networks, and likely that position was based partly on my success in computer system management. So, I pursued that work long-term and not just "one-time", and at each step that I was given the responsibility was heavily from my accomplishments and not just my potential.

If you are a yacht company, you may want to build yachts in several sizes, and then you will likely want to hire a yacht designer for the long term and will, again, want someone with accomplishments and not just potential. Believe me, you don't want to build a yacht and discover that the performance is poor because the engineering was wrong. Once I saw that happen; it was a sad story.

If you are running a hospital and want a heart surgeon, no doubt for the long term, again you want to hire based on accomplishments and not just potential.

If you are running an HVAC company and need a technical leader for the long-term, then again you want someone with a lot of relevant accomplishments and not just someone with potential.

I don't see the issue as short versus long term, but I do see an issue, so let's move on to that:

E.g., in my computer selection, there was an accusation that I was just selecting again what I had done before and, thus, was possibly not getting the best selection for the time and for the college. Interesting claim. Actually my first recommendation had been for the Data General system as in the book 'The Soul of a New Machine'. When that system was just too expensive, I fell back to the Prime system. The main competitor was the DEC VAX already quite popular on campus and with some good advantages in applications software for the physical science departments. But for the B-school, those application software advantages didn't apply, and the Prime was easier to manage and use and provided much more computing per dollar. So, the Prime was a good choice.

So, here was the fear of going with a person with accomplishments: Such a person may just redo what they did before and not really make the best decision for the new time and place. I'm not saying that people should or usually do have this fear, just that they may have this fear in some cases anyway.

So, more generally, suppose the need is for a lot of creativity and originality, but past accomplishments, knowledge, and experience are not very important. E.g., maybe you are hiring a graphic artist for, say, the box for a new consumer product to be sold from shelves in retail stores. So, if you hire someone with "accomplishments", say, someone who has just designed a successful box, then you may fear that the box they design for you will be too much like that last box and not original enough. But that explanation is not the claim of the article.

The claim of the article is stark: for doing essentially any task X, people prefer to hire a person with the potential of doing X instead of someone who has actually done X. This claim doesn't pass the giggle test.

It looks like we are entering the campaign season of the US presidential election!


In practice many people tend to overvalue potential, regardless of what the logical outcome would be. For example, investors tend to overprice stocks based on potential growth. This can be seen in the fact that high P/E stocks typically underperform low P/E stocks when considering risk-adjusted returns.


Not sure how much it relates to the article, but I enjoyed the war story. Good stuff.

There's a lot of garbage in online HBR in recent years, mainly on their blogs, as they grovel for traffic like any other site using whatever means possible. I am rarely tempted even to read HBR links that get posted here.

War stories are much more interesting!


No, no, no, no!!!!!

You left out!!!!

You have the market, the business idea, the technology, the software, the team, and the beta users all in good shape but you didn't tell a good 'story'!!!!!!

You failed at 'story telling'!!

"Once upon a time, there was ... ". Now please send the check!


In one word, classical.

In two words, classical instrumental.

In more words: violin, cello, piano, orchestra, and voice not in English, usually Italian.

Examples:

     Rachmaninoff
     Rhapsody on a Theme of Paganini
     Van Cliburn
     Eugene Ormandy
     Philadelphia Orchestra

     Antonin Dvorak,
     Second Movement,
     Adagio, ma non troppo,
     Concerto for Cello and Orchestra,
     Mstislav Rostropovich, cello
     Herbert von Karajan
     Berliner Philharmoniker

     Beethoven
     Piano Concerto 5
     Van Cliburn
     Fritz Reiner
     Chicago Symphony Orchestra

     Adagio un poco mosso

     Rondo Allegro

     Max Bruch
     Scottish Fantasy
     Andante Sostenuto
     Jascha Heifetz
     Malcolm Sargent
     New Symphony Orchestra of London

     Peter Tchaikovsky
     Variations on a Rococo Theme for Cello
     and Orchestra
     Mstislav Rostropovich, cello
     Herbert von Karajan
     Berliner Philharmoniker

     Chopin
     Etude in E
     Van Cliburn

     Puccini
     Gianni Schicchi
     O mio babbino caro
     Kiri Te Kanawa

     Beethoven
     Violin Romance Number 2
     David Oistrakh
     Eugene Goossens
     Royal Philharmonic Orchestra

     Chopin
     Fantaisie-Impromptu
     in C Sharp Minor
     Van Cliburn

     Brahms
     Concerto for Violin and Cello in A minor
     Andante
     Heifetz
     Piatigorsky
     Alfred Wallenstein
     NBC Symphony Orchestra

     Beethoven
     Violin Sonata Number 9
     "Kreutzer"
     Variation II, Andante
     Jascha Heifetz
     Brooks Smith

     Beethoven
     Violin Sonata Number 5
     "Spring"
     Scherzo, Allegro molto
     Jascha Heifetz
     Emanuel Bay

     Beethoven
     Violin Sonata Number 5
     "Spring"
     Trio
     Jascha Heifetz
     Emanuel Bay

     Bach
     Cello Suite 1 in G major
     Prelude
     Rostropovich

     Delibes
     Coppelia
     Richard Bonynge
     National Philharmonic Orchestra
     Act 1
     Prelude et mazurka


The lesson is very old: "Typing is no substitute for thinking." from Kemeny and Kurtz in their book on Basic.

