Wouldn't that be nice. Now consider the case of a pharmaceutical company that has to spend hundreds of millions of dollars to develop a new drug and get it past the FDA. If there were no IP protection, would you, as the CEO of that company, spend that kind of money on innovation when the resulting drug could be copied a week after it appears on the shelves? In that case you would be competing on product all right - exactly the same product. But the copier has no R&D costs to recoup and can undercut you and put your offering out of business.
The issue is often framed as all patents being bad, or all software patents being bad, but to anyone who gives the matter more than the kind of superficial thought that stems from ideology, the reality is rather more complicated.
Pharmaceutical and other physical patents make sense and appear to work. But arguing that software companies won't innovate without patents ignores the fact that software companies did innovate without patents, for years. And software patents obviously haven't been working: see Microsoft's patent on "if not", among other things.
You are absolutely correct: there are differences between the two industries. The two key ones would seem to be that (a) the investment in time and money required to create software, and its likely lifetime value, are low compared with a drug, and (b) the ease or difficulty with which software may be copied varies a great deal. Compiled code is harder to reverse-engineer than a drug. Server-side code held as a trade secret, such as Google's search algorithms, is hard to recreate. So in these cases it is quite true that there has been innovation (setting aside the fact that Google did have some IP protection). But there are cases I can think of where it would be downright foolish for a software company, particularly a small one, to invest substantially without protection. The key characteristic is that the code cannot be kept secret and can be readily read and thereby copied. The point is that it really isn't very helpful to generalize too broadly about the software industry.
Some of these system design and algorithm interviews are just overrated. You are literally testing how similarly the candidate thinks to you, which is sometimes not the right way to conduct an interview.
How many people remember the pseudocode for quicksort right away? A true engineering interview should test actual coding, the candidate's ability to QA his/her own code, and his/her ability to explain it clearly.
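For reference, a minimal quicksort sketch in Python (the simple, not-in-place variant; one of several acceptable answers, not the canonical one), with the kind of self-check I would actually want a candidate to write:

    # Minimal quicksort -- simple, not-in-place variant.
    def quicksort(items):
        if len(items) <= 1:
            return items
        pivot = items[len(items) // 2]
        less = [x for x in items if x < pivot]
        equal = [x for x in items if x == pivot]
        greater = [x for x in items if x > pivot]
        return quicksort(less) + equal + quicksort(greater)

    # The "QA your own code" part: check against a trusted reference.
    import random
    data = [random.randint(0, 100) for _ in range(50)]
    assert quicksort(data) == sorted(data)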
There are some cloud providers where you can allocate dedicated servers with virtualization on top. That way you can manage exactly what runs on each instance, while still having the flexibility to allocate more server instances quickly to handle growth.
The problem with economic models, or most modeling, is not the methods. It's usually the quality of the features or parameters. Even in a much simpler problem, no matter how good the methods are, if you don't have the right parameters your model will suck. And economic models deal with an open-world system with ever-changing parameters, so the challenge is not the methods but how to discover quality parameters/features. That requires not just the skills of modelers but many other disciplines.
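A toy sketch of that point (made-up data, nothing to do with any real economic model): the same least-squares method fitted once with an informative feature and once with an irrelevant one.

    # Same method, different feature quality -- only the feature changes.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    good = rng.normal(size=n)            # feature that actually drives y
    junk = rng.normal(size=n)            # feature unrelated to y
    y = 3.0 * good + rng.normal(size=n)  # the system being modeled

    def r_squared(feature, target):
        # Fit target ~ a*feature + b by ordinary least squares.
        X = np.column_stack([feature, np.ones_like(feature)])
        coef, *_ = np.linalg.lstsq(X, target, rcond=None)
        resid = target - X @ coef
        return 1 - resid.var() / target.var()

    print("R^2, informative feature:", round(r_squared(good, y), 3))  # ~0.9
    print("R^2, junk feature:       ", round(r_squared(junk, y), 3))  # ~0.0

The method is identical in both fits; only the feature quality differs, and that is what decides whether the model is any good.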
The core libraries available in R are some of the most well-reviewed, carefully written, and correct code available.
There is a huge number of additional libraries (thousands!) of variable quality, thanks to the open nature of the project. But commercial software has problems too, especially with new and niche products. And when something goes wrong in those cases, you can't see why for yourself. Worse, independent experts don't have the chance to, either.
He is probably comparing R to SAS (the two most popular statistical programming languages). SAS doesn't really have libraries; instead you buy additional packages from SAS, which are very reliable and well supported, but expensive.
My company shuns R (although I personally like it), primarily because of this issue. If we need to run a rare or uncommon statistical procedure, it is a lot easier to trust the SAS procedure than an open-source R package written by some grad student.
True, though if you need to run a rare or uncommon statistical procedure, SAS is not likely to have it in the core either, and then you are back to using what "some grad student wrote".
I am shunning SciPy, and to a lesser extent NumPy, for the same reason. I have reason to believe the developers are not experts in numerical linear algebra, and some of the documentation does not inspire confidence either.
Yes, but for less-adopted or emerging platforms you have to be more conscious of where a library comes from, and you should look at the source to verify its functionality.
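Beyond reading the source, cheap sanity checks against known mathematical identities help build (or erode) confidence. A minimal sketch with NumPy:

    # Sanity-check a linear solver against the identity A @ x == b.
    # Random Gaussian matrices are well-conditioned with high probability.
    import numpy as np

    rng = np.random.default_rng(42)
    A = rng.normal(size=(200, 200))
    b = rng.normal(size=200)

    x = np.linalg.solve(A, b)
    rel_residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
    print(f"relative residual: {rel_residual:.2e}")  # expect near machine epsilon
    assert rel_residual < 1e-8

It proves nothing by itself, but a routine that fails a check like this is disqualified immediately, and that is cheap insurance.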
Not sure why you would recommend SSL from day one. If the API needs to sustain high throughput and low response times, adding SSL means adding overhead.
Avoiding SSL on the grounds of its overhead is premature optimization. Unless profiling reveals that the SSL overhead introduces significant delays (and assuming the cost of a proper SSL certificate is affordable), there's no reason to go without SSL.
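A minimal sketch of such profiling, using only the Python standard library (example.com stands in for your own API host; real numbers depend on network distance and keep-alive settings):

    # Compare plain TCP connect time with TCP connect + TLS handshake time.
    import socket, ssl, time

    HOST, RUNS = "example.com", 10

    def tcp_connect():
        t0 = time.perf_counter()
        with socket.create_connection((HOST, 80), timeout=5):
            pass
        return time.perf_counter() - t0

    def tls_connect():
        ctx = ssl.create_default_context()
        t0 = time.perf_counter()
        with socket.create_connection((HOST, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST):
                pass  # handshake completes during wrap
        return time.perf_counter() - t0

    tcp = sum(tcp_connect() for _ in range(RUNS)) / RUNS
    tls = sum(tls_connect() for _ in range(RUNS)) / RUNS
    print(f"avg TCP: {tcp*1000:.1f} ms, avg TCP+TLS: {tls*1000:.1f} ms")

With persistent connections, the handshake cost is paid once per connection rather than per request, which is often what makes the overhead negligible in practice.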
It all depends on the use case, which is why I don't think an API must be secured with SSL from day one. Here is an example of how SSL can be unnecessary: if I am processing billions of ad requests daily with response times of a few milliseconds, and operating in a secured environment, why would I add a layer of SSL to every request?
Use cases drive requirements, not buzzwords, and that's my whole point.
I am curious where the "group" in Groupon is these days. Groupon is now more like a coupon company than the group-buying phenomenon that made it successful.
There is no reason to love or hate a language. Every language has its strengths and weaknesses. C/C++ has a performance advantage but may require more skill and training. PHP and Python are simple and fun, but you can't really use PHP to write a search index. Java is somewhere in between; the beauty of Java is its proclaimed "write once, run anywhere" (or at least "write once, test everywhere"), which allowed it to build up a large developer community very quickly. But portability is also Java's downside, and it is why many platform vendors, such as IBM, have had to customize Java for speed.
Google captures your intent; Facebook captures your social graph. It's not that Google and Facebook do it wrong; they just each represent part of us. A service that tries to generalize to the whole embodiment of human interaction, when it only captures a piece of what we do, is not going to work. I think it's a human tendency to have multiple identities on the web depending on context.
Funny to see how the iPhone SDK has already dictated how smartphone apps should look, just as MS did to us with Visual Studio. I know the SDK developers want to make programming on their respective platforms easier with this built-in look and feel, but as a mobile developer, do I want my app to look just like everyone else's?