I've been getting along fine with nothing but util.inherits from NodeJS.
Are there any real advantages to the "set the prototype to this object" approach versus building it up by assignment?
var util = require('util');

function Animal(legs) {
  this.legs = legs;
}
Animal.prototype.speed = function() {
  return this.legs * 10;
};

util.inherits(Dog, Animal);

function Dog(name) {
  this.constructor.super_.call(this, 4); // i.e. Animal.call(this, 4)
  this.name = name;
}
Dog.prototype.speed = function() {
  // I don't disagree that more sugar here would be good
  return this.constructor.super_.prototype.speed.call(this) * 2;
};
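With the snippet above, instances behave as you'd expect:

var rex = new Dog('Rex');
rex.legs;    // 4, set by Animal via super_
rex.speed(); // 80: Animal's 4 * 10, doubled by Dog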
Beyond the need for calling the constructor (which I currently view as an unnecessary hindrance, since objects are already initialized), Object.getPrototypeOf may provide a way out - but maybe not the way you intended. Have you considered it?
It works, but it's even more verbose. However, if you use Object.getPrototypeOf on this, you run into the recursion problem in nested super calls. Read the Stack Overflow question.
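A minimal sketch of that recursion problem (Animal/Dog/Puppy are illustrative names, not from the question): a method that finds its "super" via Object.getPrototypeOf(this) resolves relative to the instance rather than the class that defined it, so one more level of inheritance makes it call itself.

function Animal() {}
Animal.prototype.speed = function() {
  return 10;
};

function Dog() {}
Dog.prototype = Object.create(Animal.prototype);
Dog.prototype.speed = function() {
  // Intended: skip Dog.prototype and call Animal's speed.
  return Object.getPrototypeOf(Object.getPrototypeOf(this)).speed.call(this) * 2;
};

function Puppy() {}
Puppy.prototype = Object.create(Dog.prototype);

new Dog().speed();   // 20, as intended
// For a Puppy, getPrototypeOf(getPrototypeOf(this)) is Dog.prototype,
// so Dog's speed ends up calling itself:
new Puppy().speed(); // RangeError: Maximum call stack size exceeded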
I was deliberately excluding the constructor situation. I should have made that clearer in my previous comment. I think the way out of the constructor mess is not to require them at all.
I do think the Object.getPrototypeOf approach is feasible for methods.
That's the whole point: most requests take roughly the same order of magnitude of time.
If you have two sets of requests whose running times differ by orders of magnitude, put those two sets on their own Node worker processes behind your load balancer.
As long as each request goes to the worker process that handles requests of a similar order of magnitude, you will never have fast requests stuck behind slow ones.
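A minimal sketch of that split, assuming hypothetical worker ports and a /report prefix as the known-slow route (none of these names come from the comment):

var http = require('http');

var FAST_PORT = 9001; // hypothetical worker for cheap requests
var SLOW_PORT = 9002; // hypothetical worker for expensive requests

// Front proxy: route by expected cost so fast requests never
// queue behind slow ones in the same event loop.
http.createServer(function(req, res) {
  var port = /^\/report/.test(req.url) ? SLOW_PORT : FAST_PORT;
  var proxied = http.request({
    host: '127.0.0.1',
    port: port,
    method: req.method,
    path: req.url,
    headers: req.headers
  }, function(upstream) {
    res.writeHead(upstream.statusCode, upstream.headers);
    upstream.pipe(res);
  });
  req.pipe(proxied);
}).listen(8080);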
$ ab -n 30000 -c 300 http://127.0.0.1:8080/
Concurrency Level: 300
Time taken for tests: 4.908 seconds
Complete requests: 30000
Requests per second: 6112.65 [#/sec] (mean)
Node (January 2011)
$ ab -n 30000 -c 300 http://127.0.0.1:8124/
Concurrency Level: 300
Time taken for tests: 8.140 seconds
Complete requests: 30000
Requests per second: 3685.69 [#/sec] (mean)
I totally agree with you, but I just want to draw your attention to the phrase 'the speed of C'. You almost make it seem like a global constant, like the real c-for-celeritas speed of light: some unobtainable, blazingly fast mirage accessible only to quantum physicists and Unix greybeards. But we don't need a particle accelerator to beat the performance of C, just better JITters. There's not really anything in the language stopping that performance from being reached (okay, for JS there's the type system).
When I say 'the speed of C', I'm really comparing various language X compilers to GCC, under the assumption that GCC is the best.
No, it's not a magical constant, but I think 'how close compiler X gets to GCC' is a fair baseline comparison.
I also doubt V8 or SpiderMonkey can get better than GCC _on average_.
Thanks for the replies. I wasn't aware that JS was that much slower than Java, and I also wasn't aware that Java had closed the gap so significantly vs C.
It's a more worthwhile, future-facing project.