That may be partially true[1], but I'd challenge you to re-implement this non-optimized case (i.e., a rough webapp with no caching, etc.) in another runtime on similar hardware and see:
1) Whether it can support anywhere near the same level of concurrent requests with similar response times.
2) How much complexity (nginx + unicorn + memcached + puppies) is required to achieve this, compared to a servlet engine (e.g., Tomcat) plus your webapp.
[1] Most web applications are read-heavy and low on writes, and scaling up write capability generally requires scaling up the database. You can grow quite a bit with simple caching and monolithic database scaling before having to tackle more complex distributed data architectures.
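To make the "simple caching" point concrete, here's a minimal sketch of a read-through cache in Java (to match the servlet-engine comparison above). The class and loader are hypothetical and use an in-process ConcurrentHashMap where a real deployment would typically use memcached, but the shape of the idea is the same: serve repeated reads from memory, hit the database only on a miss, and invalidate on the rare write.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical illustration of read-through caching; an in-process map
// stands in for memcached to keep the example self-contained.
public class ReadThroughCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader; // e.g. a slow database lookup

    public ReadThroughCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        // Loads from the backing store only on a cache miss.
        return cache.computeIfAbsent(key, loader);
    }

    public void invalidate(K key) {
        // Call this on the (infrequent) write path so readers see fresh data.
        cache.remove(key);
    }

    public static void main(String[] args) {
        // Hypothetical loader standing in for a database query.
        ReadThroughCache<Integer, String> articles =
                new ReadThroughCache<>(id -> "article body for id " + id);

        System.out.println(articles.get(42)); // miss: hits the "database"
        System.out.println(articles.get(42)); // hit: served from memory
    }
}
```

Since reads dominate, almost all traffic is absorbed by the cache layer, which is why the single database can carry you a long way before any distributed data work is needed.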