
Contrived benchmarks are not useful, especially when you have tiny datasets.

If you want to do anything interesting, Python and Ruby are slow as hell, so you end up unable to do anything interesting in them.

For example, in Go you can load, say, 1000 rows from the DB and perform some data manipulation on them in code to get a desired result; you cannot do this in Python because it will be very, very slow.

So what you do instead is write complicated SQL queries and essentially offload all your work from the application server(s) onto the database server.

Now imagine that these rows in the database don't actually change very often. You could just load them once, keep them in memory (in a global object), and only update them once in a while (when needed). You can then do whatever search/manipulation operations you need directly on data that is readily available, and always respond very quickly.

This would be _unthinkable_ if you are using Ruby or Python, so instead you keep hammering your database with the same query, over and over and over again.




Simple benchmarks are a useful yardstick. I recently wrote a service in Rust/Iron which only has 4.7x the throughput of the same Ruby/Rails service. That was rather disappointing considering how much more effort a lower-level language requires.

Is Python/Django performance significantly worse than Ruby/Rails? The situations you describe are things I do every day in Ruby. Getting 1000 rows from the DB and performing some operation only takes a couple of milliseconds in Ruby.
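To put a rough number on that claim, here's a minimal, self-contained Ruby sketch. The rows and column names (`:id`, `:category`, `:price`) are invented for illustration, standing in for what a DB adapter would return; it times a typical filter/group/aggregate pass over 1000 rows:

```ruby
require "benchmark"

# 1000 fake "rows", shaped like what a DB adapter might hand back.
rows = Array.new(1000) do |i|
  { id: i, category: "cat#{i % 10}", price: (i % 500) + 1 }
end

totals = nil
elapsed = Benchmark.realtime do
  # A typical in-app manipulation pass: filter, group, aggregate.
  totals = rows
    .select { |r| r[:price] > 100 }
    .group_by { |r| r[:category] }
    .transform_values { |group| group.sum { |r| r[:price] } }
end

puts format("processed %d rows in %.3f ms", rows.size, elapsed * 1000)
```

On any recent MRI this comes in well under a millisecond per pass, which is why the round trip to the DB, not the Ruby code, tends to dominate.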

Ruby/Python are meant to be glue, and you can most certainly use them to glue together "interesting things", like image processing or audio processing in a web layer.

Memory-caching things that rarely change, are accessed often, but are ultimately persisted in a DB (like exchange rates) in a global object is exactly what you do in Rails. There's a specific helper for it. http://api.rubyonrails.org/classes/ActiveSupport/Cache/Memor...
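For anyone who hasn't seen the pattern, here's a minimal plain-Ruby sketch of what MemoryStore-style caching amounts to: a process-global object holding the data, refreshed only once a TTL expires. Everything here (`RateCache`, the fake DB lambda) is invented for illustration; in Rails you'd reach for `Rails.cache.fetch` with `:expires_in` instead.

```ruby
# A process-global cache with a TTL, standing in for
# ActiveSupport::Cache::MemoryStore.
class RateCache
  def initialize(ttl_seconds)
    @ttl = ttl_seconds
    @value = nil
    @fetched_at = nil
    @lock = Mutex.new
  end

  # Returns the cached value, refreshing it via the block when stale.
  def fetch
    @lock.synchronize do
      if @value.nil? || Time.now - @fetched_at > @ttl
        @value = yield
        @fetched_at = Time.now
      end
      @value
    end
  end
end

CACHE = RateCache.new(300) # refresh at most every 5 minutes

calls = 0
fetch_rates_from_db = -> { calls += 1; { "USD" => 1.0, "EUR" => 0.92 } }

rates1 = CACHE.fetch { fetch_rates_from_db.call }
rates2 = CACHE.fetch { fetch_rates_from_db.call } # served from memory
puts "db calls: #{calls}" # => db calls: 1
```

The second `fetch` never touches the "database", which is the whole point of the GP's complaint: you get this for free in a long-lived app process.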


Iron is still using sync IO, and while Ruby isn't great at parallel stuff, it at least does async IO, IIRC. That's going to be a huge difference.


It depends what you're doing. If you're running a Ruby web app server and talking to the database, all the IO you're doing is most likely synchronous. In the one-process-per-CPU model, anyway.


It's been a while, but I thought that MRI basically slept during IO and released the GIL so that other threads could do work. In that case, you'd still be handling more requests while the IO was performed. I could be wrong. This kind of thing: http://ablogaboutcode.com/2012/02/06/the-ruby-global-interpr...


You're right that it can do async io. But you compared a framework to a language. Rust has Tokio for async - Iron is just not using it.

It's similar with Ruby/RoR: yeah, they can do async, but not on their own. With the unicorn server you still get no threading, just a bunch of processes. With puma you can do threading (really cooperative async), as long as you keep the configuration/code within the limits of what's allowed.
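For concreteness, the two models show up directly in the server config DSLs; a sketch, with the worker/thread counts made up (tune them to your app):

```ruby
# config/unicorn.rb -- process-per-worker, no in-process threading
worker_processes 4

# config/puma.rb -- forked workers, each running its own thread pool
workers 2
threads 1, 16
```

With unicorn, concurrency is bounded by the process count; with puma, each worker can serve up to its thread-pool max, provided the app and its C extensions are thread-safe.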

And due to the extra care needed whenever you do caching/storage things, I expect unicorn is still the king of RoR deployments. (GitHub uses it, for example.)


It's definitely preemptive rather than cooperative. Ruby/Puma actually uses one OS thread per Ruby thread, so when one hits a DB call and blocks on sync IO, it releases the GVL and another Ruby thread can proceed. Ruby also runs a timer thread that pre-empts each thread to schedule them.
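A quick way to see the GVL being released during blocking waits, using `sleep` as a stand-in for a blocking DB call (timings are approximate):

```ruby
require "benchmark"

# MRI runs one thread's Ruby code at a time, but a thread blocking on IO
# (or sleep, which behaves the same way here) releases the GVL so other
# threads can run. Two 0.2s waits on two threads finish in ~0.2s, not 0.4s.
elapsed = Benchmark.realtime do
  threads = Array.new(2) { Thread.new { sleep 0.2 } }
  threads.each(&:join)
end

puts format("two concurrent 0.2s waits finished in %.2fs", elapsed)
```

CPU-bound work shows the opposite behavior: the GVL is never released for pure Ruby computation, so threads serialize, which is why the GP's CPU-bound service sees no benefit from threading.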


Yeah, I guess I moved to Puma long enough ago that I forgot about this. Good points, thank you.


Yeah, that's pretty much what happens. There's also non-blocking IO you can use with EventMachine but the DB drivers are a bit of an issue AFAIK.


Wow, steveklabnik replied to my comment! Unfortunately, both implementations of the service are CPU bound. I think we're running 25 threads in Iron but I'll have to check.


Ah interesting! With that being true, then yes, I'd be surprised that it isn't faster too.

What about memory usage? That's an under-appreciated axis, IMO: for example, all of crates.io takes ~30MB resident, which is roughly the overhead of MRI itself, let alone loading code.

Anyway, the Rust team loves helping production users succeed, so if there's anything we can do, please let us know!


Every larger web application will start caching at some point. This is nothing new in either Python or Ruby. It's even well integrated into SQL access libs: Python's SQLAlchemy http://docs.sqlalchemy.org/en/latest/orm/examples.html#modul... or just using an in-memory memcache.



