Hacker News

Obviously dumb microbenchmark, but here's ~17x on my machine:

  $ time python -c 'sum(range(1_000_000_000))'

  real 0m19.997s
  user 0m19.992s
  sys 0m0.005s

  $ time pypy -c 'sum(range(1_000_000_000))'

  real 0m1.146s
  user 0m1.126s
  sys 0m0.020s

I think relatively simple math like that JIT-compiles nicely, but when you start using other parts of the language heavily, as you would in a real project, the speedup averages out to roughly 4x because of the object, VM, and locking model, I believe. It's been a while since I've looked into this.
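To illustrate the point, here's a hypothetical sketch of the kind of object-heavy code that stresses the object model rather than pure arithmetic (the `Point` class and `total_x` function are made up for the example, not taken from any real benchmark suite):

```python
# Hypothetical illustration: a loop dominated by attribute lookups and
# per-object overhead rather than raw math. Workloads like this tend to
# see smaller JIT speedups than a tight numeric loop.
class Point:
    __slots__ = ("x", "y")

    def __init__(self, x, y):
        self.x = x
        self.y = y

def total_x(points):
    # Each iteration does an attribute lookup on a heap object,
    # exercising the interpreter's object model.
    t = 0
    for p in points:
        t += p.x
    return t

points = [Point(i, -i) for i in range(100_000)]
print(total_x(points))  # 4999950000
```

Timing something like this under both interpreters (e.g. with `timeit`) should show a much narrower gap than the `sum(range(...))` case above.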


