
The site of the organisation responsible for assessing the fastest computers in the world succumbs to the hacker news hug of death.


Oh my, it's a django app with debug mode enabled. I just got an InterfaceError with the full traceback and django configuration. (I've emailed them so they can fix it)


Just how in the world are people deploying Django apps with DEBUG = True?
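One common guard (a sketch, not their actual settings — the environment variable name here is illustrative) is to drive DEBUG from the environment so production defaults to off:

```python
import os

# Sketch of a settings.py guard: DEBUG stays False unless the
# environment explicitly opts in (DJANGO_DEBUG is an assumed name).
DEBUG = os.environ.get("DJANGO_DEBUG", "") == "1"
```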


Simple, human error and no code review process for your production environment.

Something similar happened to a huge retailer here in Austria, where just typing your username without a password would log you in. The reason? An intern committed debug code to production and nobody noticed. In my book that's not the fault of the intern but of the CTO/$TECH_LEAD who hasn't implemented and religiously upheld a code review process for everything that goes into production, since stuff like this can happen even to experienced engineers who are tired or having a bad day.


I live in Austria as well, could you share to which retailer it happened?


It shouldn’t be a manual code switch in the first place.


What else would it be? The authentication code would live somewhere. And for debugging someone could change it to always return successful for an empty password. That debugging change shouldn't be checked in of course, and it should have been caught in code review. (It's a reasonable oversight for the authentication unit tests to only test incorrect passwords rather than the edge case of empty passwords)
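To illustrate that last point (hypothetical check_login helper, not the retailer's actual code — real systems compare salted hashes, not plaintext), tests that only cover wrong passwords would sail past the debug shortcut:

```python
def check_login(stored_password: str, supplied_password: str) -> bool:
    # Rejecting empty input explicitly closes the debug-shortcut hole
    # where "always succeed on empty password" slips into production.
    if not supplied_password:
        return False
    return supplied_password == stored_password

assert check_login("hunter2", "hunter2") is True
assert check_login("hunter2", "wrong") is False
assert check_login("hunter2", "") is False  # the edge case in question
```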


It is the fault of both.

Interns are not stupid and so they have to carry the burden of their mistakes too.


No, absolutely not. As an engineer you develop systems and processes that don’t allow such major mistakes.

You can’t fault people for making simple mistakes or you’ll end up with an organization where nothing gets done.


Every single human holding a responsibility at any level gets blamed all the time. There is nothing wrong with that, nor with making mistakes; it is a fact of life.

Engineering processes are orthogonal to that.


That can happen when maintainers are professional scientists rather than professional web developers

update: added "professional" before "scientists"


But you'd think they'd still read through Django's deployment documentation, which explicitly states that you don't run DEBUG = True in production.


I used to support scientists. I would not expect that unless you got the documentation published in a high-profile journal.


Heheh, from the guys who use variables such as Vo, tm, max_p, this guy expects them to RTFM.


Those are perfectly reasonable names since they're standard convention in engineering and science. It's like using i, j, and k for index variables in iterators; the meaning is clear due to convention.


As someone who started learning Python with data science and ML courses and tutorials, I overused these short names for variables in my first web apps.


Modern practice is to avoid i/j/k. i for a single loop is alright, but once you start nesting, give those names semantic tells.
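For example (a toy illustration, not from any particular codebase), the same summation reads better when the names carry the structure:

```python
grid = [[1, 2], [3, 4]]

# Opaque: what do i and j index?
total = 0
for i in range(len(grid)):
    for j in range(len(grid[i])):
        total += grid[i][j]

# Clearer: the loop variables name what they iterate over.
total_named = 0
for row in grid:
    for cell in row:
        total_named += cell

assert total == total_named == 10
```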


Good work!


To be fair, I wouldn't host my website on my world-class supercomputer either, if I had one...


To be even fairer, I've served a shit ton of traffic on a small DigitalOcean droplet and never had issues because my stack is reasonable.


To be fairest you can serve pages on a potato and as long as it's cached by cloudflare no one will know.


and yet, they didn't


Is there a CDN sitting between your small droplet and the world?


But... we do realize the skills required to build and evaluate supercomputers are vastly different from web serving, right?

Sure, some principles may be common since both are distributed systems, but the actual practical tools have nothing in common.


What’s your stack, and how much traffic?


I survived being on the HN front page without errors with Nginx and a static site (made with Hugo, iirc) on a Linode 1GB (back when that plan still existed). No CDN whatsoever.


That sounds reasonable to me. In the old days people used Apache because it was available and worked. But because the performance was shit they adopted other complex means of dealing with heavy loads. Nginx has dramatically improved the web server performance problem but a lot of old practices are still in place.


Well, it's the fastest number crunchers, not the fastest file servers.


Ah, but did it do it quicker than the last version? That's the question.



