Hacker News
Ask HN: how many EC2 instances do you run?
9 points by Lightbody on Feb 15, 2012 | 12 comments
I'm speaking at Cloud Connect in Santa Clara tomorrow morning, giving a talk on how to launch hundreds or even thousands of concurrent EC2 instances and manage them all.

It's based on my experience with a load testing startup I launched a few years back (http://browsermob.com). I thought it might be fun to incorporate some stories from the HN community on how they use EC2.

So please feel free to share what your instance limits are, how many you run concurrently, what kinds of instances you use, whether they are "long running" or you have "spikey" usage, or any other interesting experiences.

I'll be sure to post my slides to the thread tomorrow afternoon. And if you'd like a hat-tip in the slides, just say so in the comments. Thanks!




At one time, extremely spiky usage: my primary use-case was short-term compute cluster deployments for specific projects, so it would fluctuate wildly. Typical sizes at any given time were 0 (no project), 16 (small cluster), 64 (moderate) and 128 (decent, at least for the particular domain I was in).

These days it's zero, because I work at an HPC-targeted IaaS startup that hosts our own hardware. (Because Infiniband is awesome when you're latency-bound.) But I used some decent-size EC2 deployments for a bit there.


Thanks for sharing! What kind of instance types did you use? Was it the cluster compute instances, or something else?


Typically cluster compute for production, yeah. I also did a couple tutorials where we used m1.small in "student clusters".


I use just 1 small instance for database, web, SOLR, cron jobs, and everything in between. And yes, I do get a lot of alert emails saying it's overloaded. But that's how I roll.


Small bit of work here, but I use up to 11 instances. One instance acts as my command and control: it uses a messaging queue to send work to the others, which are dynamically added as needed up to a self-imposed limit of 10 workers.
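The commenter doesn't share code, but the pattern described can be sketched locally. The following is a minimal stand-in, assuming an in-process `queue.Queue` in place of whatever real message queue (e.g. a hosted broker) sits between the controller and the EC2 workers; `MAX_WORKERS` and the doubling "work" are illustrative placeholders:

```python
import queue
import threading

MAX_WORKERS = 10  # self-imposed cap, mirroring the comment above


def worker(tasks, results):
    # Each worker drains tasks until the queue is empty, then exits.
    while True:
        try:
            item = tasks.get_nowait()
        except queue.Empty:
            return
        results.put(item * 2)  # placeholder for the real unit of work
        tasks.task_done()


def run(jobs):
    tasks, results = queue.Queue(), queue.Queue()
    for j in jobs:
        tasks.put(j)
    # Scale workers with the backlog, but never past the cap --
    # this is the "dynamically added as needed up to 10" behavior.
    n = min(MAX_WORKERS, tasks.qsize())
    threads = [threading.Thread(target=worker, args=(tasks, results))
               for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(results.get() for _ in range(results.qsize()))


print(run(range(5)))  # → [0, 2, 4, 6, 8]
```

In the real setup each `worker` would be a separate EC2 instance consuming from the shared queue, and `n` would drive instance launches rather than threads.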


Thanks for sharing, Jason! What prevents you from scaling higher? Lack of need? Cost? Architecture?


It fluctuates, but most recently somewhere between 34 and 56 instances for dev, staging, and production search, crawler, and content-build clusters. (We index terabytes of data and hundreds of millions of documents.)


Most of our instances we keep up for months at a time, although they can turn flaky. We tend to use EBS, so we reprovision instances when needed. The biggest overall issue we have with EC2 is clock skew, which can fool our fault-tolerance layers into thinking there is a major timeout when it's really just drift between machines. That, and some instances randomly rebooting in the middle of a time-critical processing operation. We primarily use m2.xlarge or m2.2xlarge instance types.
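One common mitigation for the skew problem described above is to grant peers a skew allowance before declaring a timeout. This is a hypothetical sketch, not the commenter's actual fault-tolerance layer; the timeout and skew values are made-up illustrative numbers:

```python
TIMEOUT_S = 30.0   # hypothetical heartbeat timeout
MAX_SKEW_S = 5.0   # allowance for clock drift between instances


def is_timed_out(last_heartbeat, now, timeout=TIMEOUT_S, max_skew=MAX_SKEW_S):
    """Declare a peer dead only if it is late even after granting
    the worst-case clock skew in its favor."""
    elapsed = now - last_heartbeat
    return elapsed > timeout + max_skew


# A 33 s gap trips a naive 30 s timeout, but survives the skew allowance.
print(is_timed_out(100.0, 133.0))   # → False
print(is_timed_out(100.0, 140.0))   # → True
```

The trade-off is slower failure detection: every real timeout is now noticed `max_skew` seconds later, in exchange for fewer false positives from drifting clocks.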


Thanks! Reserved? Spot? On-demand? What instance types?


The needs of our customers are too transient for reserved instances yet, so it's primarily been as-needed (on demand?). We run into problems during provisioning sometimes, and Amazon then has to raise our quota. Some customer architecture and capacity requirements have stabilized, so they are talking about getting reserved instances. We have also been experimenting with provisioning a bunch of servers for short durations for "burst processing" of content, but we're never totally sure if AWS will have the capacity we need. Things like reserved instances might get more use by us soon.
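The reserved-vs-on-demand decision above comes down to a break-even calculation: a reserved instance trades an upfront fee for a lower hourly rate, so it only pays off past a certain number of hours of uptime. The numbers below are hypothetical, for illustration only, not actual AWS rates:

```python
# Hypothetical prices for illustration only -- not actual AWS rates.
ON_DEMAND_HOURLY = 0.50    # $/hr, pay as you go
RESERVED_UPFRONT = 1000.0  # $ one-time fee
RESERVED_HOURLY = 0.20     # $/hr after the upfront fee


def breakeven_hours(od_hourly, upfront, res_hourly):
    # Reserved wins once the hourly savings repay the upfront fee.
    return upfront / (od_hourly - res_hourly)


hours = breakeven_hours(ON_DEMAND_HOURLY, RESERVED_UPFRONT, RESERVED_HOURLY)
print(round(hours))  # → 3333 hours of uptime before reserved is cheaper
```

For transient, bursty workloads like the one described, utilization rarely clears that threshold, which is why on-demand (or spot) stays the default until usage stabilizes.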


Come on now, don't be shy :) If you'd like to share anonymously, shoot me a note at patrick@lightbody.net and I'll repost here.


Currently running about 13 EC2 instances and 6 RDS instances for production & development/staging environments...



