It isn't "difficult and annoying" anymore. It used to be, before SSD-backed EBS and provisioned IOPS, because you had to RAID0 together a dozen or so magnetic EBS volumes to get decent disk performance, and then deal with the annoyance of sorting out a way to take consistent snapshots of the whole array for backups.
Now you can just toss a single 1 TB SSD-backed EBS volume on an instance and get ~3,000 IOPS, or use provisioned IOPS to hit almost any performance level you need.
Regardless of whether they're backed by SSD, all EBS volumes on an instance sit behind a 1 Gbps pipe (except on the more exotic and expensive instance types). That's part of the reason Amazon talks about IOPS instead of raw disk bandwidth.
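Some back-of-the-envelope math makes the IOPS-vs-bandwidth distinction concrete (a sketch; the 1 Gbps figure is from the comment above, and the 16 KB I/O size is an assumption about the block size behind an IOPS figure):

```python
def gbps_to_mb_per_s(gbps):
    """Convert a link speed in gigabits/s to megabytes/s (decimal units)."""
    return gbps * 1000 / 8

def iops_to_mb_per_s(iops, io_size_kb=16):
    """Throughput implied by an IOPS figure at a given I/O size."""
    return iops * io_size_kb / 1000

# A 1 Gbps EBS pipe tops out at 125 MB/s no matter how many volumes share it...
print(gbps_to_mb_per_s(1))       # 125.0
# ...while 3,000 IOPS of 16 KB I/O is only 48 MB/s, so small-block workloads
# hit the IOPS ceiling long before the network one.
print(iops_to_mb_per_s(3000))    # 48.0
```

With large sequential I/O the situation flips: a few hundred IOPS of 1 MB reads would already saturate that 1 Gbps pipe.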
Go ahead and run:
$ sudo du -hs /*
...on a vanilla m3.* instance while running iotop in another session. You'll see bandwidth numbers that would have been embarrassing in 2002.
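If you want an actual number rather than eyeballing iotop, a crude sequential-write check works too (a sketch; dd with fsync is a blunt instrument next to a proper tool like fio, and /tmp is an assumption about where the EBS-backed filesystem is mounted):

```shell
# Write 64 MB and force it to disk; dd's final line reports throughput.
dd if=/dev/zero of=/tmp/ebs-write-test bs=1M count=64 conv=fsync
# Clean up the test file afterwards.
rm -f /tmp/ebs-write-test
```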
Bandwidth available for EBS volumes varies. On EBS-optimized instances it can be 500 Mbps, 1 Gbps, 2 Gbps, or 10 Gbps, depending on the instance type, as shown in this chart from an Amazon presentation:
The "NA" for the 8xlarge instance types is because EBS optimization isn't an optional feature there; you automatically get access to 10 Gbps.
SSD-backed EBS advertises a baseline of 3 IOPS/GB with bursts of up to 3,000 IOPS, delivered with 99% consistency. One of my larger instances uses SSD EBS and so far I am happy with it. In particular, I was impressed by how much faster updating Ubuntu was when I first booted the instance. I am planning to move my Postgres RDS instance to SSD EBS next week. I think (I hope) it will fix a particular stall my application experiences sometimes.
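The arithmetic behind those figures is simple (a sketch assuming only the numbers in this thread: 3 IOPS/GB baseline and a 3,000 IOPS ceiling):

```python
def ssd_ebs_baseline_iops(size_gb, per_gb=3, cap=3000):
    """Baseline IOPS for an SSD-backed EBS volume of a given size.

    Below the cap, a volume can still burst up to the cap by spending
    accumulated I/O credits; at ~1 TB the baseline alone reaches the
    burst figure, so peak performance is sustained with no credits.
    """
    return min(size_gb * per_gb, cap)

print(ssd_ebs_baseline_iops(100))   # 300  -> relies on burst credits for peaks
print(ssd_ebs_baseline_iops(1000))  # 3000 -> sustained, no credits needed
```

That's why the 1 TB size mentioned upthread is the sweet spot: it's the smallest volume whose baseline matches the burst ceiling.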