The P3 instances are the first widely and easily accessible machines that use the NVIDIA Tesla V100 GPUs. These GPUs are straight up scary in terms of firepower.
To give a sense of the speed-up over the P2 instances, here are numbers from a research project of mine:
+ P2 (K80) with single GPU: ~95 seconds per epoch
+ P3 (V100) with single GPU: ~20 seconds per epoch
Admittedly this isn't exactly fair to either GPU - the K80 cards are straight up ancient now, and the Volta isn't sitting at 100% GPU utilization as it burns through the data too quickly (CUDA kernel launch and Python overhead suddenly become major bottlenecks).
It still gives you an indication of what a leap this is if you're using GPUs on AWS, however.
Oh, and the V100 comes with 16GB of (faster) RAM compared to the K80's 12GB of RAM, so you win there too.
For anyone using the standard set of frameworks (TensorFlow, Keras, PyTorch, Chainer, MXNet, DyNet, DeepLearning4j, ...) this type of speed-up will likely require you to do nothing - except throw more money at the P3 instance :)
If you really want to get into the black magic of speed-ups, these cards also feature full FP16 support, which means you can double your TFLOPS by dropping to FP16 from FP32. You'll run into a million problems during training due to the lower precision but these aren't insurmountable and may well be worth the pain for the additional speed-up / better RAM usage.
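The basic conversion itself is only a couple of lines in PyTorch - a minimal sketch with a toy model rather than my actual training code:

    import torch
    import torch.nn as nn

    # Toy stand-in for whatever network you're training.
    model = nn.Linear(1024, 1024).cuda().half()
    x = torch.randn(64, 1024).cuda().half()

    out = model(x)             # the matmul runs in FP16 on the V100
    loss = out.float().sum()   # do reductions in FP32 to avoid overflow
    loss.backward()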
Great write-up as usual! Could you elaborate on the Python overhead a bit? We have FP16 support running in DL4J, but I don't think we've really done much with Volta yet beyond getting it working. In practice (especially when we do multi-GPU async background loading of data) we find GPUs being data starved. I would love to compare our support against what you're seeing with PyTorch.
Honestly, I didn't spend enough time delving into the Python overhead, especially in terms of the framework. Most of it would be an issue of my own causing, however, rather than the framework's. The original code was never written with data loading / saving in mind as a source of speed issues, so I avoided what would have been premature optimization at the time.
Some of the slowdowns now just seem silly and aren't even included in the per-epoch timings: PyTorch doesn't have an asynchronous torch.save(). This means that if you save your model after each epoch, and the model save takes a few seconds, you're increasing your per-epoch timings by 5-10% just by saving the damn thing!
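A rough workaround (not something my code does yet) is to snapshot the weights to the CPU and hand the actual write to a background thread:

    import threading
    import torch

    def async_save(model, path):
        # Snapshot the weights on the CPU so training can keep mutating
        # the GPU copy while the (slow) file write happens in the background.
        state = {k: v.cpu().clone() for k, v in model.state_dict().items()}
        t = threading.Thread(target=torch.save, args=(state, path))
        t.start()
        return t   # join() it before exiting if the last checkpoint matters

    # e.g. after each epoch: async_save(model, 'model_epoch_%d.pt' % epoch)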
Regarding FP16, PyTorch supports it, and there's even a pull request that updates the examples repo with FP16 support for language modeling and ImageNet. It's not likely to be merged as it greatly complicates a codebase that's meant primarily for teaching purposes, but it's lovely to look at. I also think many of the FP16 issues will get a general wrapper and become far more transparent to the end user. For the most part they're all outlined in NVIDIA / Baidu's "Mixed Precision Training" paper. Might be useful for DeepLearning4j to go through the most common heavy-throughput use cases and get them running (just as an example of how to work around the issues, really) if customers are using P100s/V100s?
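The core of the paper's recipe (FP32 master weights plus loss scaling) is only a handful of extra lines - roughly something like this sketch, with the toy model, toy loss, and scale of 128 purely illustrative:

    import torch
    import torch.nn as nn

    model = nn.Linear(1024, 1024).cuda().half()   # FP16 model for speed
    master = [p.detach().float().clone().requires_grad_()
              for p in model.parameters()]        # FP32 master copy of the weights
    opt = torch.optim.SGD(master, lr=0.1)
    scale = 128.0                                 # loss scale (illustrative)

    x = torch.randn(64, 1024).cuda().half()
    loss = model(x).float().pow(2).mean()         # toy loss
    (loss * scale).backward()                     # scale up so tiny grads survive FP16
    for p, m in zip(model.parameters(), master):
        m.grad = p.grad.float() / scale           # unscale into FP32 grads
    opt.step()
    for p, m in zip(model.parameters(), master):
        p.data.copy_(m.data)                      # copy updated FP32 weights back to FP16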
I'm really interested in exploring the FP16 aspect as the QRNN, especially on a single GPU, is sitting at basically 100% utilization, with almost all the time spent on matrix multiplications. FP16 is about the only way to speed it up at that stage. This gets a tad more complicated regardless, as the CUDA kernel is not written in FP16 (and is not easy to convert), but even doing FP16 -> FP32 -> (QRNN element-wise CUDA kernel) -> FP16 ("pseudo" FP16) should still be a crazy speedup. I tested that on the P100 and it took the AWD-QRNN from ~28 seconds per epoch to ~18.
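The "pseudo" FP16 part is just a cast on either side of the FP32-only kernel - something like this sketch, where fp32_only_kernel is a stand-in for the real QRNN element-wise kernel:

    import torch

    def fp32_only_kernel(x):
        # Stand-in for the QRNN's element-wise CUDA kernel, which only
        # accepts FP32 tensors.
        return torch.sigmoid(x)

    def pseudo_fp16(x_half):
        # Upcast, run the FP32-only kernel, downcast again. The large
        # matrix multiplications before and after stay in FP16, which is
        # where the V100's extra throughput comes from.
        return fp32_only_kernel(x_half.float()).half()

    x = torch.randn(64, 1024).cuda().half()
    y = pseudo_fp16(x)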
Nice comment. In regard to your reference to reducing precision to FP16 for performance gains, you might want to read a recently published paper by Baidu Research and NVIDIA teams on mixed precision training of deep learning models (link to the paper is at the end of the following relevant post): https://www.nextplatform.com/2017/10/11/baidu-sheds-precisio.... Enjoy! :-)
Genuinely curious: given that Softlayer bare metal server prices start at $700 per month, is there even a remote chance of this actually being profitable?
Nope, that would be hugely unprofitable. To give you an idea:
The P100 instances on Softlayer would cost around $2,000/mo, and would generate approximately $170/mo in ETH when fully optimized. One could probably build a DIY rig with the same hashing power for less than 2k total.
It's showing up for me on https://aws.amazon.com/ec2/pricing/ under each of the On-Demand Instances, Reserved Instances, Spot Instances, and Dedicated Hosts pricing lists.
Are you selecting the regions where this is available - US East (N. Virginia), US West (Oregon), EU West (Ireland) and Asia Pacific (Tokyo)?
Hi guys, Dillon here from Paperspace (https://www.paperspace.com). We are a cloud that specializes in GPU infrastructure and software. We launched V100 instances a few days ago in our NY and CA regions, and it's much less expensive than AWS.
Think of us as the DigitalOcean for GPUs, with simple, transparent pricing and effortless setup & configuration:
AWS: $3.06/hr V100*
Paperspace: $2.30/hr, or $980/month for dedicated (effective hourly rate is only ~$1.30/hr)
Dan here (also Paperspace team). Totally agree that transfer costs are a significant pain point, which is why we do not charge for them. We can peer with other providers (e.g. with AWS we can leverage Direct Connect directly from our datacenters), but most of our customers don't implement this unless they're moving major traffic.
That's a good start but do you have a partnership with anyone that can provide storage with free/low cost bandwidth to your service? Even Direct Connect is ridiculously expensive compared to transit.
>Getting the data into and out of compute services is the most difficult part financially, at least in my experience.
Never forget that this is entirely because compute services are ripping you off, not because they're providing a valuable service in return for the transfer pricing.
One of the biggest challenges with deep learning is training data. AWS makes loading large datasets easy with S3. What does Paperspace have to help with this? If I have to perform deep learning on multi-TB datasets in S3, any compute cost benefits get cancelled out by the increased data transfer cost from S3.
I've really enjoyed using your service, especially the cloud desktops. I use them for running Fusion 360 (Windows only) from my Ubuntu XPS when I'm away from home.
I'm looking for a way to run serverless (Amazon Lambda style) GPU operations (preferably using OpenCL). Are there any plans for such a service in your platform?
We have definitely been thinking a lot about what that would look like (i.e. is it more of a job architecture, an API, clustering, etc). Would love to hear your thoughts on what GPU Lambda might look like. Feel free to hit me up directly dillon [@] paperspace [dot] com if you want to continue the conversation :)
We're adding sync support to Worker (which has GPU support) at Iron.io soon! This will allow you to run long-running background jobs (the current behavior) as well as synchronous serverless/FaaS, Lambda-like functions within a single API.
DigitalOcean for GPUs, awesome! For someone wanting to play around learning more about machine learning, would one of your Standard GPU units be ideal? If so, which one would you recommend? (Or do you think I'd need a dedicated GPU unit?)
For ML/deep learning tasks you should definitely use a dedicated GPU. I would recommend our GPU+ (NVIDIA Quadro M4000/P4000) which has 8GB of VRAM and 1664 CUDA cores.
Out of professional curiosity, what are you looking for from ENA?
(I'm an engineer on Google Compute Engine with a deep interest in customer networking use cases, particularly heavy-utilization customers, even if they're not my customers :)
This post states, "In order to take full advantage of the NVIDIA Tesla V100 GPUs and the Tensor cores, you will need to use CUDA 9 and cuDNN7." What version of TensorFlow does it use? From what I can tell, TensorFlow doesn't fully support the latest versions yet.
Slightly off-topic, but I'm curious: NVIDIA Volta is advertised as having "tensor cores" - what does it take for a programmer to use them? Will typical TensorFlow or Caffe code take advantage of them? Or should we wait for some new optimized version of the ML frameworks?
Hmm, just tried to spool up a p3.2xlarge in Ireland but hit an instance limit check (it's set at 0). I went to request a service limit increase, but P3 instances are not listed in the drop-down box :(
Looks like Paperspace announced Volta support yesterday: https://blog.paperspace.com/tesla-v100-available-today/ One nice thing here is that you can do monthly plans instead of reserved instances on AWS, which require a minimum of $8-17k upfront. Really great to see the cloud providers adopting modern GPUs.
How long would it take to break even if you build your own, including electricity prices? If margins are similar to other EC2 instances, you'd probably break even after 6 months or so, which makes EC2 uneconomical for any lab/company that can utilise the cluster 24/7.
Still nice if you quickly need to get some model results though.
Amazon's prices are for the pay-as-you-go model. You can shave a significant amount off the price if you know you're going to be running them for 12 months.
If you're going to be running it 24x7 for 3 years, I think it'd be worth doing the apples-to-apples comparison of buying your own V100s vs renting them from AWS. The DGX Station with 4 V100s is $70k.
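Rough back-of-the-envelope using the $3.06/hr on-demand figure quoted elsewhere in the thread (ignoring reserved discounts, power, cooling and someone to babysit the hardware):

    # p3.2xlarge (one V100) on demand vs a DGX Station (four V100s, ~$70k)
    hourly = 3.06
    aws_3yr_per_gpu = hourly * 24 * 365 * 3   # ~ $80,000 for one on-demand V100 over 3 years
    dgx_per_gpu = 70000 / 4                   # ~ $17,500 per V100 up front
    print(round(aws_3yr_per_gpu), round(dgx_per_gpu))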
How many bitcoins can you mine on this at max power, and would it be profitable? I'm sure that Amazon has done the math on this but I'm still curious.
It's not just that Amazon has done the math, it's that sufficiently liquid cryptocurrencies will, by the efficient market hypothesis, quickly gain enough value to make mining on whatever Amazon offers no longer profitable. As soon as you're able to profitably mine without an up-front capital investment, people will take advantage of the arbitrage opportunity until the market adjusts its price, and if the currency is designed at least somewhat competently and has enough of a working market (both of which are definitely true of Bitcoin), that won't take very long.
Cryptocurrencies are the invisible robot hand of the market. (Which is, I think, not a claim about whether they're good, but certainly a claim about whether they are to be feared. If you squint hard enough, the giant Bitcoin mines in China are the work of an unfriendly AI employing people to make paperclips.)
I wouldn't read too much into this - Amazon's Ireland region was deployed earlier (2008?) than London (2016?) and seems to receive updates earlier too.
London only came online relatively recently, maybe there's some operational stuff getting in the way of deploying? Or perhaps London has relatively few users at the moment, so the number of clients who will be able to take advantage of more specialised instances is also relatively low?
The London region is really new. Amazon are using it as a way into UK-only projects (specifically healthcare, due to NHS regulations). I suspect their datacentres are considerably smaller than Ireland's, along with their client base, for the moment.
I wouldn't read too much into it, Brexit-wise.
GPUs are quite good at doing arithmetic in parallel. A large part of machine learning is doing arithmetic on large data sets. It makes sense to do these operations in parallel. For example, implementing k-nearest neighbors on a GPU is almost 2 orders of magnitude faster than on a CPU[0].
GPUs just work very well when you have a lot of data and you are able to run the operations on the data set in parallel. Machine learning seems to fit this model quite well, which is why you see many GPUs used in this field. Other things that take advantage of parallelism would be graphics and cryptocurrency mining.
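As a toy illustration of how well this maps onto a GPU, the distance computation at the heart of k-NN is a single batched matrix expression - a PyTorch sketch, not the benchmarked implementation from the link:

    import torch

    # 100k reference points, 1k queries, 128-dim features, all on the GPU.
    ref = torch.randn(100000, 128).cuda()
    query = torch.randn(1000, 128).cuda()

    # Squared Euclidean distance for every (query, reference) pair at once:
    # ||q - r||^2 = ||q||^2 + ||r||^2 - 2 * q.r
    d2 = (query.pow(2).sum(1, keepdim=True)
          + ref.pow(2).sum(1)
          - 2.0 * query @ ref.t())
    dist, idx = d2.topk(5, dim=1, largest=False)   # the 5 nearest neighbours per query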
- Good overview of Volta's advantages compared to even the recent P100: https://devblogs.nvidia.com/parallelforall/inside-volta/
- Simple table comparing V100 / P100 / K40 / M40: https://www.anandtech.com/show/11367/nvidia-volta-unveiled-g...
- NVIDIA's V100 GPU architecture white paper: http://www.nvidia.com/object/volta-architecture-whitepaper.h...
- The numbers above were using my PyTorch code at https://github.com/salesforce/awd-lstm-lm and the Quasi-Recurrent Neural Network (QRNN) at https://github.com/salesforce/pytorch-qrnn which features a custom CUDA kernel for speed