A month ago I ran benchmarks on CPUs vs. GPUs on Google Compute Engine, and found that GPUs were now about as cost-effective as using preemptible instances with a lot of CPUs: http://minimaxir.com/2017/11/benchmark-gpus/
That was before preemptible GPUs: with the halved cost, the cost-effectiveness of GPU instances doubles, making them a very good option for hobbyist deep learning. (I did test the preemptible GPU instances recently; they work as you'd expect.)
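A quick sketch of that arithmetic in Python (the on-demand price here is a placeholder, not a quoted rate; only the half-price relationship between preemptible and on-demand GPUs is from the announcement):

    # Cost-effectiveness = training throughput per dollar. Same hardware at
    # half the price is exactly double the value; the throughput cancels out.
    throughput = 1000.0           # examples/sec, hypothetical model on a K80
    on_demand = 0.45              # $/hr, assumed on-demand K80 price
    preemptible = on_demand / 2   # preemptible GPUs are half price

    ratio = (throughput / preemptible) / (throughput / on_demand)
    print(ratio)                  # 2.0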
Yes, because as we all know, university teams don't have access to hardware, but are swimming in money. (Obviously, the whole point of the cloud for the cloud providers is that owning the hardware themselves is cheaper than what they charge to rent it.)
While I'd agree that university teams should use the resources the cloud providers make available for free, they should probably stay away from paying for actual cloud capacity and run their own hardware instead.
Besides, what I keep hearing from machine learning researchers is that no matter where you work, there's no beating development on your own machine, time- and productivity-wise.
I used to do IT for a large research university. I can confirm that we were practically drowning in funding for new hardware. If we made a business case for upgrading our infrastructure, it wasn't uncommon to get an extra million or so dollars in the budget without much thought.
Can't disagree with that, but for some genomics work it's just not a realistic option when a slice of a mainframe can produce output in a tenth the time, and even that is still two days and change.
Doesn't Nvidia's new EULA make that difficult? Anything requiring more than a handful of GPUs would be classified as a data center deployment, which is against the EULA.
IANAL though, so I might have interpreted this incorrectly.
I doubt that any court would consider a half-rack in someone's closet to be a "datacenter", nor would I expect Nvidia to enforce that EULA term against a hobbyist.
> ends up creating a billion dollar business, that's leverage Nvidia has for a lawsuit.
A great problem to have. Maybe first concentrate on creating a billion-dollar business; by that time you can afford to get some 'approved' cards. ;)
They (Nvidia) don't have any recourse beyond withdrawing support. The EULA is on the (free) drivers, so a court would find no monetary damages. (IANAL, etc.)
When I bought a 2016 MacBook Pro, I was expecting to buy an external GPU enclosure plus an Nvidia card so I could do deep learning/video gaming.
Unfortunately, using Nvidia GPUs with a Mac is still fussy even in High Sierra. And with the GPU instance price drops making pay-as-you-go deep learning super affordable, it's no longer worth the physical investment in a card, especially because they depreciate quickly.
Depends on how much you're using it. Right now, a preemptible K80 + 4 cores / 15 GB of RAM will cost you $0.26 per hour.
A PC with a 1060 would probably run you less than $600, so the break-even point is roughly 100 days of using the VM 24/7, excluding network traffic on one side and power consumption on the other (though the latter shouldn't amount to much relative to the total cost).
And you'll still have a pretty powerful PC at home for everyday use/gaming.
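The break-even works out like this in Python (using the prices above; electricity and egress deliberately ignored, as noted):

    pc_cost = 600.0    # USD, estimated PC with a GTX 1060
    vm_hourly = 0.26   # USD/hr, preemptible K80 + 4 vCPUs / 15 GB RAM

    hours = pc_cost / vm_hourly
    print(f"{hours:.0f} hours, or about {hours / 24:.0f} days of 24/7 use")
    # ~2308 hours, about 96 days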
One benefit to owning the hardware is that you can use it to mine cryptocurrencies when you're not using it. I bought my GTX 1060 used for $200 in March 2017, and it's generated around $1300 worth of Ethereum...
It's been around $40/mo net since March, briefly shooting up to over $100/mo in July (when the price of Eth increased faster than the network hashrate), and now it's back down to $40/mo. The overall profit is higher than the sum of the monthlies because I managed to hold some Eth rather than selling.
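Back-of-the-envelope in Python (the monthly figures are the approximations above; the month count is my guess at March-to-roughly-year-end):

    card_cost = 200.0               # USD, used GTX 1060, March 2017
    monthly_net = [40] * 9 + [100]  # ~10 months at ~$40/mo, with a ~$100 July spike
    mined = sum(monthly_net)        # USD value at the time of mining
    holding_gain = 1300 - mined     # the rest came from held Eth appreciating

    print(mined, holding_gain, 1300 / card_cost)  # 460, 840, 6.5x the card's cost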
I put a note about spot instances at the end (the economics of spot instances are a bit trickier for calculating cost-effectiveness due to price variability; a sketch of why is below).
That said, the cost of a preemptible K80 instance on GCP is now close to the approximate cost of a K80 spot instance on AWS, so there's a bit of competition.
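A minimal illustration of why spot is trickier, in Python (the prices are invented for illustration, not real AWS price history):

    # With a flat preemptible price, cost is just rate * hours. With spot, the
    # rate moves while your job runs, so you need the price history to compare.
    spot_prices = [0.23, 0.31, 0.27, 0.45, 0.26, 0.24]  # hypothetical $/hr samples

    avg = sum(spot_prices) / len(spot_prices)
    print(f"average ${avg:.3f}/hr vs. a flat preemptible $0.26/hr")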
Disclosure: I work on Google Cloud (and helped launch this).
Note that our pricing is flat regardless of the number of GPUs attached (and you don't need to buy as many cores to go with them). By comparison, Spot often charges more than on-demand pricing for anything other than single-GPU instances.
Thanks again for your write-up, and sorry about the confusion; we delayed the announcement until the new year.