
A month ago I ran benchmarks on CPUs vs. GPUs on Google Compute Engine, and found that GPUs were now about as cost-effective as using preemptible instances with a lot of CPUs: http://minimaxir.com/2017/11/benchmark-gpus/

That was before preemptible GPUs: with the price halved, the cost-effectiveness of GPU instances roughly doubles, so they're a very good option for hobbyist deep learning. (I did test the preemptible GPU instances recently; they work as you'd expect.)



It's also a huge gain for a number of science disciplines that don't have access to a mainframe or supercomputer but do have development resources.


Yes, because as we all know, university teams don't have access to hardware but are swimming in money. (Obviously the whole point of the cloud, for the cloud providers, is that owning the hardware is cheaper.)

While I would agree that university teams probably should use the resources the cloud providers make available for free, they should stay away from actually paying for cloud capacity and instead have their own hardware.

Besides, what I keep hearing from machine learning researchers is that no matter where you work, there's no beating developing on your own machine, time- and productivity-wise.


I used to do IT for a large research university. I can confirm that we were practically drowning in funding for new hardware. If we made a business case for upgrading our infrastructure it wasn't uncommon to get an extra million or so dollars in the budget without much thought.


Can't disagree with that, but for some genomics work it's just not a realistic option when you can get output from a slice of a mainframe in 1/10th the time, and even that time is two-and-change days.


Related HN post to your blog post on the benchmarks on CPUs vs. GPUs on Google Compute Engine: https://news.ycombinator.com/item?id=15940724


I'm always disappointed in these comparisons since they never look at AWS spot instances, which are the real competitors for hobbyists.


I think the real competitors for hobbyists are physical GTX 1060s. Hobbyist cloud-computing is a different question though.


Doesn't Nvidia's new EULA make that difficult? Anything requiring more than a handful of GPUs would be classified as a data center deployment, which is against the EULA.

IANAL though so I might have interpreted this incorrectly.

http://www.nvidia.com/content/DriverDownload-March2009/licen...


I doubt that any court would consider a half-rack in someone's closet to be a "datacenter", nor would I expect Nvidia to enforce that EULA term against a hobbyist.


But if that hobbyist ends up creating a billion dollar business, that's leverage Nvidia has for a lawsuit.

It's a similar strategy to Adobe's: they won't sue a single user for pirating Photoshop, but the second that user has a successful business... that's a different story.


> ends up creating a billion dollar business,
> that's leverage Nvidia has for a lawsuit.

A great problem to have. Maybe first concentrate on creating a billion-dollar business; by that time you can afford to get some 'approved' cards... ;)


They (Nvidia) don't have any recourse beyond withdrawing support. The EULA is on the (free) drivers, so a court would find no monetary damages. (IANAL, etc.)


I'm not getting it. Do they want to sell a product or a service?

And instead of restricting our rights, shouldn't we get a discount when buying multiple GPU cards?


When I bought a 2016 MacBook Pro, I planned to buy an external GPU enclosure + an nVidia card so I could do deep learning/video gaming.

Unfortunately, using nVidia GPUs with a Mac is still fussy even in High Sierra. And with the GPU instance price drops making deep learning pay-as-you-go super affordable, it's no longer worth the physical investment in a card, especially because they depreciate quickly.


Depends on how much you're using this. Looks like right now, for a K80 + 4 cores / 15 GB of RAM, you'll pay $0.26 per hour. A PC with a 1060 would probably run you less than $600, so the break-even is ~100 days of using the VM 24/7, excluding network traffic on one side and power consumption on the other (although the latter shouldn't add that much to the whole cost).

And you will still have a pretty powerful PC at home for everyday use/gaming.
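
A minimal sketch of that break-even arithmetic, assuming the $0.26/hour VM rate and ~$600 PC price quoted above (network and power costs ignored, as noted):

    # Break-even between renting a preemptible K80 VM and buying a 1060 PC,
    # using the prices quoted above (network and power costs ignored).
    vm_cost_per_hour = 0.26   # K80 + 4 cores / 15 GB RAM, preemptible
    pc_cost = 600.0           # used PC with a GTX 1060

    break_even_hours = pc_cost / vm_cost_per_hour
    print(f"{break_even_hours:.0f} hours, i.e. ~{break_even_hours / 24:.0f} days of 24/7 use")
    # -> 2308 hours, i.e. ~96 days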


I wouldn't call someone who trains models 24/7 a hobbyist.


One benefit of owning the hardware is that you can use it to mine cryptocurrencies when you're not using it. I bought my GTX 1060 for $200 used in March 2017, and it's generated around $1300 worth of Ethereum...


What's your power bill like compared to before you started running a miner?


    0.3 kW * 0.20 $/kWh * 24 h/day * 360 days = $518.40 per year of electricity
Your mileage may vary with the costs of electricity in your region and whether you really run 24/7.

In my experience, people always operate around break-even. They only make good money if they hold their coins and the price increases over time.
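
Spelled out as a minimal sketch, with the same assumed 0.3 kW draw and $0.20/kWh rate (the per-month figure is roughly what mining payouts have to beat):

    # Electricity cost of running a ~0.3 kW mining rig 24/7,
    # using the assumptions from the formula above.
    power_kw = 0.30           # average draw of the rig
    price_per_kwh = 0.20      # electricity price; varies by region

    per_day = power_kw * price_per_kwh * 24   # ~$1.44
    per_month = per_day * 30                  # ~$43.20
    per_year = per_day * 360                  # ~$518.40
    print(f"${per_day:.2f}/day, ${per_month:.2f}/month, ${per_year:.2f}/year")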


How much did it generate before the price of cryptocurrency exploded in November?


It's been at around $40/mo net since March, briefly shooting up to over $100/mo in July (when the price of Eth increased faster than the network hashrate), and now it's back down to $40/mo. The overall profit is higher than the sum of the monthlies because I managed to hold some Eth rather than selling.


Whoa! Where/How can you get a powerful PC with a 1060 for less than $600?

I'm not even being sarcastic, I'm thinking of building my first gaming PC this year.


A short guide: https://www.reddit.com/r/buildapc/comments/6i9jbg/guide_used....

TL;DR: buy a used business-class desktop with a decent PSU for $300-$400, and stick in a GPU in the 1060 class.


I put a note about spot instances at the end (the economics of spot instances are a bit trickier for calculating cost-effectiveness due to price variability).

The cost of a K80 preemptible instance on GCP is now close to the approximate cost of a K80 spot instance on AWS, though, so there's a bit of competition.


Disclosure: I work on Google Cloud (and helped launch this).

Note that our pricing is flat regardless of the number of GPUs attached (and you don't need to buy as many cores to go with them). By comparison, spot often charges more than on-demand pricing for anything other than single-GPU instances.

Thanks again for your write-up, and sorry about the confusion as we delayed the announcement until the new year.



