No, but they will pay 2% more to fix something if it means 5% of customers don't write them off as "awful, unethical, big corp" and not buy stuff from them.
There are plenty of examples of companies that did or did not react to negative PR, but the effect is in the statistics, not the individual anecdata.
What are the statistics? All of these companies are still contracting out their labor to factories in countries with lower labor and environmental standards.
Nike chooses not to employ any laborers for a reason. I don't see them advertising that the people making the clothes badged with the Nike symbol work a maximum of 8 hours a day or 40 hours a week, or get vacation days.
It is all just words. The sweatshop conditions and disparities in quality of work life between developed and developing countries have been known for 30+ years. The only thing causing improvements for laborers in developing countries is increased demand for their labor, not some unverifiable PR response by Nike.
Yeah, I've noticed this too. Most of the time when you look up something gRPC related, the first links are for some of Microsoft's gRPC documentation!
gRPC is one of those projects where I'm not sure whether Google really wants uptake, or whether it's just dumped as a means to open-source other things they do want uptake in (GCP client libraries, TensorFlow, etc.). The documentation is awful.
I doubt Google cares whether you use gRPC for its own sake, unless adopting it helps you migrate your important tech to Google-preferred versions.
It's a tradeoff between software and infra complexity.
I would argue that if you've built a system up from scratch, this is much easier to debug and maintain than a foreign piece of software. Rabbit is just way overkill if you have a couple of jobs per hour, for example.
The only thing I need (and it is sorely missing from Restic) is for the metadata to be kept separate from the actual data. That way I could store the data in AWS S3 Deep Glacier at a cost of almost nothing per year and still do incremental backups. Currently Restic's architecture requires all data to be quickly and cheaply accessible, which makes this impossible.
I have terabytes of data that I'd be happy to dump encrypted and compressed in Deep Glacier and happy to pay $500 to retrieve if I were to mess up my hard drives, but otherwise don't want to pay for the costs of normal S3.
Does Kopia separate metadata from the actual encrypted/compressed blobs?
Yes, Kopia does segregate metadata from data. Directory listings, snapshot manifests, and a few other minor pieces of metadata are stored in pack files whose names start with "q", while the bulk of the data is stored in pack files starting with "p". Indexes into both kinds of pack files are stored in files whose names start with "n".
This has some interesting properties. For example we can easily cache all metadata files locally, which provides fast directory listing, manifest listing and very fast verification without having to download actual data.
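The prefix convention described above can be sketched in a few lines. This is a hypothetical illustration (the file names are made up, and real repositories are accessed through Kopia's own storage layer, not by listing names like this):

```python
def partition_pack_files(names):
    """Split repository files by the naming convention described above:
    'q' = metadata packs, 'p' = data packs, 'n' = index files."""
    metadata = [n for n in names if n.startswith("q")]
    data = [n for n in names if n.startswith("p")]
    indexes = [n for n in names if n.startswith("n")]
    return metadata, data, indexes

# Illustrative file names only:
files = ["q1a2b", "p9f8e", "n0c3d", "p7712", "q55aa"]
meta, data, idx = partition_pack_files(files)
# Only `meta` and `idx` need to live on fast storage; `data` could go
# to cold storage such as Deep Glacier.
```

The point being: because the split is visible in the name itself, a sync or caching tool can decide where each blob lives without reading its contents.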
Google Workspace is still around $10/month for unlimited. 50TB here and counting, uploaded a few more TBs just this week. Incredible value proposition! Only some limits on traffic, 750GB/day ingress. Works very well with rclone.
Legally that price is supposed to cover 1TB of storage, but the quota is still not being technically enforced, at least for me with a grandfathered G Suite account. Not sure about new accounts; it seems something may have changed a couple of weeks ago with the new ToS.
It's been an open secret for many years that they don't enforce quotas. It's not profitable but google has really deep pockets and can afford to not care. Not sure if they really closed it this time - like I said, I'm still able to upload multiple extra terabytes onto my supposedly 1 TB account even now. So it doesn't seem like they started enforcing quotas to me.
Look/ask around in r/DataHoarder for recent experiences of other people, they also discuss other storage services in general a lot.
Oh yeah, I know, I just didn't think that people were doing that much with it! I definitely lurk r/datahoarder, absolutely love seeing people that are so excited about storage (and haven't been found by mainstream reddit for the most part yet). r/zfs is also pretty good for nerdy drive stuff from time to time.
Thanks for sharing! This is why I asked... Wasabi prices 4TB of data, with 100% of it downloaded every month, at $287/year according to their price calculator.
Backblaze B2's calculator is a little more sophisticated, and putting in the numbers for an absolutely pathological use case where you start with 4TB, then download, delete, and re-upload that same amount every month puts you at $720/year. A much less pathological use case (I think) that assumes you upload, delete, and download 1TB/month comes out around $360/year.
Hetzner storage boxes offer 10TB for ~$48/month, which is $576 a year -- free ingress/egress, no hidden fees for operations or whatever else, but you do have to set up a node with minio (or use FTP, etc).
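A quick back-of-envelope check of the yearly figures quoted above. The per-TB rates are my assumptions based on the public list prices at the time (Wasabi ~$5.99/TB/month with egress included, B2 ~$5/TB/month storage plus ~$10/TB egress), not authoritative quotes:

```python
def yearly(storage_tb, egress_tb_per_month, storage_per_tb_month, egress_per_tb):
    """Yearly cost in USD: 12 * (monthly storage + monthly egress)."""
    return 12 * (storage_tb * storage_per_tb_month
                 + egress_tb_per_month * egress_per_tb)

wasabi = yearly(4, 0, 5.99, 0)    # egress bundled into Wasabi's storage price
b2 = yearly(4, 4, 5.0, 10.0)      # 4TB stored, 4TB downloaded every month
hetzner = 48 * 12                 # flat ~$48/month for the 10TB box
# wasabi ≈ 287.52, b2 = 720.0, hetzner = 576
```

Both calculator results above are reproduced almost exactly, which suggests the calculators are doing little more than this arithmetic.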
Amortized over the happy time (the time when you don't need to rely on your backups) this does make sense, but I wonder what the percentages look like on that kind of metric. To be fair, I haven't had to restore from backup in years, so this probably makes a lot of sense. I guess there's no need to test your backups/restores either if you're using a tool like borg/restic/etc and have tested it with local hardware.
Also, what happens if you have to retrieve data twice from Glacier? You've got access to it for 24 hours so I assume you're planning on just keeping the data on some non-glacier storage medium for a while after the initial post-disaster pull?
This wouldn't be the primary backup, but Deep Glacier is just such a good deal that I'd be happy to pay the $50 per year for a call option on my data; it'd make me sleep better at night!
Part of my calculus is that I have quite strong confidence in AWS in terms of business continuity and reliability/availability. If I dump my files on AWS, I have high confidence in the files (and AWS) being around in 10 years and retrievable for roughly the same price (or at least no more).
Hetzner would have much lower durability. I'm a bit suss on Backblaze, though I do trust them to be more durable than my self-managed disks (and uncorrelated to my failures). I don't know much about Wasabi; but it's not a good sign for me that their landing page touts their latest funding round at the top: seems young and you never know if the price is subsidized with VC money (and won't be in n years) or similar.
> Also, what happens if you have to retrieve data twice from Glacier?
The killer is the egress. I'd just buy a new set of disks and download it straight there.
My question was more about just how many TBs and how much ingress/egress was making AWS S3 cost prohibitive -- Wasabi's sticker price is $5.99/TB/month (so ~6x Deep Glacier but ~0.2x regular S3), and I know that Hetzner will give you a storage box that is 1TB for 9.40EUR (but the kicker there is that 10TB of traffic is included, which is amazing), and there are no API/operation fees when you run your own Minio (or just use FTP/all the other built-in access methods).
Network is one thing but what am I missing here? Maybe I just think $10/month is reasonable for 1TB (because I don't have enough TBs? or use remote storage enough?), and that's different from most people who are interested in this.
I have been studying these key-value stores with efficient range iteration lately (such as LevelDB, RocksDB, BigTable, FoundationDB, etc). This is a great reference on how to make such a simple abstraction do a lot of useful things.
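The core abstraction those stores share is an ordered map with efficient range scans. Here is a minimal sketch backed by a plain sorted list (illustrative only; real engines like LevelDB and RocksDB use LSM trees, and the class and method names below are my own invention):

```python
import bisect

class OrderedKV:
    """Toy ordered key-value store supporting put() and range scan()."""

    def __init__(self):
        self._keys, self._vals = [], []

    def put(self, key, value):
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            self._vals[i] = value          # overwrite existing key
        else:
            self._keys.insert(i, key)      # keep keys sorted
            self._vals.insert(i, value)

    def scan(self, start, end):
        """Yield (key, value) pairs with start <= key < end, in key order."""
        i = bisect.bisect_left(self._keys, start)
        while i < len(self._keys) and self._keys[i] < end:
            yield self._keys[i], self._vals[i]
            i += 1

db = OrderedKV()
for k in ["user:1", "user:2", "post:9", "user:3"]:
    db.put(k, k.upper())
# Prefix scan: ';' is the character right after ':', so this bounds the range.
users = list(db.scan("user:", "user;"))
```

Nearly everything those systems offer (prefix scans, secondary indexes, composite keys) reduces to encoding structure into sortable keys and then doing a range scan like this.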
TBH this is routinely done with onboard GPS tracking instruments that also have accelerometers (or apps running on phones), used by insurance companies, car rental agencies, and other fleet operators.
This would be more fun if it could be done with the median. Then the "true" answer to any question would be 50% on both sides. (For example, more than 50% of people have an income below the mean, due to extreme outliers at the top.) I suppose in lay language the distinction between mean and median gets muddled.
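A toy illustration of that parenthetical, assuming a right-skewed income sample (the numbers are made up): most values sit below the mean, while by construction half sit below the median.

```python
from statistics import mean, median

incomes = [20, 25, 30, 35, 40, 45, 50, 55, 60, 1000]  # one extreme outlier
m, med = mean(incomes), median(incomes)
below_mean = sum(1 for x in incomes if x < m)
below_median = sum(1 for x in incomes if x < med)
# m = 136.0, med = 42.5, below_mean = 9 of 10, below_median = 5 of 10
```

A single outlier drags the mean above 90% of the sample, while the median barely moves.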
The Friendship paradox has nothing to do with what people think: it's an observed fact and related to the way that node degree in social graphs work (or random graphs in general).
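The graph-structural nature of the paradox is easy to check numerically. A small sketch on a star graph (one hub connected to four leaves, names made up for illustration): sampling a *friend* overweights high-degree nodes, because they appear at the end of many edges.

```python
edges = [("hub", "a"), ("hub", "b"), ("hub", "c"), ("hub", "d")]

# Count each node's degree (number of friends).
deg = {}
for u, v in edges:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1

avg_degree = sum(deg.values()) / len(deg)   # mean friends per person

# Average degree of a uniformly random *friend*: every edge endpoint
# counts once, so a node with k friends is counted k times.
friend_degrees = [deg[v] for u, v in edges] + [deg[u] for u, v in edges]
avg_friend_degree = sum(friend_degrees) / len(friend_degrees)
# avg_degree = 1.6, avg_friend_degree = 2.5
```

On average your friends have 2.5 friends while the average person has 1.6, and no opinions or perceptions were involved, only degree distribution.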