Hacker News | miken123's comments

It is mentioned in their list of subprocessors: https://www.digitalocean.com/trust/subprocessors


Because these companies never lose data, like during some lightning strikes, oh wait: https://www.bbc.com/news/technology-33989384

As a government you should not be putting your stuff in an environment under control of some other nation, period. That is a completely different issue and does not really relate to making backups.


“The BBC understands that customers, through various backup technologies, external, were able to recover all lost data.”

You back up stuff. To other regions.


But the Korean government didn't back up; that's the problem in the first place here…


Sure. Using a cloud can make that more convenient. But obviously not if you then keep all your data in the same region, or even “availability zone” (which seems to be the case for all the data “lost to lightning strikes” here).


>As a government you should not be putting your stuff in an environment under control of some other nation, period.

Why? If you encrypt it yourself before transfer, the only possible control some_other_nation will have over you or your data is availability.
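
For illustration, a minimal sketch (assuming the third-party "cryptography" package and hypothetical file names) of what "encrypt it yourself before transfer" can look like; the key never leaves your own infrastructure, so the host only ever stores ciphertext:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()           # keep this key offline; never upload it
    with open("backup.tar", "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())

    with open("backup.tar.enc", "wb") as f:
        f.write(ciphertext)               # only this encrypted blob is transferred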


You're forgetting that you're talking about nation states here. Breaking encryption is in fact the job of the people you are giving access to.

Sovereign delivery makes sense for _nations_.


You can use and abuse encrypted one time pads and multiple countries to guarantee it’s not retrievable.
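
To make the idea concrete, a minimal sketch (Python standard library only, hypothetical data): the pad must be truly random, used once, and exactly as long as the data, and it is the pad that you would be splitting across countries:

    import secrets

    def otp_xor(data: bytes, pad: bytes) -> bytes:
        # XOR is both encryption and decryption for a one-time pad
        assert len(pad) == len(data), "the pad must match the data length"
        return bytes(d ^ p for d, p in zip(data, pad))

    data = b"classified backup payload"
    pad = secrets.token_bytes(len(data))     # the pad is as large as the backup itself
    ciphertext = otp_xor(data, pad)
    assert otp_xor(ciphertext, pad) == data  # XOR with the same pad recovers the data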


Using an OTP in your backup strategy adds way more complexity, failure modes, and cost, with literally no improvement in your situation.


You're assuming a level of competency that's hard to warrant at this point.


If your threat model is so demanding that you assume the encryption itself can be broken, then maybe you do need a level of competency in the process as well.

They have a $2 trillion economy. At that scale, competency shouldn't be the thing they have to worry about. At the same time, I know those $2 trillion don't automatically make them more competent; I just want to point out that it was entirely possible for them to teach or learn that competency.

Maybe this incident at least teaches us something; there is definitely something to learn here. I am also genuinely curious how the parent comment would suggest sharing a one-time pad in practice, since most others refer to using the cloud (AWS etc.), and I am not sure how you would distribute something like a one-time pad at the scale of petabytes and more. I would love it if the GP could describe a practical way of doing so that actually offers more safety than conventional encryption.


I think it doesn't need to be the encryption breaking per se.

It could be a gov laptop with the encryption keys left at a bar. Or the wrong keys saved on the system, so the backups can't actually be decrypted. Or the keys being reused at large scale and leaked/guessed from a lower-security area. Etc.

Relying on encryption requires operational knowledge and discipline. At some point, a base level of competency is required anyway; I'm just not sure encryption would have saved them as much as we'd wish it would.

To your point, I'd assume high-profile incidents like this one will put more pressure on making radical changes, and in particular on treating digital data as a critical asset that you can't hand over to the crookedest corrupt entity willy-nilly just for the kickback.

South Korea doesn't lack competent people, but hiring them and putting them at the helm sounds like a tough task.


First of all, you cannot do much if you keep all the data encrypted on the cloud (basically just backing things up and hoping you don't have to fetch it, given the egress cost). Also, availability is exactly the kind of issue that a fire causes…


Yeah backups would’ve been totally useless in this case. All South Korea could’ve done is restore their data from the backups and avoid data loss.


What part of the incident did you miss? The problem here was that they didn't back up in the first place.

You don't need the cloud for backups, and there's no reason to believe that they would have backed up their data any better while using the cloud than they did with their self-hosting…


For this reason, Microsoft has Azure US Government, Azure China etc


Yeah, I heard that consumer clouds are only locally redundant and there aren't even backups. So big DC damage could result in data loss.


By default, Amazon S3 stores data across at least three separate datacenters that are in the same region, but are physically separate from each other:

> Amazon S3 provides a highly durable storage infrastructure designed for mission-critical and primary data storage. S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive redundantly store objects on multiple devices across a minimum of three Availability Zones in an AWS Region. An Availability Zone is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. Availability Zones are physically separated by a meaningful distance, many kilometers, from any other Availability Zone, although all are within 100 km (60 miles) of each other.

You can save a little money by giving up that redundancy and having your data in a single AZ:

> The S3 One Zone-IA storage class stores data redundantly across multiple devices within a single Availability Zone

For further redundancy you can set up replication to another region, but if I needed that level of redundancy, I'd probably store another copy of the data with a different cloud provider, so that an AWS global failure (or more likely, a billing issue) doesn't leave my data trapped with one vendor.
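
As a rough illustration (assuming boto3 and hypothetical bucket/file names, not a full backup strategy), the storage-class trade-off above looks like this; a plain upload lands in multi-AZ S3 Standard, while One Zone-IA has to be requested explicitly:

    import boto3

    s3 = boto3.client("s3")

    # Default storage class (S3 Standard): redundant across >= 3 Availability Zones.
    with open("db-dump.sql.gz", "rb") as f:
        s3.put_object(Bucket="my-backup-bucket", Key="db-dump.sql.gz", Body=f)

    # Cheaper One Zone-IA: a single-AZ disaster can take this copy with it.
    with open("db-dump.sql.gz", "rb") as f:
        s3.put_object(Bucket="my-backup-bucket", Key="db-dump-onezone.sql.gz",
                      Body=f, StorageClass="ONEZONE_IA")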

I believe Google and Azure offer similar levels of redundancy in their cloud storage.


What do you mean by "consumer clouds"?


I refer to stuff like onedrive/gdrive/dropbox.


It's certainly not the case for Google Drive, which is geo-replicated, and I would be very surprised if it's true for any other major cloud.


I mean… at the risk of misinterpreting sarcasm—

Except for the backup strategy said consumers apply to their data themselves, right?

If I use a service called “it is stored in a datacenter in Virginia” then I will not be surprised when the meteor that hits Virginia destroys my data. For that reason I might also store copies of important things using the “it is stored in a datacenter in Oregon” service or something.


You might expect backups in case of fire, though. Even if data is not fully up to date.


...on a single-zone persistent disk: https://status.cloud.google.com/incident/compute/15056#57195...

> GCE instances and Persistent Disks within a zone exist in a single Google datacenter and are therefore unavoidably vulnerable to datacenter-scale disasters.

Of course, it's perfectly possible to have proper distributed storage without using a cloud provider. It happens to be hard to implement correctly, so apparently, the SK government team in question just decided... not to?


Lovely that their blog with privacy propaganda has a cookie banner that is not compliant with any privacy law in any way. Says everything about their efforts, I guess.


It was a trap. Clickbait to get every privacy-minded tech enthusiast onto their site. Now, simply because you interacted with the page, Facebook, in their opinion, gets a blank cheque to track you wherever they wish, on and off site.


Having actually worked for Meta in both security and privacy capacities, I guarantee you that it's really not that conspiratorial.

No one wrote this article with the intention of "trapping privacy-minded tech enthusiasts."

I mean no offense, but this sort of thinking (that an engineering blog is attempting to attack you) is unhinged. There is not some grand conspiracy. Companies like this are not the shadowy, highly-competent and absolutely evil entities you think they are. They are barely functional to begin with.


Yup, also work in big tech and confirm this.

One really just has to think through the situation rationally, even assuming the greediest of intentions:

> Clickbait to get every privacy-minded tech enthusiast on their site

Turns out the market of privacy-minded tech enthusiasts is tiny and they hate clicking on ads. Trying to cajole this group into giving you money is like pulling teeth.

Understood.

Let's deploy the same set of company resources and effort on the other 99.99% of people in the marketplace, increase some efficiency by like 0.1%, and make waaaayyyy more money.


Having worked elsewhere, this. Every part of it. Especially the "barely functional".

Different parts of the company working together is hard/rare enough. Them conspiring together... forget it.


“Never attribute to malice that which is adequately explained by stupidity.”

https://en.m.wikipedia.org/wiki/Hanlon%27s_razor

Also, I don’t think that the parent comment was being serious.


I was indeed not very serious, and neither is the comment I would write in response to this:

-- Ah yes, Hanlon's razor, one of the CIA's more successful PsyOps. --

But then I was shocked to learn that the razor's namesake, Robert J. Hanlon, actually did work for the CIA, and now I don't know what to think.

https://wydaily.com/obits/2019/04/09/robert-j-bob-hanlon-70-...


The ratio of "people who have actually worked privacy/security in google/meta/etc" to "people who merely have opinions about what google/meta/etc might be doing" is abysmally low.

Most of what's said by people who actually know what they're talking about is drowned out by low-effort, conspiratorial, semi-intellectual laziness.


Yeah, this is the main reason I stopped using Reddit when I entered the industry.

Taking it a step further: I frankly don't think normal people are positioned to make any decisions or hold strong opinions about tech. They are so misled by journalism it's not even funny.

My doctor friends feel similarly about medicine and how it's reported on (and the populace's common opinions on medicine). The average person/voter is immensely misled in basically every field they themselves are not an expert in.


> I mean no offense, but this sort of thinking (that an engineering blog is attempting to attack you) is unhinged. There is not some grand conspiracy.

“You know, since we're trying to Z but don't have Y, we could probably use X to get Y…” said no inventive engineer ever.

No conspiracy needed. This happens.


X to get Y happens.

A tech company using a blog to get whatever imaginary consent from random anonymous privacy-aware individuals is so many levels of unhinged that it makes absolutely no sense whatsoever.


The company wouldn't. Someone retroactively realizes they have the data, and then it does.

I'm certainly not saying it happened, or will happen, here. I'm saying it definitely happens.

This is why in regulated industries, there's an emphasis on "data minimization". Much like the principle of least privilege, but applied to whether you're letting your people or systems be exposed to it in the first place.

It's easy to follow a least-privilege policy if there's an actual technical control, not just an agreement, and even easier if the control is "I never had it, didn't derive it, and made sure I couldn't if I wanted to".

If you aren't collecting it for any use, even inadvertently, you can't retcon it into availability for alternative uses.


> Someone retroactively realizes they have the data, and then it does.

This simply isn't within the realm of reason.

Engineers at Meta have far more impactful problems to solve than attempting to reverse engineer the browsing habits of the 12 privacy-sensitive tech enthusiasts reading their engineering blog.

From a ROI/time perspective, it is far in the negative for a single junior Meta engineer to spend even 10-20 minutes investigating this. It literally is not worth anyone's time.


Can you explain?


An “accept” only cookie notice isn’t generally permissible in the EU.

They may be serving more compliant versions based on geolocation, though.

> To help personalize content, tailor and measure ads and provide a safer experience, we use cookies. By clicking or navigating the site, you agree to allow our collection of information on and off Facebook through cookies.


Many reasons:

- They are not asking for consent; there is just an OK button.
- They assume consent when you navigate further on the site, which is not valid consent.
- Consent needs to be for specific, well-defined purposes. “Help personalize content, tailor and measure ads and provide a safer experience” is three purposes in one, and none of them are well defined.
- They are probably already setting the cookies on your first request, before you have seen any information (did not check).


> Imagine 2007 and the following years already with the EU act in full effect. AppStore would be dead on arrival.

No, it wouldn't have been, as the DMA only applies to 'gatekeepers', and if you're new, you're simply not a gatekeeper. You need at least 45 million monthly active users and €7.5 billion in annual revenue for three years.

> So EU should change Apple’s and Google’s status from producer to provider of essential service like electricity for example.

That's the whole point of the DMA: designate these parties as 'gatekeepers', providing essential services ('core platform services' in terms of the DMA). Once you are, you have certain obligations that should allow proper interoperability with other/smaller parties.


> You need at least 45 million monthly active users and 7,5 billion of revenue for three years

Minor nitpick that does not really change the point but might provide context: these are the criteria for being automatically classified as a gatekeeper. You can be a gatekeeper even if you do not meet them, but then the EU needs to prove that you meet some other, more detailed criteria.


> Hotels are concerned that direct booking clicks are down as much as 30% since our compliance changes were implemented. These businesses now have to connect with customers via a handful of intermediaries that typically charge large commissions, while traffic from Google was free.

That's because your search engine results are a joke, not because of the DMA.

If I search for 'hotel <city>', I get an ad from Booking.com, then some hotel ad, then an ad from Trivago, then a Google map with hotels (makes sense, but all results there are sponsored by intermediaries), then Booking.com, then Booking.com again, then Expedia, then Tripadvisor, then Trivago.

If you only present me with sponsored results from intermediaries, then don't be surprised that people only click on sponsored results from intermediaries.


https://google.com/hotels

Is what I'm assuming they're talking about?

Also

https://google.com/flights

I use Flights all the time, I think mostly because I've not found any of the intermediaries all that good. Google Flights provides links to multiple places to buy; usually I click the link directly to the airline.

I've looked at hotels but rarely use it. Hotel websites, unless they're a major chain, are often pretty crap so booking.com is what I generally use.


Google flights is the only tool (I've seen) that offers so many helpful filter options. It's so easy to find the perfect itinerary. And the overall UX far surpasses anything else.


This is such a weird complaint. Google search hasn't taken you to the websites of individual hotels for a decade or more; maybe a few pages in it did, but the first page of results has been booking sites and ads for booking sites for a long time.

If the booking sites are such a huge issue for the hotels, which I can honestly believe, then don't use them, or do collective negotiations with them and demand better rates.

The complaint about the maps button not being on the search page is rather funny, because I haven't used it in ages. I've been using search engines that have a ! operator, so !gm or !maps, for 10+ years every time I needed a map, so I just didn't notice. While I haven't read the DMA rules, it does seem strange that they can't just link to Google Maps.


As far as I understand it, the DMA forbids them from exploiting their dominance in the search engine space to promote Maps. This is similar to the mobile space where users must be presented with a choice of their preferred browser and search engine.

I believe they would be legally okay if they presented users with a choice of their preferred Maps provider (e.g. Google Maps, OSM, Bing, or Apple Maps), but they've decided to implement it in the most obnoxious way possible.


Except that you can actually disable them.

https://support.signal.org/hc/en-us/articles/360007061452-Do...


Well that's new then. You used to not be able to.


You're talking about the Digital Markets Act (DMA) which will come into effect in the EU in about two weeks. It does exactly what you say, although Apple is still actively trying to sabotage it with a sloppy implementation.

Hopefully the US will follow some day.


> A country like the Netherlands has its own issues (mainly housing) but doesn't have the myriad of pain points you can find in Germany like Schufa

In the Netherlands we have BKR, which is less all-encompassing than SCHUFA, but also needed to be fined before giving proper right to access under the GDPR: https://edpb.europa.eu/news/national-news/2020/national-cred...


> Yeah, especially when in the case of the fine for consent about personal ads, the fine was for violations before the law was in place.

Do you have any source for that, or are you just making up things on the spot here? (I can help you, it's the latter)


Ben Thompson wrote exactly about my point in his newsletter. But thanks for your accusation.


It's great that you have a non-publicly available newsletter as a source, but if you read the DPC item [1] you'll see that it's just about the GDPR. What Ben probably does not understand is that the fine is not about the TOS change _before_ the GDPR came into effect, but about the fact that data was processed without a valid legal basis _after_ the GDPR came into effect, as the TOS change and an 'I accept' button were simply not sufficient.

[1]: https://www.dataprotection.ie/en/news-media/data-protection-...


Please see my comment below, where I quote Ben Thompson.


Can you provide a link or the full quote of the relevant section, so that we can see what’s going on here? My expectation is that he’s misunderstanding the situation.


To be fair, it is not 100% clear in my original post that I am not talking about this fine but about a previous one. But if you read it carefully, that is what I say. And here is Ben Thompson on 11 January 2023:

"In short, Meta can not make access to its personalized social media services contingent on accepting personalized advertising; moreover, the company was fined for having done just that, despite the fact their regulator agreed with them that that was acceptable.

I find this decision disturbing for two reasons. First, it seems unduly punitive that Meta can be fined a material amount for an approach that is not only reasonable on its face (more on this in a moment), but also one that its primary regulator agreed was appropriate. GDPR is not clear on this point, and it’s ridiculous that a company can be retroactively fined for not reading the minds of EU bureacrats.

Second, and more importantly, the fact that Meta must offer personalized social networking to users — which uses their data! — but cannot tie that to offering personalized ads — which uses their data in the exact same way, and without sharing or selling that data to advertisers — is a completely arbitrary attack on Meta’s ability to do business. Let me reiterate that point: serving a video or a post in your Facebook feed is no different from serving an ad; it is Facebook that is choosing what to serve you, not an advertiser, who has no access to users or their data. It’s all just bits, it just so happens that some of those bits make Meta money. That, apparently, is the crime here (and a callback to the Google Shopping case)."

Do you still think the EU is acting like a fair player, when in fact they are both a player and the referee? This type of behavior is exactly why there won't ever be any major innovation in the EU. And I live here, btw.


There is nothing in there about retroactively applying laws. If there is anything, it may be that the laws were(/are) unclear.

The Irish DPC states they never approved anything (though there have been discussions between various European data protection authorities and the EDPB during the investigation). See for example 2.44 or 2.46 of the report [1]:

> It is factually not the case that the Commission endorsed or approved of the Terms of Service and Data Policy of Facebook or indeed of any other organisation

and

> To the extent that Facebook seeks to rely on or has ever relied on any consultative process with the Commission in order to defend the lawfulness of a particular practice, this has been in error. More pertinently, for present purposes, Facebook makes no such argument in the context of this Complaint. This is because the Commission never provided any such approval in this case nor does it do so in the context of its engagement and consultation role more generally

If you look at the final decision on Meta, you'll see that most of the fine (80 + 70 = 150 million) is for the fact that Meta was not clear about what they were doing with users' data. Only the last 60 million is about the actual legal ground for the processing. So the part that Ben Thompson mentions essentially skips 70% of the fines.

And yes, the data protection authorities are acting like a fair player, and they are not the referee. We have courts for that (and this will find its way through the courts, no worries).

[1]: https://www.dataprotection.ie/sites/default/files/uploads/20...


Yes, it wasn't clear initially which fine you were writing about. But if the fine was referring to activity that occurred after the GDPR entered into force, it is inaccurate to say it was retroactive; it was simply that the Irish regulator misunderstood or misenforced the law, according to the courts. And for a company as sophisticated as Meta, that shouldn't be an excuse: they have far more legal expertise on staff than the Irish DPA.

And, no, nobody is forcing Meta to offer personalized social networking to users in the EU. Meta is free to decline to do so. The EU is simply forcing them to decouple consent to such a service from consent to receive personalized ads. It is a valid social policy choice to want to differentiate those processing purposes, even if Meta doesn't like it. I do.

To the extent that the EU is not acting like a fair player here, it is by disrespecting the democratic will of the EU population as expressed through their EU representatives by inadequately enforcing the GDPR.

I think it's good that the regulatory capture and the intentional Irish government policy of underfunding which currently apply to the Irish DPA are not as successful a "get out of jail free card" for non-compliance in the EU as equivalent circumstances are in the US. Many thanks to noyb.eu for filing complaints, and to the EU courts and the European Data Protection Board for taking them seriously.


But is it an optimisation to send the SVG contents in every request, instead of just letting the browser cache the image?


Yes, retrieval from the disk cache is not as fast as one would expect. Detailed article: https://simonhearne.com/2020/network-faster-than-cache/


Speed is not the only criterion. Using the network is a waste of energy and materials when the local cache would have been enough (even more so when nobody notices the perf difference).


Since the SVG is not inline but rather loaded externally, it is being cached as well. (Source: https://stackoverflow.com/questions/37832422/how-can-we-cach...)


That's a valid point. For subsequent loads in this case, the inline SVG is so small that, compared to the image tag it's replacing, the difference is pretty small.

But the other side of this is cache latency, which depends on the caching policy defined in the HTTP headers. For example, some modes require revalidating the cache with the server, which incurs a round trip even if the resource doesn't always have to be re-downloaded. If it's a fully offline cache, then as a sibling comment pointed out, disk caching is not free either; under some size threshold (which definitely applies here), inlining is going to be the fastest way to get an SVG rendered.
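
To illustrate the difference (a minimal sketch using Python's http.server with made-up paths, not the site's actual setup): "immutable" lets the browser reuse the cached SVG without contacting the server at all, while "no-cache" forces a revalidation round trip on every use:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    SVG = b'<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16"></svg>'

    class IconHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "image/svg+xml")
            if self.path.startswith("/immutable/"):
                # Cached for a year; no revalidation round trips on repeat loads.
                self.send_header("Cache-Control", "public, max-age=31536000, immutable")
            else:
                # Cached, but must be revalidated with the server before each reuse.
                self.send_header("Cache-Control", "no-cache")
            self.end_headers()
            self.wfile.write(SVG)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), IconHandler).serve_forever()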


When optimizing for lighthouse pagespeed metrics, yes.

