
> Sounds like a purely academic exercise.

Well, yes. It’s an academic research paper (I assume, since it was submitted to arXiv) presumably headed for academic journals/conferences/etc., so it’s a fairly reasonable critique of the authors/the paper.


Same in Denmark. Actually, it’s often needed to get from one auditorium across campus to another.


I’m genuinely curious here: if you are the trusted decision person, is it not accepted if you say “the decision is that we do not have enough information at this time to make a decision, thus we need to [research x/do a timeboxed poc/drill down on y part of system/etc]”?


> is it not accepted

I haven't been fired, if that's what you mean.

But I also just read an article calling out my non-committal cowardice.

I'm OK enough at managing up these days that you could weasel-word around it and say it doesn't apply to me. But it's a failure pattern of management that I've seen happen to more than just myself.


I think it does address the main problem. What he is saying is that multiple layers of security are used to ensure (in a mathematically, theoretically provable way) that there is no risk in sending the data, because it is encrypted and sent in such a way that Apple or any third party will never be able to read or access it. If there is no risk there is no harm, and that changes the calculus around ‘on by default’, opt-in/opt-out, notifications, etc.
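
For a flavor of the kind of math involved, here is a toy Python sketch of an additively homomorphic scheme (Paillier with tiny, deliberately insecure parameters; Apple actually uses BFV, a different scheme, so this is only an illustration of the general idea that a server can compute on data it cannot read):

    from math import gcd
    import random

    # Toy Paillier cryptosystem with tiny, INSECURE parameters, purely to
    # illustrate the homomorphic property: multiplying two ciphertexts
    # yields an encryption of the *sum* of the plaintexts, so a server can
    # aggregate values it can never decrypt.
    p, q = 17, 19
    n, n2 = p * q, (p * q) ** 2
    lam = 144                                    # lcm(p - 1, q - 1)
    g = n + 1
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)  # decryption constant

    def encrypt(m: int) -> int:
        r = random.randrange(2, n)
        while gcd(r, n) != 1:
            r = random.randrange(2, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c: int) -> int:
        return ((pow(c, lam, n2) - 1) // n * mu) % n

    a, b = encrypt(5), encrypt(7)
    server_sum = (a * b) % n2   # the "server" combines ciphertexts blindly
    assert decrypt(server_sum) == 5 + 7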

The problem with this feature is that we cannot verify that Apple’s implementation of the math is correct and free of security flaws. Everyone knows there are security flaws in all software, and this implementation is not open (i.e. we cannot review the code, and even if we could, we couldn’t verify that the published code is the code actually used in the iOS build). So we have to trust that Apple did not make any mistakes in their implementation.


Your second paragraph is exactly the point the article makes as the reason this should be an informed choice and not something on by default.


If you don’t trust Apple to do what they say they do, you should throw your phone in the bin because it has total control here and could still be sending your data even if you opt out.


Bugs have nothing to do with trust. You can believe completely that someone’s intentions are pure and still get screwed by their mistake.


Oh yeah, the well known "blind trust" model of security. Never verify any claims of any vendor! If you don't trust them, why did you buy from them?!


As someone with a background in mathematics I appreciate your point about cryptography. That said, there is no guarantee that any particular implementation of a secure theoretical algorithm is actually secure.


There is also no guarantee that Apple isn't lying about everything.

They could just have the OS batch uploads until a later point, e.g. when the phone checks for updates.

The point is that this is all about risk mitigation, not elimination.


> There is also no guarantee that Apple isn't lying about everything.

And at that point all the opt-in dialogs in the world don't matter and you should not be running iOS but building some custom Android ROM from scratch.


> There is also no guarantee that Apple isn't lying about everything.

Other than their entire reputation


A reputation has to be earned again and again.


Maybe your threat model can tolerate an "oopsie woopsie". Politically exposed persons probably cannot.


If you don't personally write the software stack on your devices, at some point you have to trust a third party.


I would trust a company more if their random features sending data are opt-in.

A non-advertised feature, not independently verified, that reports on image contents? I would prefer independent verification of their claims.


Agreed, but surely you see a difference between an open source implementation that is out for audit by anyone, and a closed source implementation that is kept under lock & key? They could both be compromised intentionally or unintentionally, but IMHO one shows a lot more good faith than the other.


No. That’s your bias as a nerd. There are countless well-publicised examples of ‘many eyeballs’ not being remotely as effective as nerds make it out to be.


Can you provide a relevant example for this context?


There was an entire body of research at the University of Minnesota, and the “hypocrite commits” weren’t found until the authors pointed people to them.

https://www.theverge.com/2021/4/30/22410164/linux-kernel-uni...


How long did the Log4j vulnerability exist?

https://www.csoonline.com/article/571797/the-apache-log4j-vu...

What was the other package that had the mysterious .?


And yet they were found. How many such exploits lurk unexamined in proprietary codebases?


Yet you say this like Apple or Google or Microsoft has never released an update to address a security vuln.


Apple[1], Google[2], and Microsoft[3] you say?

You say this as if being shamed into patching the occasional vuln is equivalent to security best practices.

Open code which can be independently audited is only a baseline for trustworthy code. A baseline none of those three meet. And one which by itself is insufficient to counter a “Reflections on Trusting Trust”-style attack. For that you need open code, diverse open build toolchains, and reproducible builds. None of which is being done by those three.
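
As a minimal sketch of the reproducible-builds leg of that (hypothetical file paths; the property being checked is that independent toolchains produce bit-for-bit identical artifacts from the same source):

    import hashlib

    def sha256(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical artifacts: the same source built by two independent toolchains.
    a = sha256("toolchain-a/app.bin")
    b = sha256("toolchain-b/app.bin")
    print("reproducible" if a == b else "mismatch: a toolchain may be compromised")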

Are you getting your ideas about security from the marketing department?

1: https://arstechnica.com/security/2024/03/hackers-can-extract...
2: https://www.wired.com/story/google-android-pixel-showcase-vu...
3: https://blog.morphisec.com/5-ntlm-vulnerabilities-unpatched-...


Go ahead and put that cup of Kool-Aid down for a minute. There are so, so many OSS packages out there that have never been audited. Why not? Because people have better things to do. How many packages have you audited? Personally, I don't have the skillz to do that. The people who do expect to be compensated for their efforts. That's why so many OSS packages have vulns that go unnoticed until after they are exploited, which is the same thing as closed source.

OSS is not the panacea that everyone touts it to be.


> There are so, so many OSS packages out there that have never been audited. Why not? Because people have better things to do.

I'm not aware of any major open source projects that haven't experienced some level of auditing. Coverity alone scans everything you're likely to find in a distribution like Debian or Fedora: https://scan.coverity.com/o/oss_success_stories

> How many packages have you audited?

Several on which I depend. And I'm just one pair of eyeballs.

> Personally, I don't have the skillz to do that.

Then why are you commenting about it?

> OSS is not the panacea that everyone touts it to be.

I don't know who's touting it as a panacea; seems like a strawman you've erected. It's a necessary prerequisite without which best practices aren't possible or verifiable.


The developer-to-user trust required in the context of open-source software is substantially less than for proprietary software. This much is evident.


I’m stealing your information.

Hey! That’s wrong.

But I promise I won’t do anything wrong with it.

Well ok then.


This is still a very dishonest representation of what’s actually happening.


You're welcome to check their implementation yourself:

https://github.com/apple/swift-homomorphic-encryption


Hypothetical scenario: Theo de Raadt and Bruce Schneier are hired to bring Apple products up to their security standards. They are given a public blog, and they are not required to sign an NDA. They fix every last vulnerability in the architecture. Vladimir Putin can buy MacBooks for himself and his generals in Moscow, enable Advanced Data Protection, and collaborate on war plans in total confidence.

Where are the boundaries in this scenario?


Theo de Raadt is less competent than Apple's security team (and its external researchers). The main thing OpenBSD is known for among security people is adding random mitigations that don't do anything because they thought them up without talking to anyone in the industry.


I mean, half the reason the mitigations don't do anything is that nobody actually cares to target OpenBSD.


Freedom of speech cannot exist without private communications. It is an inalienable right; therefore privacy is as well.


I am pretty sure that if we had those people in charge of stuff like this, there would be no bar above which "opt in by default" would happen, so I am unsure of your point?


Except for the fact (?) that quantum computers will break this encryption, so if you wanted to you could hoard the data, wait a few years, and then decrypt?


Quantum computers don't break Differential Privacy. Read the toy example at https://security.googleblog.com/2014/10/learning-statistics-...

>Let’s say you wanted to count how many of your online friends were dogs, while respecting the maxim that, on the Internet, nobody should know you’re a dog. To do this, you could ask each friend to answer the question “Are you a dog?” in the following way. Each friend should flip a coin in secret, and answer the question truthfully if the coin came up heads; but, if the coin came up tails, that friend should always say “Yes” regardless. Then you could get a good estimate of the true count from the greater-than-half fraction of your friends that answered “Yes”. However, you still wouldn’t know which of your friends was a dog: each answer “Yes” would most likely be due to that friend’s coin flip coming up tails.
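
A quick Python sketch of that coin-flip scheme (a form of randomized response) shows the aggregate is recoverable while each individual answer stays deniable:

    import random

    def answer(is_dog: bool) -> bool:
        # Fair coin: heads -> answer truthfully, tails -> always say "yes".
        return is_dog if random.random() < 0.5 else True

    # Simulate 100,000 friends, 30% of whom are (secretly) dogs.
    friends = [random.random() < 0.3 for _ in range(100_000)]
    answers = [answer(d) for d in friends]

    # E[yes fraction] = 0.5 * 1 + 0.5 * true_fraction, so invert:
    yes_fraction = sum(answers) / len(answers)
    print(2 * yes_fraction - 1)   # ~0.30, yet no single "yes" outs a dog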


> Except for the fact (?) that quantum computers will break this encryption […]

Quantum computers will make breaking RSA and Diffie-Hellman public-key encryption easier. They will not affect things like AES, nor things like hashing:

> Client side vectorization: the photo is processed locally, preparing a non-reversible vector representation before sending (think semantic hash).
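
To make the quoted idea concrete, here is a hypothetical sketch (not Apple's actual pipeline) of a non-reversible vector representation; a lossy projection maps many distinct images onto each output vector, so the photo cannot be reconstructed from what is sent:

    import numpy as np

    # Hypothetical "semantic hash": project a 64x64 RGB image (12,288 values)
    # down to 64 dimensions. The mapping is many-to-one, so the pixels cannot
    # be recovered from the vector. A real system would use a trained
    # embedding model rather than a random matrix.
    rng = np.random.default_rng(0)            # fixed seed: same on every device
    PROJECTION = rng.standard_normal((64, 3 * 64 * 64))

    def semantic_vector(pixels: np.ndarray) -> np.ndarray:
        v = PROJECTION @ pixels.reshape(-1).astype(np.float64)
        return v / np.linalg.norm(v)          # keep only the direction

    image = rng.random((64, 64, 3))           # stand-in for a decoded photo
    print(semantic_vector(image).shape)       # (64,) -- all that leaves the device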

And for RSA and DH, there are algorithms being deployed to deal with that:

* https://en.wikipedia.org/wiki/NIST_Post-Quantum_Cryptography...


Quantum computers don't meaningfully exist yet and won't for a while, and once they do, they still won't be able to crack it. Quantum computers aren't some magical "the end is nigh" gotcha for everything, and unless you're genuinely deep into the subject, the bigger question to ask yourself is why a hypothetical future technology felt so important that you just had to post this comment.

Anyway, back to the subject at hand; here's Apple on that subject:

> We use BFV parameters that achieve post-quantum 128-bit security, meaning they provide strong security against both classical and potential future quantum attacks

https://machinelearning.apple.com/research/homomorphic-encry...

https://security.apple.com/blog/imessage-pq3/


Good question, I’m curious as well. I love the self-hosting-as-a-plan approach. And it makes sense that it’s a lifetime thing… well, maybe: is it a lifetime price covering all future versions and features? Then maybe 800 makes some sense, but still, as you also say, that’s a pretty hefty stack to drop as a private person. I’m also curious why there isn’t a lower price, e.g. 100 or whatever, that only covers minor updates (not major versions). I would think that would be more digestible for private persons.

“For employers” as a perk is also a great idea!


One screen for IDE (center)

One screen for documentation/browser

One screen for running the application being developed.

Please, go ahead and explain to me how I don’t know what I’m doing.


I normally work with a 40". I'm using Hammerspoon to divide the screen, but I usually end up with one main window plus some smaller windows at the side, cmd-tabbing between them. How do you manage the distraction of so much information at the same time? Do you switch between apps? Use the mouse? Don't you lose track of where the focused window is?


There are always good exceptions. But it's a rare sight.


If you read the article you’ll see the judge’s explanation as to why last minute was not acceptable.


See, now this is proper science. It’s with pleasure that I note “poopgoblin” has a non-zero frequency.


Generally, nope. IANAL, but fundamentally there are personal rights you just cannot sign away. E.g. even if you sign an employment contract explicitly saying you get zero vacation (just an example), it simply won’t be valid. Another big difference in general is that you cannot just sue people for whatever reason like in the States. A more relevant example here is the GDPR: nothing written in an EULA can “release” the company from basic GDPR rights and principles.


It’s disgusting that everything always has to be exploited for profit. And then companies use part of that money to lobby (i.e. effectively pay off) politicians to let them do it. And it’s even legal. Simply disgusting.

