> But if you are going to, my understanding is that physical documents are still the safest option.
OpSec-wise, it's unclear why physical delivery would be more secure: mail carriers have non-public methods of identifying anonymous/fake senders, and there's printer & content steganography, device tracking, biological forensic identification, etc.
Why, to you, is physical delivery the more secure option?
Printer marks only apply to commercial laser printers, and only some brands. You are probably safe when printing at home.
There are still plenty of mailboxes. No need to be anywhere near another human.
Leave the phone at home. Walk to the mailbox at night. Drive close to it, but not right up to it, if need be.
You aren't going to get much except fingerprints from a document. Wear gloves when handling the document if that is a threat.
Remember your adversary is probably not all-powerful and all-knowing. They are willing to expend some cost to chase you, and you only need to be more expensive to unmask than that amount.
We STILL live in the world where the most common security breach is having no password or a re-used password. Documents are leaky. Metadata kills.
I think you are very wrong about a couple of things here. According to EFF: "IT APPEARS LIKELY THAT ALL RECENT COMMERCIAL COLOR LASER PRINTERS PRINT SOME KIND OF FORENSIC TRACKING CODES, NOT NECESSARILY USING YELLOW DOTS." [1]
And if by "commercial" you mean "for the business market, not the consumer/home market", then that is also wrong. Plenty of consumer grade laser printers, including mine, print yellow tracking dots. If the EFF is correct, essentially all laser printers sold now put some sort of tracking info on the page.
Printer identification has only been seen in color lasers (and some weird inkjets).
Lasers, especially cheaper ones, naturally DO have variances, etc. But there isn't enough data capacity in a b&w laser print to leave a serial.
But let's say that you mess up and do leave a serial number... that's not useful. How many people register their printers? How many people buy printers from third parties or, gasp, second hand?
The printer serial code is useful only when you have the printer already and want to ask if it produced a given document. "This came from a Xerox printer with serial number xyz" isn't useful information for identifying a leaker from your corporate staff.
If you are worried about it, buy a crappy thrift store printer and donate it somewhere else.
For starters, if somehow the leaker becomes a person of interest and still has the printer that matches the serial number present in the leaked documents, that would obviously link them to the leak.
The claim that data cannot be hidden in black & white laser prints is obviously false; for example, a printer could intentionally embed information by making small algorithmic changes to the fonts that are unnoticeable to an untrained human eye.
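As a purely hypothetical sketch of how such font-level embedding could work in principle (not any real printer's scheme): encode a serial number as repeated tiny per-character spacing offsets, then recover it by majority vote.

```python
# Hypothetical illustration only: embed a serial number as tiny per-character
# spacing offsets (0 or 1 layout unit), invisible to an untrained eye.

def encode_serial(text: str, serial: int, nbits: int = 16) -> list[int]:
    """Return one spacing offset (0 or 1) per character, carrying the serial."""
    bits = [(serial >> i) & 1 for i in range(nbits)]
    # Repeat the bit pattern across the page; a real scheme would add
    # synchronization marks and error correction on top.
    return [bits[i % nbits] for i in range(len(text))]

def decode_serial(offsets: list[int], nbits: int = 16) -> int:
    """Recover the serial by majority vote over each repeated bit position."""
    votes = [[0, 0] for _ in range(nbits)]
    for i, off in enumerate(offsets):
        votes[i % nbits][off] += 1
    return sum(1 << i for i, (zeros, ones) in enumerate(votes) if ones > zeros)

offsets = encode_serial("x" * 200, 0xBEEF)
recovered = decode_serial(offsets)  # recovers 0xBEEF
```

The repetition is what makes it robust: even if some characters are misread, the majority vote per bit position still recovers the serial.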
Again, sure, it's possible this is overkill, but then so is SecureDrop. Anyone that's worried about OpSec needs to understand their threats and related risks, then decide what to do, not just do X because Y said so. If mailing in documents were safer, why is that not presented as an alternative?
> For starters, if somehow the leaker becomes a person of interest and still has the printer that matches the serial number present in the leaked documents, that would obviously link them to the leak.
Your adversary doesn't need to be logical. You assume they need good evidence that is true - they don't. They can decide they don't like your face and that makes you guilty (and that has been the default for a lot of human history). They can also decide you are just nervous. Or that you seem like the leaking sort. They can jump to whatever conclusion they want, including the "let's hit them with a $2 wrench until they admit guilt".
If they are in your house and sampling your printer, they are also going to pull your electronic storage, physical storage, etc. SecureDrop doesn't help you here either - and a bunch of demagnetized hard drives is a pretty smoking gun.
> a printer could intentionally embed information by making small algorithmic changes to the fonts that are unnoticeable to an untrained human eye.
The printer could have a secret implant, or broadcast a VHF beacon of what it prints, or have left an imprint on a second page of paper or....
But those things are unlikely. That you can theoretically think of some potential gotcha is not "opsec". That isn't risk analysis. That is you playing secret agent. That's fine - but don't confuse it with risk analysis.
> Anyone that’s worried about OpSec needs to understand their threats and related risks
Correct, the REAL actual threats, and the CHANCE of those threats happening.
You cannot zero out a risk. Risk does not go to zero. You can only reduce a risk to a mission-tolerable degree.
> possible this is overkill
The point is to achieve the goal. "OpSec" helps reduce risk. Let me repeat this.
There will always be risk. You cannot remove the risk. The goal is not to remove the risk. The goal is to reduce the risk to a tolerable degree such that the goal can be achieved.
"But I can hallucinate a theoretical attack!" - Great, write a spy thriller. That has no bearing on "Opsec" or even risk analysis. At the very least you have to show the attack CAN happen, your threat actor CAN (theoretically) execute it, and ideally they are willing to (resourcing).
Give people practical advice. Prepare them for reasonable scenarios.
You're assuming the leaker is not actively being monitored, that mail carriers' non-public methods of identifying anonymous/fake senders don't overlap with the sender's OpSec failures, that printer & content steganography doesn't apply, that unusual device-tracking counter-surveillance techniques aren't a red flag, that the leaker understands countermeasures to biological forensic identification, etc.
Point is OpSec is HARD — and telling someone “just physically deliver it” is a recipe for failure.
The theory that if you ever leak something that you are going to be positively identified by CSI style forensic science and it is basically impossible to leak something without spy agency level methods is, itself, a tool to stop leaks.
We know your printer! We track you everywhere! We have your fingerprints! We can pull your DNA from a letter! It is beneficial to the powers that be that ordinary citizens believe they are good at their jobs and omniscient.
They are not.
Yes, to a great degree, this matters on your adversary. The truth is that most whistleblowers are not Edward Snowden. They are whistleblowing on their employer, who is probably a private company or small government org. The bar to exceed detection does not require you to be James Bond or to understand quantum cryptography. It requires gloves, a cheap printer, and maybe a trip to the thrift store and the post box.
If your adversary is the NSA/FBI/KGB/Whoever, well, you know, plan accordingly. But that probably isn't your adversary. Your adversary is probably a mediocre IT security company that has trouble getting their techs to change their passwords and struggles to analyze whatever data they do collect from client endpoints.
Don't underestimate your adversary, but don't overestimate them either.
Risk is not binary. It's a statistical function predicting the likelihood of an outcome.
We can bucket that risk in most cases. There are Schelling points.
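To make the "risk is a statistical function" point concrete, here's a toy expected-cost sketch; every number in it is invented purely for illustration:

```python
# Toy sketch: risk as expected cost, not a yes/no. All probabilities and
# multipliers below are made-up illustrations, not real estimates.

def expected_cost(p_detect: float, cost_if_caught: float) -> float:
    """Risk as a statistical function: probability times consequence."""
    return p_detect * cost_if_caught

p = 0.10   # invented baseline chance of being identified
p *= 0.5   # wear gloves: assume it halves forensic linkage
p *= 0.5   # thrift-store printer: assume it halves printer linkage

# Risk never reaches zero; the goal is only to push it below a
# mission-tolerable threshold.
tolerable = 0.05
acceptable = expected_cost(p, cost_if_caught=1.0) < tolerable
```

Note that each countermeasure multiplies the residual probability down but never zeroes it, which is exactly the "reduce to a tolerable degree" point made elsewhere in the thread.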
People, regular boring unsophisticated people, frequently successfully mail hardcore drugs through the postal system. This is with the postal system having an actual objective to stop them. Overwhelmingly the postal service fails at this, and even when it does interdict the drugs, it almost never identifies the sender. How much more quaint is it to send documents.
The goal of "OpSec" is to achieve the goal. If "OpSec" acts as a forever barrier because it can't successfully predict real life risk, you're doing it wrong.
In my experience you're wrong. Most OpSec failures are not magical and don't require massive resources; they come down to a single minor mistake. Beyond that, OpSec is a skill that requires practice. If you only apply it when needed, the odds of failure increase significantly.
Since you claim to know so much about what national mail systems know about a piece of mail, what specifically is your understanding of what they know?
Mail enters postal services from multiple points. This can be postboxes, collection and so on, but also other vendors, non-national post offices (universities, big corps), and international entries.
These are batched and scanned (OCR applied), with the method of entry tagged to the mail piece. Pieces are risk-scored using an algorithm unknown to us. Pieces flagged as suspicious can be x-rayed, CT scanned, or imaged (e.g., TSA-style scanning) and may also be opened by the postmaster.
There's a whole long bit of what happens once a package is found to actually be contraband that I am going to skip because it's long.
Any piece of mail has attached to it the information printed on it, its weight, its dimensions, and its entry point into the system.
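As an illustration, the metadata described above amounts to a small record per piece; the field names and scoring rules below are my invention, not any real postal schema:

```python
# Illustrative sketch of per-piece postal metadata. Field names and the
# scoring heuristics are invented; the real risk algorithm is unknown to us.
from dataclasses import dataclass

@dataclass
class MailPiece:
    ocr_text: str                    # OCR'd addresses / surface text
    weight_g: float
    dims_mm: tuple[int, int, int]
    entry_point: str                 # postbox, counter, corporate mailroom, ...
    risk_score: float = 0.0

def score(piece: MailPiece) -> float:
    """Stand-in for the unknown risk-scoring step described above."""
    s = 0.0
    if not piece.ocr_text.strip():
        s += 0.5   # no readable sender/recipient info is suspicious
    if piece.weight_g > 500:
        s += 0.3   # heavy pieces get more scrutiny in this toy model
    return s

piece = MailPiece(ocr_text="", weight_g=600.0,
                  dims_mm=(300, 200, 20), entry_point="postbox")
piece.risk_score = score(piece)
```

The point of the sketch is only that each piece carries enough structured metadata for automated triage, which is what makes the later pattern-analysis concerns plausible.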
Your collection point will potentially capture more materials, depending. Mail offices are government offices and behave like it - security footage, ID checks when doing business, cameras typically behind the desk to capture faces. That footage is typically housed locally unless there is a reason to send it. There is a standard retention policy but variances exist per office.
Mail from other points is much, much less secured and may have no additional security. Mailboxes, in the US especially, do not get government operated video monitoring. Private parties such as HOAs or cities may add their own.
Vehicles have a long list of identifiable traits, many of which can be systematically monitored; not to mention that if an individual is actively being watched, it's now very easy to put a variety of tracking devices on a vehicle that are very hard to find.
When mailing items anonymously, never mail identifiable items from the same point, especially in the same batch. Never use the same drop. Be aware that every anonymous item is traceable to its point of entry. Numerous anonymous items, mailed over time, create a pattern that, when combined with other metadata (for example, mobile phone records), might be identifiable.
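The pattern-analysis risk above can be shown with a toy join between drop metadata and hypothetical phone-location records; all names and data are invented:

```python
# Toy illustration of metadata correlation: several "anonymous" drops whose
# entry points and dates all intersect one phone's location history single
# out the sender via a simple join. All data below is invented.

drops = [
    ("box_17", "2024-03-01"),
    ("box_17", "2024-03-08"),
    ("box_42", "2024-03-15"),
]

# Hypothetical (entry_point, date) pairs where each phone was observed.
phone_pings = {
    "alice": {("box_17", "2024-03-01"), ("box_17", "2024-03-08"),
              ("box_42", "2024-03-15")},
    "bob":   {("box_17", "2024-03-01")},
}

# Anyone present at every drop point on every drop date is a suspect.
suspects = [who for who, pings in phone_pings.items()
            if all(d in pings for d in drops)]
```

One drop narrows nothing; three drops that all intersect a single phone's history do, which is why varying the drop point and leaving the phone at home both matter.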
I realize you believe the degree to which I look at risks is unnecessary, but to me, it's irresponsible to neglect mentioning how complex good OpSec is — and how frequently minor mistakes, especially over an extended duration, add up.
If you deliver physical documents, don’t forget that printers have yellow dots to track the originating printer. Nowadays it’s certainly possible to track the credit card that purchased them.
Physical documents require you either to send them somehow and hope they don't get noticed, or to personally deliver them on the assumption you are not being tracked 24/7, which is a real possibility if you have anything of value for these news organizations. Guarantees provided by E2EE, among other things, do not exist in real life.
That’s a strange response. Even if people and organisations generally agree with you and your campaign it does not make your campaign their campaign. They likely have other things to do that are more important to them so they do those things.
FYI: Advice given in that guide is horrible for using Signal anonymously; burner phone OpSec is hard; Google Voice requires phone number for signup; landlines are easy to trace.
The core point is that the survey asks about prior experience using SecureDrop, then uses Signal, email, and/or an HTTPS form to communicate. Why? It feels like, at the very least, SecureDrop should explicitly state why they opted to use alternatives instead of their own service.
Google requires a phone number for most users to create a new account; there are currently ways around this, but for obvious reasons, I have no intention of sharing how to do this.
Google Voice has always required a phone number for signup, since its legacy design was to forward calls. When creating the account you’re able to allow Google to generate a Google Voice number for the account, disconnect the setup number, and use Google without a phone number.
Why, though, go to all that trouble when Signal just needs a one-time verification from a throwaway number; at which point Signal allows the user to lock out future registrations from that number for as long as the Signal account is active, per their definition.
Signal supports anonymous connections, anonymous user accounts, secure operating systems, etc. I'm not saying it's easy, but neither is securely using SecureDrop, to my knowledge. OpSec is hard by definition; if it were easy and cheap, countermeasures, including laws banning such activities, would rapidly appear.
I agree SecureDrop is complex (in fact I mentioned that in another comment). It's way more complex than sending a file with Signal or other modern privacy apps, but at least all parts are private and secure (as far as we know) so it's up to the user to not make mistakes.
With Signal we know that Signal itself may be compelled to provide information about its users and I think that's why they wouldn't recommend their application for such use cases.
Again, Signal supports anonymous connections, anonymous user accounts, secure operating systems, etc. If anything, it's more secure than SecureDrop because it doesn't require using Tor, which is a huge red flag and whose odds of being secure appear questionable at best. Also, assuming the user uses an anonymous connection, an anonymous user account, a secure operating system, etc., Signal would have no information it could be compelled to provide.
It's not; I have no affiliation with the author of that post. I assumed that was obvious when I said "here's one author's" in the prior comment.
Please respond to the question — and review HN’s guidelines, specifically:
> When disagreeing, please reply to the argument instead of calling names. Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith. Eschew flamebait. Avoid generic tangents. Omit internet tropes. Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
If you're curious, I found the link via a simple Google search, since I wanted to quickly check whether there was an obvious relevant use case I was not aware of, and after reviewing it I thought it might be of use in your reply:
Strange; what was the difference? The link has been exactly the same, and it is the same link Google provided. Regardless, you made a claim without any explanation, and both I and another user have requested clarification on why the technology you recommended is relevant.
Using mobile, I received a page with a big photo of the author, surrounded by blurb about him, all sorts of big "login with" overlays, and advertisements.
Obviously, I jumped to conclusions too quickly, as my second attempt after reading your comment presented a slick & clean page with relevant information (and no signup/login overlays filling half the screen). Hence the "apologies" in the message above.
Regarding your inquiry: because it allows the original author to perform mutations and calculations on (your) data carried or distributed by untrusted third parties.
This way someone could for example provide updates while being at less risk.
Various asymmetries between the amount of data dumped and the amount of resulted output could also be helpful in certain niche scenarios.
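As a toy illustration of that property (deliberately insecure, just showing computation on ciphertexts by a party that never sees the key or the plaintexts):

```python
# NOT a real or secure scheme - a keyed-shift toy that happens to be
# additively homomorphic, used only to illustrate the concept above:
# an untrusted third party can compute on ciphertexts it cannot read.
import random

N = 2**61 - 1  # modulus for the toy scheme

def keygen() -> int:
    return random.randrange(1, N)

def enc(x: int, key: int) -> int:
    return (x + key) % N

def add_ciphertexts(cts: list[int]) -> int:
    # The untrusted party runs this; it needs no key and learns nothing
    # about the plaintexts in this toy's intended usage.
    return sum(cts) % N

def dec(ct_sum: int, key: int, count: int) -> int:
    return (ct_sum - count * key) % N

key = keygen()
data = [12, 30, 58]
total = dec(add_ciphertexts([enc(x, key) for x in data]), key, len(data))
# total == 100, computed without the third party ever seeing 12, 30, or 58
```

Real homomorphic encryption schemes support this kind of ciphertext-side computation with actual security guarantees, at the steep performance and complexity costs discussed below.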
(Appreciate the clarifications; yes, LinkedIn is randomly highly aggressive to users, gatekeeps content, etc.)
To me, while I agree such a use case might make sense in a very narrow context, homomorphic encryption currently supports a limited subset of the resources normally available on a computer, and even basic operations on a small amount of data take a very long time, add a lot of complexity, require custom information formatting, etc.
As such, in my opinion, unless I am missing something, homomorphic encryption would be a poor fit for a use case similar to SecureDrop's.
Thanks, I appreciate the depth of your response and, taken at face value, the efforts to ensure alternatives exist for secure private exchanges. I will take a look at the audits/auditors.