
I have a lot of thoughts on this, having worked on security teams, and now running a company.

1. Employees and customers are who the company should be serving. It isn't "employees at the cost of customers", or vice versa. If your business can't do both it shouldn't exist. In this case the thought was "It's worth making employees upset because this addresses a real customer concern".

2. Phishing tests are silly. You should just assume someone will get phished. If you want to do trainings, do trainings. Or, better yet, make phishing pointless - at my company anything important is gated by a U2F token, among other things. We are simply 'immune' to most types of phishing - the one major source left being wire transfer fraud, which is fairly easy to avoid.

We only do phishing training for compliance purposes.

3. This is the wrong kind of security. It's "blame the user" security. It's a silly, backwards, outdated, ineffective attitude.

If the goal is "Make sure people report phishing", you can do that without trickery. Just send an email and request that users report it to whoever is responsible - you'll soon find who does or does not know how to do so.

4. (Editing in) Security teams should be aware of how their actions affect the company. This phishing test has drawn public scrutiny onto the company - that's a disaster. A security team is about managing risk, and here they've instead subjected the company to concrete, public criticism. This was simply the wrong call.

Having trust in your security team is really important. Security teams need to do outreach, they need to be friendly, and be people who everyone feels good about going to with a problem. Building animosity for the sake of phishing protection is far more dangerous.

> If this was a real phish they'd have leaked real data. Perhaps even your data.

That's the security team's problem, not the victims'!




The term "defense in depth" comes to mind.

Of course, ideally the IT team has screened out the phishing attempt and secured systems as much as they can.

If they do it perfectly, then your comment applies and nothing else matters.

If, however, the security team cannot be perfect, some amount of bad stuff will land with the users. This seems realistic.

So training people to be security-minded is useful. It's the next line of defense.

How do you train your users? "Just send an email and request that users report it to whoever is responsible" as you mention is one thing. But you can also imagine someone complying with that but then falling for a well crafted phish (that happened to me, as I mention elsewhere in this thread.)

Being confronted with falling for something makes people take the threat more seriously. It takes someone's attitude from "it can't happen to me" to "oh, it did happen to me."

>> at my company anything important is gated by a U2F token, among other things. We are simply 'immune' to most types of phishing

I don't know your business' threats but that seems naïve. Imagine your sales team gets a phish email asking them to list your biggest customers, including contact info and revenue "so that we can send them an appropriate thank you gift for the holidays." If someone replies to this/fills out the form, you have leaked critical information even though nobody has penetrated your system (so your tokens don't help.) Even if this type of thing isn't a problem for your business, you can imagine it is for many.


> The term "defense in depth" comes to mind.

For me the term that comes to mind is: Security at the cost of quality of life.

Sometimes security is redundant; that is fine. Sometimes it is unnecessary; that can be fine as long as it is not harmful. Phishing training should be redundant at worst and unnecessary at best. But this type of phishing training definitely has negative effects on the trainees. You are invoking a sense of failure and unnecessary stress. Some people will have underlying stress issues, perhaps they are having a bad day, perhaps they are experiencing PTSD, and then seeing they failed a phishing test could be disastrous. Definitely not worth putting your workers at that risk for supposed (and unproven) security.


> The term "defense in depth" comes to mind.

Perhaps my least favorite phrase in security, used constantly to justify weak, meaningless, or harmful work as "another layer".

U2F is just one layer. It's a real, meaningful layer - it addresses real threats in a non-phishable way. Gating access by device, acls, etc, or locking down sharing permissions, are other real and meaningful layers.

> some amount of bad stuff will land with the users

That's the security team's problem to solve.

> But you can also imagine someone complying with that but then falling for a well crafted phish (that happened to me, as I mention elsewhere in this thread.)

I can just assume that one person will fail the phishing test already, and work from that angle. Way better than assuming I can teach an entire company to be untrickable.

> Is there no data that humans can leak out by entering it in the wrong place?

Wire transfers were one example - there are others, and we have other ways to mitigate those without assuming users should bear the responsibility.

By 'most types' I mean either phishing for credentials or as a delivery mechanism for malware - these would both be extremely difficult to do, or otherwise mitigated by other policies.

> but you can imagine that in many businesses there is.

That's their security team's problem to solve.

Feel free to train users, I have no problem with it. We do it for compliance purposes, but it's also a fun opportunity to engage with people about security, including meaningful security. And you do want people to be able to spot and report phishing, it's just not something you should ever rely on, or compromise for.


Note also that fake phishing emails are not the only way to train people against phishing. Arguably, traditional training (i.e. explaining and showing the trainee examples in a setting where the trainee knows it is a demo) is both a better way to teach about the risk and the protections, and is less harmful.

I used to work as a lifeguard and we would conduct training scenarios at least every month. And we would never do live training (i.e. a training scenario where the trainee doesn't know it is training). There are several reasons. First is safety: you would be putting workers at unnecessary risk. Second is stress, and third is that there is little evidence that anyone would walk away from such training having learned anything.

So if you believe phishing is a serious threat to your security, that is still no excuse to deliver fake phishing emails to your workers.


I can't tell, but it seems like you have unrealistic expectations for security teams. On the one hand, you handwave about how things are 'their problem', and on the other you dismiss standard methods and models as baseless self-justification.

I would turn down a position reporting to someone with this combination of attitudes - I'm pretty sure my hands would be tied, and I'd only be there to take the blame when the inevitable happened.


I've worked on security teams or for security teams for my entire career.

Yes, some things are the security team's problem - as in, security teams are responsible for managing risk. My expectation is for them to do so.

Again, you can perform phishing tests, but I think they're mostly a waste of time, a terrible substitute for real mitigations, and they should never come at the cost of your employees' sanity - a security team must build trust with other teams first and foremost, not burn it because "real attackers are mean too".

> dismiss standard methods and models

My opinions are hardly controversial.


> a security team must build trust

I do agree with this part.


> dismiss standard methods and models as baseless self-justification

I would argue that live training designed to set the trainee up for failure is non-standard.

A standard training has the trainee knowing they are in a training situation. I have never seen this type of training used for anything other than phishing. And before you say "fake fire drill": no, I have never seen those outside of the movies, and I would guess that a Simpsons-type situation where you actively put workers at risk is the reason for that.


I can't tell what your thesis is here.

I agree you can't expect people to provide full defense, and it doesn't sound like you disagree that helping people act more securely is important.

In my example, there's a difference between whether one sales person leaked their business numbers, or the entire 100 person department did. You train users to minimize the vulnerability even if you can't fully solve it.

If you agree that far - then I am not sure where we're disconnecting on this question.


I would imagine we're disagreeing on the "at what cost" part?

As in, is the cost of:

1. Losing the trust of your coworkers

2. Causing public reputation damage

3. Potentially harming coworkers emotionally

worth the gain of having a slightly more effective phishing training? I would argue no.

I would also say that it isn't nearly as important as implementing other measures - U2F being a big one that I'd mentioned, but there are plenty of others. It's certainly not where I'd recommend anyone start.


You should not assume that U2F makes you 'immune' to most types of phishing.

If your assumption is that users will fall for phishing tests then it follows that they will present their U2F token or simply take some action on behalf of the attacker directly.


The point of U2F is that they can't do that. For example, if I send you a fake Google login page, you can't "trick" me into entering my token - it won't work.

Here's a good blog on the topic: https://dropbox.tech/security/introducing-webauthn-support-f...
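To make that concrete, here's a minimal sketch of the client side of a WebAuthn/U2F login. The /webauthn/* endpoints and the response shape are assumptions for illustration, not any particular product's API. The key point: the browser itself records the page origin in the signed clientDataJSON, so an assertion produced on a look-alike phishing domain won't verify for the real site.

    // Hypothetical endpoints; response shapes assumed for illustration.
    function b64ToBytes(b64: string): Uint8Array {
      return Uint8Array.from(atob(b64), (c) => c.charCodeAt(0));
    }

    async function signIn(username: string): Promise<void> {
      // 1. Fetch a one-time challenge from the real server.
      const opts = await fetch("/webauthn/challenge", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ username }),
      }).then((r) => r.json());

      // 2. Ask the authenticator to sign it. The *browser* stamps the
      //    signed clientDataJSON with the current origin; the page
      //    cannot override it, so a phishing domain taints the result.
      const cred = (await navigator.credentials.get({
        publicKey: {
          challenge: b64ToBytes(opts.challenge),
          allowCredentials: opts.credentialIds.map((id: string) => ({
            type: "public-key" as const,
            id: b64ToBytes(id),
          })),
        },
      })) as PublicKeyCredential;
      const resp = cred.response as AuthenticatorAssertionResponse;

      // 3. Send the assertion back; the server verifies the signature,
      //    the challenge, and the origin recorded in clientDataJSON.
      const b64 = (buf: ArrayBuffer) =>
        btoa(String.fromCharCode(...new Uint8Array(buf)));
      await fetch("/webauthn/verify", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          id: cred.id,
          clientDataJSON: b64(resp.clientDataJSON),
          authenticatorData: b64(resp.authenticatorData),
          signature: b64(resp.signature),
        }),
      });
    }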


Why do I need your token data when I can just ask you to use your creds to grant me access to my twitter account?

Or 'I'm on vacation and left my laptop, could you just run this command on the prod cluster for me so we can avoid downtime over the holidays?'


Yeah, if you'll read that blog post it'll explain exactly that.

U2F tokens are a second factor for authentication, meaning that to log into your account you need your password and the token.

Further, the token takes the domain into account when it's generating the key. That means that you can't just give me a fake Twitter page and then forward the creds/token, it won't be valid.

U2F mitigates this entire class of phishing attack.
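For a sense of why forwarding the creds/token fails, here's a sketch of the relying-party checks. The expected values and field handling are illustrative, not any specific library's API; a real implementation also verifies the signature over authenticatorData || SHA-256(clientDataJSON) with the stored public key.

    import { createHash } from "node:crypto";

    const EXPECTED_ORIGIN = "https://login.example.com"; // assumed
    const EXPECTED_RP_ID = "example.com";                // assumed

    function checkBinding(
      clientDataJSON: Buffer,
      authenticatorData: Buffer,
      expectedChallenge: string, // base64url, as issued to this session
    ): void {
      const clientData = JSON.parse(clientDataJSON.toString("utf8"));

      // These fields were written by the victim's browser, not by the
      // page, so a phishing site can't forge them.
      if (clientData.type !== "webauthn.get") throw new Error("wrong ceremony");
      if (clientData.origin !== EXPECTED_ORIGIN) throw new Error("wrong origin");
      if (clientData.challenge !== expectedChallenge) throw new Error("replayed challenge");

      // The first 32 bytes of authenticatorData are SHA-256(RP ID), so an
      // assertion generated for a fake Twitter page can't be replayed here.
      const rpIdHash = createHash("sha256").update(EXPECTED_RP_ID).digest();
      if (!authenticatorData.subarray(0, 32).equals(rpIdHash)) {
        throw new Error("assertion bound to a different relying party");
      }
    }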

Beyond that, we also only allow logins to sensitive services from employee laptops, validated by a TPM, and multiple other controls, but that's not really the point.

> Or 'I'm on vacation and left my laptop, could you just run this command on the prod cluster for me so we can avoid downtime over the holidays?'

Right, so this would work, but it's a huge gap between "Hey run this command for me" and the attacker being able to run arbitrary commands. It's a reasonable thing for your security team to consider and attempt to mitigate in other ways.


Neither should you assume that fake phishing emails teach workers any better than traditional training.


Almost sounds like openness and honesty goes a long way!



