
I am consistently surprised by people's attitudes as I read HN comments.

Full disclosure, my old employer was very serious about phishing prevention (we sat on a ton of IP and people were out to get it) and did testing of this type as well. I did fall for a phish test around last review season because the phishing email had something about reviews in it.

Let's be adult about it - it was a good test and I failed it because I wasn't vigilant enough.

The fact that GoDaddy's email was "tempting for people around holiday season to click on" is - guess what - exactly what a moderately-sophisticated phishing attack would look like.

One of the things IT teaches employees to look for in phishing emails is language that either doesn't make sense or seems designed to "get your guard down and act on it."

In GoDaddy's case it included language like "it's free money, claim it now." Really - if you slowed down for a sec and thought about security, is this the language in which your employer emails you? Also, why does your employer need you to fill out any forms about your location - they know where you are.

For all the talk on HN about security and companies not taking it seriously enough, here's a great example of a company taking it seriously (to the point of discomfort) to teach their staff about what phishing could look like, and HN is somehow objecting.

PS: I bet there are no consequences for failing this test other than having to retake the training, which is right, because the people who fell for it do need the reminder. If this had been a real phish, they'd have leaked real data. Perhaps even your data.




I explicitly told my penetration testers to avoid anything like this.

Promising and then revoking a bonus by email, in a year where folks are already hard hit financially and mentally, under the guise of "education" is lazy, and any security leader who does it is a fool.

Effective security culture builds rapport with business units. This type of nonsense does the exact opposite.

Yes you may have proven a tactical point, but you have just set yourself up to lose the war.


Why not have both? That is, plan to provide a real bonus and amount, even if it's just 100 bucks. Beforehand, send out said phishing email and collect data. Provide the bonus to everyone. Once the holiday is over, notify employees who failed the test how easy it is to prey on people's emotions, and explain that the phishing email was in fact in no way tied to the real bonus.


That might be fine, but that’s not what GoDaddy did.


> Yes you may have proven a tactical point, but you have just set yourself up to lose the war.

you glossed over this point, but it is nonobvious. how exactly does making your users skeptical of email "lose the war"? are they going to type in their passwords out of spite now?

the gp is exactly right. if you have a line you won't cross to train your users, then you're leaving a vulnerability. if you want to make the argument that the vuln is worth not "being mean" to your employees, then make that argument, but don't pretend it's more secure.


> how exactly does making your users skeptical of email "lose the war"? are they going to type in their passwords out of spite now?

No, they’re going to quit, or put less effort into their job (which is often worse for the employer than quitting), or tell other people not to work there, or—as in this case—tell everyone on the internet what you did and then your company’s reputation takes a well deserved hit.


Or they will learn not to trust any email they receive. Or they will learn that phishing emails come from internal training, and therefore do not need to be reported.


>if you want to make the argument that the vuln is worth not "being mean" to your employees, then make that argument, but don't pretend it's more secure.

This is not a good scam. People fucking notice when they expect a $650 bonus and it doesn't come. It's not some arbitrary "check out this cool link" style email that an employee might click and then forget they've ever engaged with.

Not to mention that it caught nearly 10% of the fucking company - That's an insanely high click-through rate for any phishing test I've ever seen at a software company. And I don't even know if they sent it to all 7k employees (my guess is no). So either GoDaddy has incredibly incompetent employees, or this looked fairly real (and the email does actually look pretty damn good for a phishing email - custom image, no spelling mistakes, internal domain, aware of the different geos the company operates in).

So you pissed off 10% of your company with an email that would never be sent as a real scam (because a week later the employee is going to ask where the 650 bucks is and alarm bells will go off).

I don't have the context of a GoDaddy employee, so maybe there's an obvious mistake in there, but generally speaking - this was a dumb play by the security team. It's now also losing them goodwill in public.


Phishing doesn't have to be a literal scam. The more insidious kind that companies are actually wary of are the ones intended to steal credentials and get into internal systems.

Phishers can easily find out what internal company emails look like. Send an email that says "Hey, we're giving you a $900 bonus. Make sure to fill out these forms on Workday", with a link to a phishing website that looks exactly like Workday.

Fill in your credentials - hey, we also need your two-factor code - and bam, hackers have an in to the system. It doesn't matter that the employee realizes 10 days later that they were phished.


It absolutely does matter that they realize a week later.

Security is like an onion - It's layered into place across different levels, and different systems.

If you just want access to a single employee's machine, for a short duration - sure this would work. If you really want to compromise the whole internal net, it's going to take longer than a few hours to work your way into other systems. Generally you either need your target to access the system you want, or you have to spread to other machines to find a machine with the access you need.

The most effective attacks are the ones that get in, and have weeks to spread through the whole internal network. Take the Target breach in 2013 - They were in the system undetected for 20 days, nearly 3 weeks.

As soon as the company knows they have a problem, any decent security team is going to check every system on record.


Maybe "doesn't matter" was too strong, but the point stands. The fact that the employee may eventually realize the folly in no way prevents damage from being done.

In the day or ten days it takes before they realize they were phished, all sensitive information they can access can be stolen. Furthermore, more sophisticated phishing links can then be sent from their account. After all, who's going to suspect an actual email sent by a colleague as a phishing attempt?

A holiday bonus type of phishing attack absolutely can work, and be extremely effective at credential theft. It may not be effective at literally scamming money from the employee, but who cares.


Fake phishing is not some great methodology for better security; it's a tool to embarrass people and hope that the embarrassment leads to better security, like the Wall of Sheep. (Which, by the way, nobody should ever implement at work.)

Know what doesn't build better security culture? Trying to trick your users. Know what does? Working with them closely to help them understand security, finding out when and how people get tricked, and working to solve those issues.

The anti-phishing efforts I've seen so far have been lame and ineffective. Rather than trying to find new ways to make people fail, security teams should be finding new ways to prevent people from falling victim.


If you make your employees feel bad on purpose and have a track record of insensitivity, they won't like you. People who don't like you won't go out of their way to act in your interest.

I don't think GP was ambiguous at all, to be honest. This point was extremely clear. It feels like you came to the discussion with an axe to grind.


You are teaching your users that security teams are assholes who deserve whatever petty revenge is possible.

And people will treat you like that, most often behind your back.


> if you have a line you won't cross to train your users, then you're leaving a vulnerability.

How about a phishing test that downloads something that nukes your hard drive?


> > if you have a line you won't cross to train your users, then you're leaving a vulnerability.

> How about a phishing test that downloads something that nukes your hard drive?

The phishing weakness opens that possibility. The act of wiping the HD does nothing to improve behavior or identify weakness.


Now that's a good idea - where I work, our security policy is such that I should be able to destroy any (company) computing equipment an end user is using, and the loss of data should be restricted to their unsaved changes. A test like you propose would be a very memorable reminder for information security awareness, and so might be really effective before infrastructure upgrades if restricted to people with hardware about to be phased out; you need to nuke old hard drives at that point anyway...


Not exactly related, but 7-8 years ago a company in the region had a major virus outbreak. There was some kind of zero-day worm running through their environment, and despite having updated endpoint protection they still got got.

They didn't feel they had any choice but to hire about 50 temp folks from a local contracting company and run through the entire desktop environment: pull drives, re-image, reinstall. They did this over the course of a weekend, and on Monday when employees came back they found their desktops wiped clean. Within an hour the site lead started getting calls and visits from panic-stricken employees who had kept all sorts of personal info on those machines... photos, emails, documents, etc.

All gone.

They ended up missing one machine that got powered up that Monday and re-infected the entire campus. Had to do the same thing again.


How about gov sec launching nuclear attacks on every major country?


Some people run scripts that deliberately kill prod nodes at random to test for resiliency.
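
A minimal sketch of that idea in Python (the namespace and use of kubectl here are my own hypothetical choices; run something like this only where automatic recovery is expected):

    import random
    import subprocess

    # Chaos drill: delete one random pod in a non-critical namespace and
    # trust the orchestrator to reschedule it.
    pods = subprocess.check_output(
        ["kubectl", "get", "pods", "-n", "staging", "-o", "name"],
        text=True,
    ).split()
    victim = random.choice(pods)
    subprocess.run(["kubectl", "delete", "-n", "staging", victim], check=True)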


This one is tricky. Effective phishing preys on the weak and the vulnerable. For better or worse, we do have to train people -- even weak and vulnerable people -- not to fall for scams, and there's no better way we know of than to do training exercises. Cruel though it may sound, unless we can think of a better way, I think it's the most effective option.

The fact that employees are failing this proves the point, and until we can show that they are no longer falling for it, it shows how vulnerable we remain from an infosec standpoint.


Again, while you may be proving a point and obtaining a localized victory, it is Pyrrhic. There are other phishing scenarios that can be tailored to the proclivities of each team. Overdue POs for purchasing. "Hey, I saw some source code leaked on this site" for engineering. Etc etc.

The only reason you would have to send something like this is because you are lazy, need to generalize quickly, and blast it to the company. And you will teach a lesson. And you will kill any goodwill toward the security team as a byproduct.


I don't agree with you - I'm with the other posters in this thread. You can't just pull your punches; phishing and ransomware crime are currently skyrocketing. This is exactly the type of scenario that will entice many people to infect themselves with malware. This is also a scenario I've seen multiple _real_ e-mails of in recent weeks. It's not like your employees are unlikely to receive something just as nasty as this over the course of their employment.

That said, I do agree with you that money is better spent elsewhere. A company with > 1000 employees should just assume there's an infected machine in their network 24/7. Defense in depth stipulates that this should not matter for their security.


You can, and must, pull your punches and gauge the trade-offs when you run a security organization that has to work and collaborate with other teams. If you truly disagree, I encourage you to replicate this particular test in your organization and see how that works out for you.


I can attest that I have run tests in a very similar vein at least three times. One of them also caused an uproar.

I also prefer my security measures procedural, technical, and layered. Like I said before, phishing simulations don't really work for me at all... Currently, at many places where I do some work, the security department has a large say in what happens, regardless of goodwill. Security measures always suck, are less efficient, and cause no end of headaches; you mostly have 0 goodwill anyway.


> Security measures always suck, are less efficient, and cause no end of headaches; you mostly have 0 goodwill anyway.

Isn’t your job to make sure security measures don’t suck, aren’t less efficient, and don’t cause headaches?

This seems to be a very defeatist attitude to take.

From my personal experience, one of the following happens when a security team takes your perspective:

1) users work around security mitigations, causing worse security issues

2) workers quit the company due to friction when working

3) the security manager gets fired because they won't ever compromise


Can you say who you work for? I don’t want to work for you either.

You seem oblivious to the fact that you can still be effective without being a shitty security team.

Your way of approaching this literally makes people who should listen to you and trust you want to do the opposite. What an awful security team you’re part of.


Over time security measures always cause friction. You'll always be the annoying presence, the naysayer, the 'needlessly' difficult person. Effective security imposes restrictions, hurts egos and interferes in natural social responses.

It's funny you say that my way of working literally makes people want to do the opposite of trust me, when I send them a phishing e-mail that's exactly what I'm aiming for ;P


I've worked at a place where the security team were a detached, nagging presence. Devs only interacted with them when they had to, so security became an afterthought.

I've also worked at a place where the security team were trusted collaborators. Devs were comfortable communicating with them. Their security skills improved over time, and so did the security of the software they wrote.

The latter strategy is far more effective at moving the needle over the long term.


can you rephrase this without the acerbic vitriol? I can't really take a security discussion seriously with someone who can't communicate professionally.


Treating people this way is not itself professional. The inhumanity of the people with power and the willingness to exert it upon others in this fashion is deserving of a little, as you say, vitriol. (I would say "contempt".)

Treat. People. Decently.


Why did you post this to GP? He's being an asshole.


Weren't you just complaining about profanity and name-calling?


Beginning to think he’s just a troll


Ah, yes — merely supporting fake bonuses during a historic economic crisis is perfectly professional, but describing that as awful is “acerbic vitriol”.


I'm sorry, did you read the post? It is full of profanity and name calling.


“Full of profanity and name calling”

I said shitty. No other swearing and no name calling. What exactly are you reading?


Can you elaborate on the uproar?


It was pretty much this scenario years back, as a result of that exact same scenario being used by bad actors.

The real thing happened, was detected and shut down. Then they did some trainings over the course of a year, and had us test the same thing again the next year. Results were not particularly good, and people got upset that they did not receive the promised Christmas gifts.

Afaict, these trainings stop being effective at around 10%. That is, 10% of all people being phished like this will do everything you tell them to, up to and including sending you their password and installing malware on their machines following your e-mail instructions.


If your company culture is bad enough that you fear working on real-world security problems with your colleagues, you've already got a collaboration problem, and it's going to undermine any security theater you perform when it comes to actual risk.


Hmm. Nowhere did I say my culture is bad. In fact it is the opposite. Where teams proactively think about security from the ground up and bake it into their respective products. A gotcha stunt like this diminishes that culture all so that you can pat yourself on the back for emulating a phish with a high click rate.


I'm aware your claim is that your culture is good; I never claimed you admit it's bad. My claim was that needing to avoid real attack vectors is a good sign the culture is bad and security is theater about how much it's worked on rather than how real risks are discovered, tracked, and approached.

I agree this shouldn't need to be a "gotcha stunt" though, but if it would be one to you then yes, you're doing it/communicating it/following it up wrong. It should be an awareness call that phishing is going to target you in very thoughtfully real and meaningful ways, not just boring fake tickets/issues or the like. If you can't find a way to send that message in your org, then again I claim your org already has bad culture: it can't talk about or examine real security issues, and instead reacts to a real risk as if you're just trying to improve your reporting numbers again.

Also, that a small year-end bonus is not only considered exciting but doesn't exist at all is probably the real callousness problem in the org, not that someone points out the allure of money is used by malicious actors to get people to click things. But that's separate from the whole role-of-security discussion.


My assertion is precisely that I can find ways to send the message without resorting to this.


Your assertion that the only output of this could be "to send a message" is exactly the kind of thinking you find at a place with cultural issues around security policy. See my child comment on how something like this is supposed to be used in a way that focuses on risk reduction and active feedback instead of group politics and message marketing.

At the end of the day, if you've got a threat vector you're too afraid to actively measure, it doesn't matter how much you message other departments about it: you'll never know how big a risk it is or whether appropriate action was taken, unless you are compromised by it at a later date. This is true of human risk monitoring as much as it is of technological risk monitoring.


What do you think would be an appropriate follow up roughly?


I'm involved with some of our security testing and the way something similar (relating to monetary compensation from work) was done at our org (not to be named obviously) recently was:

1) An uptick in phishing emails (both user-reported and data-analyzed) was noted, with a few patterns of attack in particular being on the rise in the last month.

2) Communications about the attack profiles were sent out company-wide, followed by invitations to training material (written and short-video form, optional for low-risk groups in the org based on their use of email; HR training courses for those with heavier, job-required use patterns that fit the attack profile, usually IT).

3) After about a month, user-based reporting numbers were looking better, and leadership was told that a test email would go out to users matching the attack pattern, to see what click-through rates would look like on a well-tailored email that wasn't quickly removed from inboxes. Leadership approved.

4) Click-throughs land on an internal page hosting links to the above communications, and the HR training course is added to the user's profile.

For more detail on step 4), the opening of the landing page is a forgiving style, not a scolding style: think "Oh no, this could have been from one of the phishing attacks we've been experiencing lately. Don't worry, it was just an example attack from us - but did you notice any of the signs of a...".

It's also worth noting that as part of 2) it was agreed the official communications on end-of-year packages would be sent out _before_ the security test; I'm not sure if that was the case in GoDaddy's scenario. This year didn't include a cash bonus at Christmas (we did one in the summer for COVID) but did include unlimited PTO rollover and similar non-cash perks given the situation (and it was explained there would be no cash bonus).

3) can be tricky when trying to fend off attacks at the leadership level; that usually uses a modified approach to this list where lower management of each department is involved instead.

As for the numbers themselves: the higher the click-through rate, the WORSE the score the security group gets. It's a score for the amount of improvement from the planned action, not a score for how many people you could trip up before you get a pat on the back. At the same time, it was modeled after the COVID-related phishing emails we had been getting, not something someone made up because it seemed easy or hard to pass.

It's a very large org, and in the end this went well for all teams (even though improvements weren't quite as good as we had hoped, they were still pretty darn good numbers), and we were able to show with data that we had improved security against a real threat. I'm sure there were a few (given the numbers) who were a bit let down after clicking it, but given the approach and planning I don't think that was a failure of the test approach (though we're always open to refinement when an opportunity is brought up; part of the training material is freeform feedback on how you think the org could better handle the challenges from your PoV).


Anyway, the point is it shouldn't have to be "tackle real risks or have other groups hate security"; you need to find a way to do both at your place (which may be different from how to do both at another place). And that should be true regardless of whether GoDaddy implemented the test poorly this time.


Exactly! I kept having this feeling as I read their replies and you just nailed it. "I can't test the real threat because my users will rebel" might be a real case in his situation but speaks poorly of the company. What other things are they not dealing with head-on?


It beggars belief that, in a time of massive human uncertainty, the active malice of having the company participate in dangling a fake bonus in front of employees is just given this weird shrug.

The most good-faith reading of this whole tire fire that I can manage is that if you're going to pull this, you had absolutely and without question better have actual bonuses, of no less and frankly probably more than the phishing attempt, in the pipeline for every employee.

If you don't, you should be fired because it is inhumane to act this way to other people. It is, and I do not use this word lightly, evil.


Why is this specific scenario "the real threat" and not any of the alternatives you could use in your test? How large do you think the benefit is you get from using this specific example over another?

What is bad about a company culture where going for maximum emotional impact over a weighted approach causes an uproar? (I'd be much more wary of one where it doesn't, because it tells me employees expect the company to yank their chain, expect bad treatment from other teams, and know complaining about it won't help. That's a broken environment.)


How would this other form of exercise you propose demonstrate that the staff isn't vulnerable to a fictitious promise of a bonus sent by a malicious outside actor?

I disagree that one can characterize the simulation of a specific scenario as "lazy." Is it unusually attractive to the target? Yes. Lazy? No.


Ask yourself what the purpose of proving this point is other than a vanity metric. Then ask yourself whether that should be the actual goal. Because it seems to me the actual goal is to have a workforce with generally high awareness and caution around phishing risks in general - and that can be built without goodwill-destroying tactics like this.


Hey, I'm all for making it better if that's possible! But the evidence isn't clear that other forms of training are effective in stopping an attack that involves this sort of messaging. If there are such forms, great - but we need to prove that out first.


No - since you are planning to inflict some harm to your employees, the onus is on you to prove that the harm is warranted.

Specifically, you would have to prove that sending phishing training emails with more neutral topics (e.g. a package arrived, IT policy change - ACTION REQUIRED) is less effective than sending the more potentially harmful kind.


In fact, you should first show that fake phishing emails are more effective than traditional non-phishing emails that simply warn you about the risk of phishing and give a clear example of a phishing email without any trickery.

Something like: “SECURITY INFORMATION: Phishing emails target holiday bonuses to increase engagement. Always be on alert” along with a few points on what you are likely to see in a phishing email, how you could spot one, and what to do if you get phished.


> there's no better way we know of than to do training exercises

Are you sure about that? Is there any evidence that supports this claim?

It certainly fails the sniff test. Behavioral science does not hold setting someone up for failure to be an effective learning contingency. I mean, try teaching your dog that way and see how far you'll get. You will also teach your workers that phishing emails come from internal training, so there is no need to report them.

When I was working as a life guard we never went into training situations unknowingly. There is a reason for that: a) it is dangerous, as workers might act irrationally, creating a dangerous situation; b) it is stress-inducing and not healthy for anyone, especially those with underlying stress issues or conditions; and c) there is no evidence that live training is any more effective or teaches you anything more than traditional training does.


At a certain point people will straight up hate the employer over these sort of things.

The result at my work: email is completely ignored by almost everyone. All communication is via Slack, even when it's not appropriate.

I check my email about twice a week, and by the end I’m usually browsing job boards out of disgust.


> Effective security culture builds rapport with business units. This type of nonsense does the exact opposite.

But these emails presumably weren't even coming from the company's domain, since they were phishing emails.


Internal domain, custom image with GoDaddy logo, no obvious spelling problems or mistakes.

Go look at the image in the article.

I don't work there so maybe something in it should have set off red flags for employees, but generally speaking - That's a very high quality phishing attempt.


Unless I misread the article, these were from the company's domain:

> Sent by Happyholiday@Godaddy.com


You misread the article. The article is saying that the emails claimed to be from godaddy.com, but it doesn't say that they were actually sent from godaddy.com.
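
To illustrate the distinction: the From header shown in a mail client is freely chosen by the sender, while SPF is checked against the envelope sender, which can be something else entirely. A rough sketch (all hosts and addresses here are hypothetical):

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "Happyholiday@Godaddy.com"  # display From: attacker-chosen
    msg["To"] = "employee@godaddy.com"
    msg["Subject"] = "2020 Holiday Party"
    msg.set_content("Click here to claim your one-time bonus...")

    # SPF is evaluated against the envelope sender (MAIL FROM), which need
    # not match the header From that the recipient actually sees.
    with smtplib.SMTP("mail.attacker.example") as smtp:
        smtp.send_message(msg, from_addr="bounce@attacker.example")

Whether the recipient's client flags this depends on the receiving domain's DMARC policy and how the client surfaces authentication results.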


GoDaddy doing something unethical?! Color me shocked.


Regarding the "it's free money, claim it now" language... my employer also does regular phishing tests, yet I wouldn't be surprised to get this as a legitimate email. Benefits from company partners, wellness incentives, etc. all read something like this, and you never really know who the company gave permission to contact you until the message comes. A few anecdotal email subjects, all to my work address, from a 15,000-head company on the S&P 500:

1. "Claim your check now! $1.50 is waiting for you" Apparently a legitimate email refunding me for one time the vending machine failed to read my card properly (but had my work ID scanned).

2. "You could be earning an additional $1000 per week– find out before it's too late" Something about our 401k but included a link to some survey that HR required we fill out.

3. "Reminder: Claim your wellness check now" I expected this to contain a link to the health survey we're supposed to complete which gives me a $5/week discount on the health insurance I get through the company. Turns out this was actually related to some other company wellness incentive which gave us gift cards for participating in various events (bike to work month, etc.).

I could probably dig out more but I passed all of those along to the corporate email check to make sure and each was verified as legitimate. I'm sure plenty of other companies have made attempts to increase "social engagement" without realizing the consequences it has for security.


I have a lot of thoughts on this, having worked on security teams, and now running a company.

1. Employees and customers are who the company should be serving. It isn't "employees at the cost of customers", or vice versa. If your business can't do both it shouldn't exist. In this case the thought was "It's worth making employees upset because this addresses a real customer concern".

2. Phishing tests are silly. You should just assume someone will get phished. If you want to do trainings, do trainings. Or, better yet, make phishing pointless - at my company anything important is gated by a U2F token, among other things. We are simply 'immune' to most types of phishing - the one major source left being wire transfer fraud, which is fairly easy to avoid.

We only do phishing training for compliance purposes.

3. This is the wrong kind of security. It's "blame the user" security. It's a silly, backwards, outdated, ineffective attitude.

If the goal is "Make sure people report phishing", you can do that without trickery. Just send an email and request that users report it to whoever is responsible - you'll soon find who does or does not know how to do so.

4. (Editing in) Security teams should be aware of how what they do affects the company. This phishing test has had a public impact on the company - that's a disaster. A security team is about managing risk, and here they've instead subjected the company to concrete, public scrutiny. This was simply the wrong call.

Having trust in your security team is really important. Security teams need to do outreach, they need to be friendly, and be people who everyone feels good about going to with a problem. Building animosity for the sake of phishing protection is far more dangerous.

> If this was a real phish they'd have leaked real data. Perhaps even your data.

That's the security team's problem, not the victims!


The term "defense in depth" comes to mind.

Of course, ideally the IT team has screened out the phishing attempt and secured systems as much as they can.

If they do it perfectly, then your comment applies and nothing else matters.

If, however, the security team cannot be perfect, some amount of bad stuff will land with the users. This seems realistic.

So training people to be security-minded is useful. It's the next line of defense.

How do you train your users? "Just send an email and request that users report it to whoever is responsible" as you mention is one thing. But you can also imagine someone complying with that but then falling for a well crafted phish (that happened to me, as I mention elsewhere in this thread.)

Being confronted with falling for something makes people take the threat more seriously. It takes someone's attitude from "it can't happen to me" to "oh, it did happen to me."

>> at my company anything important is gated by a U2F token, among other things. We are simply 'immune' to most types of phishing

I don't know your business' threats but that seems naïve. Imagine your sales team gets a phish email asking them to list your biggest customers, including contact info and revenue "so that we can send them an appropriate thank you gift for the holidays." If someone replies to this/fills out the form, you have leaked critical information even though nobody has penetrated your system (so your tokens don't help.) Even if this type of thing isn't a problem for your business, you can imagine it is for many.


> The term "defense in depth" comes to mind.

For me the term that comes to mind is: Security at the cost of quality of life.

Sometimes security is redundant; that is fine. Sometimes it is unnecessary; that can be fine as long as it is not harmful. Phishing training should be redundant at worst and unnecessary at best. But these types of phishing training definitely have negative effects on the trainees. You are invoking a sense of failure and unnecessary stress. Some people will have underlying stress issues, perhaps they are having a bad day, perhaps they are experiencing PTSD, and then seeing they failed a phishing test could be disastrous. Definitely not worth putting your workers at this risk for supposed (and unproven) security.


> The term "defense in depth" comes to mind.

Perhaps my least favorite phrase in security, used constantly to justify weak, meaningless, or harmful work as "another layer".

U2F is just one layer. It's a real, meaningful layer - it addresses real threats in a non-phishable way. Gating access by device, ACLs, etc., or locking down sharing permissions, are other real and meaningful layers.

> some amount of bad stuff will land with the users

That's the security team's problem to solve.

> But you can also imagine someone complying with that but then falling for a well crafted phish (that happened to me, as I mention elsewhere in this thread.)

I can just assume that one person will fail the phishing test already, and work from that angle. Way better than assuming I can teach an entire company to be untrickable.

> Is there no data that humans can leak out by entering it in the wrong place?

Wire transfers were one example - there are others, and we have other ways to mitigate those without assuming users should bear the responsibility.

By 'most types' I mean either phishing for credentials or as a delivery mechanism for malware - these would both be extremely difficult to do, or otherwise mitigated by other policies.

> but you can imagine that in many businesses there is.

That's their security team's problem to solve.

Feel free to train users, I have no problem with it. We do it for compliance purposes, but it's also a fun opportunity to engage with people about security, including meaningful security. And you do want people to be able to spot and report phishing, it's just not something you should ever rely on, or compromise for.


Note also that fake phishing emails are not the only way to train people against phishing. Arguably, traditional training (i.e. explaining and showing the trainee examples in a setting where the trainee knows it is a demo) is both a better way to teach about the risk and the protections, and less harmful.

I used to work as a life guard and we would conduct training scenarios at least every month. And we would never do live training (i.e. a training scenario entered without knowing it is training). There are several reasons: first is safety - you would be putting workers at unnecessary risk. Second is stress, and third is that there is little evidence that we would walk away from the training having learned anything.

So if you believe phishing is a serious threat to your security, that is still no excuse to deliver fake phishing emails to your workers.


I can't tell, but it seems like you have unrealistic expectations for security teams. On the one hand, you handwave about how things are 'their problem', and on the other you dismiss standard methods and models as baseless self-justification.

I would turn down a position reporting to someone with this combination of attitudes - I'm pretty sure my hands would be tied, and I'd only be there to take the blame when the inevitable happened.


I've worked on security teams or for security teams for my entire career.

Yes, some things are the security team's problem - as in, security teams are responsible for managing risk. My expectation is for them to do so.

Again, you can perform phishing tests, but I think they're mostly a waste of time, a terrible substitute for real mitigations, and they should never come at the cost of your employees' sanity - a security team must build trust with other teams first and foremost, not burn it because "real attackers are mean too".

> dismiss standard methods and models

My opinions are hardly controversial.


> a security team must build trust

I do agree with this part.


> dismiss standard methods and models as baseless self-justification

I would argue that live training that sets the trainee up for failure is non-standard.

A standard training has the trainee knowing they are in a training situation. I have never seen this type of training used for any situation other than phishing training. And before you say "fake fire drill" - no, I have never seen those outside of the movies, and I would believe a Simpsons-type situation where you are actively putting workers at risk is the reason for that.


I can't tell your thesis here.

I agree you can't expect people to provide full defense, and it doesn't sound like you disagree that helping people act more securely is important.

In my example, there's a difference between whether one sales person leaked their business numbers, or the entire 100 person department did. You train users to minimize the vulnerability even if you can't fully solve it.

If you agree that far - then I am not sure where we're disconnecting on this question.


I would imagine we're disagreeing with the "at what cost"?

As in, is the cost of:

1. Losing the trust of your coworkers

2. Causing public reputation damage

3. Potentially harming coworkers emotionally

worth the gain of having a slightly more effective phishing training? I would argue no.

I would also say that it isn't nearly as important as implementing other measures - U2F being a big one that I'd mentioned, but there are plenty of others. It's certainly not where I'd recommend anyone start.


You should not assume that U2F makes you 'immune' to most types of phishing.

If your assumption is that users will fall for phishing tests then it follows that they will present their U2F token or simply take some action on behalf of the attacker directly.


The point of U2F is that they can't do that. For example, if I send you a fake Google login page, you can't "trick" me into entering my token - it won't work.

Here's a good blog on the topic: https://dropbox.tech/security/introducing-webauthn-support-f...


Why do I need your token data when I can just ask you to use your creds to grant me access to my twitter account?

Or 'I'm on vacation and left my laptop, could you just run this command on the prod cluster for me so we can avoid downtime over the holidays?'


Yeah, if you'll read that blog post it'll explain exactly that.

U2F tokens are a second factor for authentication, meaning that to log into your account you need your password and the token.

Further, the token takes the domain into account when it's generating the key. That means you can't just give me a fake Twitter page and then forward the creds/token; it won't be valid.

U2F mitigates this entire class of phishing attack.
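
For anyone curious where that domain binding lives: in WebAuthn (the successor protocol the linked post describes), the browser - not the page - records the origin in the signed client data, so a look-alike domain yields a response the real site will reject. A simplified sketch of the server-side origin check (signature verification over the authenticator data is elided; names here are illustrative):

    import base64
    import json

    EXPECTED_ORIGIN = "https://twitter.com"  # the genuine relying party

    def origin_ok(client_data_b64: str, expected_challenge: str) -> bool:
        # clientDataJSON is assembled by the browser; a phishing page at
        # twitt3r.example cannot forge the real origin into it.
        padded = client_data_b64 + "=" * (-len(client_data_b64) % 4)
        client_data = json.loads(base64.urlsafe_b64decode(padded))
        return (client_data.get("origin") == EXPECTED_ORIGIN
                and client_data.get("challenge") == expected_challenge)

A real verifier additionally checks the authenticator's signature and the relying-party ID hash, but the origin check above is the part that kills credential forwarding.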

Beyond that, we also only allow logins to sensitive services from employee laptops, validated by a TPM, and multiple other controls, but that's not really the point.

> Or 'I'm on vacation and left my laptop, could you just run this command on the prod cluster for me so we can avoid downtime over the holidays?'

Right, so this would work, but it's a huge gap between "Hey run this command for me" and the attacker being able to run arbitrary commands. It's a reasonable thing for your security team to consider and attempt to mitigate in other ways.


Neither should you assume that fake phishing emails teach workers any better than traditional training.


Almost sounds like openness and honesty goes a long way!


“Really - if you slowed down for a sec and thought about security, is this the language in which your employer emails you?”

Yes. Friendly language is becoming common in corporate emails.

“Also, why does your employer need you to fill out any forms about your location - they know where you are.”

They very often do, because things aren’t wired together within the company - and because many schemes are administered by a third party. Filling out forms with details that ‘the company knows’ is commonplace.

Companies even add 'Warning: External email' to subject lines of emails from outside the company, but then fail to whitelist trusted third parties, so employees get used to flagged emails actually being legitimate ... and then get used to ignoring the warnings.
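
An allowlist at the mail gateway is enough to keep that tag meaningful. A sketch of the rule (the domains are hypothetical):

    # Tag external mail, but exempt vetted partners so "[EXTERNAL]"
    # keeps its signal value.
    INTERNAL_DOMAIN = "example.com"
    TRUSTED_PARTNERS = {"benefits-vendor.example", "payroll-provider.example"}

    def tag_subject(sender_domain: str, subject: str) -> str:
        if sender_domain == INTERNAL_DOMAIN or sender_domain in TRUSTED_PARTNERS:
            return subject
        return "[EXTERNAL] " + subject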


Security is important, but it does not trump any other consideration.

Will they next send employees emails claiming their loved ones are in danger, because that is something real hackers might do? Would you consider that ethical behavior? It's actually a pretty common scam, at least in my country (normally done to defraud old people, not to steal company secrets, but still).

I'd expect to see a very convincing study that would show that this type of emotional response is crucial for accurately training people to recognize real phishing before I accepted in any way that this was ethical. Absent strong evidence in this regard, this is utterly disgusting.


I guess that's subjective. In my old company, people generally just felt "yup, people are out to get us, and this test is making me realize how vulnerable I am to screwing it up and is therefore a good reminder."

However this one makes you feel, how would you feel if this had been a real phish and you were the one who leaked sensitive customer data because you fell for it?


You are assuming that seeing real emotional stakes in the phishing exercise actually helps with recognizing the same in a real phishing email. I very much doubt the validity of this argument.

Note: I'm not against testing your employees for phishing attempts. That is extremely valuable. I'm against using something with an emotional impact as the pretext of the phish, when I believe a more neutral pretext will do just as well.

A separate note is that according to the pictures shown, it seems this is also a particularly bad example, as the email has legitimate headers, showing that it's coming from godaddy.com - so it would only rely on employees distrusting the contents to recognize it as phishing, which is a bad lesson to teach.


Sorry tsimionescu but I read your post as follows:

It's ok for company to prepare its users for phishing, except for the kind of phishing where the attackers:

(a) devised an emotional hook, and

(b) put in effort to make it look legit.

You can see how that creates a vulnerability, right, because a moderately sophisticated phisher would aim to do (a) and (b) every time.


To what end?

If instead of a Christmas bonus it was the death of a loved one, would you still consider it acceptable (both meet your criteria)? Would it be acceptable to test employee susceptibility to extortion by taking compromising photos of them and then threatening them with those photos?

In my opinion no. Any sort of experimentation on employees needs to be ethical. If you screw people over in the name of security, you have now become the security risk. Making the security team be the enemy that the employees hate because they have been hurt by them, will lead to very poor outcomes.


My intention was to say:

It's ok for companies to prepare their employees for phishing, except that (a) they are not allowed to inflict emotional harm on their employees, and (b) the email should include the correct examples of phishing markers.

A good example would be an email coming from a realistic-looking but fake external email; or an email with faked internal-looking headers that are highlighted by the company's email system.

A bad example would be an email coming from the company CEO's real email address, claiming that the employee was promoted, with no warnings from the email system that the headers are faked. That would not teach a useful lesson, and it would inflict some emotional damage on your employees.

Note: the lesson is not useful since, if the attackers have managed to corrupt the email system well enough to send emails from internal addresses without getting flagged, they will most likely have no need to phish for further access.


It is not subjective! Do the study and get the data before you risk putting your workers in a harmful situation.

(edit): And by study I mean show me that fake phishing emails are more effective than traditional training where workers know they are in a training situation.


It might be a realistic scenario and thus a "good" test. It is still a total dick move to do this as an employer. Some things you just should not do to your employees. There are a million other phishing scenarios that are realistic where you don't have to be promised money by your employer.


I think the correct course of action would have been to actually award out the amount promised to everyone completely separate from their action or inaction on the phishing email. It would have gone over a lot better if they had done this.


It's a choice between comfort and survival.

There are more humane ways to train an army than to make them think of situations where someone is trying to kill them, but... if that comes at the expense of worse training (and therefore actual higher likelihood of death) then it doesn't do anyone favors.

The fact that so many people fell for this test means there's something (obviously!) about this scenario that makes it particularly sensitive and mistake-prone for people. Your IT department may choose to avoid it, but people trying to phish you won't.


A company is not an army.

You don’t train your employees on active shooter drills by having a guy barge in shooting blanks.

Source: was in the army.


True story, but I’m not making a point here, or trying to disagree with you. Just something that happened to me once:

I was leaving a hotel near the airport in Delhi, and as we were waiting outside for an Uber, the manager of the hotel told us not to be alarmed if we saw some guys with guns running towards the hotel. He told us the police were running a terrorism scenario to see how the local hotels would react, and whether they would follow the plan for such an event. The guns would be unloaded, but everything else would seem realistic. The manager and the outside security knew about it (because they had to let the "terrorists" into the hotel) but no other staff or guests had been warned. We were only told because we were outside and might see them coming, and he didn't want us to give the game away.

Fortunately our Uber arrived before the “terrorists”. It’s possible the manager was just fucking with us, and none of it happened, but it didn’t seem like it and if he was that’s a pretty messed-up thing to do, too. It occurred to us that if they were actual terrorists he might be in cahoots and making sure they got into the building.

He also told us how in a real event he told his outside security guys to run away, they couldn’t stop terrorists anyway, and there was no point getting themselves killed.


Thank you for your service. Hopefully you can appreciate the tradeoff between mental/emotional discomfort and an actual problem that was the point of my analogy.


They don't. That's why they disagreed with you.

The tradeoff here is an idiotic one.

You don't set your own building on fire in order to test fire safety.

You don't destroy company morale to test phishing security.

It's really that simple.


If we're arguing analogies, OK. Does your company ever have fire drills? What if the first sound of the alarm freaks someone out? What if people find it annoying/idiotic/morale reducing to walk down the fire stairs?

At some point you just say "look, we need to make sure our people are trained, if people think fire drills are stupid or upsetting, we have to take that hit because the alternative is worse."


Your analogy is disconnected from reality. Fire drills (aside from the surprise kind which can be pretty bad) are gone into with full knowledge.

You ain't comparing the same thing. Temporary discomfort is not the same as this boneheaded move that wrecked their morale.


Fire drills without the knowledge of the participants sound like such a dangerous stunt that surely they only exist on television.


Yeah but guess what - employees put their families' well being ahead of the company. And a company that expects otherwise is cruel or delusional.


OF COURSE! We're not talking about prioritizing the company over people. We're talking about training people not to fall for what they could discern as a phish if they took a sec.


It hasn't trained people not to fall for a phishing email.

It's trained them not to believe their company when they offer a bonus.

Which might stop the same email from working tomorrow, but not the one saying "This needs to be filled out by Friday!" or "Class action settlement against GoDaddy over fake bonuses"


But that is not how people are normally trained. Normally people are trained in safe conditions where participants know they are in a training session.

You put people in unnecessary danger by putting them in an unpredictable situation. That is why training sessions are varied and thoroughly debriefed so that participants can know how what they learned in the current session can be applied at different settings.

Source, anecdotal: I’m a former life guard that had regular drills, and never entered one unknowingly.


It wouldn't have been such a mean thing if they then gave the bonus to every employee, regardless of them passing the test or not.


We actually don't know what the usual communiqués internally at GoDaddy look like. In a vacuum we can also judge this to be an effective test. In practice there are many unknowns and factors we don't know about though. In my opinion phishing is also an issue at scale when we talk about companies; meaning, there's a likelihood that some will always be more likely to fall for it.

Given how the world has been this year, and what some employees may have gone through, the employees who fell for this particular phishing email may actually need more support from their employer.

Either way, this isn't a vacuum and we are talking about a test that is unnecessarily cruel.

Edit: just to make this more constructive, there are always alternatives. Instead of relying on emails only employees could be informed to check in via a second channel in all matters relating to money or a company's IP.


I'm unfamiliar with corporate expectations regarding phishing emails—are you not supposed to click on links sent from an internal email address? The article clearly shows the email coming from an @godaddy.com address, and I'd think that them misconfiguring SPF/whatever is a much bigger deal.


I don't remember the exact language of the training but there's a "use your head" element. Someone you don't know is emailing you something that doesn't really make sense - stop and don't comply. There are lots of reasons the email could have internal addresses (a misconfiguration, a similar-looking domain, an internal threat, whatever) - don't rely on that if other red flags are up.


How crappy of a company would you have to be for a year-end bonus not to really make sense?


If you read the text of the GoDaddy phish, there's something like "CLICK HERE NOW TO CLAIM YOUR FREE MONEY BEFORE ITS TOO LATE".

If it was just a notification - "btw, you'll find a little extra something in your check this month" - without requiring any weird action from the employee (which, btw, is how this would look if it was real), then it's a totally different thing.


> Really - if you slowed down for a sec and thought about security, is this the language in which your employer emails you?

That is the weakest part of your argument. Companies are notoriously incompetent, yes, even domain registrars or whatever you categorize GoDaddy as.

And what if they had used better language? What then? Would you simply move the goalposts to another way to blame the victim?

I know to avoid a lot of things because of experience, I know a lot of legitimate things act like illegitimate things. And a lot of illegitimate things can masquerade adequately as competent legitimate things.

For example: I know to look at the DKIM and SPF headers in an email, specifically because email clients do not show you when someone is spoofing an email address with the exact same domain name. Not a phishing domain name, the exact same one. This is a real, ongoing UI problem.
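
For anyone who wants to do the same: the receiving server usually summarizes those checks in an Authentication-Results header, which you can inspect directly. A rough sketch using Python's stdlib (header layouts vary by provider, so treat this as a heuristic):

    import email
    from email import policy

    def auth_failures(raw_message: bytes) -> list:
        # Parse the raw message and read the receiving server's verdicts
        # for SPF, DKIM, and DMARC from Authentication-Results.
        msg = email.message_from_bytes(raw_message, policy=policy.default)
        results = " ".join(str(h) for h in msg.get_all("Authentication-Results", []))
        return [mech for mech in ("spf", "dkim", "dmarc")
                if mech + "=pass" not in results]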

And there are other flaws out there too.


There are two things an employee will learn from this test:

- I'm never going to get a bonus, regardless of what they promised me

- don't trust the security team: they are cruel and they will probably hurt me and my family in order to get promoted

Sure, the security team did a great phishing job. Did they improve security? Nope.


I wonder if you’re capable of seeing that both of these are true:

1. It's a good test

2. It's a bad test

That’s the thing that gets me. Are people (HN readers, other people) generally able to only see one of those points — or do they see both but dismiss one?

In your case, @xyzelement, do you see both? I know you see what’s good about the test, but do you also see why it’s a bad test?


I notice that people have difficulty explaining both the positives and the negatives, and then explaining why they pick one over the other.

Here it seems an excellent test, but you will probably upset a large part of your employees.

People seem to take sides first, and then try to fully ignore the positives of the other side and the negatives of their preferred side, as a way to convince the other party how wrong they are.

One side: it's a terrible test and is inhumane

Other side: excellent test and people need to grow up.

But of course you will never convince the other party like that.


>> In your case, @xyzelement, do you see both? I know you see what’s good about the test, but do you also see why it’s a bad test?

I totally see why some people reacted negatively to it, though I think ideally mature people can see that it is a useful test and is trying to teach them something and thus get over it.

My personal value system is that I choose tough love over coddling, because the former breeds stronger and more capable people. It's not for everyone, but, for example, I want the people who run security for my firm to err toward the former.

It's a tradeoff. I don't want to work at a place that's more vulnerable because it (rightly or wrongly) assumed that employees aren't mature enough to go through a real exercise. That's just my view.


Okay, thanks for the response.

It sounds like “maturity” and “coddling” etc have a specific meaning in your value system.

If someone else sees the benefit of this test but considers it needlessly harmful/cruel (and disproportionately so for different people) — I wouldn’t guess that “immaturity” is in the top 5 of the reasons why they would think that. So I found that surprising.

Cheers.


LeonB, I appreciate this discourse and let me explain why I see maturity as a factor here.

Let's assume that phishing is a real threat, and testing like this moves the needle on people's vigilance (as it did for me when I failed something like it last year). Let's also assume that if this was a real phish, there would be really bad emotional and financial consequences. E.g. imagine being the one who fell for a real phish and actually caused a huge data leak that ended up in the news and put your company out of business.

Regardless of how we feel about it, the above threats are real. So we can either choose to be "nice" but increase people's vulnerability to real, painful consequences, or we can choose to be "tough" because we realize that in the long run it creates greater actual security for everyone. To me that's "tough love" - harsher short-term decisions to help everyone in the long run.

It indeed feels adult and mature to take the tough and unpopular decision that aims to address the real risk, and conversely irresponsible to say "we can't deal with the problem because it'll be unpopular and someone may get upset."

There's maturity on the flip side too. When I failed the phishing test, it was a wakeup call. In retrospect I should have caught it but I wasn't careful enough. So I am grateful the company did it because it taught me a valuable lesson that will keep me safer in the future. If my response was instead "those fuckers tricked me" or whatever, it would have been childish, because it ignores that the risk is real and that it's really I who has the power to do better.


There are scammers who call people and claim a relative is in danger or badly hurt or dying in hospital, and that they urgently need to do this and that to help them - and people are deceived exactly because the stress makes their response less measured and more susceptible to deceit. That doesn't mean an ethical company would employ such tactics in a pentest. There's a limit to how far you can go in a simulation. Sure, real criminals could do it - but the company can maintain vigilance without essentially replicating the villains and causing almost as much harm to the victims as the real villains do.

I'm astonished that nobody at GD could predict the emotional harm such a trick would cause. There are a lot of ways to test without such a cruel method. Even reviews - which you mentioned - would work just as well without being cruel.


First, you should establish that this "training" is effective in raising security standards. I'm skeptical it is. When working as a life guard we never did live testing (i.e. training without knowing it is training) simply because the results are mixed at best. Trainees are stressed in a live scenario and are unlikely to really "learn" anything from the experience. In the worst case, trainees will experience stress to a level where they are harmed by the experience.

Second, security should not fail on a successful phishing attempt. If a worker opening a phishing email compromises your security, you've got bigger problems.

Thirdly, don't discount workers' experience of having failed a task. It is extremely unpleasant and stressful. Workers' health matters, and subjecting us to unnecessary stress is simply evil. There is no excuse. Find a better way to secure your system.


I don’t see any problem with the phishing test. But if you’re going to do it like this, when it’s over you should actually give your employees the bonus. Otherwise it’s just tasteless.


> Also, why does your employer need you to fill out any forms about your location - they know where you are.

pff, this happens all the time (although not for bonuses, but for things like branded clothes or surveys). you can't expect HR to automate this kind of stuff every time. it's much easier to just design a Google form than to make a script that calls APIs.


Are you managing people or customer expectations? If so, how many already left because of the rational explanations of your lack of empathy? If not, keep it that way, you don't have the emotional capacity to lead others.

If my employer pulls this prank on me, they can either give a formal apology to everyone, give me the actual bonus, or find another developer.


I've never understood why phishers wouldn't just brush up a little on their English, or at least have their phishing email reviewed by a friend. I mean, proper English is so much more effective. At least learn some proper grammar before launching an attack.


Yeah, it makes sense to test for this kind of thing, but they should have planned to give the bonuses to everyone regardless of how they responded to the phishing attempt.


Exactly right. And GoDaddy employees have been phished and social-engineered into transferring domains before. Good for them for trying to train their employees.





