Again, while you may be proving a point and obtaining a localized victory, it is a Pyrrhic one. There are other phishing scenarios that can be tailored to the proclivities of each team: overdue POs for purchasing, “Hey, I saw some source code leaked on this site” for engineering, and so on.
The only reason you would have to send something like this is that you are lazy, need to generalize quickly, and want to blast it to the whole company. And you will teach a lesson. And you will kill any goodwill toward the security team as a byproduct.
I don't agree with you; I agree with the other posters in this thread. You can't just pull your punches: phishing and ransomware crime is currently skyrocketing. This is exactly the type of scenario that will entice many people to infect themselves with malware, and it is a scenario I've seen multiple _real_ e-mails of in recent weeks. Your employees are likely to receive something just as nasty as this at some point over the course of their employment.
That said, I do agree with you that money is better spent elsewhere. A company with > 1000 employees should just assume there's an infected machine in their network 24/7. Defense in depth stipulates that this should not matter for their security.
You can, and must, pull your punches and gauge the trade-offs when you run a security organization that has to work and collaborate with other teams. If you truly disagree, I encourage you to replicate this particular test in your organization and see how that works out for you.
I can attest that I have run tests in a very similar vein at least three times. One of them also caused an uproar.
I also prefer my security measures procedural, technical, and layered. Like I said before, phishing simulations don't really work for me at all... At many of the places where I currently do some work, the security department has a large say in what happens, regardless of goodwill. Security measures always suck, are less efficient, and cause no end of headaches; you mostly have zero goodwill anyway.
> Security measures always suck, are less efficient, and cause no end of headaches; you mostly have zero goodwill anyway.
Isn’t your job to make sure security measures don’t suck, aren’t less efficient, and don’t cause headaches?
This seems to be a very defeatist attitude to take.
In my personal experience, one of the following happens when a security team takes your perspective:
1.) users work around security mitigations, causing worse security issues
2.) workers quit the company due to friction when working
3.) the security manager gets fired because they won’t ever compromise
Can you say who you work for? I don’t want to work for you either.
You seem oblivious to the fact that you can still be effective without being a shitty security team.
Your way of approaching this literally makes people who should listen to you and trust you want to do the opposite. What an awful security team you’re part of.
Over time security measures always cause friction. You'll always be the annoying presence, the naysayer, the 'needlessly' difficult person. Effective security imposes restrictions, hurts egos and interferes in natural social responses.
It's funny you say that my way of working literally makes people want to do the opposite of trust me, when I send them a phishing e-mail that's exactly what I'm aiming for ;P
I've worked at a place where the security team were a detached, nagging presence. Devs only interacted with them when they had to, so security became an afterthought.
I've also worked at a place where the security team were trusted collaborators. Devs were comfortable communicating with them. Their security skills improved over time, and so did the security of the software they wrote.
The latter strategy is far more effective at moving the needle over the long term.
Can you rephrase this without the acerbic vitriol? I can't really take a security discussion seriously with someone who can't communicate professionally.
Treating people this way is not itself professional. The inhumanity of people with power, and their willingness to exert it upon others in this fashion, deserves a little, as you say, vitriol. (I would say "contempt".)
Ah, yes — merely supporting fake bonuses during a historic economic crisis is perfectly professional, but describing that as awful is “acerbic vitriol”.
We had pretty much this scenario years back, as a result of that exact same scenario being used by bad actors.
The real thing happened, was detected, and was shut down. Then they ran trainings over the course of a year, and had us test the same thing again the next year. Results were not particularly good, and people got upset that they did not receive the promised Christmas gifts.
Afaict, these trainings stop being effective at around 10%. That is, 10% of all people phished like this will do everything you tell them to, up to and including sending you their password and installing malware on their machines following your e-mail instructions.
If your company culture is bad enough that you fear working on real-world security problems with your colleagues, you've already got a collaboration problem, and it's going to reduce whatever you do perform to security theater when it comes to actual risk.
Hmm. Nowhere did I say my culture is bad. In fact it is the opposite: teams proactively think about security from the ground up and bake it into their respective products. A gotcha stunt like this diminishes that culture, all so that you can pat yourself on the back for emulating a phish with a high click rate.
I'm aware your claim is that your culture is good; I never claimed you admitted it's bad. My claim was that needing to avoid real attack vectors is a good sign the culture is bad and that security is theater about how much it's worked on, rather than about how real risks are discovered, tracked, and approached.
I agree this shouldn't need to be a "gotcha stunt", though. But if it would be one to you, then yes, you're doing it, communicating it, or following it up wrong. It should be an awareness call that phishing is going to target you in very thoughtfully real and meaningful ways, not just boring fake tickets/issues or the like. If you can't find a way to send that message in your org, then again I claim your org already has a bad culture, as it can't talk about or examine real security issues, and its first reaction to a real risk is that you're just trying to improve your reporting numbers again.
Also, that a small year-end bonus is not only considered exciting but doesn't exist at all is probably the real callousness problem in the org, not that someone points out that the allure of money is used by malicious actors to get people to click things. But that's separate from the whole role-of-security discussion.
Your assertion that the only output of this could be "to send a message" is exactly the kind of thinking found at a place with cultural issues around security policy. See my child comment on how something like this is supposed to be used in a way that focuses on risk reduction and active feedback instead of group politics and message marketing.
At the end of the day, if you've got a threat vector you're too afraid to actively measure, it doesn't matter how much you message other departments about it; you'll never know how big a risk it is, or whether appropriate action was taken, unless you are compromised by it at a later date. This is as true of human risk monitoring as it is of technological risk monitoring.
I'm involved with some of our security testing and the way something similar (relating to monetary compensation from work) was done at our org (not to be named obviously) recently was:
1) An uptick in phishing emails (both user-reported and surfaced by data analysis) was noted, with a few patterns of attack in particular being on the rise in the last month.
2) Communications about the attack profiles were sent out company-wide, followed by invitations to training material (written and short-video form, optional for low-risk groups based on their use of email, plus HR training courses for those with heavier or job-required use patterns that fit the attack profile, usually IT).
3) About a month passed, user-based reporting numbers were looking better, and leadership was told that a test email would go out to users matching the attack pattern, to see what click-through rates would look like on a well-tailored email that wasn't quickly removed from inboxes. Leadership approved.
4) Click-throughs land the user on an internal landing page hosting links to the above communications, and the HR training course is added to the user's profile.
For more detail on step 4): the opening of the landing page takes a forgiving style, not a scolding style; think “Oh no, this could have been from one of the phishing attacks we've been experiencing lately. Don't worry, it was just an example attack from us - but did you notice any of the signs of a...”. A rough sketch of what such an endpoint might look like follows.
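To make that concrete, here is a minimal sketch of what a step-4 style landing endpoint could look like, assuming Python/Flask; the route, the `assign_training` helper, and the course name are hypothetical illustrations, not the org's actual implementation:

    # Sketch of a step-4 landing endpoint (hypothetical names throughout).
    from flask import Flask, request

    app = Flask(__name__)

    def assign_training(user_id: str, course: str) -> None:
        # Stand-in for the HR-system call that adds the awareness
        # course to the clicking user's training profile.
        print(f"enqueued course {course!r} for user {user_id!r}")

    @app.route("/phish-test/<campaign_id>")
    def landing(campaign_id: str):
        user_id = request.args.get("u", "unknown")
        # Record the click-through so the campaign's improvement
        # numbers can be computed later.
        app.logger.info("click-through: campaign=%s user=%s", campaign_id, user_id)
        assign_training(user_id, "phishing-awareness-101")
        # Forgiving tone, per the above: reassure first, then teach.
        return ("Oh no, this could have been one of the phishing attacks "
                "we've been experiencing lately. Don't worry, it was just "
                "an example attack from us - but did you notice any of the "
                "signs covered in the recent material?")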
It's also worth noting that as part of 2) it was discussed that the official communications on end-of-year packages would be sent out _before_ the security test; I'm not sure if that was the case in GoDaddy's scenario. This year didn't include a cash bonus at Christmas (we did one in the summer for COVID) but did include unlimited PTO rollover and similar non-cash perks given the situation (and it was explained there would be no cash bonus).
Step 3) can be tricky when trying to fend off attacks at the leadership level; in that case a modified approach to this list is used, where lower management of each department is involved instead.
As far as the numbers themselves: the higher the click-through rate, the WORSE the score the security group gets. It's a score for the amount of improvement from the planned action, not a score for how many people you could trip up before getting a pat on the back. At the same time, the test was modeled after the COVID-related phishing emails we had been getting, not something someone made up because it seemed easy or hard to pass.
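To make the scoring idea concrete (the exact formula here is my own illustration, not the org's): score the campaign by how much of the baseline click-through rate the planned action eliminated, so a high post-test rate makes the score worse:

    # Illustrative improvement metric; numbers and formula are made up.
    def improvement_score(baseline_rate: float, post_training_rate: float) -> float:
        """Fraction of the baseline click-through rate eliminated (1.0 is ideal)."""
        if baseline_rate <= 0:
            raise ValueError("baseline click-through rate must be positive")
        return (baseline_rate - post_training_rate) / baseline_rate

    # e.g. 18% clicked before the training push, 6% after: ~0.67 improvement.
    print(improvement_score(0.18, 0.06))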
It's a very large org, and in the end this went well for all teams (even though improvements weren't quite as good as we had hoped, they were still pretty darn good numbers), and we were able to show with data that we had improved security against a real threat. I'm sure there were a few (given the numbers) who were a bit let down after clicking it, but given the approach and planning I don't think that was a failure of the test approach (though we're always open to refinement when an opportunity is brought up; part of the training material is freeform feedback on how you think the org could better handle these challenges from your PoV).
Anyways, the point is it shouldn't have to be "tackle real risks or have other groups hate security"; you need to find a way to do both at your place (which may be different from how to do both at another place). And that should be true regardless of whether GoDaddy managed to implement the test poorly this time.
Exactly! I kept having this feeling as I read their replies and you just nailed it. "I can't test the real threat because my users will rebel" might be a real case in his situation but speaks poorly of the company. What other things are they not dealing with head-on?
It beggars belief that, in a time of massive human uncertainty, the active malice of having the company participate in dangling a fake bonus in front of employees is just given this weird shrug.
The most good-faith reading of this whole tire fire that I can manage is this: if you're going to pull this, you had absolutely and without question better have actual bonuses, of no less and frankly probably more than the phishing attempt promised, in the pipeline for every employee.
If you don't, you should be fired because it is inhumane to act this way to other people. It is, and I do not use this word lightly, evil.
Why is this specific scenario "the real threat" and not any of the alternatives you could use in your test? How large do you think the benefit of using this specific example over another really is?
What is bad about a company culture where going for maximum emotional impact over a weighted approach causes an uproar? (I'd be much more wary of one where it doesn't, because it tells me employees expect the company to yank their chain, expect bad treatment from other teams, and know complaining about it won't help. That's a broken environment.)
How would this other form of exercise you propose demonstrate that the staff isn't vulnerable to a fictitious promise of a bonus sent by a malicious outside actor?
I disagree that one can characterize the simulation of a specific scenario as "lazy." Is it unusually attractive to the target? Yes. Lazy? No.
Ask yourself what the purpose of proving this point is, other than a vanity metric. Then ask yourself whether that should be the actual goal. Because it seems to me the actual goal is to have a workforce with high awareness of and caution around phishing risks in general - and that can be built without goodwill-destroying tactics like this.
Hey, I'm all for making it better if it's possible! But the evidence isn't clear that other forms of training are effective in stopping an attack that involves this sort of messaging. If they are, great - but we need to prove that out first.
No - since you are planning to inflict some harm on your employees, the onus is on you to prove that the harm is warranted.
Specifically, you would have to prove that sending phishing training emails with more neutral topics (e.g. a package arrived, IT policy change - ACTION REQUIRED) is less effective than sending the more potentially harmful ones.
In fact, you should first show that fake phishing emails are more effective than traditional non-phishing emails that simply warn you about the risk of phishing and give a clear example of a phishing email, without any trickery.
Something like: “SECURITY INFORMATION: Phishing emails target holiday bonuses to increase engagement. Always be on alert” along with a few points on what you are likely to see in a phishing email, how you could spot one, and what to do if you get phished.
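If you wanted to prove that out, it's a straightforward A/B comparison: randomly split staff, give one group the simulation treatment and the other the warning-email treatment, and later measure click rates on a test lure. A minimal sketch using Python's standard library, with made-up counts:

    # Two-proportion z-test comparing click rates between the two groups;
    # all numbers below are hypothetical.
    from statistics import NormalDist

    def two_proportion_p_value(clicks_a: int, n_a: int, clicks_b: int, n_b: int) -> float:
        """Two-sided p-value for a difference in click-through rates."""
        p_pool = (clicks_a + clicks_b) / (n_a + n_b)
        se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
        z = (clicks_a / n_a - clicks_b / n_b) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))

    # Hypothetical: 42/500 clicked after simulations, 61/500 after warning
    # emails only; a small p-value suggests the difference is real.
    print(two_proportion_p_value(42, 500, 61, 500))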