If your company culture is bad enough that you fear working on real-world security problems with them, you've already got a collaboration problem, and it's going to undercut any security theater you perform when it comes to actual risk.
Hmm. Nowhere did I say my culture is bad. In fact it is the opposite. Where teams proactively think about security from the ground up and bake it into their respective products. A gotcha stunt like this diminishes that culture all so that you can pat yourself on the back for emulating a phish with a high click rate.
I'm aware your claim is that your culture is good; I never claimed you admit it's bad. My claim was that needing to avoid real attack vectors is a good sign the culture is bad and that security is theater about how much it's worked on rather than about how real risks were discovered, tracked, and approached.
I agree this shouldn't need to be a "gotcha stunt", but if it would be one to you, then yes, you're doing it/communicating it/following it up wrong. It should be an awareness call that phishing is going to target you in very thoughtful, real, and meaningful ways, not just boring fake tickets/issues or the like. If you can't find a way to send that message in your org, then again I claim your org already has bad culture: it can't talk about or examine real security issues, and its first reaction to a real risk is that you're just trying to improve your reporting numbers again.
Also, a small year-end bonus not only being considered exciting but also not existing at all is probably the real callousness problem in the org, not that someone points out the allure of money is used by malicious actors to get people to click things. But that's separate from the whole role-of-security discussion.
Your assertion that the only possible output of this could be "to send a message" is exactly the kind of thinking you find at a place with cultural issues around security policy. See my child comment on how something like this is supposed to be used in a way that focuses on risk reduction and active feedback instead of group politics and message marketing.
At the end of the day, if you've got a threat vector you're too afraid to actively measure, it doesn't matter how much you message other departments about it: you'll never know how big a risk it is or whether appropriate action was taken unless you are compromised by it at a later date. This is true of human risk monitoring as much as it is of technological risk monitoring.
I'm involved with some of our security testing, and the way something similar (relating to monetary compensation from work) was done at our org (not to be named, obviously) recently was:
1) An uptick in phishing emails (both user-reported and found in data analysis) was noted, with a few patterns of attack in particular on the rise in the last month.
2) Communications about the attack profiles were sent out company-wide, followed by invitations to training material (written and short-video form, optional for low-risk groups in the org based on their use of email, plus HR training courses for those with heavier/job-required use patterns that fit the attack profile, usually IT).
3) About a month passes; user-based reporting numbers were looking better, and leadership was told that a test email was going to go out to users matching the attack pattern, to see what click-through rates would look like on a well-tailored email that wasn't quickly removed from inboxes. Leadership approves.
4) Click-throughs drop the user at an internal landing page hosting links to the above communications, and the HR training course is added to the user's profile.
For more detail on step 4): the opening of the landing page is a forgiving style, not a scolding style. Think "Oh no, this could have been from one of the phishing attacks we've been experiencing lately. Don't worry, it was just an example attack from us - but did you notice any of the signs of a...".
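A minimal sketch of what that step 4) flow could look like, assuming an in-memory stand-in for the training system (every name here - `handle_click_through`, `assigned_training`, the course name, the page text - is a hypothetical illustration, not any real system's API):

```python
# Hypothetical sketch of step 4: a click-through lands the user on a
# forgiving internal page and enrolls them in the HR training course.
# All names and the in-memory storage are illustrative assumptions.

LANDING_MESSAGE = (
    "Oh no, this could have been from one of the phishing attacks we've "
    "been experiencing lately. Don't worry, it was just an example attack "
    "from us - but did you notice any of the signs?"
)

# Stand-in for whatever system tracks assigned training (user id -> courses).
assigned_training = {}

def handle_click_through(user_id):
    """Record the click, assign the HR course, and return the forgiving page text."""
    assigned_training.setdefault(user_id, set()).add("phishing-awareness-course")
    return LANDING_MESSAGE
```

The setdefault-then-add pattern keeps repeat clicks idempotent: a user who clicks the test email twice is still enrolled in the course only once.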
It's also worth noting that as part of 2) it was decided the official communications on end-of-year packages would be sent out _before_ the security test; I'm not sure if that was the case in GoDaddy's scenario. This year didn't include a cash bonus at Christmas (we did one in the summer for COVID) but did include unlimited PTO rollover and similar non-cash perks given the situation (and it was explained there would be no cash bonus).
Step 3) can be tricky when trying to fend off attacks at the leadership level; that usually uses a modified version of this list where lower management of each department is involved instead.
As far as the numbers themselves: the higher the click-through rate, the WORSE the score the security group gets. It's a score for the amount of improvement from the planned action, not a score for how many people you could trip up before getting a pat on the back. At the same time, the test email was modeled after the COVID-related phishing emails we had actually been getting, not something someone made up because it seemed easy or hard to pass.
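A toy sketch of an improvement-oriented metric with that property (the exact formula here is my assumption for illustration; the org's real scoring isn't specified):

```python
def improvement_score(baseline_rate, post_rate):
    """Relative reduction in click-through rate after the awareness push.

    1.0 means clicks were eliminated, 0.0 means no change, and a negative
    value means the rate got worse - so a higher click-through rate on the
    test always yields a LOWER score, matching the incentive described above.
    """
    if baseline_rate <= 0:
        return 0.0  # nothing to improve on
    return (baseline_rate - post_rate) / baseline_rate

# e.g. 20% clicked at baseline, 5% clicked the test email: 0.75 improvement
```

The point of scoring it this way is that the security team is rewarded for reducing risk, not for crafting the most tempting possible lure.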
It's a very large org, and in the end this went well for all teams (even though improvements weren't quite as good as we had hoped, they were still pretty darn good numbers), and we were able to show with data that we had improved security against a real threat. I'm sure there were a few (given the numbers) who were a bit let down after clicking it, but given the approach and planning I don't think that was a failure of the test approach (though we're always open to refinement when an opportunity is brought up; part of the training material is freeform feedback on how you think the org could better handle these challenges from your PoV).
Anyways, the point is it shouldn't have to be "tackle real risks or have other groups hate security"; you need to find a way to do both at your place (which may be different from how you'd do both at another place). And that should be true regardless of whether GoDaddy managed to implement the test poorly this time or not.
Exactly! I kept having this feeling as I read their replies and you just nailed it. "I can't test the real threat because my users will rebel" might be a real case in his situation but speaks poorly of the company. What other things are they not dealing with head-on?
It beggars belief that in a time of massive human uncertainty, the active malice of having the company dangle a fake bonus in front of employees is just given this weird shrug.
The most good-faith reading of this whole tire fire that I can manage is that if you're going to pull this, you had absolutely and without question better have actual bonuses, of no less and frankly probably more than the phishing attempt, in the pipeline for every employee.
If you don't, you should be fired because it is inhumane to act this way to other people. It is, and I do not use this word lightly, evil.
Why is this specific scenario "the real threat" and not any of the alternatives you could use in your test? How large do you think the benefit is you get from using this specific example over another?
What is bad about a company culture where going for maximum emotional impact over a weighted approach causes an uproar? (I'd be much more wary of one where it doesn't, because that tells me employees expect the company to yank their chain, expect bad treatment from other teams, and know complaining about it won't help. That's a broken environment.)