I am consistently surprised by people's attitudes as I read HN comments.
Full disclosure, my old employer was very serious about phishing prevention (we sat on a ton of IP and people were out to get it) and did testing of this type as well. I did fall for a phish test around last review season because the phishing email had something about reviews in it.
Let's be adult about it - it was a good test and I failed it because I wasn't vigilant enough.
The fact that GoDaddy's email was "tempting for people around holiday season to click on" is - guess what - exactly what a moderately-sophisticated phishing attack would look like.
One of the things IT teaches employees to look for in phishing emails is language that either doesn't make sense or seems designed to get your guard down so you act without thinking.
In GoDaddy's case it included language like "it's free money, claim it now." Really - if you slowed down for a sec and thought about security, is this the language in which your employer emails you? Also, why does your employer need you to fill out any forms about your location - they know where you are.
For all the talk on HN about security and companies not taking it seriously enough, here's a great example of a company taking it seriously (to the point of discomfort) to teach their staff about what phishing could look like, and HN is somehow objecting.
PS: I bet there are no consequences for failing this test other than needing to retake the training, which is right, because the people who fell for it do need the reminder. If this was a real phish they'd have leaked real data. Perhaps even your data.
I explicitly told my penetration testers to avoid anything like this.
Dangling and then revoking a bonus over email in a year where folks are already hard hit financially and mentally, under the guise of “education”, is lazy, and any security leader who does it is a fool.
Effective security culture builds rapport with business units. This type of nonsense does the exact opposite.
Yes you may have proven a tactical point, but you have just set yourself up to lose the war.
Why not have both? That is, plan to provide a real bonus and amount, even if it's just 100 bucks. Beforehand, send out said phishing email, and collect data. Provide the bonus to everyone. Once the holiday is over, notify employees who failed the test of how easy it is to prey on people's emotion and to be careful, that email was in fact in no way tied to the bonus.
> Yes you may have proven a tactical point, but you have just set yourself up to lose the war.
you glossed over this point, but it is nonobvious. how exactly does making your users skeptical of email "lose the war"? are they going to type in their passwords out of spite now?
the gp is exactly right. if you have a line you won't cross to train your users, then you're leaving a vulnerability. if you want to make the argument that the vuln is worth not "being mean" to your employees, then make that argument, but don't pretend it's more secure.
> how exactly does making your users skeptical of email "lose the war"? are they going to type in their passwords out of spite now?
No, they’re going to quit, or put less effort into their job (which is often worse for the employer than quitting), or tell other people not to work there, or—as in this case—tell everyone on the internet what you did and then your company’s reputation takes a well deserved hit.
Or they will learn not to trust any email they receive. Or they will learn that phishing emails come from internal training, and therefore do not need to be reported.
>if you want to make the argument that the vuln is worth not "being mean" to your employees, then make that argument, but don't pretend its more secure.
This is not a good scam. People fucking notice when they expect a 650 dollar bonus and it doesn't come. It's not some arbitrary "check out this cool link" style email that an employee might click and then forget they've ever engaged with.
Not to mention that it caught nearly 10% of the fucking company - that's an insanely high click-through rate for any phishing test I've ever seen at a software company. And I don't even know if they sent it to all 7k employees (my guess is no). So either GoDaddy has incredibly incompetent employees, or this looked fairly real (and the email does actually look pretty damn good for a phishing email - custom image, no spelling mistakes, internal domain, aware of the different geos the company operates in).
So you pissed off 10% of your company with an email that wouldn't ever be sent as a real scam (because a week later the employee is going to ask where the 650 bucks is and alarm bells will go off).
I don't have the context of a GoDaddy employee, so maybe there's an obvious mistake in there, but generally speaking - this was a dumb play by the security team. It's now also losing them goodwill in public.
Phishing doesn't have to be a literal scam. The more insidious kind that companies are actually wary of are the ones intended to steal credentials and get into internal systems.
Phishers can easily find out what internal company emails look like. Send an email that looks like "Hey, we're giving you a $900 bonus. Make sure to fill out these forms on workday", with a link to a phishing website that looks exactly like workday.
Fill in your credentials - hey, we also need your 2-factor code - and bam, the hackers have an in to the system. It doesn't matter that the employee realizes 10 days later they were phished.
It absolutely does matter that they realize a week later.
Security is like an onion - It's layered into place across different levels, and different systems.
If you just want access to a single employee's machine, for a short duration - sure this would work. If you really want to compromise the whole internal net, it's going to take longer than a few hours to work your way into other systems. Generally you either need your target to access the system you want, or you have to spread to other machines to find a machine with the access you need.
The most effective attacks are the ones that get in, and have weeks to spread through the whole internal network. Take the Target breach in 2013 - They were in the system undetected for 20 days, nearly 3 weeks.
As soon as the company knows they have a problem, any decent security team is going to check every system on record.
Maybe "doesn't matter" was too strong, but the point stands. The fact that the employee may eventually realize the folly in no way prevents damage from being done.
In the ten days, or even one day, before they realize they were phished, all the sensitive information they can access can be stolen. Furthermore, more sophisticated phishing links can then be sent from their account. After all, who's going to suspect an actual email sent by a colleague as a phishing attempt?
A holiday bonus type of phishing attack absolutely can work, and be extremely effective at credential theft. It may not be effective at literally scamming money from the employee, but who cares.
Fake phishing is not some great methodology for better security; it's a tool to embarrass people and hope that that embarrassment leads to better security, like the Wall of Sheep. (Which, by the way, no one should ever implement at work.)
Know what doesn't build better security culture? Trying to trick your users. Know what does? Working with them closely to help them understand security, finding out when and how people get tricked, and working to solve those issues.
The anti-phishing efforts I've seen so far have been lame and ineffective. Rather than trying to find new ways to make people fail, security teams should be finding new ways to prevent people from falling victim.
If you make your employees feel bad on purpose and have a track record of insensitivity, they won't like you. People who don't like you won't go out of their way to act in your interest.
I don't think GP was ambiguous at all, to be honest. This point was extremely clear. It feels like you came to the discussion with an axe to grind.
Now that's a good idea. Where I work, our security policy is such that I should be able to destroy any (company) computing equipment an end user is using, and the loss of data should be restricted to their unsaved changes. A test like the one you propose would be a very memorable reminder for information security awareness, and might be really effective before infrastructure upgrades if restricted to people with hardware about to be phased out; you need to nuke the old hard drives at that point anyway...
Not exactly related but 7-8 years ago a company in the region had a major virus outbreak. There was some kind of zero day worm running through their environment and despite having updated endpoint protection they still got got.
They didn't feel they had any choice but to hire about 50 temp folks from a local contracting company and run through the entire desktop environment to pull drives, re-image, and reinstall. They did this over the course of a weekend, and on Monday when employees came back they found their desktops wiped clean. Within an hour the site lead started getting calls and visits from panic-stricken employees who had kept all sorts of personal info on those machines... photos, emails, documents, etc.
All gone.
They ended up missing one machine that got powered up that Monday and re-infected the entire campus. Had to do the same thing again.
This one is tricky. Effective phishing preys on the weak and the vulnerable. For better or worse, we do have to train people -- even weak and vulnerable people -- not to fall for scams, and there's no better way we know of than to do training exercises. Cruel though it may sound, unless we can think of a better way, I think it's the most effective option.
The fact that employees are failing this proves the point, and until we can show that they are no longer falling for it, it shows how vulnerable we remain from an infosec posture.
Again, while you may be proving a point and obtain a localized victory, it is Pyrrhic. There are other phishing scenarios that can be tailored for the proclivities of each team. Overdue POs for purchasing. “Hey I saw some source code leaked on this site” for engineering. Etc etc.
The only reason you would have to send something like this is because you are lazy, need to generalize quickly, and blast it to the company. And you will teach a lesson. And you will kill any goodwill to the security team as a byproduct.
I don't agree with you; I agree with the other posters in this thread. You can't just pull your punches; phishing and ransomware crime is currently skyrocketing. This is exactly the type of scenario that will entice many people to infect themselves with malware. This is also a scenario I've seen multiple _real_ e-mails of in recent weeks. It's not like your employees are unlikely to receive something just as nasty as this over the course of their employment.
That said, I do agree with you that money is better spent elsewhere. A company with > 1000 employees should just assume there's an infected machine in their network 24/7. Defense in depth stipulates that this should not matter for their security.
You can, and must, pull your punches and gauge the trade-offs when you run a security organization that has to work and collaborate with other teams. If you truly disagree, I encourage you to replicate this particular test in your organization and see how that works out for you.
I can attest that I have run tests in a very similar vein at least three times. One of them also caused an uproar.
I also prefer my security measures procedural, technical, and layered. Like I said before, phishing simulations don't really work for me at all... At many of the places where I currently do some work, the security department has a large say in what happens, regardless of goodwill. Security measures always suck, are less efficient, and cause no end of headaches; you mostly have 0 goodwill anyway.
> Security measures always suck, are less efficient and cause no end of headaches, you mostly have 0 goodwill anyway.
Isn’t your job to make sure security measures don’t suck, aren’t less efficient, and don’t cause headaches?
This seems to be a very defeatist attitude to take.
From my personal experience one of the following happens when a security team takes your perspective:
1.) users work around security mitigations, causing worse security issues
2.) workers quit the company due to friction when working
3.) the security manager gets fired because they won’t ever compromise
Can you say who you work for? I don’t want to work for you either.
You seem oblivious to the fact that you can still be effective but not a shitty security team.
Your way of approaching this literally makes people who should listen to you and trust you want to do the opposite. What an awful security team you’re part of.
Over time security measures always cause friction. You'll always be the annoying presence, the naysayer, the 'needlessly' difficult person. Effective security imposes restrictions, hurts egos and interferes in natural social responses.
It's funny you say that my way of working literally makes people want to do the opposite of trust me, when I send them a phishing e-mail that's exactly what I'm aiming for ;P
I've worked at a place where the security team were a detached, nagging presence. Devs only interacted with them when they had to, so security became an afterthought.
I've also worked at a place where the security team were trusted collaborators. Devs were comfortable communicating with them. Their security skills improved over time, and so did the security of the software they wrote.
The latter strategy is far more effective at moving the needle over the long term.
can you rephrase this without the acerbic vitriol? I can't really take a security discussion seriously with someone who can't communicate professionally.
Treating people this way is not itself professional. The inhumanity of the people with power and the willingness to exert it upon others in this fashion is deserving of a little, as you say, vitriol. (I would say "contempt".)
Ah, yes — merely supporting fake bonuses during a historic economic crisis is perfectly professional, but describing that as awful is “acerbic vitriol”.
It was pretty much this scenario years back, as a result of that exact same scenario being used by bad actors.
The real thing happened, was detected and shut down. Then they did some trainings over the course of a year, and had us test the same thing again next year. Results were not particularly good, and people got upset that they did not receive the promised Christmas gifts.
Afaict, these trainings stop being effective at around 10%. That is, 10% of all people being phished like this will do everything you tell them to. Up to and including sending you their password and installing malware on their machines following your e-mail instructions.
If your company culture is so bad that you fear working on real-world security problems with your people, you've already got a collaboration problem, and it's going to undercut any security theater you perform when it comes to actual risk.
Hmm. Nowhere did I say my culture is bad. In fact it is the opposite: teams proactively think about security from the ground up and bake it into their respective products. A gotcha stunt like this diminishes that culture, all so that you can pat yourself on the back for emulating a phish with a high click rate.
I'm aware your claim is that your culture is good; I never claimed you admit it's bad. My claim was that needing to avoid real attack vectors is a good sign the culture is bad and that security is theater about how much it's worked on rather than about how real risks are discovered, tracked, and approached.
I agree this shouldn't need to be a "gotcha stunt", though. If that's what it would be to you, then yes, you're doing it/communicating it/following it up wrong. It should be an awareness call that phishing is going to target you in very thoughtfully real and meaningful ways, not just boring fake tickets/issues or the like. If you can't find a way to send that message in your org, then again I claim your org already has a bad culture: it can't talk about or examine real security issues, and its first reaction to a real risk is that you're just trying to improve your reporting numbers again.
Also, a small year-end bonus being not only considered exciting but not existing at all is probably the real callousness problem in the org, not that someone points out the allure of money is used by malicious actors to get people to click things. But that's separate from the whole role-of-security discussion.
Your assertion that the only output of this could be "to send a message" is exactly the kind of thinking at a place with cultural issues relating to security policy. See my child comment on how something like this is supposed to be used in a way that focuses on risk-reduction and active feedback instead of group politics and message marketing.
At the end of the day, if you've got a threat vector you're too afraid to actively measure, it doesn't matter how much you message other departments about it: you'll never know how big a risk it is or whether appropriate action was taken, unless you are compromised by it at a later date. This is as true of human risk monitoring as it is of technological risk monitoring.
I'm involved with some of our security testing and the way something similar (relating to monetary compensation from work) was done at our org (not to be named obviously) recently was:
1) An uptick in phishing emails (both user reported and data analyzed) was noted with a few patterns of attack in particular being on the rise in the last month.
2) Communications about the attack profiles were sent out company-wide, followed by invitations to material (written and short-video form, optional for low-risk groups in the org based on their use of email, and HR training courses for those with heavier/job-required use patterns fitting the attack profile, usually IT).
3) About a month's time passed, user-based reporting numbers were looking better, and we communicated to leadership that a test email would go out to users matching the attack pattern, to see what click-through rates would look like on a well-tailored email that wasn't quickly removed from inboxes. Leadership approved.
4) Click-throughs result in the user being dropped at an internal landing page hosting links to the above communications, and add the HR training course to the user's profile.
For more detail on step 4), the opening of the landing page is in a forgiving style, not a scolding style; think "Oh no, this could have been from one of the phishing attacks we've been experiencing lately. Don't worry, it was just an example attack from us - but did you notice any of the signs of a...".
It's also worth noting as part of 2) it was discussed that the official communications on end-of-the-year packages would be sent out _before_ the security test; I'm not sure if that was the case in GoDaddy's scenario. This year didn't include a cash bonus at Christmas (we did one in the summer for COVID) but did include unlimited PTO rollover and similar non-cash perks given the situation (and it was explained there would be no cash bonus).
3) can be tricky when trying to fend off attacks at the leadership level; it usually uses a modified approach to this list where lower management of each department is involved instead.
As far as the numbers themselves the higher the click through rate the WORSE the score the security group gets, it's a score for the amount of improvement from the planned action not a score for how many people you could trip up before you get a pat on the back. At the same time it was modeled after the COVID related phishing emails we had been getting, not something someone made up because it seemed easy or hard to pass.
It's a very large org, and in the end this went well for all teams (even though improvements weren't quite as good as we had hoped, they were still pretty darn good numbers), and we were able to show with data that we had improved security against a real threat. I'm sure there were a few (given the numbers) who were a bit let down after clicking it, but given the approach and planning I don't think that was a failure of the test approach (though we're always open to refinement when an opportunity is brought up; part of the training material is freeform feedback on how you think the org could better handle the challenges from your PoV).
Anyway, the point is it shouldn't have to be "tackle real risks or have other groups hate security"; you need to find a way to do both at your place (which may be different from how to do both at another place). And that should be true regardless of whether GoDaddy managed to implement the test poorly this time or not.
Exactly! I kept having this feeling as I read their replies and you just nailed it. "I can't test the real threat because my users will rebel" might be a real case in his situation but speaks poorly of the company. What other things are they not dealing with head-on?
It literally beggars belief that in a time of massive human uncertainty that the active malice of having the company participate in dangling a fake bonus in front of employees is just given this weird shrug.
The most good-faith reading of this whole tire fire that I can manage is that if you're going to pull this, you had absolutely and without question better have actual bonuses, of no less and frankly probably more than the phishing attempt, in the pipeline for every employee.
If you don't, you should be fired because it is inhumane to act this way to other people. It is, and I do not use this word lightly, evil.
Why is this specific scenario "the real threat" and not any of the alternatives you could use in your test? How large do you think the benefit is you get from using this specific example over another?
What is bad about a company culture where going for maximum emotional impact over a weighted approach causes an uproar? (I'd be much more wary about one where it doesn't, because it tells me employees expect the company to yank their chain, and expect bad treatment from other teams, and know complaining about it won't help. that's a broken environment.)
How would this other form of exercise you propose demonstrate that the staff isn't vulnerable to a fictitious promise of a bonus sent by a malicious outside actor?
I disagree that one can characterize the simulation of a specific scenario as "lazy." Is it unusually attractive to the target? Yes. Lazy? No.
Ask yourself what the purpose of proving this point is other than a vanity metric. Then ask yourself whether that should be the actual goal. Because it seems to me the actual goal is to have a workforce with generally high awareness and caution around phishing risks in general - and that can be built without goodwill-destroying tactics like this.
Hey, I'm all for making it better if it's possible! But the evidence isn't clear that other forms of training are effective in stopping an attack that involves this sort of messaging. If there are, great - but we need to prove that out first.
No - since you are planning to inflict some harm to your employees, the onus is on you to prove that the harm is warranted.
Specifically, you would have to prove that sending phishing training emails with more neutral topics (e.g. a package arrived, IT policy change - ACTION REQUIRED) is less effective than sending the more potentially harmful kind.
In fact you should first show that fake phishing emails are more effective than traditional non-phishing emails that simply warn you about the risk of phishing and give a clear example of a phishing email without any trickery.
Something like: “SECURITY INFORMATION: Phishing emails target holiday bonuses to increase engagement. Always be on alert” along with a few points on what you are likely to see in a phishing email, how you could spot one, and what to do if you get phished.
> there's no better way we know of than to do training exercises
Are you sure about that? Is there any evidence that supports this claim?
It certainly fails the sniff test. Behavioral science does not hold setting you up for failure to be an effective learning contingency. I mean, try teaching your dog that way and see how far you'll get. You will also teach your workers that phishing emails come from internal training, so there is no need to report them.
When I was working as a lifeguard we never went into training situations unknowingly. There is a reason for that: a) it is dangerous, as workers might act irrationally, creating a dangerous situation; b) it is stress-inducing and not healthy for anyone, especially those with underlying stress issues or conditions; and c) there is no evidence that live training is any more effective or teaches you anything more than traditional training does.
Internal domain, custom image with GoDaddy logo, no obvious spelling problems or mistakes.
Go look at the image in the article.
I don't work there so maybe something in it should have set off red flags for employees, but generally speaking - That's a very high quality phishing attempt.
You misread the article. The article says the emails claimed to be from godaddy.com, but it doesn't say that they were actually sent from godaddy.com.
Regarding the "it's free money, claim it now" language... my employer also does regular phishing tests yet I wouldn't be surprised to get this as a legitimate email. Benefits from company partners, wellness incentives, etc. all read something like this and you never really know who the company gave permission to contact you until the message comes. A few anecdotal email subjects, all to my work address, from a 15,000 head company on the S&P 500:
1. "Claim your check now! $1.50 is waiting for you" Apparently a legitimate email refunding me for one time the vending machine failed to read my card properly (but had my work ID scanned).
2. "You could be earning an additional $1000 per week– find out before it's too late" Something about our 401k but included a link to some survey that HR required we fill out.
3. "Reminder: Claim your wellness check now" I expected this to contain a link to the health survey we're supposed to complete which gives me a $5/week discount on the health insurance I get through the company. Turns out this was actually related to some other company wellness incentive which gave us gift cards for participating in various events (bike to work month, etc.).
I could probably dig out more but I passed all of those along to the corporate email check to make sure and each was verified as legitimate. I'm sure plenty of other companies have made attempts to increase "social engagement" without realizing the consequences it has for security.
I have a lot of thoughts on this, having worked on security teams, and now running a company.
1. Employees and customers are who the company should be serving. It isn't "employees at the cost of customers", or vice versa. If your business can't do both it shouldn't exist. In this case the thought was "It's worth making employees upset because this addresses a real customer concern".
2. Phishing tests are silly. You should just assume someone will get phished. If you want to do trainings, do trainings. Or, better yet, make phishing pointless - at my company anything important is gated by a U2F token, among other things. We are simply 'immune' to most types of phishing - the one major source left being wire transfer fraud, which is fairly easy to avoid.
We only do phishing training for compliance purposes.
3. This is the wrong kind of security. It's "blame the user" security. It's a silly, backwards, outdated, ineffective attitude.
If the goal is "Make sure people report phishing", you can do that without trickery. Just send an email and request that users report it to whoever is responsible - you'll soon find who does or does not know how to do so.
4. (Editing in) Security teams should be aware of how what they do impacts the company. This phishing test has had a public impact on the company - that's a disaster. A security team is about managing risk, and here they've instead subjected the company to concrete, public scrutiny. This was simply the wrong call.
Having trust in your security team is really important. Security teams need to do outreach, they need to be friendly, and be people who everyone feels good about going to with a problem. Building animosity for the sake of phishing protection is far more dangerous.
> If this was a real phish they'd have leaked real data. Perhaps even your data.
That's the security team's problem, not the victims'!
Of course, ideally the IT team has screened out the phishing attempt and secured systems as much as they can.
If they do it perfectly, then your comment applies and nothing else matters.
If, however, the security team cannot be perfect, some amount of bad stuff will land with the users. This seems realistic.
So training people to be security-minded is useful. It's the next line of defense.
How do you train your users? "Just send an email and request that users report it to whoever is responsible" as you mention is one thing. But you can also imagine someone complying with that but then falling for a well crafted phish (that happened to me, as I mention elsewhere in this thread.)
Being confronted with falling for something makes people take the threat more seriously. It takes someone's attitude from "it can't happen to me" to "oh, it did happen to me."
>> at my company anything important is gated by a U2F token, among other things. We are simply 'immune' to most types of phishing
I don't know your business' threats, but that seems naïve. Imagine your sales team gets a phish email asking them to list your biggest customers, including contact info and revenue, "so that we can send them an appropriate thank you gift for the holidays." If someone replies to this/fills out the form, you have leaked critical information even though nobody has penetrated your system (so your tokens don't help). Even if this type of thing isn't a problem for your business, you can imagine it is for many.
For me the term that comes to mind is: Security at the cost of quality of life.
Sometimes security is redundant; that is fine. Sometimes it is unnecessary; that could be fine as long as it is not harmful. Phishing training should be redundant at worst and unnecessary at best. But these types of phishing training definitely have negative effects on the trainees. You are invoking a sense of failure and unnecessary stress. Some people will have underlying stress issues, perhaps they are having a bad day, perhaps they are experiencing PTSD, and then seeing they failed a phishing test could be disastrous. Definitely not worth putting your workers at this risk for supposed (and unproven) security.
Perhaps my least favorite phrase in security, used constantly to justify weak, meaningless, or harmful work as "another layer".
U2F is just one layer. It's a real, meaningful layer - it addresses real threats in a non-phishable way. Gating access by device, acls, etc, or locking down sharing permissions, are other real and meaningful layers.
> some amount of bad stuff will land with the users
That's the security team's problem to solve.
> But you can also imagine someone complying with that but then falling for a well crafted phish (that happened to me, as I mention elsewhere in this thread.)
I can just assume that one person will fail the phishing test already, and work from that angle. Way better than assuming I can teach an entire company to be untrickable.
> Is there no data that humans can leak out by entering it in the wrong place?
Wire transfers were one example - there are others, and we have other ways to mitigate those without assuming users should bear the responsibility.
By 'most types' I mean either phishing for credentials or as a delivery mechanism for malware - these would both be extremely difficult to do, or otherwise mitigated by other policies.
> but you can imagine that in many businesses there is.
That's their security team's problem to solve.
Feel free to train users, I have no problem with it. We do it for compliance purposes, but it's also a fun opportunity to engage with people about security, including meaningful security. And you do want people to be able to spot and report phishing, it's just not something you should ever rely on, or compromise for.
Note also that fake phishing emails are not the only way to train people against phishing. Arguably, traditional training (i.e. explaining and showing the trainee examples in a setting where the trainee knows it is a demo) is both a better way to teach about the risk and protection, and is less harmful.
I used to work as a lifeguard and we would conduct training scenarios at least every month. And we would never do live training (i.e. a training scenario without knowing it is training). There are several reasons. First is safety: you would be putting workers at unnecessary risk. Second is stress, and third is that there is little evidence we would walk away from the training having learned anything.
So if you believe phishing is a serious threat to your security, that is still no excuse to deliver fake phishing emails to your workers.
I can't tell, but it seems like you have unrealistic expectations for security teams. On the one hand, you handwave about how things are 'their problem', and on the other you dismiss standard methods and models as baseless self-justification.
I would turn down a position reporting to someone with this combination of attitudes - I'm pretty sure my hands would be tied, and I'd only be there to take the blame when the inevitable happened.
I've worked on security teams or for security teams for my entire career.
Yes, some things are the security team's problem - as in, security teams are responsible for managing risk. My expectation is for them to do so.
Again, you can perform phishing tests, but I think they're mostly a waste of time, a terrible substitute for real mitigations, and they should never come at the cost of your employees' sanity - a security team must build trust with other teams first and foremost, not burn it because "real attackers are mean too".
> dismiss standard methods and models as baseless self-justification
I would argue that live training with a “set up for failure” is non-standard.
A standard training has the trainee knowing they are in a training situation. I have never seen this type of training used for any situation other than phishing training. And before you say "fake fire drill", no, I have never seen those outside of the movies, and I would believe a Simpsons-type situation where you are actively putting workers at risk is the reason for that.
I agree you can't expect people to provide full defense, and it doesn't sound like you disagree that helping people act more securely is important.
In my example, there's a difference between whether one sales person leaked their business numbers, or the entire 100 person department did. You train users to minimize the vulnerability even if you can't fully solve it.
If you agree that far - then I am not sure where we're disconnecting on this question.
I would imagine we're disagreeing with the "at what cost"?
As in, is the cost of:
1. Losing the trust of your coworkers
2. Causing public reputation damage
3. Potentially harming coworkers emotionally
worth the gain of having a slightly more effective phishing training? I would argue no.
I would also say that it isn't nearly as important as implementing other measures - U2F being a big one that I'd mentioned, but there are plenty of others. It's certainly not where I'd recommend anyone start.
You should not assume that U2F makes you 'immune' to most types of phishing.
If your assumption is that users will fall for phishing tests then it follows that they will present their U2F token or simply take some action on behalf of the attacker directly.
The point of U2F is that they can't do that. For example, if I send you a fake Google login page, you can't "trick" me into entering my token - it won't work.
Yeah, if you'll read that blog post it'll explain exactly that.
U2F tokens are a second factor for authentication, meaning that to log into your account you need your password and the token.
Further, the token takes the domain into account when it's generating the key. That means that you can't just give me a fake Twitter page and then forward the creds/token, it won't be valid.
U2F mitigates this entire class of phishing attack.
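If it helps to see why relaying doesn't work, here's a toy Python sketch of the origin-binding idea (using the `cryptography` package; the real FIDO protocol has a more involved message format, and the domain names here are just illustrative):

```python
# Toy model of U2F origin binding, NOT the real protocol: the browser
# supplies the origin it is actually connected to, and the token's
# signature covers it, so creds relayed from a fake page don't verify.
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

token_key = ec.generate_private_key(ec.SECP256R1())   # stays inside the token
server_pubkey = token_key.public_key()                # registered at enrollment

def token_sign(challenge: bytes, origin: str) -> bytes:
    # The user never types the origin; the browser supplies it.
    return token_key.sign(challenge + origin.encode(), ec.ECDSA(hashes.SHA256()))

challenge = os.urandom(32)
# The victim is on a lookalike page, so the browser reports the fake origin.
sig = token_sign(challenge, "https://twiiter.example")

# The real server verifies against its own origin; the relayed signature fails.
try:
    server_pubkey.verify(sig, challenge + b"https://twitter.com",
                         ec.ECDSA(hashes.SHA256()))
    print("accepted")
except InvalidSignature:
    print("rejected: signature was bound to a different origin")
```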
Beyond that, we also only allow logins to sensitive services from employee laptops, validated by a TPM, and multiple other controls, but that's not really the point.
> Or 'I'm on vacation and left my laptop, could you just run this command on the prod cluster for me so we can avoid downtime over the holidays?'
Right, so this would work, but it's a huge gap between "Hey run this command for me" and the attacker being able to run arbitrary commands. It's a reasonable thing for your security team to consider and attempt to mitigate in other ways.
“Really - if you slowed down for a sec and thought about security, is this the language in which your employer emails you?”
Yes. Friendly language is becoming common in corporate emails.
“Also, why does your employer need you to fill out any forms about your location - they know where you are.”
They very often do, because things aren’t wired together within the company - and because many schemes are administered by a third party. Filling out forms with details that ‘the company knows’ is commonplace.
Companies even add 'Warning: External email' to the subject lines of emails from outside the company, but then fail to whitelist trusted third parties, so employees get used to flagged emails actually being legitimate... and then get used to ignoring the warnings.
Security is important, but it does not trump any other consideration.
Will they next send employees emails claiming their loved ones are in danger, because that is something real hackers might do? Would you consider that ethical behavior? It's actually a pretty common scam, at least in my country (normally done to defraud old people, not to steal company secrets, but still).
I'd expect to see a very convincing study that would show that this type of emotional response is crucial for accurately training people to recognize real phishing before I accepted in any way that this was ethical. Absent strong evidence in this regard, this is utterly disgusting.
I guess that's subjective. In my old company, people generally just felt "yup, people are out to get us, and this test is making me realize how vulnerable I am to screwing it up and is therefore a good reminder."
However this one makes you feel, how would you feel if this was the real phish and you were the one who leaked sensitive customer data because you fell for it?
You are assuming that seeing real emotional stakes in the phishing exercise actually helps with recognizing the same in a real phishing email. I very much doubt the validity of this argument.
Note: I'm not against testing your employees for phishing attempts. That is extremely valuable. I'm against using something with an emotional impact as the pretext of the phish, when I believe a more neutral pretext will do just as well.
A separate note is that according to the pictures shown, it seems this is also a particularly bad example, as the email has legitimate headers, showing that it's coming from godaddy.com - so it would only rely on employees distrusting the contents to recognize it as phishing, which is a bad lesson to teach.
If instead of a Christmas bonus it was the death of a loved one, would you still consider it acceptable (both meet your criteria)? Would it be acceptable to test employee susceptibility to extortion by taking compromising photos and then threatening them with it?
In my opinion no. Any sort of experimentation on employees needs to be ethical. If you screw people over in the name of security, you have now become the security risk. Making the security team be the enemy that the employees hate because they have been hurt by them, will lead to very poor outcomes.
It's ok for companies to prepare their employees for phishing, except that (a) they are not allowed to inflict emotional harm on their employees, and (b) the email should include the correct examples of phishing markers.
A good example would be an email coming from a realistic-looking but fake external email; or an email with faked internal-looking headers that are highlighted by the company's email system.
A bad example would be an email coming from the company CEO's real email address, claiming that the employee was promoted, with no warnings from the email system that the headers are faked. That would not teach a useful lesson, and it would inflict some emotional damage on your employees.
Note: the lesson is not useful since, if the attackers have managed to corrupt the email system well enough to send emails from internal addresses without getting flagged, they will most likely have no need to phish for further access.
It is not subjective! Do the study and get the data before you risk putting your workers in a harmful situation.
(edit): And by study I mean show me that fake phishing emails are more effective than traditional training where workers know they are in a training situation.
It might be a realistic scenario and thus a "good" test. It is still a total dick move to do this as an employer. Some things you just should not do to your employees. There are a million other phishing scenarios that are realistic where you don't have to be promised money by your employer.
I think the correct course of action would have been to actually award out the amount promised to everyone completely separate from their action or inaction on the phishing email. It would have gone over a lot better if they had done this.
There are more humane ways to train an army than to make them think of situations where someone is trying to kill them, but... if that comes at the expense of worse training (and therefore actual higher likelihood of death) then it doesn't do anyone favors.
The fact that so many people fell for this test means there's something (obviously!) about this scenario that makes it particularly sensitive and mistake-prone for people. Your IT department may choose to avoid it, but people trying to phish you won't.
True story, but I’m not making a point here, or trying to disagree with you. Just something that happened to me once:
I was leaving a hotel near the airport in Delhi, and as we were waiting outside for an Uber the manager of the hotel told us not to be alarmed if we saw some guys with guns running towards the hotel. He told us the police were running a terrorism scenario to see how the local hotels would react, and whether they would follow the plan for such an event. The guns would be unloaded, but everything else would seem realistic. The manager and the outside security knew about it (because they had to let the “terrorists” into the hotel) but no other staff or guests had been warned. We were only told because we were outside and might see them coming, and he didn't want us to give the game away.
Fortunately our Uber arrived before the “terrorists”. It’s possible the manager was just fucking with us, and none of it happened, but it didn’t seem like it and if he was that’s a pretty messed-up thing to do, too. It occurred to us that if they were actual terrorists he might be in cahoots and making sure they got into the building.
He also told us how in a real event he told his outside security guys to run away, they couldn’t stop terrorists anyway, and there was no point getting themselves killed.
Thank you for your service. Hopefully you can appreciate the tradeoff between mental/emotional discomfort and an actual problem that was the point of my analogy.
If we're arguing analogies, OK. Does your company ever have fire drills? What if the first sound of the alarm freaks someone out? What if people find it annoying/idiotic/morale reducing to walk down the fire stairs?
At some point you just say "look, we need to make sure our people are trained, if people think fire drills are stupid or upsetting, we have to take that hit because the alternative is worse."
OF COURSE! We're not talking about prioritizing the company over people. We're talking about training people not to fall for what they could discern as a phish if they took a sec.
It hasn't trained people not to fall for a phishing email.
It's trained them not to believe their company when they offer a bonus.
Which might stop the same email from working tomorrow, but not the one saying "This needs to be filled out by Friday!" or "Class action settlement against GoDaddy over fake bonuses"
But that is not how people are normally trained. Normally people are trained in safe conditions where participants know they are in a training session.
You put people in unnecessary danger by putting them in an unpredictable situation. That is why training sessions are varied and thoroughly debriefed, so that participants can understand how what they learned in the current session can be applied in different settings.
Source, anecdotal: I'm a former lifeguard who had regular drills, and never entered one unknowingly.
We actually don't know what the usual communiqués internally at GoDaddy look like. In a vacuum we can also judge this to be an effective test. In practice there are many unknowns and factors we don't know about though. In my opinion phishing is also an issue at scale when we talk about companies; meaning, there's a likelihood that some will always be more likely to fall for it.
Given how the world has been this year and what some employees may have gone through, the employees who fall for this particular phishing email may actually need more support from their employer.
Either way, this isn't a vacuum and we are talking about a test that is unnecessarily cruel.
Edit: just to make this more constructive, there are always alternatives. Instead of relying on emails only employees could be informed to check in via a second channel in all matters relating to money or a company's IP.
I'm unfamiliar with corporate expectations regarding phishing emails—are you not supposed to click on links sent from an internal email address? The article clearly shows the email coming from an @godaddy.com address, and I'd think that them misconfiguring SPF/whatever is a much bigger deal.
I don't remember the exact language of the training but there's a "use your head" element. Someone you don't know is emailing you something that doesn't really make sense - stop and don't comply. There are lots of reasons the email could have internal addresses (a misconfiguration, a similar-looking domain, an internal threat, whatever) - don't rely on that if other red flags are up.
If you read the text of the GoDaddy phish, there's something like "CLICK HERE NOW TO CLAIM YOUR FREE MONEY BEFORE ITS TOO LATE".
If it was just a notification ("btw, you'll find a little extra something in your check this month") without requiring any weird action from the employee (which, btw, is how this would look if it was real), then it's a totally different thing.
> Really - if you slowed down for a sec and thought about security, is this the language in which your employer emails you?
That is the weakest part of your argument. Companies are notoriously incompetent, yes, even domain registrars or whatever you categorize GoDaddy as.
And what if they used better language? What then? Would you simply move the goal post to another way to victim blame?
I know to avoid a lot of things because of experience, I know a lot of legitimate things act like illegitimate things. And a lot of illegitimate things can masquerade adequately as competent legitimate things.
For example: I know to look at the DKIM and SPF headers in an email, specifically because email clients do not show you when someone is spoofing an email address with the exact same domain name. Not a phishing domain name, the exact same one. This is a real, ongoing UI problem.
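If you want to check this on a message yourself, here's a rough stdlib sketch; it assumes your receiving mail server stamps an Authentication-Results header (most large providers do), and the filename is a placeholder:

```python
# Read the SPF/DKIM verdicts the receiving mail server recorded.
# "suspicious.eml" stands in for any saved raw message.
import email
from email import policy

with open("suspicious.eml", "rb") as f:
    msg = email.message_from_bytes(f.read(), policy=policy.default)

print("From:", msg["From"])
for ar in msg.get_all("Authentication-Results") or []:
    # Typical value: "mx.example.com; spf=pass ...; dkim=fail ..."
    print("Authentication-Results:", ar)
    if "spf=fail" in ar or "dkim=fail" in ar:
        print("-> sender authentication failed; treat the From address as untrusted")
```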
I wonder if you’re capable of seeing that both of these are true:
1. It’s a good test
2. It’s a bad test
That’s the thing that gets me. Are people (HN readers, other people) generally able to only see one of those points — or do they see both but dismiss one?
In your case, @xyzelement, do you see both? I know you see what’s good about the test, but do you also see why it’s a bad test?
I notice that people have difficulties explaining both positive and negative, and then explain why they pick one over the other.
Here it seems an excellent test, but you will probably upset a large part of your employees.
People seem to take sides first, and then try to fully ignore the positives of the other side, and the negatives of their preferred side, in a way to convince the other party how wrong they are.
One side: it's a terrible test and is inhumane
Other side: excellent test and people need to grow up.
But of course you will never convince the other party like that.
>> In your case, @xyzelement, do you see both? I know you see what’s good about the test, but do you also see why it’s a bad test?
I totally see why some people reacted negatively to it, though I think ideally mature people can see that it is a useful test that is trying to teach them something, and thus get over it.
My personal value system is that I choose tough love over coddling because the former breeds stronger and more capable people. It's not for everyone, but for example I want the people who run security for my firm to err toward the former.
It's a tradeoff. I don't want to work at a place that's more vulnerable because it (rightly or wrongly) assumed that employees aren't mature enough to go through a real exercise. That's just my view.
It sounds like “maturity” and “coddling” etc have a specific meaning in your value system.
If someone else sees the benefit of this test but considers it needlessly harmful/cruel (and disproportionately so for different people) — I wouldn’t guess that “immaturity” is in the top 5 of the reasons why they would think that. So I found that surprising.
LeonB, I appreciate this discourse and let me explain why I see maturity as a factor here.
Let's assume that phishing is a real threat, and testing like this moves the needle on people's vigilance (as it has for me when I failed something like it last year.) Let's also assume that if this was a real phish, there would be really bad emotional and financial consequences. EG: imagine being the one who fell for a real phish and actually caused a huge data leak that ended up in the news and put your company out of business.
Regardless of how we feel about it, the above threats are real. So we can either choose to be "nice" but increase people's vulnerability to real painful consequences, or we can choose to be "tough" because we realize that in the long run it creates greater actual security for everyone. To me that's "tough love" - harsher short-term decisions to help everyone in the long run.
It indeed feels adult and mature to take the tough and unpopular decision that aims to address the real risk, and conversely irresponsible to say "we can't deal with the problem because it'll be unpopular and someone may get upset."
There's maturity on the flip side too. When I failed the phishing test, it was a wakeup call. In retrospect I should have caught it but I wasn't careful enough. So I am grateful the company did it because it taught me a valuable lesson that will keep me safer in the future. If my response was instead "those fuckers tricked me" or whatever, it would have been childish, because it ignores that the risk is real and that it's really I who has the power to do better.
There are scammers that call people and claim their relative is in danger or badly hurt or in hospital dying and they need to urgently do this and that to help them - and people are deceived exactly because the stress makes their response less measured and more susceptible to deceit. That doesn't mean that an ethical company would employ such tactics in a pentest. There's a limit to how far you can go in a simulation. Surely, real criminals could do it - but the company can maintain vigilance without resorting to essentially replicating the villains and causing almost as much harm to the victims as the real villains do.
I'm astonished that nobody at GD could predict the emotional harm such a trick would cause. There are a lot of ways to test this without such a cruel method. Even reviews - which you mentioned - would work just as well without being cruel.
First you should establish that this "training" is effective in raising security standards. I'm skeptical it is. When working as a lifeguard we never did live testing (i.e. training without knowing it is training), simply because the results are mixed at best. Trainees are stressed in a live scenario and are unlikely to really "learn" anything from the experience. Worst case, trainees will experience stress at a level where they are harmed by the experience.
Second, security should not fail on a successful phishing attempt. If a worker opening a phishing email compromises your security, you've got bigger problems.
Thirdly, don't discount workers' experience of having failed a task. It is extremely unpleasant and stressful. Workers' health matters, and subjecting us to unnecessary stress levels is simply evil. There is no excuse. Find a better way to secure your system.
I don’t see any problem with the phishing test. But if you’re going to do it like this, when it’s over you should actually give your employees the bonus. Otherwise it’s just tasteless.
> Also, why does your employer need you to fill out any forms about your location - they know where you are.
pff, this happens all the time (although not for bonuses, but for things like branded clothes or surveys). you can't expect HR to automate this kind of stuff every time. it's much easier to just design a google form than to make a script that calls APIs.
Are you managing people or customer expectations? If so, how many already left because of the rational explanations of your lack of empathy? If not, keep it that way, you don't have the emotional capacity to lead others.
If my employer pulls this prank on me, they can either give a formal apology to everyone, give me the actual bonus, or find another developer.
I've never understood why phishers wouldn't just brush up a little on their English, or at least have their phishing email reviewed by a friend. I mean, proper English is so much more effective. At least learn some proper grammar before launching an attack.
Yeah, it makes sense to test for this kind of thing, but they should have planned to give the bonuses to everyone regardless of how they responded to the phishing attempt.
Here's a professional penetration tester and social engineer, Jayson Street: For the record I’m not just disgusted & disappointed by the horrific actions taken by #GoDaddy executives who signed off on this gross betrayal of employees disguised as a phishing exercise! I’m extremely sad that anyone claiming to be in InfoSec would help with it!! https://twitter.com/jaysonstreet/status/1342213216122892293
Digital forensics and incident response professional (and risk management expert) Leslie Carhart: This is one of the cruelest and most counterproductive moves I have ever heard of inside our industry... stunningly unethical... objectively negative as it spoils the fragile relationship between infosec and staff for future IR, reporting, and policy adherence. https://twitter.com/hacks4pancakes/status/134218618971823718...
CEO of social engineering training company SocialProof Security, Rachel Tobac: They have to trust you as a helper so you can support and keep them safe... if it’s a pretext that makes them feel you and your team are insensitive or rude. They’ll just remember they distrust you personally more — when you need their trust! https://twitter.com/RachelTobac/status/1342194628024406016
Are there any indications that the email is even a phishing email? The email is from an internal address. Do they mean to say that their internal email system can't detect fake emails purporting to be from an internal address? This is cruel in multiple ways: promising a non-existent bonus, then blaming employees for failing a phishing test (which is either not a real phishing email, or exposes deep-seated IT inadequacies). Presumably this goes on their performance report too.
I was thinking the same thing. You shouldn’t fail by clicking a link sent by an internal email address. If the link took you to an external site and you entered your GoDaddy credentials or provided personal information, that might be a different story.
> You shouldn’t fail by clicking a link sent by an internal email address.
I disagree in making this broad of a claim -- insider threats are certainly an issue. And as a sibling commenter points out, email headers are easily spoofed.
I'm not condoning GoDaddy's pentest (agreed with everyone else who sees this as a cruel prank), but also, um, why would you click a link if your company is telling you they're going to pay you a bonus? Wouldn't that just go through payroll as with everything else?
edit: it looks like the phishing email provided the bonus as an opt-in? yeah, that ought to raise red flags that it's not just being applied across the board, but still, it's been a tough year, so people might not think as hard about it.
I don't know what the security situation is like at Godaddy, but I'm sure there's some amount of investment needed to roll that out broadly without accidentally breaking existing employee workflows.
And my point still stands re: insider attack. At least at Google, anyone could ostensibly register HappyHolidays@google.com (or some variation if it's already taken) as an alias or a mailing list, which removes the need for spoofing.
Entering personal info might also be understandable. My employer gives a Christmas gift. This year they asked us to update a form with our temporary address if we were in a different location.
Absolutely, in the context of a physical item being shipped to you (especially if they can't just distribute it at the office), if it's not through payroll. (e.g. we had site-specific fun events in lieu of the annual holiday party)
But a cash bonus? That's the epitome of something that 1) should go through payroll and 2) should just get direct-deposited into your bank account as is the case with your regular paycheck. There's no reason why you'd need to provide any additional info.
Unless the pictured email client has an absolutely horrible kerning implementation, it is sent from @Godaddy.com. Where did you get the "@gocladdy.com" information from?
The apex phishing e-mail is indistinguishable from a legitimate e-mail, except by SPF/DKIM. After all, the apex phishing e-mail is based on a byte-for-byte copy of a legitimate e-mail.
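And SPF/DKIM only helps if something actually surfaces the verdict. A rough sketch (my own, not anyone's production tooling) of pulling the verdicts out of the Authentication-Results header that receiving servers stamp on mail:

    # Toy parser for the Authentication-Results header; a real filter would
    # use a proper library, but the verdicts really are right there.
    import re

    def auth_verdicts(auth_results_header: str) -> dict:
        return dict(re.findall(r"\b(spf|dkim|dmarc)=(\w+)", auth_results_header))

    print(auth_verdicts("mx.example.com; spf=pass smtp.mailfrom=a@example.com; dkim=fail"))
    # {'spf': 'pass', 'dkim': 'fail'}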
The email in my company all comes from @<company-name>.com. Why would a company keep a separate domain for internal vs external email? Is that common practice?
It's rare but not unknown - for example, Facebook employees' e-mails are whatever@fb.com rather than whatever@facebook.com, and sometimes Facebook sends e-mails from whatever@facebookmail.com ¯\_(ツ)_/¯
Since the article doesn't seem to address it, the email was sent from “happyholiday@gocladdy.com” and tried to trick users with kerning that makes “cl” read like “d”; here's a screencap where the address is shown in a different font
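For what it's worth, this class of trick is mechanical enough to screen for. A toy sketch (mine, not anything GoDaddy runs) that folds common multi-character lookalikes together before comparing against the domain you expect:

    # Fold lookalike sequences into the letters they imitate, then compare.
    CONFUSABLES = {"cl": "d", "rn": "m", "vv": "w", "0": "o", "1": "l"}

    def fold(domain: str) -> str:
        d = domain.lower()
        for fake, real in CONFUSABLES.items():
            d = d.replace(fake, real)
        return d

    def is_lookalike(candidate: str, legit: str) -> bool:
        return candidate.lower() != legit.lower() and fold(candidate) == fold(legit)

    print(is_lookalike("gocladdy.com", "godaddy.com"))  # True: "cl" renders like "d"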
If the people running this actually cared about phishing, they would update their email system to put a giant red header at the top of all emails coming from an external source.
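It isn't even hard. A minimal gateway-side sketch, assuming a plain-text body part, with the domain and banner text as placeholders (a real gateway would also check the envelope sender and authentication results, not just the From header):

    import email
    from email import policy

    INTERNAL_DOMAINS = {"example-corp.com"}  # placeholder
    BANNER = "[EXTERNAL] This message came from outside the company.\n\n"

    def tag_external(raw: bytes) -> bytes:
        msg = email.message_from_bytes(raw, policy=policy.default)
        from_domain = str(msg.get("From", "")).rsplit("@", 1)[-1].rstrip(">").lower()
        if from_domain not in INTERNAL_DOMAINS:
            body = msg.get_body(preferencelist=("plain",))
            if body is not None:
                body.set_content(BANNER + body.get_content())
        return msg.as_bytes()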
What gets me is companies sending out legit emails that look like phishing attacks. I got one from one of the big banks, asking me to click on a link to get some benefit. I assumed it was a phish, but after a close look at the headers, it really was from the bank's usual server.
I sent a note to the security incident report address for the bank, telling them they're training their users to click on phishing emails. They sent back some noncommittal answer.
It happens all the time at my work place. I work at a big company and I get emails from random people in the company to go to submit info to random places on the Internet, usually related to employee benefits or surveys. I always report them as suspicious and move on with my work.
Once in a while you'll see a follow-up email from the sender saying that they're aware many people reported the email as a phishing attempt, that reporting it was wrong, that the link goes to a company-approved tool, and that employees are required to complete the survey at the link for some department or center's benefit.
This wouldn't be a problem if we had internal websites with standard tools like a survey builder and such, but getting our IT contractor in my org to do the basic functions of their jobs takes a herculean effort, getting them to support a basic internal tool would be a multi-year campaign. I can appreciate how much easier (and cheaper) it can be to use external tools and train users to disobey IT and security recommendations about phishing.
The problem with bureaucracies and hackers is that both will take the path of least resistance, and the hard problem of IT security in large orgs is how to separate the two into distinct categories.
> Once in a while you'll see a follow-up email from the sender saying that they're aware many people reported the email as a phishing attempt, that reporting it was wrong, that the link goes to a company-approved tool, and that employees are required to complete the survey at the link for some department or center's benefit.
That's exactly what a spearphisher would say. Still not clicking a link. Of course, I would rarely read the company email anyway.
Marketing teams at companies are always a risk vector for security IMO and their practices should be looked at by infosec people. Not just concerned citizens (but thank you for spending your time trying to help).
I've seen some wonky stuff being put out by them, and they are oblivious, since their work often crosses the line into actual web-dev territory with security implications, i.e. landing pages and emails. And they don't set up 2FA on the countless marketing tools they use, nor let the more experienced devs know what they are up to.
Banks, and health providers too. I love when my health insurer, say Mega Health Care for a fictitious example, decides to have a marketing page at megahealthcare.com, a patient portal at mymhc.com and a special page for covid stuff at mhccovidresponse.com . All of these have independent certs so it’s entirely possible it’s a phishing site. But it’s legit, and these companies are just encouraging bad behavior. This crap is why phishing happens.
The only way to get them to make a reasonable response to these things is to make a redacted screenshot and shame them publicly on social media. Otherwise, your email is just an annoyance for someone who (perhaps proudly?) thinks "these customers think they are experts when we have a whole team dealing with security".
My previous employer's internal PR team sent out emails that were just short clickbait text in images, with a single link that went through the same tracking URLs as external emails. You had to click through to get useful information, much to the annoyance of the security team.
My company ran something similar recently. The cyber security team opted for “an important change to your available PTO.” It was from an internal address but from a made up name. The company is large enough (1,300 people) that you wouldn’t know if the name was actually an employee in HR or not. Because IT runs software that proxies and mangles all the links in an email, it’s super hard to evaluate the legitimacy of a URL anyways. Of course, most people clicked the link.
Email-link obfuscation (for whatever purpose) makes it nearly impossible to distinguish a real email from a phishing attempt - even if you notice AFTER clicking the link that it's no good (i.e. before entering credentials).
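The irony is that the unmangled target is usually still in there. If I remember the format right, Microsoft's Safe Links, for example, carries the real destination in a url= query parameter; a sketch of recovering it:

    from urllib.parse import urlparse, parse_qs

    def unwrap(link: str) -> str:
        parsed = urlparse(link)
        if parsed.netloc.endswith("safelinks.protection.outlook.com"):
            target = parse_qs(parsed.query).get("url")
            if target:
                return target[0]  # parse_qs already percent-decodes
        return link  # not a rewriter we recognize

    print(unwrap("https://na01.safelinks.protection.outlook.com/"
                 "?url=https%3A%2F%2Fevil.example%2Flogin"))
    # https://evil.example/login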
My company used to mandate all emails are signed with AD-registered certificates to lend credibility, but they've moved away from that. (I think the reasoning was that webmail clients don't have robust support for S/MIME certs, but I'm not sure.)
Clicking the link shouldn't be enough to consider a target to have fallen for phishing. Sometimes, if I get a fishy email, I open it in a private tab within a browser I don't use, or even within a throwaway VM (if I feel something is REALLY strange).
Clicking a link is all it takes to download malicious code and send stuff to an attacker.
Clicking a link is enough to consider a target to have failed.
It shouldn't be though. If your threat model includes attackers with something like a Chrome 0-day, you've got bigger problems than employees clicking links.
Malicious email in the wild is either a link to a phishing page, or a link to a page offering an executable.
If I paste a URL into urlscan.io and have a look at it, I can assess better whether it might be safe. Being told "url got hit, you compromised us" is really silly in my view.
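urlscan even has an API, so a security team could automate that check instead of treating a click as a compromise. From memory (check their docs for the current fields), submission looks roughly like this; it uses the third-party requests package and your own API key:

    import requests

    def submit_scan(url: str, api_key: str) -> str:
        resp = requests.post(
            "https://urlscan.io/api/v1/scan/",
            headers={"API-Key": api_key},
            json={"url": url, "visibility": "unlisted"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["uuid"]  # then poll /api/v1/result/<uuid>/ for the verdict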
Of course "click to fail" is silly. And, in some experimentations I did in the past, it's usually easy, in a large organization, to forge a 100% legit url (like somefileserver.organization.com/some_url_that_can_be_easily_edited_by_anonymous_users) and a 100% legit sender (because of some open relay that passes DKIM and/or SPF). So you just need an access to a minimal-security internal network (easily obtainable through spearphishing or malicious employees) to perform a good phish.
The obvious attack vector is to insert some JS in the webpage that performs a redirection to an external server holding malicious data. But the user would only fail if they entered their data there, not just by clicking.
The URLs include some unique identifier that’s traceable to you. As far as my company is concerned, merely clicking it is grounds for security training.
Edit: I guess the argument is any page could contain an RCE.
Wow. If a single click is enough for an RCE, you've got bigger problems, IMHO. Basically, each and every website could hack into your infrastructure.
I'm not sure whether there are policy recommendations about phishing, but as far as I'm concerned a target would have failed if they entered private data somewhere, or opened downloaded documents or executables.
My last company would do this, and yeah, I think I got bit at least once.
THEN, without any clue about the irony, they'd send us "important" emails with something to click in them... employee surveys, links to benefits information, etc.
This must have been pointed out by other people than me, because once one of the emails had a line like, "AND YOU CAN CLICK ON THIS LINK, IT'S AUTHORIZED."
I don't miss working there (though I do miss a lot of the people).
"And now here is the secret code phrase to show this is legitimate: [...]"
Not sure how you inform people about benefits without giving a link, if the intranet isn’t easy to navigate, short of including everything in the email all the time. So how should it be done anyway?
Well, we did have an intranet (and yeah, the search was abysmal, so it was tough to navigate), and giving it a second's thought: maybe email each person a magic number that they then enter into a single magic-number box on the intranet, and you're magically whisked to the appropriate site? This just doesn't seem like rocket surgery.
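To sketch what I mean (paths and names invented): the email carries only an opaque code, and the one trusted intranet box resolves it server-side, so there is never a clickable link to spoof.

    import secrets
    from typing import Optional

    destinations = {}  # code -> intranet path, stored server-side only

    def issue_code(intranet_path: str) -> str:
        code = secrets.token_urlsafe(8)
        destinations[code] = intranet_path
        return code  # this code, not a link, goes into the email

    def resolve(code: str) -> Optional[str]:
        return destinations.get(code)  # the intranet box redirects here

    print(resolve(issue_code("/hr/benefits")))  # -> /hr/benefits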
An irony here is that there's some decent evidence that phishing simulations don't work; they appear not to meaningfully alter end-user behavior. So basically, this is just an IT department trolling its customers.
A company I used to work for conducted an operation with fake emails using KnowBe4. The helpdesk received an increase in "is this spam?"-type tickets afterward. I think it helped for a time but don't have hard numbers. What got most people was the fake Amazon receipts.
I think that's an understated point. We certainly haven't stopped phishing from being successful - but we have heavily increased the rate at which malicious emails get reported. Most of these bad emails do get sent to a lot of people after all.
That makes it actionable - if something looks nasty I can find every recipient and have it deleted from their inboxes. I can block malicious URLs on the firewall and report them to Google Safe Browsing, and so on.
My company has been sending out monthly tests for about 2 years now. There were some modest early gains (~33% -> 20%) but it's been stuck there for 12 months or so.
They're not particularly convincing fakes either: "please update your password sent from @ourc0mpany.com" or "Dear Customer, your Worldwide Freight package has shipped" type things.
> Dear Customer, your Worldwide Freight package has shipped
I had a package shipped from the US to the UK by my company's shipping provider a few months ago due to covid. I honest-to-god thought I was being phished. The email had one of those titles, a typo in the body, no tracking number, and was sent from a third-party domain. The template email told me to click a link to an external third party via a URL shortener, and when you got there you had to enter your "email address and password", which turns out is actually my work SSO login and it checks via LDAP. Insanity!
Broadcom just did the same thing: e-mail from the CEO inviting people to sign up for a holiday party. Headers were legit with DMARC pass, SPF pass. You were asked to click a URL that showed a Google Drive document but actually led to another site. That and some typos were the only clues that it wasn't legit.
Ah, that would explain why I didn't see you at the party last night. Too bad, we had a blast. /s
In all seriousness, how is a layman supposed to know it's phishing? I assume Broadcom has specific places/processes for organizing parties that people should know about?
This kind of email is especially clever as it plays on FOMO which so many people seem to have.
IAAL (this is not legal advice). An offered but undelivered gift imposes no legal obligation on the would-be donor. In order for it to become an enforceable contract, some sort of additional consideration would need to be required from the employee over and above their existing work (e.g. overtime, extending a contract, etc.), and that promise would need to be accepted by the company.
This is covered in first-year contract law.
That said, an employee might be able to recover on the basis of promissory estoppel if they relied on the promise of a bonus to their detriment.
If:
1. The company sent bonus notifications via email in 2019, and
2. The offending email was from 'happyholidays@godaddy.com' (instead of 'goCLaddy.com')
would it change the legal perspective? I could totally see myself seeing such an email and making decisions based on the premise that I had additional income coming. I think I've read advice in the past that if a "reasonable person" would believe it, then it counts as an informal contract or something.
Yeah, promissory estoppel is a thing. It's not technically a contract, but it's equitable relief that might be available if there's reliance and no other remedy.
I wonder if any jurisdiction has laws against this kind of entrapment of workers. If not, there ought to be a law that forbids employers from setting their workers up for failure.
The most extreme example (I can think of) would be that the company gets a pentester to attempt to bribe employees to disclose privileged materials to an apparently unprivileged person. Would you see this 'as a promise for compensation'?
Not if the tester was an outside contractor and it’s reasonable to be suspicious of someone you don’t know asking for information in exchange for money. But it’s a different story if a company worker was doing the “test” and offered me money for company information. That’s pretty much my normal job (get paid for giving people information), why should anyone be suspicious of that?
Obviously we could come up with details that make it sound completely ridiculous, or totally normal. If someone calls me and says they’ll share the bounty/award money for helping them fix a problem, that sounds normal. If a janitor offers to pay me money to throw away proprietary info without shredding it, that sounds suspicious.
But the point here is that an internal email about having to put in info for your bonus isn’t necessarily suspicious. I have my regular pay and my travel reimbursement deposited into two different accounts right now, so I wouldn’t bat an eye at needing to provide account info for a bonus.
Did you really mean to compare:
a) Your employer promising you a bonus, and
b) Someone telling you they'll help you steal a TV from your friend's house
to be the same thing? Because one involves a perfectly reasonable and legal promise, and one does not. If you can't tell the difference, remind me not to invite you over.
For someone who is not a lawyer, you're making some strong statements about the legal aspects of this situation.
Yes, it's true that an undelivered gift is not a contract...but you seem to have forgotten (or possibly never learned) about promissory estoppel. If the giftee was told of the gift and reasonably acted upon the expectation of the gift, then the gifter could very well be obligated to actually deliver the gift. This is covered in first-year contract law. There are indeed a great number of cases on this point in which the gifter was required to provide the gift.
Also, generally employee bonuses are considered compensation for labor, not gifts. Labor laws trump contract laws. If a company tells an employee that they are getting extra compensation through an official means of communication (like an internal email), they may very well be bound to that, even if it turns out the email was just a phishing test sent by the IT department. You do not fuck with the Labor Board.
You're totally right about promissory estoppel (I updated my other comments about that). It's been a while since I was in 1L :). Again, nothing I say should be construed as legal advice. Appreciate the feedback.
nickff's comment asked whether both a promise of a bonus and a promise of a bribe would be equally considered a "promise of compensation". That's what I was replying to.
Whether or not a promise of a gift is a legally binding contract, there's a vast difference between a promise of a gift and a promise of a bribe.
If a company had a pen tester offer bribes to people, and those employees accept bribes, the pen test was indeed failed and those employees should be rightfully reprimanded. That's good security.
If a company had a pen tester send what appears to be a perfectly legitimate message to employees offering them a company bonus for responding (during a worldwide pandemic where everyone is short on money), and then says it was a pen test and they've failed, that company is awful and heartless. There might not be a legal case against them, but the company is still deserving of public scorn and outrage.
Put yourself on the other side. If their employees are failing this test when they send such messages themselves, the company cannot feel comfortable that they are secure if a malicious actor sends such messages to their staff. It's a case of bad vs. worse. There are no winners here.
It's a message from a valid internal email address. If a company's own email servers can't tell the difference between valid internal mail and external phishing, that's the company's problem, not the employees. If the company's email is hacked so that a hacker can send valid emails from a legitimate internal email address, that's the company's problem, not the employees. Nothing of value was learned by this test, besides the disdain GoDaddy has for their own employees.
If I put myself on the other side, I'd stop producing gross sexist ads, stop supporting overreaching internet legislation, and stop treating my employees like garbage.
If a hacker hacked the CEO's email address, and then used that address to email their secretary asking for information, and the secretary responded.... that is not a security failure on the secretary's part. That's the CEO's security issue, and the company's security issue. Therefore, it's a useless pen test, unless the purpose is to tell employees no emails from anyone can be trusted.
A phishing email can come from the inside. But seeing whether employees will respond to a valid internal email is not a test of employee security. And in this case, it was as heartless as it was useless.
This is beyond the pale, way worse than the scenarios I'd seen in the wild when I wrote https://honest.security
We HAVE to, as an industry, disavow & replace this behavior with techniques that build trust between the security team and the rest of the employees.
The people, not the technology, are the real eyes and ears on the ground. They are a massively beneficial resource for a security team of any size. Why would you squander that relationship with cheap trickery and deceit?
Seriously, this has to be satire, right? It's straight out of National Lampoon's Christmas Vacation. If true, then "what a cheap, lying, no-good, rotten, four-flushing, low-life, snake-licking, dirt-eating, inbred, overstuffed, ignorant, blood-sucking, dog-kissing, brainless, dickless, hopeless, heartless, fat-ass, bug-eyed, stiff-legged, spotty-lipped, worm-headed sack of monkey shit" company.
Any employee of The Company found to be in violation of our new policy of no bonuses will be subject to immediate action, up to and including termination of employment. Any employee found to be ignorant enough to believe they could possibly be receiving anything for the holiday season besides basal survival shall be forced to work over Christmas making hand-written copies of The Employee Handbook until such a time that the CEO returns from his vacation.
The Board of Directors and Principal Shareholders shall meet shortly before the stroke of midnight on December 31st, 2020 to determine the fate of any other employees and to henceforth distribute The Employee Handbook, 2021 Edition.
This may be an effective test or it may not, I don't care. It is extremely tasteless to offer this kind of incentive to employees, especially during a year when many are struggling financially. This would be slightly more acceptable if the bonus were actually given to the employees. People who really don't see the issue with this most likely haven't been in a situation where a bonus can be a lifesaver.
I work for a company that sends phishing test emails like this (though not as bad). It has gotten to the point where legit company emails look just as suspect as phishing tests. I feel like I am just a part of some psychological experiment now.
the simple matter is people click on stuff, and while some attempts might be distasteful if not rude, there does not seem to be a good way to stop people from doing dumb things.
short of banning all non-internal mail and not allowing access to the internet except approved sites on corporate computers; then not allowing access from non-corporate computers.
all of our mail that comes from external sources is clearly marked, yet people still click on enticing links. it is to the point that it goes on your review if you fail too many phishing attempts, and can require training and eventual termination
When they click the (fake) phishing link, bring up a page that tells them it's a test, with some gentle instructions so they can avoid it next time.
What I do to confirm a non-obvious phishing email is look at the real "From" address. The other day I got an email from "Accουnt-alert@amazon.com".
Looking at the actual email address revealed that it was from
"remimbersrs-qwrdvcxwet-exp-2020-3111538818484262@legger-3.com"
This is a great question, but the answer is pretty mundane. Building out, operating, and maintaining the kind of tooling and infrastructure you need to make this work is really freaking hard even with good client-side support for digitally signed email. GPG and similar tools are really hard to use, really hard to support, and really hard to integrate with platforms and services. Even organizations that have rolled out smartcards to all their users and live entirely in a Microsoft world tend not to use digital signatures. Sometimes, integrations break, which results in the email client (or some other tool) breaking, and then the business stops while the integration gets fixed, if that's even possible. And Goddess help you if you're trying to support more than just Outlook on PCs, which has relatively good S/MIME support, all things considered. It's really, really non-trivial to do what you're asking.
This is a bad mistake that could happen in many organizations. But if we'd blanked out the company name in the headline, and used HN comment voting to collectively guess at which well-known tech-ish company this happened, I would've been very curious to see which ones percolated to the top.
(a) Prior training for what employees are supposed to look for, and how to respond if they think they're being phished.
(b) Unannounced drills to confirm that employees act in accordance to (a).
If some GoDaddy employees did not act in accordance to the training, how is that anything but a failed test?
[EDIT] I just realized that "how is that anything but a failed test" could sound reductive. FWIW, I would call this a test failure and also a really unwise test design.
What makes you think GoDaddy employees are struggling financially? They all have their jobs, and their costs are likely lower than pre-pandemic. Many are suffering and unable to work, but that wouldn’t include this set of people.
Alternatively, GoDaddy has failed to secure their work environment and are blaming the workers instead.
Think about why we care about phishing: it’s mostly about people giving credentials to an attacker. Drop $20/person on some FIDO tokens and the risk factor drops considerably. Repeat for the risk of malware - if someone in accounting can run arbitrary software, they aren’t the root cause.
It’s defense in depth. Just because you have one line of defense doesn’t mean you shouldn’t have another. The way defense in depth typically works is assuming the previous lines of defense have been thwarted. Even if you can’t foresee that happening, assume it did.
And it sounds like these employees were trained on this.
The thing I don’t like is the XMas bonus aspect. But the general idea doesn’t seem unreasonable.
How much depth is it really adding, though? Harassing non-specialists seems to have relatively limited value – plenty of security staff get phished – and there is a risk of making people think of the security group as adversaries.
I would certainly agree that the exercise could have some value, but I think it would be wise to weigh that against the costs, and especially to think about how you can make it supportive rather than punitive. In particular, most people are not given good tools for making security decisions, and many of them will be told to violate standard advice regularly. For example, what percentage of vendors, outsourced HR, etc. will tell people to open unexpected attachments or click on links which are difficult to distinguish from phishing? SolarWinds was far from the only company training their customers to ignore security errors on installers, too.
> Alternatively, GoDaddy has failed to secure their work environment and are blaming the workers instead.
This.
I've worked at careless companies who don't deal with their security very well.
I now work at a large health care company that takes security incredibly seriously. All the USB ports are disabled. Nobody has admin rights on their laptops. You can't install any software unless you download and install it from their internal app store, which only allows apps that have passed a rigorous gauntlet of testing beforehand. And you have to put in a request for the software in the first place. It took me three months to get Photoshop approved because I was designated as a developer and not a web designer. It went through three escalations and took several debates between senior managers before my request was finally approved.
We don't have "phishing tests". If something pops up on the security team's radar, then they push it out as an email alert company-wide. That's about it. If GoDaddy put as much effort into securing their network as they do into putting together these stupid tests, they probably wouldn't need tests to prove that yes, humans are fallible.
I'm sorry, but your 3 months to get Photoshop approved demonstrates why so many wouldn't want to work in such locked-down environments.
And what was the cost to the company in having those debates between senior managers? Just to get a standard tool that they already approved onto a developer machine? Can you imagine the overhead that they are causing themselves?
Many people have spouses out of work right now. It's possible to have a job and be in a bad situation.
Regardless of whether the employees actually need a bonus, dangling the idea of one as a test, shortly before the holidays and during a pandemic, seems like a recipe for bad publicity, or at least some frustrated employees.
A friend's company sent a test phishing email to see how many people clicked the link, and then sent out another email after the test with a link to a video everyone had to watch about not clicking links in emails... It is amazing that authenticated email is not a solved problem!
These corporate phishing test emails come from phishd and similar services. Very easy to fingerprint them with SMTP headers and set up mail rules to bounce them to your corporate phishing sink...
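Something like this (header names are hypothetical - fingerprint whatever your vendor actually stamps on its mail):

    import email
    from email import policy

    # Made-up examples; real simulation services each leave their own telltales.
    TELLTALE_HEADERS = ("X-PHISHTEST", "X-Phish-Campaign-Id")

    def is_phish_simulation(raw: bytes) -> bool:
        msg = email.message_from_bytes(raw, policy=policy.default)
        return any(h in msg for h in TELLTALE_HEADERS)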
That's the problem. Many companies proxy all links, making the domain you are actually going to indecipherable even if you want to be careful and hover over the link to check first. It isn't like personal email.
I wonder what sociopath came up with this idea for a phishing test. It seems incredibly insensitive given the minute amount of difference it is likely to make - the way to tell a phishing email from a non-phishing email depends much more on the metadata than the actual contents, so there's no reason to play with people's emotions like this. Even the employees who immediately recognized it as phishing will have felt the disappointment.
> the way to tell a phishing email from a non-phishing email depends much more on the metadata than the actual contents
this is true, but people are also much more likely to skip the rational analysis step if the contents elicit an emotional response. it's hard to do an effective test without some sort of emotional and/or time-sensitive call to action.
I certainly agree it is cruel to tease employees with a fake bonus. but if they turned around and actually paid a holiday bonus in the same amount, I would say no harm was done ultimately.
My guess is that the real story is that someone from above told the HR department about salary raises, but changed their mind after the mass mail was sent. So they needed some excuse and came up with this lame shit.
I was replying to greencore's theory that Godaddy sent the email and then changed their mind, and is using the phish test as a cover. If that had been the case, then the email would have been sent from godaddy.com not gocladdy.com
Not to be a total jerk, but has anyone told you that you have paranoid thoughts? This is obviously not the case. I am genuinely curious what might lead one to even be suspicious of this.