While this is an astonishingly large criminal heist, we should look at it from a business perspective. The largest take from a single bank sounds like it was around $10M. The first Russian bank I could find on Wikipedia, Alfa-Bank, had a net income in 2010 of $550M, meaning that if they were the ones hacked they would have lost about 2% of their annual PROFIT. What would the capital, operational, and efficiency cost of a major security overhaul be? Probably more than $10M. Moving to a new system like Qubes or even a more standard desktop Linux variant could very well cost them more than the losses from hacking.
Lots of industries just live with a certain degree of loss: retail in particular sees about 1.8% of inventory lost to "shrinkage", the polite term for shoplifting and employee theft. While stores will take steps to reduce their losses, the measures can't be so extravagant that they drive away customers (I stopped shopping at a drugstore that put deodorant behind plexiglass) or cost more than the problem itself (RFID trackers on every candy bar).
Given that perspective, I think we as technical professionals need to be a little more restrained in our recommendations. Enterprise decision makers are very receptive right now to security projects because of hacks like this one and the Sony breach, but we still have to speak to the whole of their concerns.
Agreed, I'm not saying these bank CIOs should do nothing, just trying to point out that the break-even point between security and other factors can come sooner than we as technical people might assume.
So what defenses should an organization employ to prevent these types of attacks?
From this non-technical article, it looks like they penetrated employees' computers and used their credentials, which makes sense because it's probably the weakest link.
It reminds me of the philosophy/motivation behind Qubes OS [1]: there is no server security without client security.
What are banks running on employee computers these days? I'm guessing Windows. Do they have anything beyond what typical corporate IT does to Windows machines (install virus checkers, auto updates, most users don't have root)?
Clearly that's not sufficient. It sounds like you want some kind of strict compartmentalization like Qubes. There's probably no reason that an e-mail client like Outlook needs to share any state with whatever app they use to manage accounts, besides perhaps a clipboard for cutting and pasting a tiny amount of info.
The machines probably need secure boot and attestation of the root file system state too. It's pretty bad that in this attack, and I think in the Anthem case as well, the attackers were inside the network for such a long period without detection.
I also remember a DEFCON talk where a penetration tester said the hardest site he ever worked on was where they had a strict "star" network topology. None of the computers in the enterprise could talk to each other or even see each other. All communication had to be proxied through a central hub, which would audit all the connections.
Do any banks do that now? Is there any reason they couldn't in practice? I imagine that there isn't really a need for two tellers in the same office to be sharing files directly with each other, let alone tellers in different offices. I've never worked at a bank, so I have no idea what their networks are like. Possibly there would be some uptime concerns with a centralized system like that.
I'm just brainstorming and wondering if anyone has direct work-related experience.
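To make the "central hub" idea concrete, here's a minimal Python sketch of what such an audit chokepoint could look like, assuming host firewalls already block direct peer-to-peer traffic. The service names, addresses, and port are made up for illustration; a real deployment would use an authenticated proxy, not this toy relay.

```python
# Toy "star topology" hub: every workstation connection is forced through this
# relay, which logs (audits) it and only forwards traffic to an allowlisted
# destination. Hypothetical example, not any bank's actual setup.
import socket
import threading
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

# Only these internal services may be reached; direct peer-to-peer traffic is
# blocked elsewhere (e.g. by host firewalls), so this list is the chokepoint.
ALLOWED_DESTINATIONS = {
    "core-banking": ("10.0.1.10", 8443),
    "email": ("10.0.1.20", 993),
}

def read_line(sock):
    # Read up to a newline one byte at a time so no payload bytes get lost in a buffer.
    buf = b""
    while not buf.endswith(b"\n") and len(buf) < 256:
        chunk = sock.recv(1)
        if not chunk:
            break
        buf += chunk
    return buf.strip().decode(errors="replace")

def pump(src, dst):
    # Copy bytes in one direction until either side closes.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

def handle(client, addr):
    # The first line from the client names the service it wants, e.g. "core-banking".
    wanted = read_line(client)
    if wanted not in ALLOWED_DESTINATIONS:
        logging.warning("DENY %s -> %s", addr, wanted)
        client.close()
        return
    logging.info("ALLOW %s -> %s", addr, wanted)
    upstream = socket.create_connection(ALLOWED_DESTINATIONS[wanted])
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    pump(upstream, client)

def main():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", 9000))
    server.listen()
    while True:
        client, addr = server.accept()
        threading.Thread(target=handle, args=(client, addr), daemon=True).start()

if __name__ == "__main__":
    main()
```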
I know it's silly to think that banks would be better than anyone else, but good lord, malware running on machines capable of transferring millions of dollars that's able to send out video feeds from the network without anyone noticing?! Your various IT/Security teams should be absolutely ashamed. And then the banks don't even have to stand up and admit their incompetence publicly; that's a total disgrace.
That's the state of corporate security I guess. I've dealt with corporate IT departments over the years where they put these "processes" in place to mitigate these security issues but it's all a load of rubbish. Filling in forms to tick boxes so that everyone can go home happy pretending there's security going on, when really their network is a leaky sieve.
At one point I saw a release by a 3rd party supplier to a large corporate system that included privilege escalation, blatantly, at the start of a T-SQL script. It was done because the IT department refused to carry out the action on request via the official channel but it was work that needed to be done to complete a project. The 3rd party knew the admins would just be running scripts as SA so they escalated their own account to do what they needed to do later.
I know it's silly to be so frustrated about it, but we've all dealt with crappy banking systems for years, with totally insane security measures; meanwhile hackers can just walk away with millions using a bit of malware.
A key differentiator for banks vs. many other service providers is that financial transfers can be reversed. Releases of information, however, cannot be.
So where a bank has a risk of an unauthorized financial transaction, there are multiple options to claw that back (or to shift the risk to other parties, notably merchants).
A disclosure, though, of account information is a different case, and here the results can be damaging to the banks and their customers. One instance I'm generally aware of is an increasing number of disclosures pertaining to offshore banking, many uncovered by the ICIJ (International Consortium of Investigative Journalists: http://www.icij.org/) and the Guardian. Again, the case involves banks, but it's rather more difficult to reverse the damage when it's your client list and balances, or your communications, that have spilled.
That's an interesting point, though it's a pretty thin silver lining on a very dark cloud. Being able to undo the operation doesn't really soften the blow of having hackers inside your bank sending outgoing video feeds of employees' screens.
Do you think they'll be getting back the money in this case? Presumably the people involved know enough about the operations to have moved the cash to somewhere out of reach before being exposed.
Fair point. As the responses note, the damage here is usually limited -- ATMs carry only so much cash each, and (usually) only dispense up to a few hundred dollars (or the equivalent) at a time. There have been some exceptions where an exploit is found and exploited en masse at many locations in a short period. That takes a high level of organization though.
> On Feb. 19, cashing crews were in place at A.T.M.'s across Manhattan and in two dozen other countries ... Starting at 3 p.m., the crews made 36,000 transactions and withdrew about $40 million from machines in the various countries in about 10 hours
I stand corrected. That's quite impressive, and amazing that they had that many people involved and nobody tipped anyone off.
So are you saying that bank IT is no better than corporate IT? They don't have any special software or policies? (like the star network thing I mentioned)
I would honestly expect it to be a bit better than average. I suppose there are many different types of banks and they all vary. Let's just consider your chain banks like Wells Fargo or BoA, since I'm sure somebody around here has worked at one of those places.
Across the industry it's generally better than corporate IT. A lot better. However, it varies widely by sector.
Companies with trading floors or that interact regularly with traders have the best IT practices in the industry.
Banking conglomerates are kind of messy: they combine the IT operations of each business, never change anything, and the systems don't cooperate.
I remember one such company's backup procedures. At that point they were made up of 13 separate large (regional/national) banks. They were trying to standardize the backup procedures across all the banks and run them from a centralized system. By the time I got there, the nightly backup process had been failing every single day for over a year and a half. I didn't even get a computer or working logins to be able to do any work for nearly a month. Anyway, getting it to work involved getting the people responsible for backups at each individual bank's IT group to get their system to cooperate. All of them knew that this would be putting them out of a job at the completion of the project, so there was tons of resistance and it usually took a week of calling people's bosses to get the work done. This was also in the middle of forced relocations for most of them. Most of the folks responsible for the work quit. It was really ugly.
To be honest, I have no experience here, so I can't say. If the description in the article is at all accurate, though, they're in a pretty bad way:
"The cybercriminals sent their victims infected emails — a news clip or message that appeared to come from a colleague — as bait. When the bank employees clicked on the email, they inadvertently downloaded malicious code. That allowed the hackers to crawl across a bank’s network"
There must be plenty of people on HN with experience in this field, so it'll be interesting to hear their take on it.
My (very ranting/rambling) point was that I've seen other large organisations pretending to do security (and probably believing it themselves), where it's really just security theatre.
>Filling in forms to tick boxes so that everyone can go home happy pretending there's security going on, when really their network is a leaky sieve.
I saw a DefCon video where the guys were talking about something similar. Lots of small banks in the US use 3rd party services for their banking software. One of them had horrendous security and so some hackers made off with several million dollars before anyone found out.
Six years ago I was an intern at a Wall Street firm for a year. The firm that I worked for used an account system that was built in the early 90s and relied on all employees learning special terminal commands to access anything. I can't really go into detail for fear of being sued, but suffice to say the system was archaic. I was amazed that a multi billion dollar company relied so heavily on and invested so little in something essential to the business.
The IT department seemed to use the following logic to justify it: the system served its purpose, the legacy employees already knew how to use it, and the developers who made it were long gone, so it was cheaper and easier to just leave it be. While my firm had plenty of developers who could rewrite the system from scratch, their attention was devoted solely to money-making endeavors like trading platforms and client-facing projects.
As for the "Enterprise Architecture Group" (ie the developer department) that I interned in, the big problem was the heavy reliance on third party development companies. While the firm wanted to hire more developers, simply put very few developers want to work for banks (it's funny though that people in finance would have killed to work at the firm). It would take 6 months to a year on average to fill a developer position and they would have to pay a big premium over the average dev salary with a large yearly bonus.
In order to keep up with all the various projects, they would pay third-party development/consulting companies millions to come in and create apps. While this allowed the firm to get the necessary apps "done", it created the craziest spaghetti architecture you could ever imagine. All these different apps were built by different companies using different languages/platforms/technologies, then thrown together in a big mishmash of iframes and duct tape. The fact that any of them were able to communicate with each other at all was a miracle. I don't actually blame the developers themselves for this; they would constantly voice their concerns while the completely clueless department head/"architects"/project managers/business analysts would shoot them down. They would say things like "I understand your concerns, but Super Consultancy X says that they would do it and it will only take 12 to 18 months!! They are even available to help support the app once it's finished!!". Security and user experience were not even on the company's radar, only making money.
So what defenses should an organization employ to prevent these types of attacks?
Infrastructure architect at a major Bitcoin exchange here.
It's about defense in depth. Processes. An architecture-level stance like "do not trust the client, the server, the network, the data center, the hardware provider, or any particular stage within those elements". Each element validates the others. An alarm raised by inappropriate behavior at any point will shut down an entire instance, cell, or data center before allowing an attacker a foothold.
The only way to realistically take such a stance without going broke or becoming functionally paralyzed is infrastructure-level automation beyond what is common in the industry. Hence the need for meaningful cloud infrastructure management systems spanning private and arbitrary third-party infrastructure. Docker-level stuff gets you about halfway; what we really need is a few degrees of abstraction beyond that.
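To illustrate the "any element can raise an alarm and pull the plug" stance, here's a toy Python sketch. The quarantine hook and the specific checks are hypothetical; in a real setup the hook would call whatever orchestration layer you use to drain and destroy the instance.

```python
# Toy sketch of "an alarm at any point shuts down the instance/cell".
# The quarantine hook is a placeholder for real infrastructure automation.
import logging

logging.basicConfig(level=logging.WARNING)

class Quarantined(Exception):
    pass

def quarantine(instance_id, reason):
    # Placeholder: in practice, drain traffic and destroy the instance here.
    logging.warning("quarantining %s: %s", instance_id, reason)
    raise Quarantined(reason)

def validate_withdrawal(instance_id, amount, daily_limit, signature_ok):
    # Server-side checks that do not trust the client that submitted the request.
    if not signature_ok:
        quarantine(instance_id, "request failed signature check")
    if amount > daily_limit:
        quarantine(instance_id, f"amount {amount} exceeds limit {daily_limit}")

# Example: a forged request trips the alarm and takes the cell out of rotation.
try:
    validate_withdrawal("cell-7", amount=9_000, daily_limit=5_000, signature_ok=False)
except Quarantined:
    pass
```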
So, I'm specifically asking about protecting employees' machines. My reading of the article is that the attackers got a foothold on employees' machines and credentials, and just piggybacked their malicious transactions along with normal transactions.
In that case, it doesn't matter how much security you have in your data center. Employees need access to central systems to do their job, so client security is paramount.
For instance, for a bank or Bitcoin exchange, I think it's relevant which client operating systems can access your crown jewels. If you're just using Mac or Windows with antivirus or whatever, there's already a pretty low upper bound on your client security and thus your overall system security.
What I'm wondering is whether anybody is deploying some kind of custom client OS similar in spirit to Qubes OS, or a build of Chromium OS or Android, which have application sandboxing beyond what stock Linux, Mac, or Windows offer.
Also, I would imagine that each teller has their own credentials, and the bank should have policies about the transaction rate / total for a single teller. It sounds like the attackers would have had to compromise multiple employee accounts to steal that much money. So you also want to protect employees' machines from each other as much as possible (not just from "outside" attackers).
I'm guessing that a Bitcoin Exchange doesn't have that many employees, since the whole industry is new. You probably have people just accessing stuff with their personal MacBooks or whatever, and that's fine for now (there are bigger risks). But when you start to have 100, 1000, 10,000 employees capable of doing financial damage, then I think this type of thing will start to matter more.
EDIT: Actually, I remember one large deposit I made required three people at a bank to approve it. The teller said, "Wait, my boss has to approve this." Then the boss said, "Wait, my boss has to approve this." So they are probably requiring three sets of credentials, at a sufficient employee level, to authorize large transactions. I take it the attackers would have had to target employees with those credentials.
But that can cause problems for customers -- e.g. if the branch manager isn't around, you might not be able to do what you wanted. To some degree, they are using meat space protocols to mitigate risk that their software systems can't handle.
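For what it's worth, here's a rough Python sketch of what server-side per-teller limits plus multi-person approval could look like. The thresholds, roles, and IDs are invented for illustration; the point is just that the checks live on the server, where a compromised workstation can't bypass them.

```python
# Rough sketch of server-side controls on teller-initiated transfers:
# per-teller daily totals plus multi-person approval above a threshold.
# All numbers and role names are made up for illustration.
from dataclasses import dataclass, field

LARGE_TRANSFER_THRESHOLD = 10_000          # above this, extra approvals are needed
REQUIRED_APPROVER_ROLES = {"teller", "branch_manager", "regional_manager"}
PER_TELLER_DAILY_LIMIT = 50_000

@dataclass
class Transfer:
    teller_id: str
    amount: int
    approvals: dict = field(default_factory=dict)   # role -> employee_id

daily_totals = {}                                   # teller_id -> total so far today

def authorize(t: Transfer) -> bool:
    # Even a fully compromised teller workstation cannot exceed these checks,
    # because they run on the server, not the client.
    new_total = daily_totals.get(t.teller_id, 0) + t.amount
    if new_total > PER_TELLER_DAILY_LIMIT:
        return False
    if t.amount > LARGE_TRANSFER_THRESHOLD:
        # Require a distinct person in each required role to have signed off.
        if set(t.approvals) != REQUIRED_APPROVER_ROLES:
            return False
        if len(set(t.approvals.values())) != len(REQUIRED_APPROVER_ROLES):
            return False
    daily_totals[t.teller_id] = new_total
    return True

# A teller acting alone (or malware using their credentials) is capped;
# a large transfer needs three different people to approve it.
assert authorize(Transfer("t-42", 2_000, {"teller": "t-42"}))
assert not authorize(Transfer("t-42", 60_000, {"teller": "t-42"}))
assert authorize(Transfer("t-07", 25_000,
                          {"teller": "t-07", "branch_manager": "m-3", "regional_manager": "r-1"}))
```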
Even widespread two-factor auth would mitigate a lot of this. Banks are often quite backward because there are few software suppliers, and it is an industry that took to computing early, so there is a lot of legacy. But they vary a lot. The implication of the story is that these were perhaps banks in smaller countries; the banks that got defrauded recently in another large case, with cashpoint withdrawals from fake cards, were Middle Eastern. You have a lot of choice of banks, so choose the weakest...
I don't believe that's true in this case or in the case of many client attacks.
If you have two factor auth, the employee will go through the process since they need it to do their job for 8 hours a day. Then they will have credentials on their machine (in memory or wherever).
Any attacker sitting on the machine can use those same credentials. Whether you have two factor auth or not doesn't matter.
The point is that you need to prevent the client from getting infected in the first place (which isn't easy if you have 10,000+ employees). As mentioned, if the state of the art is Windows or Mac + antivirus, then your upper bound on security is pretty low.
I recommend reading "Kingpin", a recent book about Max Butler. There's a nice story where he is hired for a penetration test. He guarantees 100% success rate, since he's always been able to get in.
He was coming out of jail and his skills were perhaps rusty, and he couldn't get into this particular server.
So what he did was hack an employee's home computer, steal their VPN credentials, and hack the company server with internal access. Apparently the company was angry that he did this, but it pretty vividly illustrates the point.
I recall that Kevin Mitnick also used employee VPN attacks. Just because you have hardened Linux, regular updates, jailed processes, etc. on your server doesn't mean it's secure. Employees have to access systems to work, so that is often the weakest link. It's not surprising that this is how major banks got hacked and relieved of millions of dollars.
Hi, IT architect with a history at several major financial institutions here.
Defence in depth is a placebo. Separation of concerns, principle of least privilege, honeypots, SIEM, file integrity monitoring, host intrusion detection, IDS/IPS on all your ingress and egress points, WAF, content filtering, and a responsive and empowered SOC capable of acting on audit events will get you halfway to not showing up on the front page of the NY Times.
The problem is that it takes money to keep money safe, and too much security is often not secure at all, so putting everything together in a way that doesn't motivate your users to find new and exciting ways to bypass your controls is an art in itself.
Would love to discuss some of these things with you, any chance I can interview you for my blog?
Agreed, there's no silver bullet. However, I don't think spending money to feel better is much of an alternative. It feels to me as if internal process design in security-conscious organizations is probably more important than actual systems design... which could be summarized as knowing when not to take shortcuts. Please do get in touch, I'd be happy to chat. Email in profile.
The best defense an organization can employ is to make departments/managers/people economically liable. This results in insurance being bought, budgets assigned to risk management, and practical prevention mechanisms being implemented.
No organization likes being attacked, but any defensive measure that costs money will always be balanced against the potential loss, the risk, and the convenience of employees. If the risk feels low, the potential loss minimal (worst case, the government will intervene), and the inconvenience to employees of effective security schemes high, then such efforts tend not to be made.
I'd like to see the sysadmin or programmer that is willing to take the loss if someone hacks the network (or an app) of his employer and steals a few hundred million dollars.
But we are not liable as long as we follow standards, e.g. building codes. And it's easily verifiable by the government, the employer and the engineer himself whether the standards are being complied with.
Until you have similar standards for software development, I cannot see how such liability shift could work. This is one of the reasons I tend to avoid using the phrase software engineering. It's so different from traditional engineering that it feels incorrect to put it in the same category.
It's not enough to put standards into software development. Users can misuse software regardless of how well it's written, the same as if you build a bridge and users overload it.
The weakest link was that the computer with access to $10 million+ had access to the general web and was running a general purpose operating system at all.
You don't need Qubes to secure this situation. You could use an iPad/Chromebook or a filtering proxy (whitelisted websites) and either would be sufficient.
That seems to be the fundamental engineering flaw here. Also, their email system shouldn't allow executable attachments. The last company I worked at completely stopped all such virus infections by killing all executable attachments.
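As a sketch of the "kill executable attachments" policy, here's what a minimal gateway filter could look like using Python's standard email library. The blocked-extension list is only an example; real gateways also look inside archives and at macro-bearing documents.

```python
# Sketch of an inbound-mail filter that drops executable attachments before
# delivery. The blocked-extension list is only an example; real gateways also
# inspect file contents, archives, and macro-bearing documents.
from email import policy
from email.parser import BytesParser

BLOCKED_EXTENSIONS = {".exe", ".scr", ".js", ".jar", ".bat", ".cmd", ".vbs", ".msi"}

def strip_executables(raw_message: bytes) -> bytes:
    msg = BytesParser(policy=policy.default).parsebytes(raw_message)
    for part in msg.walk():
        filename = part.get_filename()
        if not filename:
            continue
        if any(filename.lower().endswith(ext) for ext in BLOCKED_EXTENSIONS):
            # Replace the payload with a notice instead of delivering the file.
            part.set_content("[attachment removed by mail gateway]")
    return msg.as_bytes()
```

A mail gateway or milter hook would call strip_executables on each inbound message before it reaches the employee's inbox.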
You are never safe from targeted attacks but that doesn't mean you should run exotic architectures or give up. There's no reason not to stop carpet bomb attacks. Most of the big known breaches have resulted from those.
Don't run Outlook and don't autorun USB. That should stop most automated attacks, including all the big known ones that breached large companies such as RSA and Google.
To stop the rest, don't surf from sensitive machines, and require two factor auth such as Yubikeys or RSA dongles to log in to them.
Compartmentalize sensitive information on separate machines and networks, and externalise sanity checks of data transactions where possible.
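On the two-factor point, here's a minimal TOTP-style verifier (RFC 6238) as an illustration of the "something you have" factor. Note that Yubikeys in OTP or FIDO mode and RSA dongles use different protocols; this is just a sketch of the general idea, not their implementation.

```python
# Minimal TOTP verifier (RFC 6238 style), illustrating the "something you have"
# factor behind dongle-style 2FA. Yubikey OTP/FIDO protocols differ; this is
# only an illustration.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, at=None):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((at if at is not None else time.time()) // timestep)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret_b32, submitted, window=1):
    # Accept the current code plus/minus one timestep to allow for clock drift.
    now = time.time()
    return any(hmac.compare_digest(totp(secret_b32, at=now + step * 30), submitted)
               for step in range(-window, window + 1))

# Example with a throwaway base32 secret (never hard-code real secrets).
secret = "JBSWY3DPEHPK3PXP"
print(totp(secret), verify(secret, totp(secret)))
```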
PLEASE do not link to Qubes in any security-related discussion; the devs are known for their incompetence and for making some rather hilarious public claims[1].
Not only that, Qubes uses a vulnerable Git version from several years ago, so practically anyone could backdoor it if they cared.
I laugh whenever someone tells me that they never buy anything over the internet. Their reasoning is that they're afraid of hackers going after online transactions. It seems to me that most of the serious security problems reside in the places that keep your money or access to your money, such as banks, credit cards, or even businesses such as Anthem, etc.
Another problem that I've seen from banks is that they all use Microsoft Windows for most of their employees. That's got to be the worst OS in terms of security. Not saying that you can't break into other systems, but it is so much easier under Windows.
The scope of this is pretty stunning, but if you're going to make a billion dollars you can probably invest 100M or so in developing an organization that can pull it off.
I wonder when we'll see the equivalent of VC money in these sorts of enterprises.
It really only makes sense for organized crime to manage this within their own ranks. You already have trustworthy people, and people with the relevant skills and connections. How would you know if the "startup" you're funding isn't undercover police? If an upstart appears, just "convince" them to share in the profits.
I'm going to be a little pedantic, but since this is an article about security (and your point is also about security, of a marginally different type):
> You already have trustworthy people, and people with the relevant skills and connections.
In security, there is a distinction between 'trustworthy' and 'trusted'. Organized crime definitely has trusted personnel, but, 'trustworthy'... maybe not.
So who ends up footing the bill? Does the bank just write it off as a cost of doing business? Also aren't financial transactions reversible among banks?
While I've never worked in banking/financial environments I do know of people who have; they often had two workstations (one for the 'public' network, the other for the systems) and weren't allowed to use software like Synergy to share the keyboard and mouse. I guess not every company does stuff like that, though.
It's nearly impossible to isolate banking system networks these days. As an example, ATMs run transactions through public networks. Customers access their accounts via public networks, etc. Further, network isolation as a primary control fails time and time again.
It's best to focus on the end points and beef up security there. Focus primary security controls on the application and not the perimeter. One of my biggest frustrations as a security professional is walking into an environment where systems which must be highly secure are accessed via a simple username & password. All banking applications, at a minimum, should require x.509 client auth for employees, utilizing a private key stored on a device which is not permanently attached to the system. Monitoring solutions should then be in place to track authentication actions and provide that visibility to security staff and the employees themselves. That's a pretty basic first step and one I rarely see in practice.

Next, rather than isolating networks, start paying attention to the traffic on the networks and limit transactions to known good entities. After that, organizations need to consider their customer environment security and how they may be inadvertently compromising it. It's amazing how many times I've gone to a public-facing banking portal and spotted third-party JavaScript loaded within the same origin context as an authentication form. One bank I looked at a while back actually had an advertisement from a third-party ad network on a page where they asked for credentials! That's pretty much asking for their customers, and thus their accounts, to be compromised.
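As a sketch of enforcing x.509 client auth at the application tier, here's roughly what it looks like with Python's standard ssl module. The certificate paths and port are placeholders; in practice the employee's private key would live on a smartcard or USB token rather than on disk.

```python
# Sketch of an application endpoint that refuses any connection lacking a valid
# x.509 client certificate issued by the internal CA. Paths and port are
# placeholders for illustration only.
import http.server
import ssl

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # getpeercert() is only populated because we demanded a client cert below.
        cert = self.connection.getpeercert()
        subject = dict(x[0] for x in cert["subject"])
        body = f"hello {subject.get('commonName', 'unknown')}\n".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("server.crt", "server.key")     # placeholder paths
context.load_verify_locations("internal-ca.pem")        # employees' issuing CA
context.verify_mode = ssl.CERT_REQUIRED                  # no client cert, no connection

server = http.server.HTTPServer(("0.0.0.0", 8443), Handler)
server.socket = context.wrap_socket(server.socket, server_side=True)
server.serve_forever()
```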
"It's best to focus on the end points and beef up security there"
Not the way I'd do it. Defence in depth means securing everything. Starting with the perimeter, working inwards to individual apps - on both clients and servers. Every resource needs to be secured. That means spending cash, and the amount of cash that should be spent should be proportionate to the value of the asset being protected. If you have a server application or service, put an application firewall in front of it, so that both internal and external access goes through it. Don't just write a threat model, document the threat tree. Don't trust your employees, your software, hardware or building security. And don't trust the bosses either.
It's analogous to having a bodyguard. If you're in the bedroom and leave your bodyguard in the kitchen for a private conversation, the bodyguard and his big six gun are going to be of absolutely zero use when ninjas come crashing through the bedroom window.
To run with your analogy a bit, I occasionally see CEO types with "bodyguards". Because the kidnapping attempt is theoretical and hasn't happened for ten years, the bodyguard is carrying the luggage or opening the doors or answering the phone.
The analogy is fairly clear: you can spend the money on security in depth, but humans tend to repurpose those layers for other things eventually. Banks have been around long enough that all their bodyguards are now bellboys.
The problem is, most organizations start at the network rather than focusing on the application tier. In the development of applications, they should be designed to work safely within a very hostile environment. Far too often they are not.
Now, I feel a discussion like this one would be the perfect place for me to introduce myself and... try to sell my services but I think I'm too late to the party so I'll keep it short.
Banks are the archetype of the company that suffers through technology. They make huge investments in IT year on year, but often they end up buying overly complex solutions from 1MM consultancy companies that never get fully implemented and, worse, cause high levels of frustration that then backfire onto projects that could actually make a difference.
With every department (or vertical or region) running their own IT, many of the core functions being outsourced offshore, and innovation (e.g. BYOD, shadow IT) being ignored, some pretty serious gaps open up in the way security is handled. Despite best intentions, processes, or even regulatory compliance, we end up with local desktop machines having direct and unrestricted access to sensitive systems _and_ the internet.
Of course, all this is very nice, but at the end of the day, if someone can just walk into your office to "fix your computer" and no one bothers to check their credentials... there's only so much one can do for you.
> But the largest sums were stolen by hacking into a bank’s accounting systems and briefly manipulating account balances. Using the access gained by impersonating the banking officers, the criminals first would inflate a balance — for example, an account with $1,000 would be altered to show $10,000. Then $9,000 would be transferred outside the bank. The actual account holder would not suspect a problem, and it would take the bank some time to figure out what had happened.
Sounds like a badly designed system. A bookkeeping system should normally only accept additions and subtractions, not allow direct writes to the balance, and those additions and subtractions should be versioned. It might take a lot of resources and computing power to track that many accounts, but in my opinion, if Google, the NSA, and Amazon have big datacenters, banks should too. I don't think they really have the proper infrastructure to secure something as important as account balances. I even think the government should invest money in securing those systems, since they're a nerve center of the economy.
So either use up-to-date computing methods, or hire more accountants and use paper instead.
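To make the "additions and subtractions only, versioned" idea concrete, here's a toy Python sketch of an append-only double-entry journal. Balances are derived from history rather than stored, and an entry that doesn't balance (like a unilateral +9,000 to one account) is rejected outright. All names and amounts are invented.

```python
# Toy append-only ledger: balances are never written directly, only derived
# from an immutable, versioned journal, and every journal entry must balance
# (debits == credits). A unilateral "+9,000 to account X" is simply rejected.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Posting:
    account: str
    amount: int          # positive = credit, negative = debit

@dataclass(frozen=True)
class JournalEntry:
    entry_id: int
    timestamp: str
    postings: tuple      # immutable once recorded

class Ledger:
    def __init__(self):
        self._journal = []   # append-only; nothing is ever edited in place

    def post(self, postings):
        if sum(p.amount for p in postings) != 0:
            raise ValueError("entry does not balance: debits must equal credits")
        entry = JournalEntry(
            entry_id=len(self._journal) + 1,
            timestamp=datetime.now(timezone.utc).isoformat(),
            postings=tuple(postings),
        )
        self._journal.append(entry)
        return entry

    def balance(self, account):
        # Balances are always recomputed from history, never stored and mutated.
        return sum(p.amount for e in self._journal for p in e.postings
                   if p.account == account)

ledger = Ledger()
# Legitimate transfer: money leaves one account and arrives in another.
ledger.post([Posting("customer:alice", -5_000), Posting("customer:bob", +5_000)])
# The attack from the article: inflate a balance out of thin air. Rejected.
try:
    ledger.post([Posting("customer:alice", +9_000)])
except ValueError as e:
    print("rejected:", e)
print(ledger.balance("customer:alice"))   # -5000
```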
"But the largest sums were stolen by hacking into a bank’s accounting systems and briefly manipulating account balances. Using the access gained by impersonating the banking officers, the criminals first would inflate a balance — for example, an account with $1,000 would be altered to show $10,000. Then $9,000 would be transferred outside the bank. The actual account holder would not suspect a problem, and it would take the bank some time to figure out what had happened."
A naive thought... if they leave the original amount of money in the account, should it be seen as just "illegal inflation" rather than theft? Someone made a gain, but nobody made a loss in any case. Banks have always created more liquidity officially through loans, except that it's legal.
As far as I understand it, the money that was transferred out did not come from nowhere... Ultimately it was the bank's money.
Edit: Meant to also mention that the whole making-it-look-like-an-account-had-more-money concept was about making the fact that they were taking the bank's money harder to notice. It was not actually creating money that did not exist before.
Not sure; double-entry bookkeeping is apparently not baked in everywhere, so if you can increase an account balance directly it might not be picked up. Whether this is creating money, well, that's another question.
Yes, that's absolutely fascinating. The banks wouldn't catch on until up to 10 hours later. Is it possible they're only validating their database every 10 hours?! Shouldn't the database reject the transaction instantaneously?
That's certainly not what went on here, from the description. Your assertion is truthy, in that, yes, the /practice/ of fractional reserve banking increases the money supply.
However, to do the equivalent in the way the example described would have required mocking up a loan (asset) which would be offset by a liability (deposit account balance), and a reserve amount (loss allowance and bank capital) behind it. In that manner you create 10,000 in an account which may be withdrawn, even though the "real" money in the bank is only 1,000. This happens every day, but not by hackers, just by run-of-the-mill self-dealers and fraudsters. At the end of the day the 9,000 comes out of the capital of the bank (or the deposit insurance if things get dire), but it can take months or years ... See the S&L crisis for more.
But hackers don't give a shit about the books seeming to balance for years to snooker regulators. They just want to withdraw the $9,000. The books obviously won't reconcile end of day, but who cares?
So yeah, fractional reserve banking is interesting to know about and not without its hazards, but this exploit could have happened against a full reserve depository institution just as well. The fractional reserve thing is spurious.
Given the amount of money stolen, I wonder if bribing an insider was involved. That wouldn't be surprising to me, given that most of this was in Russia.