The article says the ransomware affects even patched Windows boxes. Perhaps what you mean to say is, "Great. Maybe we can finally put a price on using Windows."
What you are suggesting has grave implications for those who cannot or do not want to mess with formal verification techniques. This is the future of computing, and a lot of people will be left behind once it catches on.
Are you sure about that? You do know most organizations will implement that as a huge amount of bureaucracy for every commit, rather than proper man-hours of security-oriented development.
Only because most organizations don't know how to be effective at security.
It's not hard. You don't actually have to change much. You just have to schedule regular pentests, ideally every couple weeks.
Pentests protect everyone because it's our job to worry about all of the security flaws that you can't possibly be aware of in your normal day-to-day development cycle. There's just too much for any organization to know about except security companies. This way you can focus on development and we can focus on pointing out how to fix what's broken.
Pentests aren't a magic bullet either. You can easily find a consultant who isn't going to rip you a new one.
Security is a mindset. Any "checklist" approach will eventually devolve into ass-covering by an organization that is not internally motivated to run a tight ship. Legitimate variances will be hassled to no end, while actual security vulnerabilities will be ignored.
In the real world, one of the only reasons people get pentests is because another company is forcing them to. That results in a document saying company B is secure.
This is a very effective approach to cutting through ass-covering. Company B has to fix the security problems uncovered in the pentest. There is no other option. And I've seen it take products from "SQL injection by typing an apostrophe" to "It'd be very difficult to exploit this app."
If that's not proof that pentests are effective, then I'm not sure what would be.
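For anyone who hasn't seen the "apostrophe test" in action, here's a minimal sketch (the table and column names are made up for illustration): pasting user input straight into the SQL text lets an apostrophe change the query's meaning, while a parameterized query treats the same input as plain data.

    import sqlite3

    # Throwaway in-memory table, purely to make the apostrophe failure concrete.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 0)")

    user_input = "alice' OR '1'='1"  # the "typing an apostrophe" test

    # Vulnerable: the input is spliced into the SQL text, so the apostrophe
    # rewrites the query and the WHERE clause matches every row.
    query = "SELECT * FROM users WHERE name = '" + user_input + "'"
    print(conn.execute(query).fetchall())  # [('alice', 0)] -- every row comes back

    # Fixed: a parameterized query treats the input as data, never as SQL.
    print(conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall())  # []

The parameterized version is the whole fix; no escaping heroics required.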
We like to say that security is a mindset, but developers have way too much on their mind to be aware of every possible security vector. It's easier and more effective to punt and let us worry about it instead.
There's different levels of penetration testing too. I worked at a SaaS startup, and when we got our first big customer they demanded we get a third party to run a pen test on us. They basically ran their script and gave us a report. There might have been some minimal back and forth about false positives, but that was about it. That's better than nothing, but may not be what some of the more technically/security-minded folks here would consider a real pen test.
It's exactly the same as physical security. You build fences and buy locks. You pay people to keep an eye on things. You take insurance to cover the rest of the risk.
Nothing hard, no new inventions required. It just takes some attention and cash. It's part of the cost of being in business.
Wait, the hard part of information security is that it has to be built in everywhere, since everything is connected and so everything is a potential attack surface.
It's not impossible but it requires a somewhat universal attitude change.
I want to agree with you in principle, but in practice it's not possible to be secure with just an attitude change. The attack surfaces have grown too large. Keeping track of all possible vectors is a full-time job in itself. You either need a dedicated security person or regular pentests. And honestly, regular pentests are probably more effective.
It's a positive statement though: it is possible to be constantly secure if you just get a pentest every few weeks. Big companies can even afford to make it a requirement of their release cycle.
> Big companies can even afford to make it a requirement of their release cycle.
Oh man. I have a peer who works for a very large international company. They require pentests in their release cycle. What could go wrong?
Turns out that pentesting isn't the final step of their release process. They tag a release candidate (e.g. v5.7.0-rc), send that build to the pentesters, then fix other integration and user-acceptance bugs while the pentesters are working. The pentesters may greenlight v5.7.0-rc when it's really v5.7.3-rc that's shipping, and the pentesters are none the wiser.
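If anyone's wondering what a fix could look like, here's a minimal, hypothetical sketch of a release gate (the file names and arguments are invented, not anything that company actually runs): record a hash of the exact build handed to the pentesters, and refuse to ship any artifact whose hash doesn't match.

    import hashlib
    import sys

    def sha256_of(path):
        # Hash the artifact so "what was pentested" and "what ships" are directly comparable.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Usage: python release_gate.py <digest recorded at pentest handoff> <artifact being shipped>
    pentested_digest, release_artifact = sys.argv[1], sys.argv[2]

    if sha256_of(release_artifact) != pentested_digest:
        sys.exit("Release blocked: this build is not the one the pentesters signed off on.")
    print("OK: shipping the exact build that was pentested.")

It doesn't make the pentest itself any better, but at least the greenlight applies to the bits that actually ship.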
Attitude change in the sense of refusing to allow inherently insecure architectures - management always moving the company towards secure-on-principle architectures (not that I'm qualified to say if it's a good example, but Google's BeyondCorp aims to make everything secure on principle, meaning not leaky on principle). That, added to any pentesting or other necessary immediate security measures.
The impression I have is that today's event was the result of a lot of companies allowing insecure-on-principle architectures: a zillion apps, each with their own update structure (a random Ukrainian enterprise app supplier gets penetrated and the whole world goes down). A pentester might never be able to find that vector until that app supplier leaves their door open or someone happens to find out about them.
And people skilled at picking the skilled people and a willingness to actually do what the skilled people say... when those skilled people aren't necessarily the same as the managers shouting managementese...
And this also collides with the willingness to do anything to save a couple of dollars; once that dictate isn't flowing through every ounce of the company's blood, who knows what will happen.
Pen-tests show the presence of vulnerabilities, not their absence.
To make secure systems, we need to take the (very) difficult road of building our systems from the bottom up, proving the absence of vulnerabilities, and defining the boundaries of safe operation.
What I really want to see is security being integrated into the development process as a conscious tradeoff teams have to make.
When a new feature is proposed, it's rare to hear someone object on the grounds that it could potentially add new vulnerabilities, but in the long run an approach that recognizes and considers those risks would be beneficial.
At the same time, this is incredibly hard to do - managers celebrate employees who develop things that look cool and awesome, not employees who can mitigate risk and manage security effectively (hopefully this changes, but I can't imagine that many unaffected CEOs are calling up their sysadmins right now and congratulating them on their diligence in making sure all their machines are patched).
Definitely a problem. People (incorrectly) equate vulnerability scanning with pen testing. Vuln scanning is often a component of a pen test, but we do a bad job explaining the distinction. A pen test should attempt to use the app(s), and maybe test the people and process, not just profile the software versions and complain that they are out of date or misconfigured.