I'd also highly recommend his 'Silence on the Wire' book (http://lcamtuf.coredump.cx/silence.shtml); it's a really readable, full-stack overview of potential security problems.
In fact, I’m going to dig my copy out and read it again.
That's a pretty horrible indictment of WebInspect, because Skipfish virtually never finds anything for us (we ban scanners on our teams, but I like Zalewski and tend to run Skipfish just for kicks).
I suppose it's somewhat of a backhanded compliment. I honestly wouldn't pay money for the results I get from WebInspect (we have licenses for it, so we use it).
I swear that WebInspect has gotten noticeably worse the past year and a half or so. Prior to that, it would at least occasionally find something interesting; but I honestly can't remember the last time it found something that wasn't a false positive.
As for Skipfish, in each of the past three applications I've run it against, it's found at least one interesting thing (which is high praise for an automated tool in our industry).
Any specific reasons you don't use them? It's a pretty interesting view coming from a security guy, really... most of the ones I've heard before say something like "it doesn't hurt to leave it running while we do our stuff manually - sometimes it works".
Also, did you mean fuzzers in general or only web scanners?
Just scanners. Everyone uses Burp's fuzzer. And in the rare cases we end up doing network pentests, we will use Nessus.
Scanners make testers stupider. Even if you are conscientious about using them responsibly†, they still work to turn off the part of your brain that thinks about the kinds of flaws they do a good job of detecting. If you say you'll only run them at the end of an engagement to see if there's anything you missed, now you're working with a safety net.
† And we've worked on plenty of projects that had previous runs from teams that weren't responsible about scanners, with predictably horrific results.
We recently partnered with another assessment firm to handle overflow (something not one person in our group was happy about, but we spent the entirety of 2010 overworked), and in the past couple weeks I've had two issues come to me where this firm submitted a clean report with no findings.
They asked me to take a quick look at the applications (as it's a somewhat rare occurrence to not have a single finding), and I immediately turned up a bunch of issues. Upon further inspection, it turns out this company is just running WebInspect, apparently without any actual validation or manual testing.
There is at least one other company that has publicly banned web scanners. My take is, you should work for/retain companies that refuse to use scanners, and, when possible, avoid using companies that mention using them.
Or move to a better company.
I tried running away from security in 1998 and found that there's charlatanism anywhere you go. Try being a baker; no, wait, there are well-marketed charlatan bakers, too!
I figure it's a new year and maybe I'll finally man up and find someplace more fulfilling.
My issue is that the pattern I find tends to be:
1. Go work with a bunch of really smart, awesome folks doing cool work.
2. Group is successful, gets bought by large monolith.
3. It gets shitty, everyone leaves, starts over somewhere new.
4. Repeat steps 1-3
I am literally on the fourth iteration of this process.
A friend of mine, who I'd have loved to work with, ended up at a security company whose sales pitch to him included "we buy our consultants whatever tools they want, so you can have WebInspect and AppScan". I was unable to convince him that this was a "run, don't walk" red flag.
That's a bit like liking both "country" and "western".
Don't get me started on paid overtime.
In general I've never complained too much about hours. I've always found that this type of work is cyclical as far as how busy you are on any particular test.
For as long as I can remember, the 4th quarter has been crunch time (with companies having to spend their budgets before the end of the year), so you'd bust your ass for a few months and then things would lull a bit in January (since budgets weren't allocated yet). For some reason, this past year the entire year was like that 4th-quarter sprint.
There's a common industry practice of double- and triple-booking people on projects, to the point where schedules can only possibly be kept if people regularly work 10-12 hour days (but, of course, bill 16-32 hours each day). Consultants are kept happy --- indeed, many are thrilled with the arrangement --- because the consultancies pay subcontractor-rate overtime.
This is predictably disastrous for clients.
It blows my mind, really. When you get a contract to assess an application for security flaws, the client presumes you are going to find the stuff that needs to get found; they trust that once you're done and they fix your findings, the app is safe to deploy. Overbooking consultants is like being an auto shop that fucks up brake jobs or an electrician that leaves bare sparking wires in the basement.
I hate being double and triple booked because I worry that I'm going to miss something that I wouldn't otherwise. I mind slightly less if the projects are of dissimilar types (like an app test and a network test), but I feel like it's just an accident waiting to happen.
I worked 16 hour days for the majority of 2010 (worked mind you, not billed) with a huge portion of those on opposing testing schedules (so I'd do one test in the daytime, another at night, and then try and sleep for a few hours).
And we don't get overtime. We have a utilization target (420 hours per quarter), which, if we hit it, gets us a bonus. I don't think anyone has missed it in two years (as pretty much everyone is working at least 600 hours per quarter).
I'm hoping we fill some req's (we had around 30 open last year for testers, but the larger organization had a hiring freeze).
Yeah, I don't say this often because I feel weird talking about people who are effectively competitors, but, your company is broken and you should quit. There are better places to work. Drop me an email sometime; even if you're geographically (or socially ;) precluded from working for Matasano, I can put you in touch with lots of other people.
I've heard about the MS inaction from many sources now. For example, the IKVM.net developer wrote a couple of times: "P.S. By my new policy, I won't be filing a bug with Microsoft since they have amply demonstrated not to care about external bug reports."
Although MS' reaction does appear to be irresponsible, a browser crash is hardly the worst security issue I can imagine. If that's all this guy is finding -- it's all he mentions in his post -- then this sounds more like security for security's sake than anything practical.
That's not all he's reporting; he's just assuming some hacking knowledge, I guess. Every time you see a crash caused by a jump to some unknown address, there's a pretty good chance the crash is exploitable - but you can't tell that easily without going through the source / poking around the binary.
Basically, in many cases a jump into a non-code area means that some buffer overflowed (or some pointer got corrupted) and overwrote the saved return address in the stack frame. As long as you can control what was overwritten, you can likely point it at data you supplied yourself (back into the stack, at bytes you control). If that condition holds, you can deliver new code for execution straight from the HTML - and that means the crash is exploitable.
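To make that concrete, here's a deliberately broken toy in C (my own sketch, not code from the fuzzer or any browser involved) showing the pattern being described: a fixed-size stack buffer is filled with attacker-controlled input, the copy runs past the end of the buffer, and the saved return address is replaced with bytes the attacker chose.

    /* Toy version of the stack overflow pattern described above. Build with
     * hardening disabled (e.g. gcc -fno-stack-protector) to see the saved
     * return address get clobbered; modern default builds will usually just
     * abort via the stack protector instead. */
    #include <stdio.h>
    #include <string.h>

    static void parse_input(const char *attacker_data)
    {
        char buf[16];

        /* BUG: no length check. Anything past 16 bytes runs over the rest of
         * the stack frame, including the saved return address, so the
         * function "returns" to whatever address the input placed there. */
        strcpy(buf, attacker_data);
        printf("parsed: %s\n", buf);
    }

    int main(void)
    {
        /* Padding followed by a fake return address; on a vulnerable build,
         * parse_input() returns into attacker-chosen memory instead of here. */
        parse_input("AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA\x11\x22\x33\x44");
        return 0;
    }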
Then again, even if you can't see how a specific crash is exploitable, that doesn't mean it isn't. For a long time, double-free crashes were considered just bugs; then someone figured out you can manipulate the freed chunk's pointers through later reallocations. What I'm trying to say is that every crash caused by user-supplied data should be looked at from the "might be exploitable in the future" perspective.
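And for the double-free case, the bug itself is nothing more than the shape below (again, just a minimal sketch of mine); the later insight was that the allocator's bookkeeping for the twice-freed chunk can be manipulated, for example by reallocating it with attacker data between the two frees, so that subsequent heap operations end up writing to an attacker-chosen address.

    /* Minimal shape of a double free. For a long time this was treated as
     * "just a crash"; the trick that made it exploitable is grooming the
     * allocator's metadata for the twice-freed chunk so that later heap
     * operations write where the attacker wants. Recent glibc will usually
     * detect this particular pattern and abort. */
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *session = malloc(64);
        if (!session)
            return 1;

        strcpy(session, "user-controlled data");

        free(session);   /* first free: chunk goes back to the allocator    */
        /* ...error path taken later, stale pointer never cleared out... */
        free(session);   /* BUG: second free corrupts the allocator's state */

        return 0;
    }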
> Every time you see a crash which is caused by jumping to some unknown address, there's a pretty good chance that the crash is exploitable
No. It's not 2001. Modern MS (and UNIX) operating systems and compilers use NX, stack guards, and address randomization (ASLR) to make stack/heap overflows pretty difficult to exploit. Not impossible, but statistically unlikely. Run-of-the-mill C programming errors in a web browser are hardly automatic remote sploits now.
> every crash caused by user supplied data should be looked at from the "might be exploitable in the future" perspective.
OK, I agree with this, but out of principle, not because there's a high chance it's remotely exploitable.
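Tangent, and purely my own illustration rather than anything from the post: one of the mitigations mentioned above, ASLR, is easy to watch in action. The same binary gets different stack and data addresses every time it runs, which is exactly what makes hardcoded addresses in an exploit unreliable.

    /* Print a stack address and a static-data address. On a system with
     * ASLR (and, for the data/code segments, a PIE build such as
     * gcc -fPIE -pie, the default on many current distros), both values
     * change from run to run. */
    #include <stdio.h>

    int main(void)
    {
        int local = 0;
        static int global = 0;

        printf("stack:  %p\n", (void *)&local);
        printf("static: %p\n", (void *)&global);
        return 0;
    }

Run it a few times and the addresses jump around; an exploit that needs to return to a fixed stack address has to guess or leak them first.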
I don't know the details, but a fully attacker-controlled EIP sounds likely to be exploitable, doesn't it? Possibly data leakage from the browser, maybe arbitrary execution if the attacker is lucky. At the least, this sounds likely to put an attacker in a better position to socially engineer a user into installing malware.
If you can control EIP, you can run arbitrary code, period. It might be difficult depending on whether ASLR is in use or not, but it can always be done.
Long answer: it depends on what you define as arbitrary code execution. In a good number of cases, you can use ROP to execute whatever "code" you wish (built out of little bits of existing program code repurposed for your needs) and accomplish whatever you want to accomplish. Often that means locating your "real" payload and changing its memory protection so that you can jump into it. In effect, if you have control over EIP, you've already owned the system; it might not be easy to do everything you want, but it's effectively always possible.
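If it helps, here's a deliberately oversimplified toy of the ROP idea (my own sketch, not real exploit code): new behaviour is composed entirely out of snippets of code that already exist in the program, driven by nothing but a list of their addresses. In a real exploit that list sits where saved return addresses live, and the chaining happens through ret instructions rather than an explicit loop.

    /* Toy model of return-oriented programming. A real chain is a sequence
     * of addresses of short existing instruction sequences ("gadgets"),
     * planted where saved return addresses live, so that each gadget's
     * final ret pops the next one. The explicit loop below stands in for
     * that ret-driven chaining. */
    #include <stddef.h>
    #include <stdio.h>

    static long reg;                                    /* stand-in for a CPU register */

    static void gadget_load_0x40(void) { reg = 0x40; }  /* like: pop eax; ret     */
    static void gadget_add_2(void)     { reg += 2;   }  /* like: add eax, 2; ret  */
    static void gadget_output(void)    { printf("reg = 0x%lx\n", reg); }

    int main(void)
    {
        /* The "corrupted stack": just a list of addresses of code that the
         * program already contains. */
        void (*fake_stack[])(void) = { gadget_load_0x40, gadget_add_2, gadget_output };

        for (size_t i = 0; i < sizeof fake_stack / sizeof fake_stack[0]; i++)
            fake_stack[i]();                            /* "return" into the next gadget */

        return 0;
    }

A common goal for a real chain, as above, is to reach something like the platform's memory-protection call (VirtualProtect on Windows, mprotect elsewhere) and then jump into the attacker's actual payload.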
Interesting, thanks. According to Google, ROP is return-oriented programming, where you build up a program by jumping to near the end of existing executable subroutines. The control flow comes from corrupting the stack with a list of the addresses of this code. The example here:
Search engine hits to this guy's site indicate that these problems are being independently discovered by people based in China.
http://lcamtuf.coredump.cx/cross_fuzz/known_vuln.txt
Bugs in all other browsers, although with better responses it seems. Interesting problems to solve here, both technically and socially.
---
This guy's blog is great, read more of it! Some recent articles:
http://lcamtuf.coredump.cx/electronics/ - geek's guide to electronics for programmers who don't know this stuff
http://lcamtuf.coredump.cx/word/ - cool physical project - threat level indicator
---
Author Wikipedia page:
http://en.wikipedia.org/wiki/Micha%C5%82_Zalewski