About those tests, you should know that the testing orgs use an array of computers with up-to-date AV solutions and direct them all to e.g. websites dealing malware as soon as they find a new source of malware attacks.
I honestly cannot imagine a better way to objectively test how well the products fare against attacks on an average Internet user.
Edit: If I was not clear, nobody tests with historical samples anymore. Only live attacks are being used for tests.
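For what it's worth, here's a rough sketch of what I understand that kind of live test loop to look like. Everything in it (the URL feed, the machine names, the driver stub) is a placeholder for illustration, not any lab's actual harness:

```python
from datetime import datetime, timezone

# Placeholder inputs: in a real lab these come from live crawler feeds and
# a rack of identically imaged machines, one per AV product under test.
fresh_malicious_urls = ["http://example.invalid/exploit-kit-landing-page"]
test_machines = ["machine-with-product-A", "machine-with-product-B"]

def visit_and_check(machine: str, url: str) -> bool:
    """Drive the machine's browser to the URL and report whether the AV
    product on it blocked the attack. Stubbed out; real labs plug in
    their own VM/browser automation here."""
    print(f"[stub] {machine} -> {url}")
    return False  # placeholder result

results = []
for url in fresh_malicious_urls:       # each URL is tested as soon as it's found
    for machine in test_machines:      # every product faces the same live attack
        results.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "machine": machine,
            "url": url,
            "blocked": visit_and_check(machine, url),
        })
```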
The problem is trying to extrapolate future performance based on performance against a historical sample. The process looks something like this:
1. Malware author releases something new
2. Users start getting compromised
3. Antivirus vendors start getting samples and analyzing them
4. New signatures are released
5. Clients download and install the new signatures
That cycle used to work better, but in the Internet era it's a given that malware authors take advantage of the substantial delays between steps 4 and 5, which are often measured in hours or even days, and change their code as soon as new signatures are released.
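To make the brittleness concrete, here's a minimal sketch of exact-match signature checking (real engines also use heuristics and partial-pattern signatures, so this is the simplest possible case). The payload bytes are made up; the point is that a trivial change to the code defeats the signature that took hours or days to reach clients:

```python
import hashlib

# Hypothetical signature database: SHA-256 hashes of known-bad samples,
# i.e. the output of steps 3-4 above.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"original malware payload v1").hexdigest(),
}

def is_flagged(file_bytes: bytes) -> bool:
    """Return True if the file exactly matches a known-bad signature."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

print(is_flagged(b"original malware payload v1"))  # True: the analyzed sample
print(is_flagged(b"original malware payload v2"))  # False: a trivial variant
```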
When someone reports results and specifies that the percentages are based on a historical library, that tells you little about what the product will do for you now. When they say the results are based on samples collected in the month prior to the test (which is what AV Test and AV Comparatives say they do), that's less stale, but since the measurement starts after the vendors have already completed the entire cycle, it still doesn't tell you how long you'll be exposed between steps 1 and 5, or whether some malware authors are consistently staying ahead of the loop.
This really comes back to security fundamentals: trying to enumerate all of the bad things on the internet is futile. The better strategy is removing the ability to run programs which aren't on a known-good list, but that breaks a lot of legacy practice.
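The allow-list approach just inverts that check: instead of enumerating known-bad hashes, refuse to run anything that isn't on a known-good list. A minimal sketch with a made-up allow list (in practice this is what tools like AppLocker or code-signing policies manage for you, usually keyed on publisher certificates rather than raw hashes):

```python
import hashlib
from pathlib import Path

# Hypothetical allow list: SHA-256 hashes of binaries you have vetted.
KNOWN_GOOD_HASHES = {
    hashlib.sha256(b"contents of a vetted binary").hexdigest(),
}

def may_execute(path: Path) -> bool:
    """Allow execution only if the binary's hash is on the known-good list."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_GOOD_HASHES
```

Anything new that a drive-by download drops on disk fails that check by default, which is also why the legacy-practice objection bites: every legitimate install or update has to be added to the list before it will run.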
> I honestly cannot imagine a better way to objectively test how well the products fare against attacks on an average Internet user.
The most reliable way to do this would be to simulate randomly surfing around the web, being sure to click on all of the ads, while monitoring for changes to existing programs or new programs, access to files the browser had no reason to open, and unexpected network connections.
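A bare-bones sketch of that kind of harness: snapshot the programs directory before the surfing session, diff it afterwards, and flag established connections to ports a browser has no business using. The directory path, the expected ports, and the psutil dependency are my choices for illustration; a real rig would also hook OS-level file-access auditing, which this doesn't attempt:

```python
import hashlib
from pathlib import Path

import psutil  # third-party; pip install psutil

WATCHED_DIR = Path("/usr/local/bin")   # hypothetical programs directory
EXPECTED_REMOTE_PORTS = {80, 443}      # what a browsing session should use

def snapshot(directory: Path) -> dict[Path, str]:
    """Map each file under the directory to its SHA-256 hash."""
    return {
        p: hashlib.sha256(p.read_bytes()).hexdigest()
        for p in directory.rglob("*") if p.is_file()
    }

def diff(before: dict[Path, str], after: dict[Path, str]) -> list[str]:
    """Report new or modified programs relative to the baseline."""
    findings = []
    for path, digest in after.items():
        if path not in before:
            findings.append(f"new program: {path}")
        elif before[path] != digest:
            findings.append(f"modified program: {path}")
    return findings

def unexpected_connections() -> list[str]:
    """Report established connections to ports outside the expected set."""
    return [
        f"unexpected connection to {c.raddr.ip}:{c.raddr.port}"
        for c in psutil.net_connections(kind="inet")
        if c.status == psutil.CONN_ESTABLISHED
        and c.raddr and c.raddr.port not in EXPECTED_REMOTE_PORTS
    ]

# Usage: baseline = snapshot(WATCHED_DIR); run the surfing session; then
# report diff(baseline, snapshot(WATCHED_DIR)) plus unexpected_connections().
```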