From the outside, it seems that most exploit chains on modern systems rely on 4 mostly independent steps:
1. code execution in a worker process - typically a memory corruption
2. arbitrary code execution (ACE) in the worker process to ACE in an unsandboxed process - i.e., a sandbox escape
3. code execution in unsandboxed process to local root
4. local root to persistence
Finding (1), (3), and (4) is old-school exploit development - a combination of watching fuzzers, reading code, reading bug reports, and memory-corruption exploit development (which is a black art I'm not familiar with). So persistence and luck. Be lucky 3 times and you have 3 steps. If you were an organization, I suppose you could have 3 separate groups, or buy from 3 separate blackhats.
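The "be lucky 3 times" point can be made concrete with a toy probability sketch. This is purely illustrative, with made-up numbers: if the three steps really are independent and each has some per-unit-effort chance of panning out, the chances multiply, which is why splitting the work across groups (or buying pieces separately) makes sense.

```python
# Toy model: probability of having a full chain if the three
# old-school steps (1, 3, 4) are found independently.
# The per-step probability below is an invented placeholder,
# not a real-world estimate.
p_step = 0.1          # assumed chance one group finds its step
p_chain = p_step ** 3  # all three must succeed together

print(f"one group, one step:  {p_step}")
print(f"full chain by luck:   {p_chain}")
```

With these made-up numbers the full chain is a 1-in-1000 event even though each individual step is 1-in-10, which matches the intuition that organizations parallelize the search.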
I'm less familiar with the "worker process to user process" part, which tends to rely on combining a few vulnerabilities (in this exploit, 2 + 1 broken hardening), but it's probably similar.