> The correlation that an LLM produces has no value if it's not actionable.
For security, there are two parts to this:
- correlation within detection engines, i.e. what CrowdStrike does: CS and co. are already doing what you describe (baselining normal system and identity behaviors; the first snippet after this list sketches the rough idea). It's still hit-or-miss, but noticeably better than a few years ago, and I think the current AI era will push it further. These products have already removed the need for several sec eng hires.
- correlation across logs, i.e. an incident is happening and, under time pressure and stress, it's usually an IR team putting together ad hoc search queries. LLMs, since many of them seem to have indexed the query-language docs and much of the public documentation for AWS, O365, etc., are an almost invaluable tool here (the second snippet below sketches the kind of one-off query I mean). It's hard to explain how much they speed up security dev, both pre-incident prep and in-incident IR.
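For anyone who hasn't touched these products: the "baselining" idea in the first bullet is conceptually simple, even though CrowdStrike et al. do it at enormous scale with far better models. A toy sketch of the concept, where the event fields and the anomaly rule are made up for illustration:

```python
# Toy version of "baseline normal identity behavior, flag deviations".
# Fields ("user", "src_ip", "ts") and the new-IP + odd-hour rule are
# illustrative only, not how any real EDR product works internally.
from collections import defaultdict
from datetime import datetime

def build_baseline(events):
    """events: iterable of dicts like
    {"user": "alice", "src_ip": "1.2.3.4", "ts": datetime(...)}"""
    seen_ips = defaultdict(set)     # IPs each user has logged in from
    login_hours = defaultdict(set)  # hours-of-day each user logs in at
    for e in events:
        seen_ips[e["user"]].add(e["src_ip"])
        login_hours[e["user"]].add(e["ts"].hour)
    return seen_ips, login_hours

def is_anomalous(event, seen_ips, login_hours):
    """Flag a login from a never-seen IP at a never-seen hour."""
    user = event["user"]
    new_ip = event["src_ip"] not in seen_ips[user]
    odd_hour = login_hours[user] and event["ts"].hour not in login_hours[user]
    return new_ip and odd_hour
```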
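And for the second bullet, this is the kind of throwaway cross-log query an LLM will draft for you in seconds mid-incident. A minimal sketch against AWS CloudTrail via boto3 (lookup_events and the ConsoleLogin event name are real; the 48-hour window and the users-per-IP threshold are arbitrary choices for the example):

```python
# Ad hoc IR query: pull recent console logins from CloudTrail and
# group them by source IP, to spot one IP hitting many identities.
import json
from collections import defaultdict
from datetime import datetime, timedelta, timezone

import boto3

def logins_by_source_ip(hours=48):
    ct = boto3.client("cloudtrail")
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours)
    users_per_ip = defaultdict(set)
    pages = ct.get_paginator("lookup_events").paginate(
        LookupAttributes=[{"AttributeKey": "EventName",
                           "AttributeValue": "ConsoleLogin"}],
        StartTime=start, EndTime=end,
    )
    for page in pages:
        for ev in page["Events"]:
            detail = json.loads(ev["CloudTrailEvent"])
            ip = detail.get("sourceIPAddress", "?")
            user = detail.get("userIdentity", {}).get("arn", "?")
            users_per_ip[ip].add(user)
    return users_per_ip

if __name__ == "__main__":
    for ip, users in logins_by_source_ip().items():
        if len(users) > 3:  # arbitrary threshold for the sketch
            print(f"{ip} logged into {len(users)} identities")
```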
> where you can ask the "right" questions a useful tool because...
Yes, this specifically is one of the great value-adds right now: gaining context much faster than the usual pace. For security incidents, and for the self-build use cases that security engineers often run into, this aspect alone is enough to be a huge value-add.
And I agree, it will exacerbate the existing version of this, which is my point on replacing analysts:
> you are back at the original problem of developing security engineers...
This is already a problem, and LLMs help fix the current generation's version of it. It's hard to find good sec engs to fill the developmental sec eng roles, so those roles become LLMs. The outcome of this is... idk? But it is certainly happening.