Having worked for a SIEM vendor, I can say that all security software is extremely invasive; most security people can probably track every action you take on company-issued devices, and that includes HTTPS decryption.
Reminds me of a guy I know who openly brags that he can watch all of the customers who installed his company's security cameras. I won't reveal his details, but just imagine any cloud security camera company doing the same and you'd probably be right.
Yeah, the question is always whether the cure is worse than the disease. I'm quite ambivalent on this. On the one hand, I tend to agree with the "anti-AV camp" that a sufficiently well-maintained machine can do well when following best practices. Of course that includes a SIEM, which can also be run on-premises and doesn't necessarily have to decrypt traffic if it just consumes properly formatted logs.
On the other hand there was e.g. WannaCry in 2017, where ransomware hit some 200,000 systems across 150 countries running Windows XP and other unsupported Windows versions. It shows that companies world-wide had trouble properly maintaining the life cycle of their systems. I think it's too easy to only accuse security vendors of quality problems.
AKIDs (AWS access key IDs)... ugh. They'll be there if you use AWS + Mac.
Again, the plaintext is the problem.
These environment variables get loaded from the command line, scripts, etc. - CrowdStrike and all of the best EDRs also collect and send home all of that, but probably in an encrypted stream?
I usually remote-dev on an instance in a VPC because of crap like this. If you like terrible ideas (I don't use this except for occasionally debugging IAM stuff), you can use the IMDS as if you were an AWS instance: give a local loopback device the link-local IPv4 address 169.254.169.254/32, bind the instance's 169.254.169.254 port 80 to your lo's port 80, and a local AWS SDK will use the IAM instance profile of the instance you're connected to (sketch below). I'll repeat, this is not a good idea.
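A minimal sketch of what that last step buys you, assuming the loopback alias and the port-80 forward to the instance are already in place (these are the standard IMDS paths; IMDSv2 would additionally require fetching a session token first):

```python
# Minimal sketch, assuming 169.254.169.254/32 is aliased onto your loopback
# and your lo's port 80 is forwarded to the instance's IMDS. Debugging only.
import requests  # third-party: pip install requests

IMDS = "http://169.254.169.254/latest/meta-data"

# IMDSv1 shown for brevity; IMDSv2 requires a session token first.
role = requests.get(f"{IMDS}/iam/security-credentials/", timeout=2).text.strip()
creds = requests.get(f"{IMDS}/iam/security-credentials/{role}", timeout=2).json()

# Temporary credentials for the instance profile -- the same ones a local
# AWS SDK resolves automatically through its credential chain.
print(creds["AccessKeyId"], creds["Expiration"])
```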
Thank you, that's a sound perspective, but it is the responsibility of the security staff who deploy EDRs like CrowdStrike to scrub any data at ingestion time into their SIEM. But within CS's platform, it makes little sense to talk about scrubbing, since CS doesn't know what you want scrubbed unless it is standardized data (like SSNs, credit cards, etc.).
Another way to look at it: the CS cloud environment is effectively part of your environment. The secrets can get scrubbed, but CS still has access to your devices; they can remotely access them and get those secrets at any time without your knowledge. That is the product. The security boundary of OP's Mac is inclusive of the CS cloud.
For their own cloud, yeah, you basically accept their cloud as an extension of your devices. But the back-end they use(d?), Splunk, does have scrubbing capability they could expose to customers, if actual customers asked for it.
In reality, you can take steps to prevent PII from being logged by Crowdstrike, but credentials are too non-standard to meaningfully scrub. It would be an exercise in futility. If you trust them to have unrestricted access to the credential, the fact that they're inadvertently logging it because of the way your applications work should not be considered an increase in risk.
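To make that concrete, here's a minimal sketch of ingestion-time scrubbing (patterns illustrative): the standardized shapes get caught, while an arbitrary credential sails straight through because nothing about it is recognizable:

```python
# Illustrative only: scrubbing works for standardized shapes, not arbitrary secrets.
import re

PATTERNS = [
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),   # AWS access key IDs
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSNs
]

def scrub(line: str) -> str:
    for pat in PATTERNS:
        line = pat.sub("[REDACTED]", line)
    return line

print(scrub("key=AKIAIOSFODNN7EXAMPLE"))       # caught: standardized format
print(scrub("ssn=123-45-6789"))                # caught: standardized format
print(scrub("DB_PASS=correct horse battery"))  # missed: nothing to match on
```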
Anyone with the right level of access to your Falcon instance can run commands on your endpoints (using RTR, Real Time Response) and collect any data not already being collected.
That's what EDRs do. Anyone with access to your SIEM or CS data should also be trusted with response access (i.e., remotely accessing those machines).
If you want this redacted, that's SIEM functionality, not CrowdStrike's. It depends on the SIEM, but even older-generation SIEMs have a data scrubbing feature.
This isn't a CrowdStrike design decision as you've put it. Any endpoint monitoring tool, including the free and open-source ones, behaves just as you described. You won't just see env vars from Macs but things like domain admin creds and PKI root signing private keys. If you give someone access to an EDR, or they are incident responders with SIEM access, you've trusted them with full -- yet auditable and monitored -- access to that deployment.
Sure, storage. Networking though? SIEMs receive and send data unencrypted? They should not. By sending the data in plain text you open up an attack surface to anyone sniffing the network.
So there's this thing called a "threat model", and it includes assumptions about some moving parts of the infra. It very often includes the assertion that a particular environment (like the IDS log, the signing infra surrounding an HSM, etc.) is "secure" (meaning: outside the scope of that particular threat model). So it often gets papered over, and it takes some reflex to say "hey, how will we secure that other part?". There needs to be some consciousness about it, because it's not part of the model under discussion, so it's not part of the agenda of this meeting...
And it gets lost.
That's how shit happens in compliance-oriented security.
There are secrets like passwords, but there are also secrets like "these are the parameters for running a server for our assembly line for X big corp".
They have IT policies to make sure it largely does not apply. Even in our policy, any personal use is officially forbidden. Funnily, there is also an agreement with our employee board that any personal use will not be sanctioned. So guess what happens. This is done to circumvent not only the GDPR but also the TTDSG in Germany (which is harsher on 'spying', as it applies to telecoms). For any 'officially' gathered personal information, though, very specific agreements with our employee board typically exist (reporting of illness, etc.). I wonder how such information, which is also sensitive in a workplace, is handled. I also see those systems used in hospitals etc.; if other people's data is pumped through these systems, the GDPR definitely applies and auditors may find it (I only know such auditing from finance, though). In the future, NIS2 will also apply, so exactly the people that use such systems will be put under additional scrutiny. I hope this also triggers some auditing of the systems used, and not just the use of more such systems.
Is this really a criticism? Because this has been the case forever with all security and SIEM tools. It’s one of the reasons why the SIEM is one of the most locked-down pieces of software in the business.
Realistically, secrets alone shouldn’t allow an attacker access - they should need access to infrastructure or certificates on machines as well. But unfortunately that’s not the case for many SaaS vendors.
I can trust you enough to let you borrow my car and not crash it, but still want to know where my car is with an Airtag.
Similarly employees can be trusted enough with access to prod, while the company wants to protect itself from someone getting phished or from running the wrong "curl | bash" command, so the company doesn't get pwned.
That's far from factual and you are making things up. You don't need to send the actual keys to a SIEM service to monitor the usage of those secrets. You can use a cryptographic hash and send the hash instead (see the sketch below). And they definitely don't need to dump env values and send them all.
Sending env vars of all your employees to one place doesn't improve anything. In fact, one can argue the company is now more vulnerable.
It feels like a decision made by a clueless school principal, instead of a security expert.
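A minimal sketch of the hashing idea (the pepper and names here are made up); keying the hash matters, since a bare SHA-256 of a low-entropy secret can be brute-forced offline:

```python
# Sketch: fingerprint a secret for usage monitoring without shipping the secret.
# PEPPER is a stand-in for a key held by the monitoring side, never exported;
# a keyed hash stops offline brute-forcing of low-entropy values.
import hashlib
import hmac
import os

PEPPER = b"org-wide-monitoring-key"  # hypothetical; manage like any other key

def fingerprint(secret: str) -> str:
    return hmac.new(PEPPER, secret.encode(), hashlib.sha256).hexdigest()

# Two sightings of the same credential correlate; the value itself never leaves.
print(fingerprint(os.environ.get("GITHUB_TOKEN", "")))
```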
A secure environment doesn't involve software exfiltrating secrets to a 3rd party. It shouldn't even centralize secrets in plaintext. The thing to collect and monitor is behavior: so-and-so logged into a dashboard using credentials user+passhash and spun up a server which connected to X Y and Z over ports whatever... And those monitored barriers should be integral to an architecture, such that every behavior in need of auditing is provably recorded.
If you lean in the direction of keylogging all your employees, that's not only lazy but ineffective on account of the unnecessary noise collected, and it's counterproductive in that it creates a juicy central target that you can hardly trust anyone with. Good auditing is minimally useful to an adversary, IMO.
> In a highly auditable/“secure” environment, you can’t give secrets to employees with no tracking of when the secrets are used.
This does not seem to require regularly exporting secrets from the employees' machines, though. Which is the main complaint I am reading. You would log when the secret is used to access something, presumably remote to the user's machine.
I’m well aware of what a SIEM does. You do not need to log a plaintext secret to know what the principal is doing with it. In a highly auditable environment (your words), this is a disaster.
In a highly secure environment, you don't use long-lived secrets in the first place. You use 2FA and only give out short-lived tokens. The IdP (identity provider) refreshing the token for you provides the audit trail.
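In AWS terms, a minimal sketch of that pattern with boto3 (the role ARN and session name are placeholders); each assumption lands in CloudTrail, which is your audit trail:

```python
# Sketch: trade a login (ideally MFA-backed) for a short-lived token via STS.
# RoleArn and RoleSessionName are hypothetical placeholders.
import boto3

sts = boto3.client("sts")
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/dev-readonly",
    RoleSessionName="alice-laptop",  # shows up in CloudTrail per use
    DurationSeconds=3600,            # expires on its own; nothing long-lived to leak
)
creds = resp["Credentials"]          # AccessKeyId, SecretAccessKey, SessionToken
print(creds["Expiration"])
```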
Keeping secrets and other sensitive data out of your SIEM is a very important part of SIEM design. Depending on what you’re dealing with you might want to tokenize it, or redact it, but you absolutely don’t want to just ingest them in plaintext.
If you’re a PCI company then ending up with a credit card number in your SIEM can be a massive disaster. Because you’re never allowed to store that in plaintext, and your SIEM data is supposed to be immutable. In theory that puts you out of compliance for a minimum of one year with no way to fix it; in reality your QSAs will spend some time debating what to do about it and then require you to figure out some way to delete it, which might be incredibly onerous. But I have no idea what they’d do if your SIEM somehow became full of credit card numbers; that probably is unfixable…
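For what it's worth, the usual defense is a DLP-style filter in front of ingestion. A minimal sketch, assuming a line-oriented pipeline: flag candidate digit runs, then use a Luhn check to cut false positives:

```python
# Sketch: catch likely card numbers (PANs) before they reach immutable storage.
import re

# 13-19 digits, optionally separated by spaces or dashes, ending on a digit.
CANDIDATE = re.compile(r"\b\d(?:[ -]?\d){12,18}\b")

def luhn_ok(digits: str) -> bool:
    """Standard Luhn checksum: double every second digit from the right."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def redact_pans(line: str) -> str:
    def repl(m: re.Match) -> str:
        digits = re.sub(r"\D", "", m.group())
        return "[PAN REDACTED]" if luhn_ok(digits) else m.group()
    return CANDIDATE.sub(repl, line)

print(redact_pans("charge card 4111 1111 1111 1111 ok"))  # redacted: passes Luhn
print(redact_pans("order id 1234 5678 9012 3456"))        # left alone: fails Luhn
```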
If that’s straightforward then congratulations, you’ve failed your assessment for not having immutable log retention.
They certainly wouldn’t let you keep it there, but if your SIEM was absolutely full of cardholder data, I imagine they’d require you to extract ALL of it, redact the cardholder data, and then import it into a new instance, nuking the old one. But for a QSA to sign off on that they’d be expecting to see a lot of evidence that removing the cardholder data was the only thing you changed.
> Realistically, secrets alone shouldn’t allow an attacker access - they should need access to infrastructure or a certificates in machines as well.
This isn't realistic, it's idealistic. In the real world secrets are enough to grant access, and even if they weren't, exposing one half of the equation in clear text by design is still really bad for security.
Two factor auth with one factor known to be compromised is actually only one factor. The same applies here.
My mental model was that Apple provides backdoor decryption keys to China in advance for devices sold in China/Chinese iCloud accounts, but that they cannot/will not bypass device encryption for China for devices sold outside of the country/foreign iCloud accounts.
Seriously? CrowdStrike is obviously NSA, just like Kaspersky is obviously KGB and Wiz is obviously Mossad. Why else are countries so anxious about local businesses not using agents made by foreign actors?
KGB is not even a thing. Modern equivalent is FSB, no? I'm skeptical. I don't think it's obvious that these are all basically fronts, as much as I'm willing to believe that IC tentacles reach wide and deep.
Agents don't just read env vars and send them to SIEM.
There's a triggering action that caused the env vars to be used by another... ahem... process... that any EDR software on this beautiful planet would have tracked.
No, it logs every command macOS runs or that you type in a terminal, either directly or indirectly - from macOS-internal periodic tasks to you running “ls”.
I don't think this is limited to just Macs based on my experience with the tool. It also sends command line arguments for processes which sometimes contain secrets. The client can see everything and run commands on the endpoints. What isn't sent automatically can be collected for review as needed.
It does redact secrets passed as command line arguments. This is what makes it so inconsistent. It does recognize a GitHub token as an argument and blanks it out before sending it. But then it doesn’t do that if the GitHub token appears in an env var.
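A toy model of that asymmetry (not CrowdStrike's actual code, just the shape of the behavior): the recognizer exists, it's just never pointed at the environment block:

```python
# Toy model of the argv/env asymmetry described above (not actual sensor code).
import re

GITHUB_TOKEN = re.compile(r"\bghp_[A-Za-z0-9]{36}\b")  # classic PAT shape

def redact(s: str) -> str:
    return GITHUB_TOKEN.sub("[REDACTED]", s)

token = "ghp_" + "a" * 36  # dummy value

event = {
    # command-line arguments pass through the recognizer...
    "argv": [redact(a) for a in
             ["git", "clone", f"https://x:{token}@github.com/org/repo"]],
    # ...but the environment, with the same recognizable shape, is sent verbatim
    "env": {"GITHUB_TOKEN": token},
}
print(event)
```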
It may depend a bit on your organization but I bet most folks using an EDR solution can tell you that Macs are probably very low on the list when it comes to malware. You can guess which OS you will spend time on every day ...
Arbitrary bad practices as status quo without criticism, far from absolving more of the same, demand scrutiny.
Arbitrarily high levels of market penetration by sloppy vendors in high-stakes activities, far from being an argument for functioning markets, demand regulation.
Arbitrarily high profile failures of the previous two, far from indicating a tolerable norm, demand criminal prosecution.
It was only recently that this seemingly ubiquitous vendor, with zero-day access to a critical kernel space that any red-team adversary would kill for, said “lgtm shipit” instead of running a test suite, with consequences and costs (depending on who you listen to) ranging from billions in lost treasure to loss of innocent life.
We know who fucked up, and we have an idea of how much corrupt-ass, market-failure crony capitalism it takes to admit such a thing.
The only thing we don’t know is how much worse it would have to be before anyone involved suffers any consequences.
Anyone with access to your CS SIEM can search for GitHub, aws, etc creds. Anything your devs, ops and sec teams use on their Macs.
Only the Mac version does this. There is no way to disable this behaviour, or to redact things.
Another really odd design decision. They probably have many, many thousands of plaintext secrets from their customers stored in their SIEM.