kafrofrite's comments | Hacker News

Most probably, what Apple means is that since the codebase is shared, the vulnerability exists across devices. This does not mean the vulnerability is actively exploited on iOS, nor that it will not be actively exploited as part of some other campaign.


> Has this happened before? That iPhones had a security hole that could be exploited over the web?

Yes, there have been remotely exploitable vulnerabilities in the past, including some that were used for jailbreaking.


I work as a security engineer and, yes, the CT logs are extremely useful, not only for identifying new targets the moment a certificate is issued but also for identifying patterns in how you name your infra (e.g., dev-*, etc.).
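A minimal sketch of that second use case. The hostnames and the prefix pattern here are made up for illustration; in practice you'd feed in names from a CT log monitor:

```python
import re

# Hypothetical hostnames as they might appear in a CT log feed;
# these names are invented for illustration.
ct_entries = [
    "www.example.com",
    "dev-api.example.com",
    "staging-payments.example.com",
    "blog.example.com",
]

# Prefixes that often hint at non-production infrastructure.
internal_pattern = re.compile(r"^(dev|staging|test|uat)[-.]", re.IGNORECASE)

interesting = [h for h in ct_entries if internal_pattern.match(h)]
print(interesting)  # ['dev-api.example.com', 'staging-payments.example.com']
```

Once you know an org names things dev-* or staging-*, every new certificate matching that pattern is a lead.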

A good starting point for hardening your servers is the CIS Benchmarks (hardening guides) and the relevant scripts.


IIRC, [1] mentions a few examples of AI systems that exhibited the same biases currently present in the judicial system, banks, etc.

[1] https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction


This is honestly what scares me the most. Our biases are built into AI, but we pretend they're not. People will say, "Well, it was the algorithm/AI, so we can't change it." Which is just awful and should scare the shit out of everyone. There was a book [0] written almost fifty years ago that predicted this. I still haven't read it, but I really need to. The author claims it made him a pariah among other AI researchers at the time.

[0] https://en.wikipedia.org/wiki/Computer_Power_and_Human_Reaso...


While not about AI directly, and supposedly satirical, https://en.wikipedia.org/wiki/Computers_Don%27t_Argue really captures how the system works.


I'm not a fan of Windows, but Stuxnet didn't happen because of Windows. Iran decided to spin up a nuclear program, and Israel and the US had concerns and wanted to stop it. They had the resources to develop something tailored to this unique situation, which involved Windows, Siemens PLCs (IIRC), centrifuges, etc., and developed the malware based on their target. Even if the target had used a different stack, they'd have found a way to achieve the same result.


It's all about price. Attacking Linux will be harder, thus more expensive.

You make it sound easy; if that were the case, they'd launch an attack every few months or so. This stuff is expensive, and making it 100x harder means 100x fewer attacks before the budget runs out.


I'll try my best to explain everything (trying to avoid too much security lingo, hopefully).

A password manager is essentially a big, encrypted database of passwords. A master password decrypts the database, and from there you can use your passwords. Note that hashes are one-way operations and thus are not what password managers use; the vault has to be decryptable. The benefit of a password manager is that users need to remember and handle only one password, that of the password manager itself; the rest of the passwords can be unique and rotated quickly. Ideally, your password manager does a few more things, including taking precautions against leaving traces of passwords in memory, etc.
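A rough sketch of the "one master password" idea. The KDF parameters and salt here are illustrative, not any particular product's scheme: the vault key is derived from the master password with a slow, salted KDF, so only the correct password reproduces the key that decrypts the database.

```python
import hashlib

def derive_vault_key(master_password, salt):
    # Slow, salted key derivation: the same password + salt always
    # yields the same key; a wrong password yields a different key,
    # so the encrypted vault simply fails to decrypt. The iteration
    # count is illustrative only.
    return hashlib.pbkdf2_hmac(
        "sha256", master_password.encode(), salt, 600_000
    )

salt = b"per-vault-random-salt"   # stored in the clear alongside the vault
key = derive_vault_key("correct horse battery staple", salt)
wrong = derive_vault_key("hunter2", salt)
assert key != wrong               # wrong password -> wrong key, no decryption
```

This is also why hashing alone doesn't fit here: a website can store a one-way hash because it only needs to verify you, but a vault needs a reversible key, hence KDF plus encryption.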

There's another part of commercial password managers which is mostly convenience functionality: passwords are synced across devices, specific members can access specific passwords, etc.

Some people do use local password managers, depending on their threat model (i.e., who's after them) and their level of expertise/time on their hands. Setting up something locally requires taking additional precautions (such as permissions, screen locks etc.) that are typically handled by commercial password managers.

Regarding Okta: Okta is an identity provider. In theory, identity providers can provide strong guarantees about a user, i.e., "I authenticated this user, thus I gave them these tokens to pass around." Strong guarantees can include a number of things, such as multi-factor authentication, VPN restrictions, etc.
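To make that concrete, here is a minimal sketch of the token idea; this is not Okta's actual mechanism (real IdPs typically use asymmetric signatures), and the key and claim names are made up. The point is that the IdP signs the claims, so downstream services can verify "this user really authenticated here" without ever handling credentials:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"idp-signing-key"  # hypothetical shared secret

def issue_token(claims):
    # The IdP vouches for the claims by signing them.
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_token(token):
    # A relying service checks the signature; a tampered or forged
    # token fails verification and is rejected.
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(body))

tok = issue_token({"sub": "alice", "mfa": True})
assert verify_token(tok) == {"sub": "alice", "mfa": True}

# Flip one character of the signature: verification must fail.
tampered = tok[:-1] + ("0" if tok[-1] != "0" else "1")
assert verify_token(tampered) is None
```

The "strong guarantee" part is whatever the IdP enforced before signing, e.g. the `mfa` claim above.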

Funny story: during an internal red team engagement at a previous employer of mine, we took over the local password manager of a subset of the security org, twice. The first time, they had an unauthenticated VNC session with the password manager running and the vault unlocked. The second time, a team conveniently used Git to sync their password manager file, with their password tracked in the repo.


Reminded me of a funny story. Maybe a decade ago, when moving to the cloud was all the rage, my then employer decided to check whether the cloud was any good. Long story short, he asked me to conduct penetration tests against the major providers. At one of the providers, I pivoted through some network and hit a webpage that looked like some sort of control-plane panel (but it required authentication, so...). I decided to Google part of the HTML and... a Stack Overflow thread popped up with the code and parts of the backend code/logic. So much win.


> he asked me to conduct penetration tests against the major providers

That sounds madly illegal?


Most providers had a semi-automated process that granted you permission to conduct your pentest (assuming you'd share any findings regarding their infra with them). In reality, though, most of the findings didn't come from poking around but from tapping the wire. I'd spin up VMs and tcpdump for hours, then look at the logs for odd packets, plaintext, etc., which makes such shenanigans hard to detect.

Edit: We went through the process for everything, including having a provider ship us a back-up solution to pentest. My desk became everyone's favourite place in the building :P


Knocking on someone’s front door and noticing it’s unlocked is perfectly legal. It’s actually walking in that’s illegal.


And at least in England, trespassing is not even a criminal offense afaik, just a civil one - and the owner will have a hard time winning that case too, without very explicit signage.

Unless you help yourself to the house's contents, or do other Bad Things, walking through unlocked dwellings will get you at most a slap on the wrist.


Outside of the cybersecurity analogy, as an American, that's... very disturbing.

Much like someone open carrying a gun is seen as potentially a few seconds away from committing a Very Bad Crime, so is someone walking around your house uninvited.


England has some weird (to me) property privacy laws. IIRC, you cannot be charged for simply walking through someone's property as a shortcut. There's nothing they can do about it, you just can't linger on the property. I mean, it seems fine, I just haven't seen anything like it before.


It's the system throwing a bone to the general populace in order to maintain an extremely unequal order. Aristocratic landowners mostly do what they want, and there has been no land reform for centuries, so a few concessions were thrown in to allow peasants to make a living somehow.


Well cutting across someone's yard != walking through their house. My friends and I growing up would sometimes cut through neighbors' backyards to go somewhere, and while we didn't have formal permission, no one cared because we knew each other.


I don't know the situation now, but in the UK you could break into an empty place, change the locks, and from that point on the owner could not evict you without a long process involving going to court. There was (is?) a huge squatters' community because of this.


From the story of the GP, and extending your analogy, this is more like if they walked into the house and found the safe and noted it was locked, so looked up the safe schematics online.

Not exactly legal.

But even stepping back, I suspect walking around and jiggling random people's doorknobs to see if they're unlocked is probably illegal.


It’s funny how often this works, there’s a ton of copypasta code in production out there.

I do some bug bounty hunting for fun, and just yesterday I Googled a weird snippet of frontend code from a major corporation, found the matching backend code in a blog post, and saw a bug in it. Alas, not a bug that could be used for anything interesting this time.


IIRC, Intel announced about a year later plans to develop something similar. That being said, at the time they didn't have a specific timeline.


> I don't think OS becomes any less vulnerable than usual Linux/Windows installation.

is not a good enough argument.

For the story, SIP is Apple's "rootless": effectively, even root runs with reduced privileges, as parts of the OS are protected from it. Disabling SIP significantly increases the attack surface.

That being said, I'm grateful that someone decided to do something more native for containers in macOS.


I think it's an OK argument given that most people run (and have been running with no alternative until very recently) docker in such a way that there's a trivial privesc to root. In general it seems like docker users are, overall, willing to take that tradeoff.


How so? I use docker pretty frequently, but I make sure that my user is part of the docker group before I do, so I don't sudo anything.

Is there anything else I should be doing security wise?

I’ve been hearing podman is more secure, but I think it’s still containerd under the hood, so idk how true that is.


In general, if you can `docker run` without sudo, then you have a trivial privesc path: you can `docker run` with the various flags that disable sandboxing, get a shell, and just ask to be let out of the namespace.

The way that podman and newer versions of docker get around this is using unprivileged user namespaces. Unprivileged user namespaces are not a free lunch - in fact, they're a bit of a security disaster in their own right.


In a typical installation, being in the docker group gives you access to a socket that controls the docker daemon, and that daemon runs as root. `sudo` is not important in this context.

Thankfully there is rootless mode for some time now: https://docs.docker.com/engine/security/rootless/.

Podman, too, can run in rootful and rootless mode. Rootless still feels more like a first-class citizen in podman than it does in docker.

In both cases it's important to keep in mind in which mode you operate. Both from the perspective of security and day to day operations, as some aspects of behavior will differ between those modes.


DEP is Windows' implementation of a non-executable stack, i.e., memory permissions that do not allow execution on specific pages. Depending on the situation, an attacker can, e.g., mmap() a new page with the execute permission set, write their shellcode there, and jump to it. Another way to bypass the NX bit is to reuse gadgets (essentially snippets of code) that are already present in executable memory, redirecting the instruction pointer to those addresses. Reusing code this way is generally known as ROP, JOP, etc., and is mitigated by PAC on ARM (v8.3 onwards) and CET on Intel (11th Gen onwards, I believe).
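A concrete sketch of the mmap() technique mentioned above, assuming Linux on x86-64 (the six bytes encode `mov eax, 42; ret`). This is exactly the pattern that W^X policies and the mitigations mentioned above aim to frustrate:

```python
import ctypes
import mmap

# Map one anonymous page readable, writable and executable.
page = mmap.mmap(-1, mmap.PAGESIZE,
                 prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)

# x86-64 machine code for: mov eax, 42; ret
page.write(b"\xb8\x2a\x00\x00\x00\xc3")

# Treat the page as a C function returning int and call into it.
func_type = ctypes.CFUNCTYPE(ctypes.c_int)
addr = ctypes.addressof(ctypes.c_char.from_buffer(page))
shellcode = func_type(addr)
print(shellcode())  # 42
```

On platforms with strict W^X or codesigning (macOS, iOS), requesting a simultaneously writable and executable anonymous page like this is denied or heavily restricted, which is the point of those mitigations.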

That being said, Apple implements a ton of mitigations, both on a hardware level and on a software level which generally makes exploits on Apple devices interesting to analyze and see how they bypassed stuff.

Edit: For clarity, Apple both requires codesigning and implements PAC, among other mitigations. mmap'ing an executable page or plain ROP won't make the cut in this case.

