
   The funny thing is that this is effectively a keylogger that does not run any code on the CPU while it is running.
I already knew it, but this just reinforced how terribly vulnerable pretty much every computer system is. Makes me think ransomware/hacks are going to get a lot worse, and I can’t see how the situation can be improved, at least for quite some time.



For some reason, I can't interpret your comment as anything other than security-paranoia FUD advocating for completely locked-down, user-hostile computers (or maybe I should say "devices"...) under the total control of Big Tech, because that's the narrative they've been using more and more to justify it.


That's rather cynical, but to clarify: my comment is rooted in the understanding (illustrated by the OP with some excellent examples) that all of our digital infrastructure, from browsers down to silicon, is born from tight engineering schedules, a "ship it" mentality, and a focus on "ROI", not necessarily security. Even if there were more focus on security, the OP's article is fantastic at showing a sliver of this complexity, and what a daunting task it is to understand the interrelationships and consequences of many individual design choices, across a litany of components and libraries, over many decades and teams.

As a first step, I'd like to see some tough laws that hold companies liable for data leaks. Once it becomes a major liability to be the source of a data leak, most companies won't bother collecting PII, and will make it a point to ensure it's not stored on their systems. Nonetheless, systems will always have vulnerabilities and be exploited. This is at least worrisome for many, if not terrifying, and it should not be automatically interpreted as "security-paranoia FUD".


Some folks are going to be more personally exposed to the current state of things than others.


Someone would have to load this firmware onto your devices, either with local physical access or remote root access to your system. Are you suggesting it should be impossible for anyone to modify the firmware on their peripherals/NICs/etc, or else the world will end or similar? Is it only safe to allow us iPads?


I'm pointing out my conclusion: nothing is safe... yikes!

And no, I'm not worried that because of this article anyone can keylog my stuff. It's the realization, from the OP's summary, that there are many attack vectors and vulnerabilities hiding in and between layers and components. iPads are no different in that respect.

Seriously, how do we get past the current state of zero-days and major vulnerabilities cropping up on the reg?

Some ideas from other commenters here aren't entirely convincing to me: require open source, require software standards (both of which would need to apply to hardware/silicon as well). I'm honestly looking for some thoughts on how to build a more secure digital future (Links to articles or studies are welcome).


Use less software.

But really, I think some perspective is needed on what is "safe". Is riding in a car "safe"? Is eating food from the supermarket "safe"? Can you ABSOLUTELY GUARANTEE that it's IMPOSSIBLE to screw it up? How did my parents survive for 60 years under these UNSAFE conditions?

I think electronic devices can be pretty damn safe, even without totally locked-down firmware and secure boot. They can be flashed with known low-level firmware at the hardware level (SPI, JTAG, or similar), then boot trusted install media, wipe the mass storage, and install fresh.

Then, keep it minimal, keep it under control. Don't install and use 20k components/libraries you aren't familiar with, hundreds of which want to update every other day. At the least, be familiar with all the processes and daemons running: either you know why they're there, or they shouldn't be there. You don't need a firewall if no process is listening for connections (and if you need a firewall to block something, why are you running it?!). Just run less junk.
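
To apply that last point, here's a minimal sketch (assuming the third-party psutil package, and enough privileges to see other users' processes) that lists every TCP socket in LISTEN state and the process behind it:

    # Audit which processes are listening for network connections.
    import psutil

    for conn in psutil.net_connections(kind="inet"):
        if conn.status != psutil.CONN_LISTEN:
            continue
        proc = psutil.Process(conn.pid).name() if conn.pid else "?"
        print(f"{conn.laddr.ip}:{conn.laddr.port:<6} {proc}")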


Good advice for the above-average tech-savvy user, but alas useless for >90% of users.


Programs are riddled with mistakes. Just because you can point a finger and say zero-day, doesn't mean that there's a single solution to this problem. Nothing _is_ safe; life is inherently unsafe.


>Is it only safe to allow us iPads?

Of course not. But the meat of the discussion is that everyone agrees it IS safe to use iPads. So, is there a practical middle ground?


Just make companies liable for any damage caused by their crappy products. Make them pay billions in damages every time somebody gets hacked because of their negligence. Then they'll start caring about the quality of their software instead of treating it as a cost center.


The implication that software providers should be liable reappears here eternally and remains misguided, even when, as in this case, we're essentially discussing hardware.

Software is perhaps the area where "good" or "crappy" is most undetermined. A given piece of software can be bullet-proof today, and a catastrophic hole can appear tomorrow. And even if the producer releases an update, there's no guarantee it will be picked up.

The overall situation is that what's needed are standards of software use for those companies which actually do damage. Without standards, your use of "crappy" is meaningless.


> A given piece of software can be bullet-proof today and a catastrophic hole can appear tomorrow.

Sometimes you do everything you can and things still go wrong. That's okay.

What happens in practice is totally different, though. Gross negligence is endemic in the technology industry. Most companies out there simply don't give a shit. Their negligence is deliberate, calculated, and premeditated. They know exactly how much damage they're causing, and they don't care, because caring costs money.

> Without standards, your use of "crappy" is meaningless.

It's not meaningless at all. For example, nearly every laptop manufacturer I've ever seen has delivered software to me that is unambiguously bad. This opinion isn't controversial. You just need to fire up some manufacturer app to see how incredibly bad they are.

I've posted about that here many times, and people explained to me that the software is garbage because hardware companies literally don't care about it. They see it as just additional cost to be eliminated, and as a result we get products which are total crap. My laptop came with a driver that intercepts my keystrokes and sends signals to the keyboard so that it can light up the LEDs under the keys I pressed. What caused an insane design like this to even come into existence is beyond me; no doubt it came down to saving a few cents in manufacturing. I replaced this functionality with free software, and I'm not sure I even want to know whether there are any vulnerabilities in that driver.
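
To make the concern concrete: any userspace process implementing per-key lighting necessarily observes every keystroke. A rough sketch of the "observe" half on Linux, assuming the python-evdev package and a hypothetical device path:

    # A userspace process observing every keystroke via evdev, which is
    # roughly what a per-key RGB lighting driver must do. Needs read
    # access to the input node; the path below is hypothetical.
    from evdev import InputDevice, categorize, ecodes

    dev = InputDevice("/dev/input/event3")  # hypothetical keyboard node

    for event in dev.read_loop():
        if event.type == ecodes.EV_KEY and event.value == 1:  # key press
            print(categorize(event))  # a driver would light the LED here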


>My laptop came with a driver that intercepts my keystrokes and sends signals to the keyboard so that it can light up the LEDs under the keys I pressed. What caused an insane design like this to even come into existence is beyond me, no doubt it came down to saving a few cents in manufacturing. I replaced this functionality with free software and I'm not sure if I even want to know whether there are any vulnerabilities in that driver.

That sounds pretty cool and hackable actually.


This is how malpractice works in every other industry. It isn't just about damage being caused by the software, but whether there was a violation of the reasonable standard of care.


Exactly. We engineering types tend to underestimate how much intent and judgment matter when it comes to malpractice (and similar) laws.


> A given piece of software can be bullet-proof today and a catastrophic hole can appear tomorrow.

Only because you didn't notice the catastrophic hole today, and that's true of everything. When a building collapses, the construction company/architects can't really get out of it by going "well, it was perfectly fine yesterday, it's hardly our fault!" I don't see why we'd accept that attitude from software engineers.


The state space of a complex piece of software is vastly larger than that of a building, and the desirable states within that space are not continuous, while a building's desirable state space is pretty smooth. It's like the difference between walking along the knife edge of a fractal vs. standing on top of a mostly smooth hill.

Because of that, software simply can't be treated the same as other engineering disciplines, at least not yet.


> The state space of a complex piece of software is vastly larger than that of a building

Is it? Unless we can accurately simulate reality at the atomic level and brute-force building designs against every possible scenario to make sure they don't fail catastrophically, it all still comes down to prior knowledge/experience, reasonable assumptions, approximations, and measurements, just like in software. If anything, software is easier: with enough effort (formal methods, etc.) you can actually prove the correctness of your software, but you can't really do so for a building.
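
For a taste of what "proving correctness" means in practice, here's a toy Lean 4 sketch: a trivial function plus a machine-checked proof that it meets its spec (real verification efforts scale this up enormously):

    -- Toy formal verification: a machine-checked proof that a
    -- (trivial) program satisfies its specification.
    def double (n : Nat) : Nat := n + n

    theorem double_spec (n : Nat) : double n = 2 * n := by
      unfold double
      omega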

A large chunk of software vulnerabilities is due to outright incompetence, legitimate mistakes, or cost-cutting. The insecure, black-box, backdoorable EC in this article would have a "building" equivalent of using rebar or concrete of unknown specs and origin and then wondering why the construction collapsed.

Thankfully, the liabilities associated with civil engineering mean we mostly don't use unknown materials of shady origin, and there are multiple layers of review to catch oversights. The same can be applied to software, and while you're never going to get 100% in either domain, if software were as reliable as buildings (as in, you can count the number of major collapses/bugs on one hand in the past few years) it would be a major improvement.


Disagree. First of all, the level of detail an atomic-level simulation could provide is simply useless in the grand scheme of things. Classical physics is more than enough for the vast majority of use cases, and it can be simulated by computers; e.g., you can programmatically measure the load borne by any part of a structure. More importantly, engineers can usually fall back on safety margins that "linearly scale" the confidence in a given case: if a given part has to be this stable per the calculations, you can trivially make it 3x wider.

A while loop in a computer program, on the other hand, will eat away at any sort of redundancy you may introduce, and the complexity of computability itself leaves even mathematics behind: there is no universal way to prove a non-trivial property of an arbitrary program.
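
That last sentence is the classic undecidability result. The standard diagonalization argument, sketched in Python with a hypothetical oracle:

    # Sketch of why no universal verifier can exist (halting-problem
    # diagonalization). The oracle below is hypothetical; the point is
    # that no correct implementation of it can exist.
    def halts(program, arg) -> bool:
        """Hypothetical universal oracle: True iff program(arg) halts."""
        raise NotImplementedError("provably impossible to implement")

    def troublemaker(program):
        if halts(program, program):
            while True:      # the oracle says we halt, so loop forever
                pass
        return "halted"      # the oracle says we loop, so halt at once

    # troublemaker(troublemaker) contradicts whatever halts() predicts,
    # so no such halts() can exist.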


> The state space of a complex piece of software is vastly larger than that of a building

Not really, we've just gotten very good at constructing buildings thanks to millennia of experience. We're terrible at writing software.


Yes. People forget how difficult it is to produce even a screw or a nail. And then there are way more difficult things like tubes.


This isn't really fair. Most buildings, public works, utilities, and industrial processes are extremely vulnerable to the most basic of attacks; such attacks just don't get carried out as often in physical space because they're logistically more difficult to execute.


Your claim is fundamentally "It's logistically easier to attack software, so such attacks will happen more often".

This is absolutely true, but it's also proportionally easier to defend software. It's insanely easy to test whether your software is vulnerable to SQL injection, it's not particularly easy to test whether your building can be destroyed with explosives.

Combine that with the fact that just about every piece of consumer software on the planet has a laundry list of bugs that don't even require malicious intent to reproduce, and I find it very hard to accept that reasoning as an excuse for software developers shirking responsibility.
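
To make the "insanely easy to test" point concrete, here's a minimal sketch using Python's built-in sqlite3; the table and inputs are made up for illustration:

    # The classic SQL injection bug and its fix (illustrative schema).
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    db.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

    attacker_input = "nobody' OR '1'='1"

    # Vulnerable: string concatenation lets the input rewrite the query.
    leaked = db.execute(
        "SELECT secret FROM users WHERE name = '" + attacker_input + "'"
    ).fetchall()
    print("concatenated query leaks:", leaked)   # [('hunter2',)]

    # Safe: a parameterized query treats the input as data, not SQL.
    safe = db.execute(
        "SELECT secret FROM users WHERE name = ?", (attacker_input,)
    ).fetchall()
    print("parameterized query leaks:", safe)    # []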


Empirical/dynamic testing for vulnerabilities is not rigorous or complete in any way, and is only viable for a relatively small subset of issues.

I'm only suggesting that holding programmers liable for security vulnerabilities has no real precedent in any other engineering discipline. That's not to say there isn't tons of shitty software being shipped with reckless disregard for quality, and some reckoning there might be useful.


We don't expect buildings to hold up to physical attacks. Most would crumble immediately. Just like most hardware doesn't catastrophically fail through normal use.

Computers are one of the only things from which we expect a flawless defense against malice.


Today, when computers are connected to the whole world, they have to be built safe. The analogy would be if everybody in the whole world could access the building's front door. In such a scenario, the front door would, of course, be built very heavy.


This is true, but it still does not validate the OP's comparison between buildings and software. Software faces a much, much more difficult task.


I agree. Standards or requirements should be enforced, and if there's a failure in a company's infrastructure, a panel should investigate whether it was due to negligence or non-compliance with the standards. Also, insurance should be mandatory. If a given online platform were a physical piece of infrastructure like a school or a bridge, the current state of affairs would cause an uproar.


> Make them pay billions in damages every time somebody gets hacked because of their negligence.

The downside is that companies will lock down their hardware even more out of fear of getting sued. It's utterly amazing this person managed to get custom firmware executing on the WiFi chip... stuff like Intel's or AMD's microcode is digitally signed (and IIRC also encrypted) instead of using a plain old XOR checksum, and I'd argue the world is a lot worse off as a result.
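
For anyone wondering why a plain XOR checksum is no integrity protection at all, a quick sketch (illustrative only, not the actual chip's firmware format): after tampering, one spare byte can be fixed up so the checksum still matches.

    # Why an XOR checksum is not integrity protection (illustrative).
    from functools import reduce

    def xor_checksum(data: bytes) -> int:
        return reduce(lambda a, b: a ^ b, data, 0)

    firmware = bytearray(b"original firmware blob")
    original = xor_checksum(firmware)

    firmware[0:8] = b"evilcode"                         # tamper
    firmware[-1] ^= xor_checksum(firmware) ^ original   # fix checksum up

    assert xor_checksum(firmware) == original           # still "valid"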


This natural desire to cut corners by locking down devices will be mitigated by right to repair laws.

With that said, it always irks me when someone suggests regulation as a solution to misconduct by large corporations and someone chimes in "But they'll just misbehave in some other way."

If the entity that is misbehaving has changed the way that they're misbehaving in response to your regulation that means that your regulations worked and that you merely need to continue regulating the offender.


Right to repair will not allow us to flash custom firmware on wifi cards. At best it will allow us to buy a new wifi card for when the old one breaks.


Just make it illegal to lock down hardware as well. Now they can't cut corners anymore and will have to actually develop good software.

Just keep making their "clever" workarounds illegal every single time until the desired outcome is achieved. They should have literally no choice other than to make a good product.


> Just make it illegal to lock down hardware as well. Now they can't cut corners anymore and will have to actually develop good software.

Sadly that one won't ever happen, the copyright mafia will do everything they can to prevent that. Just look at how Netflix is locking down people on rooted devices.


> Sadly that one won't ever happen, the copyright mafia will do everything they can to prevent that.

We really need to abolish copyright as well. It's the 21st century, copying is trivial.

> Just look at how Netflix is locking down people on rooted devices.

I had no idea. Please elaborate.


If it can detect rooting, either by the presence of common rooting apps or by SafetyNet attestation, you only get the lowest video quality.


Imagine having your cryptocurrency wallet’s private key exfiltrated in this way.

Hell, it wouldn’t surprise me if a few less than ethical NSA hackers are doing exactly that in their spare time.


You should never type your seed phrase on any computer. Hardware wallets will give you a randomized keymap so you can recover a seed phrase without using any of the real letters.
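
The general idea, sketched below with PIN digits for brevity (illustrative, not any vendor's actual protocol): the scrambled layout exists only on the device's trusted display, so the host, and any keylogger on it, sees only meaningless positions.

    # Randomized-keypad entry, Trezor-style (illustrative).
    import secrets

    digits = list("123456789")
    secrets.SystemRandom().shuffle(digits)  # layout known only to the device

    # The device's own screen shows the scrambled 3x3 grid. The user
    # clicks positions 0..8 on the untrusted host; a keylogger sees
    # only those positions, never the digits they map to.
    positions_typed_on_host = [4, 0, 2]             # what a keylogger sees
    pin_received_by_device = [digits[p] for p in positions_typed_on_host]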


I read that an NSA team was installing their internal modules on Cisco routers on paid time.


This isn’t a “crappy product”, unless you consider every product that allows users to provide their own firmware crappy.


Open source stuff would disappear.


Open source is the only stuff that doesn't reliably leave gaping holes lying around for years, because anyone can pay to have them fixed.


That's simply false. I really love open source, but to even imply that it is not a ticking security time bomb is just naive.


Why? There's a world of difference between a rich corporation marketing and selling products to consumers and developers publishing some code on the web. It is obvious to me that society expects a lot more from the former.


I disagree. FOSS and commercially licensed (and sold) software with EULAs are two very different things, and they can be distinguished in whatever legal language implements these theoretical liabilities.


Open source stuff is typically distributed with a massive all-caps warranty disclaimer. If it breaks, you get to keep both pieces.


"... and I can't see how the situation can be improved, at least for quite some time."

I can. But no one is going to listen to either of us, so what does it matter.

Here is how I would start to improve the situation.

Disconnect untrusted computers from the internet.

In other words, the only computer allowed to access the internet directly is one that has all the properties desired for adequate security. Those properties could be things like the hardware being repairable, having an open BIOS, and the bootloader and OS being open source and easily compilable from source by the user. Call this computer a "gateway" if you like, or call it a "firewall", or call it whatever you want. The essential point is that it is the one computer you believe you can best understand and control.
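
A toy sketch of the idea (hypothetical addresses; a real deployment would use a proper proxy like Squid plus routing rules, not this toy forwarder):

    # Toy "gateway": internal machines have no route to the internet
    # and must go through this one trusted host.
    import socket
    import threading

    LISTEN = ("192.168.1.1", 8080)     # gateway's LAN-facing address
    UPSTREAM = ("203.0.113.10", 80)    # the one upstream we allow

    def pump(src: socket.socket, dst: socket.socket) -> None:
        # Copy bytes one way until the source side closes.
        while data := src.recv(4096):
            dst.sendall(data)
        dst.shutdown(socket.SHUT_WR)

    def handle(client: socket.socket) -> None:
        upstream = socket.create_connection(UPSTREAM)
        threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

    server = socket.create_server(LISTEN)
    while True:
        conn, _addr = server.accept()
        handle(conn)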

I would be willing to bet any amount of money that just disconnecting all Windows computers from the internet, i.e., no direct connection, would result in a dramatic drop in security problems.

Keyloggers are not very useful on a mass scale if they cannot transfer the keystrokes over the internet.

There was a time when not all computers had unfettered direct access to the internet. They worked just fine. Maybe even better than ones today that are incessantly trying to connect to some server.


> Disconnect untrusted computers from the internet.

Disconnect billions of devices? How would you even enforce this?

> I would be willing to bet any amount of money that just disconnecting all Windows computers from the internet, i.e., no direct connection, would result in a dramatic drop in security problems.

Not really sure I understand your point here. Why stop at just Windows? If we remove all computers from the internet we would be so much more secure.

I'm also not sure why you single out Windows when even the blog post demonstrates this keylogger on Linux (which is open source).

> There was a time when not all computers had unfettered direct access to the internet. They worked just fine. Maybe even better than ones today that are incessantly trying to connect to some server.

I hope this is a troll rather than someone honestly believing such a statement. You're claiming disconnected computers are perhaps better... while writing on one connected to the internet.


"disconnecting all Windows computers from the internet, i.e., no direct connection, would result in a dramatic drop in security problems."

And jailing the entire population of a country would reduce car accidents!


> while writing on one connected to the internet.

I think you're missing part of their point (which isn't super clear). You can still surf on such a computer by going through an HTTP proxy on the same LAN (the "gateway" they're talking about, or a bastion host).

They could very much be writing that comment on such a machine.


This is the idea.

Amazing how people can (mis)interpret (unclear) comments as if they were crystal clear. They make assumptions. They read things in that are not there. It is truly entertaining. I never mentioned Linux. I never mentioned "desktop". Nor did I suggest Windows users would not be able to access the internet. Nor did I suggest the computer with IP forwarding enabled (call it what you like) needs to do everything a "firewall" does.

Indeed, I am writing this comment on such a computer, one that runs a proxy for all the other computers. That's only because I like to experiment with different proxy configs.


So they are saying to use a firewall...


More specifically, that the host should not route to public IP space, but should use a proxy for any outbound connection (and a load balancer/reverse proxy for anything incoming).

Every org is different, of course, but in general I agree that this should be a more common pattern.


Most popular Linux distros are in many ways more vulnerable than Windows; Microsoft at least employs actual security engineers for Windows. To give one example of the Linux side, X11 is still in wide use.

The secure Linux distros are all of the locked down kind, like Chrome OS and Android.

The reason we aren't seeing widespread desktop Linux malware campaigns is that almost nobody uses desktop Linux. The year of the Linux desktop, whenever it arrives, will be followed by the year of the Linux desktop malware.

I love open source and free software, but it's not inherently more secure.


You might be surprised to find that Windows has a similar problem to X11: any GUI window can send messages to any other GUI window, and on consumer versions of Windows you can't use multiple simultaneous user sessions to separate different applications.

https://invisiblethingslab.com/resources/2014/A%20crack%20on...

It's true that Windows has way more mitigation technologies and X11 is more laissez-faire. But if you don't run untrusted or crappy software, Linux can be pretty damn good. You just don't really know how crappy all the components of Windows are, or which mitigations are actually useful or working correctly; in software with that much complexity there's always some stuff that isn't really working and nobody notices. On Linux, if you really care, you can trim down to a very minimal, curated setup, and that's really the only way to know what's going on in your computer.
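
As a small illustration of how laissez-faire X11 is (assuming the python-xlib package and a running X session): any unprivileged client can walk every other client's window tree.

    # Any X11 client can enumerate every other client's windows.
    from Xlib import display

    d = display.Display()

    def walk(window, depth=0):
        try:
            name = window.get_wm_name()
        except Exception:
            name = None
        if name:
            print("  " * depth + str(name))
        for child in window.query_tree().children:
            walk(child, depth + 1)

    walk(d.screen().root)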


Linux is still under 2% of the market share for anyone wondering.


Market share for desktops of course ;)


> I would be willing to bet any amount of money that just disconnecting all Windows computers from the internet, i.e., no direct connection, would result in a dramatic drop in security problems

Given how many security problems come from the internet, I imagine there would be a dramatic drop in security problems if any platform with that much popularity and usage were cut off from contacting other machines over the internet, even if the platform had above-average security.


Although there is a reason I said Windows and not some OS that can be trimmed down and compiled from source by the user (not necessarily GNU/Linux; there are others), it does not matter so much what the "platform" (I presume you mean OS) is, nor how popular it is. The idea is that fewer of each user's computers, no matter what kernel they run, would be able to directly contact other computers over the internet.


Bluetooth is a dumpster fire of vulnerabilities already.

Where I live, it's not too hard to wardrive around the block playing funny noises into random people's bluetooth speakers and headphones inside their homes.

Don't use Bluetooth keyboards, period.


I think that’s a feature, not a vulnerability:


> and I can’t see how the situation can be improved, at least for quite some time.

It can be improved if we support companies selling computers with FLOSS firmware and disabling Intel ME.



