Hacker News | Kluggy's comments

So this just reads /sys/class/power_supply/BAT0/energy_now (or charge_now) and /sys/class/power_supply/BAT0/energy_full (or charge_full) and outputs it nicely. While the asm is cool, how large would the Bash script to do the same be?


This project started as a rewrite of a Bash script (for learning purposes), so I can easily answer this question. It's indeed small. Here's the full Bash script I used to use (including comments):

    #!/usr/bin/env bash

    # Print the remaining amount of energy (or charge)
    # stored in the battery identified by `BAT0`.  This
    # is what works on my ThinkPad X220 and may not be
    # portable to other laptops without changes.

    cd /sys/class/power_supply/BAT0 || exit $?

    # Sometimes there are `energy_now` and `energy_full`
    # pseudofiles, and sometimes there are `charge_now`
    # and `charge_full` pseudofiles instead.
    if [[ -e energy_now ]]; then
        now=$(<energy_now)
        full=$(<energy_full)
        unit=Wh
    elif [[ -e charge_now ]]; then
        now=$(<charge_now)
        full=$(<charge_full)
        unit=Ah
    else
        echo 'BAT0: no energy_now or charge_now pseudofile' >&2
        exit 1
    fi
    percent=$((100 * now / full))
    # Convert from microwatt-hours (or microampere-hours)
    # to watt-hours (or ampere-hours).
    now=$(bc <<< "scale=1; $now / 1000000")
    full=$(bc <<< "scale=1; $full / 1000000")
    echo "$now $unit / $full $unit (${percent}%)"


I enjoyed it. I found it way easier to follow than the books, and that was a positive for me.


Louis Rossmann did a video on it that already has 107k views: https://www.youtube.com/watch?v=2cHDyq5hRk4

I would bet dollars to donuts that they lost a lot more than the replacement cost would have been.


On the other hand, this design decision was so stupid that they're likely to kill the hardware of a lot of other people too, so obviously the lawyers would want to... uh... help destroy the company's reputation.

Here's a question: If you're going to change the pinout of the connector, which is proprietary anyway, why is the connector not keyed for the pinout it's supposed to use so you can't plug in the wrong cable?


> why is the connector not keyed for the pinout it's supposed to use

Because it's cheaper to reuse the connectors they already have rather than to make a new one. /s

EDIT: Added a /s just to be on the safe side.


I really wonder if this was worth the few thousand dollars they likely saved. You have two connectors (one on the PSU, one on the modular cable) that cost what, a cent apiece...

And it is not like they could not be used in some other model later...


We'll see about that after the lawyers get through with them.


Obviously I agree with your sentiment that this was a foolish and, in the end, not-actually-cheaper solution; I'm just saying that I think that was the argument used by the people responsible for the change.

It's insane that PSU manufacturers still haven't standardised the PSU end of modular PSUs to this day. Or at the very least, standardised within one manufacturer's offerings.

But I suppose I know the reason: it's more profitable if people can't reuse cables but have to get new ones every time they get a new PSU...


And motherboards still don't have a standard front-panel connector after 35-plus years, even though it'd be easy and would save a lot of frustration everywhere.


I would argue that this is much less of an issue, especially given there are like 4 things max you wanna plug in anyway: power button, power LED, maybe an activity LED, and maybe a speaker. And you only do this once per PC, and if you fuck up you're not breaking anything.


It's difficult, but not impossible, to do the recovery yourself. Swap the platters between two drives and clone the data to a fresh drive. I've done it on some 2 TB drives way back when.
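The cloning half of that is the easy part. A minimal sketch with plain dd, demonstrated on throwaway image files (for a real recovery you'd point it at /dev/sdX block devices, and GNU ddrescue is the better tool since it retries and maps bad sectors):

```shell
# Demo on throwaway image files; with a real drive you'd point
# if=/of= at block devices, after triple-checking names with lsblk.
printf 'important data' > source.img

# Plain dd stops on the first read error; conv=noerror,sync would
# skip bad blocks (padding with zeros), which is exactly why
# ddrescue is preferred for failing drives.
dd if=source.img of=clone.img status=none

# Verify the clone is bit-identical.
cmp -s source.img clone.img && echo "clone OK"
```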


Moving the platters is the stupidest thing to do first. The first thing you try is swapping the PCB from an OK drive onto the not-OK drive.


I've done this, but on IDE drives, over a decade ago. I wonder if modern SATA drives store their encryption keys on the platter or on the PCB. AFAIK even drives where the data is accessible on boot have encryption, just with a key that isn't password protected; it's a convenient way to turn on encryption later, since you only have to encrypt the key instead of overwriting the whole disk...


If you have self-encrypting disks you are screwed anyway, and they are more common today than ever. There are some with the "nice" feature of resetting the initial key to the same value every time, though.


A lot (but not all) of modern drives past the 8 TB mark are helium-filled. They are airtight.


With sufficient time and motivation, that's not really a barrier. ;)


Even if you get it open, you'd better have a plan to refill it, because the helium is needed for the heads to float above the platters properly.


Of course, that's a given. ;)


Does it have an air inlet??


As far as I'm aware, you can't do this on modern HDDs because of the unique mapping/calibration data stored on a chip on the PCB.


I would check for a fuse on the PCB first.


> EVGA also encouraged the Reddit poster to make a warranty claim from the drive manufacturer(s) themselves to get his storage drives replaced.

Wow, the drive manufacturer's warranty most certainly wouldn't apply. There was no fault with the drives at all.

Very poor look for EVGA.


If they actually break up apple and google, it’s going to cause so much chaos that it might actually kickstart the venture capital markets into high gear again.


This would almost surely be a net positive for the world. Tech is way too consolidated nowadays.


No. The attacker needs to load a program onto the device and then run it against the target program to extract keys. This doesn't work with anything in the Secure Enclave, and things like iCloud don't expose an endpoint for the attacker program to send data to. Plus you can't load a program on the device if it's locked.


In this case, this is a new feature in a new app. They didn't change anything. The setting didn't exist before, nor did the app.


They auto-installed the app on everyone's phone, and auto-enabled this feature on everyone's phone. The feature sends my location to someone else when they are nearby and using the app. So even though I deleted the app, other people nearby could still receive information about me because I haven't disabled the default toggle I never opted into in the first place.

It's almost suspiciously malicious that they auto-enabled this feature and auto-installed the app for everyone at the same time.


Installing a new app onto a phone absolutely changes the phone. This app was packaged in an OS update.


It’s a total non issue for the majority of folks. It requires local access and takes hours under very specific conditions that don’t apply to most people. How often do you run a server that will run arbitrary crypto operations on attacker controlled inputs?

Plus all the secrets in the Secure Enclave are immune to this attack, so your FileVault keys and your Apple Pay cards and all that jazz are completely safe.

It sucks that it exists, and crypto libraries that run on the platform outside of the Secure Enclave will get slightly slower, but no one will notice.


> It’s a total non issue for the majority of folks

People said the _exact_ same thing about Spectre/Meltdown. Then the JS PoCs came out.


Isn't the lesson here that scripting in the browser needs to die. Letting untrusted code run on your computer is always a bad idea, no matter how much you try to sandbox it.


I would also love to see the API surface of the browser come way down.

If people knew just how widespread and effective browser fingerprinting is they would be shocked. It's Cambridge Analytica on steroids.


Yes, and now browsers have mitigations which make timing attacks harder. This bug also has a key dependency on being able to trigger a crypto operation in a local process, which isn’t easy to do from a browser sandbox or in general on a Mac.

The angle I’d worry about is something like a password manager, but most of those already have an authentication step and I’d be surprised if they didn’t have rate-limiting.


The Spectre/Meltdown mitigations have caused me more grief than the problems themselves to this day.

These vulnerabilities definitely exist, that much is a matter of fact. But whether it's something someone should consider in their threat model is a different matter.


>It requires local access

What does this mean? All I read is that it needs access to user space. Wouldn't any web browser be enough?


Isn't that the entire point of the secure enclave[1]?

https://support.apple.com/guide/security/secure-enclave-sec5...


The secure enclave is not a general-purpose/user-programmable processor. It only runs Apple-signed code, and access is only exposed via the Keychain APIs, which only support a very limited set of cryptographic operations.

Presumably latency for any operation is also many orders of magnitude higher than in-thread crypto, so that just doesn't work for many applications.


If you look at the CryptoKit API docs, the Secure Enclave essentially only supports P-256, which is maybe why they didn't include ECC crypto in the examples.

