Hacker News | stefan_'s comments

Qualcomm is a US company, right? I've worked on a few WiFi router devices and their chips are pretty popular in that segment. But WiFi is not a priority for Qualcomm (in fact they actively sabotage it for their more profitable 5G segment), and software is even less of a priority. So you had "parsing 802.11 TLVs in the kernel with obvious stack overflows" quality code drops.

(Which is why it's a bit ironic I saw the Google Fiber guy post on X about how they always had TPM™ "security" in their routers; that's cool, but the drivers you used still made them "general purpose computing over the air" devices)
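
The bug class being described can be sketched in a few lines. This is a hedged illustration with made-up frame bytes, not Qualcomm's actual driver code: 802.11 information elements are type/length/value triples, and the whole vulnerability is one missing comparison between the attacker-supplied length byte and the bytes actually left in the frame.

```python
# Illustrative sketch of the TLV bug class, not real driver code.
# 802.11 elements are laid out as: type (1 byte), length (1 byte), value.

def parse_tlvs(frame: bytes) -> list[tuple[int, bytes]]:
    """Walk a TLV-encoded buffer, refusing lengths that overrun it."""
    elements = []
    i = 0
    while i + 2 <= len(frame):
        elem_type, length = frame[i], frame[i + 1]
        i += 2
        if i + length > len(frame):  # the check the buggy drivers skip
            raise ValueError(
                f"TLV type {elem_type} claims {length} bytes past end of frame")
        elements.append((elem_type, frame[i:i + length]))
        i += length
    return elements

# Well-formed: an SSID element (type 0) followed by a vendor element (221).
parse_tlvs(b"\x00\x04abcd\xdd\x02\x01\x02")
# A malicious frame claiming 255 bytes in a 3-byte buffer raises here;
# in C, the missing check means a memcpy() of attacker data over the stack.
```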


Doesn't matter where they're headquartered if they use foreign-made components. I don't think there's a robust enough supply chain of domestic materials available (nor cheap enough labor) to feasibly stop using foreign-made components.

Checkout v4 of course, released in August 2025, which already now pollutes my CI status with garbage warnings about some deprecated Node version I couldn't care less about. I swear half the problems of GitHub are because half that organization has some braindead obsession with upgrading everything everywhere all the time, delivering such great early slop experiments as "dependabot".

So Safari doesn't work, Firefox doesn't work. It's professional video editing, right in the ~~browser~~ Chrome window.

What is the problem with targeting the most prevalent rendering engine?

You seem pretty young, honestly. You likely don't remember a time when websites displayed a message "Only works in IE", or "Only works in Netscape". It was not a good time for the web.

Yeah, but now with AI. Because the shtick of this blog is "everything is ~~computer~~ AI".

Ok, bear with me here.

Theory 1: the internet has been fully strip mined for all content and is now dead. See that graph of StackOverflow questions dropping off a cliff to zero. Nothing much worthwhile is being added.

Theory 2: they are all unethical as fuck and definitely learning off your data. You would be insane not to - theory 1 means all your free training data is gone, but all that corporate data is fresh, ripe and covers many domains that the amateurs on the internet never filled. You have to launder it some way of course, but it's definitely happening.

Theory 3: winner takes all. I don't care for "Claude" and your wishy-washy ethics performance. ChudAI has a better model and harness? I'm gone this evening.

Having all the users, even if they are exploiting you for cheap compute with their own harness, is essential.


Good theory and insight. Seems like that's setting us up for some epic big co vs AI co legal battles for covertly training off sensitive and internal big co data.

They are being downvoted because Nuland is an utterly insignificant diplomat, but serves well as a dog whistle for people who subscribe to the belief that the Maidan revolution in Ukraine was really some Obama-organized coup. This is a story peddled by the Russian government, which of course is where Yanukovych promptly fled after having protestors shot. At the same time Russia was busy staging troops and materiel for the actual coup they were planning.

Yes, insignificant enough to remark "Yats is our guy" (during an intercepted but confirmed-authentic conversation with another US diplomat that Russia subsequently leaked), only for "Yats" (Arseniy Yatsenyuk) to subsequently become the Ukrainian PM. Coincidence, I'm sure...

For a company that tries exclusively to sell to people who are very far removed from the use (government), yet who have onerous reporting standards for all spending (government), there sure is very little independent reporting on the efficacy of whatever it is they are even selling. Even the contract with the NHS was heavily censored. So frankly I oppose it on that ground alone.

Good reminder that the Raspberry Pis only have good software support if you stick to whatever the foundation is releasing. Because that same foundation has stayed obsessed with their weird custom ways of doing things, instead of furthering efforts like UEFI on ARM. Some of it is insultingly stupid - like for revD of the 5, you better now update the magic boot partition of your RPi with the device tree overlay for revD, because it will use the old device tree, but also expect the overlay to be there so it can actually work. To say the least, that is never what overlays were supposed to be for.

> custom ways of doing things, instead of furthering efforts like UEFI on ARM.

I thought uBoot was more or less the standard way of booting embedded Linux? Is it really worth bringing the entire UEFI environment, which is basically a mini OS, to such devices? Embedded devices are often designed to handle power loss or even be unplugged by users, so the boot up process is generally as lean as possible.


U-Boot nowadays speaks UEFI :) (and so does LK)

New Android devices all use a UEFI bootloader: https://source.android.com/docs/core/architecture/bootloader...


SecureBoot might be more useful than UEFI on SBC like Pi.

The grub EFI shim is signed, but does it or doesn't it verify kernel image, initrd, and module (and, IDK, optionally drive and CPU and RAM hw) signatures?

mokutil does module signature key enrollment. Kernel modules must be signed with a key enrolled in the BIOS otherwise they won't be loaded.

To implement SecureBoot without UEFI would be to develop an alternate bootloader verification system.

But what does grub or uboot or p-boot do after the signed grub shim is verified?


mokutil and these commands don't work without UEFI:

  mokutil --sb-state
  mokutil --help
  mokutil --import key.der
  mokutil --list-new
  reboot

  efibootmgr
  efivar

  fwupd
  fwupdtool
  fwupdmgr get-updates && \
  fwupdmgr update

  tree /sys/firmware/efi

  systemctl reboot --firmware-setup

Note that UEFI doesn't mean supporting most of those.

UEFI without runtime UEFI variable writes is a thing, and that configuration is incompatible with mokutil.
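
A quick way to see which of those configurations you're in from Linux, sketched in Python. This is an illustrative helper, not a standard tool; it just checks for the UEFI sysfs tree and the efivarfs mount options:

```python
# Illustrative helper: did we boot via UEFI, and are runtime variable
# writes plausible (efivarfs present and mounted read-write)?
from pathlib import Path

def efi_status() -> str:
    if not Path("/sys/firmware/efi").exists():
        return "legacy BIOS boot (no UEFI runtime services exposed)"
    for line in Path("/proc/mounts").read_text().splitlines():
        _dev, mnt, fstype, opts, *_ = line.split()
        if fstype == "efivarfs":
            mode = "read-write" if "rw" in opts.split(",") else "read-only"
            return f"UEFI boot, efivars mounted {mode} at {mnt}"
    return "UEFI boot, but efivarfs not mounted"

print(efi_status())
```

On a board that exposes UEFI but blocks runtime variable writes, tools like efibootmgr and mokutil fail even though /sys/firmware/efi exists, which is exactly the mokutil-incompatible configuration mentioned above.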


FWIU,

There is no SecureBoot without UEFI.

UEFI without SecureBoot does have advantages over legacy BIOS with DOS MBR.

> UEFI without runtime UEFI variable writes is a thing

Which vendors already support this?

Do any BIOS - e.g. coreboot - support disabling online writes to EFI? (with e.g. efibootmgr or efivar or /sys/firmware/efi)

One of the initial use cases for SecureBoot is preventing MBR malware.

Would there be security value in adding checksums or signatures as args to each boot entry in grub.cfg, for each kernel image and initial ramdisk?

Unless /boot is encrypted, it's possible for malware to overwrite grub.cfg to just omit signatures for example.
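
A minimal sketch of what that checksum-in-config idea amounts to (hypothetical paths, Python for brevity): pin a digest per artifact, and note that it buys nothing unless the manifest itself sits inside the verified set, e.g. via GRUB's detached-signature checking of the files it loads, grub.cfg included.

```python
# Sketch of pinning a sha256 per boot artifact in a manifest. Paths are
# hypothetical. This only helps if the manifest itself is verified
# (GRUB can be configured to check detached GPG signatures on the files
# it loads, including grub.cfg); otherwise malware rewrites both the
# file and its digest, which is the circularity pointed out above.
import hashlib

def verify_manifest(manifest: dict[str, str]) -> list[str]:
    """Return the artifacts whose on-disk sha256 no longer matches."""
    tampered = []
    for path, want in manifest.items():
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest != want:
            tampered.append(path)
    return tampered
```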


> Which vendors already support this?

One implementation I've seen in the wild is: https://docs.nvidia.com/jetson/archives/r36.4/DeveloperGuide...

Secure Boot is still supported in that configuration, but with PK/db/dbx being part of the firmware configuration and updating them requiring a UEFI capsule update.


Looks like UKIs include the initrd in what EFI checks the signature of.

Add signature checking for grub.cfg (instead of just the EFI shim) but that requires enrolling a local key

Add initrd signatures to grub.cfg


This is exactly why I've replaced my home server with a low-power x86 NUC instead. No custom build needed to run NixOS, and idle power consumption turns out to be slightly lower than the Raspberry Pi 5's.

Idle consumption is truly horrid on the Pi 5, even with all the hacks and turning absolutely everything off and hobbling the SoC to 500 MHz it's impossible to get it under 2W. I'm convinced that the Pi Foundation doesn't think battery powered applications are like, a thing that physically exists.

Allow me to ask: which NUC are you using?

I’m using an ASUS NUC 14 Essential Kit N355. It’s a bit more expensive than the Pi 5, but also more powerful (8 cores and decent GPU). There is also a more affordable N150 model. And even lower budget are the N150 mini PCs from Chinese manufacturers, but they often mess up things like cooling in a hardware revision (compared to the favorable review that you’d read).

And forgot to mention this before: Intel CPUs with built-in GPUs have very performant and energy efficient hardware video codecs, whereas the Raspberry Pi 5 is limited and lacks software support.


And what is the idle power draw that you're seeing on the NUC? Out of the box or did you have to mess around with BIOS and powertop?

I get 3-5W, mostly 4W, on my N100 NUC. WiFi disabled through the BIOS, and I ran powertop and made the suggested changes. 1 stick of 16 GiB LPDDR5, 1 NVMe SSD, 1 4TB SATA SSD. Under full CPU load, usage goes up to 8-12W. When the GPU is also busy with encoding, consumption grows to 20-24W. This is with turbo clock enabled. With it disabled, power draw stays around 4W, but it was annoyingly slow, so I enabled turbo again and am content with the odd power peak.

I'm seeing 4-4.5 Watt idle. I've disabled WiFi in the BIOS (using wired Ethernet) and ran `powertop --auto-tune`, but not much else.

I am not the OP, but I got a $150 (at the time) fanless quad core Celeron box on AliExpress about 5 years ago, and it just runs with zero problems with openmediavault and Docker containers. Attached is an external HDD over USB 3, it's still fast enough (and the HDD is the bottleneck, not the USB interface).

Few months ago it was possible to get Intel N100 (i5-6400 performance at much lower power) based mini PC with 8GB RAM and 256GB SSD for 100-120 USD on sale. Unfortunately, 'rampocalypse' happened.

I wonder if I can run this on a 2 year old celeron laptop

You can run this on a 10 year old celeron laptop.

Could these choices have anything to do with the alleged focus on the Compute Module and less focus on the "normal" Raspberry? Does anyone know?

Not really, it has been like that since day 1. It has more to do with the weird architecture of the BCM chips they use.

When your SoC is a GPU with CPU cores tacked on, it's a bit weird to boot things up.

[flagged]


It is acutely on point. The only reason people have to put in work again and again to fix distributions like Fedora for Raspberry Pi models is because the foundation pulls stunts like that revD. Right now, you can take Buildroot at git master, build an RPi image and have it randomly not work on one of two what looks like identical RPi 5 boards. That's bad, and there is no reason for it.

And you would solve this how?

Your comment only serves to illustrate exactly why big companies like BRCM are not seeing the case the way you do. Apple, if you want to start naming names, puts out hardware that is far more closed than the Raspberry Pi foundation's, and yet you don't see the same level of aggression against Apple. What you do see is a couple of very talented hackers who won't take 'you can't' for an answer and who will RE stuff until they know enough to scratch their itch.

That's the way you solve these problems, not by writing take-downs.

Not having UEFI on ARM has never held me back. I do have a nice Apple laptop lying around here that is unusable because the network drivers need a functioning copy of Apple's OS on that machine to get bootstrapped. Rather than bitching at Apple about it I just stopped using and buying their products.


Apple doesn't pretend to be open.

Apple can afford to spend as much as they want on this and they are in control, they're as vertically integrated as it gets. Heck, they could divert some of their developer toll to this.

The Raspberry Pi foundation is emphatically not in control of Broadcom, and in spite of their success still has limited resources and needs to work with what they've got and to prioritize.


> Apple, if you want to start naming names, puts out hardware that is far more closed than the Raspberry Pi foundation's, and yet you don't see the same level of aggression against Apple.

Ooooh of course, I 'member the days right here when they announced they'd drop Intel. And I am fairly certain the echo across the tech blogosphere was what led them, while not openly announcing they'd support a competing OS like they did with Boot Camp, to at least not lock down the bootloader like on iOS devices.

> What you do see is a couple of very talented hackers that won't take 'you can't' for an answer and that will RE stuff until they know enough to scratch their itch.

Apple, to my knowledge, never explicitly said "you can't" - at least not on Mac devices, for iOS the situation is different. All they're saying is "we won't help you, but you may try your best".

> Not having UEFI on ARM has never held me back.

The thing is the lack of UEFI adoption in the ARM sphere is holding everyone back! An OS / distribution shouldn't have to manage devicetree overlays on its own, they should be provided by the BIOS/UEFI management layer as a finished component.

RPi is the biggest toppest dog in the embedded world, at least when it comes to an ecosystem. They would have all the muscle needed to force everyone else's hand.

> I do have a nice Apple laptop lying around here that is unusable because the network drivers need a functioning copy of Apple's OS on that machine to get bootstrapped.

What did you do to that thing? On any pre-ARM machine, the bare bootloader should always, even if the primary storage is gone, be able to bring up enough hardware to support a UI and a USB and networking stack to allow restoring it from the Internet. ARM machines I'm not sure, I haven't had the misfortune of having to dig down that deep, but I think even they should be able to do that in case you somehow manage to fry your partition table. And even if you managed to fry that, any other Apple device should be able to do a DFU restore on its lowest level bootloader.


Agreed that the UEFI thing could be better, but I don't see how you could compel Raspberry Pi to fix it without knowing the exact details of the license agreement that the foundation signed with Broadcom, and I suspect that that more than anything is what is holding this back. It's not as if they're deaf or can't read at the Raspberry Pi foundation.

As for that machine: it's got a bunch of stuff on it and I have dongle with ethernet so I can live without it. It's one of the last line of Intel portables they made and there just aren't enough people that want this fixed and I'm not smart enough to fix it.

Meanwhile, and probably ironically, that too is a Broadcom chip...


Very sorry, but people are allowed to have opinions and to express them. If the opinions upset you, then don't read them - by your logic anyway.

Atlassian hasn't made money in 10 years. Of course they can't ride on the latest stock slop meme, that company is such an unmitigated disaster it beats even their terrible software. And now they keep spamming me with that Rovo garbage, god I hope they go down among all of this.

bzip and gzip are both horrible, terribly slow. Wherever I see "gz" or "bz" I immediately rip that nonsense out for zstd. There is such a thing as a right choice, and zstd is it every time.

> Wherever I see "gz" or "bz"

That should not happen too often, considering that IIRC bzip lasted only a couple of months before being replaced by bzip2.


lz4 can still be the right choice when decompression speed matters. It's almost twice as fast at decompression with similar compression ratios to zstd's fast setting.

https://github.com/facebook/zstd?tab=readme-ov-file#benchmar...


pigz is damn fast at compressing. Also, a VAX with NetBSD can run gzip. So there it is. Go try these new fancy formats on a VAX, I dare you.

And, yes, I prefer LZMA over the obsolete bzip2 any day, but GZIP is like the ZIP of free formats, modulo packaging, which is the job of TAR.
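
For a rough feel, the stdlib one-shot APIs make the comparison easy to run (zstd and lz4 live in third-party modules like `zstandard` and `lz4`, so this sketch sticks to gzip/bzip2/LZMA):

```python
# Unscientific size comparison of the stdlib codecs on repetitive sample
# data. zstd and lz4 are omitted: they need third-party modules.
import bz2, gzip, lzma

data = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n" * 2000

for name, compress in (("gzip", gzip.compress),
                       ("bzip2", bz2.compress),
                       ("lzma", lzma.compress)):
    print(f"{name:>5}: {len(compress(data)):6d} bytes from {len(data)}")
```

On toy data like this all three crush the input; the interesting differences (LZMA's ratio edge, its much slower compression) only show up on real corpora, which is also where zstd's speed/ratio trade-off wins.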

