It sounds like the PoE spec was designed before the arrival of “IoT”-type things like the ESP32, Raspberry Pi, etc.
How much of the complexity is a “fundamental electrical engineering problem” and how much of it is just a spec written to solve a different set of problems?
Almost all of the complexity of PoE is fundamental. To get enough power over 100m of ethernet cable (10x longer than USB) you have to run at much higher voltages like 48V. The same has eventually come to USB: for USB-C PD to reach 240W, it also has to use 48V.
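A back-of-envelope calculation shows why the higher voltage is unavoidable. This sketch assumes the 802.3af worst-case loop resistance of 12.5 Ω for 100m of cable (two pairs in parallel), and approximates current as delivered power over supply voltage; real cables and PD electronics vary.

```python
LOOP_R = 12.5  # ohms, assumed 802.3af worst-case loop resistance over 100m

def line_loss(power_w: float, voltage: float) -> float:
    """Approximate I^2*R loss in the cable for a given delivered power."""
    current = power_w / voltage
    return current ** 2 * LOOP_R

print(f"48 V: {line_loss(13, 48):.2f} W lost in the cable")
print(f" 5 V: {line_loss(13, 5):.1f} W lost in the cable")
```

Delivering the same 13 W at 5 V would burn more power in the cable than reaches the device, which is why USB-style voltages simply don't work at 100m.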
There have long been lower-voltage "passive PoE" systems which expect a lower always-on voltage on some of the ethernet pairs (usually 12V or 24V, rarely 48V). These can be very easy to implement, so long as your users can handle the setup and the incompatibility with other ethernet devices. In the most extreme case of passive PoE on 100Mb/s ethernet, you simply connect the positive pair to the input of the device and the negative pair to ground, no additional hardware needed.
Subagents are isolated context windows, which means they cannot get polluted as easily with garbage from the main thread. You can have multiple of them running in parallel, each doing their own separate thing in service of whatever your own “brain thread” is focused on. It’s handy because one might be exploring some aspect of what you are working on while another is looking at it from a different perspective.
I think the people doing multiple brain threads at once are doing that because the damn tools are so fucking slow. Give it a little while and I’m sure these things will take significantly less time to generate tokens. So much so that brand new bottlenecks will open up…
It's dogfooding the entire concept of vibe coding and honestly, that is a good thing. Obviously they care about that stuff, but if your ethos is "always vibe code" then a lot of the fixes to it become model & prompting changes to get the thing to act like a better coder / agent / sysadmin / whatever.
> How can people be so naive as to run something like Claude anywhere other than in a strictly locked down sandbox that has no access to anything but the single git repo they are working on (and certainly no creds to push code)?
Because it’s insanely useful when you give it access, that’s why. They can do way more tasks than just write code. They can make changes to the system, set up and configure routers and network gear, probe all the IoT devices on the network, set up DNS, you name it—anything that is text or has a CLI is fair game.
The models absolutely make catastrophic fuckups though and that is why we’ll have to both better train the models and put non-annoying safeguards in front of them.
Running them in isolated computers that are fully air gapped, require approval for all reads and writes, and can only operate inside directories named after colors of the rainbow is not a useful suggestion. I want my cake and I want to eat it too. It’s far too useful to give these tools some real access.
It doesn’t make me naive or stupid to hand the keys over to the robot. I know full well what I’m getting myself into and the possible consequences of my actions. And I have been burned but I keep coming back because these tools keep getting better and they keep doing more and more useful things for me. I’m an early adopter for sure…
Well, one of the other reasons I suggest running it in a strictly limited container is that you can then run it in yolo mode.
In fact, I use the pi agent, which doesn't have command sandboxing and is always in yolo mode. I just run it in a container, and then I get the benefit of not having to confirm every command while strictly controlling what I share with it from the beginning of the session.
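The container setup described above can be sketched roughly like this. This is only an illustration under assumptions: the image name and repo path are placeholders, and the actual pi agent invocation is omitted since its CLI isn't shown here; the point is that the agent only ever sees one mounted directory and has no network.

```shell
# Hypothetical sketch: a throwaway container that shares exactly one repo.
# --network none keeps the agent from reaching anything outside the box;
# the only writable state it can touch is the single mounted directory.
docker run --rm -it --network none \
  -v "$PWD/myrepo:/work" -w /work \
  ubuntu:24.04 bash
# ...then launch the agent in yolo mode inside this shell.
```

With this layout, "yolo mode" is only as dangerous as the contents of that one directory, which is the whole trade the commenter is making.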
And doing it over, and over, and over and over again. Because sure, it didn't change in the last 8 years, but maybe it's changed since yesterday's scrape?
It will mess up eventually. It always does. People need to stop thinking of this as a “security against malicious actor” thing… because thinking in that way blinds you to the actual threat: Claude being helpful and accidentally running a command it shouldn’t. It’s happened to me twice now where it will do something irreversible and also incorrect. It wasn’t a threat actor, it wasn’t a bad guy… it was a very eager, incredibly clever assistant fat fingering something and goofing up. The more power you let them wield, the more chance they’ll cause accidents. But without lots of power, they don’t really do much useful…
It’s actually a hard problem. But it really isn’t “security” in the classic sense…