Guys!!! Important!!! Don't buy or lease an EV now!! Battery breakthrough is coming! Your car will be obsolete trash in two weeks tops! Buy ICE car instead! Stable investment!
It is a slightly weird experience trying to buy an EV as they genuinely do get significantly better very quickly. It's like buying a computer in the 90s or a phone in the 00s.
Ok, but the Rivian R1S is a particularly inefficient EV (2-2.5 mi/kWh = 31-25 kWh/100 km). 12.5 kWh/100 km is efficient but not outlandishly so considering these are likely CLTC ranges, which are higher than WLTP which are higher than EPA, and the car in question is not in fact a dumptruck.
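The conversion, for anyone double-checking the arithmetic:

```python
# mi/kWh -> kWh/100 km: 100 km is ~62.137 miles.
MI_PER_100_KM = 62.137

def mi_per_kwh_to_kwh_per_100km(mi_per_kwh):
    """Convert EV efficiency from mi/kWh to consumption in kWh/100 km."""
    return MI_PER_100_KM / mi_per_kwh

print(round(mi_per_kwh_to_kwh_per_100km(2.0)))  # 31 (kWh/100 km)
print(round(mi_per_kwh_to_kwh_per_100km(2.5)))  # 25 (kWh/100 km)
```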
PoE is also fairly bulky, requires large connectors, and either requires a wholly isolated PD or what's basically a class 2 DC/DC converter. That's why PoE-powered stuff usually has that big transformer cube in it with a lot of clearance, slotted PCB, 2-4 kV capacitors etc.
In practice PoE will have lower efficiency than mains power, since it'll usually involve at least double conversion, often three converters in series, plus the losses in the thin network wires and the relatively high idle losses / poor low-load efficiency of the necessarily over-dimensioned PSE.
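To put rough numbers on it (the stage efficiencies below are illustrative guesses, not measurements):

```python
# Sketch of why a multi-stage PoE chain trails a single mains PSU.
# All per-stage efficiencies here are assumed round numbers.
def chain_efficiency(*stages):
    """End-to-end efficiency of converters/losses in series."""
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

# e.g. PSE AC/DC, cable loss, isolated PD converter, point-of-load DC/DC
poe = chain_efficiency(0.90, 0.97, 0.90, 0.93)
mains = chain_efficiency(0.90, 0.93)  # single AC/DC plus point-of-load
print(f"PoE chain:   {poe:.0%}")    # ~73%
print(f"Mains chain: {mains:.0%}")  # ~84%
```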
Yeah, that too. Windows supports them nowadays too, just to be clear. I think we're still bottlenecked, right now, on #1 and #2 in the form of Java 8 refusing to die.
Yeah, doing the math it's actually only 33 years of not supporting AF_UNIX, but that's not really right either, since those versions of Windows didn't support any sockets. I guess the technically correct answer then is that Windows didn't support UDS for 26 years.
Which is still enough for most portable software to go "eh, localhost is fine*"
* resolving localhost is actually a pretty bad idea (yet very common) and it's way more robust to listen directly on a numeric address.
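A minimal illustration of the robust version (hypothetical server; port 0 just lets the OS pick a free port):

```python
import socket

# Bind directly to the loopback literal instead of resolving "localhost",
# which may map to ::1, 127.0.0.1, or both depending on the resolver/OS.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
host, port = srv.getsockname()
print(f"listening on {host}:{port}")
srv.listen()
srv.close()
```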
It's one of those things that keeps getting reinvented over and over. Most people don't want to put in the effort to perfect it.
For the longest time, I've done the bare minimum and just used console.log/print for whatever I thought I needed in that particular piece of code; without any coherent system.
The Honda e was a massively compromised vehicle due to the tiny ~29 kWh net battery and high energy consumption. It was released in 2020 but in terms of utility it's really much more like an early 2010s EV.
There's a pattern where people create AI-specific infrastructure for coding agents that is essentially instantly obsolete because it's pointless: stuff like most MCPs (instead of just using a CLI), agent-specific instruction files (CLAUDE.md, AGENTS.md, .github/copilot-instructions.md, etc.), and so on.
> You should have a good, concise introduction to the codebase that allows anyone to write and test a simple patch in under 15 minutes.
The MAE (microwave auditory effect) is kind of obvious, because healthy ears are incredibly sensitive. 0 dB SPL translates to attowatts on the eardrum, displacing it just a few pm, with hair cells firing on sub-nm movements (after mechanical amplification). It is completely unsurprising that just the thermal effect of RF being pulsed in the general direction of the head can become audible in the right circumstances.
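Back-of-envelope check on the attowatts figure (assumed round numbers: air impedance ~413 rayl, effective eardrum area ~55 mm²):

```python
# Acoustic power hitting the eardrum at the threshold of hearing.
P_REF = 20e-6         # Pa, RMS reference pressure for 0 dB SPL
RHO_C = 413.0         # rayl, characteristic impedance of air (~20 C, assumed)
EARDRUM_AREA = 55e-6  # m^2, effective eardrum area (assumed)

intensity = P_REF**2 / RHO_C      # W/m^2 at 0 dB SPL (plane-wave approx.)
power = intensity * EARDRUM_AREA  # W delivered to the eardrum
print(f"{power * 1e18:.0f} attowatts")  # on the order of tens of aW
```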
The gzip compression of layers is actually optional in OCI images, but iirc not in legacy Docker images; the two formats are not the same. On SSDs, the overhead of building an index for a tar is not that high if we're primarily talking about large files (so the data/weights/CUDA layers rather than the system layers). The approach from the article is of course still faster, especially for running many minor variations of containers, though I am wondering how common it is for only some parts of the weights to change? I would've assumed that most things you do with weights change about 100% of them when viewed through 1M chunks. The lazy pulling probably has some rather dubious/interesting service latency implications.
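For what it's worth, the tar index is only a few lines — a sketch using Python's tarfile, assuming an uncompressed tar (gzip'd layers would need decompressing first; function names are mine):

```python
import tarfile

# Build a name -> (offset, size) index for a tar so large members can
# later be read with a single seek instead of a linear scan.
def index_tar(path):
    index = {}
    with tarfile.open(path, "r:") as tf:  # "r:" = uncompressed tar only
        for member in tf:
            if member.isfile():
                index[member.name] = (member.offset_data, member.size)
    return index

def read_member(path, index, name):
    """Random-access read of one member via the prebuilt index."""
    offset, size = index[name]
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(size)
```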
The main annoyance imho with gzip here is that it was already slow when the format was new (unless you have Intel QAT and bothered to patch and recompile it into all the Go binaries which handle these, which you do not).
Yeah that’s fair. For weights specifically there often isn’t a huge dedupe win across versions since retraining tends to change most of them. That said, we generally don’t advocate including model weights in container images anyway. The main benefit for us is avoiding the need to pull the full image up front and only fetching the data actually touched during startup. On the latency side, reads happen over a local network with caching and prefetching, so the impact on request latency is typically minimal.