Give https://rcl-lang.org/#intuitive-json-queries a try! It can fill a similar role, but the syntax is close to Python/TypeScript/Rust, so you don’t need an LLM to write the query for you.
RCL (https://github.com/ruuda/rcl) pretty-prints its output by default. Pipe to `rcl e` to pretty-print as RCL (which has slightly lighter key-value syntax, nice if you only want to inspect the data), or to `rcl je` to get JSON output.
It doesn’t align tables like FracturedJson, but it does format values on a single line where possible. The pretty printer is based on Philip Wadler’s classic paper “A Prettier Printer”; the algorithm is quite elegant: any value is formatted wide if it fits the target width, otherwise tall.
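The core rule is easy to sketch. Here is a minimal, simplified illustration in Python (not RCL's actual implementation, which follows Wadler's document algebra more faithfully):

```python
# Sketch of the "wide if it fits, otherwise tall" rule, in the spirit of
# Wadler's "A Prettier Printer". Illustration only, not RCL's real code.

def render(value, indent=0, width=80):
    wide = render_wide(value)
    if indent + len(wide) <= width:
        return wide                      # whole value fits on one line
    return render_tall(value, indent, width)

def render_wide(value):
    if isinstance(value, dict):
        items = ", ".join(f"{k} = {render_wide(v)}" for k, v in value.items())
        return "{ " + items + " }"
    if isinstance(value, list):
        return "[" + ", ".join(render_wide(v) for v in value) + "]"
    return repr(value)

def render_tall(value, indent, width):
    pad = " " * (indent + 2)
    if isinstance(value, dict):
        lines = [f"{pad}{k} = {render(v, indent + 2, width)}," for k, v in value.items()]
        return "{\n" + "\n".join(lines) + "\n" + " " * indent + "}"
    if isinstance(value, list):
        lines = [f"{pad}{render(v, indent + 2, width)}," for v in value]
        return "[\n" + "\n".join(lines) + "\n" + " " * indent + "]"
    return repr(value)

print(render({"name": "example", "tags": ["a", "b"], "nested": {"x": list(range(30))}}))
```

Small leaves stay on one line, while anything that would overflow the target width gets broken out one item per line, recursively.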
I'd assume a lot of this is because of mobile devices of some type. Getting legacy network operators like cable providers to supply IPv6 has been hell.
Eyeball networks and cloud providers have been implementing IPv6. In the US all the major phone carriers are v6-only with XLAT, and the large residential ISPs have all implemented v6 (Charter/Spectrum, Comcast/Xfinity, Altice/Optimum). The lagging networks are smaller residential ISPs and enterprise networks.
In Asia they've implemented v6 pretty much everywhere because their v4 allocation is woefully insufficient. APNIC covers something like 4 billion people but has less IP space than ARIN, whose region has a population of less than 500 million.
Well, the data shows they are in fact using it. Most people use their ISP's router, which on these carriers would be set up to use v6 by default, and any router bought in the last 10 years supports v6 and probably uses it by default.
I'm on a large ISP and they do not have IPv6 in my area, a new build with fiber to an access point that converts it to cable at the house. So there's that.
Ah, RFoG. It's a weird technology choice. I think it's supposed to be transitional: they get the fiber in the ground now, and can later come back, rip out all the DOCSIS equipment, and replace it with *PON.
Is that worldwide adoption or adoption in the US? China went from almost nothing to 77% adoption in just a few years because they included it in their last five-year plan. How much of that adoption is explained by China alone?
That's the best thing I've read all year. Ok, it's the best thing I've read last year too. I kinda knew all this stuff but I never knew how it all happened. I never thought of MAC as unnecessary in an IPv6 world.
encode()/decode() have used UTF-8 as the default since Python 3.2 (soon to be 15 years ago). This is about the default encoding for e.g. the "encoding" parameter of open().
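A quick illustration of the difference (a minimal Python sketch; the filename is just a placeholder, and the locale result depends on your platform):

```python
import locale

# str.encode() / bytes.decode() already default to UTF-8, on every platform:
assert "héllo".encode() == b"h\xc3\xa9llo"
assert b"h\xc3\xa9llo".decode() == "héllo"

# open() without an explicit encoding uses the locale's preferred encoding
# (e.g. cp1252 on many Windows setups); that default is what the change
# discussed here is about:
print(locale.getpreferredencoding(False))

# Python 3.10+ can warn about implicit uses when run with
#   python -X warn_default_encoding your_script.py
# Being explicit sidesteps the whole question:
with open("data.txt", "w", encoding="utf-8") as f:
    f.write("héllo")
```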
You mean the coding= comment? Where are you shipping your code that this was actually a problem? I've never been on a project where we did that, let alone needed it.
Makes sense, my bad, but even that is something I've never seen. I guess this is mostly a Windows thing? I've luckily never had the misfortune of having to deploy Python code on Windows.
There is a sweet spot for the bass. Lower is better for deep bass, but too low and it stops being a recognizable note, and consumer speakers can't reproduce it. This effect exists, though I'm not sure it's the cause of the pattern here.
I find them helpful. It happens semi-regularly now that I read something that was upvoted, but after a few sentences I think "hmm, something feels off", and after the first two paragraphs I suspect it's AI slop. Then I go to the comments, and it turns out others noticed too. Sometimes I worry that I'm becoming too paranoid in a world where human-written content feels increasingly rare, and it's good to know it's not me going crazy.
In one recent case (the slop article about adenosine signalling) a commenter had a link to the original paper that the slop was engagement-farming about. I found that comment very helpful.
> We did have three bugs that would have been prevented by the borrow checker, but these were caught by our fuzzers and online verification. We run a fuzzing fleet of 1,000 dedicated CPU cores 24/7.
Remember, people: 10,000 CPU hours of fuzzing can save you 5ms of borrow checking!
(I’m joking, I’m joking, Zig and Rust are both great languages, fuzzing does more than just borrow checking, and I do think TigerBeetle’s choices make sense, I just couldn’t help noticing the irony of those two sentences.)
It's not that ironic though: the number of bugs that were squashed by fuzzers & asserts but would have dodged the borrow checker is much, much larger.
This is what makes the TigerBeetle context somewhat special: in many scenarios, the security provided by memory safety is good enough, and any residual correctness bugs/panics are not a big deal. For us, we need to go the extra N miles to catch the rest of the bugs as well, and DST is a much finer net for those fish (given the static allocation & single-threaded design).
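To make that concrete, here's a toy sketch (Python, purely for illustration; TigerBeetle itself is Zig) of the kind of logic bug that sails past any type or borrow checker but falls to a cheap fuzz loop plus an assert against an oracle:

```python
import random

def overlaps(a_start, a_end, b_start, b_end):
    # Intended: do the half-open intervals [a_start, a_end) and
    # [b_start, b_end) overlap? Off-by-one bug: <= where it should be <,
    # so intervals that merely touch are reported as overlapping.
    # Perfectly well-typed and memory-safe, and still wrong.
    return a_start <= b_end and b_start <= a_end

def overlaps_oracle(a_start, a_end, b_start, b_end):
    # Slow but obviously correct reference: check every integer point.
    return any(a_start <= x < a_end and b_start <= x < b_end
               for x in range(min(a_start, b_start), max(a_end, b_end)))

# Tiny fuzz loop: random inputs, checked against the oracle with an assert.
random.seed(0)
for _ in range(10_000):
    a = sorted(random.randint(0, 20) for _ in range(2))
    b = sorted(random.randint(0, 20) for _ in range(2))
    assert overlaps(*a, *b) == overlaps_oracle(*a, *b), (a, b)
```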
I don't think needing to go "the extra N miles" is that special. Even if security is the only correctness concern - and in lots of cases it isn't, and (some) bugs are a very big deal - memory safety covers only a small portion of the top weaknesses [1].
Mathematically speaking, any simple (i.e. non-dependent) type system catches 0% of possible bugs :) That's not to say it can't be very useful, but it doesn't save you from a lot of testing and other assurance methods.