
Give https://rcl-lang.org/#intuitive-json-queries a try! It can fill a similar role, and since the syntax closely resembles Python/TypeScript/Rust, you don’t need an LLM to write the query for you.

Nice! Thanks!

RCL (https://github.com/ruuda/rcl) pretty-prints its output by default. Pipe to `rcl e` to pretty-print as RCL (which has slightly lighter key-value syntax, nice if you only want to inspect the data), or to `rcl je` for JSON output.

It doesn’t align tables like FracturedJson, but it does format values on a single line where possible. The pretty printer is based on the classic "A Prettier Printer" by Philip Wadler; the algorithm is quite elegant. Any value is formatted wide if it fits the target width, otherwise tall.
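Roughly, the wide-or-tall decision looks like this toy sketch (a paraphrase in Python, not RCL’s actual implementation; `render_wide` and `render_tall` are made-up names):

    # Render a value on one line if it fits within the target width,
    # otherwise break it across multiple lines, deciding recursively.
    def render(value, indent=0, width=80):
        wide = render_wide(value)
        if indent + len(wide) <= width:
            return wide
        return render_tall(value, indent, width)

    def render_wide(value):
        if isinstance(value, list):
            return "[" + ", ".join(render_wide(v) for v in value) + "]"
        return repr(value)

    def render_tall(value, indent, width):
        if isinstance(value, list):
            pad = " " * (indent + 2)
            # Each element may again be wide or tall on its own.
            body = ",\n".join(pad + render(v, indent + 2, width) for v in value)
            return "[\n" + body + ",\n" + " " * indent + "]"
        return repr(value)

Wadler’s formulation gets the same effect more efficiently with lazy evaluation instead of rendering twice, but wide-if-it-fits is the heart of it.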


Everything I know about IPv6 comes from this one blog post: https://apenwarr.ca/log/20170810. It’s from 2017, when IPv6 adoption was 17% according to https://www.google.com/intl/en/ipv6/statistics.html; today it’s close to 50%.

I'd assume a lot of this is because of mobile devices of some type. Getting legacy network operators like cable providers to supply IPv6 has been hell.

Eyeball networks and cloud providers have been implementing IPv6. In the US all major phone carriers are v6-only with 464XLAT, and the large residential ISPs have all implemented v6 (Charter/Spectrum, Comcast/Xfinity, Altice/Optimum). The lagging networks are smaller residential ISPs and enterprise networks.

In Asia they've implemented v6 pretty much everywhere because their v4 allocation is woefully insufficient: the APNIC region covers around 4 billion people but has less IPv4 space than ARIN, whose region has fewer than 500 million.


Just because the ISPs have implemented IPv6 doesn't mean anyone's home router is using it, let alone all the devices on the home WiFi.

Well, the data shows they are in fact using it. Most people use their ISP's router, which on these carriers would be set up to use v6 by default, and any router bought in the last 10 years supports v6 and probably uses it by default.

I'm on a large ISP and they do not have IPv6 in my area, a new build with fiber to an access point that converts it to cable at the house. So there's that.

Ah, RFoG. It's a weird technology choice. I think it's supposed to be transitional: they get the fiber in the ground now, and can later come back, rip out all the DOCSIS equipment, and replace it with *PON.

Obviously they are. Most people use the equipment provided by their ISP without ever changing any settings.

If the ISP is IPv6-first, you bet that their customers are using it in their home WiFi.


> Getting legacy network operators like cable providers to supply IPv6 has been hell.

In my experience it's actually the large enterprises that are having issues.


Is that worldwide adoption or adoption in the US? China went from almost nothing to 77% adoption in just a few years because they included it in their last five-year plan. How much of that adoption is explained by China alone?

Google's stats are Google International, i.e. everywhere Google provides service. Whether that includes China depends on the whims of the Politburo.

Came to post that blog. It's awesome. Everyone should read it.

That's the best thing I've read all year. Ok, it's the best thing I read last year too. I kinda knew all this stuff, but I never knew how it all happened. I never thought of MAC addresses as unnecessary in an IPv6 world.

> Python now uses UTF-8 as the default encoding, independent of the system’s environment.

Nice, not specifying the encoding is one of the most common issues I need to point out in code reviews.


encode()/decode() have used UTF-8 as the default since Python 3.2 (soon 15 years ago). This is about the default for e.g. the `encoding` parameter of open().
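To illustrate the difference (the file name is just for the example):

    # encode()/decode() on str/bytes default to UTF-8, regardless of locale:
    data = "héllo".encode()
    assert data.decode() == "héllo"

    # open() is what historically followed the locale; until the new
    # default applies everywhere you run, spelling it out is safest:
    with open("notes.txt", "w", encoding="utf-8") as f:
        f.write("héllo")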


You mean the `coding=` comment? Where are you shipping your code that this was actually a problem? I've never been on a project where we did that, let alone needed it.


The comment you mention applies to source-code encoding, and it has been obsolete since the beginning of Python 3. This is about something else: https://docs.python.org/3.15/whatsnew/3.15.html#whatsnew315-...


Makes sense, my bad, but even that is something I've never seen. I guess this is mostly a Windows thing? I've never had the misfortune of having to deploy Python code on Windows.


It's a Linux thing too. It bit me in particular when running a script in a container that defaulted to an ASCII locale rather than UTF-8.
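You can check what such a container will do before it bites (the sample output assumes a bare C/POSIX locale on glibc; details vary by distro and Python version):

    import locale

    # In a container with no locale configured (e.g. LC_ALL=C), this can
    # report ASCII, and bare open() will then choke on non-ASCII text:
    print(locale.getpreferredencoding(False))  # e.g. 'ANSI_X3.4-1968'

    # Workarounds until 3.15: pass encoding="utf-8" explicitly, or set
    # PYTHONUTF8=1 to enable UTF-8 mode (PEP 686 makes it the default).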


Have you considered reducing review noise by using static analysis?


Yep, ruff has a warning for this exact issue (PLW1514).


Pylint has had it too for years (W1514, unspecified-encoding).



There is a sweet spot for bass. Lower is better for deep bass, but too low and it stops being a recognizable note, and consumer speakers can't reproduce it. This effect exists, though I'm not sure it's the cause of the pattern here.
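For a rough sense of the ranges involved (equal temperament; the exact cutoff varies, but small consumer speakers commonly roll off somewhere around 50-60 Hz):

    # Equal-temperament pitch: f = 440 * 2 ** ((midi - 69) / 12)
    def freq(midi: int) -> float:
        return 440.0 * 2 ** ((midi - 69) / 12)

    print(f"B0, 5-string bass low B: {freq(23):5.1f} Hz")  # ~30.9 Hz
    print(f"E1, 4-string bass low E: {freq(28):5.1f} Hz")  # ~41.2 Hz
    print(f"E2, guitar low E:        {freq(40):5.1f} Hz")  # ~82.4 Hz

So the lowest notes of a standard bass already sit at or below what typical consumer speakers can reproduce.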


They accept Monero too


I find them helpful. It happens semi-regularly now that I read something that was upvoted, but after a few sentences I think "hmm, something feels off", and after the first two paragraphs I suspect it's AI slop. Then I go to the comments, and it turns out others noticed too. Sometimes I worry that I'm becoming too paranoid in a world where human-written content feels increasingly rare, and it's good to know it's not me going crazy.

In one recent case (the slop article about adenosine signalling) a commenter had a link to the original paper that the slop was engagement-farming about. I found that comment very helpful.


Dell XPS used to be like this, but unfortunately Dell discontinued them :'(


Agreed. I got further into this one than usual before I grew suspicious, but something felt off.


> We did have three bugs that would have been prevented by the borrow checker, but these were caught by our fuzzers and online verification. We run a fuzzing fleet of 1,000 dedicated CPU cores 24/7.

Remember people, 10,000 CPU hours of fuzzing can save you 5ms of borrow checking!

(I’m joking, I’m joking, Zig and Rust are both great languages, fuzzing does more than just borrow checking, and I do think TigerBeetle’s choices make sense, I just couldn’t help noticing the irony of those two sentences.)


It's not that ironic though --- the number of bugs that were squashed by fuzzers & asserts but would have dodged the borrow checker is much, much larger.

This is what makes TigerBeetle's context somewhat special --- in many scenarios, the security provided by memory safety is good enough, and any residual correctness bugs/panics are not a big deal. For us, we need to go the extra N miles to catch the rest of the bugs as well, and DST (deterministic simulation testing) is a much finer net for those fish (given the static allocation and single-threaded design).
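(A toy illustration of the DST idea in Python, with made-up names, assuming a single-threaded system whose only nondeterminism flows through one injected PRNG --- not TigerBeetle's actual harness:)

    import random

    def simulate(seed: int) -> None:
        # One seeded PRNG drives the whole run, so any failure can be
        # replayed exactly by re-running with the same seed.
        rng = random.Random(seed)
        balance = 0  # stand-in for the system under test
        for step in range(10_000):
            op = rng.choice(["deposit", "withdraw", "crash_and_recover"])
            if op == "deposit":
                balance += rng.randrange(100)
            elif op == "withdraw":
                balance -= rng.randrange(balance + 1)
            # "crash_and_recover" would restart the system and replay its
            # log here; single-threaded design keeps that replay exact.
            assert balance >= 0, f"seed {seed} broke an invariant at step {step}"

    for seed in range(100):
        simulate(seed)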


I don't think needing to go "the extra N miles" is that special. Even if security is the only correctness concern - and in lots of cases it isn't, and (some) bugs are a very big deal - memory safety covers only a small portion of the top weaknesses [1].

Mathematically speaking, any simple (i.e. non-dependent) type system catches 0% of possible bugs :) That's not to say it can't be very useful, but it doesn't save you from needing a lot of testing/other assurance methods.

[1]: https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html Also, spatial safety is more important for security than temporal safety. As far as language guarantees go, Zig and Rust only differ on #8 on the list.


