Hacker News | majke's comments

ISPC looks interesting. Does it work with AMD? They hint at GPUs; I guess mostly Intel ones?

Yes, it works with AMD CPUs as well as various ARM ones, e.g. Apple silicon.

See for instance https://github.com/ispc/ispc/pull/2160


Yes, it works well with AMD. You can compile multi-target so that you'll have e.g. SSE4.2, AVX2, and AVX-512 support built into your binary, and the best (widest) version is picked at runtime automatically.

So... does it mean that nobody reads the law? Is it good or bad? What is the takeaway?


> does it mean that nobody reads the law?

It means not every lawmaker is a geologist. That's fine. It also means there clearly wasn't enough public input from geologists, or someone would have noticed a name they'd never seen in a group they were familiar with.


At least not geologists who don't also work for coal companies.


I’ll go out on a limb and guess that none of the industry groups found this funny. They want to replicate this law in other states. The lawyers who worked on it, on the other hand, only work in North Dakota. And calling a spade a spade, I guess if I needed something in front of that legislature I’d at least know they’re competent and crafty.


That often happens. Some bills are released to lawmakers without enough time to read them.


Wouldn't the obvious answer be to vote against? If you do not know the terms of the deal, maintaining the status quo seems superior.


My understanding: You are one legislator among many; you have almost no power on your own. To deliver results for your constituents or to accomplish anything depends especially on party leadership and on other members of your party prioritizing your wishes over many other things. If you don't follow leadership, if you aren't on the team, they won't do anything for you.


I think the answers to those questions are pretty self-evident.


Richard Jones is still alive and kicking https://x.com/metabrew


^ protected tweets. But he's also on Bsky:

https://bsky.app/profile/metabrew.com


Silly curiosity - what's that "BEAM-appreciator" in that bio? I could only think of a protein brand name (that too not from my geography) shortened as BEAM :/

Anyway, I loved that he has reposted a post from MusicBrainz https://bsky.app/profile/musicbrainz.org/post/3lnhvp23jc22l/...


I would guess it's referring to the Erlang VM: https://en.wikipedia.org/wiki/BEAM_(Erlang_virtual_machine)


this is correct :)


Back in 2022 I published a doc on how the egress IPs work at Cloudflare:

https://blog.cloudflare.com/cloudflare-servers-dont-own-ips-...

In summary, the location at which an IP egresses the Cloudflare network has nothing to do with the geo-IP mapping of that IP. In some cases the decision on where to egress is optimised for "location closest to the user", but even this is not always true.

And then there is the Internet. Often some country (say Iran) egresses from a totally different place (like Frankfurt) due to geopolitics and the physical location of cables.


So, there is a dashboard internally for that. When we do ProbeNet PoP assessment, we have a high-level overview of the frequent and favored connections. We have a ton of servers in Africa, and there is a strong routing bias towards France, Germany, and the UK instead of neighboring connections.

Everyone in our engineering and leadership is very close with various CDN companies. We do echo this idea to them. It is not IP geolocation; we actually have a ton of routing data they can use.


Hey! Popcount used to be my favorite instruction. Now I think I prefer LOP3 though :)


Could you explain more please?


> Coordinator sees Node A has significantly fewer rows (logical count) than the cluster average. It flags Node A as "underutilized."

Ok, so you are dealing with a classic - you measure A, but what matters is B. For "load" balancing a decent metric is, well, response time (and jitter).

For data partitioning - I guess number of rows is not the right metric? Change it to number*avg_size or something?

If you can't measure the thing directly, then take a look at stuff like a "PID controller". This can be approached as a typical control-loop problem, although in 99% of cases doing full PID for a software system is overkill.
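To illustrate, here is a toy sketch (all names and numbers are hypothetical, not from any real system) of balancing on data volume rather than raw row count, plus a single proportional step, which is usually all the "PID" a software system needs:

```python
# Toy sketch: measure bytes held (rows * avg row size), not row count.
def node_load_bytes(num_rows: int, avg_row_size: int) -> int:
    """Approximate the data volume a node actually holds."""
    return num_rows * avg_row_size

def proportional_step(current: float, target: float, kp: float = 0.5) -> float:
    """One tick of a plain proportional ("P-only") controller:
    move a fraction of the error each tick instead of all at once."""
    error = target - current
    return current + kp * error

# Node A has few but fat rows; node B has many tiny rows. By row count
# A looks "underutilized" - by bytes it is the heavier node.
nodes = {"A": node_load_bytes(1_000, 4096),   # 4,096,000 bytes
         "B": node_load_bytes(10_000, 64)}    # 640,000 bytes
target = sum(nodes.values()) / len(nodes)
adjusted = {name: proportional_step(load, target)
            for name, load in nodes.items()}
```

The proportional step matters because it damps oscillation: moving only part of the way each rebalance tick avoids the thrashing you get when the coordinator overcorrects on a stale metric.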


The trouble with mmap is the performance cliff. A node goes from 'fine' to 'dead' almost instantly, which breaks our balancing logic.

You are right that we need better backpressure. Instead of a smarter coordinator, we probably need 'dumber' nodes that aggressively shed load (return 429s) the moment local pressure spikes, rather than waiting for a re-balance.
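A minimal sketch of that "dumber node" idea, assuming a hypothetical request handler and a locally measured memory-pressure gauge (the threshold is a made-up number):

```python
# Hypothetical sketch: the node sheds load itself the moment local
# pressure spikes, instead of waiting for the coordinator to rebalance.
SHED_THRESHOLD = 0.85  # fraction of the local memory budget; illustrative

def handle_request(request: str, memory_pressure: float):
    """Return an (http_status, body) pair; 429 under local pressure."""
    if memory_pressure > SHED_THRESHOLD:
        # Fail fast and stay alive; the client retries elsewhere.
        return 429, "overloaded, retry later"
    return 200, f"ok: {request}"
```

The point is that the node needs no cluster-wide view to make this decision, so it reacts in microseconds rather than at rebalance cadence.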


Questions for "questions for cloudflare" owner


> > faster - less context switches and copies

This is very much a newbie way of thinking. How do you know? Did you profile it?

It turns out there is surprisingly little dumb zero-copy potential at CF. Most of the traffic is TLS, so it needs to go through userspace anyway (kTLS exists, but I failed to actually use it, and then what about QUIC?).

Most of the CPU is burned on dumb things, like application logic. It turns out data copying, encryption and compression are actually pretty fast. I'm not saying these areas aren't ripe for optimization - but the majority of the cost was historically in much more obvious areas.
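One way to check such assumptions is a quick micro-benchmark. This throwaway Python sketch (not representative of a real proxy workload, just the shape of the measurement) compares many small writes against one batched write of the same bytes:

```python
import os
import time

# Write the same total payload two ways and time each.
fd = os.open(os.devnull, os.O_WRONLY)
payload = b"x" * 64

t0 = time.perf_counter()
for _ in range(10_000):
    os.write(fd, payload)            # 10,000 syscalls
many = time.perf_counter() - t0

t0 = time.perf_counter()
os.write(fd, payload * 10_000)       # 1 syscall, same bytes
one = time.perf_counter() - t0
os.close(fd)
```

This only tells you the syscall overhead exists, not whether it matters: you still have to compare it against where your application actually spends its cycles.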


> This is very much newbie way of thinking. How do you know? Did you profile it?

Does it matter? Fewer syscalls are better. Whatever is being done in kernel mode can be replicated (or improved upon) in a user-space stack. It is easier to add and manage APIs in user space than kernel APIs. You can debug, patch, etc. a user-space stack much more easily. You can have multiple processes for redundancy and ensure crashes don't take out the whole system. I've had situations where rebooting the system was the only solution to routing or ARP resolution issues (even after clearing caches). Same with netfilter/iptables "being stuck" or exhibiting performance degradation over time. If you're lucky a module reload can fix it; if it were a process I could have just killed/restarted it with minimal disruption.

> Most of the cpu is burned on dumb things, like application logic. Turns out data copying and encryption and compression are actually pretty fast. I'm not saying these areas aren't ripe for optimization - but the majority of the cost was historically in much more obvious areas.

I won't disagree with that, but one optimization does not preclude the other. If IP/TCP were in user space, they could be optimized better by engineers to fit their use cases. The type of load matters too: you can optimize your app well, but one corner case could tie up your app logic in CPU cycles, and if that happens to include a syscall, and there is no better way to handle it, those context-switch cycles might start mattering.

In general, I don't think it makes much difference... but I expected companies like CF that are performance- and outage-sensitive to squeeze every last drop of performance and reliability out of their systems.


This happened before my watch, but I was always rooting for Linux. Linux is winning on many fronts. Consider the featureset of iptables (CF uses loads of stuff, from "comment" to "tproxy"); BPF for metrics is a killer (ebpf_exporter), BPF for DDoS (XDP), TCP Fast Open, the UDP segmentation stuff, kTLS (arguably half-working). Then there are non-networking things like Docker, the virtio ecosystem (vhost), seccomp, namespaces (a net namespace for testing network apps is awesome). And the list goes on. Not to mention hiring is easier for Linux admins.


Falsehoods programmers think about addresses:

- parsing addresses is well defined (try parsing ::1%3)

- since 127.0.0.2 is on loopback, ::2 surely also would be

- interface number on Linux is unique

- unix domain socket names are zero-terminated (abstract are not)

- sin6_flowinfo matters (it doesn't unless you opt in with setsockopt)

- sin6_scope_id matters (it doesn't unless on site-local range)

(I wonder if scope_id would work on ipv4-mapped-IPv6, but if I remember right I checked and it didn't)

- in IPv4, scope_id doesn't exist (true, but it can be achieved by binding to an interface)

and so on...

Years ago I tried to document all the quirks I knew about https://idea.popcount.org/2019-12-06-addressing/
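The first falsehood is easy to demonstrate in Python: `inet_pton` knows nothing about the `%zone` suffix, while `getaddrinfo` parses it and hands the zone back as the sockaddr's scope_id (the address `fe80::1%1` below is just an illustrative link-local address with numeric zone 1):

```python
import socket

# inet_pton only understands the plain RFC text form: the "%zone"
# suffix makes the address unparseable here.
try:
    socket.inet_pton(socket.AF_INET6, "::1%3")
    parsed = True
except OSError:
    parsed = False

# getaddrinfo, by contrast, does parse the zone and returns it as the
# 4th element (scope_id) of the IPv6 sockaddr tuple.
try:
    info = socket.getaddrinfo("fe80::1%1", None, socket.AF_INET6,
                              socket.SOCK_DGRAM, 0, socket.AI_NUMERICHOST)
    scope_id = info[0][4][3]
except socket.gaierror:
    scope_id = None  # resolver on this platform rejects the zone form
```

So "parsing an IPv6 address" means different things depending on which API you ask, which is exactly the trap.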


Thanks. At Oxide we do use the scope ID quite a bit, as my colleague Cliff Biffle says here: https://hachyderm.io/@cliffle/115492946627058792


It's sad that the only other loopback v6 addresses appear to be the v4 /8 mapped into a slice of the v6 address space.


You can use ::ffff:127.0.0.2 for most purposes, but you can't ping it.
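A quick sketch of "most purposes": plain TCP to the v4-mapped loopback works fine from an AF_INET6 socket (assuming Linux defaults, i.e. `bindv6only=0`):

```python
import socket

# A plain IPv4 listener on loopback...
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # port 0: let the kernel pick one
srv.listen(1)
port = srv.getsockname()[1]

# ...reached through the v4-mapped form from an IPv6 socket. The
# kernel translates this into ordinary IPv4 on the wire.
cli = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
cli.connect(("::ffff:127.0.0.1", port))
conn, peer = srv.accept()  # peer shows up as a plain IPv4 address

cli.close()
conn.close()
srv.close()
```

Ping is the odd one out because ICMP echo can't be version-agnostic the way TCP can: an ICMPv6 echo to a mapped address has no IPv4 equivalent the kernel will synthesize for you.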


> you can't ping it

WTF?..

(My Linux machine can, but I’ve no clue if I should trust that now.)


Doesn't work on my Arch Linux. Neither does pinging ::ffff:127.0.0.1. Pinging 127.0.0.1 and ::1 works.


I think I know what's going on: iputils[1] ping can't ping IPv6-mapped IPv4 addresses, but inetutils[2] ping can. And look inside the parens:

  $ ping ::ffff:127.0.0.1
  PING ::ffff:127.0.0.1 (::ffff:127.0.0.1) 56 data bytes
says the non-working one,

  $ ping ::ffff:127.0.0.1
  PING ::ffff:127.0.0.1 (127.0.0.1): 56 data bytes
says the working one. In Wireshark, the latter appears as ICMPv4 packets on the lo interface, whereas the former does not appear at all(?..). So overall this makes some amount of sense: you can write a TCP-using program that's agnostic to whether it's running on top of IPv4 or IPv6, but you have to use different ICMP versions for IPv4 and IPv6. I actually don't know why it has to be that way.

(My initial confusion was because I thought 'o11c was saying they could ping ::ffff:127.0.0.1 but not .2. It makes much more sense for either both or neither to be pingable.)

[1] https://github.com/iputils/iputils (the one that comes with the bizarre tracepath thing)

[2] https://www.gnu.org/software/inetutils/


Hm, it has always failed for me on Debian.

