Yes, it works well with AMD. You can compile multi-target so that you have e.g. SSE4.2, AVX2, and AVX-512 support built into your binary, and the best (widest) version is picked at runtime automatically.
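For example, GCC and Clang support this per function via the target_clones attribute. A minimal sketch (the function name and loop are just for illustration):

    #include <stddef.h>

    /* The compiler emits one clone per listed target plus a resolver
       that picks the widest ISA the CPU supports when first called. */
    __attribute__((target_clones("avx512f", "avx2", "sse4.2", "default")))
    void scale(float *dst, const float *src, size_t n, float k) {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i] * k;
    }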
It means not every lawmaker is a geologist. That’s fine. It also means there clearly wasn’t enough public input from geologists; otherwise someone would have noticed a name they’d never seen in a group they were familiar with.
I’ll go out on a limb and guess that none of the industry groups found this funny. They want to replicate this law in other states. The lawyers who worked on it, on the other hand, only work in North Dakota. And calling a spade a spade, I guess if I needed something in front of that legislature I’d at least know they’re competent and crafty.
My understanding: You are one legislator among many; you have almost no power on your own. Delivering results for your constituents, or accomplishing anything at all, depends on party leadership and on other members of your party prioritizing your wishes over many other things. If you don't follow leadership, if you aren't on the team, they won't do anything for you.
Silly curiosity: what's the "BEAM-appreciator" in that bio? The only thing I could think of was a protein brand name (and not even one from my region) shortened to BEAM :/
In summary, the location at which an IP egresses Cloudflare's network has nothing to do with the geo-IP mapping of that IP. In some cases the egress decision is optimised for "location closest to the user", but even that is not always true.
And then there is the Internet. Often some country (say Iran) egresses from a totally different place (like Frankfurt) due to geopolitics and simply where the cables run.
So, there is a dashboard internally for that. When we do a ProbeNet PoP assessment, we get a high-level overview of the frequent and favored connections. We have a ton of servers in Africa, and there is a strong routing bias towards France, Germany, and the UK rather than towards neighboring countries.
Everyone in our engineering and leadership teams is close with various CDN companies, and we do echo this idea to them. It is not about IP geolocation; we actually have a ton of routing data they could use.
> Coordinator sees Node A has significantly fewer rows (logical count) than the cluster average. It flags Node A as "underutilized."
Ok, so you are dealing with a classic: you measure A, but what matters is B. For "load" balancing, a decent metric is, well, response time (and jitter).
For data partitioning - I guess number of rows is not the right metric? Change it to number*avg_size or something?
If you can't measure the thing directly, then take a look at stuff like a "PID controller". This can be approached as a typical control-loop problem, although in 99% of cases a PID controller is overkill for software systems.
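For reference, the whole loop is tiny. A minimal sketch (the setpoint/measurement pair, e.g. target vs. observed response time, and the gains are made-up placeholders you'd have to tune):

    struct pid_ctl {
        double kp, ki, kd;   /* gains: tune per system */
        double integral;     /* accumulated error */
        double prev_error;   /* for the derivative term */
    };

    /* One control step: returns the correction to apply, e.g.
       how much to shrink or grow a node's share of traffic. */
    double pid_step(struct pid_ctl *c, double setpoint,
                    double measured, double dt) {
        double error = setpoint - measured;
        c->integral += error * dt;
        double derivative = (error - c->prev_error) / dt;
        c->prev_error = error;
        return c->kp * error + c->ki * c->integral + c->kd * derivative;
    }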
The trouble with mmap is the performance cliff. A node goes from 'fine' to 'dead' almost instantly, which breaks our balancing logic.
You are right that we need better backpressure. Instead of a smarter coordinator, we probably need 'dumber' nodes that aggressively shed load (return 429s) the moment local pressure spikes, rather than waiting for a re-balance.
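Roughly what I mean, as a sketch (the pressure signal here, an EWMA of queue wait time, and the threshold are placeholders; real code would use whatever local signal spikes first):

    #include <stdbool.h>

    #define SHED_THRESHOLD_MS 50.0    /* hypothetical cutoff */

    static double ewma_wait_ms = 0.0; /* smoothed local pressure signal */

    /* Update the smoothed queue-wait estimate on every request. */
    void observe_wait(double wait_ms) {
        ewma_wait_ms = 0.9 * ewma_wait_ms + 0.1 * wait_ms;
    }

    /* Admission check: reject (return 429 upstream) the moment local
       pressure spikes, without waiting for the coordinator. */
    bool should_shed(void) {
        return ewma_wait_ms > SHED_THRESHOLD_MS;
    }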
This is very much a newbie way of thinking. How do you know? Did you profile it?
It turns out there is surprisingly little dumb zero-copy potential at CF. Most of the traffic is TLS, so data needs to go through userspace anyway (kTLS exists, but I failed to actually get it working, and then what about QUIC?).
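For context on why kTLS is fiddly: the handshake still happens in userspace, and you then have to hand the record keys to the kernel per socket. A sketch of the TX side, following the pattern in the kernel's TLS docs (extracting key/iv/salt/rec_seq from your TLS library is the hard part, assumed done here; the helper name is mine):

    #include <linux/tls.h>     /* TLS_TX, tls12_crypto_info_aes_gcm_128 */
    #include <netinet/tcp.h>   /* SOL_TCP, TCP_ULP */
    #include <string.h>
    #include <sys/socket.h>

    #ifndef SOL_TLS
    #define SOL_TLS 282        /* from the kernel's socket.h */
    #endif

    /* After the TLS 1.2 handshake completes in userspace, hand the
       record keys to the kernel so send() encrypts in-kernel. */
    int enable_ktls_tx(int sock, const unsigned char *key,
                       const unsigned char *iv, const unsigned char *salt,
                       const unsigned char *rec_seq) {
        if (setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls")) < 0)
            return -1;

        struct tls12_crypto_info_aes_gcm_128 ci;
        memset(&ci, 0, sizeof(ci));
        ci.info.version = TLS_1_2_VERSION;
        ci.info.cipher_type = TLS_CIPHER_AES_GCM_128;
        memcpy(ci.key, key, TLS_CIPHER_AES_GCM_128_KEY_SIZE);
        memcpy(ci.iv, iv, TLS_CIPHER_AES_GCM_128_IV_SIZE);
        memcpy(ci.salt, salt, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
        memcpy(ci.rec_seq, rec_seq, TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE);

        return setsockopt(sock, SOL_TLS, TLS_TX, &ci, sizeof(ci));
    }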
Most of the CPU is burned on dumb things, like application logic. It turns out data copying, encryption, and compression are actually pretty fast. I'm not saying these areas aren't ripe for optimization, but the majority of the cost has historically been in much more obvious places.
> This is very much a newbie way of thinking. How do you know? Did you profile it?
Does it matter? Fewer syscalls are better. Whatever is being done in kernel mode can be replicated (or improved upon) in a user-space stack. It is easier to add and manage APIs in user space than in the kernel. You can debug, patch, etc. a user-space stack much more easily. You can have multiple processes for redundancy and ensure crashes don't take out the whole system. I've had situations where rebooting the system was the only solution to routing or ARP resolution issues (even after clearing caches). Same with netfilter/iptables "getting stuck" or exhibiting performance degradation over time: if you're lucky a module reload can fix it, but if it were a process I could have just killed and restarted it with minimal disruption.
> Most of the CPU is burned on dumb things, like application logic. It turns out data copying, encryption, and compression are actually pretty fast. I'm not saying these areas aren't ripe for optimization, but the majority of the cost has historically been in much more obvious places.
I won't disagree with that, but one optimization does not preclude the other. If IP/TCP were in user space, they could be optimized by engineers to fit their use cases. The type of load matters too: you can optimize your app well, but one corner case could tie up your app logic in CPU cycles; if that happens to include a syscall, and there is no better way to handle it, those context-switch cycles might start to matter.
In general, I don't think it makes much difference... but I expected companies like CF, which are performance- and outage-sensitive, to squeeze every last drop of performance and reliability out of their systems.
This happened before my watch, but I was always rooting for Linux. Linux is winning on many fronts. Consider the feature set of iptables (CF uses loads of it, from "comment" to "tproxy"); BPF for metrics is a killer (ebpf_exporter); BPF for DDoS mitigation (XDP); TCP Fast Open; the UDP segmentation stuff; kTLS (arguably half-working). Then there are non-networking things like Docker, the virtio ecosystem (vhost), seccomp, and namespaces (a net namespace for testing network apps is awesome). And the list goes on. Not to mention hiring is easier for Linux admins.
I think I know what's going on: iputils[1] ping can't ping IPv6-mapped IPv4 addresses, but inetutils[2] ping can. And look inside the parens:
$ ping ::ffff:127.0.0.1
PING ::ffff:127.0.0.1 (::ffff:127.0.0.1) 56 data bytes
says the non-working one,
$ ping ::ffff:127.0.0.1
PING ::ffff:127.0.0.1 (127.0.0.1): 56 data bytes
says the working one. In Wireshark, the latter shows up as ICMPv4 packets on the lo interface, whereas the former does not appear at all(?). So overall this makes some amount of sense: you can write a TCP-using program that's agnostic to whether it's running on top of IPv4 or IPv6, but you have to use different ICMP versions for IPv4 and IPv6. I actually don't know why it has to be that way.
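To illustrate the TCP side: an AF_INET6 socket can connect to a v4-mapped address and the kernel transparently speaks IPv4 underneath, so the program never has to care. A sketch (assumes something is listening on local port 80, and that IPV6_V6ONLY is off, which is the Linux default):

    #include <arpa/inet.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int s = socket(AF_INET6, SOCK_STREAM, 0);
        struct sockaddr_in6 addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin6_family = AF_INET6;
        addr.sin6_port = htons(80);  /* any locally listening port works */
        inet_pton(AF_INET6, "::ffff:127.0.0.1", &addr.sin6_addr);
        if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            puts("connected over IPv4, via an IPv6 socket");
        close(s);
        return 0;
    }

ICMP has no equivalent mapping, which is presumably why ping needs to pick a version and the two implementations diverge.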
(My initial confusion was because I thought 'o11c was saying they could ping ::ffff:127.0.0.1 but not .2. It makes much more sense for either both or neither to be pingable.)