Not that that isn’t a practical concern, but that’s not the level at which the network claims to be decentralized. Your account was banned by one participant in the hypothetical decentralized network.
Agreed, in theory all participants are equal and being banned by one participant shouldn't lock you out of a decentralized network.
In practice, the vast majority of handles (98.9% as of 2024) are under bsky.social [1]. Yes, alternative PDS providers exist, but if the default onboarding funnels everyone into one provider, and the average user doesn't even know what a PDS is, then decentralization is an implementation detail, not a user-facing reality.
this is true, "black" has been used in racist ways, but it got rehabbed and reclaimed in the 60s and 70s.
but more to the point, it is not currently used in a racist manner by the vast majority of the US, and certainly does not carry the same connotations as "yellow", so not really comparable imo
If and when the Asian community decides to reappropriate "yellow" as a form of self-identification, then, given a few decades, it will no longer be seen as racist.
In the meantime, "yellow" is a racist adjective for Asians; "black" is not a racist adjective for black people.
> In several Gallup measurements over the next three decades, including the most recent in 2019, the large majority of Black Americans have said the use of Black vs. African American doesn't matter to them.
Not caring is not acceptance. The term is literally racist in both usage and origin. Unfortunately, they were denied being called simply Americans for historical reasons. "African American" is sadly also a misnomer, given that there's barely any connection to Africa for the people generally referred to as "black".
Notice how everyone else is called by nationality or origin.
Black is absolutely accepted as an adjective. Especially with the capital B, Black is used to refer to the unique Black culture and heritage in the United States. Black history is one where people were taken from their nations or places of origin, transported to a foreign land, and put in bondage. As you say in your own comment, many black or African-American people (whichever label you prefer) have little connection to Africa; it wouldn't make sense to refer to them by nationality or origin when Black culture is its own thing.
Don't get it twisted: I agree that the history of African-Americans in the US is one marred by slavery, segregation, racism, and the constant struggle to attain and retain equality. But out of that came something unique that many black people celebrate to this day.
There's not really a black community either; it's a demographic. There are many communities of black people, but we really need to stop equating demographics with communities (and not just in this case).
What they point to are capabilities, but the integer handles that user space gets are annoyingly like pointers. In some respects, better, since we don’t do arithmetic on them, but in others, worse: they’re not randomized, and I’ve never come across a sanitizer (in the ASan sense) for them, so they’re vulnerable to worse race condition and use-after-free issues where data can be quietly sent to the entirely wrong place. Unlike raw pointers’ issues, this can’t even be solved at a language level. And maybe worst of all, there’s no bug locality: you can accidentally close the descriptor backing a `FILE*` just by passing the wrong small integer to `close` in an unrelated part of the program, and then it’ll get swapped out at the earliest opportunity.
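The no-bug-locality hazard is easy to reproduce. Here's a minimal Python sketch (the filename is a throwaway; the key fact is that POSIX hands out the lowest free descriptor number, so the very next open recycles the integer you just freed):

```python
import os
import tempfile

tmp_fd, path = tempfile.mkstemp()
os.close(tmp_fd)                 # we only want the pathname

f = open(path, "w")              # like a FILE*: buffered, backed by an fd
fd = f.fileno()

# The bug: some unrelated part of the program passes the wrong
# small integer to close(), freeing the descriptor backing `f`.
os.close(fd)

# The next open reuses the lowest free number -- the same one.
reused = os.open(os.devnull, os.O_WRONLY)
assert reused == fd              # descriptor silently recycled

# Writes through `f` now quietly land in /dev/null, not in `path`,
# with no error raised anywhere near the real bug.
f.write("important data")
f.flush()
assert open(path).read() == ""   # the data went to the wrong place
```

No call in that sequence fails; the only symptom is data arriving at the wrong destination, which is exactly why a sanitizer-style tool for descriptors would be so useful.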
BITD the one "fd sanitizer" I ever encountered was "try using the code on VxWorks" which at the time was "posix inspired" at best - fds actually were pointers, so effectively random and not small integers. It didn't catch enough things to be worth the trouble, but it did clean up some network code (ISTR I was working on SNTP and Kerberos v4 and Kerberized FTP when I ran into this...)
> **3. The Layer 7 Limitation** Cloudflare operates primarily at the application layer. Many failures happen deeper in the stack. Aggressive SYN floods, malformed packets, and protocol abuse strike the kernel before an HTTP request is even formed. If your defense relies on parsing HTTP, you have already lost the battle against L3/L4 attacks.
No idea how valid the video is. It could be accurate, it could be entirely simulated, it could be making some kind of simple mistake. (At least there’s a tiny bit more detail in the video description on Vimeo.) Anyway, good time to learn about the blanket “I’m under attack” mode and/or targeted rules.
> **2. The Origin IP Bypass** Cloudflare only protects traffic that proxies through them. If an attacker discovers your origin IP--or if you are running P2P nodes, validators, or RPC services that must expose a public IP--the edge is bypassed entirely. At that point, there is no WAF and no rate limiting. Your network interface is naked.
You pay a cost either way: live in a world with better funded and incentivized scammers and in a community less wealthy by a corresponding amount, or have a slightly less convenient sideloading experience.
I guess if you take the old saying extremely literally, you could conclude that every fool is guaranteed to be parted with 100% of their lifetime available money regardless of what anyone else tries to do to stop that, but that’s not true – and why old sayings (with a respectable 75% of the words right) taken literally aren’t a good basis for decision-making.
Uniformity isn’t directly important for error detection. CRC-32 has the nice property that it’s guaranteed to detect all burst errors up to 32 bits in size, while a b-bit hash only detects errors with probability 1 − 2^−b at best, of course. (But it’s valid to care about detecting larger errors with higher probability, yes.)
There’s a whole field’s worth of really cool stuff about error correction that I wish I knew a fraction of enough to give reading recommendations about, but my comment wasn’t that deep. It’s just that in hashes, you obviously care about distribution, because that’s almost the entire point of non-cryptographic hashes; in error detection, you only care that x ≠ y implies f(x) ≠ f(y) with high probability, which is only directly related in the obvious way of making use of the output space (even though it’s probably indirectly related in some interesting subtler ways).
E.g. f(x) = concat(xxhash32(x), 0xf00) is just as good at error detection as xxhash32 but is a terrible hash, and, as mentioned, CRC-32 is infinitely better at detecting certain types of errors than any universal hash family.
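If you want to see the burst guarantee in action, here's a quick sanity check with Python's zlib.crc32 (the payload is arbitrary; flipping every bit in a 32-bit window is one representative burst pattern, and the same guarantee covers any nonzero error pattern confined to such a window):

```python
import zlib

data = b"hello world, this is a test payload!"
orig = zlib.crc32(data)
nbits = len(data) * 8

# Corrupt every possible window of 32 consecutive bits: CRC-32 is
# guaranteed to detect any error confined to <= 32 contiguous bits,
# because the error polynomial x^i * b(x) with deg(b) < 32 is never
# divisible by the degree-32 generator.
for start in range(nbits - 32 + 1):
    corrupted = bytearray(data)
    for bit in range(start, start + 32):
        corrupted[bit // 8] ^= 1 << (bit % 8)
    assert zlib.crc32(bytes(corrupted)) != orig
```

A b-bit universal hash gives you no such per-pattern guarantee -- only a 2^−b bound on the chance of missing any particular corruption.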
This seems to make sense, but I need to read more about error correction to fully understand it. I was considering the possibility that data could also contain patterns where error detection performs poorly due to bias, and I haven't seen how to include these estimates in probability calculations.