APFS doesn’t “prefer” anything - it will not change the bytes passed to it to NFC or NFD. The bytes passed at creation are stored as is (HFS+ will store the NFD form on disk even if you pass it the NFC form). However, APFS is normalization insensitive, just as HFS+ is: if you create an NFC name on disk, you won’t be able to create the NFD version, and you will be able to open the file by both the NFC and NFD variants. The two filesystems just use different mechanisms to achieve that normalization insensitivity.
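You can see the two byte sequences involved with Python's unicodedata module (the filename “café” is just an example):

```python
import unicodedata

name = "café"
nfc = unicodedata.normalize("NFC", name)  # 'é' as one code point, U+00E9
nfd = unicodedata.normalize("NFD", name)  # 'e' plus combining acute, U+0301

print(nfc == nfd)           # False: different code points, different bytes
print(nfc.encode("utf-8"))  # b'caf\xc3\xa9'
print(nfd.encode("utf-8"))  # b'cafe\xcc\x81'

# On an APFS volume, creating the file under one form and then opening it
# under the other should succeed: the stored bytes are whatever you passed
# at creation, but lookups are normalization insensitive.
# open(nfc, "w").close()
# open(nfd).close()  # same file on APFS; would fail on e.g. ext4
```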
USB4 is essentially a branding strategy around the USB-C connector that tunnels various standards (USB 3.2, Thunderbolt) at various speeds. Some are required and some are optional for a product to be branded as USB4.
One could alternatively describe it as a rebranding of the Thunderbolt multiplexing layer.
For those unfamiliar: USB4 does not specify how to interface with specific non-host devices the way USB 3 and older did.
Instead it is a multi-protocol tunneling system (think multiplexing with routing) that allows tunneling USB 3.2, DisplayPort, and optionally PCI-E (i.e. what Thunderbolt is known for). It also requires support for various alternate modes, like DisplayPort alternate mode (non-tunneled), and optionally Thunderbolt 3 alternate mode. (Thunderbolt 3 runs at a different rate than USB4, among a small list of other differences, not counting the additional features specified in USB4.)
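To make “multiplexing with routing” concrete, here is a purely illustrative toy in Python - none of the protocol tags, packet shapes, or handlers below correspond to actual USB4 framing; the point is just that one physical link carries tagged traffic for several protocols and a router dispatches each stream:

```python
# Toy model only: real USB4 tunneling is done in hardware with its own
# transport-layer packet format. Everything here is made up to illustrate
# the concept of tagging traffic by protocol and routing it.
from typing import Callable

handlers: dict[str, Callable[[bytes], None]] = {
    "usb3":        lambda p: print(f"USB 3.2 endpoint gets {len(p)} bytes"),
    "displayport": lambda p: print(f"DisplayPort sink gets {len(p)} bytes"),
    "pcie":        lambda p: print(f"PCI-E switch gets {len(p)} bytes"),
}

def route(protocol: str, payload: bytes) -> None:
    # One physical link, many logical streams: dispatch by protocol tag.
    handlers[protocol](payload)

route("displayport", b"\x00" * 16)
route("usb3", b"\x00" * 8)
```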
Hubs are required to support the otherwise optional Thunderbolt 3 alternate mode and USB4 tunneled PCI-E, thereby making every USB4 hub a valid Thunderbolt hub, both for classic Thunderbolt (TB3) and for USB4-based Thunderbolt (i.e. using PCI-E tunneling in USB4, which I assume is part of the updates made in TB4).
Hubs will also support plain USB 3.x from a host, since USB4 is just a negotiated alternate mode over USB3. Thus USB3 can be used unencapsulated if only the hub supports USB4; fully encapsulated if host, hub, and device all support USB4; or partially encapsulated, as when the host and hub support USB4 but one of the downstream devices only supports USB3. In that case the hub becomes responsible for encapsulating/unencapsulating the tunneled USB3.
In a similar way, a hub can support connecting a classic thunderbolt (TB3) device to a hypothetical host that only supports USB4 PCI-E encapsulation. The upstream port and downstream ports are independent, so one can use the TB3 data rate while the other can use USB4, and the hubs are required to pass the PCI-E data through a PCI-E switch, so everything just works. (Especially since talking TB3 is deliberately nearly the same as talking USB4, other than link speed).
USB2 is of course supported as well, in parallel to all the other modes, since it has its own separate data lines.
But USB4 is not perfect. For example, every USB4 port on a host must support DisplayPort output. That is not really a huge problem on, say, a laptop, but it is a pain for enthusiast PC building, since it means motherboards will need a DisplayPort passthrough socket in order to meet that requirement. And undoubtedly there are or will be many non-compliant devices out there that break the whole intended it-just-works approach.
Thanks for the explanation, I've been lost in the USB specification for a while. The last major read I did was on USB 3.0, so it's nice to have a summary. It will be easier to go dig deeper now.
>It was a GC via marriage though, rules may be different for employer sponsored GCs
Rules are indeed different for employment-based green cards. Marriage-based green cards don't have a limit, while the number of employment-based green cards that can be given out in a year is fixed (in total, and within that there is a 7% cap on how many each individual country can receive).
People stuck this way from India & China aren't even able to file for GCs. You can only file for a GC if your country's date is "current" (a restriction that doesn't apply to marriage-based GCs).
Their only legal basis to stay in the US is the H-1B, which they can keep renewing because they have an approved I-140. In my current situation, I have an approved immigrant petition and will have to continue getting H-1Bs approved until my date becomes current (which current projections put at about 40-50 years out). I've been in the US for a decade, and it will be many, many decades before I can remove the dependence on the H-1B.
The math is simple: there are about 400K Indians with approved immigrant petitions, and across the 2 categories the maximum number of green cards that Indians can receive in a year is about 6K. Each petition is roughly 2 green cards.
So if an Indian gets an immigrant petition approved today in 2020, they're looking at a wait of 800K/6K ≈ 133 years to even be able to file for a green card.
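Spelled out with the numbers above (a trivial back-of-the-envelope calculation, nothing more):

```python
petitions = 400_000   # approved Indian employment-based immigrant petitions
per_petition = 2      # each petition is roughly 2 green cards (spouse etc.)
annual_cap = 6_000    # max green cards per year for Indians, both categories

wait_years = petitions * per_petition / annual_cap
print(round(wait_years))  # ~133
```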
One thing is that a lot of Indians got GCs in EB-1 over 2000-2020. There were some Indian IT outsourcing firms that promoted people to higher roles for EB-1 GC purposes and then demoted them after they got GCs. Thousands of people got it (thanks to mind-boggling levels of office politics), and now the EB-1 queue is flooded due to this abuse (fraud?). EB-1 used to be current; now the wait is again a few years (<10).
Part of the problem here seems to be, at least to some extent, everyone (us Indians) flooding these queues while the native population keeps the quotas fixed (to control how quickly their society changes).
One also needs to realize there will always be limits to these things. Given that everyone wants to come to the US, they can't accommodate everyone. There will be limits: limits on the number of H-1Bs, limits on yearly GCs. What should the limit be? 6K, 60K, 600K? How much?
Imagine India doing this. We recently passed laws to restrict immigration. For some reason we expect to shut our doors to everyone, while simultaneously expecting the whole world to roll out red carpets for us.
HFS+ sparse images, sorry (not quite the same as sparse files, but here DfM uses sparse files to create a raw image, so I wanted to clear up some possible confusion and tripped up instead).
Space-shared APFS volumes inside a container give you the “table partitioning” you want. You can even set them up with different case sensitivity options: all your dev work in a case-sensitive volume, for instance, and Adobe software on a case-insensitive volume in the same space-shared container.
True! My disk-image-centered workflow comes from before APFS volumes were a thing; I haven't bothered to re-evaluate it. (It is nice that I can just schlep one of these volumes around by copying one file, rather than waiting for thousands/millions of small files to compress, copying the archive, and then decompressing on the other end, though. Do you know if there's an easy method of doing a block-level export from an APFS volume to a sparsebundle disk image, or vice-versa? That'd be killer for this.)
Well, APFS is much better suited to this kind of workflow. Create a space-shared logical volume inside your container and turn Spotlight off on that particular volume (and, if you’d like, make that volume case sensitive). There’s no need to separate that out into a disk image.
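A sketch of those steps, driving the stock macOS tools from Python - “DevWork” is a hypothetical volume name, and the container reference varies per machine (check `diskutil apfs list` for yours):

```python
import subprocess

CONTAINER = "disk1"  # placeholder: find your container via `diskutil apfs list`
VOLUME = "DevWork"   # hypothetical volume name

# Add a case-sensitive, space-shared volume to the existing APFS container.
subprocess.run(
    ["diskutil", "apfs", "addVolume", CONTAINER, "Case-sensitive APFS", VOLUME],
    check=True,
)

# Turn Spotlight indexing off for just that volume.
subprocess.run(["mdutil", "-i", "off", f"/Volumes/{VOLUME}"], check=True)
```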
There's still the problem of Time Machine (or any other backup software you use) needing to do a complete deep scan of the volume to ensure you haven't made any changes. If you know a git repo is effectively in a "read-only" state—something you just keep around for reference, or an old project you might "get back to" a few years from now—it can speed up those backups dramatically to put the repo into some kind of archive. Disk images, for this use-case, are just archives you can mount :)
There are some interesting pricing games being played by Apple. For all new models the 256GB price is the same; the 128GB and 32GB have been eliminated, and in their place is a single 64GB model priced ($700) between the old 32GB ($649) and 128GB ($749). Personally, the 128GB was the sweet spot - 32/64 is too little and 256 is too much.
>from APFS-normalization-preserving to APFS-native-normalization.
The developer documentation at [1] seems to suggest that "native" normalization is normalization preserving as well, and that native normalization is based on storing a hash of the normalized name instead of storing the normalized name itself.
"
and preserves both case and normalization of the filename on disk in all variants.
"
and
"
APFS preserves the normalization of the filename and uses hashes of the normalized form of the filename to provide normalization insensitivity, whereas HFS+ stores the normalized form of the filename on disk to provide normalization insensitivity.
"
edit - Even the linked blog post says the same thing:
"
macOS 10.13 will also support case-sensitive APFS, which will use native normalization. This is new in the developer beta. The filenames are still stored in the same way as prior APFS (not normalized like with HFS+), but APFS now uses normalization-insensitive hashes ...
"
>how this unicode normalization messes up filenames and causes duplicates and stale copies when roundtripping
Being normalization preserving should fix that, right?
>Imagine any non-English speaking person entering a non-ascii name for their document
You mean there are people in Europe, China, Japan, and India running into widespread problems when they create filenames in their own language on iOS 10.3+?
The clearest explanation is in one of the updates on that page:
"The most obvious problems arose with iOS users who transferred files from Windows (which prefers a different normalisation form to HFS+) which were named using Korean and other character sets, although this even included European languages with accented characters like ñ and é. There’s a chilling series of messages on the Apple Developer Forums in which an iOS app developer details how users running iOS 10.3 were transferring files using iTunes for Windows, but could not access those files once they were on an iOS device."
The "chilling series of messages" takes you to a link which details how File sharing through iTunes on Windows specifically was affected and that a similar file transfer product on Windows iMazing was able to fix to fix it by simply normalizing before transferring.
( The "bag of bytes" response from Apple asks developers to do exactly that btw )
That raises the question: why wasn't iTunes itself (on Windows) able to do the same thing as iMazing?
Anyway, that's moot now - as mentioned earlier in this thread, iOS 10.3.3 and iOS 11 are changing behaviour w.r.t. this.