The simple answer is that it actually works for real-world software, is microarchitecturally feasible and flexible, and architecturally enforces non-forgeability (which is crucial for in-address-space compartmentalisation of mutually distrusting software). Most schemes that take the metadata-table-on-the-side approach fall down on those last two points. MPX is particularly notorious for tanking performance, for race conditions (loading the bounds is not atomic with loading the address), and for an extremely limited number of bounds registers (four, which is even more cramped than the already small register set of 32-bit x86), so you're constantly spilling and reloading bounds data from memory. I don't think any of them have been shown to work across the entire software stack, from the kernel to core userspace runtime parts to graphical desktops like KDE.
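To make that race concrete, here's a rough software model of the table-on-the-side pattern. This is purely illustrative C, not actual MPX intrinsics; load_bounds, checked_read and the one-entry table are made-up names. The point is that the pointer and its bounds arrive via two separate loads, so nothing stops a concurrent update from landing in between, whereas a CHERI capability load fetches address, bounds and tag as one unit.

    /* Illustrative sketch only, not real MPX code or intrinsics: it models why
     * keeping bounds in a side table is racy. The pointer and its bounds are
     * read with two separate loads, so a concurrent update can slip between them. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct bounds { uintptr_t lower, upper; };

    /* Hypothetical one-entry "bounds table" keyed by the slot's address. */
    static struct bounds table_entry;

    static struct bounds load_bounds(char **slot)
    {
        (void)slot;          /* real MPX indexes a table by the slot's address */
        return table_entry;  /* separate load: NOT atomic with loading *slot   */
    }

    static int checked_read(char **slot, size_t off)
    {
        char *p = *slot;                      /* 1. load the pointer           */
        /* ... a concurrent thread may swap *slot and its table entry here ... */
        struct bounds b = load_bounds(slot);  /* 2. load possibly stale bounds */

        uintptr_t a = (uintptr_t)p + off;
        if (a >= b.lower && a < b.upper)      /* check can pass on mismatched bounds */
            return p[off];
        return -1;
    }

    int main(void)
    {
        static char buf[16] = "hello";
        char *ptr = buf;
        table_entry = (struct bounds){ (uintptr_t)buf, (uintptr_t)buf + sizeof buf };
        printf("%d\n", checked_read(&ptr, 1));  /* 'e' in the single-threaded case */
        return 0;
    }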
I'll leave it to others to go into technical details, but the most obvious answer is that this effort has major industry players behind it, meaning it might actually make it into production.
This isn't a memory tagging system at all and has capabilities far beyond that (pun intended), so I don't know why other approaches like MTE that are already out in the wild are relevant here.
They're relevant because they're technologies relating to memory safety and provide some level of additional protection. However, they rely on secrets and are in general only probabilistic, so they don't deterministically mitigate all memory safety issues (you can deterministically mitigate some with clever allocations of memory "colours", but not all). CHERI and MTE-like schemes also both rely on the use of tagged memory, but in rather different ways.
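For anyone who hasn't looked at MTE-style colouring, here's a tiny software model of both halves of that claim. This is my own simplified sketch (4-bit colours on 16-byte granules, invented names, not the real Arm instructions or intrinsics): giving adjacent allocations different colours catches a linear overflow deterministically, while an arbitrary stray pointer is only caught probabilistically, because with 16 colours there's a 1-in-16 chance of an accidental match.

    /* Simplified software model of an MTE-like colouring scheme.
     * Assumptions: 4-bit colours, 16-byte granules; purely illustrative. */
    #include <stdint.h>
    #include <stdio.h>

    #define GRANULE       16
    #define HEAP_GRANULES 8

    static uint8_t       mem_colour[HEAP_GRANULES];       /* colour per granule */
    static unsigned char heap[HEAP_GRANULES * GRANULE];

    struct tagged_ptr { unsigned char *addr; uint8_t colour; };

    /* "Allocate" one granule and colour both the memory and the pointer. */
    static struct tagged_ptr alloc_granule(size_t idx, uint8_t colour)
    {
        mem_colour[idx] = colour & 0xF;
        return (struct tagged_ptr){ heap + idx * GRANULE, colour & 0xF };
    }

    /* Every access checks the pointer's colour against the granule's colour. */
    static int checked_store(struct tagged_ptr p, size_t off, unsigned char v)
    {
        size_t idx = (size_t)(p.addr + off - heap) / GRANULE;
        if (idx >= HEAP_GRANULES || mem_colour[idx] != p.colour)
            return -1;                /* tag mismatch: a trap in real MTE */
        p.addr[off] = v;
        return 0;
    }

    int main(void)
    {
        /* Adjacent allocations get different colours, so a linear overflow
         * from a into b is caught deterministically.  A wild pointer with a
         * random colour would still match 1 time in 16 (probabilistic). */
        struct tagged_ptr a = alloc_granule(0, 0x3);
        struct tagged_ptr b = alloc_granule(1, 0x4);

        printf("in-bounds store via a: %d\n", checked_store(a, 5, 'x'));       /*  0 */
        printf("valid store via b:     %d\n", checked_store(b, 0, 'y'));       /*  0 */
        printf("overflow from a into b: %d\n", checked_store(a, GRANULE, 'z')); /* -1 */
        return 0;
    }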