Part of the problem is that the wake word "Hey Siri" is actually handled by a separate coprocessor (the AOP, Apple's always-on processor) with the model compiled into the firmware. While anything is technically possible, it isn't as simple as just letting the Google app run in the background, since the AP (application processor) is asleep when any of these gestures happen. You could probably set up the Action button on the side to open an assistant, but that's going to be a less pleasant experience (the app might not be open, etc.).
You can now set up Vocal Shortcuts[1], which can run any shortcut or action with almost any trigger word, without saying "Siri". However, I'm not certain whether it can wake the device from sleep.
Same with Android phones - a super-specific hardcoded phrase is much easier to fit within the power budget required for an "always on" part of the device.
It's why a manufacturer (like Samsung) can change that sort of thing on their devices, but it's not realistically something an end user (or even an app) can customize in software. It's not some "arbitrary" limitation.
Back in 1992 or so, the NeXT could distinguish (was it 16 or) 64 fixed, trained phrases. The point being, it doesn't take much compute with a finite vocabulary.
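To make that point concrete: with a small fixed vocabulary, detection can be as simple as comparing an incoming feature vector against a handful of stored templates. This is a toy sketch, not Apple's or NeXT's actual method (real systems use DNN acoustic models, per the linked paper) - the feature vectors and threshold here are made up for illustration:

```python
# Toy fixed-vocabulary phrase detector: nearest stored template wins,
# but only if it clears a similarity threshold. The "features" are
# stand-ins for whatever acoustic features a real system would extract.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Pretend these are trained feature templates for each known phrase.
templates = {
    "hey siri":  [0.9, 0.1, 0.4, 0.8],
    "okay nabu": [0.2, 0.9, 0.7, 0.1],
}

def detect(features, threshold=0.95):
    best_phrase, best_score = None, 0.0
    for phrase, tmpl in templates.items():
        score = cosine_similarity(features, tmpl)
        if score > best_score:
            best_phrase, best_score = phrase, score
    return best_phrase if best_score >= threshold else None

print(detect([0.88, 0.12, 0.41, 0.79]))  # close to the "hey siri" template
```

The work per audio frame is just a dot product per stored phrase, which is why a finite, hardcoded vocabulary fits in a tiny power budget while open-ended recognition doesn't.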
There are open solutions for that, like openwakeword and microwakeword (the latter can even run on an ESP32!).
The training is a lot of work though, and requires a lot of material. For Home Assistant's Voice Preview, they had tens of thousands of volunteers record the "okay nabu" wake word, and even then it doesn't work quite as well as "Hey Siri" on Apple devices.
Details are listed below
https://machinelearning.apple.com/research/hey-siri