My guess is that it's less about performance and more about reliability. I've had three incidents in the last two years that required connecting to a server whose out-of-control process pool had exhausted PIDs.
I also had a storage controller go and had to figure out how to use shell builtins to create a tmpfs and reflash the firmware.
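To give a flavor of what that looks like (the paths here are just examples, and the read-based copy only works for text files, since read can't carry NUL bytes):

    # "cat" a small text file using only the read and printf builtins
    while IFS= read -r line; do printf '%s\n' "$line"; done < /etc/hostname

    # "ls" a directory with pathname expansion instead of /bin/ls
    printf '%s\n' /dev/*

    # copy a small text config using redirection only
    while IFS= read -r line; do printf '%s\n' "$line"; done < app.conf > /tmp/app.conf.bak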
There are many reasons to provide builtins for basic file commands, from saving the extra process start to the tiny performance boost for scripts that should probably be using find instead.
It seems like this would eliminate the need to load the external binary from disk and fork a new process, both of which take time. If you're just removing a single file, that's probably negligible, but if you have a script iterating over 10k files, the speed-up may be more welcome.
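As a rough sketch of what that buys you, assuming your bash build ships the optional loadable builtins (the /usr/lib/bash path and whether the rm loadable is included at all vary by distro):

    # load rm as a builtin so the loop below stops forking /bin/rm
    enable -f /usr/lib/bash/rm rm
    type rm                     # now reports: rm is a shell builtin

    time for f in /tmp/scratch/*; do rm -- "$f"; done

    # disable the loadable to compare against the external rm
    enable -n rm
    time for f in /tmp/scratch2/*; do rm -- "$f"; done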
That's probably a good sign you should move on from shell. If the script is trivial, it's going to be trivial in python / ruby / go / crystal / ... as well. If it's not trivial, that's another reason to move.
I agree that as scripts get more complex, you should migrate from bash. But the number of files a script touches says almost nothing about its complexity.
True, but if you are iterating over 10k files and removing them, then a find | xargs pipeline, with xargs passing rm as many arguments as the kernel allows per exec, will likely be faster than a bash interpreter loop, even with a bash builtin rm.
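Roughly the comparison I have in mind (directory and pattern are placeholders; -print0/-0 assume GNU or BSD find and xargs):

    # bash loop: one rm invocation per file, i.e. a fork+exec each time
    # (or just the loop overhead if rm is a builtin)
    for f in ./scratch/*.log; do rm -- "$f"; done

    # find | xargs: batches as many paths into each rm invocation as ARG_MAX allows
    find ./scratch -name '*.log' -print0 | xargs -0 rm --

    # or skip rm entirely and let find unlink the files itself
    find ./scratch -name '*.log' -delete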