One major advantage is that approaches like this just work, with zero configuration and no persistent state to go stale. `git grep` (and ag) are very fast on modern laptops, which changes the calculus dramatically from 10 or 20 years ago, or whenever TAGS files were introduced.
Building a tags file is not trivial. For example, if you clone a random repository from GitHub, you want to just jump around in it without the extra steps of generating a tags file and keeping it up to date.
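For what it's worth, the generation step itself can be driven from Emacs with one command. A minimal sketch, assuming Universal Ctags is installed as `ctags` and that the project root is the directory containing `.git` (the name `my/regenerate-tags` is made up here):

```elisp
;; Minimal sketch: rebuild an Emacs-format TAGS file at the project root.
;; Assumes Universal Ctags is on PATH and the root is where .git lives.
(defun my/regenerate-tags ()
  "Run `ctags -e -R .' in the project root to rebuild TAGS."
  (interactive)
  (let ((default-directory (or (locate-dominating-file default-directory ".git")
                               default-directory)))
    ;; -e produces an Emacs-style TAGS file, -R recurses into subdirectories.
    (shell-command "ctags -e -R .")))
```

That still leaves the "keeping it up to date" half unsolved, of course.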
Maybe this has changed in the last few years, but the problem with TAGS files was that you have to rebuild them on every change you make. I had a cron job that rebuilt the TAGS files for the projects I was working on, but that's neither efficient nor always up to date. Maybe a process that monitors changes to files is a better approach. Still, rebuilding the TAGS file is pointless if ag is fast enough; SSDs and lots of RAM to cache files help.
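From inside Emacs, the cheapest approximation of that watcher idea is a save hook rather than a real filesystem monitor. A sketch, reusing the hypothetical `my/regenerate-tags` from the snippet above and treating "has a TAGS file at its root" as the signal that a project should be watched:

```elisp
;; Sketch: rebuild TAGS whenever a file is saved inside a project that
;; already has a TAGS file at its root. Crude (it rebuilds the whole file
;; on every save), but it stays current without a cron job.
(defun my/maybe-regenerate-tags ()
  "Rebuild TAGS if the saved file belongs to a project that has one."
  (when (locate-dominating-file default-directory "TAGS")
    (my/regenerate-tags)))

(add-hook 'after-save-hook #'my/maybe-regenerate-tags)
```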
The only problem with ag is that it can't find definitions until you save the buffer to a file, since it only searches what's on disk. Not that ctags does any better. Some tool running inside Emacs could cover that case too, but it's not that important.
It shouldn't be too hard to write a grep-buffers command (I was using one version for a while), exclude open files from the ag search, and merge the results of ag and grep-buffers into a single output (eh, in fact I should just write it).
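The grep-buffers half is the easy part. A rough sketch (the command name `my/grep-buffers` is invented here; excluding the open files from the ag run and merging the two outputs is left out):

```elisp
;; Rough sketch of a grep-buffers command: search every file-visiting
;; buffer for a regexp and collect grep-style FILE:LINE:TEXT matches.
(defun my/grep-buffers (regexp)
  "List lines matching REGEXP in all file-visiting buffers."
  (interactive "sGrep open buffers for: ")
  (let ((out (get-buffer-create "*grep-buffers*")))
    (with-current-buffer out (erase-buffer))
    (dolist (buf (buffer-list))
      (when (buffer-file-name buf)
        (with-current-buffer buf
          (save-excursion
            (goto-char (point-min))
            (while (re-search-forward regexp nil t)
              (let ((hit (format "%s:%d:%s\n"
                                 (buffer-file-name)
                                 (line-number-at-pos)
                                 (buffer-substring-no-properties
                                  (line-beginning-position)
                                  (line-end-position)))))
                (with-current-buffer out (insert hit)))
              ;; One hit per line, and guaranteed forward progress even if
              ;; REGEXP can match the empty string.
              (forward-line 1))))))
    (display-buffer out)))
```

Merging that with an ag run is then mostly a matter of concatenating the two result buffers, either skipping the files that are open or just tolerating a few duplicate lines.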
Perhaps, but I've never seen a straightforward guide to doing that, particularly one generic enough to automatically work with new projects. I'd be thrilled to be proven wrong.