I'm confused, search indexing isn't a realtime exercise... Why would performance be an issue? Running a headless browser vs running "whatever it is they run that can execute JS" doesn't seem like a huge leap...
At Google's scale even a small performance drop can have massive implications. If Google's crawl rate is 100 million[1] pages a day, then a 1% drop in crawl rate means 1 million fewer pages crawled per day (which has many implications, for example having to use more compute power to regain crawl rate, which raises costs, etc.)
You are right that it is not a real-time exercise but they do have crawl targets.
You cannot be flippant about "Why don't they just do X" when scale is that big.
[1] Picked out of the air, but probably the right order of magnitude (or even a little small)
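The arithmetic above can be sketched quickly. Both the baseline crawl rate and the fleet size here are made-up illustrative numbers, not real Google figures:

```python
import math

# Back-of-the-envelope impact of a 1% crawl-rate drop.
# All constants are assumptions for illustration only.
PAGES_PER_DAY = 100_000_000   # assumed baseline crawl rate
DROP = 0.01                   # 1% slowdown per page
FLEET_SIZE = 10_000           # assumed crawler machine count

# Pages no longer crawled each day at the reduced rate.
pages_lost_per_day = int(PAGES_PER_DAY * DROP)

# Extra machines needed to restore the original throughput:
# each machine now does (1 - DROP) of its old work, so the
# fleet must grow by DROP / (1 - DROP).
extra_machines = math.ceil(FLEET_SIZE * DROP / (1 - DROP))

print(pages_lost_per_day)  # 1000000
print(extra_machines)      # 102
```

The second figure is the point: even a tiny per-page slowdown translates directly into more hardware (and money) just to stand still.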
The point is that Google probably doesn't have a lot of cycles to spare - anything else wouldn't be good business sense.
Anything that significantly adds to the load will lose them money - whether or not the operation needs to be realtime is secondary to that.
I apologise for giving offense: I wrote the comment the same way I would have made it face-to-face, which is always a bit risky in a purely textual medium.
I don't know if you are trying to be serious at this point or not. Google has millions (literally) of machines with dozens of cores each. Search is their business that makes all the money.
Google executes JavaScript and renders the full DOM for every page internally. They generate full-length screenshots of every page and keep pointers to where text appears on the page, so they can highlight phrases within the screenshot.
Whether Google reuses the Chrome engine to do this isn't even a debatable question.