No search engine is refreshing every website every minute. Most websites don't update frequently, and if you poll them more than once every month, your crawler will get blocked incredibly fast.
The problem of being able to provide fresh results is best solved by having different tiers of indices, one for frequently updating content, and one for slowly updating content with a weekly or monthly cadence.
You can get a long way by driving the frequently updating index via RSS feeds and social media firehoses, which provide signals for when to fetch new URLs.
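As a rough illustration (not anyone's actual pipeline), the fresh tier's scheduler can be as simple as polling a set of feeds and queueing only the entries published since the last poll. The feed URLs, cadence, and queue handling below are placeholders:

```python
# Minimal sketch: poll a handful of RSS feeds and push any URLs newer than the
# last poll into a "fresh tier" fetch queue; everything else stays on the
# slow (weekly/monthly) recrawl tier. Feed URLs and timings are made up.
import time
import feedparser  # pip install feedparser

FRESH_FEEDS = [
    "https://example.com/blog/rss.xml",   # placeholder feed URLs
    "https://news.example.org/atom.xml",
]

def collect_fresh_urls(feeds, since_epoch):
    """Return URLs whose feed entries were published after `since_epoch`."""
    fresh = []
    for feed_url in feeds:
        parsed = feedparser.parse(feed_url)
        for entry in parsed.entries:
            published = entry.get("published_parsed")
            if published and time.mktime(published) > since_epoch:
                fresh.append(entry.link)
    return fresh

if __name__ == "__main__":
    last_poll = time.time() - 15 * 60  # pretend the last poll was 15 minutes ago
    fresh_queue = collect_fresh_urls(FRESH_FEEDS, last_poll)
    # fresh_queue feeds the frequently updating index's fetcher;
    # URLs not surfaced by any feed stay on the monthly recrawl schedule.
    print(f"{len(fresh_queue)} URLs queued for the fresh tier")
```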
I meant this in response to the parent comment, which pointed out that Common Crawl only updates every month and seemed to imply that this was sufficient.
This is too slow for a lot of the purposes people tend to use search engines for. I agree that you don't need to crawl everything every minute. My previous employer also crawled a large portion of the internet every month, but most of it didn't update between crawls.
See also: IndexNow [1], a protocol used by Bing, Naver, Yandex, Seznam, and Yep where a site can ping any one of these search engines when a page is updated and all the others are notified immediately. Unfortunately, the requirements for joining as a participating search engine seem somewhat closed.
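For anyone curious what the publisher side looks like, here's a minimal sketch based on how the protocol is publicly documented. The host, key, and URLs are placeholders, and the exact payload fields should be checked against indexnow.org before relying on this:

```python
# Rough sketch of an IndexNow submission: after updating pages, POST the
# changed URLs to one participating engine's endpoint and the protocol says
# the other engines get notified too. Key, host, and URLs are placeholders.
import requests  # pip install requests

INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"

def notify_indexnow(host, key, changed_urls):
    """Submit a batch of updated URLs. The key must also be hosted at
    https://<host>/<key>.txt so the engine can verify site ownership."""
    payload = {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": changed_urls,
    }
    resp = requests.post(INDEXNOW_ENDPOINT, json=payload, timeout=10)
    resp.raise_for_status()  # 200/202 means the submission was accepted
    return resp.status_code

if __name__ == "__main__":
    notify_indexnow(
        host="www.example.com",
        key="0123456789abcdef0123456789abcdef",  # placeholder verification key
        changed_urls=["https://www.example.com/posts/new-article"],
    )
```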
The irony of a website from two major search engines looking like it was made in the early 2000s doesn't escape me. But, to my original point, there's absolutely no way they were ignorant of well-known URIs.