Regarding the data granularity, I'm curious if you ever managed to figure out how to get high-granularity (daily) data over a large timeframe. For those who don't know, Google "helpfully" changes the granularity for you when you request longer ranges, and normalizes the response so that the largest datapoint in any given range is 100, so you can't just concatenate consecutive requests.
My very hacky attempt at solving this involves overlapping short range requests, and finding the scaling factor of best fit between them: https://github.com/bspammer/rebuild_trends
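For anyone curious, the core of the trick is just a least-squares fit between the overlapping portions of two normalized windows. Here's a minimal sketch (not the actual repo code) assuming both windows have already been fetched as pandas Series indexed by date; the names `stitch`, `left`, and `right` are mine:

```python
import numpy as np
import pandas as pd

def stitch(left: pd.Series, right: pd.Series) -> pd.Series:
    """Rescale `right` onto `left`'s scale using their overlapping dates, then concatenate."""
    overlap = left.index.intersection(right.index)
    if overlap.empty:
        raise ValueError("windows must overlap to estimate a scaling factor")
    a = left.loc[overlap].to_numpy(dtype=float)
    b = right.loc[overlap].to_numpy(dtype=float)
    # Least-squares scaling factor k minimizing ||a - k*b|| over the overlap.
    k = float(np.dot(a, b) / np.dot(b, b))
    rescaled = right * k
    # Keep left's values on the overlap, append the rescaled tail of right.
    return pd.concat([left, rescaled.loc[~rescaled.index.isin(left.index)]]).sort_index()
```

Slide a fixed-length window across the whole timeframe and fold each new window in like this and you can rebuild a longer daily series, though the error compounds the further you get from the first window, which is part of why it's hacky.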
I'd be really interested in hearing if there are better approaches out there to squeeze this data out of Google.
Yeah, you're spot on with those data limitations. I think the easiest solution would be to just store the more recent, granular data and accumulate it over time. But there's no way to get granular historical data, as far as I'm aware.
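If anyone wants to go the accumulation route, a rough sketch of what that loop could look like, assuming a hypothetical fetch_last_90_days() that returns a daily, 0-100 normalized pandas Series with a DatetimeIndex (the storage path and function names are mine, purely illustrative):

```python
import os
import pandas as pd

STORE = "trends_daily.csv"

def accumulate(fetch_last_90_days) -> pd.Series:
    """Fold the latest short daily window into a locally stored long-running series."""
    new = fetch_last_90_days()
    if not os.path.exists(STORE):
        new.to_csv(STORE)
        return new
    old = pd.read_csv(STORE, index_col=0, parse_dates=True).iloc[:, 0]
    overlap = old.index.intersection(new.index)
    # Match scales on the overlap so the stored history stays consistent,
    # since every fresh request is re-normalized to its own max of 100.
    scale = old.loc[overlap].mean() / new.loc[overlap].mean() if len(overlap) else 1.0
    merged = pd.concat([old, (new * scale).loc[~new.index.isin(old.index)]]).sort_index()
    merged.to_csv(STORE)
    return merged
```

Run something like that on a schedule and you at least keep daily resolution going forward, even if the backfill problem stays unsolved.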