Pretty accurate. And if you ever get asked "why do we have to do that useless thing you said?" and you reply with expressions like "future-proofing" or "anticipation", eyes might get rolled at you, and management might even push back for the sake of tight deadlines or YAGNI.
Creating a 3D model out of 2D images requires computer vision to extract objects from the images and estimate their dimensions (including elevation). That would most likely mean implementing an end-to-end deep learning model that needs training, validation, and test sets. Given the amount of data it would have to deal with (hundreds of thousands to millions of images), it would need to load (high-dimensional?) images in batches for processing. This can still arguably be done on AWS or Azure (or ...) with TensorFlow and HPC, but two things here: HPC brings a bit more overhead to the table, and a supercomputer could do better, since none of the current cloud providers offer supercomputers that can compete, especially in terms of CPU performance.
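To make the batching point concrete, here's a minimal sketch of how such a pipeline might stream images in batches with tf.data. The glob pattern, image size, and batch size are made-up placeholders, not anything from the article:

```python
import tensorflow as tf

IMG_SIZE = (512, 512)   # hypothetical target resolution
BATCH_SIZE = 64         # hypothetical batch size

def load_image(path):
    """Read, decode, and resize a single image file."""
    raw = tf.io.read_file(path)
    img = tf.image.decode_jpeg(raw, channels=3)
    img = tf.image.resize(img, IMG_SIZE)
    return img / 255.0  # scale pixel values to [0, 1]

# Stream images off disk in parallel batches, so the full
# dataset never has to fit in memory at once.
dataset = (
    tf.data.Dataset.list_files("images/*.jpg")  # hypothetical path
    .map(load_image, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(BATCH_SIZE)
    .prefetch(tf.data.AUTOTUNE)
)
```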
There's no reason it needs a DL model. There's plenty of software that calculates tie points and creates point clouds from pictures, which is almost certainly what they're going to do here. DL to go from orthoimages to a point cloud, if it's a thing at all, is probably still at the feasibility stage.
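For reference, the classical (non-DL) route looks roughly like this: detect features in overlapping photos and match them to get tie points. A minimal sketch using OpenCV's ORB detector, where the file names are placeholders:

```python
import cv2

# Two overlapping photos of the same scene (hypothetical files).
img1 = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute binary descriptors in each image.
orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matcher with cross-check for reciprocal matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Each match pairs a pixel in img1 with a pixel in img2: a tie point.
tie_points = [
    (kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in matches[:500]
]
```

From tie points like these, the usual pipeline estimates camera poses and triangulates a point cloud.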
The steps are all fairly easy to parallelize until you get to a final large-scale nonlinear least-squares refinement step, and even then there are tricks to make the decomposition tractable. It usually just involves single images or pairs of images, with no need for communication between processes until the last bit.
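A toy illustration of that structure, assuming a per-image feature-extraction step (the function and file layout here are hypothetical): each worker handles its images independently, so a plain process pool is enough until the final refinement.

```python
from multiprocessing import Pool

import cv2

def extract_keypoints(path):
    """Per-image step: detect ORB keypoints in one photo."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    keypoints = cv2.ORB_create().detect(img, None)
    return path, [k.pt for k in keypoints]

if __name__ == "__main__":
    paths = [f"images/{i:05d}.jpg" for i in range(1000)]  # hypothetical layout
    with Pool() as pool:
        # Workers never talk to each other; the results only come
        # back together for the final least-squares refinement.
        per_image = pool.map(extract_keypoints, paths)
```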
Why would Google acquire a startup that barely raised $15M over two rounds? There seems to be more to this story. Perhaps the Alooma team is extremely talented? Any ideas?
I'm gonna have to side with you here; I don't see the point of what they're doing, except for distinction and marketing. Distinction, as in, if you have total control of how your car works, it's easier to create functionality that competitors don't necessarily have. Marketing, as in, marketing.
As a software engineer and a big DL enthusiast, I find this a bit worrying, for two reasons:
1- We can already easily build an NLP model that writes code in a given language without syntax/build or runtime errors.
2- I wasn't worried so far because the code a model can write doesn't carry any business logic, and I'd figured that as long as reasoning is yet to be "discovered" in AI, we'd be fine. DeepMind now seems to be focusing on exactly that.
If a job as complicated as implementing code with business logic in it can be done by an AI, I don't care whether it's an AGI or not; that's already a bit troubling.
I guess we'll first see IDEs with a built-in code-completion feature. I suppose this could work first in cases where coding is boring, such as refactoring.