Photogrammetry on commercial flights (2021) (leifgehrmann.com)
234 points by tildef on Aug 30, 2023 | 38 comments



He notes that you need a lot of points and a fancy transformation to correct images while taking the differences in elevation within the scene into consideration. While it is true that having more points is important, the better way is to also consider the elevation of the identified points using a digital elevation model (DEM). That increases the accuracy of the transformation a lot and reduces the number of points needed. The idea is that you build a transformation from R^3 -> R^2 instead of just R^2 -> R^2, usually a rational polynomial function.

If anybody is interested, the word to search for is orthorectification.
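A minimal sketch of the R^3 -> R^2 idea, fit by least squares over synthetic ground control points. The first-order rational polynomial here is only illustrative (real RPC models used for satellite imagery are third-order with many more coefficients), and the function names are my own:

```python
import numpy as np

def fit_rational_poly(gcps_xyz, pix_uv):
    """Fit a first-order rational polynomial mapping
    (x, y, height) -> (col, row) by linear least squares.

    Model per pixel axis:
        u = (a0 + a1*x + a2*y + a3*z) / (1 + b1*x + b2*y + b3*z)
    which linearises to:
        a0 + a1*x + a2*y + a3*z - u*b1*x - u*b2*y - u*b3*z = u
    """
    x, y, z = np.asarray(gcps_xyz, float).T
    coeffs = []
    for t in np.asarray(pix_uv, float).T:  # solve u and v independently
        A = np.column_stack([
            np.ones_like(x), x, y, z,   # numerator terms
            -t * x, -t * y, -t * z,     # denominator terms
        ])
        c, *_ = np.linalg.lstsq(A, t, rcond=None)
        coeffs.append(c)
    return np.array(coeffs)  # shape (2, 7)

def apply_rational_poly(coeffs, xyz):
    """Map 3D ground points to 2D pixel coordinates."""
    x, y, z = np.atleast_2d(np.asarray(xyz, float)).T
    out = []
    for a0, a1, a2, a3, b1, b2, b3 in coeffs:
        num = a0 + a1 * x + a2 * y + a3 * z
        den = 1.0 + b1 * x + b2 * y + b3 * z
        out.append(num / den)
    return np.column_stack(out)
```

The point of the z term is exactly what the comment describes: two ground points at the same (x, y) but different elevations land on different pixels, and a plain R^2 -> R^2 warp cannot express that.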

Shameless plug: I recently published a post on my blog on how to calculate a projective transformation for an image if you know a few parameters of your camera (focal length and sensor size) and its position and orientation. My use case is satellite imagery, so this information is always available: http://maxwellrules.com/math/looking_through_a_pinhole.html
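As a rough illustration of that kind of calculation (this is my own sketch of an ideal pinhole model, not code from the linked post; the parameter names are assumptions):

```python
import numpy as np

def project(world_pt, cam_pos, R, focal_mm, sensor_mm, resolution_px):
    """Project a 3D world point to pixel coordinates for an ideal
    pinhole camera. R rotates world coordinates into the camera
    frame, with z pointing out of the lens."""
    # Express the point in the camera frame
    pc = R @ (np.asarray(world_pt, float) - np.asarray(cam_pos, float))
    if pc[2] <= 0:
        raise ValueError("point is behind the camera")
    # Perspective divide onto the image plane (millimetres on the sensor)
    x_mm = focal_mm * pc[0] / pc[2]
    y_mm = focal_mm * pc[1] / pc[2]
    # Convert sensor millimetres to pixels, origin at the image centre
    u = resolution_px[0] / 2 + x_mm * resolution_px[0] / sensor_mm[0]
    v = resolution_px[1] / 2 + y_mm * resolution_px[1] / sensor_mm[1]
    return u, v
```

With focal length, sensor size, position, and orientation known (as they are for a satellite), there is nothing left to estimate.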


Ahhh your blog post was precisely what I needed a few years ago. Maybe it'll come up again :)


Despite the author's criticisms, it seems like there's lots of opportunity for UAV-generated open source imagery, but I can't really find an active community for sharing it.

OpenAerialMap[1] seems like a good start, but doesn't seem to be particularly active.

Seems like we could use a "Mapillary[2] but from Above" type of project - only one that doesn't end up getting acquired by Facebook.

[1] https://openaerialmap.org/

[2] https://www.mapillary.com/


https://github.com/OpenDroneMap/ODM

You take a drone, point the camera mostly down (at a slight angle, not straight down), take photos of the land with some overlap, preferably from different angles, load them into the software, and it creates an orthophoto, a 3D model, and a height map, all georeferenced.


ODM is a way to preprocess data before sending it to OAM, but it does not itself provide a sharing platform.

One barrier for OAM seems to be that it requires the image to be already orthorectified. To be more like Mapillary you'd need a service that takes georefed pictures and does its own processing.

In the case of airplane window photos where georef is not going to be good enough, you might need existing photos to correlate with -- ADS-B track combined with time can help provide a starting guess, but not much else.


I tried to run the numbers on a similar idea some years back and our conclusion was that the processing power needed to do the photogrammetry ends up being more expensive than just renting long-term read access to the downlink from a high-resolution satellite.


You can also do this to create stereo photography!

https://en.wikipedia.org/wiki/Stereo_photography_techniques#...



> But other than taking a few photos of holiday mementos and lens-flaring sunsets, what’s the point?

OT, but for me the point is not having my body absolutely panic from experiencing all kinds of rotation and sudden lateral displacement without anything happening visually. Honestly, I have no idea how people fly anywhere else, I wouldn’t be able to. The speeds and forces experienced even on a calm commercial flight are, as far as human evolution goes, total nonsense.


What you experience on a regular flight is a complete non-issue compared to what you can experience in a car, in an amusement park, or in VR.

And if you remove the direct comparisons, then people do things like say, war, parachute jumping, or underwater welding that are way more extreme.

Really, like everything, it's just a matter of getting used to it. As a kid, flying used to be amazingly exciting. Then I got a job that involved flying twice a week and it very quickly became routine.

The weird forces also don't last very long at all. For the vast majority of the flight you're just sitting in a chair, and it feels exactly like that.


> For the vast majority of the flight you're just sitting in a chair, and it feels exactly like that.

To me it feels like sitting in a chair that’s hurtling forward at 900 km/h and randomly rising and falling by a few meters. I’m not forgetting that for a second. It’s not that uncomfortable at cruise altitude, but I’m definitely very aware of what’s happening and how even tiny changes in pitch are tied to quite large and long acting G forces (compared to sitting in a chair, not to a rollercoaster).


It's mostly the direction: you don't get a lot of vertical acceleration on land except on a bumpy road or at an amusement park. The slight loss of gravity takes some getting used to.


Same way I have no idea how people get dizzy so easily, do you get dizzy when on a boat, or when using a VR headset?


Yeah, absolutely. Any VR game that moves the camera (me) without me commanding it is torture. Like, I bump the stick by accident, the character takes two steps forward and I immediately feel a tug in my stomach and a wave of dizziness and panic.


Pilots can have the opposite problem: a plane smoothly banking at night can feel like nothing at all. So they have to trust their instruments not their inner ear.


To explain the underwhelmed response: I guess most people were expecting 'Google Maps quality' 3D models. Which is not an unreasonable expectation, given that converting photos from an aerial platform such as a drone into 3D models of large areas is commoditized. Just dump the photos into an application such as Agisoft Metashape or Luma, wait a bit, and you can get something like this, for example: https://skfb.ly/6DvVP


wow - that's pretty good


This is very cool! How feasible would it be to take a video instead of a photo, then using landmark detection and a stitching algorithm such as SIFT to cover a larger surveying area?


I didn’t use SIFT, but instead a very dumb old trick: the slit-scan photo: http://cscheid.net/static/windowseat/

This is 5h of a single video from EWR to SFO by a former colleague. Turns out even a dumb trick like this is still enough to pick out a bunch of geographical features!
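The slit trick itself fits in a few lines: grab one pixel column from each video frame and stack the columns side by side, so the ground scrolling past the window paints a long continuous strip. A minimal sketch, assuming the frames arrive as numpy arrays (the function name is mine):

```python
import numpy as np

def slit_scan(frames, slit_x=None):
    """Build a slit-scan strip: take a single pixel column from each
    frame and concatenate the columns left to right."""
    frames = [np.asarray(f) for f in frames]
    if slit_x is None:
        slit_x = frames[0].shape[1] // 2  # default: centre column
    return np.stack([f[:, slit_x] for f in frames], axis=1)
```

No feature matching at all: the only assumption is that the scene moves past the slit at a roughly constant rate, which is why the output looks stretched or squashed wherever the ground speed or terrain distance changes.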


A better solution would be to use a program that does what most people think of when they hear the word photogrammetry these days, 3D reconstruction from multiple images, and then make an orthorectified image from that.

OpenDroneMap can do that for example.

https://www.opendronemap.org/webodm/


An aerial photo that's orthorectified with ground images as a secondary source would provide better normals. Of course, we'd also use differential GNSS base-point targets to stitch the images together. It's difficult to get consistent color temperature, exposure, etc. with multiple ground images shot at various times throughout the day.


You can think of a video as a set of images taken by the same camera from different perspectives. So you can treat them as if they came from different cameras (with the added bonus that the camera intrinsics remain the same) and apply "multi camera" techniques. I'm assuming you "move" the camera for parallax and what not.

With this, you can retrieve depth information by correlating the difference in position of easily identifiable points, and recreate the scene as a mesh.

This is basically the basis of photogrammetry as I know it. AI solutions may help at various stages to speed up the process too.
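The depth-recovery step described above usually comes down to triangulating each matched point from two (or more) views. A hedged numpy sketch of the standard linear (DLT) triangulation, assuming the 3x4 projection matrices for both views are already known from earlier pipeline stages:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation: given 3x4 projection matrices for
    two views and a matched image point in each, solve for the 3D
    point that projects to both."""
    u1, v1 = uv1
    u2, v2 = uv2
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point (u*P[2] - P[0])·X = 0, (v*P[2] - P[1])·X = 0
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest
    # singular value, then de-homogenise
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

Run over every matched point, this yields the point cloud that meshing then turns into the scene.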


A big part of my day job is to develop an image processing pipeline for a satellite constellation.

Using SIFT or another keypoint detector is one of the ways to do georeferencing. You take a basemap and your image, match keypoints on both, then calculate your transformation. There are a few things that make the problem harder than just "use SIFT", but it is a good starting point.
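For the "calculate your transformation" step, one common choice once keypoints are matched is a planar homography estimated with the direct linear transform. A minimal numpy sketch (real pipelines wrap this in RANSAC to reject bad matches, and, per the earlier comments, use a DEM-aware model when terrain relief matters):

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src from >= 4
    point correspondences, via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Two rows per correspondence, from u,v = (Hp)_xy / (Hp)_z
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # Null vector of A: right singular vector of smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalise so the bottom-right entry is 1
```

With the homography in hand, the image can be warped onto the basemap's pixel grid.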


That would be more like photogrammetry


This post reminds me that I have been unable to find good data on local elevation that's not an OS map, a spheroid, or estimating from google earth.

Still disappointed that good free map sources are all flat, from what I can tell.


Seems much easier than when I had to do this manually onto Google Earth.


Check the laws in your country before doing this, in Sweden you need permission to distribute aerial photos.


The internet makes that problematic. Does 'distribution' include 'share on Facebook'?

I've been wondering for some time, how far do national laws about data sharing matter any more?


Yes, any distribution including online. Personally I think it's a bit anachronistic to ban distribution of photos now when it's so easy to take and distribute them, not to mention all kinds of commercial services with free aerial and street view photos. People do get prosecuted for it still, so it very much matters.

https://www.lantmateriet.se/en/dissemination-permit/


I share such photos, and never got arrested. But I'm not Swedish.

If you're Swedish and share such photos in the US, is it a crime?

Where is the 'internet' at? Certainly not in any particular nation.

Thus my confusion.


law !== enforcement


That's why "nothing to hide" applies to very few people: most people could be taken to court for something and punished for it by a judge or jury. Let alone sued.


This is not photogrammetry as the word is usually understood these days.

Photogrammetry usually means constructing a 3D model out of a number of 2D photos from lots of different angles, although there are broader definitions as well [1].

This is just skewing a photo you took out the window to overlay it on a map.

From the title, I was expecting this to be something about constant super-hi-res photography attached to commercial flights that would actually let you build 3D models of the landscape...

[1] https://en.wikipedia.org/wiki/Photogrammetry


I think that's probably a function of the context you're encountering photogrammetry in; its literal meaning (as described in the Wikipedia article) is "measuring stuff from photos." I believe what you're describing is generally referred to as "3D reconstruction."

For what it's worth, this article was pretty much exactly what I anticipated it to be. But language is funky, and obviously other people shared your expectation (which makes it a good comment in my view).


Even in 3D rendering for animation, video games, archviz, etc. I often hear photogrammetry discussed in the context of making reusable textures, not just making specific 3D models.


Correct, the technique is called orthorectification.


That's what I was expecting as well. Still a cool project nonetheless.


pshh, Big Globe has been using photogrammetry for its "window" displays for decades.

/s



