
Those are some very cool 3D visualizations, but it's a bit difficult to understand what form the dataset they were generated from takes. They say "in-the-wild" photography, but that doesn't really give you a great sense of what's actually in it.

The light->dark transitions having consistent geometry is clean though.



We use images from the Image Matching Challenge 2020 dataset. If you look at the Appendix, we list how many images we use and the process by which they were chosen.

Download and have a look! https://vision.uvic.ca/image-matching-challenge/data/


Thanks, that's a clean reference.


> They say "in-the-wild" photography, but of course don't really give you a great sense.

Flickr user photos. The citation shows up in the lower right-hand corner during the video.

This appears to be a substantial improvement on current open photogrammetry/structure from motion work [1]. I hope Google supports this making its way into cultural preservation efforts [2].

[1] https://github.com/mapillary/OpenSfM (developed by Mapillary, now part of Facebook)

[2] https://www.nytimes.com/2015/12/28/arts/design/using-laser-s... (Using Lasers to Preserve Antiquities Threatened by ISIS)
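
For anyone who hasn't tried [1]: OpenSfM is fairly turnkey to run over a folder of photos. A minimal sketch of driving it from Python is below; the clone location, dataset path, and layout (photos under an images/ subfolder) are assumptions taken from the project's README, not anything from the paper.

    # Rough sketch: run the OpenSfM pipeline from [1] over a local folder of photos.
    # Assumptions: the repo is cloned and its dependencies installed, and the dataset
    # directory follows OpenSfM's expected layout (photos under <dataset>/images/).
    import subprocess
    from pathlib import Path

    OPENSFM_ROOT = Path("~/OpenSfM").expanduser()          # where the repo was cloned (assumption)
    dataset = Path("~/datasets/my_landmark").expanduser()  # must contain images/*.jpg (assumption)

    # bin/opensfm_run_all chains the individual steps (extract_metadata, detect_features,
    # match_features, create_tracks, reconstruct, ...) and writes reconstruction.json
    # into the dataset directory.
    subprocess.run(
        [str(OPENSFM_ROOT / "bin" / "opensfm_run_all"), str(dataset)],
        check=True,
    )

    print("reconstruction written to", dataset / "reconstruction.json")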


Yes, I mostly meant that I don't get a great sense of "how many photos there are" in these datasets.

I saw that citation [13] in the paper points to https://arxiv.org/pdf/2003.01587.pdf, which says the following in Section 3:

> We thus build on 25 collections of popular landmarks originally selected in [48,101], each with hundreds to thousands of images.

So hundreds to thousands of photos per scene are used, which is a decent amount of data, but the quality of the result is still very impressive.
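
If you download the IMC data linked above, you can sanity-check those numbers by just counting image files per scene. A quick sketch, assuming the archives unpack into one directory per landmark (the exact layout isn't something I've verified):

    # Count photos per scene in a local copy of the Image Matching Challenge data.
    # Assumption: one directory per landmark under DATA_ROOT, with the photos
    # somewhere underneath it; adjust the path/extensions to the actual layout.
    from pathlib import Path

    DATA_ROOT = Path("~/imc2020").expanduser()  # wherever you unpacked the download (assumption)

    for scene in sorted(p for p in DATA_ROOT.iterdir() if p.is_dir()):
        n_images = sum(1 for f in scene.rglob("*")
                       if f.suffix.lower() in {".jpg", ".jpeg", ".png"})
        print(f"{scene.name}: {n_images} images")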



