can you elaborate on the key variables for the data? for instance, is it safe to assume 360 photos from the same angle would yield a worse model than one photo from each of 360 different angles?
what does the ideal minimal data set look like (e.g., 5 photos at each 15-degree offset)?
NeRF's (and all of photogrammetry's) bread and butter is 3D consistency -- that is, seeing the same object from multiple angles. 360 photos from a fixed position (or a single 360-degree panorama) just won't do. As for how to select the best camera angles, I'm not sure. I believe there is research in this area for classical photogrammetry techniques, but I'm not familiar enough with it to point you to a body of work.
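If it helps make the "photo every N degrees" idea concrete, here's a rough sketch of the kind of capture plan people often use: evenly spaced azimuths at a few elevations on a hemisphere around the object, cameras pointed at the center. The radius, step size, and elevations below are just illustrative assumptions, not anything NeRF specifically requires:

    import math

    def capture_positions(radius=2.0, azimuth_step_deg=15, elevations_deg=(10, 30, 60)):
        """Hypothetical capture plan: (x, y, z) camera positions on a hemisphere,
        all looking inward at the object at the origin."""
        positions = []
        for elev in elevations_deg:
            for az in range(0, 360, azimuth_step_deg):
                phi = math.radians(elev)    # elevation above the horizontal plane
                theta = math.radians(az)    # azimuth around the vertical axis
                x = radius * math.cos(phi) * math.cos(theta)
                y = radius * math.cos(phi) * math.sin(theta)
                z = radius * math.sin(phi)
                positions.append((x, y, z))
        return positions

    print(len(capture_positions()))  # 3 elevations x 24 azimuths = 72 viewpoints

The point is just that the viewpoints are spread around the object rather than clustered in one spot, which is what gives the multi-view consistency the method needs.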
thanks for being so active on this thread.