The SLAM approach will work well when you have a validated point cloud and a new set of points for fixed objects. However, if you are mapping movable or alterable objects such as vegetation, I am unsure whether the algorithm will still yield highly accurate results.
Another thing to consider: if you are basing future measurements on past measurements, you need better than 1 cm absolute accuracy in the X, Y, Z positions of those points, and you need to account for drift across your collection area. Small per-scan errors add up to large differences across the survey set.
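To put a rough number on how fast that happens, here is a toy random-walk simulation (the 5 mm per-scan error and 10,000-scan run length are assumptions for illustration, not measured values):

```python
import numpy as np

# Toy random-walk model: each scan-to-scan registration adds a small,
# independent position error, and those errors compound over a drive.
rng = np.random.default_rng(0)
per_scan_sigma_m = 0.005            # assumed 5 mm error per registration step
n_scans = 10_000                    # assumed length of one collection run

steps = rng.normal(0.0, per_scan_sigma_m, size=(n_scans, 3))
drift = np.linalg.norm(steps.sum(axis=0))
expected_rms = per_scan_sigma_m * np.sqrt(3 * n_scans)   # random-walk growth

print(f"end-point drift: {drift:.2f} m (RMS prediction ~{expected_rms:.2f} m)")
```

Even millimeter-level per-scan errors end up at the better part of a meter by the end of a long run, which is why loop closure or external references matter.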
You are right that the SLAM dead reckoning trajectory will drift.
We are developing a mapping back-end where we register trajectories to consumer-grade GPS data, perform loop closure, and then do a batch ICP-like optimization over multiple drives. This mostly eliminates drift, since GPS, noisy as it may be, is mostly zero-mean over large areas.
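For the curious, the core of that trajectory-to-GPS registration is a least-squares rigid alignment between the drifting trajectory and the GPS track; here is a minimal Kabsch-style sketch (the real back-end is a batch optimization over loop closures and multiple drives, not a single rigid fit):

```python
import numpy as np

def align_trajectory_to_gps(traj_xyz, gps_xyz):
    """Rigidly align a drifting SLAM trajectory to noisy GPS fixes.

    traj_xyz, gps_xyz: (N, 3) arrays of time-matched positions.
    Returns R (3x3) and t (3,) minimizing ||R @ traj + t - gps|| in a
    least-squares sense (Kabsch algorithm). Because GPS noise is roughly
    zero-mean over a large area, the fit averages the noise away.
    """
    mu_t, mu_g = traj_xyz.mean(axis=0), gps_xyz.mean(axis=0)
    H = (traj_xyz - mu_t).T @ (gps_xyz - mu_g)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_g - R @ mu_t
    return R, t
```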
Moving objects are mostly removed or ignored.
We are primarily interested in mapping urban environments for now. The SLAM does not work very well in a featureless corn field.
Are you planning on integrating these SLAM features into an API available from the device somehow? The spec sheet only mentions point cloud outputs right now.
The lidar device is not capable of running SLAM yet. We run SLAM on a computer with an Intel Core i7 processor, and we have not open-sourced the algorithm yet.
IIRC, the lidar still lined up, mostly because tree stems tend not to move. The larger problem was the error rate of the lidar sensor we were using: beyond about 10 m, the Hokuyo tended to underestimate distances, so each scan of the forest looked a little bit like the floor was curving up and over, like that scene from Inception. Only by maybe 20 degrees, but still enough to be annoying.
Ouster SLAM works okay in a forest environment, such as driving with one Ouster OS-1 at highway speeds in Tahoe [0]. It is definitely much more challenging than an urban environment full of flat planes and right angles.
Calibration, including range biases, is probably the one factor with the greatest impact on mapping quality. For example, range bias may cause curved walls, and beam angle biases may cause curved ground.
I recall that the top scoring lidar SLAM algorithms on the KITTI data set all had to perform some calibration (for example, J. E. Deschaud found that all the beams on the Velodyne HDL-64E were tilted by 0.22 degrees [1]).
The Ouster OS-1 lidars have a slight range bias for highly reflective objects [2] but this will be fixed in a firmware update in the near future.
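To illustrate what those per-beam corrections look like when projecting raw returns into Cartesian points, here is a minimal sketch (my own simplified model, not any vendor's actual calibration code):

```python
import numpy as np

def spherical_to_xyz(ranges, azimuth, beam_altitude,
                     range_bias=0.0, altitude_bias=0.0):
    """Project raw lidar returns to Cartesian points with simple corrections.

    ranges:        (N,) measured ranges in meters
    azimuth:       (N,) horizontal angles in radians
    beam_altitude: (N,) nominal vertical beam angles in radians
    range_bias:    additive range error to subtract, scalar or per-beam
                   array (uncorrected, it shows up as curved walls)
    altitude_bias: beam tilt error to subtract, scalar or per-beam array
                   (uncorrected, it shows up as curved ground, e.g. the
                   ~0.22 degree tilt reported for the HDL-64E)
    """
    r = ranges - range_bias
    alt = beam_altitude - altitude_bias
    x = r * np.cos(alt) * np.cos(azimuth)
    y = r * np.cos(alt) * np.sin(azimuth)
    z = r * np.sin(alt)
    return np.stack([x, y, z], axis=-1)
```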
At the NASA Autonomy Incubator we had an under-the-canopy search and rescue project[1] that successfully used SLAM, along with other methods, in such an environment.
Thanks for the linked video; that sounds like an interesting project. Can the system in the project identify vegetation stems from above the canopy? Are the vegetation stems the only points of reference for the drone swarms, other than their individual search area boundaries?
Good question; I am not sure. Imagine someone using the SLAM approach to map corn fields in order to determine plant growth rates over the growing season. In that scenario, I would think the majority of the points would be returned from surfaces that were not present in the original point cloud. Of course, you could set up ground control stations, surveyed using traditional techniques, and align the new data to them, but then you are back to the original point cloud alignment process.
OK, but that's what I was getting at when I said "most objects are fixed": if you're driving through a neighborhood a week later, most of the cars have moved and there are some new kids' toys on the lawn, but most of the points (streets, houses, poles, etc.) haven't budged.
I agree there are problems in the case of your example though.
If you're mapping corn fields, GPS + IMU will yield very good results. I wouldn't use any kind of SLAM in a farm field; it will probably worsen the position given by the GPS + IMU!
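For reference, GPS + IMU fusion is usually a Kalman filter: the IMU drives the prediction step and each GPS fix is a measurement update. A bare-bones 1D sketch (real systems use a full 3D error-state filter, and the noise values here are made up):

```python
import numpy as np

def gps_imu_kalman_1d(accels, gps_fixes, dt, accel_var=0.1, gps_var=4.0):
    """Minimal 1D Kalman filter fusing IMU acceleration with GPS position.

    accels:    (N,) IMU accelerations (m/s^2), one per time step
    gps_fixes: (N,) GPS positions (m), np.nan where no fix is available
    Returns the filtered positions. State is [position, velocity].
    """
    x = np.zeros(2)                        # [pos, vel]
    P = np.eye(2) * 10.0                   # initial uncertainty
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])        # how acceleration enters the state
    Q = np.outer(B, B) * accel_var         # process noise from accel noise
    H = np.array([[1.0, 0.0]])             # GPS observes position only
    R = np.array([[gps_var]])

    out = []
    for a, z in zip(accels, gps_fixes):
        # Predict with the IMU measurement
        x = F @ x + B * a
        P = F @ P @ F.T + Q
        # Update with GPS when a fix is available
        if not np.isnan(z):
            y = z - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + (K @ y).ravel()
            P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```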
I (and OP, for that matter) do mapping with sensors whose accuracy is around 2 cm. I don't know where you got that 1 cm requirement from. ICP/SLAM drift will happen even with a perfect sensor; it really depends on the scale of what you are trying to measure.
This was a back-of-the-napkin estimate of accuracy based on some prior experience from several years ago. If you've used sensors with 2 cm accuracy and they've performed well, I would be interested to know whether they would perform as well if the survey area increased. For example, would they do as well over 10 km^2 as over 1 km^2? Is there a limit on their performance as the survey area grows?
Well, of course there is a limit; mapping at this scale absolutely requires GPS or loop closure of some sort. I work in forests and have mapped at most 100 meters by 100 meters, but some people in my lab are going as large as 1 kilometer by 300 meters.
The second point is not absolutely true. Drift can be corrected across successive scans by including it as a parameter to estimate from the scan data.
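As a toy version of that idea, you can model the drift as a constant per-scan offset, observe it at a loop closure, and spread the correction back over the trajectory (a deliberately simplified sketch; real back-ends estimate drift jointly with the poses):

```python
import numpy as np

def remove_linear_drift(poses_xyz, loop_residual):
    """Spread a loop-closure residual back over the trajectory.

    poses_xyz:     (N, 3) estimated positions of successive scans
    loop_residual: (3,) observed error when the last scan revisits the
                   first scan's location (zero if there were no drift)
    Assumes drift accumulated roughly linearly per scan.
    """
    n = len(poses_xyz)
    per_scan_drift = loop_residual / (n - 1)
    correction = np.outer(np.arange(n), per_scan_drift)
    return poses_xyz - correction
```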
"Our SLAM algorithm is notable for being able to run in real time with not just one, but three Ouster OS-1 devices at the same time, on a typical desktop computer CPU."
It's using ICP to register successive lidar scans. All three lidars are calibrated, so their relative positions are known and the data from all three can be combined.
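In Open3D terms, the structure is roughly the following (my own sketch of the approach; Ouster has not published their implementation, and the parameters here are guesses):

```python
import numpy as np
import open3d as o3d

def register_frame(prev_cloud, lidar_scans, extrinsics, init=np.eye(4)):
    """Merge calibrated lidar scans into one cloud and ICP-register it
    against the previous frame.

    lidar_scans: list of (N_i, 3) numpy arrays, one per lidar
    extrinsics:  list of 4x4 transforms from each lidar frame to the
                 vehicle frame (known from calibration)
    Returns (merged_cloud, 4x4 transform aligning this frame to prev_cloud).
    """
    merged = o3d.geometry.PointCloud()
    for points, T in zip(lidar_scans, extrinsics):
        cloud = o3d.geometry.PointCloud()
        cloud.points = o3d.utility.Vector3dVector(points)
        merged += cloud.transform(T)   # bring all lidars into a common frame

    result = o3d.pipelines.registration.registration_icp(
        merged, prev_cloud,
        max_correspondence_distance=0.5,   # meters; tune for scan density
        init=init,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    return merged, result.transformation
```

The key point is that the three scans are merged under fixed, calibrated extrinsics before ICP runs, so the registration only has to estimate one vehicle pose per frame.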
LIDAR is really cool. The coolest use I know of is that a man named Steve Elkins used it in Honduras to discover lost ancient archaeological sites a few years ago. If you're interested read The Lost City of the Monkey God, it will blow your hair back.
I wonder if you could position posts or boxes (some physical object) with "weird" shapes that could be used as fixed, recognizable points for this sort of thing? So when your sensor picks it up, it's easy to immediately know that this specific object matches to object ID #1234 which is in a specific, known lat/lon/altitude/rotation/translation position.
This is very interesting. Still too expensive for hobby projects, which is fine as it's clearly not their target audience, but it made me wonder. A few years ago, cheaper (albeit shorter-range and less accurate) lidars were predicted to be coming soon.
Searching in Chinese marketplaces didn't bring anything below ~$200, anyone know about very low cost lidar?
- How strongly does SLAM performance depend on the type and number of sensors used? For example, I'm sure the performance using three 128-channel sensors will be better than using one 16-channel sensor.
- Will the software be made available to customers? If yes, as an SDK?
Does anyone know anyone at Ouster? I want to invite them to Self Racing Cars[1] - would love to offer public datasets of a known location so people can compare and contrast different platforms.
They just closed a Series B for $60 million last month, bringing total raise to $90 million. I think it is well outside the reach of individual investors at this point.