Lidar mapping techniques using multiple sensors (ouster.io)
105 points by derek_frome on April 1, 2019 | 34 comments



The SLAM approach will work well with a validated point cloud and a new set of points for fixed objects. However, if you are mapping movable or alterable objects such as vegetation, I am unsure whether the algorithm will still yield highly accurate results.

Another thing to consider is that if you are basing future measurements on past measurements, you need to be accurate to less than 1 cm in the absolute X, Y, Z position of those points, and account for drift across your collection area. Small errors will add up to large differences in the survey set.
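
As a rough illustration of how those small errors compound (my own back-of-the-envelope, not from the article), scan-to-scan registration error behaves like a random walk:

    import numpy as np

    rng = np.random.default_rng(0)
    sigma = 0.005  # 5 mm translation error per scan-to-scan registration
    steps = rng.normal(0.0, sigma, size=(10_000, 2))  # 10,000 scans
    drift = np.linalg.norm(steps.cumsum(axis=0)[-1])  # endpoint error
    print(f"drift after 10,000 scans: {drift:.2f} m")
    # grows like sigma * sqrt(N): typically ~0.5-0.7 m here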


I'm the author of this blog post.

You are right that the SLAM dead reckoning trajectory will drift.

We are developing a mapping back-end where we register trajectories to consumer-grade GPS data, perform loop closure, and then run a batch ICP-like optimization over multiple drives. This mostly eliminates drift, as GPS, noisy as it may be, is mostly zero-mean over large areas.
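
A toy sketch of the idea (not our actual back-end): averaging the SLAM-vs-GPS offset over a long window cancels the roughly zero-mean GPS noise while recovering the slowly varying drift.

    import numpy as np

    def correct_drift(slam_xy, gps_xy, window=500):
        # slam_xy, gps_xy: (N, 2) positions sampled at matching timestamps
        offset = gps_xy - slam_xy          # GPS noise + slowly varying drift
        kernel = np.ones(window) / window  # long moving average
        drift = np.column_stack(
            [np.convolve(offset[:, i], kernel, mode="same") for i in range(2)])
        return slam_xy + drift             # drift-corrected trajectory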

Moving objects are mostly removed or ignored.

We are primarily interested in mapping urban environments for now. The SLAM does not work very well in a featureless corn field.


Are you planning on integrating these SLAM features into an API available from the device somehow? The spec sheet only mentions point cloud outputs right now.


The lidar device is not capable of running SLAM yet. We run SLAM on a computer with an Intel Core i7 processor, and we have not yet open-sourced the algorithm.


I funded a paper mapping vegetation in a forest, if you're curious: https://www.philsalesses.com/s/a582379.pdf

IIRC, the lidar still lined up, mostly because tree stems tend not to move. The larger problem was the error rate of the lidar sensor we were using: beyond about 10 m, the Hokuyo tended to underestimate distances, so each scan of the forest looked a little bit like the floor was curving over, like that scene from Inception. Although maybe only 20 degrees. Still enough to be annoying.


Ouster SLAM works okay in a forest environment, such as driving with one Ouster OS-1 at highway speeds in Tahoe [0]. It is definitely much more challenging than an urban environment full of flat planes and right angles.

Calibration, including range biases, is probably the one factor with the greatest impact on mapping quality. For example, range bias may cause curved walls, and beam angle biases may cause curved ground.
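
As a sketch of where those corrections enter when projecting returns to Cartesian points (the parameter names are made up for illustration):

    import numpy as np

    def project(r, az, elev, beam, d_range, d_elev):
        # r: measured ranges; az, elev: nominal angles (rad);
        # beam: per-return beam index; d_range, d_elev: per-beam corrections
        r_c = r - d_range[beam]      # uncorrected range bias -> curved walls
        el_c = elev + d_elev[beam]   # uncorrected beam angle -> curved ground
        return np.column_stack([r_c * np.cos(el_c) * np.cos(az),
                                r_c * np.cos(el_c) * np.sin(az),
                                r_c * np.sin(el_c)])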

I recall that the top scoring lidar SLAM algorithms on the KITTI data set all had to perform some calibration (for example, J. E. Deschaud found that all the beams on the Velodyne HDL-64E were tilted by 0.22 degrees [1]).

The Ouster OS-1 lidars have a slight range bias for highly reflective objects [2] but this will be fixed in a firmware update in the near future.

[0] https://pics.dllu.net/file/dllu-sc/6beea0708a.png
[1] https://arxiv.org/abs/1802.08633
[2] https://www.ouster.io/s/OS-1-Datasheet.pdf


Hey, I'm familiar with your work! I'm currently submitting similar work using a Husky and a Velodyne HDL-32. I don't have the problem you mention with my sensor. See: https://www.youtube.com/watch?v=V-Q-XWSWT-I&index=2&list=UUo...


Your video looks dope. What a difference 8 years makes. https://vimeo.com/16396416


Thanks for funding this work. Ground-based forest mapping is an interesting area.


At the NASA Autonomy Incubator we had an under-the-canopy search and rescue project [1] that successfully used SLAM along with other methods in such an environment.

[1]: https://www.youtube.com/watch?v=2hRNx_0SWGw


Thanks for the linked video; that sounds like an interesting project. Can the system in the project identify vegetation stems from above the canopy? Are the vegetation stems the only points of reference for the drone swarms other than their individual search area boundaries?


> However if you are mapping movable or alterable objects such as vegetation I am unsure if the algorithm will still yield highly accurate results.

If most objects are fixed, won't the best solution still be the correct one?


Good question; I am not sure. Imagine someone using the SLAM approach to map corn fields in order to determine plant growth rates over the growing season. In that scenario I would think that the majority of the points would be returned from surfaces which were not present in the original point cloud. Of course you could set up ground control stations, surveyed using traditional techniques, and align the new data to them, but then you are back to the original point cloud alignment process.


OK, but that's what I was getting at when I said "most objects are fixed". E.g., if you're driving through a neighborhood a week later, most of the cars have moved and there are some new kids' toys on the lawn, but most of the points (streets, houses, poles, etc.) haven't budged.

I agree there are problems in the case of your example though.


If you're mapping corn fields, GPS + IMU will yield very good results. I wouldn't use any kind of SLAM in a farm field; it will probably worsen the position given by the GPS + IMU!


I (and OP for that matter) do mapping with sensors with accuracies that are around 2 cm. I don't know where you got that 1 cm requirement from. ICP/SLAM drift will happen even with a perfect sensor. It really depends on the scale of what you are trying to measure.

There are ways to work with dynamic environments in lidar SLAM: https://ieeexplore.ieee.org/abstract/document/6907397


This was a back-of-the-napkin estimation of accuracy based on some prior experience from several years ago. If you've used sensors with 2 cm accuracy and they've performed well, I would be interested to know whether they would perform as well if the survey area increased. For example, would they perform as well over 10 km^2 as over 1 km^2? Is there a limit on their performance as the survey area increases?


Well of course there is a limit; mapping at this scale absolutely requires GPS or loop closure of some sort. Myself, working in forests, I mapped at most 100 meters by 100 meters, but some people in my lab are going as large as 1 kilometer by 300 meters.


The second point is not absolutely true. Drift can be corrected across successive scans by including it as a parameter to be estimated from the scan data.
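
For example (a sketch of one possible formulation, not a specific published method), drift can be modeled as linear in time and fit to the translation residuals from scan-to-scan registration:

    import numpy as np

    def fit_linear_drift(t, residuals):
        # t: (N,) scan timestamps; residuals: (N, 2) translation residuals
        # from registering successive scans against the map
        A = np.column_stack([t, np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(A, residuals, rcond=None)
        return A @ coef  # per-scan drift estimate, to subtract from poses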


You can mask away the points that are moving or that you expect to move (e.g., ignore features from cars and people).
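
In code, that can be as simple as dropping points by semantic class, assuming some upstream segmenter provides per-point labels (hypothetical labels here):

    import numpy as np

    DYNAMIC = {"car", "person", "bicycle"}  # classes expected to move

    def mask_dynamic(points, labels):
        # points: (N, 3); labels: per-point class strings
        keep = np.fromiter((l not in DYNAMIC for l in labels), bool,
                           count=len(labels))
        return points[keep]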


"Our SLAM algorithm is notable for being able to run in real time with not just one, but three Ouster OS-1 devices at the same time, on a typical desktop computer CPU."

What SLAM algorithm is that? Anyone know?


It's using ICP to register successive lidar scans. All three lidars are calibrated, so the relative positions are known and the data from all three can be combined.

https://en.m.wikipedia.org/wiki/Iterative_closest_point

This alone isn't SLAM but can be used for odometry as part of a SLAM system.
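
For reference, a minimal point-to-point ICP looks something like this (a sketch, not Ouster's actual pipeline):

    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, target, iters=30):
        # source, target: (N, 3) and (M, 3) point clouds
        tree = cKDTree(target)
        R, t = np.eye(3), np.zeros(3)
        for _ in range(iters):
            moved = source @ R.T + t
            _, idx = tree.query(moved)             # nearest-neighbour matching
            P = moved - moved.mean(0)              # centered source
            Q = target[idx] - target[idx].mean(0)  # centered matches
            U, _, Vt = np.linalg.svd(P.T @ Q)      # Kabsch: best rigid rotation
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            Rs = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            ts = target[idx].mean(0) - Rs @ moved.mean(0)
            R, t = Rs @ R, Rs @ t + ts             # compose incremental update
        return R, t  # maps source points into the target frame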


... which makes it unsurprising that they can register in near real time. ICP is not that expensive.


LIDAR is really cool. The coolest use I know of is that a man named Steve Elkins used it in Honduras to discover lost ancient archaeological sites a few years ago. If you're interested, read The Lost City of the Monkey God; it will blow your hair back.


I wonder if you could position posts or boxes (some physical object) with "weird" shapes that could be used as fixed, recognizable points for this sort of thing. Then, when your sensor picks one up, it's easy to immediately know that this specific object matches object ID #1234, which is at a specific, known lat/lon/altitude/rotation/translation.

Something like steganography for these sensors in the real world: https://en.wikipedia.org/wiki/Machine_Identification_Code
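
For instance, if you detected a few such markers in the map and knew their surveyed world coordinates, a closed-form rigid fit (the same SVD step ICP uses, but with correspondences known from marker IDs) would pin the whole map to the world frame. A rough sketch:

    import numpy as np

    def fit_to_survey(detected, surveyed):
        # detected: (N, 3) marker centroids in the map frame
        # surveyed: (N, 3) the same markers' known world coordinates
        P = detected - detected.mean(0)
        Q = surveyed - surveyed.mean(0)
        U, _, Vt = np.linalg.svd(P.T @ Q)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = surveyed.mean(0) - R @ detected.mean(0)
        return R, t  # world_point ~= R @ map_point + t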


Well, we used this for local calibration: https://github.com/MarekKowalski/LiveScan3D/tree/master/docs... Of course, this only calibrates the feeds relative to each other.

But coupled with GPS almost any shape could work. (Hills, landmarks, buildings.)


This is very much in use when possible. They are called fiducial markers, or sometimes registration targets, registration markers, etc.


This is very interesting. Still too expensive for hobby projects, which is fine as that's clearly not their target audience, but it made me wonder. A few years ago, cheaper (albeit shorter-range and less accurate) lidars were predicted to be coming soon.

Searching Chinese marketplaces didn't turn up anything below ~$200; does anyone know of very low-cost lidar?


The cheapest ones you will find will probably be the RPLIDAR or the YDLIDAR X4.

Those sensors will not be great, but you will be able to do SLAM with them.

Here is a review of X4 that I wrote earlier this year: https://msadowski.github.io/ydlidar-x4-review/


Great post! Some questions:

- How strongly does the performance of the SLAM depend on the type and number of sensors used? For example, I'm sure the performance using three 128-channel sensors will be better than using one 16-channel sensor.

- Will the software be made available to customers? If yes, as an SDK?


Does anyone know anyone at Ouster? I want to invite them to Self Racing Cars [1] - we would love to offer public datasets of a known location so people can compare and contrast different platforms.

[1] http://selfracingcars.com/


How can we invest in this?


They just closed a $60 million Series B last month, bringing their total raised to $90 million. I think it is well outside the reach of individual investors at this point.


Is that a scandal?



