Visual SLAM using structure from motion
Visual SLAM is the process of estimating the motion of a camera through an unknown environment using only video as sensor input. In our project, a video sequence captured by an omnidirectional camera mounted on a car is processed to produce a vehicle trajectory and a sparse 3D point cloud. We conclude that bundle adjustment significantly improves the robustness of the system. Our ego-motion estimator performs well: the trajectory drifts over distances of a few hundred meters or more, but over shorter ranges the results are excellent.
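The core of bundle adjustment is minimizing the reprojection error, i.e. the difference between observed image points and the projections of the estimated 3D points through the estimated camera poses. The sketch below is an illustrative toy example, not the project's actual pipeline: it uses a minimal pinhole model with identity rotation and refines only a single camera translation against fixed landmarks (full bundle adjustment jointly refines all poses and all 3D points, and an omnidirectional camera requires a different projection model).

```python
import numpy as np
from scipy.optimize import least_squares

def project(points, t):
    # Minimal pinhole projection: identity rotation, unit focal length.
    # A real system would use the calibrated (omnidirectional) camera model.
    p = points - t
    return p[:, :2] / p[:, 2:3]

rng = np.random.default_rng(0)
points = rng.uniform([-1, -1, 4], [1, 1, 8], size=(30, 3))  # 3D landmarks
t_true = np.array([0.5, 0.0, 0.0])  # ground-truth camera translation
obs = project(points, t_true)       # "observed" image points

def residuals(t):
    # Reprojection error: predicted minus observed image coordinates.
    return (project(points, t) - obs).ravel()

t0 = np.zeros(3)  # rough initial guess, e.g. from the ego-motion estimator
result = least_squares(residuals, t0)
print(result.x)   # refined translation, close to t_true
```

Even this reduced problem shows the principle: a coarse motion estimate is refined by nonlinear least squares until the reprojection residuals are minimized, which is what suppresses drift over short ranges.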