Example
We provide example code snippets for running stella_vslam with a variety of datasets.
SLAM with Video Files
We provide an example snippet for using video files (e.g. .mp4) for visual SLAM.
The source code is placed at ./example/run_video_slam.cc.
Create a config file (.yaml) according to the camera parameters. You can create a map database file by running one of the run_****_slam executables with the --map-db-out map_file_name.msg option.
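A typical invocation is sketched below, following the pattern of the other run_****_slam examples in this document; the -m option for the video path and the placeholder paths are assumptions here, so check ./run_video_slam --help for the exact flag names.

```shell
# at the build directory of stella_vslam
# SLAM with a video file (paths are placeholders; the -m flag is assumed)
$ ./run_video_slam \
    -v /path/to/orb_vocab/orb_vocab.fbow \
    -m /path/to/video.mp4 \
    -c /path/to/config.yaml \
    --map-db-out map.msg
```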
SLAM with Image Sequences
We provide an example snippet for using image sequences for visual SLAM.
The source code is placed at ./example/run_image_slam.cc.
Create a config file (.yaml) according to the camera parameters. You can create a map database file by running one of the run_****_slam executables with the --map-db-out map_file_name.msg option.
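A sketched invocation is shown below, modeled on the other run_****_slam examples in this document; the -i option for the image directory and the placeholder paths are assumptions, so check ./run_image_slam --help for the exact flag names.

```shell
# at the build directory of stella_vslam
# SLAM with an image sequence (paths are placeholders; the -i flag is assumed)
$ ./run_image_slam \
    -v /path/to/orb_vocab/orb_vocab.fbow \
    -i /path/to/image_directory/ \
    -c /path/to/config.yaml \
    --map-db-out map.msg
```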
SLAM with Standard Datasets
KITTI Odometry dataset
The KITTI Odometry dataset is a benchmarking dataset for monocular and stereo visual odometry and lidar odometry, captured from car-mounted sensors.
We provide an example source code for running monocular and stereo visual SLAM with this dataset.
The source code is placed at ./example/run_kitti_slam.cc.
Start by downloading the dataset from here.
Download the grayscale set (data_odometry_gray.zip).
After downloading and uncompressing it, you will find several sequences under the sequences/ directory.
$ ls sequences/
00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21
In addition, download a vocabulary file for FBoW from here.
A configuration file for each sequence is contained under ./example/kitti/.
If you built the examples with Pangolin Viewer support, a map viewer and a frame viewer will be launched right after executing the following command.
# at the build directory of stella_vslam
# monocular SLAM with sequence 00
$ ./run_kitti_slam \
-v /path/to/orb_vocab/orb_vocab.fbow \
-d /path/to/KITTI/Odometry/sequences/00/ \
-c ../example/kitti/KITTI_mono_00-02.yaml
# stereo SLAM with sequence 05
$ ./run_kitti_slam \
-v /path/to/orb_vocab/orb_vocab.fbow \
-d /path/to/KITTI/Odometry/sequences/05/ \
-c ../example/kitti/KITTI_stereo_04-12.yaml
EuRoC MAV dataset
The EuRoC MAV dataset is a benchmarking dataset for monocular and stereo visual odometry, captured from drone-mounted devices.
We provide an example source code for running monocular and stereo visual SLAM with this dataset.
The source code is placed at ./example/run_euroc_slam.cc.
Start by downloading the dataset from here.
Download the .zip file of a dataset you plan on using.
After downloading and uncompressing it, you will find several directories under the mav0/ directory.
$ ls mav0/
body.yaml cam0 cam1 imu0 leica0 state_groundtruth_estimate0
In addition, download a vocabulary file for FBoW from here.
We provide two config files for EuRoC: ./example/euroc/EuRoC_mono.yaml for monocular and ./example/euroc/EuRoC_stereo.yaml for stereo.
If you have built examples with Pangolin Viewer support, a map viewer and frame viewer will be launched right after executing the following command.
# at the build directory of stella_vslam
# monocular SLAM with any EuRoC sequence
$ ./run_euroc_slam \
-v /path/to/orb_vocab/orb_vocab.fbow \
-d /path/to/EuRoC/MAV/mav0/ \
-c ../example/euroc/EuRoC_mono.yaml
# stereo SLAM with any EuRoC sequence
$ ./run_euroc_slam \
-v /path/to/orb_vocab/orb_vocab.fbow \
-d /path/to/EuRoC/MAV/mav0/ \
-c ../example/euroc/EuRoC_stereo.yaml
TUM RGBD dataset
The TUM RGBD dataset is a benchmarking dataset containing RGB-D data and ground-truth data, aimed at establishing a benchmark for the evaluation of visual odometry and visual SLAM systems.
The source code is placed at ./example/run_tum_rgbd_slam.cc.
Start by downloading one of the datasets from here.
One of the many example datasets can be found here.
Download the .tgz file of a dataset you plan on using.
After downloading and uncompressing it, you will find two directories and a few text files under the rgbd_dataset_freiburg3_calibration_rgb_depth/ directory.
$ ls rgbd_dataset_freiburg3_calibration_rgb_depth
accelerometer.txt depth depth.txt groundtruth.txt rgb rgb.txt
If you would like to preprocess the dataset, you can use the tool from here.
In addition, download a vocabulary file for FBoW from here.
We provide config files for the TUM RGBD dataset under ./example/tum_rgbd.
For the specific example above, we use two config files: ./example/tum_rgbd/TUM_RGBD_mono_3.yaml for monocular and ./example/tum_rgbd/TUM_RGBD_rgbd_3.yaml for RGBD.
Tracking and Mapping
# at the build directory of stella_vslam
# monocular SLAM with rgbd_dataset_freiburg3_calibration_rgb_depth
$ ./run_tum_rgbd_slam \
-v /path/to/orb_vocab/orb_vocab.fbow \
-d /path/to/rgbd_dataset_freiburg3_calibration_rgb_depth/ \
-c ../example/tum_rgbd/TUM_RGBD_mono_3.yaml \
--no-sleep \
--auto-term \
--map-db-out fr3_slam_mono.msg
# RGBD SLAM with rgbd_dataset_freiburg3_calibration_rgb_depth
$ ./run_tum_rgbd_slam \
-v /path/to/orb_vocab/orb_vocab.fbow \
-d /path/to/rgbd_dataset_freiburg3_calibration_rgb_depth/ \
-c ../example/tum_rgbd/TUM_RGBD_rgbd_3.yaml \
--no-sleep \
--auto-term \
--map-db-out fr3_slam_rgbd.msg
Localization
# at the build directory of stella_vslam
# monocular localization with rgbd_dataset_freiburg3_calibration_rgb_depth
$ ./run_tum_rgbd_slam --disable-mapping \
-v /path/to/orb_vocab/orb_vocab.fbow \
-d /path/to/rgbd_dataset_freiburg3_calibration_rgb_depth/ \
-c ../example/tum_rgbd/TUM_RGBD_mono_3.yaml \
--no-sleep \
--auto-term \
--map-db-in fr3_slam_mono.msg
# RGBD localization with rgbd_dataset_freiburg3_calibration_rgb_depth
$ ./run_tum_rgbd_slam --disable-mapping \
-v /path/to/orb_vocab/orb_vocab.fbow \
-d /path/to/rgbd_dataset_freiburg3_calibration_rgb_depth/ \
-c ../example/tum_rgbd/TUM_RGBD_rgbd_3.yaml \
--no-sleep \
--auto-term \
--map-db-in fr3_slam_rgbd.msg
SLAM with UVC camera
Tracking and Mapping
We provide an example snippet for using a UVC camera (often called a webcam) for visual SLAM.
The source code is placed at ./example/run_camera_slam.cc.
Specify the camera number you want to use with the -n option. Create a config file (.yaml) according to the camera parameters. You can scale the input images to a specified rate with the -s option; please modify the config accordingly.
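Putting the options above together, a sketched invocation looks like the following; the -n and -s options are described in this section, while the -v, -c, and --map-db-out flags and the placeholder paths follow the other run_****_slam examples and are assumptions here, so check ./run_camera_slam --help for the exact flag names.

```shell
# at the build directory of stella_vslam
# SLAM with UVC camera number 0, scaling input images to half size
$ ./run_camera_slam \
    -v /path/to/orb_vocab/orb_vocab.fbow \
    -n 0 \
    -c /path/to/config.yaml \
    -s 0.5 \
    --map-db-out camera_map.msg
```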