Running on Docker

Instructions for PangolinViewer

Dockerfile.desktop can be used for easy installation. This chapter provides instructions on building and running examples with PangolinViewer support using Docker.

These instructions have been tested on Ubuntu 18.04 and 20.04. Docker for Mac is NOT supported because OpenGL forwarding is unavailable.

Note

If you’re using Ubuntu, there are easy setup scripts in scripts/ubuntu. See scripts/ubuntu/README.md.

If you plan on using a machine with NVIDIA graphics card(s), please use nvidia-docker2 and version 390 or later of the NVIDIA driver. These examples depend on X11 forwarding with OpenGL for visualization.
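You can check which driver version is installed on the host with nvidia-smi (a quick check, assuming the NVIDIA driver is already installed):

# print the installed NVIDIA driver version (run this on the host)
nvidia-smi --query-gpu=driver_version --format=csv,noheader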

If the viewer cannot be launched at all or you are using macOS, please install the dependencies manually or use the docker images for SocketViewer.

Building Docker Image

Execute the following commands:

git clone --recursive https://github.com/stella-cv/stella_vslam.git
cd stella_vslam
docker build -t stella_vslam-desktop -f Dockerfile.desktop .

You can accelerate the build of the docker image with the --build-arg NUM_THREADS=<number of parallel builds> option. For example:

# build the docker image using all but one of the available CPU cores
docker build -t stella_vslam-desktop -f Dockerfile.desktop . --build-arg NUM_THREADS=`expr $(nproc) - 1`

Starting Docker Container

In order to enable X11 forwarding, supplemental options (-e DISPLAY=$DISPLAY and -v /tmp/.X11-unix/:/tmp/.X11-unix:ro) are needed for docker run.

# before launching the container, allow display access from local users
xhost +local:
# launch the container
docker run -it --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix:ro stella_vslam-desktop

Note

The additional option --runtime=nvidia is needed if you use NVIDIA graphics card(s). If you’re using Docker with native GPU support, use --gpus all instead. Please see the NVIDIA Docker documentation for more details.
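For example, a launch command with native GPU support would look like this (a sketch based on the command above; adjust it to your setup):

# launch the container with GPU access and X11 forwarding (Docker with native GPU support)
docker run -it --rm --gpus all -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix:ro stella_vslam-desktop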

After launching the container, a shell prompt will open inside it.

root@ddad048b5fff:/stella_vslam/build# ls
lib                     run_image_slam          run_video_slam
run_euroc_slam          run_kitti_slam          run_tum_slam

See Tutorial to run SLAM examples in the container.
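For reference, an invocation inside the container might look like the following; the flags and paths here are assumptions for illustration only, so follow the Tutorial for the exact commands:

# hypothetical example (paths are placeholders): run SLAM on a video file
./run_video_slam -v /path/to/orb_vocab.fbow -c /path/to/config.yaml -m /path/to/video.mp4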

Note

If the viewer does not work, please install the dependencies manually on your host machine or use the docker images for SocketViewer instead.

If you need to access any files or directories on the host machine from the container, bind directories between the host and the container.

Instructions for SocketViewer

Dockerfile.socket and viewer/Dockerfile can be used for easy installation. This chapter provides instructions on building and running examples with SocketViewer support using Docker.

Building Docker Images

Docker Image of stella_vslam

Execute the following commands:

cd /path/to/stella_vslam
docker build -t stella_vslam-socket -f Dockerfile.socket .

You can accelerate the build of the docker image with the --build-arg NUM_THREADS=<number of parallel builds> option. For example:

# build the docker image using all but one of the available CPU cores
docker build -t stella_vslam-socket -f Dockerfile.socket . --build-arg NUM_THREADS=`expr $(nproc) - 1`

Docker Image of Server

Execute the following commands:

cd /path/to/stella_vslam
cd viewer
docker build -t stella_vslam-viewer .

Starting Docker Containers

On Linux

Launch the server container and access it with a web browser in advance. Please specify --net=host in order to share the network with the host machine.

$ docker run --rm -it --name stella_vslam-viewer --net=host stella_vslam-viewer
WebSocket: listening on *:3000
HTTP server: listening on *:3001

After launching, access http://localhost:3001/ with your web browser.
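If you want to confirm from the host that the server is reachable before opening the browser, a quick check looks like this (assuming curl is installed on the host):

# optional check: the HTTP server should answer on port 3001
$ curl -I http://localhost:3001/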

Next, launch the stella_vslam container. A shell prompt will open inside it.

$ docker run --rm -it --name stella_vslam-socket --net=host stella_vslam-socket
root@hostname:/stella_vslam/build#

See Tutorial to run SLAM examples in the container.

If you need to access any files or directories on the host machine from the container, bind directories between the host and the container.

On macOS

Launch the server container and access it with a web browser in advance. Please specify -p 3001:3001 for port forwarding.

$ docker run --rm -it --name stella_vslam-viewer -p 3001:3001 stella_vslam-viewer
WebSocket: listening on *:3000
HTTP server: listening on *:3001

After launching, access http://localhost:3001/ with your web browser.

Then, inspect the container’s IP address and append the SocketPublisher.server_uri entry to the YAML config file of stella_vslam.

# inspect the server's IP address
$ docker inspect stella_vslam-viewer | grep -m 1 \"IPAddress\" | sed 's/ //g' | sed 's/,//g'
"IPAddress": "172.17.0.2"
# config file of stella_vslam

...

#============================#
# SocketPublisher Parameters #
#============================#

# append this entry
SocketPublisher.server_uri: "http://172.17.0.2:3000"
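As an alternative to the grep/sed pipeline above, docker inspect can print the address directly with a format template (a sketch assuming the container is attached to the default bridge network):

# print only the container's IP address
$ docker inspect -f '{{ .NetworkSettings.IPAddress }}' stella_vslam-viewer
172.17.0.2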

Next, launch the stella_vslam container. A shell prompt will open inside it.

$ docker run --rm -it --name stella_vslam-socket stella_vslam-socket
root@hostname:/stella_vslam/build#

See Tutorial to run SLAM examples in the container.

Please don’t forget to append the SocketPublisher.server_uri entry to config.yaml if you use the downloaded datasets in the tutorial.
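If you prefer to edit the config from the shell, the entry can be appended like this (a sketch; the config path and IP address are placeholders, so replace them with your own):

# append the server URI to the config file used for the run
$ echo 'SocketPublisher.server_uri: "http://172.17.0.2:3000"' >> /path/to/config.yaml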

If you need to access any files or directories on the host machine from the container, bind directories between the host and the container.

Binding Directories

If you need to access any files or directories on the host machine from the container, bind directories between the host and the container using the --volume or --mount option. (See the Docker documentation.)

For example:

# launch a container of stella_vslam-desktop with --volume option
$ docker run -it --rm --runtime=nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix:ro \
    --volume /path/to/dataset/dir/:/dataset:ro \
    --volume /path/to/vocab/dir:/vocab:ro \
    stella_vslam-desktop
# dataset/ and vocab/ are found at the root directory in the container
root@0c0c9f115d74:/# ls /
...   dataset/   vocab/   ...

# launch a container of stella_vslam-socket with --volume option
$ docker run --rm -it --name stella_vslam-socket --net=host \
    --volume /path/to/dataset/dir/:/dataset:ro \
    --volume /path/to/vocab/dir:/vocab:ro \
    stella_vslam-socket
# dataset/ and vocab/ are found at the root directory in the container
root@0c0c9f115d74:/# ls /
...   dataset/   vocab/   ...
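The same binds can also be expressed with the --mount option, which is equivalent to the --volume form above:

# launch a container of stella_vslam-socket with --mount option
$ docker run --rm -it --name stella_vslam-socket --net=host \
    --mount type=bind,source=/path/to/dataset/dir,target=/dataset,readonly \
    --mount type=bind,source=/path/to/vocab/dir,target=/vocab,readonly \
    stella_vslam-socket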