Docker Setup
Docker provides a containerized environment for running AutoSDV without modifying your host system. This is ideal for development, testing, and simulation scenarios.
Use Cases
Docker is recommended for:
- Development and Testing: Consistent environment across different machines
- Simulation: Running AutoSDV without physical hardware
- CI/CD: Automated testing and deployment
- Quick Evaluation: Try AutoSDV without full installation
Prerequisites
Host System Requirements
- Ubuntu 20.04 or 22.04 (other Linux distributions may work)
- NVIDIA GPU with driver 470+ (for GPU acceleration)
- At least 50GB free disk space
- 16GB+ RAM recommended
Software Requirements
- Docker Engine (20.10 or newer):
  # Install Docker
  curl -fsSL https://get.docker.com -o get-docker.sh
  sudo sh get-docker.sh
  # Add user to docker group
  sudo usermod -aG docker $USER
  # Log out and back in for group changes to take effect
- NVIDIA Container Toolkit (for GPU support):
  # Add NVIDIA repository
  distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
  curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
  curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
    sudo tee /etc/apt/sources.list.d/nvidia-docker.list
  # Install nvidia-container-toolkit
  sudo apt update
  sudo apt install nvidia-container-toolkit
  sudo systemctl restart docker
- Docker Compose (optional, for multi-container setups):
  sudo apt install docker-compose
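Before moving on, it is worth verifying that Docker and the NVIDIA runtime work. This is only a quick sanity check; the CUDA base image tag is an example (the same one used in the troubleshooting section below):
docker --version
docker-compose --version
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi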
Quick Start
Step 1: Clone AutoSDV Repository
git clone -b 2025.02 --recursive https://github.com/NEWSLabNTU/AutoSDV.git
cd AutoSDV/docker
Step 2: Bootstrap Docker Environment
Set up cross-architecture support (required for ARM64 emulation on x86_64):
make bootstrap
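The bootstrap target registers QEMU user-mode emulation so that ARM64 containers can run on an x86_64 host. If you ever need to register the emulators by hand (for example on a CI runner), the commonly used binfmt helper image does roughly the same thing; the exact steps performed by the Makefile may differ:
# Register QEMU handlers for ARM64 binaries
docker run --privileged --rm tonistiigi/binfmt --install arm64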
Step 3: Build Docker Image
Build the AutoSDV Docker image:
make build
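Under the hood, make build wraps a platform-targeted Docker build. A rough equivalent, assuming the autosdv:latest tag used elsewhere in this guide and a Dockerfile in the docker/ directory, is shown below; prefer the Makefile target for the supported build arguments:
# Build the ARM64 image (tag and context are assumptions)
docker build --platform linux/arm64 -t autosdv:latest .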
This creates an image with:
- Ubuntu 22.04 base with ROS 2 Humble
- Autoware 2025.02 pre-installed
- All AutoSDV dependencies
- CUDA and TensorRT support
- Sensor driver libraries (except proprietary ones)
Step 4: Run Container
Start an interactive container session:
make run
You'll enter a shell with AutoSDV ready to use at /home/developer/AutoSDV.
Docker Image Details
Image Architecture
The AutoSDV Docker image is built for ARM64 architecture to match the Jetson platform. On x86_64 hosts, QEMU provides transparent emulation.
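You can confirm that emulation is registered correctly after make bootstrap by running a throwaway ARM64 container; the base image here is only an example:
# Should print "aarch64" when QEMU emulation is active
docker run --rm --platform linux/arm64 ubuntu:22.04 uname -m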
Pre-installed Software
- ROS 2 Humble with desktop tools
- Autoware 2025.02 binary release
- CUDA 12.3 and TensorRT 8.6
- Cyclone DDS configured as default
- Development tools: git, vim, tmux, htop
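To spot-check these components from inside a running container, the following commands usually work; nvcc may live under /usr/local/cuda/bin if it is not on PATH:
# Inside the container
echo $ROS_DISTRO            # expected: humble
nvcc --version              # CUDA toolkit version
dpkg -l | grep -i tensorrt  # TensorRT packages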
Volume Mounts
The make run command automatically mounts:
- /tmp/.X11-unix for GUI applications
- NVIDIA GPU devices for CUDA access
Advanced Usage
Custom Run Options
Run with additional volumes or ports:
docker run -it --rm \
--gpus all \
--network host \
-v /path/to/data:/data \
-v /dev:/dev \
--privileged \
autosdv:latest
Development Workflow
For active development, mount your local code:
docker run -it --rm \
--gpus all \
-v $(pwd):/workspace/AutoSDV \
-w /workspace/AutoSDV \
autosdv:latest
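With your code mounted, a typical edit-build-test loop inside the container uses colcon. This is a minimal sketch assuming the image provides ROS 2 Humble at the standard location; adjust the setup path to whatever the image actually sources:
# Inside the container, from /workspace/AutoSDV
source /opt/ros/humble/setup.bash
colcon build --symlink-install
colcon test --event-handlers console_direct+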
GUI Applications
Enable X11 forwarding for visualization tools:
xhost +local:docker
docker run -it --rm \
--gpus all \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
autosdv:latest
Then run GUI applications like RViz:
# Inside container
rviz2
Multi-Container Setup
Create a docker-compose.yml for complex deployments:
version: '3.8'
services:
  autosdv:
    image: autosdv:latest
    runtime: nvidia
    network_mode: host
    privileged: true
    volumes:
      - /dev:/dev
      - ./data:/data
    environment:
      - ROS_DOMAIN_ID=0
      - DISPLAY=${DISPLAY}
    command: ros2 launch autosdv_launch autosdv.launch.yaml
  monitoring:
    image: autosdv:latest
    runtime: nvidia
    network_mode: host
    environment:
      - ROS_DOMAIN_ID=0
    command: python3 /home/developer/AutoSDV/src/launcher/autosdv_launch/autosdv_launch/autosdv_monitor.py
Run with:
docker-compose up
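The usual Compose workflow applies for starting, inspecting, and stopping the stack:
docker-compose up -d      # start both services in the background
docker-compose logs -f    # follow logs from all services
docker-compose down       # stop and remove the containers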
Container Management
Save and Load Images
Export image for deployment:
make save # Creates autosdv_docker.tar.gz
Load on another machine:
docker load < autosdv_docker.tar.gz
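You can confirm the image was imported with:
docker images autosdv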
Clean Up
Remove container and image:
make clean
Limitations
Hardware Access
Docker containers have limited hardware access:
- No direct LiDAR access (USB/Ethernet sensors need special configuration)
- No CAN bus access without the --privileged flag
- Camera access requires device mounting (see the sketch below)
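When full --privileged access is undesirable, individual devices can often be passed through explicitly instead of mounting all of /dev. The device paths below are examples and depend on your sensor wiring:
# Pass through a single camera and a serial adapter (paths are examples)
docker run -it --rm \
  --device=/dev/video0 \
  --device=/dev/ttyUSB0 \
  autosdv:latest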
Performance
- ARM64 emulation on x86_64 reduces performance
- GPU passthrough adds overhead
- Network performance may vary with Docker networking modes
Sensor Drivers
Some proprietary sensor drivers cannot be included:
- ZED SDK (requires manual installation)
- Seyond Robin-W driver (vendor-specific)
Troubleshooting
GPU Not Accessible
Verify NVIDIA runtime:
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
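If that check fails, the NVIDIA runtime may not be registered with Docker. On recent versions of the NVIDIA Container Toolkit this can usually be fixed with nvidia-ctk; older versions require editing /etc/docker/daemon.json by hand:
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker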
Network Issues
Use host networking for ROS 2 communication:
docker run --network host ...
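To confirm that nodes inside and outside the container can discover each other, compare topic lists with a matching ROS_DOMAIN_ID; this assumes ROS 2 is also sourced on the host (or in a second container):
# Inside a container started with --network host
ROS_DOMAIN_ID=0 ros2 topic list
# On the host (or another container on the same domain) — both should agree
ROS_DOMAIN_ID=0 ros2 topic list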
Permission Denied
For device access, run with privileges:
docker run --privileged -v /dev:/dev ...
Build Failures
Clear Docker cache and rebuild:
docker system prune -a
make bootstrap
make build
Integration with CI/CD
GitHub Actions Example
name: AutoSDV Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Docker
        uses: docker/setup-buildx-action@v1
      - name: Build Docker image
        run: |
          cd docker
          make build
      - name: Run tests
        run: |
          docker run --rm autosdv:latest \
            bash -c "cd /home/developer/AutoSDV && colcon test"
Jenkins Pipeline Example
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'cd docker && make build'
            }
        }
        stage('Test') {
            steps {
                sh 'docker run --rm autosdv:latest make test'
            }
        }
    }
}
Next Steps
- Software Installation - Native installation guide
- Usage Guide - Operating AutoSDV
- Development Guide - Development workflows
- Manual Setup - Customization options