# Research: SLAM + VLM Hybrid Architecture for TurboPi

**Session 85, 2026-04-13** | Research-only — no implementation

---

## Executive Summary

Annie currently navigates with a VLM-only pipeline (Gemma 4 E2B on Panda, 54 Hz) that provides semantic goal-seeking but has **no persistent spatial memory**. Every navigation session starts from scratch — she doesn't know where she is, what she's already mapped, or how rooms connect. Adding SLAM gives her a spatial backbone: a persistent occupancy grid, localization ("where am I?"), and the foundation for natural-language goals ("go to the kitchen").

This document evaluates 5 hybrid architectures and recommends **Proposal B (Minimal SLAM + Semantic Waypoints)** as the highest-probability-of-success path. It avoids the depth-camera blocker that eliminates most academic VLM+SLAM systems, keeps SLAM classical and lightweight on the Pi, and layers semantic intelligence on top via the existing VLM pipeline.

---

## Hardware Inventory (constraints for all proposals)

| Machine | Role | Key Resources |
|---------|------|---------------|
| **Pi 5** (192.168.68.61) | Robot car. Camera, lidar, motors, safety | 16 GB RAM, 4× Cortex-A76 @ 2.4 GHz, Hailo-8 NPU |
| **Panda** (192.168.68.57) | Fast VLM inference | RTX 5070 Ti 16 GB, 4.3 GB free VRAM |
| **Titan** (192.168.68.52) | Main brain (Gemma 4 26B) | DGX Spark, 128 GB unified, Blackwell |

**Sensors on Pi:** RPLIDAR C1 (10 Hz, 10.24m, DenseBoost, CCW angles, /dev/ttyUSB0), Pico MPU-6050 IMU (100 Hz, /dev/ttyACM0), monocular USB camera (640×480), sonar (HC-SR04).

**No depth camera.** This eliminates ConceptGraphs, HOV-SG, OVO-SLAM, OK-Robot, VLMaps, and HomeRobot — all require RGB-D.

---

## R1: ROS2 on Pi 5 — Docker vs Native

### Finding: Docker is the proven path

| Option | Feasibility | Risk |
|--------|-------------|------|
| **Docker (ros:jazzy)** | Well-tested on Pi 5 aarch64. Serial passthrough via `--device=/dev/ttyUSB0`. Husarion provides multi-arch Nav2+slam_toolbox images. | Device permissions (dialout group), DDS discovery across container boundary |
| **Native build** | Tier 3 support only — no official Jazzy binaries for Bookworm. 4-8h source build. Community .deb packages exist (Ar-Ray-code/rpi-bullseye-ros2) | Build breaks, version skew, pollutes host OS |
| **Hybrid** (recommended) | ROS2 in Docker, turbopi-server stays native. Bridge via HTTP or shared socket. | Bridge latency (negligible at 10 Hz) |

**Recommendation:** Hybrid. Docker for ROS2 stack, native for turbopi-server. Bridge via a lightweight WebSocket or HTTP endpoint that forwards LaserScan + IMU data from the native lidar daemon into the Docker container as ROS2 topics. This isolates ROS2 from destabilizing the working navigation stack.

**Key reference:** `husarion/navigation2-docker` provides ready-made ARM64 images with slam_toolbox + Nav2.

---

## R2: rf2o + RPLIDAR C1 Compatibility

### Finding: Compatible, with caveats

| Question | Answer |
|----------|--------|
| rplidar_ros2 supports C1? | Yes — SLAMTEC's `sllidar_ros2` package supports C1. Alternatively, wrap our existing `rplidarc1` Python library in a ROS2 publisher node. |
| Variable density (DenseBoost)? | Compatible — rf2o's Gaussian range-derivative fitting doesn't assume uniform angular spacing. Standard `sensor_msgs/LaserScan` works. |
| CCW angle convention? | **Must flip.** The C1's native angle ordering is the mirror of the right-handed convention REP-103 prescribes for ROS2 frames. Reverse point order when publishing LaserScan. |
| 10 Hz sufficient? | Yes — MentorPi uses rf2o at `freq: 10.0`. rf2o processes each scan in ~0.9 ms. |
| Mecanum strafing? | **Partial.** rf2o estimates motion purely from scan matching (no wheel model assumed), so it handles lateral motion. But pure vy (strafing without rotation) produces translational scan shifts that are harder to match than rotations. EKF fusion with wheel odom compensates. |
| 130mm chassis offset? | Define static TF: `base_footprint → laser_frame` with the measured offset. |

**Key risk:** The angle convention flip is critical. If missed, rf2o will compute inverted odometry (robot thinks it went left when it went right). Our existing `lidar.py` already handles CCW→CW conversion for the safety daemon sectors — same logic applies.
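A minimal sketch of the flip, assuming the daemon hands over one revolution as parallel angle/range lists (function and field names are illustrative, not taken from the existing `lidar.py`):

```python
import math

def flip_scan(angles_deg, ranges_m):
    """Mirror a lidar revolution into the opposite sweep direction and
    return (angle_min, angle_max, angle_increment, ranges) for a
    sensor_msgs/LaserScan. Mirroring negates each angle, then re-sorts
    so angles increase monotonically, as LaserScan requires."""
    mirrored = sorted(
        ((-math.radians(a), r) for a, r in zip(angles_deg, ranges_m)),
        key=lambda p: p[0],
    )
    angles = [a for a, _ in mirrored]
    ranges = [r for _, r in mirrored]
    # DenseBoost spacing is variable, so a real bridge would resample onto
    # a fixed angular grid; here we just report the mean increment.
    inc = (angles[-1] - angles[0]) / (len(angles) - 1) if len(angles) > 1 else 0.0
    return angles[0], angles[-1], inc, ranges
```

Sanity-check in rviz2 before any driving: a wall on the robot's left must render on the left of the scan.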

---

## R3: VLM+SLAM Hybrid Systems — State of the Art

### Academic landscape (most require RGB-D — NOT viable for us)

| System | Year | What it does | Depth required? | Edge-deployable? |
|--------|------|-------------|-----------------|------------------|
| ConceptGraphs | 2024 | Open-vocab 3D scene graphs from RGB-D | **Yes** | No (heavy GPU) |
| HOV-SG | 2024 | Hierarchical scene graphs (floor/room/object) | **Yes** | No |
| OVO-SLAM | 2024 | CLIP-labeled 3D segments | **Yes** | No |
| VLMaps | 2023 | CLIP features fused into 3D reconstruction | **Yes** | No |
| OpenMonoGS-SLAM | 2025 | First monocular open-set SLAM (Gaussian Splatting + SAM + CLIP) | **No (mono!)** | No (research prototype) |
| SayCan | 2022 | LLM grounds language in robotic affordances | No depth needed | Partially (needs action primitives) |
| LM-Nav | 2022 | CLIP-embedded topological waypoint graph + language matching | **No (mono!)** | Yes (lightweight) |
| VLM-Nav | 2026 | Mapless nav: monocular → DepthAnything-V2 → VLM decides | **No (mono!)** | Cloud VLM only |
| AnyLoc | 2023 | DINOv2 features for universal place recognition | **No (mono!)** | Yes (GPU for extraction) |
| OK-Robot | 2024 | Zero-shot pick-and-drop with VLM chain | **Yes** | No |
| HomeRobot | 2023 | Benchmark for open-vocab mobile manipulation | **Yes** | No (benchmark only) |

### What's actually viable for us (monocular camera + 2D lidar)

Only **4 systems** work without depth cameras:

1. **LM-Nav pattern** — Topological graph of CLIP/DINOv2-embedded waypoints. Navigate between waypoints using visual matching. Language queries find waypoints by CLIP similarity. **Most practical for our setup.**

2. **AnyLoc** — Universal place recognition via DINOv2 features. Could augment slam_toolbox's loop closure with visual features extracted on Panda. Monocular-compatible.

3. **VLM-Nav pattern** — Monocular → DepthAnything-V2 for depth estimation → VLM for obstacle avoidance. Validates our current approach (monocular + offboard VLM). Shows DepthAnything-V2 as a bridge if we ever need depth.

4. **SayCan pattern** — LLM as high-level planner over pre-defined action primitives. **Annie already does this** with her 4-command nav vocabulary.

### Key insight

> **No production system combines VLM + 2D SLAM at our scale.** Academic systems either require RGB-D or run on cloud GPUs. We would be building something novel by layering VLM semantic labels onto a classical 2D occupancy grid. The closest conceptual match is VLMaps (CLIP features on map cells), but we'd do it in 2D with periodic VLM captioning instead of dense CLIP embedding.

---

## R4: Architecture Proposals (5 options, ranked by probability of success)

### Proposal A: Full Nav2 Stack
**"Classical robot, VLM bolted on"**

```
Pi Docker:  rplidar_ros2 → rf2o → EKF → slam_toolbox → Nav2 → cmd_vel
            ↑ IMU                                         ↑ goal pose
Panda:      VLM labels map regions periodically
Titan:      Annie sends Nav2 goal poses, monitors progress
```

- **What you get:** Full path planning, obstacle avoidance, recovery behaviors, map persistence
- **What you lose:** Annie's VLM-driven reactive navigation (replaced by Nav2's costmap planner)
- **Risk:** Nav2 is complex (costmaps, behavior trees, recovery plugins). Configuration hell on a mecanum platform. Two competing obstacle avoidance systems (Nav2 costmap vs existing lidar safety daemon).
- **Probability of success:** **40%** — high reward but high integration complexity, and replaces working reactive nav with something untested on this chassis

### Proposal B: Minimal SLAM + Semantic Waypoints (RECOMMENDED)
**"Annie keeps driving, but now she knows WHERE she is"**

```
Pi Docker:  rplidar_ros2 → rf2o → EKF → slam_toolbox (mapping + localization only)
            ↑ IMU                    ↓
            Bridge ←────── /map + /tf (robot pose in map frame)

Pi Native:  turbopi-server exposes GET /slam/pose → {x, y, theta, map_frame}
            turbopi-server exposes GET /slam/map → occupancy grid PNG

Panda:      VLM still drives navigation (4-command, 54 Hz) — unchanged
            Periodically: camera frame → VLM → scene label → POST /slam/annotate

Titan:      Annie orchestrates. Now she can:
            - "Where am I?" → query /slam/pose → "you're at (2.3, 1.1), near the kitchen"
            - "Go home" → query /slam/pose + ArUco homing (already works)
            - "Go to the kitchen" → query annotated map → plan sequence of waypoints
            - Build spatial memory over time (rooms, landmarks, routes)
```

- **What you get:** Localization, persistent map, spatial memory — without replacing the working VLM nav
- **What you keep:** All existing navigation (VLM streaming, ArUco homing, lidar safety daemon)
- **What's new:** Only the SLAM layer (slam_toolbox in Docker) + bridge + 2 HTTP endpoints + periodic VLM annotation
- **Risk:** Bridge between Docker ROS2 and native turbopi-server is the main integration point. Low risk — just HTTP/WebSocket.
- **Probability of success:** **80%** — smallest delta from current working system, well-tested components (slam_toolbox, rf2o), bridge is simple HTTP
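The two new endpoints are small enough to sketch with the stdlib alone (handler shape, JSON fields, and the `update_pose` hook are illustrative; turbopi-server's actual framework may differ):

```python
import json
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# Latest pose in the map frame, updated by the ROS2 bridge. All names
# below are illustrative, not the final turbopi-server layout.
_latest_pose = {"x": 0.0, "y": 0.0, "theta": 0.0, "map_frame": "map", "stamp": 0.0}
_lock = threading.Lock()

def update_pose(x, y, theta):
    """Called by the bridge whenever a new map->base_footprint pose arrives."""
    with _lock:
        _latest_pose.update(x=x, y=y, theta=theta, stamp=time.time())

class SlamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/slam/pose":
            with _lock:
                body = json.dumps(_latest_pose).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:  # /slam/map would stream the occupancy-grid PNG the same way
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the robot's logs quiet
        pass
```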

### Proposal C: Topological Semantic Map (LM-Nav inspired)
**"No occupancy grid — just a graph of named places"**

```
Phase 1 (Exploration):
  Annie drives around using VLM nav
  Every N seconds: capture frame → extract DINOv2/CLIP features on Panda
  Save: waypoint = {pose_from_IMU_integration, image_features, VLM_label}
  Build graph: waypoints connected by traversed paths

Phase 2 (Navigation):
  "Go to the kitchen" → CLIP similarity search over waypoint labels
  → Plan path through graph (A* on waypoint connections)
  → Navigate waypoint-to-waypoint using VLM visual matching
```

- **What you get:** Natural language navigation without occupancy grid. Lightweight, no ROS2 needed.
- **What you lose:** No metric map (can't answer "how far is the kitchen?"). No obstacle-aware path planning.
- **Risk:** IMU-only pose integration drifts badly over time (no lidar correction). Visual matching between current frame and stored waypoint is hard to get reliable.
- **Probability of success:** **55%** — elegant concept but unproven at our scale. Pose drift without SLAM is the killer. Could work as a LAYER on top of Proposal B.
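Phase 2's two steps each fit in a few lines; a sketch with placeholder embeddings (real ones would come from CLIP on Panda). Note that without trustworthy metric poses there is no admissible heuristic, so the graph search below is plain Dijkstra rather than A*:

```python
import heapq
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def find_waypoint(goal_embedding, waypoints):
    """Pick the waypoint whose stored embedding best matches the embedded
    language query ("go to the kitchen")."""
    return max(waypoints, key=lambda w: cosine(goal_embedding, waypoints[w]["emb"]))

def plan(graph, start, goal):
    """Cheapest path over the waypoint graph; edge weights are the lengths
    of routes actually traversed during exploration."""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(frontier, (cost + w, nbr, path + [nbr]))
    return None
```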

### Proposal D: SLAM + DepthAnything-V2 Bridge
**"Fake depth camera via monocular depth estimation"**

```
Pi:     Monocular camera frame → Panda
Panda:  DepthAnything-V2 → pseudo-depth map → 3D point cloud
        → Could feed rtabmap or other visual SLAM
Pi:     2D lidar SLAM (slam_toolbox) runs independently
Titan:  Fuses 2D SLAM map with VLM-annotated 3D features
```

- **What you get:** 3D scene understanding without a depth camera. Enables ConceptGraphs-like semantic mapping.
- **Risk:** DepthAnything-V2's metric variants produce noisy absolute scale. Network latency (Pi → Panda → inference → back) adds ~50ms per frame. Real-time 3D reconstruction from monocular depth remains an active research problem.
- **Probability of success:** **30%** — too many unproven components chained together. Cool for a research project, risky for a robot that needs to work.

### Proposal E: Pure VLM + Spatial Memory (No SLAM)
**"Annie remembers places without a map"**

```
No new hardware/software. Extend Annie's existing memory:
- After each navigation session, Annie logs: {goal, route_description, success, VLM_observations}
- Over time, builds a text-based spatial model: "the kitchen is to the left from the hallway"
- Navigation: Annie reasons over text memories to plan routes
- No metric coordinates, no occupancy grid
```

- **What you get:** Zero integration complexity. Works today.
- **Risk:** Text-based spatial reasoning is unreliable ("left from the hallway" depends on which way you're facing). No loop closure. No drift correction.
- **Probability of success:** **60%** for basic room navigation, **20%** for reliable multi-room planning — fundamentally limited by lack of metric spatial representation.

---

## Proposal Comparison Matrix

| Criterion | A: Full Nav2 | B: Minimal SLAM (rec.) | C: Topological | D: DepthAnything | E: Pure VLM |
|-----------|:---:|:---:|:---:|:---:|:---:|
| **Probability of success** | 40% | **80%** | 55% | 30% | 60% |
| **Integration complexity** | Very High | Low | Medium | Very High | None |
| **Preserves working nav?** | No | **Yes** | Yes | Yes | Yes |
| **Persistent map?** | Yes | **Yes** | Partial | Yes | No |
| **Metric localization?** | Yes | **Yes** | No (drifts) | Yes | No |
| **Natural language goals?** | With work | **With VLM annotation** | Native | With work | Partial |
| **ROS2 required?** | Yes (full) | **Yes (minimal)** | No | Partial | No |
| **New hardware needed?** | None | **None** | None | None | None |
| **Time to first working demo** | 3-4 sessions | **1-2 sessions** | 2-3 sessions | 4-5 sessions | 0 (already works) |
| **Ceiling (how good can it get?)** | Very high | High | Medium | Very high | Low |

---

## Recommended Path: Proposal B with Proposal C as a future layer

### Why B wins

1. **Smallest delta from working system.** Annie's VLM nav, ArUco homing, and lidar safety daemon continue unchanged. SLAM is purely additive.

2. **Well-tested components.** slam_toolbox + rf2o + EKF are battle-tested in ROS2. Husarion provides ARM64 Docker images. MentorPi's config is a ready-made reference.

3. **Immediate value.** Even before semantic annotation, just knowing "I am at (2.3, 1.1) in the map" enables: return-to-position, multi-room exploration without getting lost, spatial memory in the Context Engine.

4. **Natural upgrade path.** Once the SLAM foundation works:
   - Add Proposal C's topological waypoints as a semantic layer on top of the metric map
   - Add AnyLoc visual place recognition to augment loop closure
   - Add VLM-based room labeling to the occupancy grid
   - Eventually, if a depth camera is added, upgrade to full 3D semantic SLAM

### Implementation phases

**Phase 1: SLAM Foundation (1 session)**
- Docker: `ros:jazzy` + `slam_toolbox` + `rf2o_laser_odometry` + `robot_localization`
- Bridge: turbopi-server publishes LaserScan + IMU via WebSocket → Docker ROS2 node subscribes
- Expose: `GET /slam/pose` and `GET /slam/map` on turbopi-server
- Validate: robot drives around, map builds, pose is stable

**Phase 2: Annie Integration (1 session)**
- Annie queries `/slam/pose` to know where she is
- After navigation: Annie logs waypoint to Context Engine `{pose, VLM_scene_label, timestamp}`
- New tool: `get_robot_location()` → "I'm at (2.3, 1.1), which Annie previously labeled 'hallway near kitchen'"
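The formatting half of `get_robot_location()` is just a nearest-waypoint lookup; a sketch with a hypothetical waypoint schema (the real entries would come from the Context Engine):

```python
import math

def nearest_label(pose, waypoints):
    """Render a human-readable location from the current /slam/pose and
    Annie's logged waypoints [{"x": .., "y": .., "label": ..}, ...].
    (Schema and wording are illustrative.)"""
    here = f"({pose['x']:.1f}, {pose['y']:.1f})"
    if not waypoints:
        return f"at {here}, in unlabeled territory"
    best = min(
        waypoints,
        key=lambda w: math.hypot(w["x"] - pose["x"], w["y"] - pose["y"]),
    )
    return f"at {here}, near '{best['label']}'"
```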

**Phase 3: Semantic Annotation (1 session)**
- Periodic VLM captioning: every 5s during exploration, Panda's VLM labels the scene
- Labels attached to map cells nearest to current pose
- Annie can query: "what rooms have I visited?" → annotated map retrieval
- Goal navigation: "go to the kitchen" → find kitchen waypoint → drive toward it using VLM nav
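Attaching a label to the nearest map cell is one coordinate transform plus a sparse dict, using the grid origin and resolution from the `/map` metadata; a sketch (class and method names are illustrative):

```python
import math

def world_to_cell(x, y, origin_x, origin_y, resolution=0.05):
    """Map-frame meters -> occupancy-grid cell indices (origin and
    resolution come from the /map metadata)."""
    return (math.floor((x - origin_x) / resolution),
            math.floor((y - origin_y) / resolution))

class SemanticGrid:
    """Sparse per-cell label store; a real version would persist these
    annotations to the Context Engine."""
    def __init__(self, origin_x, origin_y, resolution=0.05):
        self.origin = (origin_x, origin_y)
        self.resolution = resolution
        self.labels = {}   # (cx, cy) -> [labels]

    def annotate(self, x, y, label):
        cell = world_to_cell(x, y, self.origin[0], self.origin[1], self.resolution)
        self.labels.setdefault(cell, []).append(label)

    def rooms_visited(self):
        return sorted({lab for labs in self.labels.values() for lab in labs})
```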

**Phase 4: Topological Layer (future)**
- Add Proposal C as an overlay: named places connected by traversed routes
- AnyLoc features for visual waypoint matching
- This is where LM-Nav-style natural language navigation becomes possible

---

## R5: CPU/Memory Budget on Pi 5

### Per-Component Resource Usage

| Component | CPU (steady) | CPU (peak) | RAM |
|-----------|:---:|:---:|:---:|
| rplidar_ros2 node | 2-5% of 1 core | — | 20-30 MB |
| rf2o_laser_odometry | 1-3% of 1 core | — | 15-25 MB |
| robot_localization EKF | 1-2% of 1 core | — | 20-30 MB |
| slam_toolbox (async) | 15-25% of 1 core | **2-3 cores for 10-15s** (loop closure) | 250-350 MB |
| ROS2 daemon + DDS | 3-5% of 1 core | — | 80-120 MB |
| Bridge node | 1-2% of 1 core | — | 15-25 MB |
| **Total ROS2 stack** | **~25-40% of 1 core** | **2-3 cores burst** | **~400-580 MB** |

### Budget Assessment

| | Current | After ROS2 | Headroom |
|---|---|---|---|
| **RAM** | ~1-2 GB | ~2-2.5 GB | **13-14 GB free** |
| **CPU (steady)** | ~1-1.5 cores | ~1.5-2 cores | **2-2.5 cores idle** |
| **CPU (peak)** | ~2 cores | ~3-4 cores | **Tight during loop closure** |

**Mitigation for loop closure spikes:**
- Pin safety daemon to core 0 via `taskset` — guaranteed not to starve during SLAM optimization
- Set `map_update_interval: 5.0` (not 3.0)
- Use `resolution: 0.05` (not 0.025) — halves CPU, quarters grid memory
- Docker `--cpuset-cpus=1,2,3` isolates ROS2 from safety-critical core 0
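The `taskset` pinning can equally be done from inside the safety daemon at startup; a Linux-only stdlib sketch (equivalent to `taskset -c 0`):

```python
import os

def pin_to_core(core, pid=0):
    """Restrict a process (pid 0 = the calling process) to one CPU core,
    so SLAM loop-closure bursts on the other cores cannot starve it.
    Linux-only; mirrors `taskset -cp <core> <pid>`."""
    os.sched_setaffinity(pid, {core})
    return os.sched_getaffinity(pid)
```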

**Verdict: Pi 5 (16 GB) has ample headroom.** The only concern is loop closure CPU spikes competing with camera capture and safety daemon, which core pinning addresses.

---

## R6: Existing TurboPi-Specific ROS2 Work

| Find | URL | Relevance |
|------|-----|-----------|
| **turbopi_ros2** (SergioRincon07) | github.com/SergioRincon07/turbopi_ros2 | **Direct match** — ROS2 Humble on TurboPi. Motor/servo/camera drivers. Study `turbopy_base_driver` for motor control patterns. |
| **linorobot2** | github.com/linorobot/linorobot2 | DIY ROS2 robots with mecanum support, Nav2+slam_toolbox+EKF pre-wired. Best scaffolding option. |
| **husarion/navigation2-docker** | github.com/husarion/navigation2-docker | Multi-arch (ARM64+AMD64) Nav2+slam_toolbox Docker images. Directly usable on Pi 5. |
| **pi_hailo_ros2** (koh43) | github.com/koh43/pi_hailo_ros2 | Pi 5 + Hailo-8L + ROS2 object detection node. Reference for Hailo+ROS2 integration. |
| **ros2_controllers/mecanum_drive_controller** | control.ros.org | Official ROS2 mecanum controller. Could replace MentorPi's custom controller. |
| **VLMaps** | github.com/vlmaps/vlmaps | CLIP features on map cells — conceptual reference. Requires RGB-D (not usable directly). |

**Notable absence:** No one has combined VLM navigation with 2D SLAM on a small robot car. This is genuinely novel work.

---

## Key Technical Decisions for Implementation

### 1. Bridge Architecture (turbopi-server ↔ Docker ROS2)

**Option A: WebSocket bridge (recommended)**
```
turbopi-server (native Python)
    ↓ WebSocket (localhost:9090)
ROS2 bridge node (Docker, rclpy)
    ↓ publishes
/scan (LaserScan), /imu (Imu), /odom_raw (Odometry)
```
- turbopi-server's lidar daemon already produces scan data at 10 Hz — forward via WebSocket
- Pico IMU bridge already reads at 100 Hz — forward heading as `sensor_msgs/Imu`
- cmd_vel integration: Docker publishes odom_raw by integrating cmd_vel (same as MentorPi's odom_publisher_node.py)

**Option B: Shared serial (not recommended)**
- Give Docker direct access to /dev/ttyUSB0 and /dev/ttyACM0
- Means turbopi-server can't use these devices simultaneously
- Would require rewriting lidar daemon as a ROS2 node

**Option C: rosbridge_suite (viable fallback)**
- Standard ROS2 WebSocket bridge (port 9090)
- JSON-based, higher overhead than raw WebSocket
- But well-tested and documented
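For Option A, the native side mostly just frames sensor data as messages; a sketch of the wire format (the schema is illustrative and would be agreed between the daemon and the bridge node):

```python
import json
import time

def scan_message(angles_deg, ranges_m, frame_id="laser_frame"):
    """Frame one 10 Hz lidar revolution for the WebSocket bridge."""
    return json.dumps({
        "type": "scan",
        "frame_id": frame_id,
        "stamp": time.time(),
        "angles_deg": list(angles_deg),
        "ranges_m": list(ranges_m),
    })

def imu_message(yaw_rad, yaw_rate, frame_id="imu_frame"):
    """Frame one 100 Hz IMU heading sample (yaw-only, matching the
    yaw/vyaw-only EKF fusion)."""
    return json.dumps({
        "type": "imu",
        "frame_id": frame_id,
        "stamp": time.time(),
        "yaw": yaw_rad,
        "yaw_rate": yaw_rate,
    })
```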

### 2. SLAM Configuration (adapted from MentorPi)

```yaml
slam_toolbox:
  ros__parameters:
    scan_topic: /scan              # from bridge node
    odom_frame: odom
    map_frame: map
    base_frame: base_footprint
    mode: mapping                  # switch to localization once map is saved
    resolution: 0.05               # 5cm cells — sufficient for indoor robot car
    map_update_interval: 5.0       # reduce CPU spikes
    minimum_travel_distance: 0.3   # don't process scans while stationary
    minimum_travel_heading: 0.3    # ~17° minimum rotation before new scan
    max_laser_range: 10.0          # RPLIDAR C1 max (10.24m, leave margin)
    do_loop_closing: true          # essential for multi-room mapping
    use_scan_matching: true
    use_scan_barycenter: true
```

### 3. EKF Configuration (3-source fusion)

```yaml
# Fuse: wheel odometry (cmd_vel integration) + rf2o (lidar scan matching) + IMU (heading)
ekf_filter_node:
  ros__parameters:
    frequency: 30.0                # 30 Hz sufficient for 0.3 m/s robot
    odom0: /odom_raw               # cmd_vel dead-reckoning
    odom0_config: [true, true, false,    # x, y, not z
                   false, false, true,   # not roll/pitch, yes yaw
                   true, true, false,    # vx, vy, not vz
                   false, false, true,   # not vroll/vpitch, yes vyaw
                   false, false, false]  # no linear accelerations
    odom1: /odom_rf2o              # lidar scan matching
    odom1_config: [true, true, false,
                   false, false, true,
                   false, false, false,
                   false, false, false,
                   false, false, false]
    imu0: /imu                     # Pico MPU-6050
    imu0_config: [false, false, false,
                  false, false, true,    # yaw only (heading from IMU)
                  false, false, false,
                  false, false, true,    # vyaw
                  false, false, false]   # skip MPU-6050 linear accel (noisy)
```
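The `/odom_raw` input is plain dead-reckoning: integrate commanded body velocities into an odom-frame pose. A sketch of that integration (MentorPi's actual `odom_publisher_node.py` math is assumed, not copied):

```python
import math

class CmdVelOdometry:
    """Integrate commanded body velocities (vx, vy, vyaw) into an
    odom-frame pose, as an /odom_raw publisher would."""
    def __init__(self):
        self.x = self.y = self.theta = 0.0

    def step(self, vx, vy, vyaw, dt):
        # Rotate the body-frame velocity into the odom frame (mecanum can
        # strafe, so vy matters), then accumulate and wrap the heading.
        c, s = math.cos(self.theta), math.sin(self.theta)
        self.x += (vx * c - vy * s) * dt
        self.y += (vx * s + vy * c) * dt
        self.theta = (self.theta + vyaw * dt + math.pi) % (2 * math.pi) - math.pi
        return self.x, self.y, self.theta
```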

---

## Risk Register

| Risk | Severity | Mitigation |
|------|----------|------------|
| CCW angle convention not flipped → rf2o computes inverted odom | **CRITICAL** | Existing lidar.py already does CCW→CW for safety daemon. Same logic in bridge. Test with rviz2 before any driving. |
| Loop closure CPU spike starves safety daemon | HIGH | Pin safety daemon to core 0 via `taskset`. Docker `--cpuset-cpus=1,2,3`. |
| Docker container doesn't start on boot → robot drives without SLAM | MEDIUM | Not a safety issue — SLAM is additive. Annie falls back to VLM-only nav (current behavior). |
| rf2o accuracy degrades during pure mecanum strafing | MEDIUM | EKF fuses rf2o with IMU heading — compensates for rf2o's rotational bias during lateral motion. Validate empirically. |
| WebSocket bridge adds latency | LOW | At 10 Hz the scan period is 100 ms, so even 10 ms of bridge latency is only a tenth of a period; irrelevant at 0.3 m/s. |
| slam_toolbox map file corruption | LOW | Save maps periodically (`ros2 service call /slam_toolbox/serialize_map`). Keep last 3 snapshots. |
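The keep-last-3 policy from the table is a few lines of housekeeping (file naming is illustrative; slam_toolbox's `serialize_map` actually writes a `.posegraph`/`.data` pair):

```python
import os

def rotate_snapshots(map_dir, keep=3, prefix="map_"):
    """Delete all but the newest `keep` serialized map snapshots,
    ordered by modification time."""
    snaps = sorted(
        (f for f in os.listdir(map_dir) if f.startswith(prefix)),
        key=lambda f: os.path.getmtime(os.path.join(map_dir, f)),
    )
    for stale in snaps[:-keep]:
        os.remove(os.path.join(map_dir, stale))
```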

---

## What NOT To Do

1. **Don't install Nav2 in Phase 1.** Nav2's costmap planner conflicts with Annie's VLM nav. Add it only if/when we want autonomous path planning without VLM.

2. **Don't replace the lidar daemon.** Keep the native Python lidar daemon for the safety daemon and bridge scan data to ROS2. Don't give Docker exclusive serial access.

3. **Don't attempt 3D SLAM without a depth camera.** DepthAnything-V2 is promising but unproven for real-time robotic SLAM. Stick with 2D occupancy grid.

4. **Don't over-configure slam_toolbox.** Start with MentorPi's proven config, change only `scan_topic` and `max_laser_range`. Tune after first successful map.

5. **Don't build the semantic layer before the spatial foundation works.** Phase 1 (SLAM) must be solid before Phase 3 (VLM annotation) starts.

---

## References

### Systems surveyed
- ConceptGraphs (MIT, ICRA 2024) — concept-graphs.github.io
- HOV-SG (RSS 2024) — hovsg.github.io
- LM-Nav (Berkeley, 2022) — sites.google.com/view/lmnav
- SayCan (Google, 2022) — say-can.github.io
- VLMaps (Google, ICRA 2023) — vlmaps.github.io
- AnyLoc (RA-L 2023) — anyloc.github.io
- OK-Robot (NYU, CoRL 2024) — ok-robot.github.io
- VLM-Nav (PLOS ONE, 2026) — monocular + DepthAnything-V2 + VLM
- OpenMonoGS-SLAM (Dec 2025) — first monocular open-set SLAM
- VoxPoser (Stanford, CoRL 2023) — LLM-composed 3D value maps

### ROS2 on Pi 5
- husarion/navigation2-docker — ARM64 Nav2+slam_toolbox images
- SergioRincon07/turbopi_ros2 — ROS2 Humble on TurboPi
- koh43/pi_hailo_ros2 — Hailo-8L + ROS2 on Pi 5
- linorobot2 — DIY ROS2 mecanum robots
- ros2_controllers/mecanum_drive_controller — official mecanum support

### SLAM components
- slam_toolbox (Steve Macenski) — async SLAM for ROS2
- rf2o_laser_odometry (MAPIRlab) — lidar scan matching at 0.9ms/frame
- robot_localization — EKF/UKF for multi-sensor fusion
- MDPI Electronics 2024 — slam_toolbox vs Cartographer benchmarks
