Open AR Cloud · james.jackson ✉ openarcloud

SpatialDDS: The Typed Spatial Data Bus for Connected Systems

Robots, vehicles, base stations, AR/XR devices, IoT sensors, and digital twins all produce spatial data: detections, poses, maps, zones. Today each domain invents its own formats and transports. SpatialDDS provides a shared, typed, open protocol so spatial data flows across domains and operators without per-integration custom work: the interoperability layer for spatial computing.

14 Profiles · 214 Conformance Checks · 5 Datasets Validated · 4 Bridges Built
SpatialDDS Multi-Operator Intersection Demo
Live Demo

Multi-Operator Intersection Fusion

Three vehicle fleets and a 6G base station share typed Detection3D messages on a SpatialDDS bus. A fusion service correlates detections across sources, publishes fused tracks with provenance, and detects trajectory conflicts in real time.

โ— โ— โ— Three AV operators with different sensor mixes
โ—‡ Infrastructure radar with false-alarm filtering
โ—† Fused tracks with multi-source attribution
โŠ— Planned trajectory conflict detection
โ—‹ Coverage circles + spatial discovery
10 Hz per operator · 100% multi-source · 2.9× vs best AV · ⚠ 1 conflict

Gaps SpatialDDS Addresses

Gap

No cross-domain spatial schema

ROS 2 has sensor_msgs, V2X has BSM/CAM, IoT has custom JSON. None define typed 3D detections, map alignments, or spatial zones that work across all three.

→ Domain-neutral types: Detection3D, MapAlignment, SpatialZone, GeoPose
Gap

No multi-operator coordination

Existing stacks assume single-operator control. When Fleet A and Fleet B share an intersection, there's no standard for topic namespacing, provenance, or trust boundaries.

→ Per-operator namespaces, source_operator provenance, discovery with coverage
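A minimal sketch of what per-operator namespacing and provenance recovery could look like, assuming an illustrative "spatialdds/&lt;operator&gt;/&lt;topic&gt;" naming convention (the convention and helper names here are assumptions, not the spec's):

```python
# Hypothetical per-operator topic namespacing; the "spatialdds/" prefix
# and layout are illustrative, not normative.

def operator_topic(operator: str, topic: str) -> str:
    """Build a namespaced topic so Fleet A and Fleet B never collide."""
    return f"spatialdds/{operator}/{topic}"

def parse_operator(namespaced: str) -> str:
    """Recover source_operator provenance from a namespaced topic."""
    prefix, operator, _topic = namespaced.split("/", 2)
    if prefix != "spatialdds":
        raise ValueError(f"not a SpatialDDS topic: {namespaced!r}")
    return operator
```

With this scheme, a consumer can always attribute a message to its originating operator from the topic name alone, before even parsing the payload.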
Gap

No infrastructure sensing integration

6G base stations will be sensors. No protocol connects their radar/beam observations to robot perception stacks. Custom APIs per vendor, per deployment.

→ The same types for vehicles and infrastructure: RadTensorFrame, RfBeamFrame, Detection3DSet
Gap

No spatial discovery

New participants can't ask "who has mapped this area?" or "which sensors cover this intersection?" Discovery is either manual config or nonexistent.

→ Announce with coverage geometry, CoverageQuery with spatial filtering

What SpatialDDS Provides

Typed Spatial Messages

Schema-enforced types for every spatial concept, not opaque bytes or ad hoc JSON. Every consumer knows exactly what fields exist, what they mean, and how to interpret them.

Detection3D · FramedPose · GeoPose · VisionFrame · LidarFrame · RadDetectionSet · RadioScan · PlannedTrajectory
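To make "schema-enforced" concrete, here is a Python mirror of what a typed Detection3D message might carry. The normative field set lives in the SpatialDDS IDL, which is not reproduced here, so every field name below is an assumption for illustration:

```python
from dataclasses import dataclass

@dataclass
class FrameRef:
    # Frames are identified by UUID, not by ambiguous string names.
    uuid: str

@dataclass
class Detection3D:
    # Illustrative subset of a typed 3D detection; real IDL may differ.
    frame: FrameRef        # coordinate frame the position is expressed in
    position: tuple        # (x, y, z) in metres, in `frame`
    class_label: str       # e.g. "vehicle", "pedestrian"
    confidence: float      # 0..1
    source_operator: str   # provenance: who produced this detection

det = Detection3D(
    frame=FrameRef("00000000-0000-0000-0000-000000000001"),
    position=(12.0, -3.5, 0.4),
    class_label="vehicle",
    confidence=0.91,
    source_operator="fleet_a",
)
```

The point is that a consumer never guesses: the frame, units, class, and provenance are all explicit fields rather than conventions buried in documentation.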

Coordinate Frame DAG

FrameRef by UUID with transform chains. Multiple maps, multiple robots, multiple operators, all linked by typed transforms with uncertainty. No more "base_link" collisions.

FrameRef · FrameTransform · GeoAnchor · CovMatrix · PoseSE3
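A sketch of resolving a point through a chain of frame transforms keyed by frame ID. Real FrameTransform values are full SE(3) poses with covariance; this illustration composes translations only, and the frame names and table layout are assumptions:

```python
# child frame -> (parent frame, translation of child origin in parent), metres.
# Translation-only for clarity; the real transforms are SE(3) with uncertainty.
parents = {
    "robot_a/base": ("map_1", (2.0, 0.0, 0.0)),
    "map_1": ("earth", (100.0, 50.0, 0.0)),
}

def to_root(frame: str, point):
    """Express `point` (given in `frame`) in the root frame by walking the DAG."""
    x, y, z = point
    while frame in parents:
        frame, (tx, ty, tz) = parents[frame]
        x, y, z = x + tx, y + ty, z + tz
    return frame, (x, y, z)
```

Because each hop is a typed transform between UUID-identified frames, two robots that both call their body frame "base" never alias each other: the chain walks distinct entries, not a shared string name.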

Spatial Discovery & Coverage

Participants announce what they sense and where. Consumers query by spatial region. Late joiners discover the network without manual configuration.

Announce · CoverageQuery · CoverageElement · TopicMeta · ServiceKind
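The discovery flow above can be sketched as announcements with circular coverage and a query that filters them spatially. The record fields and circle-only geometry are simplifying assumptions for illustration:

```python
import math

# Announced coverage per participant (illustrative fields; real
# CoverageElement geometry is richer than a circle).
announcements = [
    {"participant": "fleet_a", "center": (0.0, 0.0), "radius_m": 150.0},
    {"participant": "rsu_6g",  "center": (80.0, 0.0), "radius_m": 200.0},
    {"participant": "fleet_b", "center": (500.0, 500.0), "radius_m": 100.0},
]

def coverage_query(point):
    """Answer 'which sensors cover this point?' from announced coverage."""
    px, py = point
    return [
        a["participant"]
        for a in announcements
        if math.hypot(px - a["center"][0], py - a["center"][1]) <= a["radius_m"]
    ]
```

A late joiner runs exactly this kind of query instead of being hand-configured with a peer list: ask for a region, get back the participants whose announced coverage contains it.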

Multi-Source Fusion Support

Source provenance, uncertainty, entity correlation. A fusion service consumes Detection3D from N sources and publishes FusedTrack without knowing the sources' internals.

FusedTrack · EntityBinding · source_operator · confidence boosting
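A minimal sketch of multi-source fusion with provenance and confidence boosting. The greedy gating association and the noisy-OR confidence rule are my assumptions for illustration, not the spec's algorithm (the spec defines the message types, not the fusion method):

```python
import math

def fuse(detections, gate_m=2.0):
    """Associate detections within gate_m metres into tracks; each input is a
    dict with keys: op (source operator), x, y, conf."""
    tracks = []  # each track is a list of associated detections
    for det in detections:
        for track in tracks:
            cx = sum(d["x"] for d in track) / len(track)
            cy = sum(d["y"] for d in track) / len(track)
            if math.hypot(det["x"] - cx, det["y"] - cy) <= gate_m:
                track.append(det)
                break
        else:
            tracks.append([det])
    fused = []
    for t in tracks:
        miss = 1.0
        for d in t:
            miss *= 1.0 - d["conf"]  # noisy-OR: independent agreement boosts confidence
        fused.append({
            "x": sum(d["x"] for d in t) / len(t),
            "y": sum(d["y"] for d in t) / len(t),
            "confidence": 1.0 - miss,
            "sources": sorted({d["op"] for d in t}),  # multi-source provenance
        })
    return fused
```

Note that the fused output keeps the operator list rather than discarding it, which is what lets a downstream consumer audit which fleets contributed to each track.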

Map & Twin Lifecycle

Maps go from BUILDING to OPTIMIZING to STABLE. Inter-map alignments carry evidence and revision numbers. Zone state tracks occupancy in real time.

MapMeta · MapAlignment · SpatialZone · ZoneState · SpatialEvent

Real-Time DDS Transport

Built on OMG DDS with configurable QoS: BEST_EFFORT for high-rate sensors, RELIABLE for detections, TRANSIENT_LOCAL for metadata. Peer-to-peer; no central broker required.

RADAR_RT · VIDEO_LIVE · EVENT_RT · RADIO_SCAN_RT · BlobRef + BlobChunk
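The QoS pairings described above can be written down as a lookup table a publisher might consult. The traffic-class names and the durability values are assumptions made for this sketch; only the reliability/durability pairings stated in the text are taken from the document:

```python
# QoS by traffic class, mirroring the text: lossy-but-fast for high-rate
# sensors, guaranteed delivery for detections, and late-joiner replay
# (TRANSIENT_LOCAL) for metadata. Durability for the first two classes
# is assumed VOLATILE.
QOS_BY_TRAFFIC = {
    "high_rate_sensor": {"reliability": "BEST_EFFORT", "durability": "VOLATILE"},
    "detections":       {"reliability": "RELIABLE",    "durability": "VOLATILE"},
    "metadata":         {"reliability": "RELIABLE",    "durability": "TRANSIENT_LOCAL"},
}

def qos_for(traffic_class: str) -> dict:
    return QOS_BY_TRAFFIC[traffic_class]
```

TRANSIENT_LOCAL on metadata is what makes late joiners work without a broker: the DDS writer itself replays the last metadata samples to any reader that appears later.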

Ecosystem Bridges: Meet Every Community Where They Are

BUILT

MCAP Recorder

SpatialDDS ↔ MCAP files

Record multi-source spatial streams. Replay for re-analysis. Ingest into ML training pipelines (LeRobot, Open X-Embodiment). Visualize in Foxglove.

BUILT

ROS 2 Bridge

SpatialDDS ↔ ROS 2 topics

Translate sensor_msgs and vision_msgs. Operator-scoped UUIDv5 frames solve tf2 collisions. Separate DDS domains mean zero interference with your robot stack.
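Operator-scoped UUIDv5 frames can be sketched with Python's standard uuid module: derive a deterministic frame UUID from (operator, ROS frame name), so two robots' identical "base_link" strings map to distinct frame IDs. The namespace UUID below is an illustrative constant, not one defined by the spec:

```python
import uuid

# Illustrative namespace for frame-ID derivation (reusing the well-known
# DNS namespace constant for the sketch; the bridge's actual namespace
# may differ).
FRAME_NS = uuid.UUID("6ba7b810-9dad-11d1-80b4-00c04fd430c8")

def frame_uuid(operator: str, ros_frame: str) -> uuid.UUID:
    """Deterministic, operator-scoped UUIDv5 for a ROS frame name."""
    return uuid.uuid5(FRAME_NS, f"{operator}/{ros_frame}")
```

Determinism is the point: both ends of the bridge compute the same UUID independently, with no registry or handshake, while different operators can never collide.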

BUILT

MQTT Bridge

SpatialDDS ↔ MQTT broker

Edge-to-cloud over Mosquitto or AWS IoT Core. QoS mapping, retained messages for metadata, per-operator topic policies. Loop prevention built in.

BUILT

WebSocket Bridge

SpatialDDS ↔ browsers

Client-driven subscriptions with glob patterns. Topic discovery. Rate limiting. Bidirectional: browser apps can publish back. Built-in debug dashboard.
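Glob-pattern subscription matching, as the WebSocket bridge description suggests, can be sketched with Python's fnmatch. The topic names and the exact pattern semantics the real bridge uses are assumptions here:

```python
from fnmatch import fnmatch

# Topics a bridge might know about (illustrative names).
topics = [
    "spatialdds/fleet_a/detections",
    "spatialdds/fleet_b/detections",
    "spatialdds/fleet_a/pose",
]

def matching_topics(pattern: str):
    """Topics a glob subscription like 'spatialdds/*/detections' would receive.
    Note fnmatch's '*' also crosses '/' boundaries."""
    return [t for t in topics if fnmatch(t, pattern)]
```

A browser client subscribes once with a pattern instead of enumerating topics, and newly discovered topics that match can be attached to the same subscription.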

🚀

One-Button Deploy: Laptop to Cloud

docker compose up runs locally in 30 seconds with synthetic multi-operator intersection data, a fusion service, and a web dashboard. ./deploy.sh deploys the same stack to AWS Fargate in 5 minutes: ALB with WebSocket, all four bridges, ~$2.50/day. Same Docker image, same DDS domain, same dashboard.

30 s local · 5 min AWS

SpatialDDS 1.6 Specification

14 profiles · 60+ IDL types · 214 conformance checks across 5 public datasets · Core, sensing, discovery, anchors, mapping, semantics, events, and provisional profiles for RF beam and radio fingerprinting.

The Grounding Layer for
AI World Models

Every world model, from learned dynamics and foundation models for robotics to digital twins and planning agents, needs structured, real-time observations of the physical world. Model architectures are advancing rapidly; the infrastructure for connecting models to reality is not. SpatialDDS fills that gap.

SpatialDDS carries observations and predictions.
Not episodes. Not rewards. Not latent representations.
Those belong to the ML ecosystem, connected through bridges.

Every profile answers a question a world model asks:
What objects exist, and where? → Detection3D, FusedTrack
What does it look like? → VisionFrame, CamIntrinsics
What's the 3D structure? → LidarFrame, GeometryTile
What's the RF environment? → RadioScan, RfBeamFrame
Where am I in the world? → GeoPose, GeoAnchor
What zones exist, in what state? → SpatialZone, ZoneState
What just happened? → SpatialEvent
What does this agent intend? → PlannedTrajectory
What data sources exist? → Announce, CoverageQuery
Which messages describe the same thing? → EntityBinding
RECORD → TRAIN
The MCAP bridge records spatial streams → LeRobot / Open X-Embodiment ingest them as episodes. Spatial semantics survive the round trip.
SUBSCRIBE → INFER
A model inference server subscribes to live SpatialDDS streams, runs prediction, and publishes PlannedTrajectory or Detection3D back to the bus.
DISCOVER → OBSERVE
The discovery profile provides the observation-space manifest: what types, at what rates, with what coverage. A spatial context window for embodied AI.
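To make the observation-space manifest concrete, here is a sketch of deriving one from discovery announcements. The record fields and topic names are illustrative assumptions; the real Announce/TopicMeta types carry more (coverage geometry among other things):

```python
# Illustrative Announce records: topic, message type, publish rate.
announces = [
    {"topic": "fleet_a/detections", "type": "Detection3D", "rate_hz": 10.0},
    {"topic": "rsu_6g/radar", "type": "RadDetectionSet", "rate_hz": 20.0},
    {"topic": "fusion/tracks", "type": "FusedTrack", "rate_hz": 10.0},
]

def obs_manifest(announces):
    """Group announced streams by message type: what can a model observe,
    and at what rate?"""
    manifest = {}
    for a in announces:
        manifest.setdefault(a["type"], []).append(
            {"topic": a["topic"], "rate_hz": a["rate_hz"]}
        )
    return manifest
```

An embodied-AI consumer would build exactly this kind of structure at startup, then subscribe only to the types its model actually conditions on.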

Validated Against 5 Public Datasets

I.1 nuScenes · Autonomous driving: 6 cameras, LiDAR, 5 radars, 3D annotations · 27 checks
I.2 DeepSense 6G · V2I beam prediction: FMCW radar, 60 GHz phased array, camera · 44 checks
I.3 S3E · Multi-robot SLAM: 3 UGVs, LiDAR, UWB ranging, inter-robot loops · 38 checks
I.4 ScanNet · Indoor scenes: RGB-D, semantic labels, spatial zones, events · 35 checks
I.5 LaMAR · Multi-device AR: HoloLens, iPhone, NavVis scanner, anchors, WiFi/BT · 70 checks

14 Profiles โ€” Stable + Provisional

core · Poses, transforms, blobs
sensing.vision · Cameras, depth
discovery · Announce, coverage
sensing.lidar · Point clouds, meshes
anchors · GeoAnchor, AnchorSet
sensing.rad · Radar detections + tensors
semantics · Detection2D/3D, classes
sensing.imu · Accel, gyro, mag
ar_geo · GeoPose, ENU frames
mapping · Maps, alignment, lifecycle
events · Zones, alerts, state
sensing.rf_beam · mmWave beams (provisional)
sensing.radio · WiFi, BLE, UWB (provisional)