Multi-domain Agile Networked Ground, Unmanned, and Skyward System (MAGNUS)
A DoD HBCU/MSI Equipment/Instrumentation Initiative
MAGNUS is a multi-domain testbed spanning land, air, sea, and simulation for physical & virtual robotics,
zero-touch networking, resilient communications, scalable computer vision, and AI/ML experimentation.
It expands capabilities at UMBC’s R2C2 and integrates with the Distributed Virtual Proving Ground (DVPG).
Overview
MAGNUS provides a shared, interoperable, and extensible testbed to accelerate research in AI/ML, networking,
autonomy, and human–agent teaming. It enables deployment (rapid fielding of ideas on real robots),
comparison (A/B evaluation on vetted scenarios and assets), and extension (bootstrapping
new capabilities that can be promoted into the core facility).

At-a-Glance
- Domains: UGV · UAV · USV · Underwater · Digital Twin
- Focus: Perception · Communications · Federation · Human–Robot Teaming
- Interoperability: ROS 1/2, ARL ground/UAS autonomy stacks, DVPG integration
- Access: On-site & remote (VPN) for UMBC and partner HBCU/MSI institutions
Research Thrusts
T1. Interoperable Sensing & Communications (RobSenCom)
Co-design of robotics and network stacks to synchronize heterogeneous agents and unattended sensors.
Time/event co-simulation (Gazebo/Unity with ns-3), QoS-aware transport, and opportunistic reconfiguration
for LOS/NLOS operation and fault tolerance.
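The QoS-aware transport idea lends itself to a short illustration. Below is a minimal sketch in rclpy (ROS 2), assuming a node that routes mission-critical traffic over a RELIABLE publisher and loss-tolerant bulk telemetry over a BEST_EFFORT one; the topic names and criticality flag are hypothetical, not MAGNUS interfaces.

```python
# Minimal rclpy sketch of priority routing across DDS QoS profiles.
# Topic names and the criticality flag are illustrative placeholders.
import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile, ReliabilityPolicy, HistoryPolicy
from std_msgs.msg import String

class QoSAwareRelay(Node):
    def __init__(self):
        super().__init__('qos_aware_relay')
        reliable = QoSProfile(history=HistoryPolicy.KEEP_LAST, depth=10,
                              reliability=ReliabilityPolicy.RELIABLE)
        lossy = QoSProfile(history=HistoryPolicy.KEEP_LAST, depth=1,
                           reliability=ReliabilityPolicy.BEST_EFFORT)
        self.pub_critical = self.create_publisher(String, 'telemetry/critical', reliable)
        self.pub_bulk = self.create_publisher(String, 'telemetry/bulk', lossy)

    def send(self, text: str, critical: bool) -> None:
        msg = String()
        msg.data = text
        # Mission-critical traffic rides RELIABLE QoS; high-rate sensor
        # streams tolerate drops on degraded NLOS links.
        (self.pub_critical if critical else self.pub_bulk).publish(msg)
```

DDS QoS is the same ROS 2 mechanism referenced in the interoperability list below; the routing policy itself is the research question.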
T2. Fault-Tolerant IoT/IoBT in Contested Environments
Service-oriented middleware for resilient, long-range communications across air/sea/land assets,
protocol adaptation by information priority, and mission-critical delivery under adversarial conditions.
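As a sketch of protocol adaptation by information priority, the queue below drains the highest-priority messages first and silently drops expired low-priority traffic rather than retransmitting it over a contested link; the priority levels and TTLs are assumptions for illustration.

```python
import heapq, itertools, time

class PriorityDelivery:
    """Drain messages highest-priority-first; stale messages are dropped
    when their deadline passes instead of being retransmitted."""
    def __init__(self):
        self._q = []
        self._seq = itertools.count()  # tie-breaker keeps FIFO order within a priority

    def submit(self, payload, priority: int, ttl_s: float) -> None:
        deadline = time.monotonic() + ttl_s
        heapq.heappush(self._q, (-priority, next(self._seq), deadline, payload))

    def drain(self, send) -> None:
        while self._q:
            _, _, deadline, payload = heapq.heappop(self._q)
            if time.monotonic() > deadline:
                continue  # expired: drop rather than waste contested bandwidth
            send(payload)
```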
T3. Perception & Sensor Integration for Scene Understanding
Cross-modal learning with RGB/LiDAR for detection, tracking, and semantic segmentation with few labels.
Domain adaptation and transfer from public and MAGNUS-collected datasets (UMBC campus, Grace’s Quarters).
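One standard way to realize the domain-adaptation piece is DANN-style gradient reversal, sketched below in PyTorch: the layer is the identity on the forward pass but flips gradients on the backward pass, pushing the shared encoder toward domain-invariant features. Layer sizes and the toy encoder/head are arbitrary stand-ins.

```python
import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    """Identity forward; negated, scaled gradient backward (DANN-style)."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

features = nn.Sequential(nn.Linear(64, 32), nn.ReLU())  # shared encoder (toy)
domain_head = nn.Linear(32, 2)                          # source vs. target
z = features(torch.randn(8, 64))
domain_logits = domain_head(GradReverse.apply(z, 1.0))  # adversarial branch
```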
T4. Self-Supervised Multiview Activity Recognition
View-invariant recognition of pose/gesture/actions from ground & aerial perspectives; robust decision-making
for search & rescue and HRI. Deployed on Qualcomm RB5 UAVs and Jackal UGVs.
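A common self-supervised objective for view invariance is a cross-view contrastive (InfoNCE) loss: embeddings of the same action instant seen from ground and aerial cameras attract, while other clips in the batch repel. The sketch below assumes paired embeddings; batch size, width, and temperature are illustrative.

```python
import torch
import torch.nn.functional as F

def multiview_info_nce(z_ground, z_aerial, temperature=0.1):
    """Cross-view contrastive loss: matched ground/aerial pairs are
    positives (the diagonal); all other batch pairs are negatives."""
    zg = F.normalize(z_ground, dim=1)
    za = F.normalize(z_aerial, dim=1)
    logits = zg @ za.t() / temperature        # (B, B) similarity matrix
    labels = torch.arange(zg.size(0))         # positives on the diagonal
    return F.cross_entropy(logits, labels)

loss = multiview_info_nce(torch.randn(16, 128), torch.randn(16, 128))
```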
T5. Robust & Federated ML on the Edge
Byzantine-robust, privacy-preserving federated learning with quantization-based compression to reduce
bandwidth and protect updates; class-distribution–aware training for non-IID field data.
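A minimal NumPy sketch of the two ingredients named above: uniform quantization of client updates for uplink compression, and coordinate-wise median aggregation, which tolerates a minority of arbitrary (Byzantine) updates. Bit width and client count are assumptions.

```python
import numpy as np

def quantize(update, bits=8):
    """Uniform fixed-point quantization of a client update (uplink compression)."""
    lo = float(update.min())
    span = float(update.max()) - lo
    scale = span / (2**bits - 1) if span else 1.0
    q = np.round((update - lo) / scale).astype(np.uint8)
    return q, lo, scale

def dequantize(q, lo, scale):
    return q.astype(np.float32) * scale + lo

def robust_aggregate(updates):
    """Coordinate-wise median: a minority of Byzantine updates cannot
    drag any coordinate of the aggregate."""
    return np.median(np.stack(updates), axis=0)

clients = [np.random.randn(1000).astype(np.float32) for _ in range(9)]
clients[0][:] = 1e6                      # one poisoned update the median ignores
decoded = [dequantize(*quantize(u)) for u in clients]
global_update = robust_aggregate(decoded)
```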
T6. Asynchronous Federated Learning (Audio + Vision)
Asynchronous FL for resilient learning when agents drop out; multi-modal cueing (acoustics + imagery) on
resource-constrained platforms (Jetson, Coral).
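The sketch below shows one common asynchronous-FL server rule: apply each client delta as it arrives, discounted by staleness, so slow or dropped-out agents never block a round. The discount schedule and learning rate are assumptions, not the project's method.

```python
import numpy as np

class AsyncFLServer:
    """Apply client deltas on arrival, weighted down by staleness, so
    agents that drop out or lag never block the round."""
    def __init__(self, dim):
        self.weights = np.zeros(dim, dtype=np.float32)
        self.version = 0

    def apply(self, delta, client_version, lr=0.5):
        staleness = self.version - client_version
        alpha = lr / (1.0 + staleness)   # older updates count for less
        self.weights += alpha * delta
        self.version += 1

server = AsyncFLServer(dim=10)
server.apply(np.ones(10, np.float32), client_version=0)   # fresh update
server.apply(np.ones(10, np.float32), client_version=0)   # now stale by 1
```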
T7. A2GS Target Detection (Air–Ground–Satellite)
Multi-source domain adaptation, multimodal fusion (RGB+LiDAR+satellite), and model compression for edge
deployment to detect and localize high-value assets in adverse conditions.
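For the model-compression step, one off-the-shelf option is PyTorch's post-training dynamic quantization, which stores Linear weights as int8 with no retraining; the toy model below is a stand-in for a fusion detector head, not the actual A2GS network.

```python
import torch
from torch import nn

# Toy stand-in for a fusion detector head; layer sizes are arbitrary.
model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 5))

# Post-training dynamic quantization: Linear weights stored as int8,
# activations quantized on the fly, no retraining needed.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

print(quantized(torch.randn(1, 256)).shape)   # torch.Size([1, 5])
```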
T8. Digital-Twin Orchestration
Bi-directional streaming between physical sites and virtual scenarios (Unity/Gazebo) to emulate
hard-to-stage battlefield conditions; DVPG node triggers real robot behaviors.
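A minimal sketch of the twin-to-robot direction of that stream, assuming a JSON-lines event feed on a hypothetical port: when the virtual scenario injects a hazard, a halt callback fires on the physical side. The wire format, port, and callback are all illustrative assumptions.

```python
import asyncio, json

async def handle_twin(reader, writer, halt):
    """Consume JSON-lines events streamed from the virtual scenario; an
    injected hazard (e.g. simulated smoke or blast) halts the physical plan."""
    async for line in reader:
        if not line.strip():
            continue
        event = json.loads(line)
        if event.get("type") == "hazard":
            halt(event)            # e.g. publish an e-stop on the robot side

async def main():
    def halt(event):
        print("HALT: twin injected", event.get("name"))
    server = await asyncio.start_server(
        lambda r, w: handle_twin(r, w, halt), "0.0.0.0", 9000)  # port is illustrative
    async with server:
        await server.serve_forever()

# asyncio.run(main())
```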
T9. Language-Guided Autonomy & Speech Interfaces
Speech recognition → NLU/entity grounding → scene-prompted perception (VLMs) → navigation/planning.
Train on A100s; quantize for deployment on UGV/UAV edge compute.
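The T9 pipeline can be pictured as four composable stages, sketched below. Every function body is a placeholder for the real component (on-board ASR, entity grounding, a scene-prompted VLM, the platform planner); names and return values are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Goal:
    action: str
    target: str

def transcribe(audio: bytes) -> str:
    return "go to the red truck"             # stand-in for on-board ASR

def ground_entities(utterance: str) -> Goal:
    action, _, target = utterance.partition(" to the ")
    return Goal(action=action.strip(), target=target)

def locate_with_vlm(goal: Goal, image) -> tuple:
    return (12.0, 3.5)                       # stand-in for scene-prompted VLM query

def plan_to(waypoint: tuple) -> None:
    print("navigating to", waypoint)         # stand-in for the platform planner

goal = ground_entities(transcribe(b"..."))
plan_to(locate_with_vlm(goal, image=None))
```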
T10. Collaborative Perception (PaCME)
Unified semantic panoramic views from UAV+UGV RGB/LiDAR; dynamic/static separation and selective
exchange for bandwidth-aware collaboration and cover/obstacle reasoning.
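A sketch of selective exchange under a link budget: rank per-cell descriptors by a dynamic-ness score and transmit only what fits, keeping static background local. Descriptor width, scores, and the byte budget are illustrative assumptions.

```python
import numpy as np

def select_features(features, scores, budget_bytes):
    """Send only the most informative per-cell features (e.g. dynamic
    obstacles) that fit the link budget; the rest stays on-vehicle."""
    order = np.argsort(scores)[::-1]                 # highest score first
    per_item = features[0].nbytes
    keep = order[: max(budget_bytes // per_item, 0)]
    return features[keep], keep

cells = np.random.randn(500, 64).astype(np.float32)  # per-voxel descriptors
dynamics = np.random.rand(500)                       # "how dynamic" score per cell
payload, idx = select_features(cells, dynamics, budget_bytes=16 * 1024)
```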
T11. Ground-Based Soil Segmentation for Terrain Analysis
Higher-fidelity terrain typing beyond “traversable” vs “non-traversable” using ground perspectives,
informed by agrivision and satellite datasets; supports mobility planning across vehicle classes.
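The payoff of soil typing over binary traversability can be shown with a toy cost table: the same terrain class carries a different mobility cost for each vehicle class. Class names and costs below are invented for illustration, not a MAGNUS schema.

```python
# Hypothetical soil-class -> per-vehicle mobility costs (illustrative values).
SOIL_COST = {
    "packed_gravel": {"jackal": 1.0, "husky": 1.0, "spot": 1.0},
    "loose_sand":    {"jackal": 4.0, "husky": 2.5, "spot": 1.5},
    "wet_clay":      {"jackal": 8.0, "husky": 4.0, "spot": 2.0},
}

def mobility_cost(seg_label: str, vehicle: str, default=float("inf")) -> float:
    """A binary traversable/non-traversable map collapses all of this to
    0/inf; soil typing lets the planner trade routes per vehicle class."""
    return SOIL_COST.get(seg_label, {}).get(vehicle, default)

print(mobility_cost("loose_sand", "husky"))   # 2.5
```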
Current Ongoing Projects
Narrow Bridge Crossing
- Policy training with safety shields (see the sketch after this list)
- Cross-site replay via DVPG
- Digital-twin hazard injection (smoke/blast) halts the real-world plan
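A minimal sketch of the safety-shield idea, assuming a one-step lateral-position check against an illustrative bridge envelope; the bounds, action format, and fallback are hypothetical.

```python
def shield(action, state, fallback):
    """Runtime safety shield: check the learned policy's proposed action
    against a hard constraint and substitute a verified fallback if the
    one-step prediction leaves the bridge envelope. Bounds are illustrative."""
    lateral = state["y"] + action["vy"] * state["dt"]
    if abs(lateral) > state["half_width"] - state["margin"]:
        return fallback            # e.g. stop in place or recenter
    return action

state = {"y": 0.18, "dt": 0.1, "half_width": 0.25, "margin": 0.05}
safe = shield({"vx": 0.5, "vy": 0.4}, state, fallback={"vx": 0.0, "vy": 0.0})
print(safe)                        # falls back: 0.22 m would breach the 0.20 m bound
```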
Contactless Physiological Sensing
- Edge deployment for in-field monitoring
- Privacy-preserving signal processing (see the sketch after this list)
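A hedged first step for contactless pulse estimation (rPPG): band-pass the mean intensity of a face region to the plausible heart-rate band, so only a 1-D trace, never raw imagery, leaves the device. The sampling rate and band edges are typical assumptions, not project parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def pulse_signal(roi_means, fs=30.0, lo=0.7, hi=3.0):
    """Band-pass the mean green-channel intensity of a face ROI to the
    plausible heart-rate band (42-180 bpm), a standard rPPG first step."""
    b, a = butter(3, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, roi_means - np.mean(roi_means))

t = np.arange(0, 10, 1 / 30.0)                        # 10 s at 30 fps
trace = 0.01 * np.sin(2 * np.pi * 1.2 * t) + 0.005 * np.random.randn(t.size)
bpm_band = pulse_signal(trace)                        # recovers the ~72 bpm component
```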
Facilities & Equipment
- UGV: Clearpath Jackal · Husky
- UAV: 4× Qualcomm RB5 (ModalAI) · Parrot Anafi
- Surface/Underwater: BlueBoat · BlueROV2
- Legged: Boston Dynamics Spot · Ghost Robotics Vision 60
- Sensors: RGB, LWIR/SWIR, LiDAR (Velodyne class), mm-wave radar, magnetometers, GPS/IMU
- NVIDIA Accelerators: H200 · H100 · RTX 4090
Photos (sample)
- Clearpath Jackal
- Clearpath Husky
- ModalAI / Qualcomm RB5
- Parrot Anafi
- BlueBoat
- BlueROV2
- Boston Dynamics Spot
- Ghost Robotics Vision 60
Sites, Network & Remote Access
- Physical sites: UMBC campus test ranges and Grace’s Quarters (field)
- Virtual: Unity/Gazebo digital twins with DVPG connectivity
- Interoperability: ARL ground autonomy, Phoenix/UAS stacks; ROS 1/2 (DDS)
- Remote access (VPN): Secure, request-based access for partner HBCU/MSI institutions to develop, deploy, and evaluate on MAGNUS assets
Team & Partners
Leadership: Prof. Nirmalya Roy (PI), Dr. Anuradha Ravi (co-PI), Dr. Abu-Zaher Faridee (co-PI) — Department of Information Systems, UMBC.
UMBC Centers & Collaborations
- CARDS — Center for Real-time Distributed Sensing and Autonomy
- UCYBR — Center for Cybersecurity
- CAST — Center for Adaptive Soldier Technologies (collaboration)
- ArtIAMAS, SARA, A2I2, DVPG, Human–Agent Teaming
Access & Governance
- Eligibility: UMBC researchers; partner HBCU/MSI collaborators by agreement
- Safety & Range Rules: Required certifications for field tests; checklist before deployment
- Data Policy: Security tiers for datasets, update logs for federated models, and DVPG sharing protocols
- Requesting Access: Submit a short proposal (objectives, assets, sites, dates). Support available for scenario design and instrumentation.
Acknowledgement
This project has been supported by U.S. Army Grant #W911NF2410367.
Get Involved
Interested in collaborating, accessing the testbed, or proposing a new experiment?
Contact the CYPRESS Center team at UMBC.