Intelligence Layer

MERCURY
AI System

From post-flight analysis engine to airborne cognitive system. Real-time 3D reconstruction, edge inference, swarm coordination, and predictive environmental modeling — thinking at 30,000 feet.

40 TOPS Edge Performance
5s Decision Loop
97% Detection Precision
67g Jetson Orin Nano Mass
Ch 8.4 — Edge Hardware

From Ground
to Sky

MERCURY currently runs on a ground-station laptop. The migration target: NVIDIA Jetson Orin Nano — 67 grams, 15 watts, 40 TOPS of AI performance. Small enough for the payload bay. Powerful enough for real-time inference.

| Specification | RTX 4050 (Ground) | Jetson Orin Nano (Airborne) |
|---|---|---|
| Platform | Laptop | Edge Module |
| Mass | ~2,000g | 67g |
| Power Draw | 150W | 15W |
| AI Performance | ~100 TOPS | 40 TOPS |
| Deployment | Post-flight only | Real-time, airborne |
| Processing Mode | Batch (video file) | Streaming inference |
| Latency | 20–40 min | 5 seconds |
// Migration Strategy
$ python optimize.py --model mercury_v2 --target jetson_orin
→ TensorRT compilation: FP32 → INT8 quantization
→ Memory reduction: 4× (FP32 → INT8, 32-bit → 8-bit weights)
→ Accuracy loss: <2%
→ Streaming architecture: batch → frame-by-frame inference
→ Sliding window context: 60 frames active optimization
$ ./validate --phase 1
✓ Ground test complete — Month 1–2
⏳ Airborne integration — Month 3–4
⏳ Flight testing, latency measurement — Month 5–6
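The FP32 → INT8 step above can be sketched in a few lines. This is symmetric per-tensor quantization with a max-abs calibration scale — a simplified stand-in for TensorRT's calibrated quantization, with illustrative weight values:

```python
# Symmetric per-tensor INT8 quantization sketch. The calibration scale
# and tensor values are illustrative, not MERCURY's actual weights.

def quantize_int8(values, scale):
    """Map FP32 values to INT8 codes in [-127, 127] using a scale."""
    return [max(-127, min(127, round(v / scale))) for v in values]

def dequantize_int8(q_values, scale):
    """Recover approximate FP32 values from INT8 codes."""
    return [q * scale for q in q_values]

weights = [0.82, -1.54, 0.03, 2.96, -2.41]
scale = max(abs(w) for w in weights) / 127   # max-abs calibration

q = quantize_int8(weights, scale)
recovered = dequantize_int8(q, scale)

# 32-bit floats -> 8-bit ints: the 4x memory reduction noted above
bytes_fp32 = len(weights) * 4
bytes_int8 = len(q) * 1
print(bytes_fp32 // bytes_int8)  # 4
```

The under-2% accuracy loss comes from the rounding error visible in `recovered`; TensorRT minimizes it by choosing the scale from a calibration dataset rather than a single max-abs pass.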
Real-Time Reconstruction

Seeing While
Flying

Traditional structure-from-motion requires global optimization after landing. MERCURY's hierarchical approximation builds the world frame-by-frame at 30,000 feet.

100Hz
Local Layer — Visual-Inertial Odometry
Gyroscope + accelerometer + optical flow fusion. High-rate pose estimation. No 3D reconstruction — pure position tracking, robust to GPS loss.
High Rate
2Hz
Window Layer — Sparse 3D Structure
Recent 60 frames maintained in active optimization. Sparse 3D structure for immediate surroundings. Accurate enough for obstacle avoidance and near-field navigation.
Active Window
0.1Hz
Global Layer — Persistent Map
Background thread incorporating window results into persistent map. Loop closure detection recognizes previously-visited locations, corrects drift, maintains long-term consistency.
Background
2Hz
Render Layer — Live Operator View
Current 3D view streamed to the ground station. The operator sees the UAV's evolving perspective as it happens, not a delayed video feed.
Streaming
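The four-layer rate hierarchy can be sketched as a single 100 Hz clock with the slower layers firing on integer divisions of it. The layer bodies here are stubs; only the scheduling is shown, with the rates taken from the figures above:

```python
# Rate hierarchy sketch: one 100 Hz base clock drives the VIO layer,
# with the window and global layers firing every 50th and 1000th tick.

def run_hierarchy(seconds, base_hz=100, window_hz=2, global_hz=0.1):
    counts = {"vio": 0, "window": 0, "global": 0}
    total_ticks = int(seconds * base_hz)
    window_every = round(base_hz / window_hz)    # 50 ticks
    global_every = round(base_hz / global_hz)    # 1000 ticks
    for tick in range(total_ticks):
        counts["vio"] += 1                       # pose update every tick
        if tick % window_every == 0:
            counts["window"] += 1                # sparse 3D, last 60 frames
        if tick % global_every == 0:
            counts["global"] += 1                # fold window into global map
    return counts

print(run_hierarchy(10))  # {'vio': 1000, 'window': 20, 'global': 1}
```

The point of the division: the expensive global optimization never blocks the 100 Hz pose loop, which is what keeps the system flyable during GPS loss.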
Object Detection at the Edge

From Pixels
to Understanding

YOLOv8-medium backbone, TensorRT-optimized, fine-tuned for environmental targets. Single-frame detection runs at 85% precision; multi-frame temporal fusion reaches 97%.

📷
Camera Frame
640×640px input, 15 FPS on Orin Nano
🧠
YOLOv8
TensorRT-compiled, INT8, 85% mAP baseline
📍
3D Projection
Ray cast through camera → 3D mesh → GPS coord
🔗
Temporal Fusion
2-second window tracking, 97% precision
Alert & Action
Autonomous response trigger, operator summon
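The 3D-projection step can be illustrated with the simplest case: a nadir-pointing pinhole camera over a flat ground plane standing in for the reconstructed mesh. Intrinsics, pose, and altitude below are hypothetical:

```python
# Pixel -> ground-point sketch for a nadir-pointing pinhole camera.
# A flat ground plane stands in for the reconstructed 3D mesh; the real
# pipeline intersects the ray with the mesh, then converts to GPS.

def pixel_to_ground(px, py, fx, fy, cx, cy, cam_pos, cam_alt):
    """Returns the (east, north) ground point seen by pixel (px, py)
    from a camera at altitude cam_alt over cam_pos (east, north)."""
    # ray direction in the camera frame (optical axis points straight down)
    dx = (px - cx) / fx
    dy = (py - cy) / fy
    # scale the ray until it reaches the ground: z travels cam_alt metres
    east = cam_pos[0] + dx * cam_alt
    north = cam_pos[1] + dy * cam_alt
    return east, north

# 640x640 frame, principal point at centre, 800 px focal length
pt = pixel_to_ground(px=480, py=320, fx=800, fy=800, cx=320, cy=320,
                     cam_pos=(0.0, 0.0), cam_alt=100.0)
print(pt)  # (20.0, 0.0)
```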
Detection Targets
Waste piles · Smoke plumes · Unauthorized persons · Off-road vehicles · Water discoloration
Precision Metrics
Single Frame: 85%
Multi-Frame Fusion: 97%
GPS Accuracy: ±5m
Autonomous Response
Waste pile → reduce altitude for documentation
Person in restricted zone → orbit + continuous tracking
Smoke plume → vertical profile + sensor correlation
Emergency → services auto-alerted
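A minimal sketch of the temporal-fusion idea behind the 85% → 97% jump: a target is confirmed only when per-frame detections persist across the 2-second window (30 frames at 15 FPS). The persistence threshold here is an illustrative choice, not MERCURY's actual parameter:

```python
# Temporal fusion sketch: confirm a detection only if it persists in
# enough frames of a sliding window, filtering one-frame false positives.

from collections import deque

def fuse(frame_hits, window=30, min_hits=20):
    """frame_hits: per-frame booleans (detector fired or not).
    Returns per-frame confirmed flags after temporal fusion."""
    recent = deque(maxlen=window)
    confirmed = []
    for hit in frame_hits:
        recent.append(hit)
        confirmed.append(sum(recent) >= min_hits)
    return confirmed

# one flicker at frame 3 vs. a persistent target from frame 10 onward
hits = [False] * 3 + [True] + [False] * 6 + [True] * 40
out = fuse(hits)
print(out[3], out[-1])  # False True
```

The flicker never reaches the persistence threshold; the real target does, which is how single-frame precision can be traded up to 97% at the cost of roughly a window of latency.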
Swarm Coordination

One Mind,
Many Bodies

Multiple UAVs with a shared world model, market-based task allocation, and Byzantine-fault-tolerant consensus. A five-UAV swarm covers a 10km wildfire perimeter in 12 minutes vs. 45 for a single unit.

PHANTOM-01 · P-02 · P-03 · P-04 · P-05
Frequency Band: 900MHz / 2.4GHz
Throughput: 500 kbps
Air-to-Air Range: 5 km
Topology: Self-healing mesh
Fault Tolerance: Byzantine fault tolerant
Map Sync Interval: 10 seconds
5-Unit Coverage: 12 min / 10km²
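Market-based task allocation can be sketched as a greedy single-item auction: each free UAV bids its travel cost for a task, and the lowest bidder wins. Positions and task coordinates below are illustrative; a real allocator would also weigh battery state, sensor fit, and the consensus protocol:

```python
# Greedy single-item auction sketch for swarm task allocation.
# Bid = straight-line travel distance; lowest bid wins, one task per UAV.

import math

def auction(uav_positions, tasks):
    """Returns {task_index: uav_id} for a greedy distance-cost auction."""
    assignment = {}
    busy = set()
    for t_idx, task in enumerate(tasks):
        bids = {
            uav: math.dist(pos, task)          # travel cost as the bid
            for uav, pos in uav_positions.items()
            if uav not in busy
        }
        if not bids:
            break                              # more tasks than free UAVs
        winner = min(bids, key=bids.get)
        assignment[t_idx] = winner
        busy.add(winner)
    return assignment

uavs = {"P-01": (0, 0), "P-02": (10, 0), "P-03": (0, 10)}
tasks = [(9, 1), (1, 9), (1, 1)]
print(auction(uavs, tasks))  # {0: 'P-02', 1: 'P-03', 2: 'P-01'}
```

Because each bid is computed locally and only the winning bid needs to propagate, this style of allocation fits a 500 kbps mesh far better than shipping full world state to a central planner.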
Predictive Intelligence

From Observation
to Anticipation

Not just data — recommendations. MERCURY models fire spread, pollution transport, and infrastructure degradation, delivering time-stamped action guidance while keeping final authority with the human operator.

Fire Spread Modeling

Coupled atmosphere-fire model. Inputs: current fire perimeter, fuel type, terrain slope, wind field. Output: spread probability map, time-of-arrival contours, recommended firebreak placement.
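One way to sketch the time-of-arrival contours is to treat spread as a shortest-time problem on a grid, with the spread rate boosted downwind and reduced upwind. The rates below are illustrative, not a calibrated coupled atmosphere-fire model:

```python
# Time-of-arrival sketch: Dijkstra over a grid where edge cost is
# 1 / rate-of-spread, and rate rises when the step aligns with the wind.

import heapq

def arrival_times(grid_size, ignition, wind=(1, 0),
                  base_rate=1.0, wind_gain=0.5):
    """Returns {(x, y): arrival_time} for every cell in an n x n grid."""
    n = grid_size
    times = {ignition: 0.0}
    heap = [(0.0, ignition)]
    while heap:
        t, (x, y) = heapq.heappop(heap)
        if t > times.get((x, y), float("inf")):
            continue                              # stale heap entry
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if not (0 <= nx < n and 0 <= ny < n):
                continue
            align = dx * wind[0] + dy * wind[1]   # -1, 0, or +1
            rate = base_rate + wind_gain * align  # faster downwind
            nt = t + 1.0 / rate
            if nt < times.get((nx, ny), float("inf")):
                times[(nx, ny)] = nt
                heapq.heappush(heap, (nt, (nx, ny)))
    return times

toa = arrival_times(5, ignition=(2, 2))
print(toa[(4, 2)] < toa[(0, 2)])  # downwind edge reached first: True
```

Thresholding `toa` at fixed values yields the time-of-arrival contours; slope and fuel type would enter the same way wind does, as per-step rate modifiers.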

Pollution Transport

Gaussian plume model with AURA-measured wind profiles. Source location from detection, emission rate from concentration and dispersion geometry, downwind impact prediction for evacuation planning.
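The Gaussian plume calculation can be sketched directly from its textbook form. The dispersion coefficients sigma_y and sigma_z are passed in as givens here; in practice they come from stability-class curves and the AURA wind profile. All numbers below are illustrative:

```python
# Ground-reflected Gaussian plume sketch: concentration at crosswind
# offset y and height z, for a source of emission rate Q (g/s), wind
# speed u (m/s), and effective source height H (m).

import math

def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground reflection
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# centreline ground-level concentration: 1 g/s source, 5 m/s wind,
# 20 m source height, dispersion coefficients for some downwind distance
c = plume_concentration(Q=1.0, u=5.0, y=0.0, z=0.0, H=20.0,
                        sigma_y=30.0, sigma_z=15.0)
print(round(c * 1e6, 1))  # micrograms per cubic metre
```

Run forward along the wind direction with distance-dependent sigmas, this is the downwind impact map; run backward from measured concentrations, it gives the emission-rate estimate mentioned above.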

Infrastructure Degradation

Trend analysis from repeat surveys. Erosion rate from 3D change detection, pipeline exposure trajectory, failure probability timeline. Predictive maintenance scheduling before critical threshold.
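The trend-analysis step reduces to a least-squares slope through repeat-survey measurements, extrapolated to the critical threshold. The survey values and the threshold below are illustrative:

```python
# Predictive-maintenance sketch: fit a linear erosion rate to repeat
# surveys, then extrapolate the day the critical threshold is crossed.

def time_to_threshold(days, depths, threshold):
    """Least-squares slope through (days, depths); returns the day the
    fitted line reaches `threshold`, or None if there is no erosion."""
    n = len(days)
    mean_t = sum(days) / n
    mean_d = sum(depths) / n
    slope = (sum((t - mean_t) * (d - mean_d) for t, d in zip(days, depths))
             / sum((t - mean_t) ** 2 for t in days))
    if slope <= 0:
        return None                      # stable or accreting: no deadline
    intercept = mean_d - slope * mean_t
    return (threshold - intercept) / slope

# erosion depth (cm) at four repeat surveys; critical threshold at 50 cm
days = [0, 30, 60, 90]
depths = [10.0, 16.0, 22.0, 28.0]
print(round(time_to_threshold(days, depths, 50.0)))  # day 200
```

Subtracting the current survey day from the result gives the "inspect within N hours" horizon in the decision-support output.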

Decision Support Output

"Deploy containment team to coordinates X, Y. Evacuate zone Z within 45 minutes. Inspect pipeline segment P within 72 hours." Operator approves, modifies, or overrides. Human authority preserved.

MERCURY Evolution Roadmap

Timeline to
Cognition

01
Foundation
Current
Post-flight 3D reconstruction on RTX 4050. Proven COLMAP pipeline, YOLOv8 detection, operator-reviewed results.
PROVEN
02
Edge Migration
Month 1–6
Jetson Orin Nano integration. TensorRT optimization. Ground and airborne validation. 5-second decision loop achieved.
IN PROGRESS
03
Real-Time Intel
Year 1–2
Streaming 3D reconstruction, live object detection, autonomous mission adaptation, operator-on-the-loop interface.
PLANNED
04
Fleet Cognition
Year 2–3
Federated learning across fleet. Swarm coordination. Predictive modeling. Each UAV benefits from every other's experience.
PROJECTED