Book a Briefing

NeurIPS 2023 · TMLR · GNU Radio Conference · Leidos AI Accelerator

AI that lives inside the constraint, not outside it.

Commercial AI assumes cloud access, data portability, and internet connectivity. Defense programs assume none of those. Our AI scientist spent seven years at IC and defense contractors building models for exactly these constraints — RF signal classification for SIGINT programs, encrypted inference so AI never touches unencrypted classified data, synthetic data generation when raw data can't cross boundaries, and efficient architectures that run in air-gapped enclaves with a fraction of normal compute.

Book Requirements Workshop

NeurIPS 2023 · TMLR Accepted · Patent WO2023220583A1 · CAGE 10S34

7 years in IC / defense AI research · NeurIPS 2023 · TMLR accepted · Built for air-gapped and encrypted environments

TorchSig / Sig53 · Patent WO2023220583A1 · GNU Radio 2022 · IWSPA 2024 · Leidos AI Accelerator · Applied Insight

Cleared programs (TS/SCI and below): we staff cleared personnel for delivery when the requirement exists and contracting supports it. Specifics are confirmed during acquisition and onboarding.

Seven Years of Applied Research Inside the IC

Before MRI, our AI scientist built these systems at Leidos' AI Accelerator and Applied Insight — an IC contractor — for programs that couldn't use commercial tools. Each research area maps directly to a problem defense AI buyers actually face.

GNU Radio Conference 2022

RF Signal Classification at Scale

The problem: SIGINT and EW programs need to classify RF signals across dozens of modulation types at scale — without relying on cloud inference.

TorchSig and Sig53: an open-source PyTorch ML toolkit and 5-million-sample dataset across 53 signal classes, designed for transformer-based signal classification on local hardware. Built for the signals intelligence community.
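
TorchSig's own API isn't reproduced here, but the underlying task — deciding which modulation produced a burst of I/Q samples — can be sketched with a classical moment-based discriminator. Everything below (constellations, SNR, threshold) is illustrative, not taken from the toolkit:

```python
import cmath
import math
import random

random.seed(0)

def symbols(mod: str, n: int = 1024) -> list[complex]:
    """Draw n unit-energy constellation symbols for the given modulation."""
    if mod == "bpsk":
        points = [1 + 0j, -1 + 0j]
    elif mod == "qpsk":
        points = [cmath.exp(1j * (math.pi / 4 + k * math.pi / 2)) for k in range(4)]
    else:
        raise ValueError(mod)
    return [random.choice(points) for _ in range(n)]

def awgn(sig: list[complex], snr_db: float) -> list[complex]:
    """Add complex white Gaussian noise at the given SNR (signal power = 1)."""
    sigma = math.sqrt(10 ** (-snr_db / 10) / 2)
    return [s + complex(random.gauss(0, sigma), random.gauss(0, sigma)) for s in sig]

def classify(sig: list[complex]) -> str:
    """Second-power moment test: BPSK symbols square to +1, so the moment is
    large; QPSK symbols square to +/-i and average out near zero."""
    m2 = abs(sum(s ** 2 for s in sig)) / len(sig)
    return "bpsk" if m2 > 0.5 else "qpsk"

for mod in ("bpsk", "qpsk"):
    rx = awgn(symbols(mod), snr_db=10)
    print(mod, "->", classify(rx))
```

Scaling this from two modulations to the 53 classes in Sig53 is exactly where hand-crafted moments break down and learned classifiers take over.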

NeurIPS 2023 · Patent WO2023220583A1

Synthetic Data When Real Data Can't Move

The problem: Programs can't share raw training data across classification levels. AI models starve for data they can't legally access.

TabMT: a masked-transformer architecture that generates statistically faithful synthetic tabular data — evaluated on network traffic and intrusion detection datasets. Train on synthetic. Deploy on real. No raw data crosses any boundary.
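
TabMT's masked-transformer internals aren't shown here; this toy sampler only illustrates the contract — synthetic rows whose statistics track the real table without copying it. The two-column "network traffic" table and its values are invented for the example:

```python
import random
from collections import Counter, defaultdict

random.seed(1)

# Toy "real" table: (protocol, flag) rows standing in for sensitive
# network-traffic features that cannot leave their enclave.
real = [("tcp", "ok")] * 70 + [("tcp", "alert")] * 10 + [("udp", "ok")] * 20

# Fit P(protocol) and P(flag | protocol) from the real rows.
proto_counts = Counter(p for p, _ in real)
cond: defaultdict[str, Counter] = defaultdict(Counter)
for p, f in real:
    cond[p][f] += 1

def sample_row() -> tuple[str, str]:
    """Sample one synthetic row from the fitted conditionals."""
    p = random.choices(list(proto_counts), weights=proto_counts.values())[0]
    f = random.choices(list(cond[p]), weights=cond[p].values())[0]
    return (p, f)

# The synthetic table preserves marginals and the protocol->flag dependence,
# but no individual real row ever leaves the boundary.
synthetic = [sample_row() for _ in range(1000)]
print(Counter(p for p, _ in synthetic))
```

A masked transformer plays the same role as `cond` here, except it learns the full joint over dozens of mixed-type columns instead of one hand-coded conditional.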

IWSPA 2024 · Leidos + AWS

AI Inference on Encrypted Data

The problem: Classified data can't be decrypted to run inference. Programs need AI that operates on ciphertext — fully homomorphic encryption without the performance cliff.

Published frameworks for privacy-enhancing AI including fully homomorphic encryption (FHE) for SageMaker endpoints — enabling real-time AI inference on encrypted inputs without decryption. The data never leaves its protected state.
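
Production FHE uses lattice-based schemes (and the SageMaker integration above), none of which fits in a snippet. As a self-contained illustration of the core property — computing on ciphertext without ever decrypting it — here is a toy Paillier cryptosystem, which is additive-only and uses deliberately insecure parameters:

```python
import math
import random

random.seed(2)

# Toy Paillier keypair with tiny primes -- illustration only, nowhere near
# secure parameters, and additive-only (lattice FHE schemes also support
# multiplication on ciphertexts).
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m: int) -> int:
    """Encrypt m with fresh randomness r: c = g^m * r^n mod n^2."""
    r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Recover m via the private values lam and mu."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Homomorphic add: multiplying ciphertexts adds the plaintexts underneath,
# so a server can sum values it is never able to read.
a, b = encrypt(17), encrypt(25)
print(decrypt(a * b % n2))  # 42
```

The operational point is the same at any scale: the party doing the compute holds only `a` and `b`, never 17 or 25 — the data stays in its protected state end to end.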

TMLR Accepted · NeurIPS 2024

Efficient Models for Constrained Hardware

The problem: Air-gapped enclaves run on constrained hardware. Commercial foundation models require GPU clusters that aren't available in disconnected environments.

Mamba state-space model research: 2.15× faster inference and 65% less memory than transformer equivalents, with mathematically proven stability under mixed-precision fine-tuning. Models that actually run on the hardware you have.
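
The memory claim comes from the shape of the computation. A state-space layer carries a fixed-size state forward, so inference memory is O(1) in sequence length, versus a transformer's KV cache growing with every token. A scalar sketch with made-up coefficients (not trained Mamba weights):

```python
# Scalar state-space recurrence: the core loop of Mamba-style inference.
# Each step updates a fixed-size state h, so memory does not grow with
# sequence length. Coefficients a, b, c are illustrative, not trained.

def ssm_step(h: float, x: float, a: float = 0.9, b: float = 0.5, c: float = 1.0):
    h = a * h + b * x        # state update: same memory at step 1 and step 1e6
    return h, c * h          # output depends only on the current state

h, outputs = 0.0, []
for x in [1.0, 0.0, 0.0, 0.0]:   # feed an impulse through the recurrence
    h, y = ssm_step(h, x)
    outputs.append(round(y, 4))

print(outputs)  # impulse response decays geometrically: [0.5, 0.45, 0.405, 0.3645]
```

Real Mamba layers use learned, input-dependent matrix versions of `a`, `b`, `c`, but the constant-memory recurrence above is what lets them run on the constrained hardware inside an enclave.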

What We Build

Every system starts with your network topology, your data classification constraints, and your hardware. No cloud callbacks. No runtime dependencies. No vendor lock-in to a model that phones home. See our full capability statement for procurement details and NAICS codes.

AI / ML

Custom Model Development

Models trained on your mission data — not consumer-grade foundation models with a fine-tuning wrapper. NeurIPS-published architecture expertise applied to your specific problem, including novel architectures when off-the-shelf won't work.

AI / ML

Disconnected AI Ops

Model versioning, inference pipelines, and lifecycle management without internet access. Optimized for your actual hardware and network topology. Audit trails and change control built for environments with strict configuration management.

AI / ML

Synthetic Data Pipelines

Generate statistically faithful training data from sensitive corpora — so your models can train without raw classified data crossing classification boundaries. Evaluated on network traffic and security datasets.

Convergence

Privacy-Preserving Inference

AI inference on encrypted data using FHE and differential privacy frameworks. The model never sees the plaintext. Applicable to programs where raw data cannot be decrypted for processing under any circumstances.

AI / ML

Agentic Workflows

Multi-step AI agents for document processing, intelligence workflows, and decision support — with human-in-the-loop checkpoints, confidence scoring, and audit logging at every decision point.

Convergence

AI × Security Convergence

Threat detection models trained on your actual network traffic — not generic baselines. Adversarial ML testing before production deployment. Security architecture designed around the AI system it protects. Our cybersecurity practice handles the security side.


Sixty minutes. Your mission, our whiteboard.

The requirements workshop starts with your network topology, your data classification, and your latency constraints. No slides. No demos. Just engineering on a whiteboard. Need penetration testing or security hardening alongside the AI work? Same team.