The Defence Middleware Project — Origin, Build, and Where This Is Going
The preprint for "Software-Defined Protocol Translation Middleware for MIL-STD-1553B and STANAG 4586 Interoperability" is out on Zenodo (doi.org/10.5281/zenodo.19363710).
This post is not a summary of the paper. The abstract does that. This is about how the project came to exist — which is a different story, and one that I think is worth writing down while the details are still fresh.
How I Found the Problem
I was not looking for a defence problem.
The interest in defence systems came sideways, the way most things that end up mattering seem to. I had been reading about the Indian defence modernisation programme (Atmanirbhar Bharat, iDEX, the push to reduce dependence on Western proprietary hardware), not as a technical exercise but out of general interest in what indigenous capability-building actually looks like when a country tries to do it seriously.
At some point in that reading I encountered the specific problem of mixed-generation fleets.
The Indian Air Force operates platforms from different eras with fundamentally different avionics architectures. Legacy combat aircraft — the kind that have been flying for twenty or thirty years — use MIL-STD-1553B as their avionics bus. The original standard dates from 1973; the B revision from 1978. It was designed for deterministic, low-bandwidth communication between onboard systems. It works. It has worked for fifty years.
Unmanned aerial systems — the Rustom-2, the newer generation of UAVs entering service — are governed by STANAG 4586, a NATO standard for UAV interoperability that is architecturally entirely different. Different addressing scheme. Different timing model. Different semantics for the same operational concepts.
These two classes of system cannot talk to each other natively. When a Tejas Mk.1A and a Rustom-2 are operating in the same theatre, the data they each carry — targeting information, navigation data, sensor outputs — cannot be shared directly. The existing solutions are proprietary hardware gateways from Western vendors. They are opaque, non-auditable, expensive, and they create a dependency that is precisely what Atmanirbhar Bharat is meant to eliminate.
I read about this and thought: this is a software problem, not a hardware problem.
That thought was the beginning.
The First Few Weeks
The first thing I did was find the actual standards.
MIL-STD-1553B is a US military standard and is publicly available. STANAG 4586 Edition 2 is a NATO standard and is more restricted in its distribution, but enough of it is accessible in technical literature and secondary documentation to understand the architecture clearly.
Reading both documents in parallel is a strange experience. They are solving the same basic problem — moving data between systems in a military context — and they solve it in ways that are not just different but philosophically different. MIL-STD-1553B is built around a Bus Controller that has complete authority over all communication. Everything goes through the controller. The bus is deterministic. STANAG 4586 is built around a disaggregated architecture where a Vehicle-Specific Module handles the platform-side communication and a Core UAV Control System handles the operator side. The two standards have different notions about where authority lives in the system.
Understanding that philosophical difference was more important than understanding any specific field.
Once I understood it, the translation problem became concrete. It was not just a matter of remapping fields. It was a matter of deciding what to do in cases where the conceptual structure of one standard has no equivalent in the other — and being honest about those cases rather than pretending they do not exist.
That is what the gap analysis became. Forty-nine field-level incompatibilities. Six categories: DIRECT (one-to-one mapping), LOSSY (mapping with information loss), APPROXIMATE (semantically similar but not identical), STUB (placeholder where no real mapping exists), DROP (field discarded because no equivalent exists), and UNTRANSLATABLE (fields where translation is not possible without external information the architecture does not have).
The taxonomy came from trying to be honest about the problem rather than optimistic about it.
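To make the six categories concrete, here is a minimal sketch of how they might look in code. The category names and definitions are the paper's; the field names and the two example entries are hypothetical illustrations, not rows from the actual 49-item analysis.

```python
from dataclasses import dataclass
from enum import Enum, auto


class MappingCategory(Enum):
    """The six categories from the gap analysis."""
    DIRECT = auto()          # one-to-one mapping
    LOSSY = auto()           # mapping with information loss
    APPROXIMATE = auto()     # semantically similar but not identical
    STUB = auto()            # placeholder where no real mapping exists
    DROP = auto()            # field discarded because no equivalent exists
    UNTRANSLATABLE = auto()  # needs external information the architecture does not have


@dataclass(frozen=True)
class FieldMapping:
    """One row of the gap analysis: how a source field maps (or fails to map) to a target field."""
    source_field: str
    target_field: str | None   # None for DROP / UNTRANSLATABLE
    category: MappingCategory
    note: str                  # why the mapping falls into this category


# Hypothetical illustrative entries only.
EXAMPLE_MAPPINGS = [
    FieldMapping("rt_address", "vehicle_id", MappingCategory.APPROXIMATE,
                 "addressing schemes differ; resolved through a registry lookup"),
    FieldMapping("mode_code", None, MappingCategory.DROP,
                 "no STANAG 4586 equivalent for 1553B mode commands"),
]
```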
Building the Architecture
The architectural insight that made the project tractable was straightforward once I saw it.
The naive approach to making $N$ systems interoperable is to build $N \times (N-1)$ pairwise translators. For two standards that is two translators. For five standards that is twenty. For ten it is ninety. That scales badly and it means every new system you add to the network requires building translators to every existing system.
The alternative is a hub-and-spoke model: build a protocol-agnostic Common Internal Representation — a CIR — and build a pair of adapters for each protocol, one into the CIR and one out of it. Now $N$ systems require $2N$ adapters instead of $N \times (N-1)$ translators. Adding a new system requires building two adapters, not a full set of pairwise mappings.
This is not a novel idea in software architecture. The same pattern appears in compiler design (intermediate representations), in database abstraction layers, in API gateway architectures. What was novel was applying it specifically to the MIL-STD-1553B / STANAG 4586 gap and working out what the CIR actually had to look like to preserve the information that matters from both standards without losing the semantic distinctions that are operationally important.
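The shape of the hub-and-spoke pattern is easy to show in a few lines. This is a sketch with hypothetical names, not the interfaces from the repository: each protocol contributes one adapter pair, and every translation path runs protocol in, CIR, protocol out.

```python
from abc import ABC, abstractmethod


class CIRMessage(dict):
    """Protocol-agnostic Common Internal Representation (sketched here as a dict of named fields)."""


class ProtocolAdapter(ABC):
    """One adapter pair per protocol: decode raw frames into the CIR, encode CIR messages back out."""

    @abstractmethod
    def to_cir(self, raw: bytes) -> CIRMessage: ...

    @abstractmethod
    def from_cir(self, msg: CIRMessage) -> bytes: ...


def translate(source: ProtocolAdapter, target: ProtocolAdapter, raw: bytes) -> bytes:
    """Every translation goes through the hub, so adding protocol N+1 means writing one
    new adapter pair, not a translator to every existing protocol."""
    return target.from_cir(source.to_cir(raw))
```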
The five-component architecture that resulted — Address Registry, MIL-STD-1553B Adapter, STANAG 4586 Adapter, Translation Engine, Audit Logger — is not complicated. Each component has a single clear responsibility. The Translation Engine does the mapping. The Audit Logger records every field-level decision, including the cases where information is dropped or approximated.
The Audit Logger was something I added relatively late, and it ended up being among the most important parts. In a defence context, knowing that a translation happened and exactly how it happened is not optional. It is a requirement for any serious deployment. A proprietary hardware gateway gives you a result. It does not tell you how it got there.
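Here is a sketch of how the Translation Engine and Audit Logger might fit together, reusing the FieldMapping entries from the taxonomy sketch above. The names and structure are illustrative, not the code in the repository; the point is simply that no field is dropped or approximated without leaving a record.

```python
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("middleware.audit")


class AuditLogger:
    """Records every field-level translation decision, including drops and approximations."""

    def record(self, source_field: str, category, detail: str) -> None:
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "field": source_field,
            "category": category.name,   # a MappingCategory member from the taxonomy sketch
            "detail": detail,
        }
        audit_log.info(json.dumps(entry))


class TranslationEngine:
    """Applies a field-mapping table and reports every non-direct decision to the audit log."""

    def __init__(self, mappings, audit: AuditLogger):
        self.mappings = mappings   # FieldMapping entries from the taxonomy sketch
        self.audit = audit

    def translate_fields(self, cir_msg: dict) -> dict:
        out = {}
        for m in self.mappings:
            value = cir_msg.get(m.source_field)
            if m.category.name == "DIRECT" and m.target_field:
                out[m.target_field] = value
            else:
                # Nothing is silently discarded: LOSSY, APPROXIMATE, STUB, DROP and
                # UNTRANSLATABLE decisions all produce an explicit audit entry.
                self.audit.record(m.source_field, m.category, m.note)
                if m.target_field is not None:
                    out[m.target_field] = value   # best-effort value for lossy/approximate cases
        return out
```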
The implementation is in Python using asyncio for the bus emulation and real UDP sockets for the STANAG transport. It is not production hardware. It is a simulation-based validation at Technology Readiness Level 4 — proof that the architecture works as described, not a deployed system. The TRL 4 assessment was deliberate; claiming more than you have demonstrated is a specific failure mode I wanted to avoid.
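For a sense of what "real UDP sockets for the STANAG transport" looks like in asyncio terms, here is a minimal datagram endpoint. It is a hypothetical sketch, not the repository's transport code, and the port number is just a mnemonic, not anything mandated by STANAG 4586.

```python
import asyncio


class StanagTransport(asyncio.DatagramProtocol):
    """Minimal asyncio UDP endpoint: each received datagram is handed to a decode callback."""

    def __init__(self, on_datagram):
        self.on_datagram = on_datagram
        self.transport = None

    def connection_made(self, transport):
        self.transport = transport

    def datagram_received(self, data, addr):
        self.on_datagram(data, addr)

    def send(self, payload: bytes, addr) -> None:
        self.transport.sendto(payload, addr)


async def main():
    loop = asyncio.get_running_loop()
    transport, protocol = await loop.create_datagram_endpoint(
        lambda: StanagTransport(lambda data, addr: print(f"{len(data)} bytes from {addr}")),
        local_addr=("127.0.0.1", 4586),   # hypothetical loopback port for the sketch
    )
    try:
        protocol.send(b"\x00" * 16, ("127.0.0.1", 4586))   # loopback self-test
        await asyncio.sleep(0.1)
    finally:
        transport.close()


if __name__ == "__main__":
    asyncio.run(main())
```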
Sub-millisecond Translation Engine latency. Zero silent information loss. Every dropped or approximated field is logged explicitly.
The Preprint
The paper has four contributions.
The gap analysis was the most time-consuming part of the work and is probably the most immediately useful contribution, independent of the architecture. There is no publicly available systematic comparison of MIL-STD-1553B and STANAG 4586 at the field level that I could find. The taxonomy and the 49-incompatibility analysis are, as far as I know, the first attempt to make that mapping explicit.
The architecture is the central contribution. The comparative analysis — showing that the software-defined approach has structural advantages over commercial hardware gateways in auditability and indigenous modifiability — makes the argument for why this matters beyond just being a technically interesting exercise.
The deployment roadmap from TRL 4 to TRL 6 is there because a paper that only demonstrates feasibility in simulation is less useful than one that is honest about what remains to be done.
The preprint is available at doi.org/10.5281/zenodo.19363710. The code is on GitHub at github.com/Vector-Pi/defence-middleware.
What I Think This Is Actually the Beginning Of
The middleware paper solves a specific problem: getting two existing standards to talk to each other in a way that is open, auditable, and not dependent on Western proprietary hardware.
But I have been thinking for a while now about what solving that specific problem is actually pointing toward.
The interoperability problem is a symptom. The underlying problem is that defence systems were not designed to be integrated. Each platform was built to do its job independently. Communication between platforms was an afterthought, handled through standards that were developed in isolation and never designed to compose cleanly. The result is a collection of capable individual systems that cannot function as a coherent whole.
What I want to build, eventually, is the architecture that makes them a coherent whole.
The way I think about it: what defence needs is not better individual platforms. It is an operating system.
Not an operating system in the narrow software sense — a thing that runs on a device and manages its resources. An operating system in the sense of a central coordination layer that sits across all platforms, connects them through a common data fabric, and provides unified command and control regardless of what hardware is underneath.
The vision has a few components.
A unified data fabric. Every platform — aircraft, UAV, ground vehicle, naval asset, satellite — produces data continuously. Most of that data currently lives in silos. The middleware project is the first layer of this: getting data from legacy standards into a format that can be understood by other systems. But that is the foundation, not the structure. The full fabric would normalize data from every class of defence asset into a common representation that any other asset or any control system can consume.
Platform-agnostic autonomy. Once the data fabric exists, you can put computation on top of it. Autonomy functions — navigation, target identification, threat assessment, resource allocation — that currently have to be implemented separately for every platform type could instead be implemented once, against the common representation, and deployed across any hardware that is connected to the fabric. A drone and a fighter aircraft and a ground station all feeding into the same data layer means autonomy logic written once can operate across all of them.
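A toy illustration of what "written once, against the common representation" means, with hypothetical names: the function below does not know or care whether a position report originated on a 1553B bus or a STANAG 4586 link, only that it arrived in the normalised form.

```python
from dataclasses import dataclass


@dataclass
class PositionReport:
    """Hypothetical normalised record in the common representation."""
    platform_id: str
    lat_deg: float
    lon_deg: float
    alt_m: float
    source_protocol: str   # "MIL-STD-1553B", "STANAG 4586", ...


def closest_to(target_lat: float, target_lon: float, reports: list[PositionReport]) -> PositionReport:
    """Written once against the normalised record; deployable over any connected platform's data."""
    return min(reports, key=lambda r: (r.lat_deg - target_lat) ** 2 + (r.lon_deg - target_lon) ** 2)
```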
Geospatial network integration. The operational picture — where every asset is, what it is doing, what it is seeing — is currently assembled manually or through bespoke systems. With a unified data fabric and common communication protocols, building a real-time operational picture from first principles becomes tractable. Every connected asset is a node in a live geospatial network. The network knows the state of every node.
A single command chain. The fundamental bottleneck in multi-domain operations is coordination latency — the time it takes for a decision made at one level to propagate to the assets that need to act on it, filtered through incompatible systems and manual interpretation at each step. A unified operating system with a common data layer and platform-agnostic control interfaces collapses that latency. One command, correctly formatted, propagates to every relevant asset simultaneously.
Working in quantitative finance taught me something that transfers directly here. When the data is clean, consistent, and correctly structured, signals that were previously invisible become obvious. The same underlying market reality looks like noise through a fragmented, poorly-normalised data pipeline and looks like a clear actionable signal through a clean one. The difference is not in the world — it is in the quality of the information infrastructure. Once you have the signal, algorithms can act on it faster and more precisely than any manual process. Strategies that would require hours of human deliberation become executable in milliseconds.
The same logic applies to a unified defence command chain. When every asset is feeding clean, correctly-formatted, semantically consistent data into a common layer, the operational picture stops being something that has to be manually assembled and interpreted at each level of command. It becomes a signal — and like any clean signal, it can be acted upon by algorithms, plans, and strategies far faster than the current fragmented architecture allows. The coordination bottleneck is not fundamentally a human problem. It is a data quality and data infrastructure problem. Fix the infrastructure and the command chain becomes as fast as the decisions themselves, not as slow as the translation layers between them.
AI integration. This is where the architecture becomes something genuinely different from what exists today.
With all defence assets on a common data fabric, with real-time operational awareness, with platform-agnostic control interfaces — you have the substrate for AI-assisted decision making that is actually grounded in the full operational picture rather than in a partial view. The AI does not replace human command authority. It processes the data faster and more completely than any human operator can, identifies patterns and threats across the full sensor network, models consequences of different courses of action, and presents that analysis to decision makers in a way that makes better decisions faster.
The goal is not autonomous warfare in the sense of machines making kill decisions without human oversight. The goal is AI-assisted warfare in the sense of human decisions that are better informed, faster, and more coherent across the full operational picture than any current system allows.
That is the end state. The middleware project is the first thing on the path toward it.
Where I Am Now
The preprint is out. The paper has been submitted for publication. The implementation is at TRL 4.
The next step on the technical side is the TRL 5–6 work — moving from simulated bus emulation to hardware-in-the-loop testing with actual 1553B and STANAG-compliant hardware interfaces. That requires access to physical hardware that I do not currently have. The path there involves either institutional partnership or the iDEX programme.
In parallel, I am starting to think more concretely about the data fabric architecture — what the Common Internal Representation would need to look like if it were not just bridging two standards but serving as the foundation for a full unified operating system across arbitrary platform types.
That is a much larger project. But most large projects start as a specific problem that someone took seriously.
This one started with reading about why a Tejas and a Rustom-2 cannot share data.
The preprint: doi.org/10.5281/zenodo.19363710
The code: github.com/Vector-Pi/defence-middleware