Message flow and integrity
fluxrig is designed to ensure absolute data integrity and context persistence across asynchronous, stateless execution paths. This document defines how the platform maintains a continuous record of a signal's journey, even when it leaves the rig to traverse external, untrusted networks.
The challenge: asynchronous excursions
In a high-fidelity distributed pipeline, a Request and its Response are decoupled, asynchronous events.
- Request (fluxMsg A): Carries rich operational metadata (e.g., source context, session invariants, risk indicators).
- The Excursion: The Rack emits the signal to an external system (e.g., a banking network or industrial PLC) and immediately releases the thread to process subsequent messages. It does not block.
- Response (fluxMsg B): Arrives as a fresh signal instance. By default, this returning signal possesses no knowledge of the original request's context.
The Goal: To deterministically correlate the returning Response with its original Context, ensuring absolute auditability, compliance, and deterministic routing.
Wire strategies
Not all data requires the same durability profile. fluxrig allows you to optimize the Wire per-flow based on the nature of the signal and required performance resolution.
| Strategy | Transport | Durability | Latency (Typical) | Industry Use Case |
|---|---|---|---|---|
| Standard | NATS JetStream | Durable | 1 - 3ms | High-Assurance Payments, Auditable IoT, Finality. |
| Turbo | Go Channels | Volatile | < 10µs | (Planned) Intranode High-Speed Logic. |
WARNING
Turbo Wires (Planned): Turbo Wires offer sub-millisecond performance by bypassing the NATS bus for local intra-rack flows. This strategy is currently in technical design and targeted for the v0.5.0 milestone.
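As a rough illustration of what per-flow Wire selection might look like, here is a minimal sketch in Go. The type and function names are assumptions for illustration, not the fluxrig API, and the Turbo path reflects the planned (not yet shipped) design.

```go
package main

import "fmt"

// WireStrategy is a hypothetical enumeration of the strategies in the
// table above; names and semantics here are illustrative assumptions.
type WireStrategy int

const (
	Standard WireStrategy = iota // NATS JetStream: durable, ~1-3ms
	Turbo                        // Go channels: volatile, <10µs (planned)
)

// Durable reports whether a strategy survives process or node restarts,
// which is the key trade-off when optimizing a Wire per flow.
func (s WireStrategy) Durable() bool {
	return s == Standard
}

// chooseStrategy picks a Wire based on the flow's durability requirement.
func chooseStrategy(needDurable bool) WireStrategy {
	if needDurable {
		return Standard
	}
	return Turbo // acceptable only for intra-node, loss-tolerant logic
}

func main() {
	fmt.Println(chooseStrategy(true).Durable())  // durable flow -> Standard
	fmt.Println(chooseStrategy(false).Durable()) // hot path -> Turbo (volatile)
}
```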
State management: metadata vs. coat check
To solve the context loss problem, fluxrig utilizes two distinct patterns depending on whether the signal is within the trusted rig mesh or crossing an external boundary.
The Signal Metadata (Intra-Rig Context)
When a signal moves between Gears or Racks, it carries its context in-band via the Metadata map.
- Mechanism: Key-value pairs stored directly in the fluxMsg Deterministic CBOR (RFC 8949) envelope.
- Propagation: The metadata travels with the message. When a Rack publishes to a durable stream, the entire envelope is persisted as a single atom of truth.
- Durability: Guaranteed by the underlying transport layer with At-Least-Once delivery and high-availability retention.
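The in-band pattern above can be sketched as a round trip through the envelope encoding. This is a minimal illustration, not the fluxrig implementation: the struct fields are assumptions, and JSON stands in for the Deterministic CBOR (RFC 8949) encoding used on the real wire.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// fluxMsg is a hypothetical sketch of the signal envelope. Field names
// are assumptions; JSON substitutes for Deterministic CBOR here.
type fluxMsg struct {
	Payload  []byte            `json:"payload"`
	Metadata map[string]string `json:"metadata"`
}

// roundTrip simulates persisting the envelope to a durable stream and
// reading it back: payload and metadata travel as a single atom of truth.
func roundTrip(in fluxMsg) (fluxMsg, error) {
	raw, err := json.Marshal(in)
	if err != nil {
		return fluxMsg{}, err
	}
	var out fluxMsg
	err = json.Unmarshal(raw, &out)
	return out, err
}

func main() {
	msg := fluxMsg{
		Payload:  []byte("auth-request"),
		Metadata: map[string]string{"session": "abc-123", "risk": "low"},
	}
	out, _ := roundTrip(msg)
	fmt.Println(out.Metadata["session"]) // context survives the hop in-band
}
```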
The Coat Check (External Correlation)
When a signal must leave the rig to traverse an external network (e.g., raw TCP) that does not support the fluxMsg envelope, we implement the Coat Check pattern.
- The Drop-off: Before the signal exits the Rack, its metadata context is serialized and stored in a high-speed NATS KV store.
- The Ticket: A unique identifier guaranteed to be returned by the external system, such as a System Trace Audit Number (STAN) or Retrieval Reference Number (RRN), serves as the correlation key.
- The Pickup: When the response signal arrives, the Rack uses the "Ticket" to retrieve and re-attach the original metadata to the new fluxMsg, restoring the signal's full identity and audit trail.
IMPORTANT
Sovereign Security: The Coat Check is the technical foundation for Tokenization. Sensitive data (like PANs) can be "checked in" at the Edge Rack and never traverse the central Mixer or external telemetry backends, significantly reducing compliance audit scope.
The context lifecycle
The following sequence illustrates the Coat Check pattern during a typical asynchronous transaction excursion.
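The drop-off/pickup steps can be sketched with an in-memory store. This is an illustration only: a real Rack would back the store with a NATS KV bucket, and the type and method names here are assumptions.

```go
package main

import (
	"fmt"
	"sync"
)

// CoatCheck is a minimal in-memory stand-in for the NATS KV store used
// by the real pattern. Names are illustrative, not the fluxrig API.
type CoatCheck struct {
	mu      sync.Mutex
	tickets map[string]map[string]string
}

func NewCoatCheck() *CoatCheck {
	return &CoatCheck{tickets: make(map[string]map[string]string)}
}

// DropOff stores the outbound signal's metadata under a correlation key
// (e.g., a STAN or RRN) before the signal leaves the rig.
func (c *CoatCheck) DropOff(ticket string, meta map[string]string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.tickets[ticket] = meta
}

// Pickup retrieves and removes the stored context when the response
// returns, so it can be re-attached to the fresh fluxMsg.
func (c *CoatCheck) Pickup(ticket string) (map[string]string, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	meta, ok := c.tickets[ticket]
	delete(c.tickets, ticket)
	return meta, ok
}

func main() {
	cc := NewCoatCheck()
	cc.DropOff("STAN-000042", map[string]string{"session": "abc-123"})
	// ... the signal traverses the external network and returns ...
	meta, ok := cc.Pickup("STAN-000042")
	fmt.Println(ok, meta["session"]) // original context restored
}
```

Deleting the ticket on pickup keeps each correlation key single-use, which is what lets sensitive "checked-in" data (such as PANs) stay at the Edge Rack rather than travel with the signal.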
Operational control signaling
Beyond business data signals, fluxrig maintains a dedicated, high-priority Control Plane Signaling hierarchy (flux.ctrl.>) for out-of-band management and safety triggers.
Signal types & patterns
| Signal | Subject Pattern | Description | Impact |
|---|---|---|---|
| Kill Switch | flux.ctrl.kill.> | Emergency cessation of Gear processing. | Immediately halts the target Gear's internal loops. |
| Conn Close | flux.ctrl.close.> | Orchestrated termination of an I/O transport. | Triggers a clean socket closure and resource release. |
| Scenario Update | flux.ctrl.sync.> | Pushing a new execution topology. | Initiates the Hot-Reload Process. |
Security & delivery
- Order of Precedence: Control signals always bypass the standard data-plane queues to ensure immediate execution, even if the primary business queues are saturated.
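The precedence rule can be sketched with a two-stage select, using buffered Go channels as stand-ins for the real NATS subjects. The function name and channel-based queues are assumptions for illustration.

```go
package main

import "fmt"

// nextSignal sketches control-plane precedence: control signals are
// drained before data-plane work, even when the data queue is saturated.
func nextSignal(ctrl, data <-chan string) string {
	// Stage 1: non-blocking check, so control always wins when both
	// queues have pending signals.
	select {
	case s := <-ctrl:
		return s
	default:
	}
	// Stage 2: otherwise block until either plane produces a signal.
	select {
	case s := <-ctrl:
		return s
	case s := <-data:
		return s
	}
}

func main() {
	ctrl := make(chan string, 1)
	data := make(chan string, 1)
	data <- "business-msg"
	ctrl <- "flux.ctrl.kill.gear-7"
	fmt.Println(nextSignal(ctrl, data)) // the kill switch is handled first
}
```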
Reliability: connectivity convergence
To achieve the Sovereign Continuity objective, fluxrig implements a relentless connectivity handshake during every deployment and hot-reload.
The Relentless Handshake
When a Rack starts or reloads a Scenario, it does not immediately activate the gear logic. Instead, it enters a Convergence Phase:
- Sync Probes: The Rack emits FlagSyncProbe signals (internal NATS control messages) across every defined Wire in the topology.
- Propagation Loop: These probes circulate through the NATS mesh every 500ms.
- Finality Check: The Rack waits until every signal path confirms it is "hot" and reachable across the distributed nodes.
- Gear Activation: Only after 100% convergence is confirmed are the business and protocol gears (e.g., ISO8583/Wasm) allowed to start processing real-world traffic.
NOTE
This mechanism solves the First-Message Loss problem typically found in distributed messaging systems, where JetStream subjects may take milliseconds to propagate to all nodes after a topology change.
Reliability: sagas and compensation signals
fluxrig treats failures as data rather than exceptions. This allows for the orchestration of complex, distributed transactions without fragile locks, prioritizing deterministic terminal states.
- Optional Error Routing: Gears can define a logical .err port for high-fidelity error handling. Note that this is a Logic-Driven Pattern: the engine provides the wiring infrastructure, but the individual Gear implementation must be coded to explicitly emit problematic signals to the .err port upon failure.
- Saga Pattern: This pattern enables the implementation of Sagas, where a failure at a specific node triggers a compensating signal (e.g., a reversal or an automated alert) to restore the system to a clean terminal state.
- Finality Governance: We enforce a policy where every signal eventually reaches a "Success" or "Failure" state, ensuring the system remains self-healing, auditable, and compliant with institutional data standards.
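The bullets above can be sketched as a gear that explicitly routes failures to its .err port, paired with a compensator that emits a reversal. The gear signature, port struct, and signal strings are hypothetical; only the shape of the pattern is taken from the text.

```go
package main

import "fmt"

// ports models a gear's output wiring: the engine provides both paths,
// but only the gear's own logic decides when to use .err.
type ports struct {
	out chan string
	err chan string
}

// debitGear is a hypothetical gear. On a simulated downstream failure it
// must explicitly emit to the .err port; the engine will not do this for it.
func debitGear(amount int, p ports) {
	if amount > 100 { // simulated failure condition
		p.err <- "debit-failed"
		return
	}
	p.out <- "debit-ok"
}

// compensate consumes an .err signal and emits a compensating signal
// (e.g., a reversal), driving the flow to a clean terminal state.
func compensate(errSig string) string {
	return "reversal-for:" + errSig
}

func main() {
	p := ports{out: make(chan string, 1), err: make(chan string, 1)}
	debitGear(500, p)                // fails: signal routed to .err
	fmt.Println(compensate(<-p.err)) // Saga emits the compensating signal
}
```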
TIP
Transport Abstraction: By leveraging high-level messaging abstractions, fluxrig decouples business logic from the underlying NATS transport. This allows you to test complex Gear logic in-memory without a network server, ensuring high-fidelity validation during development.
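To illustrate the tip above, gear logic that is decoupled from the transport can be exercised over plain Go channels, with no NATS server in the loop. The gear signature below is an assumption for illustration, not the fluxrig API.

```go
package main

import (
	"fmt"
	"strings"
)

// upperGear is a trivial stand-in for real gear logic: it consumes
// signals from an in-memory "wire" and emits transformed signals.
func upperGear(in <-chan string, out chan<- string) {
	for msg := range in {
		out <- strings.ToUpper(msg)
	}
	close(out)
}

func main() {
	// Buffered channels act as in-memory wires, so the gear can run
	// synchronously in a unit test without a network server.
	in := make(chan string, 2)
	out := make(chan string, 2)
	in <- "pan-masked"
	in <- "auth"
	close(in)
	upperGear(in, out)
	for msg := range out {
		fmt.Println(msg)
	}
}
```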