Barfinex as an Operating System: The Five-Service Architecture
A deep look at how Provider, Detector, Advisor, Inspector, and Studio connect to form a complete AI-native trading operating system — with typed event contracts, structured AI reasoning, and end-to-end auditability.
The Design Principle
A trading system that crashes because one component locked up is a liability. A system where deploying a new strategy requires restarting the data pipeline is fragile. A system where you cannot trace why a trade was placed at 3am is unacceptable in production.
Barfinex is built around a single design principle: every stage of the trading lifecycle gets its own service, every service communicates through typed and auditable contracts, and every decision is a traceable event.
This means you can update Detector strategy logic without touching the Provider data feed. You can adjust Inspector risk policies without restarting the Advisor. You can trace any executed order back to the signal that generated it, the AI reasoning that approved it, and the risk check that validated it.
The Five Services in Detail
Provider (port 8081) — The Foundation
Provider is the single source of truth for all market data in the system. No other service connects directly to an exchange. Everything goes through Provider.
What Provider does:
- Connects to exchange WebSocket feeds and REST APIs (Binance, Alpaca, others via connector plugins)
- Normalizes raw tick data into OHLCV candles at multiple timeframes
- Detects and repairs candle gaps — identifies missing time ranges, fetches from historical API, validates the filled seams
- Tracks ingestion health per exchange connector; exposes over 25 debug and audit endpoints specifically for data integrity verification
- Maintains a registry of all running services (Detector instances, Advisor, Inspector). Other services register with Provider at startup and send periodic heartbeats. Provider proxies all external traffic to registered services.
- Exposes a unified REST API and a single WebSocket endpoint, /ws — the only connection point Studio needs
The practical effect: you deploy one URL, one auth token, one WebSocket. Studio and any external clients talk only to Provider. The rest of the system is invisible from the outside.
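Provider's gap detection can be illustrated with a small sketch. This is a hypothetical simplification, not Barfinex's actual implementation: given the sorted open-times of the candles already ingested and the timeframe interval, it returns the missing ranges that would then be backfilled from the historical API.

```typescript
// A missing range of candle open-times, inclusive on both ends.
interface GapRange {
  from: number;
  to: number;
}

// Scan consecutive candle open-times; any jump larger than one interval
// is a gap whose missing open-times span [expected, next - interval].
function findGaps(openTimes: number[], intervalMs: number): GapRange[] {
  const gaps: GapRange[] = [];
  for (let i = 1; i < openTimes.length; i++) {
    const expected = openTimes[i - 1] + intervalMs;
    if (openTimes[i] > expected) {
      gaps.push({ from: expected, to: openTimes[i] - intervalMs });
    }
  }
  return gaps;
}
```

After the gap is fetched and inserted, the same scan over the repaired range returning no gaps serves as the seam validation step.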
Detector (port 8101) — The Strategy Runtime
Detector is where your trading strategy lives. Each Detector instance is a TypeScript configuration object describing what to observe, what conditions to evaluate, and how to weight them into a scored signal.
Key characteristics:
- Rule engine — evaluates each rule condition against real-time candle and order flow data from Provider on every new candle
- Scored output — each rule carries a numeric weight; signals are only emitted when the combined score crosses a configured threshold
- Full attribution — the signal event includes a complete record of which rules fired, which didn't, and what each contributed to the score
- Instance isolation — multiple Detector configurations run simultaneously without sharing state. An error in one does not affect others.
- Multiple strategies in parallel — run different strategies for different symbols, different market regimes, or A/B testing different rule weights on the same symbol
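The rule engine's core loop can be sketched in a few lines. The names and the example rules here are hypothetical, but the shape matches what the list above describes: weighted boolean rules, a summed score, a threshold gate, and full per-rule attribution on every emitted signal.

```typescript
interface Candle {
  open: number;
  high: number;
  low: number;
  close: number;
  volume: number;
}

interface Rule {
  id: string;
  weight: number;
  evaluate: (c: Candle) => boolean;
}

interface RuleResult {
  id: string;
  fired: boolean;
  contribution: number;
}

interface Signal {
  score: number;
  results: RuleResult[]; // full attribution: every rule, fired or not
}

// Evaluate every rule, sum the weighted contributions, and emit a
// signal only when the combined score crosses the threshold.
function evaluateRules(rules: Rule[], candle: Candle, threshold: number): Signal | null {
  const results: RuleResult[] = rules.map((r) => {
    const fired = r.evaluate(candle);
    return { id: r.id, fired, contribution: fired ? r.weight : 0 };
  });
  const score = results.reduce((sum, r) => sum + r.contribution, 0);
  return score >= threshold ? { score, results } : null;
}

// Two illustrative rules (hypothetical, for demonstration only).
const demoRules: Rule[] = [
  { id: "bullish-close", weight: 0.6, evaluate: (c) => c.close > c.open },
  { id: "volume-surge", weight: 0.5, evaluate: (c) => c.volume > 100 },
];
```

Because the signal carries the full `results` array, Studio can later show exactly which rules fired and what each contributed — the "full attribution" property above falls out of the data shape rather than being bolted on.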
Advisor (port 8009) — The AI Decision Engine
Advisor is the AI decision engine. This is the layer that distinguishes Barfinex from a pure rule-based system.
The Advisor runs a structured 8-stage pipeline on every incoming signal from any Detector instance:
- Market context — fetches current prices, positions, and account state through Provider's MCP tools
- Market quality gate — scores data freshness, spread stability, and liquidity; blocks if below threshold
- Independent signal evaluation — applies its own rule and ML scoring layer separately from Detector's assessment
- Conviction scoring — combines the rule and ML evaluations into a normalized [0, 1] confidence value
- Conviction calibration — applies logistic regression with per-regime scaling (Platt scaling and isotonic regression); accounts for historical performance distribution across different market regimes
- LLM synthesis — sends a structured context object to the configured LLM; receives directional bias, reasoning text, and sizing recommendations; validates the output programmatically; emits a hallucination detection event if the output is inconsistent
- Decision validation — blocks if spread > 0.4% or risk/reward < 1.2
- Execution intent — emits a typed event that Inspector subscribes to
Over 28 distinct event types are emitted as telemetry from the Advisor pipeline, stored in the time-series audit log, and visible in Studio.
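Two of the stages above are simple enough to sketch directly. The coefficients and thresholds below are illustrative assumptions (only the 0.4% spread and 1.2 risk/reward limits come from the pipeline description); the Platt step shows the logistic form of the calibration, with the `a` and `b` parameters assumed to be fitted offline per market regime.

```typescript
// Stage 5 (Platt scaling variant): map a raw conviction score through a
// fitted logistic curve. a and b would be fitted per-regime from
// historical performance; here they are just parameters.
function plattCalibrate(raw: number, a: number, b: number): number {
  return 1 / (1 + Math.exp(-(a * raw + b)));
}

interface IntentCheck {
  spreadPct: number;   // current spread as a percentage
  riskReward: number;  // reward/risk ratio of the proposed trade
}

// Stage 7: block if spread > 0.4% or risk/reward < 1.2.
function passesValidation(c: IntentCheck): boolean {
  return c.spreadPct <= 0.4 && c.riskReward >= 1.2;
}
```

With `a = 0` and `b = 0` the calibration is the identity at the midpoint (any input maps to 0.5), which is why fitting those parameters against each regime's historical distribution is the substance of the calibration stage.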
Additionally, Provider exposes the Advisor's full API as Model Context Protocol tools — making the Advisor directly callable from any LLM that supports tool use, without additional integration code.
Inspector (port 8008) — The Risk Governor
Inspector is the gate between decision and execution. No order reaches the exchange without Inspector's explicit approval.
For each execution intent from Advisor, Inspector evaluates:
- Position size against configured limits
- Total portfolio exposure against caps
- Session drawdown against thresholds
- Consecutive losses against limits
- Cooldown periods and time-of-day restrictions
Available responses: ALLOW, BLOCK, THROTTLE, STAND_DOWN, CLOSE_ALL.
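A minimal sketch of how such a policy check might be ordered — the verdict names come from the list above, but the field names and the precedence (drawdown first, then loss streaks, then size and cooldown) are assumptions for illustration, not the actual Inspector policy:

```typescript
type Verdict = "ALLOW" | "BLOCK" | "THROTTLE" | "STAND_DOWN" | "CLOSE_ALL";

interface RiskState {
  positionSize: number;
  maxPositionSize: number;
  sessionDrawdownPct: number;
  maxDrawdownPct: number;
  consecutiveLosses: number;
  maxConsecutiveLosses: number;
  inCooldown: boolean;
}

// Evaluate an execution intent against configured limits.
// The most severe conditions are checked first so the strongest
// verdict wins when multiple limits are breached.
function evaluateIntent(s: RiskState): Verdict {
  if (s.sessionDrawdownPct >= s.maxDrawdownPct) return "STAND_DOWN";
  if (s.consecutiveLosses >= s.maxConsecutiveLosses) return "THROTTLE";
  if (s.positionSize > s.maxPositionSize) return "BLOCK";
  if (s.inCooldown) return "BLOCK";
  return "ALLOW";
}
```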
Beyond policy validation, Inspector manages the full protective order lifecycle for every position it opens: places stop-loss and take-profit orders immediately after fill, adjusts them as price moves, cancels them when positions close. It also tracks slippage, fill rates, and execution latency for every trade, and runs periodic reconciliation against the actual account state on the exchange.
Studio (port 8010 BFF) — The Operations Terminal
Studio is the Next.js terminal UI. It connects exclusively through Provider's unified gateway — no direct connections to Detector, Advisor, or Inspector ports.
What Studio surfaces:
- Live candlestick charts with signal overlays and position markers
- Per-rule firing history for every signal with score attribution
- Advisor decision log: reasoning traces, conviction scores, calibration state, blocking events
- Inspector risk dashboard: active positions, KPIs, drawdown state, audit log
- Capital efficiency metrics: utilization, suppression, and reservation analytics
Studio is read-only with respect to execution — it observes the pipeline, it does not control it.
The Event Bus
Services communicate asynchronously through Redis pub/sub. All channel names and payload shapes are defined in the shared libs/types TypeScript package. If a channel name or payload structure changes, the TypeScript compiler catches it before it reaches production.
This typed contract layer is what allows services to be deployed, updated, or replaced independently without coordination failures.
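The contract pattern can be sketched as a mapped type: each channel name keys exactly one payload shape, and a generic publish helper ties the two together so that a mismatched payload fails at compile time. The channel names and payload fields below are hypothetical stand-ins, not the actual libs/types definitions.

```typescript
// One entry per bus channel: the key is the channel name,
// the value is the only payload shape that channel accepts.
interface ChannelPayloads {
  "candles.closed": { symbol: string; timeframe: string; openTime: number; close: number };
  "detector.signal": { symbol: string; score: number; firedRules: string[] };
  "advisor.intent": { symbol: string; side: "BUY" | "SELL"; size: number; conviction: number };
}

type Channel = keyof ChannelPayloads;

// Publishing "advisor.intent" with a "detector.signal" payload is a
// compile error — the contract is enforced before the code ever runs.
function publish<C extends Channel>(channel: C, payload: ChannelPayloads[C]): string {
  return JSON.stringify({ channel, payload, ts: Date.now() });
}
```

In the real system the serialized message would go to Redis pub/sub; the point here is that renaming a channel or changing a payload field breaks every call site at compile time, which is the coordination guarantee the paragraph above describes.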
Data Flows End-to-End
Exchange
→ Provider (ingest, normalize, publish to bus)
→ Detector (subscribe to candles, evaluate rules, emit signal)
→ Advisor (subscribe to signal, run 8-stage AI pipeline, emit execution intent)
→ Inspector (subscribe to intent, validate policy, place order via Provider)
→ Exchange (order placed)
All stages → event bus → Studio (observe everything in real time)
Every arrow in this diagram is a typed event with a timestamp, a payload schema, and an audit record. Nothing in the system communicates through undocumented side channels.
Deployment
The full stack runs locally with a single docker-compose up. Each service can also be deployed to separate infrastructure as scale requirements grow. The only hard routing constraint: every service that needs market data must be able to reach Provider. All other services can be arranged freely.
In the next article, we walk through the Detector rule engine — how to express a trading strategy as a typed configuration and what you get for observability by default.