How Barfinex Works
Advisor — AI decision engine
Advisor is Barfinex's AI decision engine. It runs a structured 8-stage reasoning pipeline on every trading signal — from market quality checks to LLM synthesis and decision validation.
What the Advisor Is
The Advisor is the AI decision layer of Barfinex. It is not a thin wrapper around an LLM call. It is a structured 8-stage pipeline that runs on every signal emitted by the Detector, producing a validated execution intent that Inspector can act on.
The stages always run in a fixed, deterministic order. Every decision is a typed event in the audit log. The entire pipeline is observable in Studio.
The 8-Stage Pipeline
Stage 1 — Market context
The Advisor fetches current market state through Provider's MCP tools: live prices, account positions, open orders, and recent execution history. This context is structured and passed forward through the pipeline.
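As a rough sketch, the structured context carried through the pipeline could be modeled like the dataclass below. The field names are illustrative assumptions, not Barfinex's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class MarketContext:
    """Illustrative shape of the market state fetched via Provider's MCP tools."""
    symbol: str
    bid: float
    ask: float
    positions: list = field(default_factory=list)
    open_orders: list = field(default_factory=list)
    recent_fills: list = field(default_factory=list)

    @property
    def mid(self) -> float:
        # Mid price derived from the live quote.
        return (self.bid + self.ask) / 2

    @property
    def spread_pct(self) -> float:
        # Spread as a percentage of mid; used by later gate stages.
        return (self.ask - self.bid) / self.mid * 100
```

Derived values such as the spread percentage are computed once here and reused by the quality gate and validation stages rather than recomputed from raw quotes.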
Stage 2 — Market quality gate
The market quality engine scores data freshness, spread stability, and liquidity depth. If the quality score falls below the configured threshold, the signal is blocked at this stage and no further computation runs. The block is logged with the quality score and reason.
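A minimal sketch of such a gate, assuming each input is already normalized to [0, 1]; the weights and threshold here are illustrative, not Barfinex's configured values:

```python
def quality_score(freshness, spread_stability, liquidity_depth,
                  weights=(0.4, 0.3, 0.3)):
    # Weighted blend of the three quality dimensions (weights are assumptions).
    w_f, w_s, w_l = weights
    return w_f * freshness + w_s * spread_stability + w_l * liquidity_depth

def quality_gate(score, threshold=0.6):
    # Returns (passed, log_entry). In the real system the block would be
    # emitted as a typed event with the score and reason, as described above.
    if score < threshold:
        return False, {"blocked": True, "quality_score": score,
                       "reason": "quality below threshold"}
    return True, {"blocked": False, "quality_score": score}
```

Because the gate runs before any model inference, a blocked signal costs only this cheap arithmetic check.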
Stage 3 — Rule and ML evaluation
Independent of the Detector's scoring, the Advisor runs its own signal evaluation layer — combining rule-based conditions with optional ML feature scoring. This produces a secondary conviction signal that adds evidence for or against the Detector's assessment.
Stage 4 — Conviction scoring
The rule and ML outputs are combined into a single normalized conviction score in [0, 1]. This score represents the Advisor's confidence in the signal before calibration.
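One way to picture the combination step; the blend weight and clipping are assumptions for illustration, not the documented formula:

```python
def conviction(rule_score, ml_score=None, ml_weight=0.5):
    """Blend rule-based and optional ML scores into a single [0, 1] conviction."""
    if ml_score is None:
        # ML feature scoring is optional (Stage 3), so fall back to rules only.
        raw = rule_score
    else:
        raw = (1 - ml_weight) * rule_score + ml_weight * ml_score
    # Clamp to the normalized [0, 1] range expected by calibration (Stage 5).
    return min(1.0, max(0.0, raw))
```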
Stage 5 — Conviction calibration
The raw conviction score is calibrated using logistic regression with per-regime scaling. Two calibration methods are supported: Platt scaling and isotonic regression. The calibrated score accounts for historical performance distribution across different market regimes, reducing overconfidence in volatile conditions and underconfidence in trending ones.
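A sketch of the Platt-scaling branch with hypothetical per-regime coefficients (in practice these would be fit from historical score/outcome pairs per regime; the values below are invented to show the damping effect):

```python
import math

def platt_calibrate(raw_score, a, b):
    # Platt scaling: sigmoid(a * s + b) maps a raw score to a calibrated
    # probability. Isotonic regression is the non-parametric alternative.
    return 1.0 / (1.0 + math.exp(-(a * raw_score + b)))

# Illustrative per-regime coefficients (assumptions, not Barfinex's values).
# A flatter curve in the volatile regime damps overconfident raw scores.
REGIME_PARAMS = {"trending": (6.0, -2.5), "volatile": (3.0, -1.8)}

def calibrated_conviction(raw_score, regime):
    a, b = REGIME_PARAMS[regime]
    return platt_calibrate(raw_score, a, b)
```

With these example coefficients, a raw score of 0.9 calibrates lower in the volatile regime than in the trending one, which is exactly the overconfidence reduction described above.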
Stage 6 — LLM synthesis
The calibrated conviction, market context, and rule attribution are assembled into a structured context object and sent to the configured LLM. The LLM produces:
- Directional bias (long / short / neutral)
- Confidence assessment with reasoning text
- Suggested position sizing adjustments
The LLM is given a constrained context — not a free-form prompt — and its output is parsed programmatically. If the output is malformed or contradicts the quantitative signal by a significant margin, a hallucination detection event is emitted and the cycle is blocked.
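The parse-and-check step could be sketched as follows. The JSON field names and the divergence rule are assumptions for illustration, not the actual Advisor schema:

```python
import json

def parse_llm_output(raw_json, quant_bias, quant_conviction,
                     divergence_limit=0.5):
    """Parse the structured LLM response; flag malformed or contradictory output."""
    try:
        out = json.loads(raw_json)
        bias = out["bias"]
        confidence = float(out["confidence"])
    except (ValueError, KeyError, TypeError):
        # Malformed output triggers a hallucination detection event.
        return None, "HALLUCINATION: malformed output"
    if bias not in ("long", "short", "neutral"):
        return None, "HALLUCINATION: invalid bias"
    # Contradiction: opposite direction while both sides are confident.
    opposed = {("long", "short"), ("short", "long")}
    if (bias, quant_bias) in opposed and \
            min(confidence, quant_conviction) > divergence_limit:
        return None, "HALLUCINATION: contradicts quantitative signal"
    return out, None
```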
Stage 7 — Decision validation
The LLM output is validated against hard constraints:
- Spread must be below 0.4%
- Risk/reward ratio must be at least 1.2
If either constraint fails, the decision is blocked regardless of conviction score. The validation result is logged.
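The two hard constraints are simple to express directly; a minimal sketch using the thresholds stated above:

```python
def validate_decision(spread_pct, risk_reward,
                      max_spread_pct=0.4, min_rr=1.2):
    """Stage 7 hard constraints: both must hold or the decision is blocked."""
    reasons = []
    if spread_pct >= max_spread_pct:
        reasons.append(f"spread {spread_pct:.3f}% >= {max_spread_pct}%")
    if risk_reward < min_rr:
        reasons.append(f"risk/reward {risk_reward:.2f} < {min_rr}")
    # An empty reasons list means the decision passes; otherwise the
    # collected reasons are logged with the block event.
    return len(reasons) == 0, reasons
```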
Stage 8 — Execution intent
If all stages pass, the Advisor emits a typed execution intent event to the bus. Inspector subscribes to these events and applies its own risk policy layer before any order is placed.
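A toy sketch of building and publishing such an intent event. The event type name and payload shape here are hypothetical, and the bus client can be anything with a `publish(channel, message)` method (redis-py's `Redis` fits, matching the Redis event bus described below):

```python
import json
import time

def build_execution_intent(symbol, side, size, conviction, decision_id):
    # Illustrative event envelope; the actual Barfinex schema is not shown here.
    return {
        "type": "ADVISOR_EXECUTION_INTENT",  # hypothetical event type name
        "ts": time.time(),
        "decision_id": decision_id,
        "payload": {"symbol": symbol, "side": side,
                    "size": size, "conviction": conviction},
    }

def publish(bus_client, channel, event):
    # Serialize and hand off to the bus; Inspector subscribes on its side
    # and applies its own risk policy before placing any order.
    bus_client.publish(channel, json.dumps(event))
```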
Telemetry
Every stage of the pipeline produces typed events published to the Redis event bus and stored in the time-series audit log. Notable event types include:
- ADVISOR_CONVICTION_SNAPSHOT — conviction score at each stage
- ADVISOR_ATTRIBUTION — which signals contributed to the decision and by how much
- ADVISOR_REGIME_ROTATION — when the market regime detection triggers a calibration change
- ADVISOR_HALLUCINATION_DETECTED — when LLM output is flagged as inconsistent
- ADVISOR_MODEL_SWITCHED — when the active LLM model changes
- ADVISOR_CONFIDENCE_LOW — when calibrated confidence falls below the alert threshold
- ADVISOR_DECISION_BLOCKED — when any stage blocks the decision, with the blocking reason
This telemetry is accessible in Studio's Advisor decision log and queryable through the time-series storage layer.
MCP Integration
Advisor's full API is exposed as Model Context Protocol tools through Provider. This means any LLM with MCP support can:
- Query the latest Advisor decisions and their reasoning
- Request a decision cycle on a specific symbol
- Inspect calibration state and conviction history
- Check active strategy configurations
This makes Barfinex natively composable with AI agents and LLM-driven workflows.
Execution Modes
The Advisor supports two execution routing modes:
Inspector-gated (default) — Execution intents are published to the event bus. Inspector subscribes, applies its risk policy layer, and places orders if approved. This is the recommended mode for production.
Direct Provider — With ADVISOR_DIRECT_EXECUTION_ENABLED=true, the Advisor calls the Provider order API directly, bypassing Inspector. This is intended for specific testing scenarios. Do not use in production without understanding the implications.
How the Advisor Connects
- Event bus — required. The Advisor subscribes to Detector signal events and publishes decision events.
- Provider — required. The Advisor registers with Provider at startup via the app registry. All market context is fetched through Provider's API. After registration, Studio and API clients reach the Advisor through Provider's proxy at /api/advisors/<appKey>/....
What You See in Studio
- Decision log — each cycle shows the conviction score, calibrated confidence, LLM reasoning text, and final decision
- Rule attribution — which signals contributed to the decision and their weights
- Block events — why a decision was blocked at which stage
- Regime state — current market regime classification and calibration mode
Next Steps
- Barfinex architecture — How Advisor fits into the full pipeline
- Advisor API reference — REST endpoints and event types
- Inspector overview — How execution intents become orders