LIVE QUANT DASHBOARD

These are the core services I have built recently: a distributed compute scheduler, a Monte Carlo risk engine, and a contract intelligence engine. Each one runs as its own FastAPI backend with real models behind it.

You do not need to click anything. Check the Online status and the live tiles, and open the JSON proofs if you want to see raw output. The page is meant to be easy to skim while still showing real engineering work.

Distributed Compute Scheduler

Job queue and scheduler with priorities, resources, and preemption.

Live demo

In practice: this is a small internal job scheduler. It accepts work from different teams, queues it, and runs jobs based on priorities and resources.

Hiring signal: shows I can design and ship a small distributed system, not just train models.
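
To make the queueing and preemption behaviour concrete, here is a minimal Python sketch, assuming a heapq-backed priority queue and a fixed CPU budget. The Job and Scheduler names and the numbers are illustrative assumptions for this example, not the service's actual code.

import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    priority: int                     # lower value = more urgent
    seq: int                          # tie-breaker so equal priorities stay FIFO
    name: str = field(compare=False)
    cpus: int = field(compare=False)  # resource request

class Scheduler:
    """Toy priority scheduler: queue jobs, dispatch while CPUs remain,
    and preempt the lowest-priority running job for a more urgent one."""

    def __init__(self, total_cpus: int = 4):
        self.total_cpus = total_cpus
        self.queue: list[Job] = []
        self.running: list[Job] = []
        self._seq = itertools.count()

    def submit(self, name: str, priority: int, cpus: int) -> None:
        heapq.heappush(self.queue, Job(priority, next(self._seq), name, cpus))

    def _free_cpus(self) -> int:
        return self.total_cpus - sum(j.cpus for j in self.running)

    def dispatch(self) -> None:
        while self.queue:
            job = self.queue[0]
            if job.cpus <= self._free_cpus():
                self.running.append(heapq.heappop(self.queue))
                continue
            # Preemption: push the least urgent running job back on the queue
            # if the waiting job outranks it, otherwise stop dispatching.
            victim = max(self.running, key=lambda j: j.priority, default=None)
            if victim is not None and victim.priority > job.priority:
                self.running.remove(victim)
                heapq.heappush(self.queue, victim)
                continue
            break

sched = Scheduler(total_cpus=4)
sched.submit("etl-nightly", priority=5, cpus=3)
sched.submit("ad-hoc-backtest", priority=1, cpus=2)
sched.dispatch()
print([j.name for j in sched.running])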

Workers: 4
Queue depth: 2
Jobs running: 1
p50 (ms): 32
p95 (ms): 120
JSON proof

This panel shows either the latest health response or a saved example from a real run of the service.

{
  "status": "ok",
  "workers": 4,
  "queue_depth": 2,
  "jobs_running": 1,
  "p50_ms": 32,
  "p95_ms": 120,
  "note": "Sample output from a previous run of the scheduler."
}
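
As a rough illustration of how a panel like this could be backed, here is a hedged FastAPI sketch that serves a health payload in the same shape. The route name and the hard-coded numbers are assumptions for the example, not the live service.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class SchedulerHealth(BaseModel):
    status: str
    workers: int
    queue_depth: int
    jobs_running: int
    p50_ms: int
    p95_ms: int

@app.get("/health", response_model=SchedulerHealth)
def health() -> SchedulerHealth:
    # In a real service these values would come from the scheduler's
    # internal state and a rolling latency histogram; hard-coded here.
    return SchedulerHealth(
        status="ok",
        workers=4,
        queue_depth=2,
        jobs_running=1,
        p50_ms=32,
        p95_ms=120,
    )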

Monte Carlo Risk Engine

Simulates price paths and volatility shocks using stochastic models.

Live demo

In practice: this runs thousands of market scenarios to show the range of possible outcomes, not just a single forecast.

Hiring signal: demonstrates I understand risk and portfolio thinking and can wrap quant models in a clean API.
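
A minimal sketch of the core computation, assuming geometric Brownian motion for returns and empirical VaR/CVaR on the simulated terminal P&L. The drift, volatility, and function names are illustrative choices, not the engine's actual parameters.

import numpy as np

def simulate_pnl(paths: int = 10_000, horizon_days: int = 252,
                 mu: float = 0.05, sigma: float = 0.20,
                 seed: int = 42) -> np.ndarray:
    """Simulate terminal simple returns under geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / 252.0
    # Daily log-returns for every path, compounded over the horizon.
    daily = rng.normal((mu - 0.5 * sigma**2) * dt,
                       sigma * np.sqrt(dt),
                       size=(paths, horizon_days))
    return np.exp(daily.sum(axis=1)) - 1.0

def var_cvar(pnl: np.ndarray, level: float = 0.95) -> tuple[float, float]:
    """VaR is the loss quantile; CVaR is the mean loss beyond it."""
    var = np.quantile(pnl, 1.0 - level)
    cvar = pnl[pnl <= var].mean()
    return float(var), float(cvar)

pnl = simulate_pnl()
var_95, cvar_95 = var_cvar(pnl)
print(f"VaR 95%: {var_95:.2f}  CVaR 95%: {cvar_95:.2f}")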

Paths: 10000
Horizon (days): 252
VaR 95%: -0.18
CVaR 95%: -0.24
Latency (ms): 45
JSON proof

This panel shows either the latest health response or a saved example from a real run of the service.

{
  "status": "ok",
  "paths": 10000,
  "horizon_days": 252,
  "var_95": -0.18,
  "cvar_95": -0.24,
  "latency_ms": 45,
  "note": "Sample output from a previous Monte Carlo run."
}

Contract Intelligence Engine

Extracts legal intelligence from MSP and SaaS contracts via FastAPI and PyTorch.

Live demo

In practice: it reads MSP and SaaS contracts and pulls out key terms like pricing, renewal dates, and SLAs so teams do not have to scan PDFs by hand.

Hiring signal: shows I can combine NLP models with rule-based extraction and expose them as a reliable service.
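
A minimal sketch of the rule-based side of that pipeline, using regular expressions for a few common clauses. The patterns, field names, and sample text are illustrative only, and the model-based extraction step is omitted.

import re

# Illustrative patterns for a few common contract terms; a real pipeline
# would merge these rule hits with model-based entity predictions.
PATTERNS = {
    "renewal_date": re.compile(
        r"renew(?:s|al)?\s+on\s+(\d{4}-\d{2}-\d{2})", re.IGNORECASE),
    "sla_uptime": re.compile(
        r"uptime\s+of\s+(\d{2,3}(?:\.\d+)?)\s*%", re.IGNORECASE),
    "termination_notice_days": re.compile(
        r"terminat\w*\s+.*?(\d+)\s+days'?\s+notice", re.IGNORECASE),
}

def extract_terms(text: str) -> dict[str, str]:
    """Run each rule over the contract text and keep the first match."""
    found = {}
    for name, pattern in PATTERNS.items():
        match = pattern.search(text)
        if match:
            found[name] = match.group(1)
    return found

sample = (
    "This agreement renews on 2026-01-01. The provider guarantees an "
    "uptime of 99.9% and either party may terminate with 30 days' notice."
)
print(extract_terms(sample))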

Docs indexed: 128
Entities per doc: 34
Latency (ms): 87
Accuracy: 0.93
JSON proof

This panel shows either the latest health response or a saved example from a real run of the service.

{
  "status": "ok",
  "docs_indexed": 128,
  "entities_per_doc": 34,
  "latency_ms": 87,
  "accuracy": 0.93,
  "example_entities": [
    "renewal_date",
    "termination_for_cause",
    "sla_uptime"
  ],
  "note": "Sample output from a previous contract extraction run."
}