Roadmap

Where Stallari is going

An honest, phase-based view of what's been built, what's in progress, and where the platform is headed.

Developer preview beta — early access invitations now going out.

Phase 1–3

Foundation

Shipped

Swift harness, vault engine, dispatch engine, provider abstraction, trace store.

Phase 4

Intelligence layer

Shipped

Agent memory with importance decay, recall, and consolidation. Post-dispatch memory pipeline.

Phase 5

Native interfaces

Shipped

macOS app — published, notarised, Developer ID signed. Setup wizard, fleet sidebar, digest views, plugin management, and privacy controls.

  • Setup wizard with 9-step onboarding and vault detection
  • Marketplace browser with service, tier, and category filters
  • Privacy exclusion editors and content redaction
  • Secure Enclave credential store for providers and tools
  • Self-upgrade via GitHub Releases

Phase 6

Multi-device

Shipped

Run Stallari across macOS devices in a fleet. Manage agent traces, jobs, and schedules from anywhere.

  • Discover peers automatically via Tailscale or Bonjour (mDNS)
  • Enrol devices with a QR scan — per-peer tokens, individually revocable
  • Browse agent traces and coordinate dispatch across your fleet
  • Single-writer dispatch lease prevents duplicate runs

Phase 7

Productisation

Shipped

Plugin marketplace, Packs, commerce, identity, and developer preview documentation.

  • Plugin marketplace with conformance tiers — tools (MCPs) and packs (bundled agents, skills, and manifests)
  • 20+ open-source Blade MCPs (Home Assistant, Shopify, Xero, Paddle, Office 365, Gmail, Xbox Live, Todoist, Mastodon, iMessage, Cloudflare, and more)
  • 4 open-source community MCPs (Blender 3D, YouTube, and more)
  • Multi-contract plugins — a single MCP can declare conformance to multiple service contracts
  • Organisation packs — teams share private packs scoped to their org
  • Setup wizard — 9-step onboarding with vault detection, provider config, and plugin install
  • Developer ID signed and notarised — installs without Gatekeeper warnings
  • Binary distribution and in-app self-updates
  • Marketplace sign-in with GitHub and Apple
  • Paddle checkout and licence management — tips and paid community packs
  • Guide browser — in-app documentation and walkthroughs
  • Built-in identity provider — OIDC authentication, seat-based licensing, and role-based dispatch gating
  • Opt-in crash reporting and process guardian — open-source SDK with PII scrubbing and circuit breakers
  • Platform observability — stale binary detection, MCP health probes, subprocess orphan cleanup, and integration health heartbeats

Phase 8

Networking and service mesh

Shipped

Daemon-routed MCP over HTTP with Tailscale HTTPS and ACME certificates, bearer token auth, iCloud KV dispatch lease, and remote traces.

  • HTTPS with automatic ACME certificates (Tailscale Serve)
  • MCP routed through daemon HTTP with HMAC-SHA256 request signing
  • LM Studio — local LLM access over Tailscale HTTPS
  • Remote trace persistence with SQLite dedup
  • Cloudflare Tunnel and ACME certificate support — expose services without Tailscale, accommodating corporate network policies and the iOS single-VPN limitation
  • Networking settings — unified UI for managing exposed services and providers
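As a rough illustration of HMAC-SHA256 request signing of the kind mentioned above — the canonical string and header names here are invented for the sketch, not Stallari's wire format:

```python
import hmac
import hashlib
import time

def sign_request(token: bytes, method: str, path: str, body: bytes) -> dict:
    """Sign a request with HMAC-SHA256 over a canonical string
    (hypothetical scheme: header names are illustrative only)."""
    timestamp = str(int(time.time()))
    canonical = b"\n".join([method.encode(), path.encode(), timestamp.encode(), body])
    signature = hmac.new(token, canonical, hashlib.sha256).hexdigest()
    return {"X-Timestamp": timestamp, "X-Signature": signature}

def verify(token: bytes, method: str, path: str, body: bytes, headers: dict) -> bool:
    canonical = b"\n".join(
        [method.encode(), path.encode(), headers["X-Timestamp"].encode(), body]
    )
    expected = hmac.new(token, canonical, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, headers["X-Signature"])
```

Binding the timestamp into the signed string lets the daemon reject stale or replayed requests as well as tampered ones.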

Phase 9

Embedded dispatch engine

Active

Native agent workflows with multi-provider graph execution, five specialised methodologies, adaptive routing, and Pack-driven orchestration.

  • Multi-provider support — run workflows locally with LM Studio, on Anthropic, xAI, OpenAI-compatible, or Apple on-device models
  • 5 graph methodologies — map-reduce, critique-refine, plan-execute, gate, and consensus — with 102 skills across 12 packs
  • Adaptive routing — trace-informed confidence thresholds with few-shot exemplar injection
  • Provider probe caching — cold start reduced from 12 seconds to under 3 seconds
  • Memory discipline — resource limits, health monitoring, message trimming, and subprocess recycling
  • Pack-driven orchestration — define agent workflows as portable YAML manifests
  • Apple Foundation Models for on-device pre-classification (no API key needed)
  • Webhook-triggered dispatch for inbound events (GitHub, Xero, Home Assistant) with auto-wiring
  • Per-node model routing — assign different providers and tool allowlists to individual graph nodes
  • Privacy enforcement — content exclusions, domain-scoped tool filtering, and per-tool allowlists
  • iOS companion app
  • Full native dispatch without CLI subprocess fallback
  • Streaming tool-use in embedded backends
  • MLX Swift — native local model execution without external servers; Nemotron 3 (Nvidia), Qwen 3.5 (Alibaba), Phi-4 (Microsoft), Llama 3.1 (Meta)
  • ISO/IEC 42001 compliance posture — alignment with the EU AI Act and related AI governance standards
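To make Pack-driven orchestration concrete, a pack manifest along these lines could describe a workflow graph. The field names below are invented for illustration and are not Stallari's published schema:

```yaml
# Hypothetical pack manifest — field names are illustrative only
name: weekly-digest
version: 0.1.0
methodology: map-reduce          # one of the five graph methodologies
nodes:
  - id: collect
    provider: lmstudio           # per-node model routing
    skill: fetch-sources
    tools: [gmail, todoist]      # per-node tool allowlist
  - id: summarise
    provider: anthropic
    skill: summarise-items
  - id: compose
    provider: apple-on-device
    skill: write-digest
schedule: "0 7 * * MON"
```

Because the manifest is plain YAML, a pack can be versioned, shared through the marketplace, and run unchanged on any device in the fleet.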

Phase 10

Open Memory

Horizon

Your intelligence is yours. Full-fidelity export of every memory, association, and learning your agents have built — in open formats, with documented schemas, importable anywhere.

  • One-command memory export — Markdown, JSON, or SQLite, with full association graph
  • Documented memory schema — published spec so other tools can read your memories
  • Memory import — bring learnings from other systems into Stallari
  • Association graph visualisation — explore what your agents have learned and how concepts connect
  • Memory portability guarantee — if you leave, your intelligence leaves with you
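A full-fidelity export along the lines described might serialise each memory as a self-describing record. This shape is a sketch of the idea, not the documented schema:

```json
{
  "id": "mem-2a9f",
  "content": "Prefers digests before 08:00 local time",
  "importance": 0.72,
  "decay_half_life_days": 30,
  "created_at": "2025-01-14T07:02:11Z",
  "associations": [
    { "target": "mem-118c", "relation": "refines", "weight": 0.6 }
  ]
}
```

Carrying importance, decay, and associations in the record itself is what makes the export importable elsewhere without Stallari in the loop.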

Phase 11

Federation and beyond

Horizon

Engage with other users' Stallari clusters. Shared private inferencing and coordination between independent deployments.

  • Multi-vault support
  • Shared private inferencing with LM Link — share idle local GPU capacity with friends and family
  • Inter-user coordination protocols
