Capabilities

Everything under the hood

Stallari is a complete AI operations platform. Here's what it does and how it works — in depth.

Core

Autonomous Dispatch

Agents that run without you

Stallari doesn't wait for you to open a chat window. Agents fire on schedule — your inbox is triaged at 6am, your daily brief lands before coffee. They fire on events — a file lands in your vault inbox and classification starts immediately. Or you trigger them on demand from the menu bar.

Every dispatch is logged as a trace with full tool call history, token counts, cost tracking, and classification results. When something goes wrong, you see exactly what happened and why.

  • Schedule-based dispatch (cron-like intervals)
  • Event-driven triggers (filesystem watch, webhook)
  • On-demand from menu bar or chat
  • Crash recovery with saga checkpointing
  • Structured outbox for deferred actions
  • Cost tracking per dispatch and per agent
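To make the trace concrete, here is a minimal sketch of what a dispatch trace record could hold. Field names, the per-1k-token rate, and the `record` helper are illustrative assumptions, not Stallari's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ToolCall:
    name: str
    duration_ms: int
    ok: bool

@dataclass
class DispatchTrace:
    agent: str
    trigger: str                     # "schedule" | "event" | "on-demand"
    started: datetime
    tool_calls: list[ToolCall] = field(default_factory=list)
    tokens_in: int = 0
    tokens_out: int = 0
    cost_usd: float = 0.0

    def record(self, call: ToolCall, tokens_in: int, tokens_out: int,
               rate_per_1k: float) -> None:
        """Accumulate one tool call's token counts and cost onto the trace."""
        self.tool_calls.append(call)
        self.tokens_in += tokens_in
        self.tokens_out += tokens_out
        self.cost_usd += (tokens_in + tokens_out) / 1000 * rate_per_1k

trace = DispatchTrace("inbox-triage", "schedule", datetime.now(timezone.utc))
trace.record(ToolCall("classify_message", 42, True),
             tokens_in=900, tokens_out=100, rate_per_1k=0.002)
print(f"{trace.agent}: {len(trace.tool_calls)} calls, ${trace.cost_usd:.4f}")
```

A trace like this is what lets the system answer "what happened and why" after the fact: every call, token, and cent is attributable to a specific dispatch.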

feat-dispatch-mixed.png

Core

Multi-Agent Architecture

A team, not a chatbot

Stallari runs a hierarchy of specialised agents. An orchestrator routes work to domain operators — one for your knowledge base, another for email, another for scheduling, another for your home. Each agent has its own state, its own scope, and its own memory.

Agents communicate through structured handoffs, not chat relay. A writer registry prevents conflicts when multiple agents need to modify the same resource. The result is a coordinated team that divides complex work without stepping on each other.

  • Orchestrator → operator hierarchy
  • Domain-scoped agents with isolated state
  • Structured inter-agent handoffs
  • Writer registry for conflict prevention
  • Decision ledger — every action logged
  • Human-in-the-loop correction at any point
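The writer registry can be sketched in a few lines. This is an illustrative model of the idea (one live writer per resource, everyone else is refused), not Stallari's implementation:

```python
class WriterRegistry:
    """Grants at most one writer per resource at a time."""

    def __init__(self) -> None:
        self._owners: dict[str, str] = {}

    def acquire(self, resource: str, agent: str) -> bool:
        # First agent to ask becomes the owner; re-acquisition by the
        # owner succeeds, any other agent is refused.
        owner = self._owners.setdefault(resource, agent)
        return owner == agent

    def release(self, resource: str, agent: str) -> None:
        if self._owners.get(resource) == agent:
            del self._owners[resource]

reg = WriterRegistry()
assert reg.acquire("vault/inbox/note.md", "knowledge-operator")
assert not reg.acquire("vault/inbox/note.md", "email-operator")  # conflict prevented
reg.release("vault/inbox/note.md", "knowledge-operator")
assert reg.acquire("vault/inbox/note.md", "email-operator")      # free to write now
```

The point of the mechanism: two operators can work the same vault concurrently, but only one of them can be mutating a given file at any moment.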

feat-agents-handoff.png

Intelligence

Stallari Memory

Intelligence that compounds — and stays yours

Every interaction teaches your agents something. Observations, decisions, corrections, commitments, preferences — each becomes a memory stored as a Markdown file in your vault and indexed in an open SQLite database.

Memories decay over time unless rehearsed. Related memories strengthen their associations through Hebbian reinforcement. Periodic consolidation merges related memories into higher-order insights. The entire system is transparent, configurable, and exportable.

  • Seven memory types (observation, decision, correction, commitment, episodic, semantic, preference)
  • Hebbian associations — co-recalled memories strengthen bonds
  • Time-based decay with rehearsal bumps
  • Three tiers: sacred (never decay), active, cooling
  • Periodic consolidation of related memories
  • Domain isolation with cross-agent recall
  • Full export: Markdown + SQLite + association graph
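The decay and reinforcement rules above can be sketched with two small functions. The exponential half-life and the learning rate here are illustrative values, not Stallari's tuned parameters:

```python
def strength(base: float, hours_since_rehearsal: float,
             half_life_hours: float = 72.0, sacred: bool = False) -> float:
    """Time-based decay: strength halves every half-life unless the memory
    is sacred. A rehearsal resets hours_since_rehearsal to zero (the bump)."""
    if sacred:
        return base
    return base * 0.5 ** (hours_since_rehearsal / half_life_hours)

def hebbian_update(weight: float, rate: float = 0.1) -> float:
    """Co-recall strengthens the association, saturating toward 1.0."""
    return weight + rate * (1.0 - weight)

assert strength(1.0, 72.0) == 0.5            # one half-life later
assert strength(1.0, 1000.0, sacred=True) == 1.0  # sacred memories never decay

w = 0.0
for _ in range(3):                           # three co-recalls of two memories
    w = hebbian_update(w)
print(round(w, 3))
```

Rehearsal keeps useful memories strong, decay quietly retires stale ones, and repeated co-recall is what builds the association graph.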

feat-memory-list.png

Infrastructure

Fleet Coordination

Your machines, one system

Stallari discovers your devices automatically. On the local network, Bonjour finds machines in milliseconds. Over the internet, Tailscale provides an encrypted mesh with identity built in — no custom auth, no certificates to manage, no ports to open.

Cross-host dispatch locks ensure the same job never runs twice. A Mac mini handles overnight batch work. A laptop runs on-demand requests. They share state through your vault and coordinate leases via iCloud key-value storage.

  • Bonjour zero-config LAN discovery
  • Tailscale WAN mesh (encrypted, identity-aware)
  • Cross-host dispatch locks (first machine wins)
  • iCloud KV lease coordination
  • Fleet health dashboard with per-machine status

feat-fleet-status.png

Ecosystem

Marketplace

Install a workflow in one click

The Stallari Marketplace is a curated catalogue of Packs — pre-built agent workflows for common tasks. Inbox triage, daily briefing, email drafting, property management, infrastructure monitoring. Install with one click and the dispatch system handles the rest.

Packs use a two-axis trust model: author type (first-party, certified partner, verified developer, community) crossed with readiness level (production, beta, experimental). You always know what you're installing and who built it.

  • One-click Pack installation
  • Two-axis trust model (author type × readiness)
  • Sealed pack security inspection
  • GPG-signed pack verification
  • Pack detail pages with capability listing
  • Revenue share for pack authors (coming)

feat-marketplace-detail.png

Infrastructure

Local Inference

Your machine is the model server

Every Mac with Apple Silicon is a capable inference machine. A 32GB MacBook runs Nemotron 3 30B or Llama 4 Scout comfortably. Even a 16GB iMac handles Phi 4 or Gemma 3 without breaking a sweat. Stallari runs models directly on your hardware using MLX — Apple's native machine learning framework, optimised for the unified memory architecture in every M-series chip.

Multi-axis provider routing lets you send sensitive content to local models while using cloud providers for tasks that need more capability. A certified model manifest with SHA-256 verification ensures you're running the exact model you expect. LM Studio is also supported as an alternative runtime.

  • Native MLX inference on Apple Silicon (no external server)
  • Runs 7B–30B models on 16–32GB machines
  • LM Studio supported as alternative runtime
  • Apple Foundation Models support (when available)
  • Certified model manifest with SHA-256 verification
  • Multi-axis provider routing (content class × source exclusions)
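Multi-axis routing boils down to filtering a provider table on both axes. The provider names, content classes, and manifest-check helper below are illustrative, not Stallari's actual configuration:

```python
import hashlib

def route(content_class: str, excluded_sources: set[str],
          providers: list[dict]) -> str:
    """Pick the first provider allowed for this content class and not
    excluded by the source's policy."""
    for p in providers:
        if content_class in p["classes"] and p["name"] not in excluded_sources:
            return p["name"]
    raise LookupError("no eligible provider")

providers = [
    {"name": "local-mlx", "classes": {"sensitive", "general"}},
    {"name": "cloud-api", "classes": {"general"}},
]
assert route("sensitive", set(), providers) == "local-mlx"       # stays on-device
assert route("general", {"local-mlx"}, providers) == "cloud-api" # source opts out of local

def verify_model(weights: bytes, expected_sha256: str) -> bool:
    """Manifest check: downloaded weights must hash to the certified digest."""
    return hashlib.sha256(weights).hexdigest() == expected_sha256
```

Sensitive content simply never matches a cloud provider, so the privacy guarantee is enforced by the routing table rather than by convention.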

feat-providers-local.png

Trust

Privacy & Security

Local-first is not a marketing term

Stallari runs entirely on your hardware. There is no Stallari cloud, no telemetry server, no account system that phones home. Your vault, your models, your dispatch traces — all local. Encryption at rest protects your vault index and memory database.

The app is code-signed with a Developer ID certificate, notarised by Apple, and distributed as a verified DMG. Sealed packs undergo security inspection before installation. Provider routing gives you granular control over which data reaches which model.

  • No cloud required — runs entirely on your machines
  • Encryption at rest for vault index and memory database
  • Code-signed and Apple-notarised
  • Sealed pack security inspection pipeline
  • Multi-axis provider routing for data classification
  • Zero-trust default — explicit opt-in to external services

Foundation

Vault-Native

Every artefact is a file you can read

Stallari doesn't maintain a separate database you can't see. It works on a vault — a folder of plain text files on your Mac. Dispatch traces are Markdown. Memories are Markdown. Digests, decision ledger entries, agent state — all Markdown files with structured YAML frontmatter. Open them in any text editor, any time.

This isn't just a storage choice. It means you can search your agent's work with any tool you like. Browse it in Obsidian with queries and graph view. Back it up with git. Read it on your phone. Grep it from the terminal. The system is transparent by architecture, not by policy.

  • A vault is just a folder of Markdown files
  • All artefacts stored as Markdown + YAML frontmatter
  • Decision ledger for full audit trail
  • Works with Obsidian, any text editor, or the command line
  • Git-compatible (version control your agent's work)
  • Human-readable by default, machine-queryable by design
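A memory artefact, for example, might look like this. The field names and values are illustrative, not Stallari's exact frontmatter schema:

```markdown
---
type: preference
tier: active
created: 2025-06-14
strength: 0.82
associations: [mem-daily-brief-timing, mem-morning-schedule]
---

Prefers the daily brief before 7am; flagged the 9am version as too late.
```

Because it is just Markdown with YAML frontmatter, the same file is readable to you in any editor and queryable by tools that understand frontmatter.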

feat-vault-obsidian.png

Ready to try it?

10 concurrent dispatches, free forever. No credit card required.
