Agentic Behavior Reference
Adama is not a passive request-response system. Documents are living virtual machines that act on their own -- they transition states on timers, execute cron jobs, propagate reactive changes, call external services, and orchestrate multi-turn AI conversations. The cluster infrastructure around them performs garbage collection, dead node detection, capacity rebalancing, and automatic document sleep/wake cycles.
I think of it as: you write the rules, and the platform runs the show. This chapter catalogs everything it does autonomously so nothing surprises you at 3 AM.
Document-Level Autonomous Behavior
These actions happen within a single document's lifecycle without any external trigger from a connected client.
State Machine Auto-Execution
When a document transitions to a new state via the transition statement, the runtime automatically invokes that state's code block in a new transaction after the current transaction commits.
record Task {
public string name;
public string status;
}
table<Task> tasks;
#start {
// do initial setup
transition #waiting;
}
#waiting {
// the runtime automatically enters this state
// after #start completes
}
Delayed transitions schedule automatic execution after a specified number of seconds. The timer is durable -- it survives server restarts. The platform persists the target state and scheduled time, and the document will be reloaded to execute the transition even if the server was down. That last part is worth emphasizing: the document will wake up and run, even if nobody is connected.
#processing {
// do some work, then check again in 60 seconds
transition #processing in 60;
}
What to expect: After any mutation that calls transition, you'll see a follow-up transaction for the new state. Delayed transitions cause the document to wake up at the scheduled time, even from a cold/unloaded state.
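The durability claim above is the important part: a delayed transition is a persisted (target state, wake time) pair, not an in-memory timer. Here's a minimal Python sketch of that idea — illustrative only, with invented names (`DurableTransitionStore`, `schedule`, `due`), not Adama's actual implementation:

```python
import time

class DurableTransitionStore:
    """Illustrative sketch of a durable delayed transition: the target
    state and absolute wake time are persisted, so they survive restarts
    and can be replayed after the document is reloaded."""

    def __init__(self):
        self.pending = {}  # doc_key -> (target_state, wake_at_epoch_seconds)

    def schedule(self, doc_key, target_state, delay_seconds, now=None):
        now = time.time() if now is None else now
        # persist the target state and when to wake the document
        self.pending[doc_key] = (target_state, now + delay_seconds)

    def due(self, now=None):
        """Documents whose timers have fired; the platform would reload
        each one and execute the stored transition."""
        now = time.time() if now is None else now
        return [(k, state) for k, (state, at) in self.pending.items() if at <= now]

store = DurableTransitionStore()
store.schedule("doc-1", "#processing", 60, now=1000.0)
print(store.due(now=1059.0))  # []: timer has not fired yet
print(store.due(now=1060.0))  # [('doc-1', '#processing')]: wake and transition
```

Because the wake time is absolute and persisted, a server restart between `schedule` and `due` changes nothing — the timer fires on reload.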
Cron Job Execution
Documents with @cron annotations automatically execute their cron bodies when the scheduled time arrives.
Schedule types:
// runs once per day at 2:30 AM
@cron cleanup daily 2:30 {
// purge old records
(iterate tasks where status == "done").delete();
}
// runs once per hour at minute 0
@cron refresh hourly 0 {
// poll external data
}
// runs once per month on the 1st
@cron billing monthly 1 {
// generate invoice
}
What to expect: Cron jobs run autonomously. A document with an @cron hourly 0 job will be loaded into memory at the top of every hour, execute the cron body, persist changes, and potentially unload again. All of this happens with zero client interaction. The document might be in memory for only a few seconds -- loaded, work done, unloaded. Cron timers are durable and survive server restarts.
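The next-fire computation behind these schedules is easy to reason about. A Python sketch of how a durable cron timer might compute its next wake time for daily and hourly jobs — illustrative, not the platform's actual scheduler:

```python
from datetime import datetime, timedelta

def next_daily(now, hour, minute):
    """Next occurrence of hour:minute strictly after `now`."""
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=1)  # today's slot already passed
    return candidate

def next_hourly(now, minute):
    """Next occurrence of :minute strictly after `now`."""
    candidate = now.replace(minute=minute, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(hours=1)
    return candidate

now = datetime(2024, 5, 1, 3, 0)
print(next_daily(now, 2, 30))  # 2024-05-02 02:30 -- 2:30 AM already passed today
print(next_hourly(now, 0))     # 2024-05-01 04:00 -- top of the next hour
```

The platform persists that computed wake time, which is why a document can be cold at the top of the hour and still run its cron body on schedule.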
Reactive Propagation
When any reactive value is mutated, dirty flags propagate up and subscriber invalidation cascades to dependents. This is the mechanism that makes Adama reactive -- changes flow automatically without you writing update calls.
public int x;
public int y;
public formula sum = x + y;
public formula doubled = sum * 2;
When x changes, sum is automatically invalidated and recomputed. Since doubled depends on sum, it's also invalidated and recomputed. Connected clients receive updates for all affected values.
What to expect: A single field mutation can trigger a cascade of recomputations across formulas and dependent views. You never need to manually propagate changes. That's the whole point.
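To make the cascade concrete, here's a minimal dependency-graph sketch in Python — a toy model of dirty-flag propagation, not Adama's reactive engine:

```python
class ReactiveGraph:
    """Toy model of reactive propagation: mutating a field recomputes
    every formula that (transitively) depends on it."""

    def __init__(self):
        self.deps = {}    # formula name -> (set of inputs it reads, compute fn)
        self.values = {}

    def formula(self, name, inputs, compute):
        self.deps[name] = (set(inputs), compute)

    def set(self, field, value):
        self.values[field] = value
        self._recompute(field)

    def _recompute(self, changed):
        # cascade: any formula reading `changed` is recomputed, which may
        # in turn invalidate formulas that read *it*
        for name, (inputs, compute) in self.deps.items():
            if changed in inputs:
                self.values[name] = compute(self.values)
                self._recompute(name)

g = ReactiveGraph()
g.values.update({"x": 1, "y": 2})
g.formula("sum", ["x", "y"], lambda v: v["x"] + v["y"])
g.formula("doubled", ["sum"], lambda v: v["sum"] * 2)
g.set("x", 3)                # one mutation...
print(g.values["doubled"])   # 10 -- x -> sum -> doubled cascaded automatically
```

This mirrors the `sum`/`doubled` example above: one write to `x`, two downstream recomputations, zero manual update calls.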
Automatic Delta Sync to Clients
After every transaction that modifies document state, the runtime computes per-client JSON diffs (deltas) and pushes them to all connected clients.
public int score;
// only the owner can see their secret
private int secret;
bubble my_view = secret + score;
Each connected client receives only the fields they're authorized to see. Privacy policies (private, bubble, viewer-dependent expressions) are evaluated per-client, and deltas contain only the fields that actually changed.
What to expect: After any state change -- message, cron, state transition, service delivery -- all connected clients receive JSON patches with only what changed from their perspective. No application code needed for this; it just happens.
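The mechanism is conceptually two steps: filter the document through each viewer's privacy policies, then diff against what that viewer last saw. A hedged Python sketch (the `privacy` predicate map is an illustrative stand-in for private/bubble evaluation):

```python
def visible_view(state, viewer, privacy):
    """Filter a document snapshot down to what one viewer may see.
    `privacy` maps field -> predicate(viewer); fields without a policy
    are public in this toy model."""
    return {k: v for k, v in state.items() if privacy.get(k, lambda w: True)(viewer)}

def delta(old_view, new_view):
    """A patch containing only the fields that changed for this viewer."""
    return {k: v for k, v in new_view.items() if old_view.get(k) != v}

privacy = {"secret": lambda who: who == "owner"}
before = {"score": 1, "secret": 7}
after  = {"score": 2, "secret": 9}

owner_patch = delta(visible_view(before, "owner", privacy),
                    visible_view(after, "owner", privacy))
guest_patch = delta(visible_view(before, "guest", privacy),
                    visible_view(after, "guest", privacy))
print(owner_patch)  # {'score': 2, 'secret': 9}
print(guest_patch)  # {'score': 2} -- the guest never sees 'secret' at all
```

Two clients, one transaction, two different patches — each containing only what that client is authorized to see and what actually changed.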
Async Message Queue Processing
Messages sent to channels are queued and automatically processed during the document's invalidation cycle.
message ChatMessage {
string body;
}
channel send(principal sender, ChatMessage msg) {
// messages are queued and processed automatically
// you don't need to poll or drain the queue
}
What to expect: Messages sent while a state machine is blocking (awaiting input) are queued (up to 256 items) and processed when the document next runs its invalidation cycle. No polling, no manual draining.
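The queue-then-drain model can be sketched in a few lines of Python — illustrative only, though the 256-item bound comes straight from the text above:

```python
from collections import deque

class ChannelQueue:
    """Toy model of channel queueing: sends enqueue (bounded), and the
    invalidation cycle drains everything that accumulated."""
    LIMIT = 256  # per the queue bound described above

    def __init__(self):
        self.pending = deque()

    def send(self, msg):
        if len(self.pending) >= self.LIMIT:
            raise OverflowError("channel queue full")
        self.pending.append(msg)

    def drain(self, handler):
        """Process every queued message in order; returns how many ran."""
        count = 0
        while self.pending:
            handler(self.pending.popleft())
            count += 1
        return count

q = ChannelQueue()
q.send({"body": "hi"})
q.send({"body": "there"})
seen = []
print(q.drain(seen.append))  # 2 -- both messages processed in one cycle
```

From your channel handler's point of view, messages simply arrive in order; the queueing and draining are invisible.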
Service Call Result Delivery
When a document calls an external service (Stripe, HTTP, Twilio, etc.), the call is managed asynchronously with deduplication and automatic result delivery.
service weather {
class WeatherRequest {
string city;
}
class WeatherResponse {
double temperature;
}
method<WeatherRequest, WeatherResponse> get;
}
public formula current_weather =
weather.get(@no_cache, {city: "NYC"});
What to expect: External service calls are fire-and-forget from the document's perspective. Results arrive asynchronously and the document automatically re-evaluates with the new data. Identical service calls within the same evaluation cycle are deduplicated -- so you don't accidentally hammer an external API.
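The deduplication behavior is worth internalizing: if three formulas all request the same weather call during one evaluation, only one outbound request goes out. A Python sketch of the idea (names like `EvaluationCycle` are invented for illustration):

```python
class EvaluationCycle:
    """Sketch of per-cycle deduplication: identical service calls made
    while evaluating formulas collapse into one outbound request."""

    def __init__(self, transport):
        self.transport = transport
        self.inflight = {}  # (service, method, frozen args) -> request handle

    def call(self, service, method, args):
        key = (service, method, tuple(sorted(args.items())))
        if key not in self.inflight:
            # first occurrence this cycle: actually dispatch
            self.inflight[key] = self.transport(service, method, args)
        return self.inflight[key]  # later occurrences reuse the handle

sent = []
cycle = EvaluationCycle(lambda s, m, a: sent.append((s, m, a)) or len(sent))
cycle.call("weather", "get", {"city": "NYC"})
cycle.call("weather", "get", {"city": "NYC"})  # deduplicated, no new request
cycle.call("weather", "get", {"city": "LA"})   # different args, new request
print(len(sent))  # 2 outbound requests, not 3
```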
Replication Engine
Documents can replicate data to external services automatically. The platform tracks each replicated value, detects changes via content hashing, and manages create/update/delete operations with automatic retry.
What to expect: Replicated values sync automatically. If a service goes down, the engine backs off exponentially and retries. When the service recovers, pending creates/updates/deletes resume without intervention. You write the replication declaration; the platform handles the rest.
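"Backs off exponentially" means the retry delay doubles per failed attempt, typically up to some cap. A sketch of such a schedule — the base and cap here are assumptions for illustration, not the platform's actual constants:

```python
def backoff_delays(base=1.0, cap=60.0, attempts=6):
    """Exponential backoff schedule: each retry waits twice as long as
    the last, clamped to `cap` seconds. Base/cap values are illustrative."""
    delays = []
    delay = base
    for _ in range(attempts):
        delays.append(min(delay, cap))
        delay *= 2
    return delays

print(backoff_delays())  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
```

The practical consequence: a briefly flapping service costs a few quick retries, while a long outage settles into infrequent polling until the service recovers.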
AI Agent Tool Loops
Documents can define agent blocks that orchestrate multi-turn LLM conversations with tool use. The runtime manages the entire ask -> tool call -> tool result -> follow-up loop autonomously. See the Agents language reference for the full syntax.
message AddTaskInput { string name; }
message AddTaskOutput { bool success; }
message TopicInput { string topic; }
message TopicOutput { bool ack; }
agent Helper {
instructions = "You are a helpful task manager.";
model = "gpt-4o";
temperature = 0.7;
max_tokens = 1024;
max_tool_rounds = 5;
mutable string last_topic;
@description("Add a new task")
tool<AddTaskInput, AddTaskOutput> add_task {
tasks <- {name: request.name, status: "todo"};
return {success: true};
}
@description("Set the current topic")
mutating tool<TopicInput, TopicOutput> set_topic {
last_topic = request.topic;
return {ack: true};
}
}
session<Helper> chat;
When a client sends an .ask() to an agent session, the document autonomously orchestrates the conversation:
- The user message is appended to the session and sent to the LLM with available tool schemas
- If the LLM responds with tool calls, the runtime dispatches each tool within the document's transaction model
- Tool results are appended and another LLM call fires automatically
- The loop continues until the LLM finishes or max_tool_rounds is reached
- The entire conversation state -- including processing status and in-flight tool names -- streams to connected clients in real time via delta sync
Safety guarantees: Tools can't mutate document state (compiler enforced). mutating tool can only write agent mutable fields. max_tool_rounds is required -- the compiler rejects agents without it, because unbounded LLM tool loops are a recipe for disaster. token_budget can cap total consumption. If an ask fails, all mutable changes are discarded atomically.
Crash recovery: Session state (history, mutables, queue) is fully persistent. On document restore, sessions stuck in processing state are auto-reset after staleness_timeout seconds (default 60).
What to expect: Each tool call executes within the document's transaction model, mutations are persisted, and results stream to connected clients. The agent can perform multiple rounds of tool use before completing -- all without further client input.
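The loop above can be sketched in a few lines of Python. Everything here is illustrative — the `llm(history)` contract and message shapes are invented stand-ins, not the platform's API — but the control flow matches the steps described:

```python
def run_ask(llm, tools, user_message, max_tool_rounds):
    """Sketch of the ask -> tool call -> tool result -> follow-up loop.
    `llm(history)` returns either {"text": ...} when the model is done,
    or {"tool_calls": [...]} requesting tool dispatch."""
    history = [{"role": "user", "content": user_message}]
    for _ in range(max_tool_rounds):
        reply = llm(history)
        if "tool_calls" not in reply:
            return reply["text"]  # model finished; no more rounds needed
        for call in reply["tool_calls"]:
            result = tools[call["name"]](call["args"])  # runs in-document
            history.append({"role": "tool", "name": call["name"], "result": result})
        # loop: another LLM call fires automatically with the tool results
    return None  # round budget exhausted; the runtime stops the loop here

# Scripted "LLM": one round of tool use, then a final answer.
calls = iter([
    {"tool_calls": [{"name": "add_task", "args": {"name": "ship it"}}]},
    {"text": "Task added."},
])
tools = {"add_task": lambda args: {"success": True}}
answer = run_ask(lambda h: next(calls), tools, "add a task", max_tool_rounds=5)
print(answer)  # Task added.
```

Note how max_tool_rounds bounds the loop unconditionally — this is the property the compiler insists on when it rejects agents without it.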
Debug Log Publishing
Documents can publish debug messages via the @debug statement. When external subscribers are connected, the runtime formats and delivers messages in real time. When nobody's listening, the statement is a no-op with zero overhead.
@debug("processing round {0}, items={1}", round, items.size());
Debug subscriptions are gated by a debug policy in the @static block. If no policy is defined, subscriptions are denied by default. See the Debug Logging language reference for details.
What to expect: Debug messages flow to subscribers in real time during document execution. Zero cost when nobody is listening.
Document Lifecycle Hooks
The runtime automatically invokes lifecycle methods at key moments:
| Hook | When Invoked | Purpose |
|---|---|---|
| @construct | Document creation | Initialize state |
| @load | Document loaded from storage | Post-load setup |
| @connected | Client connects | Accept/reject connections |
| @disconnected | Client disconnects | Cleanup client state |
| @asset | Asset (file) attached | React to uploads |
| @can_attach | Before asset attachment | Gate upload permission |
| @delete | Deletion requested | Accept/reject deletion |
@construct {
// called once when the document is created
score = 0;
}
@connected (who) {
// return true to allow the connection
return true;
}
@disconnected (who) {
// cleanup when a client disconnects
}
@delete {
// return true to allow deletion
return tasks.size() == 0;
}
What to expect: These hooks fire automatically at the appropriate lifecycle moments; you don't call them manually. When a new client connects, the runtime invokes @connected, and if it returns true, the client begins receiving delta updates.
Auto-Invalidation Scheduling
Documents can declare how often they need invalidation. The runtime automatically schedules invalidation callbacks at the requested interval, triggering cron checks, queue processing, and reactive recomputation.
What to expect: If a document declares a 5-second invalidation interval, the runtime invokes invalidation every 5 seconds. This is transparent and requires no application code.
Document Self-Deletion
A document can destroy itself by calling @destroy. Gone. Permanently removed from storage and memory.
@cron expire daily 0:00 {
if (is_expired) {
@destroy;
}
}
What to expect: Documents can self-destruct. If a cron job or state machine determines the document is no longer needed, it can delete itself without external instruction. This is useful for ephemeral sessions and temporary data.
Platform-Level Autonomous Behavior
These actions are performed by the infrastructure to manage the cluster, documents, and resources. You don't configure or trigger them -- they just run.
Document Sleep/Wake Cycle
Inactive documents are automatically unloaded from memory to conserve resources. Documents with pending work (cron jobs, delayed state transitions) register wake-up times so the platform reloads them when needed.
How it works:
- Every ~30-45 seconds, the platform checks each loaded document's inactivity timer
- Documents idle for more than 120 seconds (default) are unloaded from memory
- Before unloading, the platform checks for pending cron jobs or delayed transitions
- If pending work exists, a wake-up time is registered (persisted in the database)
- At the scheduled time, the document is automatically reloaded
What to expect: Documents come and go from memory automatically. A document with a daily cron job may be in memory for only a few seconds per day -- loaded, cron executed, changes persisted, unloaded. This is all transparent; you don't need to think about it unless you're debugging memory usage.
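The sweep described above — check idle time, register a wake-up for pending work, unload — can be sketched as follows. This is a toy model with invented names, not the platform's code; the 120-second idle default comes from the text:

```python
def sweep(documents, now, idle_limit=120.0):
    """Toy model of the periodic unload check: idle documents are
    evicted, but ones with pending cron/timer work register a persisted
    wake-up time first so the platform can reload them later."""
    unloaded, wakeups = [], []
    for doc in documents:
        if now - doc["last_activity"] > idle_limit:
            if doc.get("next_timer") is not None:
                wakeups.append((doc["key"], doc["next_timer"]))  # persisted
            unloaded.append(doc["key"])
    return unloaded, wakeups

docs = [
    {"key": "chat", "last_activity": 900.0},                      # recently active
    {"key": "report", "last_activity": 0.0, "next_timer": 3600},  # idle, has cron
    {"key": "scratch", "last_activity": 0.0},                     # idle, no work
]
unloaded, wakeups = sweep(docs, now=1000.0)
print(unloaded)  # ['report', 'scratch'] -- both evicted from memory
print(wakeups)   # [('report', 3600)] -- but 'report' will be reloaded on time
```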
Overlord Agents
In clustered deployments, the Overlord node runs multiple autonomous agents that perform cluster-wide maintenance. Each runs on a periodic schedule, independently.
Garbage Collector
Schedule: Every 60-120 seconds
Finds documents flagged for asset garbage collection, identifies orphaned assets, and deletes them from cloud storage.
What to expect: After assets are removed from a document (deleting an uploaded image, for example), the garbage collector eventually reclaims the storage. This is not immediate -- expect 1-2 minutes of latency.
Space Deletion Bot
Schedule: Every 30-60 seconds
Finds spaces marked for deletion and systematically removes all their documents, then the space record itself.
What to expect: Space deletion is eventual. After a space is marked for deletion, the bot begins cleanup within one 30-60 second cycle; spaces with many documents may take multiple cycles to remove fully.
Dead Node Detector
Schedule: Every 30-60 seconds
Scans for nodes that haven't reported a heartbeat within 15 minutes and flags them.
What to expect: If a node dies, it'll be detected within 15 minutes. The sentinel_behind metric can trigger alerts in your monitoring system.
Storage Reporter
Schedule: Every 5 minutes
Queries storage inventory for per-space usage and sends billing records.
What to expect: Storage billing data updates every 5 minutes. This is the source of truth for per-space resource consumption.
Prometheus Target Maker
Schedule: Every ~500ms
Watches for topology changes and generates a targets.json file for Prometheus service discovery.
What to expect: Prometheus targets update within 500ms of topology changes. The file is only rewritten when endpoints actually change.
Gossip Dumper
Schedule: Every ~500ms
Generates an HTML dump of the cluster gossip state for debugging and monitoring.
Storage Reconciliation
Schedule: One-time run, 60 seconds after startup
Compares the database document directory against cloud backups, logging any discrepancies.
What to expect: After a node starts, expect a one-time reconciliation pass that may take several minutes for large deployments.
Gossip Protocol
All nodes in the cluster continuously exchange topology information via a gossip protocol. Nodes discover each other, register their roles and ports, and propagate membership changes.
What to expect: New nodes are discovered within 1-2 seconds. Node departures are detected when gossip fails and the node stops appearing in exchanges. No manual service discovery configuration needed.
Routing Table
The routing table automatically maps document keys to backend hosts. As backends report their document inventory via gossip, the routing table updates.
What to expect: Requests are automatically routed to the correct backend. When a backend joins or leaves, routing updates within seconds. No manual routing configuration needed.
Capacity Management
Each backend node monitors CPU and memory usage, automatically taking protective actions when thresholds are crossed.
CPU thresholds:
| Threshold | Action |
|---|---|
| 75% | Add capacity (deploy hot spaces to new hosts) |
| 85% | Rebalance documents across region |
| 97% | Reject new document connections |
| 98% | Reject existing connections |
| 99% | Reject messages |
Memory thresholds:
| Threshold | Action |
|---|---|
| 80% | Force garbage collection |
| 85% | Add capacity |
| 90% | Rebalance |
| 92% | Reject new connections |
| 95% | Reject existing connections |
| 98% | Reject messages |
Every ~90 seconds the platform also offloads low-activity spaces to reduce load on busy nodes.
What to expect: Under normal load, the capacity agent is invisible. Under heavy load, it autonomously sheds traffic, requests more capacity, and rebalances. In extreme cases (99% CPU), it will reject incoming messages to prevent total node failure. All decisions are automatic. The system protects itself.
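The CPU table above is effectively a highest-threshold-wins decision. A sketch of that mapping (the action names are shorthand for the table rows, and this is an illustration of the decision, not the platform's code):

```python
CPU_ACTIONS = [  # (threshold %, action), checked highest first
    (99, "reject_messages"),
    (98, "reject_existing_connections"),
    (97, "reject_new_connections"),
    (85, "rebalance"),
    (75, "add_capacity"),
]

def cpu_action(percent):
    """Map a CPU reading to the most severe protective action whose
    threshold it crosses, per the table above."""
    for threshold, action in CPU_ACTIONS:
        if percent >= threshold:
            return action
    return None  # normal load: the capacity agent stays invisible

print(cpu_action(80))    # add_capacity
print(cpu_action(97))    # reject_new_connections
print(cpu_action(99.5))  # reject_messages
print(cpu_action(50))    # None
```

The memory table follows the same shape with its own thresholds (80% force-GC through 98% reject-messages).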
MCP (Model Context Protocol) Interface
The web server exposes a WebSocket endpoint at /~mcp implementing JSON-RPC 2.0 for AI agent integration. External AI systems can discover tools, resources, and prompts provided by the platform.
Supported methods: initialize, tools/list, tools/call, resources/list, resources/read, prompts/list, prompts/get, ping
What to expect: MCP-compatible AI clients connected to /~mcp can autonomously discover and use tools exposed by the platform.
What Runs Without You
| Behavior | Trigger | Frequency | Scope |
|---|---|---|---|
| State machine transitions | transition keyword | Per-transition | Document |
| Delayed state transitions | transition #s in N | Scheduled | Document |
| Cron jobs | @cron annotation | Daily/hourly/monthly | Document |
| Reactive propagation | Any field mutation | Synchronous, immediate | Document |
| Delta sync to clients | Any state change | After every transaction | Document |
| Message queue drain | Queued messages | During invalidation | Document |
| Service result delivery | Async service response | On arrival | Document |
| Replication sync | Data changes | On commit, with backoff | Document |
| AI agent tool loops | Agent .ask() | Multi-round, autonomous | Document |
| Debug log publishing | @debug statement | When subscribers exist | Document |
| Lifecycle hooks | Connect/disconnect/load | On event | Document |
| Document sleep | Inactivity (120s default) | Every ~30-45s check | Platform |
| Document wake | Pending cron/timer | At scheduled time | Platform |
| Garbage collection | Orphaned assets | Every 60-120s | Cluster |
| Space deletion | Deleted space | Every 30-60s | Cluster |
| Dead node detection | Missing heartbeat | Every 30-60s (15min threshold) | Cluster |
| Storage reporting | Timer | Every 5 minutes | Cluster |
| Gossip | Peer discovery | Every 250-1000ms | Cluster |
| Routing updates | Topology changes | On gossip event | Cluster |
| Capacity management | CPU/memory thresholds | Every 250ms sampling | Node |
| Load shedding | High load | Every ~90s | Node |
| Prometheus targets | Topology changes | Every ~500ms | Cluster |