External Integration with Replication

Adama documents are isolated by design -- each document is its own stateful VM. That's the whole point; it's what gives you the strong consistency and reactive guarantees. But real applications need to participate in a larger world: search indexes, analytics databases, billing systems, and cross-document queries all require data to flow out of documents into external systems.

The replication engine solves this. It watches expressions inside documents, detects changes via content hashing, and automatically pushes creates, updates, and deletes to external services with durable retry. You declare what to replicate and where; the platform handles the when and the error recovery.

Why Replication

Here's the core problem: documents can't query each other. If you have 10,000 game documents and need a global leaderboard, no single document can iterate across all others. That's a hard constraint of the isolation model.

Replication solves it by pushing each document's relevant state to a shared external store (like MySQL) where cross-document queries become trivial. Each document independently pushes its score; a separate query against MySQL builds the leaderboard. Simple.

When you need this:

  • Cross-document indexes -- Push scores, usernames, or metadata to MySQL so a separate query can build leaderboards, search results, or directories
  • External system sync -- Keep a billing system, CRM, or analytics pipeline up to date as document state changes
  • Event sourcing -- Push state snapshots to an audit log or data warehouse
  • Search indexing -- Feed document data into Elasticsearch or similar systems

How Replication Works

Replication has two parts: a service declaration (what can receive replicas) and a replication field (what to replicate).

Service Declaration

A service declares replication methods that accept a specific message type:

message LeaderboardEntry {
  int score;
  string player_name;
}

service db {
  internal = "mysql.example.com";

  // This service can receive LeaderboardEntry replicas
  replication<LeaderboardEntry> upsert_score;
}

The replication<Type> method declaration tells the compiler that db supports a replication method called upsert_score that accepts LeaderboardEntry messages.

Replication Field

A replication field binds an expression to a service method. The runtime watches the expression and syncs changes:

public int score = 0;
public string player_name = "unknown";

// Replicate this expression to the db service's upsert_score method
replication<db:upsert_score> my_score = {score: score, player_name: player_name};

Every time score or player_name changes, the replication engine:

  1. Recomputes the expression {score: score, player_name: player_name}
  2. Hashes the result and compares to the last successfully replicated hash
  3. If different, sends a create (first time) or update (subsequent) to the service
  4. On success, stores the new hash and the service-assigned key
  5. On failure, retries with exponential backoff (capped at 60 seconds)

The hash comparison is the key insight here -- no hash change means no network call. If you set the score to its current value, nothing happens. No wasted work.
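The change-detection loop above can be modeled in a few lines. This is an illustrative Python sketch of the behavior described, not the Adama runtime's actual code; the `ReplicationField` class, its `signal` method, and the JSON serialization are assumptions made for the sketch.

```python
import hashlib
import json

class ReplicationField:
    """Illustrative model of hash-based change detection (not the real runtime)."""

    def __init__(self, compute):
        self.compute = compute     # recomputes the replicated expression
        self.last_hash = None      # hash of the last successfully replicated value
        self.pushes = []           # stands in for network calls to the service

    def signal(self):
        """Called whenever a dependency of the expression changes."""
        value = self.compute()
        # Canonical serialization so equal values always hash equally
        h = hashlib.md5(json.dumps(value, sort_keys=True).encode()).hexdigest()
        if h == self.last_hash:
            return False           # no hash change means no network call
        self.pushes.append(value)  # first push is a create, later ones are updates
        self.last_hash = h         # stored only after a successful push
        return True

state = {"score": 0, "player_name": "unknown"}
field = ReplicationField(lambda: dict(state))

field.signal()          # initial value -> create
state["score"] = 10
field.signal()          # changed value -> update
state["score"] = 10
field.signal()          # same value -> skipped, no call
```

Setting a field to its current value produces the same hash, so the third `signal` does no work, exactly as described above.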

Monitoring Replication Status

The replication field itself is a reactive value you can expose to clients:

public int score = 0;

replication<db:upsert_score> my_score = {score: score};

// Expose the sync status to the client
public formula sync_status = my_score;

The status tracks the current state of the replication (pending, in-flight, failed, etc.), so clients can show sync indicators if you want.

Complete Example: Cross-Document Leaderboard

This is the canonical use case. Each game document replicates its winner's score to MySQL. A separate service queries MySQL to build the global leaderboard.

// The data shape to replicate
message ScoreEntry {
  string player;
  int score;
  string game_id;
}

// External database service
service leaderboard_db {
  internal = "mysql.scores";

  replication<ScoreEntry> upsert;
}

// Game state
public string winner = "";
public int high_score = 0;

// Replicate the current winner's score
// When winner or high_score changes, the entry is automatically synced
replication<leaderboard_db:upsert> score_entry = {
  player: winner,
  score: high_score,
  game_id: Document.key()
};

message SubmitScore {
  string player;
  int score;
}

channel submit(SubmitScore msg) {
  if (msg.score > high_score) {
    high_score = msg.score;
    winner = msg.player;
    // score_entry will automatically replicate the new values
  }
}

On the MySQL side, the service implementation receives create/update/delete calls and maintains the scores table. A separate API can then query SELECT * FROM scores ORDER BY score DESC LIMIT 100 to build the leaderboard. Each document pushes independently; MySQL aggregates.
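On the store side, the create/update calls amount to an upsert keyed by the document. A hypothetical sketch using an in-memory SQLite database in place of MySQL (the table name, column names, and `upsert` helper are assumptions for illustration):

```python
import sqlite3

# In-memory SQLite stands in for the MySQL service backing the replication method.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE scores (game_id TEXT PRIMARY KEY, player TEXT, score INTEGER)")

def upsert(entry):
    """What the replication method's create/update calls amount to on the store."""
    db.execute(
        "INSERT INTO scores (game_id, player, score) VALUES (?, ?, ?) "
        "ON CONFLICT(game_id) DO UPDATE SET player = excluded.player, score = excluded.score",
        (entry["game_id"], entry["player"], entry["score"]),
    )

# Each game document pushes its own entry independently...
upsert({"game_id": "g1", "player": "alice", "score": 900})
upsert({"game_id": "g2", "player": "bob", "score": 1200})
upsert({"game_id": "g1", "player": "carol", "score": 1500})  # update, same key

# ...and a separate API aggregates across all documents.
top = db.execute("SELECT player, score FROM scores ORDER BY score DESC LIMIT 100").fetchall()
```

The second push for `g1` updates in place rather than inserting a duplicate, mirroring the create-then-update behavior of replication.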

Replication in Records (Table-Level Replication)

Replication fields can live inside records, enabling per-row replication from tables:

message ProductListing {
  int price;
  string name;
}

service catalog {
  internal = "mysql.catalog";

  replication<ProductListing> sync_product;
}

record Product {
  public int id;
  public string name;
  public int price;
  public bool published;

  // Each product row replicates independently
  replication<catalog:sync_product> listing = {price: price, name: name};
}

table<Product> _products;

Each row in _products gets its own replication status. When a product's price or name changes, only that row's replication fires. When a row is deleted from the table, the replication engine sends a delete to the service.

This pattern is powerful for maintaining external catalogs, user directories, or any per-entity sync. Each row is independent, and the replication engine handles all of them concurrently.
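Per-row replication is, in effect, one change-detection state per row, keyed by row identity. A minimal Python model of that bookkeeping (the `RowReplicator` class and its method names are illustrative, not Adama APIs):

```python
import hashlib
import json

class RowReplicator:
    """Tracks one replication state (last hash) per table row."""

    def __init__(self):
        self.rows = {}   # row id -> hash of last replicated value
        self.sent = []   # (operation, row_id) pairs standing in for service calls

    def sync(self, row_id, value):
        h = hashlib.md5(json.dumps(value, sort_keys=True).encode()).hexdigest()
        if self.rows.get(row_id) == h:
            return                           # this row unchanged; others unaffected
        op = "update" if row_id in self.rows else "create"
        self.sent.append((op, row_id))
        self.rows[row_id] = h

    def delete(self, row_id):
        if row_id in self.rows:
            self.sent.append(("delete", row_id))  # row removal sends a delete
            del self.rows[row_id]

rep = RowReplicator()
rep.sync(1, {"name": "widget", "price": 5})
rep.sync(2, {"name": "gadget", "price": 9})
rep.sync(1, {"name": "widget", "price": 6})  # only row 1's replication fires
rep.delete(2)                                # deletion produces a delete call
```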

Dynamic Replication

For flexible schemas, use dynamic as the replication type:

service analytics {
  internal = "analytics.example.com";

  replication<dynamic> track;
}

// Replicate arbitrary JSON
replication<analytics:track> event = {x: 123}.to_dynamic();

The Replication State Machine

Each replication field follows a state machine that handles reliable delivery:

Nothing --> PutRequested --> PutInflight --> Nothing (success, hash stored)
                                        --> PutFailed --> PutRequested (retry)

Nothing --> DeleteRequested --> DeleteInflight --> Nothing (success)
                                              --> DeleteFailed --> DeleteRequested (retry)

The important behaviors:

  • Change detection: MD5 hash of serialized value. No hash change = no network call
  • Create vs Update: First replication creates (service assigns a key). Subsequent changes update using that key
  • Deletion: When the parent record is deleted from a table or the document shuts down, a delete is sent
  • Retry with backoff: Failed operations retry with exponential backoff, randomized, capped at 60 seconds
  • Crash recovery: Replication state (including pending deletes) is persisted in the document and survives restarts
  • Tombstones: Pending deletes are tracked as tombstones until confirmed, preventing data leaks on crash

The crash recovery part is worth emphasizing. If the server goes down after a row is deleted but before the delete is replicated, the tombstone is persisted. When the document comes back up, it will complete the delete. No orphaned data in the external system.

Integration Patterns

Pattern: Global Directory

Push user profiles from per-user documents to a shared directory:

message UserProfile {
  string display_name;
  string avatar_url;
  bool online;
}

service directory {
  internal = "mysql.users";
  replication<UserProfile> sync;
}

public string display_name;
public string avatar_url;
public bool has_connections;

replication<directory:sync> profile = {
  display_name: display_name,
  avatar_url: avatar_url,
  online: has_connections
};

Every user's document independently pushes its profile. The directory service can be queried for search, friend suggestions, or presence -- all the things that require seeing across document boundaries.

Pattern: Analytics Pipeline

Push document metrics for aggregation:

message GameMetrics {
  int total_moves;
  int duration_seconds;
  string outcome;
}

service analytics {
  internal = "analytics.pipeline";
  replication<GameMetrics> report;
}

public int move_count;
public int start_time;
public string result;

replication<analytics:report> metrics = {
  total_moves: move_count,
  duration_seconds: Time.timestamp() - start_time,
  outcome: result
};

Pattern: Billing Sync

Keep a billing system informed of subscription state:

message BillingState {
  string plan;
  int seats;
  bool active;
}

service billing {
  internal = "stripe.sync";
  replication<BillingState> sync_subscription;
}

public string plan = "free";
public int seats = 1;
public bool active = true;

replication<billing:sync_subscription> sub = {
  plan: plan,
  seats: seats,
  active: active
};

When to Use Replication

  • Cross-document queries (leaderboards, search) -- replicate to MySQL, query externally
  • External system sync (billing, CRM) -- replicate the relevant fields
  • Audit trail -- replicate state snapshots to a log
  • Per-row external sync -- put replication fields in records
  • One-off external calls -- use service.method().await() instead

Replication is for ongoing synchronization of state. For one-off actions (send an email, charge a card), use regular service calls. The replication engine is overkill for fire-and-forget operations, and it's not designed for them.

At its core, replication is the escape hatch from document isolation. It lets each document push its state outward, where external systems can query across all of them. The document model gives you consistency and reactivity; replication gives you the cross-document capabilities that the isolation model can't provide on its own.
