Adros Contract Layer
Audience: Engineers wiring Adros into an orchestration layer
Author: Adros team
Status: Live on production (api.adros.ai). See Quickstart to connect.
Schema version: v1
Purpose: Wire orchestration-layer workers (Paperclip, OpenClaw, custom) to call these endpoints so every specialist becomes deterministic — no guessing what to produce, no free-text handoffs, no drift between cycles.
What's been built (Adros side)
1. New table column
clients.lifecycle_state (varchar 30, default new) + clients.lifecycle_updated_at (timestamptz)
Migration file: api/db/migrations/021_client_lifecycle_state.sql
2. New service module
api/services/contract_service.py — the spec library. Hard-coded playbooks for every (stage, task_type) combination, returning structured deliverable specs.
3. New REST endpoints (all under /api/v1)
| Method | Path | Purpose |
|---|---|---|
| GET | /contract/spec?stage=...&task_type=... | Get one deliverable spec |
| GET | /contract/specs?stage=... | List specs (optionally by stage) |
| POST | /contract/writeback | Push structured findings into memory |
| GET | /me/clients/{client_id}/state | Get client lifecycle state |
| PATCH | /me/clients/{client_id}/state | Set client lifecycle state |
4. New MCP tools (5 total)
| Tool | What it wraps |
|---|---|
| get_deliverable_spec(stage, task_type) | GET /contract/spec |
| list_deliverable_specs(stage) | GET /contract/specs |
| contract_writeback(task_type, stage, findings, client_name, ...) | POST /contract/writeback |
| get_client_state(client_name) | GET /me/clients/{id}/state |
| set_client_state(client_name, lifecycle_state, reason) | PATCH /me/clients/{id}/state |
REST is for your Python workers (admin-worker, data-analyst-worker). MCP is for LLM agents (Jarvis, Hermes, Creative Director) calling Adros via natural language.
Auth (same as existing Adros endpoints)
REST: Authorization: Bearer <mcp_token> header. Same token you currently use for /api/v1/me/memory/*.
MCP: X-User-Token: <mcp_token> header on the MCP connection (already configured in your existing setup).
If you're already calling Adros memory routes, this works with zero auth changes.
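For reference, a minimal sketch of the two header shapes in Python. The `ADROS_MCP_TOKEN` environment variable name is an assumption; use whatever secret store your workers already rely on.

```python
import os

# Assumed env var name -- substitute your own secret source.
token = os.environ.get("ADROS_MCP_TOKEN", "<mcp_token>")

# REST workers: standard Bearer auth, same token as /api/v1/me/memory/*.
rest_headers = {"Authorization": f"Bearer {token}"}

# MCP connections: the same token travels in a custom header instead.
mcp_headers = {"X-User-Token": token}
```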
Lifecycle state machine
new → intake → research → creative → approved → live → optimizing
                                                           ↓
                                                    (steady state)
                                                           ↓
                                                paused (optional pause)

States:
- new — created, no work started
- intake — gathering business context, personas, brand
- research — keyword/market/competitor research
- creative — generating ad creatives + copy
- approved — approved by operator, ready to launch
- live — campaigns running on Meta/Google
- optimizing — in weekly optimization cycle (steady state)
- paused — temporarily not being worked on
Existing clients are backfilled to optimizing by the migration, since they're presumed live.
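The forward chain above can be encoded as a small lookup for workers that need to compute "what comes next." This is a sketch: the ordered chain comes from the diagram, but how `paused` is entered and exited is not specified beyond "optional pause," so it is handled here only as a terminal-ish side state by assumption.

```python
# Forward lifecycle chain, as drawn in the state machine above.
CHAIN = ["new", "intake", "research", "creative", "approved", "live", "optimizing"]

def next_stage(state: str) -> str:
    """Return the next forward stage; optimizing is the steady state.

    Assumption: paused is a side state, not part of the forward chain,
    so it is not advanced here.
    """
    if state == "paused":
        return "paused"
    i = CHAIN.index(state)
    return CHAIN[min(i + 1, len(CHAIN) - 1)]
```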
Example flows
Two complete walkthroughs showing how an orchestration layer uses the Contract endpoints. Both use placeholder client names (Acme Travel, Acme Wellness) — substitute your own.
Flow 1 — Weekly review for an existing client
Scenario: a travel client Acme Travel has been in optimizing state for months. The orchestration layer runs a weekly review every Monday.
# 1. Orchestration layer checks the client's current state
GET /api/v1/me/clients/<client-uuid>/state
→ {"lifecycle_state": "optimizing", "suggested_next_state": "optimizing"}
# 2. Fetch the deterministic spec for this task
GET /api/v1/contract/spec?stage=optimizing&task_type=weekly_review
→ {
"stage": "optimizing",
"task_type": "weekly_review",
"produce": [
"weekly_metrics_xlsx",
"creative_refresh_recommendations",
"search_term_negatives",
"budget_reallocation_plan"
],
"data_sources": [
"cometly_30d", "meta_insights_30d", "meta_insights_7d",
"google_insights_30d", "google_search_terms_14d"
],
"adros_tools": [
"list_insights",
"google_campaign_performance_report",
"google_search_terms_report",
"match_strategy"
],
"output_template": "meta_ads_audit",
"approval_required": true,
"notify_sam_threshold": "any creative_fatigue OR wasted_spend > $50 OR CPA > 1.5x target",
"format": "xlsx + chat summary + action log",
"recommended_specialist": "Hermes",
"next_stage": "optimizing",
"review_gates": [
"Hermes craft review",
"Orchestration layer client-context review",
"Operator approval"
],
"schema_version": "v1"
}
# 3. Orchestration layer creates a task with the spec embedded
# so the assigned specialist doesn't have to call Adros twice
POST {task-ledger}/issues
{
"title": "[Acme Travel] Weekly review — week of Jan 13",
"assignee": "Hermes", # from spec.recommended_specialist
"metadata": {
"adros_spec": <full spec from step 2>,
"client_id": "<client-uuid>",
"client_name": "Acme Travel"
}
}
# 4. The specialist wakes up, reads the spec from issue metadata,
# executes each adros_tool listed, produces the artifacts,
# then writes findings back to Adros
POST /api/v1/contract/writeback
{
"client_id": "<client-uuid>",
"task_type": "weekly_review",
"stage": "optimizing",
"findings": {
"cpa_actual": 18.50,
"cpa_target": 25.00,
"winning_audience": "interest:hiking + age:25-44",
"fatigued_ads": ["ad_id_123", "ad_id_456"],
"wasted_spend_usd": 67,
"recommended_action": "scale winning audience, refresh fatigued creatives"
},
"artifact_path": "/tmp/acme-travel-weekly-2026-01-13.xlsx",
"metrics_after": {"cpa": 18.50, "roas": 3.2, "spend_7d": 1200}
}
→ {
"success": true,
"weekly_log_id": "<uuid>",
"business_context_updated": true,
"artifact_storage": {
"convention": "gdrive_per_client_per_task",
"key": "<user_id>/<client-uuid>/weekly_review/2026-01-13T03-00-00Z",
"path": "/tmp/acme-travel-weekly-2026-01-13.xlsx",
"ready_for_admin_upload": true
},
"schema_version": "v1"
}
# 5. The specialist hands off the artifact path to an upload worker,
# which stores the file at the storage key above and posts the
# final location back into the task for the review gate.
#
# 6. After operator approval (if required), no state change is needed —
#    optimizing is the steady state and the cycle repeats next week.

Flow 2 — Onboarding a new client
Scenario: the operator tells the orchestration layer to onboard a new wellness client Acme Wellness.
# 1. Create the client record in Adros (via existing /me/clients API
# or the dashboard)
# 2. Set the initial lifecycle state
PATCH /api/v1/me/clients/<client-uuid>/state
{
"lifecycle_state": "intake",
"reason": "New client signed, starting onboarding"
}
# 3. Fetch the intake spec
GET /api/v1/contract/spec?stage=intake&task_type=gather_business_context
→ {
"produce": ["business_context_record", "personas_seed"],
"adros_tools": ["get_intake_questions", "save_business_context", "save_persona"],
"approval_required": false,
"recommended_specialist": "Admin",
"next_stage": "research",
...
}
# 4. The Admin specialist runs the intake conversation with the operator,
# saves business context + personas via the listed adros_tools, then
# writes findings back.
# 5. Advance the client to the next lifecycle stage
PATCH /api/v1/me/clients/<client-uuid>/state
{"lifecycle_state": "research", "reason": "Intake complete"}
# 6. Repeat for research → creative → approved → live → optimizing.
#    Each stage has its own spec and its own recommended specialist.

Available specs (v1)
| Stage | Task type | Specialist | Approval? |
|---|---|---|---|
| intake | gather_business_context | Admin | No |
| intake | build_brand_profile | Admin | No |
| research | keyword_research | Data Analyst | No |
| research | market_research | Data Analyst | Yes |
| creative | generate_creatives | Creative Director | Yes |
| approved | launch_campaigns | Hermes | No |
| live | first_week_check | Data Analyst | No |
| optimizing | weekly_review | Hermes | Yes |
| optimizing | creative_refresh | Creative Director | Yes |
| optimizing | search_term_audit | Hermes | Yes |
| optimizing | budget_reallocation | Hermes | Yes |
Call GET /api/v1/contract/specs with no filter to get the full list at runtime, or filter by ?stage=<state> to get only the specs for one lifecycle stage.
Integrating the Contract layer — minimal checklist
To wire the Contract layer into an orchestration stack, an integrator typically needs:
- A thin REST client. Five functions in any language that wrap the five endpoints:
  - get_spec(stage, task_type) -> dict
  - list_specs(stage=None) -> list[dict]
  - writeback(client_id, task_type, stage, findings, **kwargs) -> dict
  - get_client_state(client_id) -> dict
  - set_client_state(client_id, state, reason="") -> dict
- Spec-driven task dispatch. When the orchestration layer creates a new task, it should call get_spec first and embed the result in the task metadata. Specialists then read the spec from their task rather than re-fetching.
- Structured writeback on completion. When a specialist finishes, call writeback with the findings dict + artifact path. The response returns the canonical storage key that an upload worker should use.
- State transitions at lifecycle boundaries. Call set_client_state whenever the client advances a lifecycle stage. Call get_client_state whenever the orchestration layer needs to decide "what's next."
- Schema version pinning. All responses include "schema_version": "v1". Workers should fail loud on version mismatch so you discover breaking changes early.
Behavior contract details
writeback storage convention
The artifact_storage.key returned from writeback is the canonical path for an upload worker to use:
gdrive://Adros/{user_id}/{client_id_or_solo}/{task_type}/{timestamp}/

Example: gdrive://Adros/<user-uuid>/<client-uuid>/weekly_review/2026-01-13T03-00-00Z/
Inside that folder, the upload worker stores the actual artifact file. This means every weekly cycle for a given client has a stable, predictable folder path — easy to audit, easy to reference in future cycles.
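An upload worker can reproduce the convention client-side, e.g. to sanity-check the key Adros returned. This is a sketch: the colon-to-hyphen timestamp normalization is inferred from the example above, not a documented guarantee.

```python
def storage_key(user_id: str, client_id: str, task_type: str, timestamp: str) -> str:
    """Build the canonical gdrive folder key for one task run.

    Assumption (inferred from the worked example): colons in the ISO
    timestamp are replaced with hyphens to keep the path filesystem-safe.
    """
    safe_ts = timestamp.replace(":", "-")
    return f"gdrive://Adros/{user_id}/{client_id}/{task_type}/{safe_ts}/"
```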
findings storage in business_context
When you call writeback, the findings dict gets stored in business_context.raw_research under TWO keys:
raw_research = {
    "weekly_review_latest": <most recent findings>,
    "weekly_review_history": [<last 20 findings, oldest to newest>],
    # ... other task_types similarly
}

This means specialists can call get_business_context() and immediately see "what did we find last cycle" without a separate query.
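The latest/history convention can be sketched as a small helper. Assumptions: the key names follow the `{task_type}_latest` / `{task_type}_history` pattern from the example, and the 20-entry cap applies per task_type.

```python
def record_findings(raw_research: dict, task_type: str, findings: dict, cap: int = 20) -> dict:
    """Store findings under the latest/history convention shown above.

    Keeps the most recent `cap` entries, oldest to newest (cap=20 per the
    example; treated here as a per-task_type limit, which is an assumption).
    """
    raw_research[f"{task_type}_latest"] = findings
    history = raw_research.setdefault(f"{task_type}_history", [])
    history.append(findings)
    del history[:-cap]  # trim to the newest `cap` entries
    return raw_research
```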
Schema versioning
Every response includes "schema_version": "v1". When we change the spec structure later, we'll bump to v2 and your workers should refuse to process v2 specs until they're updated. Pin your worker code to v1 for now — easy migration later.
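A fail-loud guard, per the pinning advice above, can be as small as one function that every worker runs on each response before processing it:

```python
SUPPORTED_SCHEMA = "v1"  # pin here; bump deliberately when migrating

def check_schema(response: dict) -> dict:
    """Raise immediately on any schema_version mismatch."""
    version = response.get("schema_version")
    if version != SUPPORTED_SCHEMA:
        raise RuntimeError(
            f"Adros schema_version {version!r} != pinned {SUPPORTED_SCHEMA!r}; "
            "update this worker before processing."
        )
    return response
```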
Error responses
- 404 if no spec exists for a (stage, task_type) pair → don't retry; log it and escalate
- 400 if the lifecycle_state is invalid → log it and escalate
- 401 if auth fails → check the token, then escalate
Known gotchas
Client resolution by name. The client_id lookup in MCP tools uses an internal resolver that expects the client to already exist in Adros. If the orchestration layer creates a brand-new client and immediately tries to call get_client_state(client_name=...), the resolver may return empty. Mitigation: create the client first via the dashboard or Adros API, then call Contract tools.
Base URL prefix. All Contract REST endpoints live under /api/v1/*. Hitting https://api.adros.ai/contract/spec without the /api/v1 prefix returns 404. See the FAQ for the most common setup mistake.
Schema version pinning. Every response includes "schema_version": "v1". When Adros bumps to v2 in the future, workers should refuse v2 specs until they're updated. Pin your parsing logic to v1 explicitly and fail loud on version mismatch.
— Adros team