
Delegation-as-a-Service

Bardiel is an execution layer for agents – essentially an “agent cloud” they can call when they need remote, elastic, and verifiable work done.

Agents keep their brains (planning, reasoning, strategy); Bardiel handles doing the work:

  • calling tools

  • fetching and processing external data

  • dispatching compute to Cortensor with redundancy and validation baked in

Delegation-as-a-Service is the /delegate surface for that:

Agent (brain) → Bardiel (execution + verification) → Tools + Cortensor → Bardiel → Agent

Bardiel decides how to run the task (tooling, model class, redundancy, validation tier). The calling agent specifies what needs to be done and an optional policy hint about risk vs cost.

At a higher level, Delegation-as-a-Service makes Bardiel behave like AWS for the agent economy:

  • Agents don’t stand up infra for every workflow.

  • They say: “Run this workflow over there, with these guarantees.”

  • Bardiel and Cortensor handle the rest.


Why Delegation Exists

Most agents blend two very different responsibilities:

  • Brain – planning, goal decomposition, negotiation, strategy

  • Execution – calling tools, hitting APIs, reading the web, crunching data, running long or parallel compute

Delegation-as-a-Service exists to decouple these:

  • The agent keeps a clean, composable brain (reasoning and decisions).

  • Bardiel provides the execution layer that:

    • calls external tools and data sources

    • offloads heavy or parallel compute to Cortensor

    • enforces structure and safety checks

    • attaches verification signals to results

Agents do not each need to design their own “mini execution network”. They plug into Bardiel and get reliable, verifiable task execution as a service.


When to Use Delegation

Use Delegation-as-a-Service when an agent:

  • needs reliable execution, not just a single best-effort LLM call

  • wants to offload:

    • web/data access and summarization

    • complex or parallel model calls

    • multi-step tool workflows

  • wants outputs to come back with confidence / evidence, not just raw text

  • prefers a single agent-native call instead of managing:

    • Cortensor sessions and miners

    • multiple external APIs

    • its own validation and retry logic

Typical use cases:

  • long-context research and synthesis (for example: “read these URLs, cross-check, then summarize”)

  • structured data extraction or classification with schema guarantees

  • preparing ACP actions with pre-validated payloads

  • running “risky” or expensive tool chains and getting a second opinion before committing

  • delegated worker jobs for Virtual agents and ERC-8004 agents that need verifiable execution

Delegation is universal: it works for Virtual agents and ERC-8004 agents. The integration surface differs; the core behavior is the same.


High-Level Flow

  1. Agent (GAME Worker / Function or ERC-8004 agent) calls Bardiel with a delegated task, where task includes instructions, optional tool hints, and constraints (latency, cost, safety).

  2. Bardiel:

    • parses the task and constraints

    • decides which tools / data sources to use (web fetch, search, spec normalizer, etc.)

    • chooses:

      • model or model class

      • redundancy level on Cortensor (1 / 3 / 5+ miners)

      • validation tier (fast, safe, oracle, or adaptive)

    • opens or reuses a Cortensor session (Stake-to-Use and/or x402 billing)

    • orchestrates the full execution: external calls, data processing, Cortensor inference

  3. Cortensor Router:

    • enqueues compute-heavy portions of the task

    • routes them to miners based on capability and SLA

  4. Miners:

    • run the compute tasks (possibly in parallel)

    • return outputs plus metadata / trust signals

  5. Bardiel:

    • validates the combined result according to the chosen tier

    • checks schemas / constraints and cross-checks where needed

    • returns to the agent:

      • final result

      • confidence score

      • a light evidence summary (tier used, redundancy, agreement level, key tools used)

From the agent’s point of view, this is still one delegated call. The agent stays in brain mode; Bardiel acts as its execution layer, handling “doing the work” in a verifiable way.
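The five steps above can be sketched from the caller's side as a single request envelope. Everything here is an assumption about the early /delegate surface (function name, field names, policy values), not a published API:

```python
# Worker-side view of one delegated call. The payload shape and field
# names below are assumptions about the early /delegate surface.
import json

def build_delegate_request(instructions, tool_hints=None,
                           policy="adaptive", constraints=None):
    """Assemble the task envelope: what to do, plus a policy hint on how."""
    return {
        "task": {
            "instructions": instructions,
            "tool_hints": tool_hints or [],
            "constraints": constraints or {},
        },
        "policy": policy,  # fast | safe | oracle | adaptive
    }

request = build_delegate_request(
    instructions="Read these URLs, cross-check them, then summarize.",
    tool_hints=["search", "web_fetch"],
    policy="safe",
    constraints={"max_latency_s": 60},
)
# The agent would send this to Bardiel and get back a result, a
# confidence score, and a light evidence summary (step 5 above).
print(json.dumps(request, indent=2))
```

The agent never references miners, sessions, or validation tiers directly; those are chosen by Bardiel from the task and the policy hint.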


Tooling Surface (Early Design)

Delegation is not “just call an LLM once.” It sits on top of a growing tooling surface that Bardiel can use internally whenever an agent delegates work.

Agents do not need to call these tools directly – they describe the task and let Bardiel choose the right tools.

1. Context & Retrieval Tools

  • HTTP / web fetcher for raw URLs (HTML / JSON / text) to inject into Cortensor context

  • Web page summarizer for “TL;DR this URL in N bullets” style jobs

  • Search over web or curated docs, returning top-k snippets + links into the delegation plan

  • Multi-URL ingest and basic file loading (PDF / DOC / MD) with simple vector lookup over prior specs, docs, FAQs

2. Data & Structure Tools

  • Spec normalizer: turn messy user instructions into a structured task spec (constraints, IO schema, edge cases)

  • Schema / JSON validator to sanity-check tool outputs and auto-fix trivial formatting issues

  • Tabular / math helpers for CSV / JSON tables, quick calculations, and numerically sensitive tasks
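As one illustration of this category, here is a minimal sketch of what the schema / JSON validator's "auto-fix trivial formatting" step could look like. The function name and behavior are assumptions about the early design:

```python
# Hypothetical sketch of the schema / JSON validator tool: strip a
# markdown code fence a model may have wrapped around JSON, parse,
# then enforce the expected keys before returning to the agent.
import json

def validate_output(raw: str, required_keys: set) -> dict:
    """Auto-fix trivial wrapping, then check required keys."""
    text = raw.strip()
    if text.startswith("```"):
        # Drop the fence characters from both ends.
        text = text.strip("`")
        if text.startswith("json"):
            text = text[len("json"):]
    data = json.loads(text)
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data

fixed = validate_output('```json\n{"summary": "ok", "links": []}\n```',
                        {"summary", "links"})
```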

3. Web3, Storage & Trust Tools

  • On-chain state reader for balances, contract state, and relevant events

  • Price / market data helpers

  • IPFS / object-storage helpers to attach large artifacts via URIs/CIDs (evidence bundles, logs, artifacts)

4. Safety & Reputation Tools

  • Safety / policy checker to flag risky actions or content, with light translation / normalization for multi-language specs

  • Diff / comparison and log/evidence bundlers

  • Reputation or historical-behavior lookups to feed later validation/arbitration flows

All of this is experimental by design: the goal is to discover which tools matter most for real workloads, then harden them as part of the standard /delegate surface.
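To make "agents describe the task, Bardiel chooses the tools" concrete, here is a toy sketch of task-to-tool mapping. The tool names echo the categories above; the keyword heuristic is purely illustrative, not how Bardiel actually routes:

```python
# Toy sketch of mapping a task description to internal tools.
# Tool names mirror the categories above; the heuristic is illustrative.
def pick_tools(instructions: str) -> list:
    tools = []
    text = instructions.lower()
    if "http" in text or "url" in text:
        tools.append("web_fetch")
    if "search" in text or "find" in text:
        tools.append("search")
    if "csv" in text or "table" in text:
        tools.append("tabular_helper")
    if "balance" in text or "contract" in text:
        tools.append("onchain_reader")
    # Fall back to normalizing the spec when nothing obvious matches.
    return tools or ["spec_normalizer"]
```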


Policy Hints

Delegation supports simple policy hints that tell Bardiel how cautious and expensive to be:

  • fast

    • minimal redundancy

    • lightweight checks

    • lowest cost / latency

  • safe

    • 3-way redundancy on key compute steps

    • consistency plus basic usefulness scoring

    • balanced cost vs trust

  • oracle

    • 5+ runs (often across diverse miners / models)

    • strict thresholds and richer evidence

    • for high-value or sensitive tasks

  • adaptive

    • starts like fast

    • automatically escalates to safe or oracle if confidence is low

    • cheap on easy tasks, strong on hard ones

Internally, Bardiel can adjust:

  • how much redundancy to use

  • which tools / data sources / models to call

  • how aggressively to validate and cross-check

Callers keep a simple policy parameter while Bardiel evolves its execution and validation logic.
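The adaptive tier's escalation loop can be sketched as follows. The redundancy mapping, threshold, and `run_with_redundancy` helper are assumptions, not real Bardiel internals:

```python
# Illustrative sketch of the "adaptive" policy: start cheap, escalate
# when confidence is low. Thresholds and the executor are assumptions.
POLICY_REDUNDANCY = {"fast": 1, "safe": 3, "oracle": 5}

def run_adaptive(task, run_with_redundancy, threshold=0.8):
    """Escalate fast -> safe -> oracle until confidence clears the bar."""
    for policy in ("fast", "safe", "oracle"):
        result, confidence = run_with_redundancy(
            task, POLICY_REDUNDANCY[policy])
        if confidence >= threshold:
            return result, confidence, policy
    # Even oracle did not clear the bar: return the strongest attempt.
    return result, confidence, "oracle"

# Usage with a stubbed executor whose confidence grows with redundancy:
result, conf, tier = run_adaptive(
    "summarize report",
    lambda task, n: (f"summary@{n}", 0.3 * n),
)
```

This is why adaptive is cheap on easy tasks (it stops at fast) and strong on hard ones (it pays for safe or oracle only when needed).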


Example 1: High-Trust Summary

Goal: A Virtual agent wants a high-trust summary of a long report before acting on it (for example, deciding whether to execute a trade, approve a proposal, or schedule follow-up work).

The Worker hands Bardiel the report and a summarization task with policy = "safe".
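One way that delegated task could look; the payload shape and field names are illustrative, not a fixed schema:

```python
# Illustrative task payload for the high-trust summary; the field
# names are assumptions about the early /delegate surface.
task = {
    "instructions": ("Summarize this report in 10 bullets "
                     "for a go/no-go decision."),
    "inputs": {"report_url": "https://example.com/report.pdf"},
    "constraints": {"max_bullets": 10, "tone": "neutral"},
    "policy": "safe",  # 3-way redundancy with consistency checks
}
```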

Bardiel:

  • chooses an appropriate summarization model (or model class)

  • uses 3 miners in parallel (because policy = "safe")

  • clusters outputs using PoI-style similarity

  • scores them for coverage and brevity (PoUW-style rubrics)

  • discards obvious outliers or low-quality attempts

Bardiel then returns the winning summary together with a status, a confidence score, and a light evidence summary.
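An illustrative response envelope, with field names assumed from the description above (result, confidence, evidence summary):

```python
# Illustrative response for the high-trust summary; the envelope
# shape is an assumption based on the tier/redundancy description.
response = {
    "status": "ok",
    "result": {"summary": ["bullet 1", "bullet 2", "bullet 3"]},
    "confidence": 0.92,
    "evidence": {
        "validation_tier": "safe",
        "redundancy": 3,            # miners run in parallel
        "agreement": "2_of_3_clustered",
        "tools_used": ["web_fetch"],
    },
}
```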

The agent does not need to know which miners ran, how many retries were needed, or how PoI/PoUW were combined. It simply receives a trusted summary with a clear status and confidence score and can move on with its own planning and actions.


Example 2: Offloading Research + Pre-Validated Summary

Goal: A Virtual agent needs a reliable summary of a topic plus links, and wants to avoid hallucinated citations.

The Worker sends Bardiel a research task: gather sources on the topic and produce a cited summary.
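A sketch of what that research task could look like; the payload shape and constraint names are illustrative assumptions:

```python
# Illustrative research-delegation payload; field names are assumptions.
task = {
    "instructions": ("Research the topic, return a 10-bullet summary "
                     "with a supporting link per bullet; no uncited "
                     "claims."),
    "tool_hints": ["search", "web_fetch"],
    "constraints": {"max_bullets": 10, "require_citations": True},
    "policy": "safe",
}
```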

Bardiel:

  • runs search + URL fetch tools

  • normalizes the spec into a structured internal plan

  • uses Cortensor for summarization and cross-checking with redundancy

  • validates that:

    • bullets are grounded in fetched pages

    • links actually support the claims

    • the output respects structure and length constraints

Bardiel returns:

  • a 10-bullet summary

  • links per bullet

  • a confidence score

  • a short evidence summary (number of sources, redundancy used, agreement level)

For the agent, Bardiel is the execution layer with verifiable results: it does the work, checks itself, and hands back something the agent can safely reason over.
