Delegation Tooling Surface
Bardiel’s Delegation-as-a-Service is not just “call an LLM once.” Under the hood, it sits on top of a growing tooling surface that Bardiel can use to:
fetch and shape context,
normalize and validate specs,
interact with Web3 and storage, and
attach safety and reputation signals.
When an agent calls delegate_to_bardiel, it does not need to choose tools directly.
Instead, the agent describes the task and policy, and Bardiel selects and sequences tools internally.
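As a rough illustration, a delegation request might look something like the payload below. The field names (task, inputs, policy, constraints) are placeholders for this sketch, not a finalized delegate_to_bardiel schema; the point is that no tool names appear anywhere in the request.

```python
# Hypothetical delegation request payload -- field names are illustrative,
# not a finalized delegate_to_bardiel schema.
delegation_request = {
    "task": (
        "Summarize the two linked governance proposals and flag any "
        "treasury-spend items above 100k tokens."
    ),
    "inputs": {
        "urls": [
            "https://example.org/proposal-1",
            "https://example.org/proposal-2",
        ]
    },
    "policy": "safe",  # e.g. fast | safe | oracle | adaptive
    "constraints": {"max_latency_s": 120, "require_sources": True},
}
# Note: no tool names appear here. Bardiel decides internally whether to
# run web fetchers, summarizers, schema checks, or Cortensor compute.
```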
This page describes the major tooling categories that Delegation is being designed around.
1. Context & Retrieval Tools
These tools let Bardiel collect and shape what the model sees before any heavy compute runs on Cortensor.
Planned capabilities:
HTTP / Web fetcher
Fetches raw URLs (HTML, JSON, text).
Strips boilerplate and extracts main content for injection into Cortensor context.
Web page summarizer
“TL;DR this URL in N bullets” style jobs.
Useful for quick previews or lightweight research.
Web / curated search
Search over the open web or curated document sets.
Returns top-k snippets with links, ready to be composed into a delegation plan.
Multi-URL ingest
Handles batches of URLs (reports, docs, FAQs).
Can merge, filter, or cluster content before sending it into a summarization or analysis step.
Basic file loading
PDF, DOC, Markdown, and similar formats.
Combined with simple vector or keyword search to find relevant sections in long documents.
Why it matters
Delegation is often “research + reasoning,” not just reasoning. These tools let Bardiel act as a research executor for agents, grounding Cortensor runs in actual context instead of blind prompts.
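To make the "fetch and strip boilerplate" step concrete, here is a minimal, stdlib-only sketch of what such a fetcher could look like. The function name, skip-list, and context budget are assumptions for illustration; Bardiel's actual extraction heuristics are not specified here.

```python
import urllib.request
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Very naive main-content extraction: keeps text outside boilerplate tags."""
    SKIP = {"script", "style", "nav", "footer", "header"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth > 0:
            self._skip_depth -= 1

    def handle_data(self, data):
        if self._skip_depth == 0 and data.strip():
            self.chunks.append(data.strip())

def fetch_main_text(url: str, max_chars: int = 8000) -> str:
    """Fetch a URL and return roughly the main text, trimmed to a context budget."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = _TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)[:max_chars]
```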
2. Data & Structure Tools
These tools make messy instructions and outputs more structured and machine-checkable.
Planned capabilities:
Spec normalizer
Takes messy natural-language instructions and normalizes them into a structured task spec:
constraints,
input/output schema,
edge cases,
safety / policy notes.
This spec is then used to drive both execution and validation.
Schema / JSON validator
Checks whether outputs match expected structure.
Auto-fixes trivial formatting issues when safe (e.g., missing quotes, minor type coercions).
Emits explicit errors when structure is fundamentally wrong.
Tabular / math helpers
Work over CSV / JSON tables and simple numeric tasks.
Supports aggregations, comparisons, and sanity checks on numeric outputs.
Why it matters
Bardiel is meant to return results that other agents can rely on and reuse. That usually means more than plain text: structured, schema-respecting outputs with clear failure modes.
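A rough sketch of how a normalized spec and a schema check could fit together. The spec fields, the validate_output helper, and its coercion rules are assumptions chosen for illustration, not Bardiel's actual formats.

```python
import json

# Illustrative shape of a normalized task spec -- field names are assumptions,
# not Bardiel's actual internal format.
task_spec = {
    "constraints": ["answer in English", "cite at least two sources"],
    "output_schema": {"summary": str, "confidence": float, "sources": list},
    "edge_cases": ["no sources found", "conflicting sources"],
    "safety_notes": ["no guarantees about financial outcomes"],
}

def validate_output(raw: str, schema: dict) -> tuple:
    """Parse raw model output, apply trivial safe coercions, and report
    structural errors instead of silently returning malformed results."""
    errors = []
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return None, [f"output is not valid JSON: {exc}"]

    for key, expected in schema.items():
        if key not in data:
            errors.append(f"missing field: {key}")
            continue
        value = data[key]
        if not isinstance(value, expected):
            if expected in (int, float, str):
                try:
                    data[key] = expected(value)   # e.g. "0.8" -> 0.8
                    continue
                except (TypeError, ValueError):
                    pass
            errors.append(
                f"field {key!r} should be {expected.__name__}, "
                f"got {type(value).__name__}"
            )
    return data, errors

# Example: a run whose output misses 'sources' and returns confidence as a string.
fixed, problems = validate_output(
    '{"summary": "ok", "confidence": "0.8"}', task_spec["output_schema"]
)
# fixed["confidence"] == 0.8, problems == ["missing field: sources"]
```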
3. Web3, Storage & Trust Tools
These tools connect delegated tasks to on-chain state, market data, and durable storage.
Planned capabilities:
On-chain state reader
Reads balances, contract state, and relevant events.
Useful for tasks like “check if a position is safe,” “fetch position details,” or “verify a claim against chain data.”
Price / market data helpers
Pulls token prices, basic market stats, or other economic signals.
Helps Bardiel contextualize decisions where “is this good enough?” depends on current market conditions.
Storage helpers (IPFS / object storage)
Attach large artifacts (evidence bundles, logs, raw outputs) via URIs/CIDs.
Allows results to carry pointers to heavier context without bloating agent messages.
Why it matters
Many real workflows are on-chain and economic: risk, settlement, reputation. If Bardiel is going to be part of that loop, it must be able to see and record the state it’s reasoning about.
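The mechanics behind an on-chain state reader are ordinary JSON-RPC reads. The sketch below uses the standard eth_getBalance method; the endpoint URL, helper name, and example address are placeholders, and Bardiel's actual reader is not specified here.

```python
import json
import urllib.request

def read_native_balance(rpc_url: str, address: str) -> int:
    """Read an address's native token balance (in wei) via standard
    Ethereum JSON-RPC. The endpoint URL is supplied by the caller."""
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_getBalance",
        "params": [address, "latest"],
    }
    req = urllib.request.Request(
        rpc_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        result = json.loads(resp.read())
    return int(result["result"], 16)  # JSON-RPC returns a hex-encoded quantity

# Example (hypothetical endpoint and address):
# balance_wei = read_native_balance(
#     "https://rpc.example.org",
#     "0x0000000000000000000000000000000000000000",
# )
```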
4. Safety & Reputation Tools
These tools give Delegation (and later Validation / Arbitration) a way to reason about risk and history, not just content.
Planned capabilities:
Safety / policy checker
Flags risky content or actions.
Normalizes multi-language specs into a more uniform representation for safety checks.
Can be used both as a hard gate (“reject”) and as a soft signal (“allow but lower confidence”).
Diff / comparison helpers
Compares current outputs to prior versions or consensus outputs.
Useful when Bardiel needs to generate evidence or test for regressions.
Evidence bundlers
Groups logs, intermediate outputs, and signals into compact offline bundles.
Can be stored via IPFS or other object stores to back future validation/arbitration decisions.
Reputation lookups (future)
Queries the historical behavior of agents, sellers, or miners.
Feeds into "trust-weighted" decisions during validation or arbitration.
Why it matters
Bardiel is meant to be a trust layer, not just a compute layer. Having access to safety and reputation signals lets it make smarter choices about validation depth, escalation, and how to explain verdicts.
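A minimal sketch of what an evidence bundler could produce: a hash-addressed manifest that can be serialized, pinned to IPFS or object storage, and referenced by digest or URI in the agent-facing response. The bundle fields and helper name are assumptions for illustration.

```python
import hashlib
import json
import time

def build_evidence_bundle(task_id: str, artifacts: dict) -> dict:
    """Group intermediate outputs and signals into a compact, hash-addressed
    bundle. The serialized bundle can be stored off-chain (e.g. IPFS), with
    only the digest or URI carried in the agent-facing response."""
    entries = {
        name: {
            "sha256": hashlib.sha256(content.encode()).hexdigest(),
            "bytes": len(content.encode()),
        }
        for name, content in artifacts.items()
    }
    bundle = {
        "task_id": task_id,
        "created_at": int(time.time()),
        "artifacts": entries,
    }
    serialized = json.dumps(bundle, sort_keys=True).encode()
    bundle["bundle_sha256"] = hashlib.sha256(serialized).hexdigest()
    return bundle

# Example: bundle the raw output and a tool log for a hypothetical task.
evidence = build_evidence_bundle(
    "task-123",
    {"raw_output.txt": "…model output…", "tool_log.json": '{"steps": 4}'},
)
```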
5. How Bardiel Chooses Tools
When an agent calls Delegation-as-a-Service, it does not directly say:
“call web_search, then http_get, then summarize”
Instead, it describes desired outcomes and constraints. Bardiel then:
Reads the task spec (or normalizes it if needed).
Picks an internal tool plan:
which retrieval / context tools to run,
whether spec normalization or schema validation is needed,
where to invoke Cortensor for compute,
what should be checked before returning results.
Executes that plan, collects results and signals, and returns a single structured response.
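A simplified sketch of what such an internal plan could look like. The ToolPlan structure, step names, and branching rules are assumptions for illustration, not Bardiel's actual planner.

```python
from dataclasses import dataclass, field

# Hypothetical internal representation of a tool plan -- step names and
# policy handling are illustrative, not Bardiel's actual planner.
@dataclass
class ToolPlan:
    policy: str                           # fast | safe | oracle | adaptive
    steps: list = field(default_factory=list)

def build_plan(task_spec: dict, policy: str) -> ToolPlan:
    plan = ToolPlan(policy=policy)
    if task_spec.get("urls"):                      # context/retrieval stage
        plan.steps.append("fetch_and_extract")
    if not task_spec.get("normalized", False):     # data/structure stage
        plan.steps.append("normalize_spec")
    plan.steps.append("cortensor_inference")       # heavy compute stage
    if task_spec.get("output_schema"):
        plan.steps.append("validate_schema")       # pre-return checks
    if policy in ("safe", "oracle"):
        plan.steps.append("cross_check")           # stricter policies add checks
    return plan
```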
Over time:
Tool choices and plans can evolve based on real traffic.
New tools can be added without breaking existing agents.
Policies (fast, safe, oracle, adaptive) can adjust how aggressively tools are used and cross-checked.
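As a hypothetical example of that policy knob, presets might tune how many sources are fetched, whether cross-checks run, and how deep validation goes. The specific knobs and values below are assumptions, not an actual configuration.

```python
# Illustrative policy presets -- knob names and values are assumptions showing
# how fast/safe/oracle/adaptive could tune tool usage, not an actual config.
POLICY_PRESETS = {
    "fast":     {"max_sources": 2,    "cross_check": False, "validation_depth": "shallow"},
    "safe":     {"max_sources": 5,    "cross_check": True,  "validation_depth": "standard"},
    "oracle":   {"max_sources": 10,   "cross_check": True,  "validation_depth": "deep"},
    "adaptive": {"max_sources": None, "cross_check": None,  "validation_depth": "auto"},
}
```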
6. Status: Experimental by Design
The tooling surface described here is deliberately experimental:
Early phases focus on mapping which tools actually matter for real workloads.
Data from Delegation and Validation will guide:
which tools become “core,”
which are optional or deprecated,
how deep each tool needs to go (e.g., simple web fetch vs full RAG pipelines).
The end goal is simple:
From an agent’s perspective:
“I describe the job and risk level to Bardiel; it handles the rest.”
From the ecosystem’s perspective:
Bardiel offers a standardized execution+tooling layer on top of Cortensor,
with verifiable, reliable results agents can safely build on.