Three-Tier Skill System

Every InTouch Hub and ClawHub skill.
With the governance layer OpenClaw doesn't ship.

InTouch runs three tiers of skills: imports from upstream ClawHub (5,000+ OpenClaw skills, always converted to deterministic YAML before execution), native MD skills (markdown that orchestrates the AI assistant's tools), and Job Files (Jobs-as-Code) (the code your senior engineers can actually read). All three tiers run under the same RBAC, the same encrypted credential vault, the same audit log, and the same alerting fabric. Discovery is OpenClaw. Execution is enterprise.

Three Skill Formats. One Runtime.

Different authors, different use cases, same deterministic execution path.

Tier 1

OpenClaw Imports

Imported, never run raw. Browse or search ClawHub from inside InTouch (via the ClawHub view). Preview the SKILL.md. On install, the AI analyzes the skill once and generates a native YAML tool or YAML job. Subsequent runs are zero-LLM-cost, auditable, schedulable.

5,000+ skills, growing. Install from the ClawHub view in the InTouch UI.

Tier 2

MD Skills (Native)

A markdown file with YAML frontmatter, authored directly in InTouch. The AI assistant treats the markdown as instructions and orchestrates the 42+ platform tools to carry them out. Useful for skills that need the AI's flexibility on inputs but your tools for the work.

Lives inside your server. Versioned. RBAC-gated.
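A sketch of the shape an MD skill might take. The frontmatter keys and tool names below are illustrative, not InTouch's published schema:

```markdown
---
# Hypothetical frontmatter: field names are illustrative
name: triage-ticket
description: Summarize and label a new support ticket
inputs:
  - ticket_id
---

1. Fetch the ticket body for `${ticket_id}` with the HTTP tool.
2. Classify severity and apply the matching label with the labeling tool.
3. Post a one-paragraph summary to the support channel.
```

The numbered steps are instructions for the assistant, not code: the AI decides which of the platform tools to call, but the work itself runs through your governed tools.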

Tier 3

Job Files (Jobs-as-Code)

A full job in version-controllable YAML. Tools, dependencies, output pipes, AI steps, schedules, alerts — all declared. Git-diffable. Code-reviewable. Runs deterministically without any AI in the critical path.

The form that survives "the author left the company."
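A sketch of what such a file can declare. Every field name here is hypothetical (InTouch's actual job schema may differ), but it shows why the format is git-diffable and reviewable:

```yaml
# Hypothetical job file: field names are illustrative, not InTouch's exact schema
name: weekly-usage-report
schedule:
  cron: "0 7 * * MON"           # one of the seven schedule types
steps:
  - name: export
    tool: postgres-query
    credential: warehouse-ro    # resolved from the encrypted vault
    query: "SELECT * FROM usage_weekly"
  - name: render
    tool: report-template
    depends_on: [export]
    input: ${export.output}
alerts:
  on_failure:
    notify: [ops-oncall]        # subscriber notification channel
```

Because everything is declared, a reviewer can diff a schedule change or a credential swap like any other code change.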

OpenClaw Standalone vs InTouch

OpenClaw is a great skill discovery and AI-agent format. InTouch is the enterprise runtime it doesn't ship.

| Governance Layer | OpenClaw (standalone) | InTouch |
| --- | --- | --- |
| Access control | None — user runs everything | RBAC — roles, projects, publisher permissions |
| Credentials | Shell env vars (export API_KEY=…) | Encrypted vault, never exposed to AI context |
| Audit trail | None | Full job log with timestamps, inputs, outputs |
| Scheduling | Manual invocation only | Seven native schedule types, triggers, AI triggers |
| Alerting | None | Subscriber notifications across 8 channels |
| Concurrency | None | Collision detection, exclusive job locks |
| Ownership / multi-user | Single user on a local machine | Single-tenant, RBAC within the tenant¹ |
| Execution cost | LLM tokens on every run | Zero — converted jobs run deterministically |

¹ One server per organization; users within the tenant are access-controlled via RBAC. InTouch does not run multi-tenant — that's a design choice, not a gap. It's what makes the credential vault, audit trail, and RBAC model tractable.

Discover, Convert, Operate

OpenClaw skills are never run raw in InTouch. They are always converted to native InTouch automation first — a YAML tool or a YAML job, as the AI decides based on the skill's shape. ClawHub is a discovery and import source, not a runtime.

1

Discover

Browse or search ClawHub from inside InTouch (the ClawHub view). Preview SKILL.md before installing. Filter by highlighted / non-suspicious / version / recency.

2

Convert

The AI analyzes the skill's requires block, its execution pattern (REST, CLI, script), and its credentials. It generates a YAML tool (if the skill is a single reusable operation) or a YAML job (if it's a multi-step workflow). Incompatible patterns are flagged, not faked.
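As an illustration, a REST-style ClawHub skill that fetches repository metadata might convert into a single YAML tool along these lines (the schema shown is hypothetical):

```yaml
# Hypothetical converted tool: illustrative schema only
name: repo-metadata
type: rest
method: GET
url: https://api.github.com/repos/${owner}/${repo}
credential: github-token         # vault reference; never placed in AI context
output: body.stargazers_count    # extracted field returned to the caller
```

From here on, runs hit the API directly; no LLM interprets the SKILL.md again.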

3

Operate

The converted artifact runs deterministically. Zero LLM cost per execution. Full logging, alerting, scheduling, RBAC. The AI was paid once; the job runs forever.

The Provider Is a Config Setting, Not an Architecture

The AI tool inside a converted skill — or inside any YAML job, or anywhere the assistant runs — can target any of 9 providers: Anthropic, OpenAI, Google Gemini, Mistral, Groq, DeepSeek, xAI, Hugging Face, or Ollama. Swappable per tool. The job definition doesn't change; only the tool name and credential reference do.

If Anthropic refuses your domain (financial advice, legal guidance, medical synthesis) — swap to Ollama with a local model. Zero restrictions, zero API cost, no data leaving your network. Works air-gapped if you need it.

The skill doesn't need to be rewritten. The job doesn't need to be rebuilt. The anthropic tool is replaced with ollama, and the same YAML runs on the same schedule.

# Before — Anthropic
- name: summarize
  tool: anthropic
  credential: anthropic-prod
  model: claude-sonnet-4-6
  prompt: "Summarize: ${input}"

# After — Ollama (local, air-gapped)
- name: summarize
  tool: ollama
  credential: ollama-internal
  model: llama3.2
  prompt: "Summarize: ${input}"

Four Ways to Run a Skill

From the AI assistant

In the AI assistant chat (web UI, PWA, or Android app), type @ followed by the skill name. The skill runs and the result streams back into the conversation.

On a schedule

Attach a schedule directly to a skill. No job wrapper, no ceremony. Seven schedule types available.

From a trigger

A file, folder, or AI trigger invokes the skill with event data: new files, changed dates, the trigger event type.
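A sketch of how a folder trigger might bind to a skill (the syntax is hypothetical):

```yaml
# Hypothetical trigger binding: syntax is illustrative
trigger:
  type: folder
  path: /data/incoming
  on: file-created
run:
  skill: ingest-csv
  input: ${event.path}    # event data passed to the skill
```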

REST API

POST /api/skill/run with {name, input}. Integrate from anything that can HTTP: CI pipelines, portals, custom apps.
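The request body for that endpoint is a small JSON document; the skill name and input here are placeholders:

```json
{
  "name": "ingest-csv",
  "input": "/data/incoming/report-2025-06.csv"
}
```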

Why "Convert First, Run Later" Wins

Deterministic

An AI that interprets a skill afresh every run can improvise. Converted YAML doesn't. If you need a report to come out the same way every Monday, improvisation is the enemy.

Zero Per-Run AI Cost

OpenClaw standalone consumes LLM tokens on every invocation. A converted InTouch job consumes them zero times. Over a year of daily runs, that's the difference between a real line item and a rounding error.

Fails Predictably

When the YAML job fails, the failure mode is a specific tool's exit code and error text. Compare to "the AI sometimes doesn't call the retry tool." Predictable failures are the ones you can fix.

5,000+ Skills. One Governance Layer.

Start with the free Personal edition. Browse ClawHub from inside the app. Install a skill, watch it convert, run it on a schedule, never pay an LLM bill for it again.

Get Personal Edition
See vs OpenClaw