yarnnn

How It Works

Declare. Build.
Run. Supervise.

yarnnn is an operating system for recurring knowledge work. You declare what you're trying to accomplish. Agents are created around that intent through conversation. The OS runs the operation — scheduled, connected, accumulating. You supervise from the cockpit.

00

The operating model

Every yarnnn workspace runs as an operation, not a series of queries. Three layers make it work.

Kernel

The kernel runs it

Scheduled recurrences, platform connections, and deterministic pipelines execute without you present. LLM reasoning is reserved for work that genuinely requires judgment — not arithmetic, not formatting, not retrieval.

Substrate

The substrate accumulates

Your workspace is the persistent memory of the operation — tool context, prior outputs, preferences from your edits, domain knowledge from every run. The substrate is what makes Day 90 different from Day 1.
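One way to picture the substrate: a workspace store that only ever grows, keyed by what each run contributes. A hypothetical sketch — none of these class or field names come from yarnnn itself:

```python
from collections import defaultdict

class Substrate:
    """Illustrative append-only workspace memory (hypothetical names)."""
    def __init__(self):
        self.memory = defaultdict(list)

    def record(self, kind, item):
        # kinds might include: preferences, domain_knowledge,
        # output_history, platform_context
        self.memory[kind].append(item)

sub = Substrate()
sub.record("preferences", "lead with risks, under 500 words")
sub.record("output_history", "week 1 brief")
sub.record("output_history", "week 2 brief")
# Day 90 differs from Day 1 because nothing here is discarded between runs
```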

Judgment

Judgment is independent

What agents want to do and whether they should are two separate questions. An independent layer evaluates proposed actions against your declared intent before they bind. Autonomy that you can actually trust.

01

Declare your intent

Tell YARNNN what you're trying to accomplish — a domain you want to track, a recurring deliverable you want produced, an operation you want running. Plain language. No configuration forms.

In conversation

I want a weekly competitive intelligence brief. Track three competitors, synthesize what changed, and have it in my inbox every Monday morning.

Got it. I'll create a Researcher scoped to competitive intelligence and a Writer for the brief. Once you confirm, I'll set it to run every Sunday evening so you have it Monday morning.

The mandate

What you're trying to accomplish

Your declared intent is the north star the system reasons against. Agents evaluate their own output against it. The judgment layer evaluates proposed actions against it. The operation is always trying to serve it.

Task shapes
Recurring

Runs on cadence indefinitely

Goal-bound

Runs until success criteria are met

Reactive

Fires on event or on-demand
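yarnnn's interface is conversational, not programmatic; purely as an illustration, the three task shapes could be modeled as one structure with shape-specific fields. Every name below is hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TaskShape:
    """Illustrative model of the three task shapes (hypothetical names)."""
    kind: str                      # "recurring" | "goal_bound" | "reactive"
    cadence: Optional[str] = None  # recurring: runs on this cadence indefinitely
    success_criteria: Optional[str] = None  # goal-bound: stops when this is met
    trigger: Optional[str] = None  # reactive: fires on this event or on demand

brief = TaskShape(kind="recurring", cadence="weekly")
market_scan = TaskShape(kind="goal_bound",
                        success_criteria="findings delivered until told to stop")
prep = TaskShape(kind="reactive", trigger="upcoming meeting")
```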

02

Agents are created through conversation

You don't pick from a catalog. A conversation with YARNNN is how agents come into existence — scoped to your domain, drawing from a palette of specialist roles. The team is authored, not provisioned.

The specialist palette

Five specialist roles, three platform connectors, one orchestrator. Your agents are built from them.

YARNNN drafts specialist combinations per task from a universal palette. Your domain agents are persistent entities that accumulate expertise over time.

Researcher (R): Finds and evaluates sources
Analyst (A): Synthesizes patterns and meaning
Writer (W): Drafts polished deliverables
Tracker (T): Monitors signals and changes
Designer (D): Creates charts, images & visuals
Slack (S): Reads your channels & threads
Notion (N): Reads your pages & databases
GitHub (G): Follows repos & activity
YARNNN (Y): The orchestrator you talk to

Authorship

The team is yours. Built over time.

Each agent accumulates domain knowledge, learned preferences, and output history specific to your work. The switching cost begins with the first one.

Context sources
Chat

Describe what matters

Docs

Upload files agents reference

Slack

Channels and threads

Notion

Pages and databases

03

The operation runs

Agents connect to your tools, execute on schedule, and accumulate context from every cycle — whether you're online or not. The kernel handles what's deterministic. LLM judgment handles what actually requires reasoning.

Scheduled execution

Daily, weekly, monthly — or event-triggered. Tasks run on cadence without you initiating them.

Platform-connected

Agents read fresh context from Slack, Notion, and GitHub every cycle. The substrate stays current.

Accumulating

Prior outputs feed future ones. Domain knowledge deepens with every run. The team gets better at its job.

Multi-agent example
S (Slack connector): keeps fresh internal context available each cycle

R (Researcher): adds external signals and market movements

W (Writer): synthesizes both into a finished brief

Delivered Monday 8 AM. Every week.
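The handoff above is a simple pipeline: each stage's output becomes the next stage's input, run once per cycle. A minimal sketch, with every function and string invented for illustration:

```python
def slack_context():
    # Connector stage: fresh internal context each cycle (stubbed here)
    return ["#product: pricing page shipped"]

def research(internal):
    # Researcher stage: append external signals to the internal context
    return internal + ["competitor X raised prices 10%"]

def write_brief(signals):
    # Writer stage: synthesize all signals into one deliverable
    return "Weekly brief:\n" + "\n".join(f"- {s}" for s in signals)

# S -> R -> W: each scheduled cycle runs the stages in order
brief = write_brief(research(slack_context()))
```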

04

You supervise from the cockpit

Review what ran, redirect what needs changing, and watch the operation compound over time. You work inside yarnnn, not consuming reports elsewhere. The cockpit is where the team is tuned and pending decisions are made.

Feedback loop

The weekly recap is too long. Lead with risks and keep it under 500 words.

Got it. Updated to lead with risks, capped at 500 words. That preference carries forward to every future run.

The cockpit
Overview: What's happening and what needs you
Agents: Your team's identity, health, and accumulated expertise
Work: What's running, what's produced, what's scheduled
Context: What the workspace knows, accumulated and searchable
Review: Proposed actions and the judgment trail
Independent review

Agents propose.
A separate layer judges.

What your agents want to do and whether they should do it are two separate questions — answered by two different layers. An independent judgment function reads your declared intent and principles, evaluates proposed actions, and decides whether to execute, queue for your review, or defer pending more information. This is what makes higher autonomy trustworthy rather than reckless.

Approve

If the proposed action aligns with your declared intent and falls within your delegated autonomy ceiling — the action executes. No manual approval needed.

Queue

If the action exceeds your autonomy ceiling or the judgment layer isn't confident, it surfaces in your review queue. You decide.

Defer

If the proposal has an evidence gap, the judgment layer commissions the missing research before deciding. It doesn't guess.
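The three outcomes form a simple decision rule: an evidence gap defers, alignment within the autonomy ceiling executes, anything else queues for review. A toy sketch — all field names and thresholds here are invented, not yarnnn's actual logic:

```python
def judge(action, autonomy_ceiling=0.7, confidence_floor=0.6):
    """Toy judgment rule returning 'approve', 'queue', or 'defer'.

    `action` carries hypothetical scores in [0, 1] for the evidence
    behind the proposal, its risk, and the judge's confidence.
    """
    if action["evidence"] < 0.5:
        return "defer"    # evidence gap: commission missing research first
    if action["risk"] <= autonomy_ceiling and action["confidence"] >= confidence_floor:
        return "approve"  # aligned and within delegated autonomy: execute
    return "queue"        # exceeds ceiling or low confidence: human review
```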

Why it gets better, not stale

The substrate is the moat. Not the model underneath — that's becoming a commodity. What accumulates in your workspace is what can't be replicated by starting over.

Preferences

Your structure, tone, emphasis

Learned from your edits. Every correction teaches the agent what you actually want — and carries forward to every future run.

Domain knowledge

Research, patterns, and relationships

Accumulated findings from every task run — competitors, market signals, team dynamics. Can't be replicated by switching tools.

Output history

Prior outputs feed better future outputs

Three months of accumulated work means every new output builds on everything that came before. The team compounds.

Platform context

Fresh material every cycle

Slack, Notion, and GitHub keep the workspace current. Agents always work from what's actually happening, not a stale snapshot.

What people describe

Describe the work to YARNNN in plain language. It creates the agents and sets the operation running.

Give me a weekly digest from #engineering and #product.

Every Friday, send leadership a status report as a PDF.

Track these three competitors and give me a weekly update.

Before my meetings, generate a prep brief from Slack and Notion.

Research the AI agent market and deliver findings weekly until I say stop.

Summarize my week across all platforms every Friday.

Start with one piece of work.

Describe it to YARNNN. The operation it builds will still be running — and getting better — three months from now.

Describe your work