
White Zetsu

Facilitation surface


Sasori

A scoped collaborative surface between Shay (Azure Peak) and Bernays-class agents. Free-form markdown, rendered live in the Stoka admin shell.

This is the prototype service entity. Its existence answers a single question: what does the smallest possible Tsukuyomi-mediated service look like, end to end, from permit issuance to admin rendering to agent write-access?


Why "Sasori"

In the Akatsuki taxonomy, Sasori is the reviewer — patient, precise, watches the work. As a service entity it plays the same role: a quiet shared surface where the long-running thinking about Tsukuyomi itself gets written down, reviewed, and revised.

It is intentionally not a chat. It is a document under joint authorship. Shay reads it in the admin. Bernays writes to it from the substrate. The wiki cross-references it. Other contributor Claudes — once their skills are wired — will be able to read it too.


What this entity tests

The conversation produced a four-tier trajectory for Tsukuyomi (Tier 0 is the pre-existing baseline):

| Tier | What it gives a contributor | Sasori's role |
|------|-----------------------------|---------------|
| 0 | Today: multiple URLs, manual SSH, manual tmux (baseline) | |
| 1 | Stoka admin tells you where to go | Sasori is the simplest possible Tier 1 card |
| 2 | Stoka admin embeds a terminal/panel | Sasori could promote to a live-editable panel |
| 3 | Full in-browser dev environment | Sasori becomes one of many panels |
| 4 | Self-service service creation | Sasori was the first one created |

By existing at all, Sasori validates Tier 1 end-to-end:

  1. Stoka registers the entity (tab_grants row + TAB_REGISTRY entry)
  2. Stoka renders the card to the right wallets only
  3. Tsukuyomi-side substrate (the markdown file) lives on edward, bind-mounted into the container
  4. A Bernays-class agent can write to it freely without a deploy
  5. The wiki has a place to point to it

Hard architectural choices still open

These are the forks identified in the conversation. Sasori does NOT settle them — it just makes the conversation easier to have.

  1. Whose Claude runs in a contributor's tmux? A: their own (best), B: laptop-only + MCP, C: substrate-shared (what we have today), D: hybrid.
  2. How sandbox-strict is the dev tier? Pattern 1's docker-group root-equiv is unresolved. Rootless docker vs socket-proxy vs no-docker-from-IDE.
  3. Service registry grain. Per-service or per-sub-service? Sasori is one card; future "Sheeran" might be many.
  4. Per-machine awareness. Cards may target edward, VPS, corner0, or specific worktrees.
  5. Skill discovery protocol. Wiki manifest? MCP discovery endpoint? Hand-distributed?
  6. Cross-contributor coordination. Pull (poll wiki/MCP) or push (notification fabric)?
  7. Credential lifecycle. Browser session vs per-MCP token — two layers, different lifetimes.

Substrate facts

  • Markdown source: /home/edward/public_apps/StokaSoftware/layer2/sasori/notes.md (host) / /app/layer2/sasori/notes.md (container)
  • Bind mount: ./layer2:/app/layer2:rw in docker-compose.yml — edits are live, no rebuild
  • Tab id: sasori
  • Page: /admin/sasori
  • Renderer: existing src/lib/markdown.ts (marked + DOMPurify, sanitized; sketched after this list)
  • Permit holder (today): Azure Peak (0x137df237acc6d90559fff498d3313aeef4dc5b5c, genesis admin)
  • Bernays-side skill: ~/.claude/skills/sasori/SKILL.md — instructs future sessions how to read/write this surface
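
A minimal sketch of that sanitized render path, assuming marked plus a server-usable DOMPurify build (isomorphic-dompurify here); the real src/lib/markdown.ts may differ in exports and options:

```ts
// Hedged sketch of the zetsu render path: parse markdown, then sanitize the
// resulting HTML so nothing in a zetsu file can inject script into the admin.
// Function name and DOMPurify packaging are assumptions, not the real module.
import { marked } from "marked";
import DOMPurify from "isomorphic-dompurify";

export function renderZetsu(markdown: string): string {
  const html = marked.parse(markdown, { async: false }) as string;
  return DOMPurify.sanitize(html);
}
```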

Open log

Bernays writes here as the conversation evolves. Treat each block as append-only unless explicitly revising.

2026-05-06 — Entity created

First instantiation. Substrate file, admin nav entry, tab registry row, DB grant, Astro page, and Bernays-side skill prepared. Deploy is held — admin/admin_tabs.py and src/ changes need a docker compose up -d --build + ./scripts/build-site.sh cycle to land in production. Layer2 markdown content (this file) is already live-readable from inside the container the moment the page route exists.

Tier 1 hypothesis under test: the smallest viable contributor service is a markdown surface plus a permit grant. If reading and editing this through the admin feels right, Tier 2 (embedded terminal) is the next entity to instantiate. If it feels wrong — too thin, too disconnected, too read-only — that's the signal to redesign before scaling the pattern.

2026-05-06 — Admin nav reset to Akatsuki-only

Shay called for a clean slate. Non-Akatsuki tabs (Specs, Supporters, Crypto pending, Users, Permits, Stripe, Treasury, Newsletter, Submissions, Tracker, Monologue, Staged) were removed from the admin frontend nav. Their .astro pages and admin_tabs.py registry entries were retained as code reference — the rebuild will happen under the new substrate model, not by editing the legacy in place.

Surviving nav: Konan, Deidara, Sasori. Three tabs, all Akatsuki-named, deliberately spare. The point is that the admin shell now reflects the future doctrine — Stoka admin is a launcher for Tsukuyomi-mediated services — not the legacy doctrine where each tab was a hand-built feature page.

2026-05-06 — Role / Tab / Service: three separate entities

New foundational principle.

  • Role — what kind of contributor a user is. A property of the user. Future first-class concept; schema TBD. Examples might be genesis, trusted-dev, marketing, researcher, viewer. A user has 0..N roles.
  • Tab — a UI surface in the Stoka admin shell. A property of the frontend. Examples: Konan, Deidara, Sasori. A tab is rendered or hidden based on the viewing user's roles, but the tab itself does not encode a role.
  • Service — a Tsukuyomi-mediated capability that the substrate materializes for permitted users. A tab MAY be the front-end of a service, but services can also exist without a tab (e.g., a tmux session, an MCP endpoint, a sandbox profile).

Why the separation matters: the legacy admin.tab_grants table directly couples wallets to tabs. That collapses three entities into one and makes role-based reasoning impossible. "Kyle is a trusted-dev" should be a single fact, not a fan-out of 12 tab grants. "Genesis sees everything" should be a single rule, not a backfill of every tab id into one wallet's row.

The new model (to be designed): users.roles[] + services registry + services.required_roles[] + tabs computed from services filtered by user roles. tab_grants becomes legacy / migration source.
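
A hedged sketch of what that derivation could look like once the registry exists; every name below is illustrative, not the designed schema:

```ts
// Illustrative only: roles on users, a services registry, tabs computed per user.
type Role = "genesis" | "trusted-dev" | "marketing" | "researcher" | "viewer";

interface Service {
  id: string;                // e.g. "sasori"
  tab?: { path: string };    // a service MAY front a tab; many will not
  requiredRoles: Role[];
}

function visibleTabs(userRoles: Role[], services: Service[]): Service[] {
  // "Genesis sees everything" is one rule here, not a per-tab backfill.
  if (userRoles.includes("genesis")) return services.filter((s) => s.tab);
  return services.filter(
    (s) => s.tab && s.requiredRoles.some((r) => userRoles.includes(r)),
  );
}
```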

This was crystallized as a wiki decision on 2026-05-06.

2026-05-06 — Wiki crystallization

The conversation reframe (Tsukuyomi as Stoka's capability plane), the four-tier trajectory, the seven open architectural choices, the Sasori prototype, the Akatsuki-only nav doctrine, and the role/tab/service separation were all written into the permanent llm-wiki layer at /home/edward/wiki/wiki/:

  • New entity: entities/Tsukuyomi-Substrate.md — the new big-picture doctrine
  • New entity: entities/Sasori-Service.md — Sasori specifically
  • Updated: entities/Software-Tsukuyomi.md — old framing marked superseded
  • Updated: entities/StokaSoftware.md — admin section now reflects Akatsuki-only nav + role separation
  • New decision: decisions/admin-nav-akatsuki-only.md
  • New decision: decisions/role-vs-tab-separation.md

Future Bernays-class sessions read the wiki to re-enter this thread. The wiki is now the authoritative version; this Sasori file is the running journal.

2026-05-06 — Deidara wrapped, Konan-shell parity for all 3 nav tabs

Deidara was external (deidara.phillyshell.cloud); now wrapped in /admin/deidara as an iframe inside the Konan-shell. CSP frame-ancestors 'self' https://stokasoftware.com swapped in for X-Frame-Options: SAMEORIGIN on phillyshell Caddy. Auth handoff via URL hash (fragment-only, never in HTTP referer).
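
For reference, a minimal sketch of the fragment handoff on the receiving side; the parameter name and sessionStorage choice are assumptions, the fragment-never-in-Referer property is the point:

```ts
// Tokens in the URL hash are never sent to the server or in the Referer
// header; read once, stash, then scrub the fragment from the address bar.
function readHandoffToken(): string | null {
  const params = new URLSearchParams(window.location.hash.slice(1));
  const token = params.get("token"); // parameter name assumed
  if (token) {
    sessionStorage.setItem("deidara_token", token);
    history.replaceState(null, "", window.location.pathname + window.location.search);
  }
  return token ?? sessionStorage.getItem("deidara_token");
}
```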

Sasori was light-by-default; now always-dark to match Konan (#0a0a0a body, .admin-layout > .admin-shell, rgba(255,255,255,*) palette, rgba(16,185,129,*) accent, AdminKonanBubble bottom-right, AdminNav top).

All three Akatsuki nav tabs now visually coherent. Deidara's deeper Tier-2 substrate alignment (real role gating, embedded terminal pattern) deferred until BZ campaign delivers roles.

2026-05-06 — Sasori panel evolves into a question/answer surface

Sasori grew from "shared markdown surface" into "shared markdown + agent-blocked-question surface" today. Added:

  • Left sidebar matching the Konan-tab AdminSidebar conventions — sections (Notes / Pending / Archive), .side-list + .side-item cards with dot+label+count, sticky 260px rail.
  • Each layer2/sasori/questions/*.md is a switchable file-pane in the main shell. Click any sidebar item to load that pane; URL hash routing (#q-0001) works for direct links.
  • Inline answer form rendered for any question with a status: pending frontmatter and an answers: schema. Radio cards use a subtle green-outline glow on selection (no native radio dots — visually-hidden inputs preserve form semantics).
  • Lock & Submit button at the end of each form. POSTs to /api/admin/sasori/questions/{id}/answer (genesis-only). Endpoint flips frontmatter to status: answered, appends ## Answer captured to the body, writes a JSONL entry to layer2/sasori/answers-queue/pending.jsonl.
  • Auto-detection wired: ~/.claude/skills/sasori/check-pending.sh reads the JSONL vs. a cursor and surfaces new locks. Wired as a Claude Code UserPromptSubmit hook so every prompt auto-checks. Backup of pre-hook settings at ~/.claude/settings.json.bak.sasori-hook.
  • Q0001 (Sheeran integration scoping) banked + answered; Q0002 (D-06 + Deidara confirm) banked + pending.

The pattern itself is a substrate primitive: long-form agent questions go to a durable, browseable surface; Shay locks them via UI; agents pick up the answers automatically. No more Telegram dumps.
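
A hedged sketch of the lock write path, assuming gray-matter for frontmatter handling; the real endpoint is genesis-only and its internals may differ:

```ts
// Illustrative lock path: flip frontmatter, append the captured answer,
// emit one JSONL line that check-pending.sh can diff against its cursor.
import { readFile, writeFile, appendFile } from "node:fs/promises";
import matter from "gray-matter";

export async function lockAnswer(questionPath: string, queuePath: string, answer: string) {
  const q = matter(await readFile(questionPath, "utf8"));
  if (q.data.status !== "pending") throw new Error("question is not pending");

  q.data.status = "answered";
  const body = `${q.content}\n\n## Answer captured\n\n${answer}\n`;
  await writeFile(questionPath, matter.stringify(body, q.data));

  const line = { question: questionPath, answeredAt: new Date().toISOString() };
  await appendFile(queuePath, JSON.stringify(line) + "\n");
}
```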

2026-05-06 — Sheeran integration: native port into Deidara tab (LOCKED)

Q0001 outcome: Q1 picked A_then_C (incremental iframe → C), but the Q2 notes superseded that with "i think i want to handle the porting of sheeran into a native stokasoftware deidara tab right now." Confirmed in chat. Decision is locked: native port, not iframe. The previously-sketched S.1–S.5 iframe track in BZ PLAN.md is superseded; a fresh per-surface scoping pass for Sheeran's surfaces (Compositions 2, Voice Studio, Brand, etc.) is deferred to a future question card.

The /admin/deidara iframe → deidara.phillyshell.cloud becomes a temporary way-station that progressively gets replaced by native Stoka surfaces. Sheeran the standalone app keeps running for partner workflows (Kyle/Rahul) — no forced migration.

2026-05-06 — Pre-compact handoff

Shay flagged compact incoming. Anticipated next thread: Sasori panel modifications to make the surface faster to use AND to facilitate Deidara-panel reconstruction through agent-driven workflow. The exact modifications are TBD post-compact. State is locked in:

  • BZ campaign STATE.md / DECISIONS.md / RUNLOG.md / PLAN.md all current
  • Q0001 + Q0002 in layer2/sasori/questions/, both visible in /admin/sasori sidebar
  • Auto-trigger hook live
  • Skill descriptions current

A clean post-compact resume is one /black-zetsu skill invocation away.

2026-05-06 — Black Zetsu Campaign initiated

Shay declared the role/wallet substrate work (formerly listed as one of the "open work" items in Tsukuyomi-Substrate) foundational — determinant of scope for basically every other Tsukuyomi/Stoka goal. Spun up a long-running campaign to ship it.

Wiki state: wiki/campaigns/black-zetsu/{README,PLAN,STATE,DECISIONS,RUNLOG}.md + wiki/entities/Black-Zetsu-Tab.md. Operational primitive: /black-zetsu skill (registered, harness-discoverable).

Phases: 0 (decision lock) → A (schema + read API + UI) → B (grant/revoke) → C (realtime) → D (MCP) → E (tab gating migration) → F (cascading hooks + self-service inbox) → G (cross-machine JWT, deferred).

Currently in Phase 0 — 7 architectural decisions to lock before any code. Next session: invoke /black-zetsu to walk through DECISIONS.md row-by-row.

The legacy admin.tab_grants (the Sasori bridge) will stay operational until BZ Phase E migrates the tab-gating logic to derive from roles. Sasori's role mapping is provisional in its entity page; firm it up after Phase 0 settles role granularity (D-01).

2026-05-06 — Sasori → White Zetsu rename + first concrete deliverable

Sasori-the-tab is now White Zetsu-the-tab. Same physical surface, reframed conceptually:

  • Old framing: agent Q&A surface (Shay ↔ Bernays-class agents)
  • New framing: multi-modal facilitation panel — human↔human, human↔AI, human↔AI↔human, configurable, per-role with proper read/write scoping. Every role gets its own WZ panel.

Lore-clean: White Zetsu = plurality / multi-instance worker (fits "every role gets one"); Black Zetsu remains the singular meta-administrative will (the BZ tab).

New nomenclature: the markdown documents inside layer2/white-zetsu/ are now called "zetsu files" going forward. notes.md is a zetsu file. Each questions/<NNNN>-<slug>.md is a zetsu file. Soon: each meetings/<name>.md will be a zetsu file.

First tunnel-vision deliverable (Shay's framing): Discord meeting bot.

  • Input surface: Discord bot in the company voice call
  • Output surface: White Zetsu page — one dedicated zetsu file per meeting under meetings/
  • Function: live AI note-taker / record-keeper for meetings, on-demand
  • Why this is the right first deliverable: forces the abstract WZ infrastructure to be real (per-role scoping, file-pane sidebar, live updates), proves the substrate, ships visible value

Q0003 banked with 7 scoping decisions + 1 catch-all. Awaiting UI lock.

Per-user-tmux + multi-role-WZ-scope decisions deferred past this deliverable (they were the original Q0003 plan; reprioritized after Shay's "tunnel-vision toward Discord meeting bot" steer).

2026-05-06 — Discord meeting-bot v1 scaffolded

Q0003 verbally accepted (recommended defaults across all 7 questions). v1 build complete:

  • Service: services/discord-meeting-bot/ — py-cord client, slash commands (/wz-record-start, /wz-record-stop, /wz-record-status, genesis-only authz), faster-whisper local + OpenAI Whisper API fallback, custom LivePCMSink for in-memory per-speaker buffering, live-partial transcription every 15s, finalize-pass at stop, atomic zetsu file writes
  • Compose: discord-meeting-bot service added under the wz profile so existing rebuilds don't try to pull a GPU image. Mounts layer2/white-zetsu/meetings/ directly so writes show up in the WZ UI without a sync layer. NVIDIA runtime declared
  • WZ admin page: new Meetings sidebar section (recording = pulsing red dot, finalized = static blue), one file pane per meeting with frontmatter-driven kicker (Meeting · channel · status)
  • Storage: layer2/white-zetsu/meetings/<YYYY-MM-DD-HHMM-channel-slug>.md
  • Setup doc: services/discord-meeting-bot/README.md walks Shay through Discord developer-app creation, bot invitation, token + user/guild ID extraction, .env population

Shay-blocked next step: create Discord app, drop token + IDs into .env, then docker compose --profile wz up -d --build discord-meeting-bot. README has the click-by-click steps.

Sample meeting at meetings/2026-05-06-2230-sample-test.md to validate the WZ sidebar rendering — delete once a real meeting appears.

2026-05-06 — Path C live: wz-whisper-service deployed to corner0

Discord meeting bot now uses corner0's idle GTX 1070 Ti for transcription instead of edward (whose 3090s are claimed by Qwen3-VL). Cloud Whisper API stays as a fallback, not a primary.

corner0 service (/root/wz-whisper-service/):

  • FastAPI on 100.99.51.87:9099 (tailnet-only)
  • faster-whisper large-v3-turbo, compute_type=int8 (Pascal can't do efficient float16 — int8 works clean, ~2.3s model load)
  • Bearer-token auth (token saved at /tmp/wz-whisper-token.txt for this session — must be dropped into edward's bot .env)
  • Smoke test: 11s JFK clip → 1.2s transcribe (~9x realtime) on the 1070 Ti, perfect transcription
  • 980 MB VRAM peak, 7+ GB headroom remaining

edward (bot service):

  • transcribe.py adds _transcribe_remote() (httpx multipart POST)
  • New default strategy remote_first_cloud_fallback: corner0 first, OpenAI Whisper API on remote failure
  • Bot Dockerfile dropped from CUDA base to python:3.11-slim — no GPU needed for the bot itself anymore
  • Compose: GPU reservation removed; bot stays under wz profile

Verified: bot's Transcriber class talking to corner0 returns identical results to direct curl. End-to-end Path C operational.
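
The real helper is Python (httpx); this TypeScript sketch just shows the request shape and the remote-first fallback. Endpoint path and response field are assumptions:

```ts
// Multipart audio upload with bearer auth against the corner0 service;
// any remote failure drops through to the injected cloud fallback.
const WHISPER_URL = "http://100.99.51.87:9099/transcribe"; // path assumed

async function transcribeRemoteFirst(
  audio: Blob,
  token: string,
  cloudFallback: (audio: Blob) => Promise<string>,
): Promise<string> {
  const form = new FormData();
  form.append("file", audio, "clip.wav");
  try {
    const res = await fetch(WHISPER_URL, {
      method: "POST",
      headers: { Authorization: `Bearer ${token}` },
      body: form,
    });
    if (!res.ok) throw new Error(`remote whisper: ${res.status}`);
    return (await res.json()).text; // response field assumed
  } catch {
    return cloudFallback(audio); // OpenAI Whisper API, not the primary
  }
}
```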

Shay-blocked: Discord bot token + ID setup, drop WHISPER_REMOTE_TOKEN=50386fda4cad2bf4dcbe8f2884e5420c8cfceff24a35bb9cd13cc45b4a653aad into bot .env, docker compose --profile wz up -d --build discord-meeting-bot.

2026-05-07 — First public handoff zetsu published

New zetsu file type: handoffs/. Files with public: true in frontmatter render at https://stokasoftware.com/wz/<slug> — no admin auth, no admin nav, clean reader layout. Files without public: true 404 on the public route (still visible in admin). Admin sidebar gained a Handoffs section + per-file pane with a copy-link affordance.
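
A minimal sketch of the gate, assuming gray-matter and a handler-style route; file layout and helper names are guesses, only the public: true rule comes from this entry:

```ts
// Handoffs without `public: true` 404 on the public route but stay
// readable in the admin. Slug is assumed pre-validated upstream.
import { readFile } from "node:fs/promises";
import matter from "gray-matter";

export async function loadPublicHandoff(slug: string): Promise<string | Response> {
  const raw = await readFile(`layer2/white-zetsu/handoffs/${slug}.md`, "utf8");
  const { data, content } = matter(raw);
  if (data.public !== true) {
    return new Response("Not found", { status: 404 });
  }
  return content; // rendered through the same sanitized markdown pipeline
}
```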

First publication: 2026-05-07-akatsuki-roles.md — substrate role-naming explainer for Kyle (Pain analogy + Madara/Itachi/Deidara org chart).

2026-05-07T21:20Z — Preprod "unable to connect" drift post-mortem

Edward saw "unable to connect to server" when opening the Debono Preprod card from apps.phillyshell.cloud (with dev-toggle on, URL = https://debono-preprod.phillyshell.cloud/). Pinpointed root cause:

Drift. The frontend/src/api/client.js::getApiUrl() hostname ladder had explicit cases for debono-dev / dev2 / dev3 / dev4.phillyshell.cloud — each returning https://${host}, so the FE goes same-origin and the VPS Caddy /api/* reverse_proxy forwards to the upstream BE. When the debono-preprod stack was spun up earlier today via /spinup-app, the Caddy block + DNS + portal entry all got wired correctly — but debono-preprod.phillyshell.cloud was never added to client.js. Loading over HTTPS via the tunnel fell through the ladder to the REACT_APP_API_URL default (http://100.111.71.48:7003), which hit:

  1. Mixed content — HTTPS page → HTTP API → browser blocks
  2. From phone, http://100.111.71.48:7003 is reachable over tailnet but blocked anyway by the mixed-content rule

Fix. 5-line addition to client.js mirroring the dev2/dev3/dev4 cases. Commit 1b38703c. Verified via curl --resolve against 100.95.159.55 (VPS tailnet IP): FE meta sha === BE sha, /api/version + /api/auth/login both route correctly through Caddy.

Lesson — for the spinup-app skill. When spinning up a new debono-* stack via /spinup-app, the Caddy/DNS/portal phases are codified but the frontend api/client.js host-ladder addition is NOT yet in the recipe. Should be: any new tailnet-gated stack that uses the Caddy /api/* proxy pattern needs a matching host === '<new-host>' clause returning https://${host}. Otherwise the FE will load but every call fails. Will surface this gap next time /spinup-app runs a tailnet app variant.
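
A hedged reconstruction of the ladder and the missing clause; the real client.js uses explicit host === cases, so the array form below is a restructuring, not the committed code:

```ts
// Same-origin for every tailnet-gated host behind the Caddy /api/* proxy;
// anything that falls through hits the plain-HTTP default and gets blocked
// as mixed content on an HTTPS page.
function getApiUrl(host: string = window.location.hostname): string {
  const sameOrigin = [
    "debono-dev.phillyshell.cloud",
    "dev2.phillyshell.cloud",
    "dev3.phillyshell.cloud",
    "dev4.phillyshell.cloud",
    "debono-preprod.phillyshell.cloud", // the clause that was missing (1b38703c)
  ];
  if (sameOrigin.includes(host)) return `https://${host}`;
  // CRA inlines this at build time; default shown as in the post-mortem.
  return process.env.REACT_APP_API_URL ?? "http://100.111.71.48:7003";
}
```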

Pinned for future me: if Edward says "unable to connect" on a *.phillyshell.cloud subdomain, FIRST check client.js for the host clause before assuming Caddy / DNS / cert / cookie.

2026-05-07T21:30Z — Build-context drift POST-MORTEM (real root cause)

Earlier post-mortem at T21:20 was incomplete. Actual root cause found:

The client.js host-clause fix I committed at 1b38703c never landed in the served bundle. Reason: /home/edward/public_apps/Debono/docker-compose.preprod.yml had frontend.build.context: ./frontend and backend.build.context: ./backend — relative paths that resolved to /Debono/frontend and /Debono/backend (main worktree, on branch preprod/admin-workspace-cleanup). My fixes were in the campaign worktree at /home/edward/public_apps/.parity-preprod-wt, on branch campaign/public-mini-games-substrate.

When I ran docker compose -f docker-compose.preprod.yml build frontend from /Debono, compose built from /Debono — silently dropping every campaign commit. The --build-arg BUILD_SHA=1b38703c was just stamped into the HTML; the actual JS came from main's old code. Bundle hash: app.135aef21.js (stale). Tunnel served stale bundle. Edward saw "unable to connect" because the stale bundle's getApiUrl() had no debono-preprod clause.

Fix codified. Commit 79aec656 on /Debono/docker-compose.preprod.yml: both backend.build.context and frontend.build.context now point at ../.parity-preprod-wt/{backend,frontend}. Future preprod builds pull from the campaign worktree by default, regardless of which dir compose runs from.

Verified: bundle re-hashed to app.31872f2b.js, contains both debono-preprod (host-clause fix) and hint-sidebar-rail (sidebar glow redesign). Tunnel reachability: 200 on /, 200 on /api/version returning matching SHA, login flow works.

Two sub-rules now in standing memory:

  1. feedback_preprod_build_context_invariant.md — preprod compose builds from .parity-preprod-wt, ALWAYS. Probe bundle for expected fix-string post-build; build-sha tag is not proof.
  2. feedback_rebuild_be_with_fe_always.md — even FE-only diffs rebuild both services at the same SHA (loop-prevention invariant).

Cross-compact reliability for preprod is now load-bearing on both rules + the codified compose context. Future agents inherit them automatically via MEMORY.md.

2026-05-07T22:05Z — Two reports for Edward (banked to WZ)

1. DLC concept-card carousels — diagnosis (data NOT lost)

Data is fine. Codex confirmed identical populated dlc_concept_packs in BOTH prod (postgres) and preprod (debono_preprod):

  • MPJE: 1436 cards / 6 areas
  • NAPLEX: 719 cards / 15 areas
  • PTCB: 72 cards / 4 areas

Your prewarm content is intact in both DBs. Not the slug-shape recurrence either — wiki/probes/dlc-prewarm-coverage.sh PASS.

Bug is in the request path. GET /api/dlc/{dlc}/carousel is read-only by design (locked by test_get_dlc_carousel_is_read_only_when_uninitialized). Returns whatever's in user_dlc_treadmill.active_card_ids. Frontend ConceptCardCarousel.jsx only calls GET, never POST /api/dlc/{dlc}/treadmill/init. Any entitled user without a pre-existing treadmill row sees [].

Prod-affected counts (entitled users missing the unscoped treadmill row): MPJE 3/6, NAPLEX 4/5, PTCB 1/2. So real users hit this on first visit. Same on preprod.

Fix is ~10 FE lines: add initializeDlcTreadmill() to frontend/src/api/dlcCarousel.js, call it from ConceptCardCarousel after an empty GET, refetch. Preserves BE contract + test. Full diagnostic at /tmp/codex-dlc-carousel-result.md.
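
The shape of that fix, sketched with assumed helper signatures (the report names initializeDlcTreadmill and frontend/src/api/dlcCarousel.js; the wiring around them is a guess):

```ts
// GET stays read-only by design; an empty result triggers a one-time
// treadmill init, then a refetch. Preserves the BE contract and its test.
declare function getDlcCarousel(dlc: string): Promise<Card[]>;       // GET /api/dlc/{dlc}/carousel
declare function initializeDlcTreadmill(dlc: string): Promise<void>; // POST /api/dlc/{dlc}/treadmill/init
interface Card { id: string }

async function loadCarousel(dlc: string): Promise<Card[]> {
  let cards = await getDlcCarousel(dlc);
  if (cards.length === 0) {
    await initializeDlcTreadmill(dlc);
    cards = await getDlcCarousel(dlc);
  }
  return cards;
}
```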

Need your greenlight to ship this fix to preprod (then your eyeball, then prod).

2. Modal-viewport refactor — synthesized opinion

Path B + a single <GameViewportModal> primitive is the right move. Codex and I agree, and exa-search confirms Radix UI's Dialog anatomy is the same shape (modern best-of-breed converged here in 2024-25).

Shape (typed sketch after the list):

  • One primitive driven by props for everything that varies
  • accent="vaccine|infection|molecule|boss|diamond|cram|equipment" + accentRgb escape hatch (Equipment's keystone-derived RGB doesn't fit enum)
  • portalSelector configurable, fallbackToBody opt-in, bodyFallbackBehavior ("immediate-then-retarget" preserves BossMap's dead-click-free behavior)
  • closeLocked + onRequestClose(reason) — caller computes mid-round lock, primitive doesn't know game internals
  • closeButtonHidden / renderHeader={false} — BossMap intentionally has no X; Cram has its own header
  • useMapViewportPortalTarget hook factors out the RAF retry + MutationObserver target-removal watching
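
Collected as a typed sketch (names come from the bullets above; exact types are assumptions):

```ts
type Accent =
  | "vaccine" | "infection" | "molecule" | "boss" | "diamond" | "cram" | "equipment";

interface GameViewportModalProps {
  accent: Accent;
  accentRgb?: string;            // escape hatch: keystone-derived RGB outside the enum
  portalSelector?: string;
  fallbackToBody?: boolean;      // opt-in
  bodyFallbackBehavior?: "immediate-then-retarget"; // keeps BossMap dead-click-free
  closeLocked?: boolean;         // caller computes mid-round lock
  onRequestClose?: (reason: string) => void; // reason taxonomy TBD
  closeButtonHidden?: boolean;   // BossMap intentionally has no X
  renderHeader?: boolean;        // Cram supplies its own header
}
```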

Path A vs B:

  • Path A (z-lift, S-effort): elevate map-internal rails to z>100 with pointer-events: none during modal sessions. Cheap. Lies to assistive tech (chrome appears interactive but is inert). Band-aid.
  • Path B (DOM layer split, M-effort): restructure .deck-map-area into <map-chrome-layer> + <map-modal-root> siblings. ~8 files touched. Honest. Future-proof. Recommendation.

Why this primitive shape: it eliminates the 5 current shapes (Cluster 1 portal w/RAF, Cluster 1 portal sync, body-portal full-viewport, inline replacement, divergent intake) without special cases. Configurable knobs cover every existing edge case.

Validation: Radix UI Dialog uses Root/Portal/Overlay/Content/Title/Description/Close as separate slots — accent + behavior via composition not config explosion, portal target overridable, asChild for theming. We're effectively building the Debono-flavored Radix Dialog. Full landmark with 45-component inventory + 11 ship gates lives at wiki/wiki/topics/Map-Viewport-Game-Modal-Canon.md.

Decision needed from you: (a) ship the DLC carousel init fix to preprod next, (b) start the modal refactor on preprod, or (c) both in parallel.
