# Meeting Facilitation Agent 

**A decision-matrix PRD for research development teams**

---

## How to use this document

This document describes a system — a meeting facilitation agent with persistent institutional memory — in generic component terms. For each component, it presents the decisions a team must make before building.

Workflow:

1. Review this document with the build owner, IT partner, and/or development team.
2. At each **Decision Point**, evaluate the options, select one, and record the choice in the **Your Decision** field.
3. The completed document is the input specification for the build, regardless of who implements it or what tools they use.

The architecture is portable across stacks. Specific tools, models, and vendors are listed only as examples within option tables.

---

## 1. Problem statement

Existing meeting AI tools provide transcription and post-meeting summarization. They do not facilitate the meeting itself, and they do not retain institutional context across meetings: prior decisions, recurring action items, stakeholder histories, project state, or per-person communication norms.

This system addresses both gaps. The Facilitation Layer operates during the meeting to keep time, capture decisions, surface topics that re-surface from earlier meetings, and prompt the host when conversation goes off-track or stalls. The Memory Layer captures meeting content using existing tooling and writes structured records to a persistent store. Subsequent prep packets and recaps draw context from that store.

The system is composed of an agent and a persistent memory store. The memory store holds institutional context. The agent reads from and writes to the store, and runs facilitation logic during meetings. Humans curate the store's contents and confirm the agent's facilitation outputs.

---

## 2. Outcomes

The system produces four outputs on a recurring basis:

- **Prep packet.** Delivered to each attendee before each facilitated meeting. One page per attendee, scoped to that attendee's role and seniority. Contents: meeting purpose, prior decisions on relevant projects, the attendee's open commitments, and topics requiring their input.
- **Live facilitation.** During each facilitated meeting, the agent tracks the agenda, captures decisions, detects re-surfacing topics, and prompts the host when conversation goes off-track or stalls. Output is surfaced to the host (or to the meeting itself, depending on Decision 4).
- **Personalized recap.** Delivered to each attendee within a defined window after the meeting ends. Contents: per-attendee action items, commitments, and meeting outcomes framed for the attendee's role and project involvement. Generated after the post-meeting questionnaire confirms decisions and action item ownership.
- **Memory store updates.** Each processed meeting writes new Decision and Action Item records. Scheduled reconciliation jobs update Person and Project records based on recent activity.

---

## 3. System architecture (generic)

The system has six logical components and one cross-cutting concern. The decisions in Section 4 determine the implementation of each.

```
┌─────────────────┐     ┌──────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│  Capture Layer  │────▶│ Intelligence     │────▶│  Facilitation    │────▶│ Delivery Layer  │
│  (transcripts)  │     │ Layer (model)    │     │  Layer (live)    │     │ (recipients)    │
└─────────────────┘     └────────┬─────────┘     └────────┬─────────┘     └─────────────────┘
                                 │                        │
                                 ▼                        │
                        ┌──────────────────┐              │
                        │  Memory Layer    │◀─────────────┘
                        │  (persistent     │
                        │  schema-driven   │
                        │  store)          │
                        └────────┬─────────┘
                                 │
                                 ▼
                        ┌──────────────────┐
                        │  Orchestrator    │
                        │  (triggers,      │
                        │  routing, state) │
                        └──────────────────┘

  Cross-cutting concern: Data Sensitivity Tiering
  (governs choices in every layer above)
```

Each component is specified below with its purpose, requirements, and decision points.

---

## 4. Components and decision points

### 4.1 Capture Layer

**Purpose.** Records meeting audio and produces a transcript and structured summary readable by downstream components.

**Functional requirements.**
- Captures audio from the team's primary meeting platform.
- Produces a time-stamped transcript with speaker attribution.
- Produces an initial AI-generated summary and proposed action items, or defers this to the Intelligence Layer.
- Exposes captured content to the Orchestrator via an API, connector, file drop, or equivalent integration.
- For Live mode (Section 4.4), produces a transcript stream with latency low enough to support real-time facilitation.
- Conforms to the institution's data governance policy (see Section 5).

**Non-functional requirements.**
- Post-meeting transcripts available within 15 minutes of meeting end.
- Live transcript stream latency: under 10 seconds from speech to text availability when supporting the Facilitation Layer.
- Speaker attribution sufficient for downstream commitment attribution.
- Consent model conforming to institutional policy on AI in meetings.

#### Decision Point 1 — Capture mechanism

| Option | Description | Strengths | Trade-offs |
|---|---|---|---|
| **A. Native AI feature of the meeting platform** | Built-in AI summary/transcript feature of the platform in use (e.g., Zoom AI Companion, Teams Premium intelligent recap, Google Meet summarization) | Bundled cost; data remains with existing vendor; institutional contract may already cover it | Tied to a single platform; quality varies by vendor; programmatic access may be limited; live stream availability depends on vendor |
| **B. Third-party meeting bot** | A bot service (e.g., Recall.ai, Meeting BaaS, Vexa) that joins meetings and streams transcripts | Cross-platform with one integration; supports real-time transcript streaming | Adds a third-party processor to the data flow; per-recording cost; engineering overhead for webhooks and retries |
| **C. Self-hosted speech-to-text** | A speech-to-text service or model (e.g., OpenAI Whisper API, locally-hosted Whisper, AssemblyAI) processing audio captured by the host machine or a companion app | No platform integration required; works regardless of meeting platform; supports IT-bypass companion app architecture (Section 4.4) | Requires audio capture mechanism; per-minute cost (cloud) or compute cost (local); speaker attribution may require additional logic |
| **D. Manual file drop** | Platform-generated transcripts placed in a watched folder by a human or sync rule | No new vendor; works in restricted IT environments | Not real-time; requires a human step; limited metadata; incompatible with Live mode |

**Selection criteria.** Single-platform team with an approved native AI feature: Option A. Cross-platform team or real-time access required: Option B (alone or layered on A). Live facilitation via companion app without platform integration: Option C. Institutional policy prevents A, B, and C: Option D (post-meeting workflows only).

**Your decision:** _________________________________________________

**Your reasoning:** _________________________________________________

---

### 4.2 Intelligence Layer

**Purpose.** Reads transcripts and memory store content, produces structured outputs (Decision and Action Item records) and unstructured outputs (prep packets, recaps, and live facilitation prompts).

**Functional requirements.**
- Accepts a transcript and a context payload of relevant Person, Project, Decision, and Action Item records.
- Produces structured outputs conforming to the schema in Section 4.3.
- Produces per-attendee text outputs varying by role, seniority, and communication style.
- Operates over an API or comparable programmatic interface.
- Supports short-cadence inference (sliding-window prompts every 30–60 seconds) when supporting the Facilitation Layer in Live mode.

**Non-functional requirements.**
- Reasoning quality sufficient to extract decisions and action items at a precision and recall acceptable for human editorial review. Low-confidence outputs flagged with `status: draft`.
- Inference latency under 5 seconds per facilitation prompt to support real-time host interaction.
- Data handling terms compatible with the sensitivity tier defined in Section 5.
- Cost model compatible with the team's meeting volume.

#### Decision Point 2 — Model and access path

| Option | Description | Strengths | Trade-offs |
|---|---|---|---|
| **A. Direct API access to a frontier model** | Paid account with a model provider (e.g., Anthropic, OpenAI, Google); direct API calls | Full programmatic control; latest model versions; supports short-cadence inference | Requires direct billing relationship; institutional policy may prohibit |
| **B. Institutional access to a frontier model** | Access via university enterprise contract, education program, or institutional sandbox | Pre-cleared institutional channel; no individual billing | Programmatic access may be restricted to human-in-the-loop; rate or feature limits may apply; short-cadence inference may be disallowed |
| **C. Productivity-suite embedded AI** | AI built into the institution's licensed productivity suite (e.g., Microsoft 365 Copilot, Google Workspace AI) | Existing data governance; minimal procurement | Limited model choice; programmatic access often constrained; short-cadence inference often unavailable |
| **D. Locally-hosted open model** | Model running on institutional hardware (e.g., Ollama, LM Studio, vLLM) | No external API calls; no per-request cost | Reasoning quality at runnable model sizes is currently below the threshold this system requires; specialized hardware required; operational burden |

**Selection criteria.** Select the highest-quality access compatible with the institution's data policy. Architecture is identical across options; only endpoint and authentication change. Verify provider data handling terms against the sensitivity tier in Section 5. For Live mode, verify the chosen access path permits short-cadence inference at the volumes required.

**Your decision:** _________________________________________________

**Your reasoning:** _________________________________________________

---

### 4.3 Memory Layer

**Purpose.** Persists Person, Project, Meeting, Decision, and Action Item records across meetings.

**Functional requirements.**
- Stores records of five entity types: **Person**, **Project**, **Meeting**, **Decision**, **Action Item**.
- Each record carries structured metadata (typed fields) and unstructured narrative content.
- Records reference each other (e.g., Decision references source Meeting and affected Projects; Action Item references owner Person, source Meeting, and Projects).
- Supports filtering and traversal by type and relationship.
- Distinguishes auto-maintained sections (system-owned) from human-edited sections.
- Preserves history; superseded Decisions remain in the store with forward links to replacements.

**Required schema (storage-agnostic).** The following entities and fields are required regardless of storage technology. Field names are illustrative; the requirement is that each field exists and is queryable.

**Person**
- `name`, `role`, `hierarchy` (e.g., executive / faculty / staff), `email`
- `projects` — references to Project records
- `communication-style` — narrative on framing content for this person
- `last-updated`
- Narrative sections: Person memory; Recent activity; Open action items (auto-maintained)

**Project**
- `name`, `project-type`, `status`, `timeline-start`, `timeline-key-date`
- `people` — references to Person records
- `last-updated`
- Narrative sections: Project memory; Open decisions (auto-maintained); Recent meetings (auto-maintained)

**Meeting**
- `title`, `date`, `duration-minutes`, `platform`
- `attendees` — references to Person records
- `projects` — references to Project records
- `agenda` (with per-item time allocations for Live mode), `desired-outcomes`, `facilitation-objectives`, `status`
- Capture-platform identifier (e.g., meeting UUID) for retrieval
- Narrative sections: Decisions captured; Action items captured; Transcript reference; Canonical recap (auto-maintained); Per-attendee personalized recaps (auto-maintained); Facilitation log (auto-maintained, populated during Live mode)

**Decision**
- `summary`, `date`
- `meeting` — reference to source Meeting
- `projects` — references to Project records
- `people-responsible` — references to Person records
- `status`, `supersedes`, `superseded-by`
- `captured-during`: `live` or `post-meeting`
- Narrative sections: Full text; Context

**Action Item**
- `title`, `description`
- `owner` — reference to Person record (may be null at meeting end pending questionnaire)
- `owner-confirmed`: boolean — set true after post-meeting questionnaire
- `source-meeting` — reference to source Meeting
- `source-decision` — optional reference to Decision
- `projects` — references to Project records
- `due-date`, `status`
- `times-surfaced` — counter incremented on re-discussion without resolution
- `opened-date`, `last-discussed`
- Narrative section: Notes
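For illustration, a completed Action Item record under the markdown-with-frontmatter storage of Decision 3 Option A might look like the following (all names, links, and dates are invented):

```markdown
---
type: action-item
title: Draft slot-count proposal
owner: "[[jane-doe]]"
owner-confirmed: false
source-meeting: "[[2025-06-12-research-ops-sync]]"
projects: ["[[core-facility-expansion]]"]
due-date: 2025-06-19
status: open
times-surfaced: 1
opened-date: 2025-06-12
last-discussed: 2025-06-12
---

## Notes
Provisional owner captured live; awaiting questionnaire confirmation.
```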

**Required field functions.**

- `type` field on every record enables filtered queries against the store.
- Bidirectional references propagate updates; a new Decision linked to a Project updates that Project's "Open decisions" view automatically.
- `times-surfaced` counter on Action Items drives the escalation rule defined in Workflow 3.
- `supersedes` / `superseded-by` chain on Decisions resolves "current state" queries without scanning meeting history.
- `facilitation-objectives` on Meeting records steers the Facilitation Layer's per-meeting behavior (e.g., "ensure Mike weighs in before commitment on slot count").
- `owner-confirmed` on Action Items distinguishes provisional ownership captured during the meeting from confirmed ownership established by the post-meeting questionnaire.
- Auto-maintained section markers define the human/system ownership boundary within each record.
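The supersedes chain lookup described above is a short pointer traversal. A minimal sketch in Python, assuming records are exposed as dicts keyed by ID with the schema's `superseded-by` field (the store interface is hypothetical):

```python
def current_decision(decision_id: str, store: dict) -> str:
    """Follow superseded-by links until reaching the current Decision.

    `store` maps decision IDs to records (dicts with an optional
    'superseded-by' field, as in the Section 4.3 schema).
    """
    seen = set()
    while True:
        record = store[decision_id]
        nxt = record.get("superseded-by")
        if nxt is None:
            return decision_id  # no newer decision: this one is current
        if decision_id in seen:
            raise ValueError("cycle in supersedes chain")
        seen.add(decision_id)
        decision_id = nxt
```

Because superseded records stay in the store, this resolves "current state" queries without scanning meeting history, which is exactly what the chain exists to avoid.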

#### Decision Point 3 — Storage technology

| Option | Description | Strengths | Trade-offs |
|---|---|---|---|
| **A. Markdown files in a knowledge management tool** | Tools such as Obsidian or Logseq. YAML frontmatter for typed fields, wikilinks for relationships, optional encrypted sync. Files on user-controlled hardware. | Full data control; rich human editing; portable file format; mature AI integration ecosystem | Team distribution requires sync mechanism; limited read-only sharing for external stakeholders |
| **B. Cloud relational/document database** | Hosted database (e.g., Supabase, Notion, Airtable) with the schema modeled as tables or pages | Multi-user access; built-in sharing | Data on vendor infrastructure; per-seat costs |
| **C. Institutional document library with structured templates** | SharePoint or Google Drive folder with files following the schema; AI access via platform API | Stays within institutional tenant; uses tools IT already supports | Slower setup; fewer purpose-built tools; structured-field querying is harder |
| **D. Local folder synced via OneDrive / Google Drive / Dropbox** | Same file structure as Option A using consumer or institutional cloud sync | Low cost; no specialized app required | Limited editing experience; encryption-at-rest depends on sync provider; team distribution complexity |

**Selection criteria.** Select the option compatible with the strictest applicable sensitivity tier (Section 5) that supports the editing workflow of the curation owner. Storage technology has the highest migration cost of any decision in this document.

**Your decision:** _________________________________________________

**Your reasoning:** _________________________________________________

---

### 4.4 Facilitation Layer

**Purpose.** Operates during the meeting to keep time against the agenda, capture decisions, detect re-surfacing topics, capture provisional action item ownership, and prompt the host when conversation goes off-track or stalls.

**Functional requirements.** The Facilitation Layer implements five primitives:

1. **Agenda tracking.** Compares elapsed time on the current agenda item to the per-item time allocation in the Meeting record. Surfaces warnings at configurable thresholds (default: 75%, 100%, 125% of allocated time). Tracks which agenda items have been addressed and which remain.
2. **Decision capture.** Detects when language indicates a decision has been reached; writes a draft Decision record linked to the Meeting; surfaces the captured decision to the host for confirmation.
3. **Re-surfacing detection.** Compares current discussion to existing open Decisions and Action Items in the Memory Layer. When current discussion repeats prior content, surfaces the prior record to the host.
4. **Off-track / stuck detection.** Detects when conversation has drifted from the current agenda item or shows sustained disagreement without convergence. Prompts the host with a structured choice: (a) table for future discussion, or (b) continue with a generated summary of current positions to push toward conclusion.
5. **Action item capture (provisional).** Detects language indicating commitment ("I'll take that," "Jane will draft that"); writes a draft Action Item record with provisional owner. Owner is marked `owner-confirmed: false` until the post-meeting questionnaire confirms or repairs.
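The agenda-tracking primitive reduces to a threshold check on elapsed time per agenda item. A sketch using the default thresholds named above (the function shape and the `already_fired` bookkeeping are assumptions, not part of the spec):

```python
DEFAULT_THRESHOLDS = (0.75, 1.00, 1.25)  # fractions of allocated time (defaults above)

def time_warnings(elapsed_min: float, allocated_min: float,
                  already_fired: set, thresholds=DEFAULT_THRESHOLDS):
    """Return threshold warnings newly crossed for the current agenda item.

    `already_fired` holds thresholds surfaced earlier in the item so each
    warning fires once; the caller adds returned values back into it.
    """
    fired = []
    for t in thresholds:
        if elapsed_min >= t * allocated_min and t not in already_fired:
            fired.append(t)
    return fired
```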

**Inputs to the Facilitation Layer (per meeting):**
- Agenda with per-item time allocations
- Participant list with role and hierarchy from the Memory Layer
- Prep document with `desired-outcomes` and `facilitation-objectives` from the Meeting record
- Live transcript stream from the Capture Layer
- Open Decisions and Action Items from the Memory Layer for the relevant Projects

**Output channels (determined by Decision 4b):**
- Host-only surface (companion app on host's second screen, or private DM)
- Meeting-wide surface (chat post visible to all participants)

**Non-functional requirements.**
- End-to-end latency from speech to facilitation surface: under 30 seconds.
- Host can dismiss, accept, or modify any surfaced prompt without disrupting the meeting.
- All surfaced prompts and host responses are logged to the Meeting record's Facilitation log section.
- The behavior of all five primitives is defined in a customizable Skill file (`facilitation-script.md`, see Appendix A) — not hardcoded — so the team can tune cadence, thresholds, prompt wording, and decision criteria without a code change.
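One way to honor the no-code-change requirement is to keep the tunables in the Skill file's YAML frontmatter and read them at meeting start. A minimal flat-frontmatter reader, assuming simple `key: value` settings (the setting names are invented; nested YAML would need a real parser):

```python
def read_skill_settings(text: str) -> dict:
    """Parse flat `key: value` pairs from a markdown file's YAML
    frontmatter (the block between the opening and closing `---` lines).
    """
    settings = {}
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return settings  # no frontmatter: fall back to defaults
    for line in lines[1:]:
        if line.strip() == "---":
            break
        if ":" in line:
            key, _, value = line.partition(":")
            settings[key.strip()] = value.strip()
    return settings
```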

#### Decision Point 4a — Facilitation provider

| Option | Description | Strengths | Trade-offs |
|---|---|---|---|
| **A. Platform-native facilitator (Zoom AI Companion)** | Use Zoom AI Companion's built-in facilitation features (agenda tracking, decision capture, action item suggestions) | No build cost; runs inside the platform; data stays with Zoom; no IT integration approval beyond Zoom AI Companion itself | Limited to features Zoom exposes; cannot use the team's `facilitation-objectives` field; cannot reference Memory Layer content; primitives 3 (re-surfacing) and 4 (off-track/stuck) are not supported |
| **B. Platform-native facilitator (Microsoft 365 Copilot)** | Use Microsoft 365 Copilot's facilitator capabilities in Teams | No build cost; runs inside the platform; data stays in Microsoft tenant | Requires a Microsoft 365 Copilot license; same limitations as Option A — Memory Layer integration and custom primitives not available |
| **C. Custom-built facilitator** | Build the Facilitation Layer as described above. Reads the live transcript from the Capture Layer, runs the five primitives via the Intelligence Layer, surfaces output via Decision 4b. | Implements all five primitives; integrates with Memory Layer; per-meeting `facilitation-objectives` are honored; customizable via Skill files | Build cost; maintenance burden; transcription source must be configured (Decision 4c) |

**Selection criteria.** Option A or B if the team's primary platform offers the feature, the institution licenses it, and reduced primitive coverage is acceptable. Option C when full primitive coverage and Memory Layer integration are required.

**Your decision (4a):** _________________________________________________

**Your reasoning:** _________________________________________________

---

#### Decision Point 4b — Facilitation surface

*Required only if Decision 4a is Option C.*

| Option | Description | Strengths | Trade-offs |
|---|---|---|---|
| **A. Companion app on host's second screen** | A separate application (web, desktop, or mobile) the host opens during meetings. The agent posts facilitation outputs to the app. The host reads silently and verbalizes to the room when needed. | No integration with the meeting platform required; no IT approval required for platform integration; host fully controls what reaches the room; works on any meeting platform | Requires building/customizing the companion app; host must have a second screen or split-screen setup; meeting participants do not see facilitation outputs directly |
| **B. Meeting chat post** | The agent posts facilitation outputs as messages in the meeting chat, visible to all participants | All participants see facilitation outputs; no separate app required | Requires platform integration with chat-posting permission; requires IT approval; visible facilitation may shift meeting dynamics; participants may interpret AI prompts as authoritative |
| **C. Hybrid** | Some primitives surface to the host only (e.g., off-track/stuck prompts requiring host judgment); others surface to the meeting (e.g., decision capture confirmations) | Routes each primitive to the appropriate audience | Highest build complexity; requires both surfaces; requires per-primitive routing rules |

**Selection criteria.** Option A when IT integration approval is unavailable or when the team prefers facilitation to remain a host-only resource. Option B when integration approval is available and meeting-wide visibility is desired. Option C when the team has capacity for the additional build and wants per-primitive routing.

**Your decision (4b):** _________________________________________________

**Your reasoning:** _________________________________________________

---

#### Decision Point 4c — Live transcription source

*Required only if Decision 4a is Option C.*

| Option | Description | Strengths | Trade-offs |
|---|---|---|---|
| **A. Platform live transcription** | Pull the live transcript stream from the meeting platform (Zoom live transcription API, Teams live captions, Google Meet captions) | Uses platform infrastructure; no additional audio handling; speaker attribution provided | Requires platform integration and IT approval; limited to platforms with exposed live transcript APIs |
| **B. Self-hosted speech-to-text via Whisper or equivalent** | Capture audio on the host machine (or companion app); send to OpenAI Whisper API, AssemblyAI, or similar. Alternatively, run Whisper locally. | No platform integration required; works on any platform; supports IT-bypass companion app architecture | Requires audio capture mechanism on the host machine; per-minute API cost or local compute requirements; speaker attribution must be solved separately (e.g., via diarization) |
| **C. Third-party meeting bot** | Use a meeting bot service (e.g., Recall.ai, Meeting BaaS) for the live transcript stream | Cross-platform with one integration; speaker attribution included | Adds a third-party processor; per-recording cost; bot consent issues; requires institutional approval |

**Selection criteria.** Option A when platform integration approval is available. Option B when IT-bypass companion app architecture is required (Decision 4b Option A). Option C when cross-platform support is required and bot processors are institutionally acceptable.

**Your decision (4c):** _________________________________________________

**Your reasoning:** _________________________________________________

---

### 4.5 Orchestrator

**Purpose.** Triggers workflows on a schedule or event basis, holds operational state, calls the Intelligence Layer, and routes outputs to the Delivery Layer.

**Functional requirements.**
- Executes five workflows: **generate-prep-packets**, **facilitation-script**, **post-meeting-questionnaire**, **process-meeting**, **nightly-reconciliation** (specified in Section 4.7).
- Triggered manually, on a schedule, by webhook from the Capture Layer, or by meeting-end signal from the Facilitation Layer.
- Maintains operational state only — job records, meeting identifiers, processing status. Does not store transcript content, decisions, or person memory.
- Routes outputs to the Delivery Layer.

**Non-functional requirements.**
- Reliability sufficient for unattended operation at the team's meeting volume.
- Operational store contains Tier 1 content only (Section 5).

#### Decision Point 5 — Orchestration approach

| Option | Description | Strengths | Trade-offs |
|---|---|---|---|
| **A. Manual triggering from a desktop AI tool** | Build owner invokes saved workflows from a desktop AI application | Zero infrastructure; fastest to deploy | Requires daily human trigger; single-owner constraint; not compatible with Live facilitation triggering |
| **B. Always-on workstation with scheduled jobs** | Local machine (e.g., mini PC, workstation) running cron or Task Scheduler, invoking the Intelligence Layer programmatically | One-time hardware cost; data stays on premises; can host the Facilitation Layer's companion app | Hardware single point of failure; scripting required |
| **C. Microsoft / Google ecosystem orchestration** | Power Automate + Azure Functions, or Apps Script + Google Cloud Functions | Stays in institutional tenant; uses tools IT supports; clears procurement | Platform-specific learning curve |
| **D. Self-managed cloud** | Application deployed on a serverless platform (e.g., Vercel, Fly, Render, AWS Lambda) with a small operational database | Modern scheduling and webhook support; standard developer tooling | Some IT departments prohibit any third-party cloud touching institutional data, including operational state |

**Selection criteria.** Option A is suitable for proving the workflows and refining prompts and schema. Options B, C, and D are suitable for ongoing unattended operation. Live facilitation requires B, C, or D — not A. Schema and prompts are portable across all four; only the trigger mechanism changes.

**Your decision:** _________________________________________________

**Your reasoning:** _________________________________________________

---

### 4.6 Delivery Layer

**Purpose.** Routes prep packets, recaps, and post-meeting questionnaires to recipients.

**Functional requirements.**
- Delivers personalized output to each named recipient.
- Operates programmatically; invokable by the Orchestrator without human intervention.
- Conforms to the institution's automated communications policy (e.g., institutional mail relay configuration for outbound email).

#### Decision Point 6 — Delivery channel

| Option | Description | Strengths | Trade-offs |
|---|---|---|---|
| **A. Direct email to each attendee** | Programmatic email via institutional mail relay, an email API, or an email connector | Per-recipient delivery; reaches attendees outside any specific tool | Requires programmatic send capability; deliverability depends on institutional mail policy |
| **B. Shared team channel post** | Post recaps to a shared Slack channel, Teams channel, or M365 group | Low setup cost; suitable for teams operating in a single channel | Less per-recipient privacy; recipients outside the channel are not reached; questionnaires require per-recipient routing, which this option does not support |
| **C. Memory-store-only delivery** | Recipients open the Memory Layer to read assigned content | No additional delivery infrastructure | Requires every recipient to use the store as a daily-driver tool; questionnaires require per-recipient routing, which this option does not support |

**Selection criteria.** Option A when programmatic email send is available. Option B for small teams operating in a single channel (recap delivery only; questionnaires must use Option A). Option C only when all recipients use the Memory Layer directly (recap delivery only).

**Your decision:** _________________________________________________

**Your reasoning:** _________________________________________________

---

### 4.7 Workflow specifications

The Orchestrator executes five workflows. Two of these — **facilitation-script** and **post-meeting-questionnaire** — are implemented as customizable Skill files (markdown documents) so the team can tune behavior without a code change. Skeletons are provided in Appendix A.

**Workflow 1: generate-prep-packets**
1. Input: a date (default: tomorrow).
2. Retrieve the day's meetings (from calendar integration or manual list).
3. For each meeting: look up attendee Person records, project Project records, recent Decisions, open Action Items, and the Meeting record's `facilitation-objectives`.
4. For each attendee: generate a one-page prep packet using the attendee's `role`, `hierarchy`, and `communication-style` fields.
5. Deliver via the Delivery Layer.
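Steps 3 and 4 amount to assembling a per-attendee context payload before generation. A sketch under assumed record shapes (field names follow the Section 4.3 schema; everything else is hypothetical):

```python
def prep_context(meeting, persons, decisions, action_items):
    """Assemble the per-attendee context for Workflow 1 steps 3-4.

    Returns {attendee-name: payload}. Records are dicts shaped like the
    Section 4.3 schema; `meeting["attendees"]` and `meeting["projects"]`
    hold names resolved against the Memory Layer.
    """
    contexts = {}
    relevant = set(meeting["projects"])
    for name in meeting["attendees"]:
        person = persons[name]
        contexts[name] = {
            "purpose": meeting.get("desired-outcomes", ""),
            "style": person.get("communication-style", ""),
            # prior decisions on projects this meeting touches
            "decisions": [d for d in decisions
                          if relevant & set(d["projects"])],
            # this attendee's open commitments
            "open-items": [a for a in action_items
                           if a["owner"] == name and a["status"] == "open"],
        }
    return contexts
```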

**Workflow 2: facilitation-script (live, during meeting)**
1. Input: live transcript stream (Decision 4c), Meeting record, agenda with time allocations, open Decisions and Action Items for relevant Projects.
2. On meeting start: confirm agenda and time allocations are present; initialize agenda tracking timer.
3. On each transcript window (default: every 30–60 seconds, configurable in Skill file):
   - Run agenda tracking primitive; surface time warnings at configured thresholds.
   - Run decision capture primitive; on detection, write draft Decision and surface to host.
   - Run re-surfacing detection primitive; on detection, surface the matching prior record to host.
   - Run off-track/stuck detection primitive; on detection, prompt host with table-vs-continue choice.
   - Run action item capture primitive; on detection, write draft Action Item with provisional owner.
4. On meeting end: write the Facilitation log to the Meeting record; signal the Orchestrator to trigger Workflow 3.

The detection logic, surfacing wording, cadence, and threshold values for each primitive are defined in `facilitation-script.md` (Appendix A.1).
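The per-window loop in step 3 can be sketched as follows; the stream interface and primitive callables are hypothetical stand-ins for the Capture and Intelligence Layers:

```python
import time

def run_facilitation(stream, primitives, window_seconds=45, clock=time.monotonic):
    """Drive the facilitation primitives over sliding transcript windows.

    `stream.next_window(seconds)` is assumed to block until the next
    window of transcript text is available (None at meeting end);
    `primitives` is an ordered list of callables taking the window text
    and returning zero or more prompts to surface. Sketch only.
    """
    log = []
    while True:
        window = stream.next_window(window_seconds)
        if window is None:          # meeting ended
            return log              # caller writes this to the Facilitation log
        for primitive in primitives:
            for prompt in primitive(window):
                log.append((clock(), prompt))
```

Running the primitives in a fixed order per window keeps the Facilitation log deterministic and makes the cadence a single tunable, matching the Skill-file requirement in Section 4.4.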

**Workflow 3: post-meeting-questionnaire**
1. Input: Meeting record with provisional Decisions and Action Items captured during the meeting.
2. Generate a per-attendee questionnaire confirming or repairing:
   - Each Decision attributed to that attendee or the meeting at large
   - Each Action Item with provisional `owner` matching that attendee, or where `owner` is null
3. Deliver via the Delivery Layer (email — Decision 6 Option A required).
4. On response: update the corresponding Decision and Action Item records; set `owner-confirmed: true` for confirmed Action Items.
5. After all responses received (or after a configurable timeout, default 24 hours): trigger Workflow 4.

The questionnaire wording, response handling, and timeout behavior are defined in `post-meeting-questionnaire.md` (Appendix A.2).
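Step 2's selection rule for Action Items can be stated precisely in a few lines (record shape follows the Section 4.3 schema; the function itself is illustrative):

```python
def questionnaire_items(attendee: str, action_items: list) -> list:
    """Select the Action Items Workflow 3 step 2 asks this attendee about:
    unconfirmed items with provisional ownership matching the attendee,
    or items where no owner was captured during the meeting.
    """
    return [a for a in action_items
            if not a.get("owner-confirmed")
            and (a.get("owner") == attendee or a.get("owner") is None)]
```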

**Workflow 4: process-meeting (post-questionnaire finalization)**
1. Input: Meeting record with confirmed Decisions and Action Items (output of Workflow 3).
2. Identify attendees by matching transcript speaker labels to existing Person records; flag unmatched attendees for human review.
3. Identify projects by matching meeting agenda and content to existing Project records.
4. Generate a per-attendee personalized recap; write each as a section of the Meeting record.
5. Update the Person memory section of each attendee and the Project memory section of each affected Project.
6. Deliver recaps via the Delivery Layer.
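Step 2's speaker matching can be sketched as a lookup with an explicit review queue. Exact name matching is an assumption here; a real build might also match on aliases, email addresses, or prior corrections.

```python
def match_speakers(speaker_labels, person_records):
    """Match transcript speaker labels to Person records by
    case-insensitive name; return (matched, unmatched_for_review)."""
    by_name = {p["name"].lower(): p for p in person_records}
    matched, unmatched = {}, []
    for label in speaker_labels:
        person = by_name.get(label.strip().lower())
        if person:
            matched[label] = person
        else:
            unmatched.append(label)  # flagged for human review (step 2)
    return matched, unmatched
```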

**Workflow 5: nightly-reconciliation**
1. Walk all open Action Items. Mark `completed` if resolved in recent transcripts; increment `times-surfaced` if discussed without resolution; flag as a `dropped` candidate if untouched beyond a configurable threshold (default: 14 days).
2. Walk all Person records. Regenerate the Person memory section based on the trailing 30 days of meetings the person attended.
3. Walk all Project records. Regenerate the Project memory section based on recent Meetings and Decisions.
4. Write a daily digest of changes for human review.
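The per-item logic of step 1 can be sketched as follows. The field names and the sources of `resolved_ids` and `discussed_ids` (derived upstream from recent transcripts) are assumptions for illustration.

```python
from datetime import date, timedelta

def reconcile_action_item(item, resolved_ids, discussed_ids,
                          today, dropped_after_days=14):
    """One pass of Workflow 5 step 1 over a single open Action Item.
    `resolved_ids` and `discussed_ids` come from recent transcripts."""
    if item["id"] in resolved_ids:
        item["status"] = "completed"
    elif item["id"] in discussed_ids:
        item["times_surfaced"] += 1
        item["last_touched"] = today
    elif today - item["last_touched"] > timedelta(days=dropped_after_days):
        item["dropped_candidate"] = True  # surfaced in the daily digest
    return item
```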

---

## 5. Cross-cutting concern: data sensitivity tiering

The system handles content across three sensitivity tiers. Component decisions in Section 4 must be compatible with the highest tier in scope.

| Tier | Example content | Architectural requirement |
|---|---|---|
| **Tier 1 — Operational** | Scheduling, identifiers, processing status | Permitted in any cloud service. Permitted in the Orchestrator's operational database. |
| **Tier 2 — Institutional / internal** | Decisions, action items, project status | Memory Layer must be in a tenant-controlled or user-controlled store. Live transcripts processed by the Facilitation Layer must use a provider compatible with Tier 2. |
| **Tier 3 — Competitive / personnel** | Sponsor strategy, faculty performance discussions, internal review committee deliberations, pre-submission grant strategy | Capture Layer must be configured to keep processing within institutional contracts. Intelligence Layer must use a provider with non-retention terms verified by the contracting office. Memory Layer must remain on institution-controlled hardware or tenant. Live transcription source (Decision 4c) must meet the same standard. |

**Orchestrator data scope.** The Orchestrator's operational store is restricted to Tier 1 content — job records, meeting identifiers, processing status. Transcript content, decision content, and person memory are stored only in the Memory Layer. This restriction enables third-party cloud hosting of the Orchestrator without elevating the system's overall sensitivity posture.
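The restriction can be enforced mechanically with a field allowlist at the Orchestrator's write path. This is a hypothetical sketch; the field names are placeholders for whatever the build's job records actually carry.

```python
# Tier 1 allowlist: operational fields only (job records, meeting
# identifiers, processing status). Placeholder names, not a schema.
TIER_1_FIELDS = {"job_id", "meeting_id", "status", "scheduled_at"}

def scrub_for_orchestrator(record):
    """Drop any field outside the Tier 1 allowlist before the record
    reaches the Orchestrator's operational database."""
    return {k: v for k, v in record.items() if k in TIER_1_FIELDS}
```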

#### Decision Point 7 — Sensitivity posture

**Highest tier of in-scope content:** _________________________________________________

**Options excluded by this tier in Decisions 1–6:** _________________________________________________

---

## 6. Risks and mitigations

| Risk | Mitigation |
|---|---|
| Decision-extraction accuracy. The Facilitation Layer or Intelligence Layer may capture non-decisions as decisions or omit real decisions. | Live decision captures are surfaced to the host for in-meeting confirmation. Post-meeting questionnaire (Workflow 3) confirms all captured Decisions before recap delivery. |
| Hallucinated action items. | New Action Items written with provisional `owner` and `owner-confirmed: false`. Post-meeting questionnaire confirms or repairs ownership before recap delivery. |
| Off-track/stuck false positives. The Facilitation Layer may prompt the host to table or continue when conversation is productive. | Host fully controls whether to verbalize the prompt to the room. Surfaced prompts are dismissable without action. Threshold tuning lives in `facilitation-script.md` for team adjustment. |
| Memory rot. Schema fields drift, records become stale, action items accumulate. | Workflow 5 (nightly-reconciliation) runs from launch. Quarterly memory audit reviews all Project memory pages and prunes stale content. |
| Consent and culture. Attendees may not be aware that AI is processing or facilitating the meeting. | Meeting invites disclose AI processing and facilitation. Opt-out path documented and honored. When facilitation surface is meeting-wide (Decision 4b Option B), explicit attendee consent is required. |
| Scope creep. Pressure to add task assignment, Gantt charts, sponsor CRM features. | Memory Layer is constrained to memory functions only. Task management, project management, and CRM remain in their existing tools. The Memory Layer may write to those tools as a downstream integration. |

---

## 7. Out of scope (initial build)

- Project management tool integration (writing action items to external task systems). Future scope; possible as a downstream step from the Memory Layer.
- Cross-team memory sharing or institution-wide deployment. Future scope; build for a single team first.
- Multilingual facilitation. Initial build assumes a single working language for the Facilitation Layer's primitives and the post-meeting questionnaire.

---

## 8. Cost drivers (generic)

- **Capture Layer.** Marginal cost of zero when using a native platform AI feature covered by an existing institutional contract. Per-recording cost when using a third-party meeting bot. Per-minute cost when using a self-hosted speech-to-text API.
- **Intelligence Layer.** Variable cost scaling linearly with meetings processed. Per-meeting cost depends on model selection and average transcript length. Live mode (Section 4.4 Option C) adds cost proportional to facilitation cadence and meeting duration.
- **Memory Layer.** Fixed cost. Drivers: sync subscription, database hosting, or storage tier. Independent of meeting volume.
- **Facilitation Layer.** When Option C: build cost is one-time; runtime cost is captured under Capture and Intelligence Layers above. When Option A or B: included in the platform license.
- **Orchestrator and Delivery Layer.** Typically zero or near-zero on free tiers at single-team scale.
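A back-of-envelope per-meeting estimate for the Intelligence Layer can be computed from transcript length and model pricing. All parameters below are placeholders to be filled from the selected model's actual rate card (Decision 2).

```python
def per_meeting_cost(transcript_tokens, output_tokens,
                     usd_per_1k_input, usd_per_1k_output,
                     live_calls=0, usd_per_live_call=0.0):
    """Estimate one meeting's Intelligence Layer cost. Live mode adds a
    per-call term proportional to facilitation cadence and duration."""
    batch = (transcript_tokens / 1000) * usd_per_1k_input \
          + (output_tokens / 1000) * usd_per_1k_output
    return batch + live_calls * usd_per_live_call
```

For example, a 50,000-token transcript producing 5,000 output tokens at placeholder rates of $0.003/1K input and $0.015/1K output works out to roughly $0.23 per meeting before any live-mode calls.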

---

## 9. Success criteria

The system meets requirements when:

1. Each facilitated meeting produces a per-attendee recap within the agreed delivery window.
2. Prep packets are delivered before each facilitated meeting.
3. During each facilitated meeting, agenda time warnings are surfaced at configured thresholds, decisions are captured for host confirmation, and off-track/stuck prompts are surfaced when applicable.
4. Post-meeting questionnaire response rate exceeds a configured threshold (default: 80% of attendees), and Action Item ownership is confirmed before recap delivery.
5. Memory Layer Project records reflect current project status on spot-check against the team's understanding.
6. "What did we decide about X" is answerable by querying the Memory Layer.
7. Action Items inactive beyond the configured threshold are surfaced for explicit closure or escalation.
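Criterion 6 can be made concrete with a minimal query sketch. The record shape and keyword matching here are illustrative assumptions; a real build would use the Memory Layer's own search capability (Decision 3).

```python
def decisions_about(topic, decisions):
    """Naive keyword query over Decision records, answering
    'what did we decide about X' (success criterion 6)."""
    topic = topic.lower()
    return [d for d in decisions
            if topic in d["summary"].lower() and d["status"] != "rejected"]
```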

---

## 10. Implementation sequence

The components do not need to be built simultaneously. The sequence below minimizes rework.

1. **Memory Layer.** Build the schema and populate Person and Project records by hand for one recurring meeting. Validate the schema against the team's actual content.
2. **Capture Layer and Intelligence Layer.** Connect both. Verify the Intelligence Layer reads transcripts and Memory Layer content.
3. **Workflow 1: generate-prep-packets.** Run manually against the upcoming week's meetings. Iterate on prompts.
4. **Workflow 4: process-meeting (without facilitation inputs).** Run manually against 3–4 prior meetings. Iterate on prompts.
5. **Facilitation Layer (per Decision 4a).** If Option A or B: configure the platform-native feature. If Option C: implement primitives one at a time, in this order — agenda tracking, decision capture, action item capture, re-surfacing detection, off-track/stuck detection.
6. **Workflow 3: post-meeting-questionnaire.** Implement after at least one Facilitation Layer primitive is producing draft records.
7. **Workflow 5: nightly-reconciliation.** Run manually for one week, then schedule.
8. **Orchestrator.** Select an approach (Decision 5) based on observed usage patterns.
9. **Delivery Layer.** Configure per Decision 6.

---

## 11. Decision summary

| Decision | Selection | Reasoning |
|---|---|---|
| 1. Capture mechanism | _____________ | _____________ |
| 2. Intelligence Layer model and access | _____________ | _____________ |
| 3. Memory Layer storage | _____________ | _____________ |
| 4a. Facilitation provider | _____________ | _____________ |
| 4b. Facilitation surface (if 4a = C) | _____________ | _____________ |
| 4c. Live transcription source (if 4a = C) | _____________ | _____________ |
| 5. Orchestrator approach | _____________ | _____________ |
| 6. Delivery channel | _____________ | _____________ |
| 7. Highest sensitivity tier in scope | _____________ | _____________ |

**Build owner:** _________________________________________________

**Memory curation owner:** _________________________________________________

**Target date for first working process-meeting workflow:** _________________________________________________

---

## Appendix A: Customizable Skill files

Two workflows are externalized as markdown Skill files so the team can tune behavior without a code change. The skeletons below define the required structure. The team fills in the bracketed values, prompt text, and threshold numbers.

### A.1 `facilitation-script.md` (skeleton)

````markdown
# Facilitation Script

This file defines the live behavior of the Facilitation Layer. The Orchestrator
loads this file at the start of each facilitated meeting and follows the
instructions below.

## Configuration

- transcript-window-seconds: [30 | 60 | 90]   # how often primitives run
- agenda-warning-thresholds: [0.75, 1.0, 1.25] # fraction of allocated time
- restraint-mode: [host-only | meeting-wide | hybrid]  # mirrors Decision 4b

## Primitive 1 — Agenda tracking

Behavior:
- At meeting start, log the agenda items and per-item time allocations from
  the Meeting record.
- Every transcript-window-seconds, evaluate which agenda item is currently
  active based on the transcript content.
- When elapsed time on the current item crosses each threshold in
  agenda-warning-thresholds, surface a warning to the host.

Surface wording (replace as desired):
- 75%: "Heads up — [item] has used 75% of its allocated time."
- 100%: "Time check — [item] is at its allocated time. Continue or move on?"
- 125%: "[item] is over time. Suggest moving on or extending explicitly."

## Primitive 2 — Decision capture

Behavior:
- Run a detection prompt against the most recent transcript window.
- Detection prompt: "[Insert team's preferred prompt for identifying
  language that indicates a decision has been reached. Examples of
  decision language: 'we'll go with X', 'let's commit to Y', 'agreed on Z'.]"
- On detection, write a draft Decision record with status: draft and
  surface to host for confirmation.

Surface wording: "Decision captured: [decision summary]. Confirm? [Y/N/edit]"

## Primitive 3 — Re-surfacing detection

Behavior:
- For each transcript window, compare current discussion to open Decisions
  and Action Items in the Memory Layer for the meeting's relevant Projects.
- On semantic match, surface the matching prior record.

Surface wording: "This connects to [prior record summary] from [date].
Status: [open | decided | superseded]."

## Primitive 4 — Off-track / stuck detection

Behavior:
- Detect off-track: current discussion has drifted from the active agenda
  item for more than [N] consecutive transcript windows.
- Detect stuck: same topic discussed across [M] consecutive windows
  without convergence (sustained disagreement, repeated positions, no
  movement toward a decision).
- On detection, prompt the host with a structured choice.

Surface wording: "Conversation appears to be [off-track | stuck on
disagreement]. Options: (a) table this for future discussion, or (b)
continue with current positions summarized as: [generated summary].
Which would you like?"

Threshold values:
- off-track-windows: [3]
- stuck-windows: [4]

## Primitive 5 — Action item capture (provisional)

Behavior:
- Run a detection prompt against the most recent transcript window for
  commitment language ("I'll take that," "[Name] will draft X").
- On detection, write a draft Action Item record with provisional owner
  and owner-confirmed: false.

Surface wording: "Action item captured: [description]. Provisional owner:
[Name]. Confirm? [Y/N/edit]"

## Logging

All surfaced prompts and host responses are written to the Meeting
record's "Facilitation log" section in chronological order.
````
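As a minimal sketch of how an Orchestrator might read the Configuration block in this skeleton: the `- key: [value]` convention is local to this file, and a production loader would handle the full option lists rather than taking only the first bracketed value, as this illustration does.

```python
import re

def load_config(markdown_text):
    """Extract '- key: [value]' lines from the Configuration block.
    The first value before any '|' or ']' wins; numeric values are
    converted to float, everything else is kept as a raw string."""
    config = {}
    for m in re.finditer(r"^- ([\w-]+): \[([^\]|]+)", markdown_text, re.M):
        key, raw = m.group(1), m.group(2).strip()
        config[key] = float(raw) if raw.replace(".", "", 1).isdigit() else raw
    return config
```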

### A.2 `post-meeting-questionnaire.md` (skeleton)

````markdown
# Post-Meeting Questionnaire

This file defines the post-meeting confirmation workflow. The Orchestrator
generates and delivers a per-attendee questionnaire after meeting end and
before recap generation.

## Configuration

- delivery-delay-minutes: [5]      # delay after meeting end before sending
- response-timeout-hours: [24]     # before triggering recap with unconfirmed items
- send-channel: [email]            # mirrors Decision 6 (must be Option A)

## Questionnaire structure (per attendee)

For each Decision captured during the meeting where the attendee was
present:

  Question: "During [meeting title] on [date], the system captured this
  decision: [decision summary]. Is this accurate?"
  Response options: [Confirm | Edit (provide correction) | This was not
  a decision]

For each Action Item with provisional owner = this attendee:

  Question: "The system captured an action item assigned to you:
  [description]. Is this accurate?"
  Response options: [Confirm ownership | Reassign to: ___ | This is not
  an action item I committed to]

For each Action Item with owner = null (commitment detected, owner
unclear):

  Question: "The system captured this action item but could not identify
  an owner: [description]. Did you commit to this, or do you know who
  did?"
  Response options: [I'll own it | [Name] owns it | No one committed]

## Response handling

On response:
- For each Confirm: update the corresponding record; set
  owner-confirmed: true on Action Items.
- For each Edit: update the record with the corrected content.
- For each Not a decision / Not an action item: mark the record
  status: rejected.
- For Reassign: update owner and notify the new owner with a
  confirmation question.

## Timeout behavior

If response-timeout-hours elapses without a response from an attendee:
- Log non-response to the Meeting record.
- Trigger Workflow 4 (process-meeting) with unconfirmed items flagged
  in the recap as "Pending confirmation from [Name]."

## Customization notes

Teams should tune the question wording above to match their internal
voice. The structure (one question per provisional record, response
options as listed) is required for the Orchestrator to parse responses
correctly.
````
