KB-6739 rev 17

P3D — B3-F1c-c Directus Flow + Nuxt Endpoint + DOT-Governed Scheduler Design



Date: 2026-05-13
Author: Opus
Status: DESIGN DRAFT — requires probe confirmation + GPT review before execution
Prerequisites: B3-F1c-b PASS (function installed), B3-F1b LIVE (soft gate)


1. Architecture overview

┌─────────────────────────────────────────────────────────────────┐
│                    DIRECTUS FLOW (schedule)                     │
│  name: [Birth] Onboarding Full Scan                             │
│  trigger: schedule                                              │
│  cron: from approved cadence policy/config (candidate           │
│        cadence must be probed/compiled, not treated as truth)   │
│  status: active                                                 │
│                                                                 │
│  Operation 1: HTTP Request                                      │
│    POST → <DISCOVERED_INTERNAL_URL>/                            │
│           <DISCOVERED_OR_REVIEWED_ENDPOINT_PATH>                │
│    (candidate — probe Phase 1 resolves actual URL + path)       │
└──────────────────────────┬──────────────────────────────────────┘
                           │
                           ▼
┌─────────────────────────────────────────────────────────────────┐
│                    NUXT API ENDPOINT                             │
│  /api/birth/onboarding/full-scan                                │
│  Method: POST (or GET — probe existing pattern)                 │
│                                                                 │
│  Step 1: Check enabled flag (dot_config kill switch)            │
│  Step 2: SELECT public.fn_birth_onboarding_full_scan()          │
│  Step 3: Parse JSONB result                                     │
│  Step 4: If status='complete' or status='dependency_fail',      │
│         write summary only if system_issues shape supports      │
│         the compiled observability pattern                      │
│  Step 5: Return JSONB to caller                                 │
└──────────────────────────┬──────────────────────────────────────┘
                           │
                           ▼
┌─────────────────────────────────────────────────────────────────┐
│                    PG FUNCTION (already installed)               │
│  public.fn_birth_onboarding_full_scan()                         │
│  → validates dependencies (contract, policy, siblings)          │
│  → scans live collection_registry rows (count derived at runtime) │
│  → logs gaps via helper → system_issues                         │
│  → returns JSONB summary                                        │
└─────────────────────────────────────────────────────────────────┘
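The five endpoint steps above can be sketched as plain control flow. This is a hypothetical sketch only: `queryOne`, `isEnabled`, and `parseScanRow` are placeholder names, and the actual DB access mechanism (Directus SDK vs. direct pg connection, section D) is a probe question, so the query runner is injected rather than assumed.

```typescript
// Hypothetical sketch of the Nuxt endpoint control flow. The query runner is
// injected so the sketch stays agnostic about how the DB is reached.
type QueryOne = (sql: string) => Promise<Record<string, unknown>>;

// Step 1 helper: dot_config values may arrive as text or boolean.
function isEnabled(flagValue: unknown): boolean {
  return flagValue === true || String(flagValue).toLowerCase() === "true";
}

// Step 3 helper: drivers may return JSONB pre-parsed or as a string.
function parseScanRow(raw: unknown): Record<string, unknown> {
  return typeof raw === "string" ? JSON.parse(raw) : (raw as Record<string, unknown>);
}

async function runFullScan(queryOne: QueryOne): Promise<Record<string, unknown>> {
  // Step 1: kill switch (policy.birth_full_scan.enabled in dot_config)
  const flag = await queryOne(
    "SELECT value FROM dot_config WHERE key = 'policy.birth_full_scan.enabled'"
  );
  if (!isEnabled(flag["value"])) {
    return { status: "skipped", reason: "disabled_by_policy" };
  }

  // Step 2: call the installed function; Step 3: parse its JSONB return
  const row = await queryOne("SELECT public.fn_birth_onboarding_full_scan() AS scan");

  // Steps 4 and 5 (conditional system_issues summary, HTTP response) are left
  // to the caller, because the summary write depends on the probed schema.
  return parseScanRow(row["scan"]);
}
```

With the runner injected, B3-F1c-c-a can compile this shape without committing to either call pattern from section D.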

2. Design question answers

A. Where is the scheduler registry?

Primary execution mechanism: directus_flows table (Directus-native scheduler).

DOT governance registration: dot_tools table — a new row registering this scheduled job as a governed tool, making it discoverable, auditable, and subject to DOT classification.

Registry        | Role       | What it stores
directus_flows  | Execution  | Flow definition, cron schedule, operation chain
dot_tools       | Governance | Tool registration, candidate classification derived from live DOT taxonomy, DOT origin
dot_config      | Policy     | Cadence policy, enabled/disabled flag

Needs probe confirmation: Does [DOT-REG] Count Refresh (6h) have a corresponding dot_tools row? If yes, follow that pattern. If no, this is a gap to address.

B. Where is cadence stored?

Two locations (by design, not drift):

  1. dot_config — governance/policy layer:

    • Key: policy.birth_full_scan.cadence_cron — value: approved cadence expression. 0 */6 * * * may be a candidate only; it is not truth until reviewed and materialized.
    • Key: policy.birth_full_scan.enabled — value: true (kill switch)
    • Changing cadence = update dot_config + update Directus Flow schedule
  2. directus_flows.options — execution layer:

    • The actual cron string Directus uses to trigger
    • Must match dot_config policy value

Why two? dot_config is the governance record (queryable, auditable, policy-driven). Directus Flow is the execution mechanism. Neither alone is sufficient — dot_config can't trigger execution, Directus Flow isn't governed.

Rejected alternative: Directus-only cron with dot_config as reference-only. This creates two unsynchronized truths and is not acceptable for B3-F1c-c.

Recommendation: B3-F1c-c-a must probe whether cadence policy keys already exist. If absent, compile a proposed cadence policy artifact and a Directus Flow seed artifact whose cron value is derived from the same reviewed candidate. B3-F1c-c-b must verify the Directus Flow cron matches the reviewed policy/candidate at execution time. Drift enforcement may be a later health check, but the initial seed must not knowingly create two unsynchronized truths.
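The "must match" rule between the two layers reduces to a small check that B3-F1c-c-b (or a later drift health check) could run. A sketch only: the assumption that the cron string lives under a `cron` key inside `directus_flows.options` is unverified and must come from the probe.

```typescript
// Hypothetical sync check between the governance record (dot_config) and the
// execution record (directus_flows.options). Field names are assumptions.
interface CadenceCheck {
  inSync: boolean;
  policyCron: string | null;
  flowCron: string | null;
}

function checkCadenceSync(
  policyValue: string | null,           // dot_config: policy.birth_full_scan.cadence_cron
  flowOptions: Record<string, unknown>  // directus_flows.options for the flow
): CadenceCheck {
  const rawFlow = flowOptions["cron"];
  const flowCron = typeof rawFlow === "string" ? rawFlow.trim() : null;
  const policyCron = policyValue === null ? null : policyValue.trim();
  return {
    inSync: policyCron !== null && flowCron !== null && policyCron === flowCron,
    policyCron,
    flowCron,
  };
}
```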

C. How does Directus Flow call Nuxt endpoint?

Based on proven pattern ([DOT-REG] Count Refresh (6h)):

  • Operation type: request (HTTP Request)
  • Method: POST (needs probe confirmation — some flows use GET)
  • URL: internal Docker network or localhost

Needs probe confirmation:

  • Exact URL pattern (e.g., http://localhost:3000/api/... or http://web:3000/api/...)
  • Auth headers (e.g., Bearer token from env, or no auth for internal calls)
  • Request body format (empty, or JSON with parameters)
  • Response handling (does Directus Flow log the response body?)

D. How does Nuxt endpoint call PG function?

Two possible patterns (needs probe):

  1. Via Directus SDK — Nuxt already has useDirectus() composable for most DB operations. But SELECT fn_...() is not a standard Directus CRUD operation. Some Directus SDKs support raw SQL via custom endpoints or extensions.

  2. Via direct PG connection — Using pg or postgres npm package directly from Nuxt server route. More explicit, but adds a direct DB dependency outside Directus abstraction.

Needs probe: How do existing Nuxt API endpoints that call PG functions (if any) handle the connection? Check /server/api/ directory for patterns.

E. Where is JSONB result persisted? (Observability)

Recommended: system_issues summary issue + Directus Flow log

Layer                         | What              | Queryable            | Durable
PG function return            | JSONB in memory   | No (lost after call) | No
Nuxt endpoint → system_issues | Summary issue row | Yes (PG)             | Yes
Directus Flow log             | Built-in run log  | Via Directus API     | Yes (until purge)
Nuxt application log          | stdout/file       | Via Docker logs      | Semi (log rotation)

Design:

The Nuxt endpoint writes a summary row to system_issues after each successful scan. The following values are CANDIDATES ONLY — actual fields, column names, and vocabulary must be derived from live system_issues shape (probe Phase 4) and existing issue conventions (e.g., how fn_b3f1_log_collection_onboarding_gap writes):

CANDIDATE VALUES (must verify against live schema):
issue_type: BIRTH_FULL_SCAN_RUN_SUMMARY (candidate — verify issue_type column exists and vocabulary)
severity: informational or critical (candidate — verify severity vocabulary from existing rows)
entity_ref: birth_onboarding_full_scan (candidate — verify entity_ref column exists)
description: "Full scan completed: ..." (text summary)
details: full JSONB from function return (candidate — verify JSONB/details column exists)
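As a sketch of how the Nuxt endpoint might assemble that row: every field name and vocabulary value below is a candidate from the list above, not live schema, and the severity mapping (dependency_fail → critical) is an assumption to be confirmed against existing rows.

```typescript
// Hypothetical assembly of the candidate summary row. All keys and values are
// CANDIDATES from the design list; probe Phase 4 must confirm the live shape.
interface ScanResult {
  status: string;    // e.g. "complete" | "dependency_fail" (candidate vocabulary)
  run_id?: string;
  [key: string]: unknown;
}

function buildSummaryIssue(scan: ScanResult): Record<string, unknown> {
  return {
    issue_type: "BIRTH_FULL_SCAN_RUN_SUMMARY",  // candidate
    severity: scan.status === "dependency_fail" ? "critical" : "informational", // candidate mapping
    entity_ref: "birth_onboarding_full_scan",   // candidate
    description: `Full scan ${scan.status}${scan.run_id ? ` (run ${scan.run_id})` : ""}`,
    details: scan,                              // candidate JSONB column
  };
}
```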

Fallback if system_issues lacks JSONB/details support:

  • If system_issues has no JSONB column for structured data → compile text-only summary in description field with observability_status=PARTIAL.
  • If system_issues schema cannot safely accommodate ANY summary write (e.g., mandatory columns not mappable) → mark observability_status=BLOCKED_FOR_OBSERVABILITY_DECISION.
  • Do NOT create new tables or columns in this phase.
  • Do NOT claim text-only summary equals structured observability.
  • Text-only summary is acceptable as interim but must be documented as PARTIAL.
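The fallback ladder above reduces to one small decision function; `hasJsonbDetails` and `canMapMandatory` are hypothetical names for what probe Phase 4 reports, not real columns or flags.

```typescript
// Hypothetical encoding of the fallback ladder: BLOCKED beats PARTIAL beats FULL.
type ObservabilityStatus = "FULL" | "PARTIAL" | "BLOCKED_FOR_OBSERVABILITY_DECISION";

interface SystemIssuesShape {
  hasJsonbDetails: boolean; // a JSONB details/details_json column exists
  canMapMandatory: boolean; // every mandatory column can be populated
}

function decideObservability(shape: SystemIssuesShape): ObservabilityStatus {
  if (!shape.canMapMandatory) return "BLOCKED_FOR_OBSERVABILITY_DECISION";
  if (!shape.hasJsonbDetails) return "PARTIAL"; // text-only summary in description
  return "FULL";
}
```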

Dedup strategy: Each run has a unique run_id. The summary issue uses run_id as part of identity. No dedup needed — each run produces one summary. If run-level dedup is wanted (e.g., don't log if no changes), the Nuxt endpoint can compare with the last summary.
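If run-level change-dedup is ever adopted, the comparison can stay trivial. A sketch under assumptions: the run-specific field names to ignore (`run_id`, `ran_at`) are hypothetical and would come from the actual function return shape.

```typescript
// Hypothetical change-dedup: skip writing a new summary when the scan payload,
// minus run-specific fields, is identical to the previous summary's payload.
function scanChanged(
  prev: Record<string, unknown> | null,
  next: Record<string, unknown>,
  ignoreKeys: string[] = ["run_id", "ran_at"] // hypothetical run-specific fields
): boolean {
  if (prev === null) return true; // no prior summary: always write
  const strip = (o: Record<string, unknown>): string =>
    JSON.stringify(
      Object.fromEntries(
        Object.entries(o)
          .filter(([k]) => !ignoreKeys.includes(k))
          .sort(([a], [b]) => a.localeCompare(b)) // key order must not matter
      )
    );
  return strip(prev) !== strip(next);
}
```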

Why not a new table? Adding a system_scan_runs table would require a new migration, a collection_registry entry, species mapping, a birth trigger, and DOT tool registration: a full B3-type governance cycle for a single table. Using the existing system_issues avoids all of that while still providing queryable, durable observability.

Needs probe: Does system_issues have a details or details_json column for structured data? Or is description text-only?

F. What is rollback?

Rollback for B3-F1c-c removes ONLY scheduler-created artifacts:

REMOVE:
- Directus Flow: [Birth] Onboarding Full Scan
- Directus Operation(s): chained to that flow
- dot_config: policy.birth_full_scan.cadence_cron (if created)
- dot_config: policy.birth_full_scan.enabled (if created)
- dot_tools: tool registration row (if created)
- Nuxt endpoint file: /server/api/birth/onboarding/full-scan.post.ts (if created)

DO NOT REMOVE:
- public.fn_birth_onboarding_full_scan() — B3-F1c-b artifact
- B3-F1b soft gate (helper, gate function, trigger)
- B3-A triggers
- dot_config sibling policy
- system_issues historical data

G. What should be two-pass?

B3-F1c-c-a (probe + compile):
  1. Agent reads existing scheduled flow structure (DOT-REG Count Refresh pattern)
  2. Agent reads existing Nuxt API endpoint patterns
  3. Agent checks FLOWS_ENV_ALLOW_LIST
  4. Agent checks system_issues column shape for summary storage
  5. Agent compiles: Nuxt endpoint code + Directus Flow seed SQL + dot_config policy + dot_tools registration
  6. Agent writes all compiled artifacts to KB
  7. NO execution

GPT review

B3-F1c-c-b (execution — REQUIRES SEPARATE GPT/USER APPROVAL):
  1. Agent creates Nuxt endpoint file
  2. Agent seeds Directus Flow + Operation via PG or API
  3. Agent seeds dot_config policy keys
  4. Agent registers in dot_tools
  5. Agent verifies: flow active, endpoint responds
  6. Manual test call (SELECT fn_birth_onboarding_full_scan() via endpoint) — THIS IS DML-AFFECTING (writes system_issues via helper). Requires SEPARATE GPT/user approval. Not part of automatic execution.
  7. Agent writes execution report to KB
  8. Git commit — requires SEPARATE explicit instruction. Not automatic.

3. DOT governance specifics

dot_tools registration

tool_name: birth-onboarding-full-scan (candidate name; must be checked for conflicts via live dot_tools rows)
tool_type: candidate — must discover closest existing type from live dot_tools semantics. Do not assume 'scheduled_job' exists.
classification: candidate — must derive from live DOT taxonomy probe. Do not name a specific classification until taxonomy is verified.
trigger_type: candidate — must discover closest existing trigger_type from live dot_tools semantics.
executor_ref: candidate — must match discovered Nuxt endpoint path from probe.
cron_schedule: candidate — must derive from reviewed cadence policy/candidate.
_dot_origin: candidate column name — verify column exists in dot_tools schema before using. Proposed value: b3f1c-c.
description: Scheduled full scan of collection_registry birth onboarding gaps. Calls fn_birth_onboarding_full_scan(), logs gaps to system_issues.

Needs probe: Exact column names in dot_tools (probe found cron_schedule, trigger_type, script_path — need full schema).

dot_config policy keys

Key                                 | Value                         | Purpose
policy.birth_full_scan.enabled      | true                          | Kill switch — endpoint checks before executing
policy.birth_full_scan.cadence_cron | <approved cadence expression> | Governance reference — actual cron in Directus Flow must match reviewed policy/candidate

4. Open questions requiring probe

# | Question                                                         | Why needed                            | Probe method
1 | [DOT-REG] Count Refresh (6h) — exact flow + operation structure  | Pattern reference for our flow        | SELECT * FROM directus_flows WHERE name LIKE '%Count Refresh%' + operations
2 | Is that flow registered in dot_tools?                            | Governance pattern precedent          | SELECT * FROM dot_tools WHERE tool_name LIKE '%count%refresh%'
3 | Nuxt API endpoint pattern for PG calls                           | Code pattern for our endpoint         | ls /opt/incomex/web/server/api/ + read 1 example
4 | FLOWS_ENV_ALLOW_LIST                                             | What env vars are available to flows  | Check docker-compose or Directus env
5 | URL pattern for Directus → Nuxt HTTP requests                    | Internal Docker URL vs localhost      | Read operation config from existing flow
6 | system_issues column shape (details/details_json)                | Can we store JSONB summary?           | SELECT column_name, data_type FROM information_schema.columns WHERE table_name='system_issues'
7 | dot_tools full schema                                            | Correct column names for registration | SELECT column_name FROM information_schema.columns WHERE table_name='dot_tools'

5. Risks and mitigations

Risk                                        | Impact                                    | Mitigation
Nuxt endpoint auth mismatch                 | Directus Flow can't call endpoint         | Probe existing pattern first
Directus Flow doesn't log response body     | Observability gap                         | system_issues summary provides backup
Cadence drift (dot_config vs Directus Flow) | Governance record ≠ actual schedule       | Initial seed must derive Directus Flow cron from the reviewed policy/candidate. Drift enforcement is a future health check. No unsynchronized creation allowed.
system_issues doesn't have JSONB column     | Can't store structured summary            | Use description text with observability_status=PARTIAL, or mark BLOCKED_FOR_OBSERVABILITY_DECISION. Do not create a new table or column in this phase.
Container restart loses Directus Flow       | Flow must be re-seeded via migration/SQL  | Include in rollback/recovery documentation

6. What this design does NOT cover

  • Modifying fn_birth_onboarding_full_scan() — already installed, no changes
  • Hard gate (B3-F2) — separate phase
  • Phase 5C2 migration — separate phase
  • UI for viewing scan results — separate phase
  • Alerting/notification on critical gaps — separate phase

B3-F1c-c Scheduler Design DRAFT | Opus | 2026-05-13

knowledge/dev/laws/dieu44-trien-khai/design/p3d-birth-system-b3f1c-c-directus-nuxt-dot-scheduler-design.md