CONTENTENGINE

Architecture Decision Record — Strategy, Roadmap & Justification — v1.0 — February 2026


This ADR captures the WHY behind every architectural decision in the ContentEngine PRD. The PRD stays prescriptive (WHAT to build). The Human Guide stays practical (HOW to do human parts). This document provides narrative justification so decisions can be revisited without re-deriving from first principles.


1. Why n8n Workflows Instead of Custom Backend

1.1 The Problem

ContentEngine needs to orchestrate 6 automated processes: scraping, reply generation, digest delivery, reply posting, post scoring, and performance tracking. The conventional approach would be a FastAPI or Express backend with cron jobs. But we're building for a single operator (Jason) during the Brad sprint crunch — every hour spent on infrastructure is an hour not spent on revenue-generating expert deployment.

| Requirement | Custom Backend | n8n Workflows |
|---|---|---|
| Build time | 2-3 weeks | 3 sessions (~1 week) |
| Maintenance burden | Server management, updates, monitoring | Already running at n8n.masterymade.com |
| Reusability for experts | Deploy per-expert backend instance | Duplicate workflow, swap credentials |
| Debug visibility | Logs, custom monitoring | Visual execution history, node-by-node |
| Modification by non-devs | Requires code changes | Visual editor, drag-and-drop |
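For concreteness, this is roughly what one of the six workflows looks like when exported as n8n JSON. The node type identifiers are real n8n node names; the workflow name, schedule, and URL are illustrative placeholders, not taken from the PRD.

```python
# Skeleton of an n8n workflow export: a schedule trigger feeding one action
# node. "Duplicate workflow, swap credentials" means copying this JSON and
# pointing the credential references at a different expert's accounts.
import json

workflow = {
    "name": "WF-01 Scrape Targets (sketch)",
    "nodes": [
        {
            "name": "Daily Trigger",
            "type": "n8n-nodes-base.scheduleTrigger",  # real n8n node type
            "parameters": {"rule": {"interval": [{"triggerAtHour": 6}]}},
        },
        {
            "name": "Run Apify Scraper",
            "type": "n8n-nodes-base.httpRequest",  # real n8n node type
            "parameters": {"method": "POST", "url": "https://api.apify.com/v2/acts/..."},
        },
    ],
    "connections": {
        "Daily Trigger": {
            "main": [[{"node": "Run Apify Scraper", "type": "main", "index": 0}]]
        }
    },
}

print(json.dumps(workflow, indent=2)[:60])
```

The visual editor edits exactly this structure, which is what makes node-by-node debugging and non-dev modification possible.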

1.2 The Decision

1.3 Alternatives Rejected

2. Why Apify Instead of X API Pro ($100/mo)

2.1 The Problem

ContentEngine needs to read tweets from ~20 target accounts daily. X API Free tier is write-only. X API Pro ($100/mo) allows 10,000 reads/month. Apify scrapers provide equivalent data at $0.40/1,000 tweets. The cost difference is 50-100x.
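The 50-100x claim holds up on back-of-envelope numbers. The tweet volumes below are assumptions (20 accounts posting roughly 5-10 tweets a day each), not figures from the PRD:

```python
def apify_monthly_cost(accounts: int, tweets_per_account_per_day: float,
                       price_per_1k: float = 0.40, days: int = 30) -> float:
    """Apify cost at $0.40 per 1,000 tweets scraped."""
    tweets = accounts * tweets_per_account_per_day * days
    return tweets / 1000 * price_per_1k

low = apify_monthly_cost(20, 5)    # 3,000 tweets/mo -> $1.20
high = apify_monthly_cost(20, 10)  # 6,000 tweets/mo -> $2.40
x_api_pro = 100.0

# Roughly 42x to 83x cheaper at these assumed volumes.
print(round(x_api_pro / high), round(x_api_pro / low))  # 42 83
```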

2.2 The Decision

2.3 Alternatives Rejected

2.4 Apify Risk Mitigation

If Apify scraping breaks (X blocks the actor, or Apify raises prices), the fallback chain is:

  1. Switch to alternate Apify actor (multiple tweet scrapers on the platform)
  2. Add Chrome Extension scraping as supplementary data source (Phase 4)
  3. Upgrade to X API Pro ($100/mo) — the math still works if ContentEngine is deployed to 3+ paying experts
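In code, that chain is just an ordered list of sources tried until one succeeds. A minimal sketch, with hypothetical source names:

```python
from typing import Callable, Iterable


def fetch_with_fallback(sources: Iterable[tuple[str, Callable[[], list]]]) -> tuple[str, list]:
    """Try each tweet source in priority order; return the first that works."""
    errors = []
    for name, fetch in sources:
        try:
            return name, fetch()
        except Exception as exc:  # any source failure triggers the next fallback
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all tweet sources failed: " + "; ".join(errors))


# Priority order mirrors the ADR: primary actor, then alternate actor.
def primary() -> list: raise ConnectionError("actor blocked")
def alternate() -> list: return [{"id": "1", "text": "example tweet"}]

name, tweets = fetch_with_fallback([("apify-primary", primary),
                                    ("apify-alternate", alternate)])
print(name)  # apify-alternate
```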

3. Why Telegram for Approval Instead of a Dashboard

3.1 The Problem

The operator needs to review ~30-60 reply candidates daily and approve the best ones. This could be done via a web dashboard (Next.js), a mobile app, email, or a messaging bot.
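With a Telegram bot, the whole approval surface is one Bot API sendMessage call carrying an inline keyboard. A sketch of the payload builder; the chat ID, candidate ID scheme, and callback_data format are assumptions, not from the PRD:

```python
import json


def build_digest_message(chat_id: int, candidate_id: str, tweet_text: str,
                         reply_text: str) -> dict:
    """Payload for Telegram Bot API sendMessage with approve/skip buttons.

    The callback_data values are what an n8n webhook would parse to record
    the operator's decision (format is a hypothetical convention).
    """
    return {
        "chat_id": chat_id,
        "text": f"Original:\n{tweet_text}\n\nProposed reply:\n{reply_text}",
        "reply_markup": json.dumps({
            "inline_keyboard": [[
                {"text": "Approve", "callback_data": f"approve:{candidate_id}"},
                {"text": "Skip", "callback_data": f"skip:{candidate_id}"},
            ]]
        }),
    }


payload = build_digest_message(12345, "c-001", "Hot take on AI agents...",
                               "Agreed, and the second-order effect is...")
# POST this to https://api.telegram.org/bot<TOKEN>/sendMessage
```

One tap per candidate, no auth flow, no frontend to build or host.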

3.2 The Decision

3.3 Alternatives Rejected

4. Why Anthropic Sonnet Instead of GPT-4 or Local Models

4.1 The Problem

Reply generation and post scoring require an LLM that's good at: understanding social context, matching a writing voice, structured JSON output, and working within short-form constraints (280 chars). Cost matters because we're making 150+ API calls/day.
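Whichever model is used, the structured-output contract can be enforced after the call rather than trusted. A sketch of that validation step; the field names and reply-type labels are assumptions:

```python
import json

MAX_TWEET_CHARS = 280


def parse_reply_candidates(raw: str) -> list[dict]:
    """Parse the LLM's JSON output and enforce the short-form constraint.

    Expects a JSON array of {"type": ..., "text": ...} objects; drops any
    candidate over the 280-character limit rather than truncating it.
    """
    candidates = json.loads(raw)
    if not isinstance(candidates, list):
        raise ValueError("expected a JSON array of reply candidates")
    return [c for c in candidates
            if isinstance(c.get("text"), str) and len(c["text"]) <= MAX_TWEET_CHARS]


raw = ('[{"type": "agree_extend", "text": "Good point, and..."},'
       ' {"type": "question", "text": "' + "x" * 300 + '"}]')
kept = parse_reply_candidates(raw)
print(len(kept))  # 1
```

Keeping the contract in code also makes the model swappable, which is what the risk table below relies on ("prompt structure is model-agnostic").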

4.2 The Decision

4.3 Alternatives Rejected

5. Why 3 Reply Types (Not Free-Form Generation)

5.1 The Problem

LLMs generating "a reply" tend to produce generic engagement bait. The operator needs strategically diverse options that serve different purposes.

5.2 The Decision

5.3 Alternatives Rejected

6. Cost-Benefit Analysis

6.1 Build vs Buy Comparison

| Scenario | Build Cost | Monthly Cost | Owns Stack? | Multi-Expert? |
|---|---|---|---|---|
| CreatorBuddy subscription | $0 | $49/mo | No | No (per-seat) |
| ContentEngine (this build) | ~$15 in API tokens | ~$12-20/mo | Yes | Yes (workflow duplication) |
| Full CreatorBuddy clone (all 8 features) | ~$80 in tokens + 3 weeks | ~$20-35/mo | Yes | Yes |
| CreatorBuddy + X API Pro (official data) | $0 | $149/mo | No | No |

6.2 Break-Even Analysis

ContentEngine costs ~$15/mo to run. If deployed as a MasteryOS content module for experts at $29/mo per expert:

Marginal cost per expert is ~$12-15/mo (additional Apify + Anthropic usage). The n8n instance and Supabase free tier are shared overhead. Break-even on the build cost ($15) happens in the first month of personal use vs the CreatorBuddy subscription ($49).
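The margin arithmetic, using the figures above:

```python
def monthly_margin(price: float = 29.0, cost_low: float = 12.0,
                   cost_high: float = 15.0) -> tuple[float, float]:
    """Per-expert monthly margin at $29/mo pricing and $12-15/mo marginal cost."""
    return price - cost_high, price - cost_low


lo, hi = monthly_margin()
print(lo, hi)  # 14.0 17.0 per expert per month

# Personal use: month one saves the $49 CreatorBuddy fee minus ~$15 run cost,
# which already exceeds the ~$15 one-time build cost.
savings_month_1 = 49 - 15
build_cost = 15
print(savings_month_1 > build_cost)  # True
```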

6.3 Cost Scaling Model

Costs scale linearly with the number of experts but sublinearly with volume per expert. The fixed costs (n8n hosting, Supabase) are already covered. Variable costs (Apify, Anthropic) scale per-call. At 30+ experts, Supabase will exceed the free tier (Pro is $25/mo) and n8n may need more resources. Cross that bridge when revenue justifies it.

Key Insight: The real value isn't saving $37/mo vs CreatorBuddy. It's that ContentEngine becomes a deployable MasteryOS module with zero additional build work per expert — just credential swaps and voice profile updates. That's the asymmetric leverage play.

7. Third-Party Dependency Risks

7.1 Apify Tweet Scraper

7.2 X API Free Tier

8. Known Limitations & Risks

| Risk | Severity | Mitigation | Status |
|---|---|---|---|
| Apify scraper blocked by X | MEDIUM | Multiple scraper actors available. X API Pro as paid fallback. | Accepted |
| X API Free tier deprecated | LOW | Upgrade to Basic ($200/mo) or use browser automation. | Accepted |
| AI-generated reply causes brand damage | HIGH | Human-in-the-loop approval required for all replies. No auto-posting in Phase 1. | Mitigated |
| Anthropic API pricing increase | LOW | Swap to GPT-4o-mini or local models. Prompt structure is model-agnostic. | Accepted |
| X rate limits exceeded (50 posts/day) | LOW | Reply approval workflow caps at 50/day. Operator unlikely to approve >30. | Mitigated |
| Supabase free tier storage exceeded | LOW | Upgrade to $25/mo Pro tier. Won't hit limits for 6+ months at current volume. | Accepted |
| Telegram bot approval UX friction | MEDIUM | Optimize digest format iteratively. Build web dashboard in Phase 4 if needed. | Accepted |
| Reply quality degrades over time | MEDIUM | Weekly performance tracker (WF-06) monitors reply effectiveness. Update voice profile and prompts based on data. | Mitigated |
| Operator forgets to approve replies daily | MEDIUM | Telegram digest is a push notification. Add a "snooze/remind later" option if needed. | Accepted |

9. Future Roadmap (The "Not Now" List)

9.1 Phase 2 — Post-Validation ~60 DAYS

After the reply engine proves it drives follower growth and the system has been running reliably for 60 days.

| Feature | What It Does | Estimated Effort |
|---|---|---|
| Content RAG | pgvector embeddings on own post history. RAG-powered content coach that knows what works for you. Query: "What topics get the most replies?" → answer grounded in your data. | 1 Ralph session + Supabase pgvector setup |
| History Analyzer | Dashboard view of own posts sorted by engagement, clustered by topic. Requires Apify self-scrape to bootstrap own history. | 1 Ralph session |
| Own Account Ingestion | Apify scrape of operator's own account for initial history load. Alternative: X data export (user-requested archive). | 1 n8n workflow modification |
| Advanced Voice Profile | Auto-extract voice characteristics from top posts using LLM analysis. Replace manual style_notes with computed embeddings. | 1 Ralph session |
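The Content RAG query is nearest-neighbour search over post embeddings. In Supabase this would be a pgvector column queried with the `<=>` cosine-distance operator; the toy 3-d vectors below stand in for real embedding-model output, just to show the ranking logic:

```python
import math


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity; pgvector's <=> operator is the distance form of this."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))


# Toy post history: (post_text, embedding). In Phase 2 these rows would live
# in a Supabase table and the query would be
#   SELECT text FROM posts ORDER BY embedding <=> $1 LIMIT k;
posts = [
    ("Thread on pricing psychology", [0.9, 0.1, 0.0]),
    ("Hot take on AI agents", [0.1, 0.9, 0.1]),
    ("Checklist for launch week", [0.2, 0.2, 0.9]),
]


def top_k(query_embedding: list[float], k: int = 2) -> list[str]:
    ranked = sorted(posts, key=lambda p: cosine(query_embedding, p[1]), reverse=True)
    return [text for text, _ in ranked[:k]]


print(top_k([0.85, 0.15, 0.05], k=1))  # ['Thread on pricing psychology']
```

The retrieved posts are then stuffed into the LLM prompt, which is what grounds the "content coach" answers in the operator's own data.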

9.2 Phase 3 — Scale Optimization ~90 DAYS

After deploying for first expert (Brad) and validating the module works for someone other than Jason.

| Feature | What It Does | Trigger |
|---|---|---|
| Brain Dump Tool | Free-form text input → LLM generates posts, threads, article outlines in operator voice. n8n webhook endpoint. | Voice profile validated by 60 days of data |
| Content Composer | Idea storage CRUD + AI repurposing. Supabase table + n8n webhook for "generate from idea". | After Brain Dump validates generation quality |
| Auto-Approve for Low-Risk | Auto-post "Agree + Extend" replies for operators with >90% approval rate over 30 days. Contrarian replies always require approval. | After 90 days of approval data per operator |
| Multi-Platform | LinkedIn and Threads adapters using same prompt structure, different posting APIs. Apify has scrapers for both. | After X-only version proves ROI for 2+ experts |
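The auto-approve gate is a pure function of reply type and operator history. A sketch using the thresholds from the table above; the field names and reply-type identifiers are assumptions:

```python
def can_auto_post(reply_type: str, approval_rate: float,
                  days_of_history: int) -> bool:
    """Phase 3 auto-approve gate.

    Only "Agree + Extend" replies qualify; contrarian (and any other) reply
    types always go through manual Telegram approval.
    """
    return (reply_type == "agree_extend"
            and approval_rate > 0.90
            and days_of_history >= 30)


print(can_auto_post("agree_extend", 0.93, 45))   # True
print(can_auto_post("contrarian", 0.99, 120))    # False: type never qualifies
print(can_auto_post("agree_extend", 0.88, 45))   # False: rate below threshold
```

Keeping the gate this small makes it auditable, which matters given that "AI-generated reply causes brand damage" is the highest-severity risk in the register.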

9.3 Phase 4 — Product Features POST-REVENUE

Only after ContentEngine is generating revenue from 5+ paying experts.

| Feature | What It Does | Trigger |
|---|---|---|
| Chrome Extension | Save posts from X feed, repurpose in operator voice. Manifest V3 extension → Supabase → n8n. | 3+ experts actively using content module |
| Web Dashboard (Next.js) | Full SaaS UI replacing Telegram approval. Auth, per-expert views, analytics dashboard. | 10+ experts or direct customer demand |
| Multi-Tenant Architecture | Supabase RLS, per-expert credential vaults, shared n8n instance with isolated workflows. | When Telegram + Supabase dashboard hits limits |
| Account Research Tool | Deep analysis of any X account — top posts, engagement patterns, audience overlap. On-demand Apify scrape + LLM analysis. | When experts request competitive intelligence |
| White-Label SaaS | Custom branding per expert. Their audience pays for access to a ContentEngine instance tuned to the expert's methodology. | After revenue validates the model at 10+ experts |

10. Document Family

| Document | Audience | Purpose | When Written |
|---|---|---|---|
| PRD (CONTENTENGINE-PRD.html) | AI coding agent | WHAT to build. Pure execution spec. Zero narrative. | Before build |
| Human Guide (CONTENTENGINE-GUIDE.html) | Human operator | HOW to do the human parts. Paint-by-numbers. | Before build |
| ADR (this document) | Future team / reviewers | WHY each decision was made. Strategy + justification. | Before build |
| Post-Mortem (CONTENTENGINE-POSTMORTEM.html) | Knowledge base | What happened vs plan. Lessons learned. What to do differently. | After build |

The Post-Mortem closes the learning loop. After ContentEngine has been running for 30 days, document: actual vs estimated costs, what broke during build, what changed from the PRD, and what to do differently for the next expert deployment.


END OF ADR — Companion to CONTENTENGINE-PRD.html and CONTENTENGINE-GUIDE.html