Builder Spec (PRD) — v1.0 — February 2026
NOT FOR HUMAN READING — See CONTENTENGINE-GUIDE.html for human steps
Source: claude.ai/chat/f7a01dae-3d5b-40d7-b3de-d37b88505625
This PRD instructs Claude Code to build an automated X/Twitter content engine (n8n workflows, Supabase tables, and LLM integrations) that scrapes target accounts via Apify, generates engagement replies, and scores draft posts against X's published algorithm weights. The system operates in two layers: Layer 1 (n8n automation) handles data ingestion and reply generation; Layer 2 (LLM API) handles content scoring and voice-matched generation.
Layer 1 — Data & Automation (n8n)
Layer 2 — Intelligence (LLM API)
Shared Infrastructure
All automation runs on existing n8n instance. Supabase provides database. No VPS, no Docker, no custom backend required. LLM calls made directly from n8n HTTP Request nodes.
| Component | Purpose | Port |
|---|---|---|
| n8n (existing) | All workflow automation — scraping, reply gen, posting, digest | 443 (n8n.masterymade.com) |
| Supabase (free tier) | Database for targets, scraped posts, replies, voice profile | N/A (hosted) |
| Apify | X/Twitter data scraping — timelines, profiles, engagement | N/A (API) |
| Anthropic API | Reply generation, post scoring, content optimization | N/A (API) |
| X API Free | Posting replies and original content (write-only) | N/A (API) |
| Telegram Bot API | Daily digest notifications and approval workflow | N/A (API) |
| Parameter | Value |
|---|---|
| Automation Platform | n8n (self-hosted at n8n.masterymade.com, existing) |
| Database | Supabase Free Tier (Postgres 15 + pgvector) |
| Scraping Service | Apify — Tweet Scraper V2 (apidojo/tweet-scraper) |
| LLM Provider | Anthropic API — claude-sonnet-4-20250514 |
| X API Tier | Free (write-only: POST tweets, POST replies) |
| Notification | Telegram Bot API |
| Embedding Model | text-embedding-3-small (OpenAI) — Phase 2 only |
| Service | Monthly Allocation | Notes |
|---|---|---|
| Supabase Free Tier | 500MB database, 1GB storage, 2GB transfer | Sufficient for ~50K posts + metadata |
| Apify | ~3,000 tweets/month scraped | $0.40/1K = ~$1.20/mo at personal volume |
| Anthropic API | ~150 reply generations/day × 30 = 4,500/mo | ~$8-15/mo at Sonnet pricing |
| X API Free | 1,500 posts/month (write) | Sufficient for 50 replies/day |
| Telegram | Unlimited | Free |
No model cascade. Single LLM provider (Anthropic Sonnet) for all generation tasks. Routing is workflow-based via n8n, not model-based.
| Trigger | n8n Workflow | LLM Call | Output |
|---|---|---|---|
| Daily cron (6 AM CT) | WF-01: Scrape & Ingest | None | New posts in Supabase |
| WF-01 completion | WF-02: Reply Generation | Sonnet — reply prompt | Reply candidates in Supabase |
| WF-02 completion | WF-03: Daily Digest | None | Telegram message to operator |
| Telegram callback | WF-04: Post Approved Replies | None | Replies posted via X API |
| Webhook (manual) | WF-05: Post Scorer | Sonnet — scoring prompt | JSON score + alternatives |
| Weekly cron (Sunday 6 PM) | WF-06: Performance Tracker | Sonnet — analysis prompt | Weekly digest via Telegram |
Not applicable. No containers. All execution runs within existing n8n instance. Supabase is hosted SaaS. No Docker required.
Standard security practices apply. No PII handling beyond X public data. All API credentials stored in n8n's built-in credential store (encrypted at rest). Supabase Row Level Security not required for single-user deployment — add RLS in Phase 4 for multi-tenant.
| File | Purpose |
|---|---|
| WF-01-scrape-ingest.json | n8n workflow: Apify trigger → parse tweets → upsert Supabase |
| WF-02-reply-generation.json | n8n workflow: Read new posts → LLM generate replies → store candidates |
| WF-03-daily-digest.json | n8n workflow: Query pending replies → format → send Telegram |
| WF-04-post-replies.json | n8n workflow: Telegram callback → post approved replies via X API |
| WF-05-post-scorer.json | n8n workflow: Webhook input → LLM score → return JSON |
| WF-06-weekly-tracker.json | n8n workflow: Aggregate metrics → LLM analysis → Telegram digest |
| supabase-schema.sql | Complete database schema for Supabase |
| prompts/reply-gen.md | System prompt for reply generation |
| prompts/post-scorer.md | System prompt for algorithm scoring |
| prompts/voice-profile.md | Voice profile template (operator fills with top posts) |
Data flow between workflows and tables:
- WF-01 reads the target_accounts table for all active targets
- WF-01 upserts into the scraped_posts table (dedup on x_post_id) with status='new'
- WF-02 selects posts where status='new' AND replies_generated=false
- WF-02 reads the voice_profiles table
- WF-02 inserts into the reply_candidates table with status='pending', then sets replies_generated=true
- WF-05 inserts results into the scored_drafts table

SYSTEM: You generate engaging X/Twitter replies for a strategic engagement campaign.
RULES:
- Every reply must add VALUE (insight, data, experience, contrarian take)
- Never generic ("Great post!", "So true!", "This 🔥")
- Never sycophantic — speak as a peer, not a fan
- Match the operator's voice profile below
- Keep replies under 280 characters
- Generate exactly 3 options: (1) Agreement + extension, (2) Contrarian/question, (3) Personal experience/data
VOICE PROFILE:
{{voice_profile}}
TARGET POST:
Author: @{{target_username}}
Text: {{post_text}}
Engagement: {{likes}} likes, {{replies}} replies, {{retweets}} RTs
Generate 3 reply options as JSON array:
[{"type": "agree_extend", "text": "..."}, {"type": "contrarian", "text": "..."}, {"type": "experience", "text": "..."}]
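Before storing candidates, the model's output should be validated against the contract above. A minimal sketch for an n8n Code node follows; the field names (`type`, `text`) come from the prompt's output format, but the error-handling strategy and the mapped column names are assumptions of this sketch.

```javascript
// Validate the LLM's JSON output and map it onto reply_candidates rows.
const EXPECTED_TYPES = ["agree_extend", "contrarian", "experience"];

function parseReplyOptions(raw) {
  let options;
  try {
    options = JSON.parse(raw);
  } catch (e) {
    throw new Error("Model returned non-JSON output");
  }
  if (!Array.isArray(options) || options.length !== 3) {
    throw new Error("Expected exactly 3 reply options");
  }
  return options.map((opt, i) => {
    if (opt.type !== EXPECTED_TYPES[i]) {
      throw new Error(`Option ${i} has unexpected type: ${opt.type}`);
    }
    if (typeof opt.text !== "string" || opt.text.length === 0 || opt.text.length > 280) {
      throw new Error(`Option ${i} text missing or over 280 characters`);
    }
    // Column names mirror the reply_candidates schema.
    return { reply_type: opt.type, reply_text: opt.text, status: "pending" };
  });
}
```

Rejecting malformed output here (rather than in the digest step) keeps bad candidates out of the approval queue entirely.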
SYSTEM: You are an X/Twitter algorithm expert. Score posts against X's published ranking algorithm weights.
X ALGORITHM WEIGHTS (from open-source codebase):
1. Reply Spark (weight 75): Probability of generating replies + author engagement
2. Conversation Depth (13.5): Will people click into conversation thread?
3. Profile Pull (12.0): Will readers visit the author's profile?
4. Conversation Dwell (10.0): Will people spend 2+ minutes on this?
5. Retweet Trigger (1.0): Is this shareable/quotable?
6. Favorite Factor (0.5): Is this likeable?
7. Hook Strength (custom): Does first line stop the scroll?
8. Format Score (custom): Proper line breaks, length, media potential?
9. Readability (custom): Clear, conversational, no "shout" patterns?
SCORING: Rate each metric 1-10. Weight the final score per algorithm weights above.
POST TO ANALYZE:
{{draft_text}}
Return JSON:
{
"scores": {"reply_spark": N, "conversation_depth": N, ...},
"weighted_total": N,
"improvements": ["specific change 1", "specific change 2", "specific change 3"],
"alternatives": ["optimized version 1", "optimized version 2"]
}
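The weighted_total can also be computed deterministically in an n8n Code node rather than trusting the model's arithmetic. The published weights below are taken from the scoring prompt above; assigning weight 1.0 to the three custom metrics (hook, format, readability) is an assumption of this sketch, as is normalizing back to the 1-10 scale.

```javascript
// Weights from the scoring prompt; custom metrics assumed at 1.0 each.
const WEIGHTS = {
  reply_spark: 75,
  conversation_depth: 13.5,
  profile_pull: 12.0,
  conversation_dwell: 10.0,
  retweet_trigger: 1.0,
  favorite_factor: 0.5,
  hook_strength: 1.0,  // assumed weight
  format_score: 1.0,   // assumed weight
  readability: 1.0,    // assumed weight
};

function weightedTotal(scores) {
  let sum = 0;
  let weightSum = 0;
  for (const [metric, weight] of Object.entries(WEIGHTS)) {
    if (!(metric in scores)) throw new Error(`Missing score: ${metric}`);
    sum += scores[metric] * weight;
    weightSum += weight;
  }
  // Normalize to the 1-10 scale so totals are comparable across drafts.
  return Math.round((sum / weightSum) * 100) / 100;
}
```

Computing the total in code means the LLM only has to produce per-metric scores, which it does far more reliably than weighted sums.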
-- CONTENTENGINE Schema v1.0
-- Run in Supabase SQL Editor
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
-- Target accounts for Reply Engine
CREATE TABLE target_accounts (
id UUID DEFAULT uuid_generate_v4() PRIMARY KEY,
x_username TEXT NOT NULL UNIQUE,
display_name TEXT,
follower_count INT,
list_name TEXT DEFAULT 'default', -- group targets by list
priority INT DEFAULT 5, -- 1=highest, 10=lowest
active BOOLEAN DEFAULT true,
notes TEXT, -- why this target matters
created_at TIMESTAMPTZ DEFAULT NOW(),
updated_at TIMESTAMPTZ DEFAULT NOW()
);
-- Scraped posts from target accounts
CREATE TABLE scraped_posts (
id UUID DEFAULT uuid_generate_v4() PRIMARY KEY,
x_post_id TEXT NOT NULL UNIQUE,
author_username TEXT NOT NULL,
content TEXT NOT NULL,
likes INT DEFAULT 0,
replies INT DEFAULT 0,
retweets INT DEFAULT 0,
quotes INT DEFAULT 0,
bookmarks INT DEFAULT 0,
impressions INT,
posted_at TIMESTAMPTZ,
scraped_at TIMESTAMPTZ DEFAULT NOW(),
replies_generated BOOLEAN DEFAULT false,
status TEXT DEFAULT 'new', -- new, processed, skipped
created_at TIMESTAMPTZ DEFAULT NOW()
);
CREATE INDEX idx_scraped_status ON scraped_posts(status);
CREATE INDEX idx_scraped_author ON scraped_posts(author_username);
-- Reply candidates (LLM-generated)
CREATE TABLE reply_candidates (
id UUID DEFAULT uuid_generate_v4() PRIMARY KEY,
scraped_post_id UUID REFERENCES scraped_posts(id),
target_username TEXT NOT NULL,
target_post_text TEXT, -- denormalized for digest readability
reply_type TEXT NOT NULL, -- agree_extend, contrarian, experience
reply_text TEXT NOT NULL,
status TEXT DEFAULT 'pending', -- pending, approved, rejected, posted, failed
posted_at TIMESTAMPTZ,
reply_x_post_id TEXT, -- X post ID after posting
created_at TIMESTAMPTZ DEFAULT NOW()
);
CREATE INDEX idx_replies_status ON reply_candidates(status);
-- Reply performance tracking
CREATE TABLE reply_performance (
id UUID DEFAULT uuid_generate_v4() PRIMARY KEY,
reply_candidate_id UUID REFERENCES reply_candidates(id),
impressions INT DEFAULT 0,
likes INT DEFAULT 0,
replies INT DEFAULT 0,
profile_visits INT DEFAULT 0,
follower_delta INT DEFAULT 0, -- followers gained from this reply
measured_at TIMESTAMPTZ DEFAULT NOW(),
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Scored drafts (post scorer history)
CREATE TABLE scored_drafts (
id UUID DEFAULT uuid_generate_v4() PRIMARY KEY,
draft_text TEXT NOT NULL,
scores JSONB NOT NULL, -- {reply_spark: 7, conversation_depth: 5, ...}
weighted_total DECIMAL(5,2),
improvements JSONB, -- ["improvement 1", "improvement 2", ...]
alternatives JSONB, -- ["alt version 1", "alt version 2"]
final_posted_text TEXT, -- what actually got posted (if any)
final_post_id TEXT, -- X post ID if posted
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Voice profile for operator
CREATE TABLE voice_profiles (
id UUID DEFAULT uuid_generate_v4() PRIMARY KEY,
profile_name TEXT DEFAULT 'default',
top_posts JSONB NOT NULL, -- array of 10-20 best performing posts as few-shot examples
style_notes TEXT, -- "direct, stoic, uses metaphors, never uses emojis"
tone_keywords JSONB, -- ["operator", "stoic", "leveraged", "no-fluff"]
avoid_patterns JSONB, -- ["exclamation marks", "emoji", "hashtags"]
active BOOLEAN DEFAULT true,
created_at TIMESTAMPTZ DEFAULT NOW(),
updated_at TIMESTAMPTZ DEFAULT NOW()
);
-- Phase 2: Own post history (populated by Apify self-scrape)
-- CREATE TABLE own_posts ( ... );
-- Phase 2: Embeddings column on scraped_posts and own_posts
-- ALTER TABLE scraped_posts ADD COLUMN embedding vector(1536);
-- Phase 2: Content ideas
-- CREATE TABLE content_ideas ( ... );
| # | Trigger | What Human Does | Blocked Until |
|---|---|---|---|
| 1 | Need Supabase project | Go to supabase.com → New Project → copy project URL and anon key | Schema can be applied |
| 2 | Need Apify account + API token | Go to apify.com → Sign up → Settings → Integrations → Copy API token | Scraping workflow can call Apify |
| 3 | Need X API Free tier app | Go to developer.x.com → Create App → Generate OAuth 2.0 tokens (read+write) | Reply posting workflow can call X API |
| 4 | Need Telegram Bot | Message @BotFather on Telegram → /newbot → copy bot token → get chat_id | Digest workflow can send notifications |
| 5 | Need Anthropic API key | Go to console.anthropic.com → API Keys → Create Key → copy | LLM generation workflows can run |
| 6 | Schema ready for Supabase | Open Supabase SQL Editor → paste supabase-schema.sql → Run | All workflows can read/write data |
| 7 | n8n credentials setup | In n8n: create credentials for Supabase, Apify, Anthropic, X API, Telegram | Workflows can be imported and activated |
| 8 | Target accounts populated | Insert 15-20 target X accounts into Supabase target_accounts table | Scraping workflow has targets |
| 9 | Voice profile populated | Insert top 10-20 performing posts into voice_profiles table | Reply generation uses operator voice |
| 10 | End-to-end test | Manually trigger WF-01, verify scrape → replies → digest → post cycle | System is production-ready |
Prerequisites: None. This is the first phase.
Deliverables:
- supabase-schema.sql file with all tables, indexes, and comments
- prompts/reply-gen.md system prompt
- prompts/post-scorer.md system prompt
- prompts/voice-profile.md template with instructions for operator
Verification: SELECT table_name FROM information_schema.tables WHERE table_schema='public' returns 6 tables.

Prerequisites: Phase 1 complete. Human has completed Checkpoints 1-2, 6-7.
Apify endpoint: https://api.apify.com/v2/acts/apidojo~tweet-scraper/runs?token=${APIFY_TOKEN}
Run input: {"searchTerms": ["from:${username}"], "maxItems": 20, "sort": "Latest"}
Upsert on x_post_id conflict.

Prerequisites: Phase 2 complete. Checkpoint 5 (Anthropic key) done.
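The Apify run's dataset items need to be mapped onto scraped_posts rows before upserting. A sketch of that mapping for an n8n Code node follows; the Apify field names used here (id, text, author.userName, likeCount, replyCount, retweetCount, quoteCount, bookmarkCount, viewCount, createdAt) are assumptions and should be verified against the actor's actual dataset output.

```javascript
// Map one Apify tweet item onto a scraped_posts row for upsert.
// Field names on the input side are assumed, not confirmed actor output.
function toScrapedPostRow(item) {
  return {
    x_post_id: String(item.id),
    author_username: item.author?.userName ?? item.author_username,
    content: item.text,
    likes: item.likeCount ?? 0,
    replies: item.replyCount ?? 0,
    retweets: item.retweetCount ?? 0,
    quotes: item.quoteCount ?? 0,
    bookmarks: item.bookmarkCount ?? 0,
    impressions: item.viewCount ?? null,
    posted_at: item.createdAt,
    status: "new",
  };
}
```

The resulting rows can then be upserted with conflict resolution on x_post_id (for example, the Supabase client's `upsert` with `onConflict: 'x_post_id'`), which gives the dedup behavior described above.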
LLM endpoint: https://api.anthropic.com/v1/messages with model claude-sonnet-4-20250514

Prerequisites: Phase 3 complete. Checkpoint 4 (Telegram bot) done.
Posting endpoint: https://api.twitter.com/2/tweets with {"text": reply_text, "reply": {"in_reply_to_tweet_id": target_post_id}}

Prerequisites: Phase 1 complete (prompts + Anthropic key).
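Building that request body in an n8n Code node is a one-liner worth guarding. A minimal sketch, with the 280-character check as a last line of defense before the API call:

```javascript
// Build the X API v2 reply body; the ID must be a string per the API.
function buildReplyBody(replyText, targetPostId) {
  if (replyText.length > 280) throw new Error("Reply exceeds 280 characters");
  return {
    text: replyText,
    reply: { in_reply_to_tweet_id: String(targetPostId) },
  };
}
```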
Webhook URL: https://n8n.masterymade.com/webhook/score-post
Verification: curl -X POST https://n8n.masterymade.com/webhook/score-post -H "Content-Type: application/json" -d '{"text":"test post"}' returns valid score JSON.

Prerequisites: Phases 2-5 complete. System has been running for 48+ hours.
| Phase | Feature | Trigger to Build |
|---|---|---|
| Phase 2 (60 days) | Content RAG — pgvector embeddings on own post history, RAG-powered content coach chat | After 200+ own posts with engagement data |
| Phase 2 | History Analyzer — Dashboard view of own posts sorted by engagement, topic clusters | After Content RAG is live |
| Phase 3 (90 days) | Brain Dump tool — Free-form text → LLM generates posts, threads, article outlines in operator voice | After voice profile is validated by 60 days of reply data |
| Phase 3 | Content Composer — Idea storage CRUD + AI repurposing with voice profile | After Brain Dump validates generation quality |
| Phase 4 (multi-expert) | Chrome Extension — Save posts from X, repurpose in voice | After deploying for first expert (Brad) |
| Phase 4 | Multi-platform — LinkedIn, Threads, Bluesky adapters using same RAG pipeline | After X-only version proves ROI |
| Phase 4 | White-label per expert — Supabase RLS, per-expert voice profiles, multi-tenant | After 3+ experts onboarded |
| Phase 4 | FastAPI backend + Next.js frontend — Full SaaS UI replacing Telegram+Supabase dashboard | After 10+ experts or direct customer demand |
Not applicable. No file sync required. All workflows imported directly into n8n via JSON import. SQL run directly in Supabase SQL Editor.
# === CONTENTENGINE .env.template ===
# === SUPABASE ===
SUPABASE_URL= # https://xxxxx.supabase.co — from Supabase dashboard
SUPABASE_ANON_KEY= # eyJ... — from Supabase Settings > API
SUPABASE_SERVICE_KEY= # eyJ... — from Supabase Settings > API (service role)
# === APIFY ===
APIFY_TOKEN= # apify_api_xxxxx — from Apify Settings > Integrations
# === ANTHROPIC ===
ANTHROPIC_API_KEY= # sk-ant-xxxxx — from console.anthropic.com
# === X / TWITTER ===
X_BEARER_TOKEN= # AAAA... — from developer.x.com app
X_API_KEY= # Consumer key
X_API_SECRET= # Consumer secret
X_ACCESS_TOKEN= # OAuth access token (user-level)
X_ACCESS_SECRET= # OAuth access token secret
# === TELEGRAM ===
TELEGRAM_BOT_TOKEN= # 123456:ABC-xxxxx — from @BotFather
TELEGRAM_CHAT_ID= # Your personal chat ID — from @userinfobot
| Item | Cost |
|---|---|
| n8n (existing instance) | $0 (already running) |
| Supabase Free Tier | $0 |
| X API Free Tier | $0 |
| Telegram Bot | $0 |
| Infrastructure Total | $0/mo |
| Service | Usage | Unit Cost | Monthly Est. |
|---|---|---|---|
| Apify scraping | ~3,000 tweets/mo | $0.40/1K tweets | $1.20 |
| Anthropic (reply gen) | ~4,500 calls/mo | ~$0.002/call | $9.00 |
| Anthropic (scoring) | ~150 calls/mo | ~$0.003/call | $0.45 |
| Anthropic (weekly digest) | ~4 calls/mo | ~$0.005/call | $0.02 |
| Variable Total | | | ~$10-12/mo |
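The variable total can be reproduced from the table's own unit figures. A quick arithmetic check (these are the table's estimates, not quoted vendor pricing):

```javascript
// Reproduce the variable-cost estimate from the table's own figures.
const usage = [
  { name: "apify",         units: 3000, unitCost: 0.40 / 1000 }, // per tweet
  { name: "reply_gen",     units: 4500, unitCost: 0.002 },       // per call
  { name: "scoring",       units: 150,  unitCost: 0.003 },       // per call
  { name: "weekly_digest", units: 4,    unitCost: 0.005 },       // per call
];
const monthlyTotal = usage.reduce((sum, s) => sum + s.units * s.unitCost, 0);
// 1.20 + 9.00 + 0.45 + 0.02 = 10.67, consistent with the ~$10-12/mo total.
```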
END OF PRD — Companion to CONTENTENGINE-GUIDE.html and CONTENTENGINE-ADR.html