Reveal Live's research links flow has friction that slows down the live demo moment:
Input friction:
Users must paste full URLs — but at a networking event you often only know "@username" or "FirstName LastName" not https://linkedin.com/in/john-smith-a1b2c3
Only one URL can be added per social platform — but prospects often have multiple presences (LinkedIn + podcast + company page + Substack)
There's no camera/OCR path — the prospect's profile is literally on the screen in front of you (their phone or yours) but you have to manually type/paste the URL
Data waste:
We do deep research (LinkedIn agent, Company agent, Social agent running in parallel) but the extracted intelligence is used once for page generation and discarded
There's no ICP library — if we later want to email this prospect, build a follow-up page, or reference their profile in a different template, we start from scratch
The psychographic deep-dive (dreams/fears/suspicions, communication style, implied needs) is computed but not persisted as a reusable contact profile
What's broken today:
Working components:
| Component | Status | Location |
|---|---|---|
| Research links input | Working | Reveal Live wizard — URL text fields per platform |
| LinkedIn agent | Working | Extracts profile data from LinkedIn URL |
| Company agent | Working | Extracts company data from company URL |
| Social agent | Working | Extracts social presence from social URLs |
| Agent swarm (parallel) | Working | All three agents run concurrently |
| CRM profiles | Working | components/reveal/CRMProfileEditor.tsx — manual entry |
| ICP profiles | Working | components/reveal/ICPProfileEditor.tsx — manual entry |
| Screenshot-to-text | Working | LinkedIn intel extraction from screenshots |
What's missing:
| Missing capability | Impact |
|---|---|
| Username-to-URL resolution | Users must find and paste full URLs manually |
| Multi-URL per platform | Can't add LinkedIn + LinkedIn company page + podcast |
| Camera/OCR for profile capture | Must manually type what's visible on screen |
| Persistent contact/ICP library | Research intelligence discarded after run |
| Cross-run profile reuse | Start from scratch on every new run for same prospect |
| Email funnel template injection | Can't pipe saved profiles into future templates |
The research link input field should accept any of the following formats:
Full URL: https://linkedin.com/in/john-smith-a1b2c3
Username: @johnsmith
Name only: John Smith
Handle: linkedin.com/in/john-smith
Resolution logic (LLM-assisted):
```typescript
async function resolveResearchInput(
  input: string,
  platform: string
): Promise<string> {
  // 1. Already a full URL? Return as-is (normalized).
  if (input.startsWith('http://') || input.startsWith('https://')) {
    return normalizeUrl(input, platform)
  }
  // 2. Partial URL (e.g., "linkedin.com/in/john")? Prepend https://
  if (input.includes('.com/') || input.includes('.io/')) {
    return 'https://' + input
  }
  // 3. Username or bare name — LLM resolves to a full URL:
  //    LinkedIn:   @johnsmith → https://linkedin.com/in/johnsmith
  //    Twitter/X:  @johnsmith → https://x.com/johnsmith
  //    Instagram:  johnsmith  → https://instagram.com/johnsmith
  return await llmResolveProfileUrl(input, platform)
}
```
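The helpers `normalizeUrl` and `llmResolveProfileUrl` are referenced above but not defined in this spec. A minimal sketch of `normalizeUrl`, assuming its job is to force https, drop the `www.` prefix, strip query/hash noise, and trim a trailing slash (the `platform` parameter is kept for signature parity but unused here):

```typescript
// Hypothetical helper — not in the codebase yet. Assumes normalization means:
// force https, lowercase the host, drop "www.", strip query/hash, trim a
// trailing slash. `platform` is accepted for parity with resolveResearchInput.
function normalizeUrl(input: string, platform: string): string {
  const url = new URL(input)
  url.protocol = 'https:'
  url.hostname = url.hostname.toLowerCase().replace(/^www\./, '')
  url.search = ''
  url.hash = ''
  return url.toString().replace(/\/$/, '')
}
```

For example, `normalizeUrl('http://www.LinkedIn.com/in/John-Smith?utm=x', 'linkedin')` yields `https://linkedin.com/in/John-Smith`.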
UI behavior:
Placeholder text: "Paste URL, @username, or full name"
Current: One URL field per platform (LinkedIn, Company, Social).
New: Dynamic list — add as many URLs as needed, tagged by type.
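`ResearchLink` (the type the capture modes later in this section return) isn't defined in the spec. A hypothetical shape, plus a tag helper that mirrors the auto-tagging rules described next — field names and patterns are assumptions:

```typescript
// Hypothetical shape for a tagged research link — field names are assumptions.
interface ResearchLink {
  url: string
  platform: string   // e.g. 'LinkedIn (Personal)', 'X/Twitter'
  name?: string
  title?: string
  company?: string
  addedAt?: string   // ISO timestamp
}

// Auto-detect the platform tag from a normalized URL.
function autoTagPlatform(url: string): string {
  const u = url.toLowerCase()
  if (u.includes('linkedin.com/in/')) return 'LinkedIn (Personal)'
  if (u.includes('linkedin.com/company/')) return 'LinkedIn (Company)'
  if (u.includes('x.com/') || u.includes('twitter.com/')) return 'X/Twitter'
  if (u.includes('instagram.com/')) return 'Instagram'
  if (u.includes('.substack.com')) return 'Substack'
  return 'Other'
}
```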
Auto-tagging: When a URL is added, auto-detect the platform:
linkedin.com/in/* → LinkedIn (Personal)
linkedin.com/company/* → LinkedIn (Company)
x.com/* or twitter.com/* → X/Twitter
instagram.com/* → Instagram
*.substack.com → Substack
Two capture modes:
Mode A: Screenshot capture (user's phone)
The user takes a screenshot of the prospect's profile on their own phone, then uploads it.
Mode B: Live camera capture (of client's phone/screen)
User points their phone camera at the prospect's phone/laptop showing their profile.
getUserMedia API (camera access, already used for QR scanning)
```typescript
async function extractProfileFromImage(
  imageBase64: string
): Promise<ResearchLink[]> {
  const response = await llm.analyze({
    model: 'claude-sonnet-4-6',
    messages: [{
      role: 'user',
      content: [
        {
          type: 'image',
          source: { type: 'base64', data: imageBase64 }
        },
        {
          type: 'text',
          text: `Extract all visible profile information:
- Full name
- Username/handle
- Profile URL (if visible)
- Platform (LinkedIn, X, Instagram, etc.)
- Company name (if visible)
- Job title (if visible)
Return as JSON array of
{ url, platform, name, title, company }`
        }
      ]
    }]
  })
  return parseExtractedLinks(response)
}
```
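`parseExtractedLinks` is referenced but not shown. A defensive sketch, assuming the model's reply is available as plain text that may wrap the JSON array in prose (the `ExtractedLink` shape mirrors the prompt's `{ url, platform, name, title, company }`; all names here are assumptions):

```typescript
// Hypothetical result shape, mirroring the extraction prompt.
type ExtractedLink = {
  url: string
  platform: string
  name?: string
  title?: string
  company?: string
}

// Hypothetical parser — tolerant of prose surrounding the JSON array.
function parseExtractedLinks(responseText: string): ExtractedLink[] {
  const match = responseText.match(/\[[\s\S]*\]/)
  if (!match) return []
  try {
    const items = JSON.parse(match[0]) as Array<Record<string, string | undefined>>
    return items
      .filter((item) => item.url || item.name)  // drop empty rows
      .map((item) => ({
        url: item.url ?? '',
        platform: item.platform ?? 'Other',
        name: item.name,
        title: item.title,
        company: item.company,
      }))
  } catch {
    return []  // malformed JSON — caller can fall back to manual entry
  }
}
```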
| Agent | Extracts |
|---|---|
| LinkedIn Agent | Name, title, company, headline, about summary, experience, skills, education, recommendations |
| Company Agent | Company name, industry, size, description, products/services, recent news, team |
| Social Agent | Social profiles, posting frequency, content themes, engagement style, audience |
After the agent swarm extracts raw data, a psychographic synthesis pass should run:
Extracted Intelligence (raw agent output) → Psychographic Synthesis (LLM pass) →
ICP Contact Profile:
• Implied dreams (what they want but won't say)
• Implied fears (what keeps them up at night)
• Implied suspicions (what they've been burned by before)
• Communication style (formal/casual, data-driven/story-driven)
• Decision-making pattern (consensus/authority, fast/deliberate)
• Trigger phrases (language that resonates with their worldview)
• Anti-patterns (language that will make them tune out)
• DO/DON'T rules for engaging this specific person
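The synthesis pass itself isn't specified beyond its outputs. One way to drive it is a prompt builder that requests exactly the fields above as strict JSON; a sketch (the prompt wording is an assumption, but the keys match the `contact_profiles.psychographic` JSONB format):

```typescript
// Hypothetical prompt builder for the psychographic synthesis pass.
// Requested keys match the contact_profiles.psychographic JSONB format.
function buildSynthesisPrompt(rawExtraction: object): string {
  return [
    'You are a psychographic analyst. From the research data below,',
    'infer a contact profile. Return ONLY a JSON object with keys:',
    'implied_dreams, implied_fears, implied_suspicions (string arrays),',
    'communication_style, decision_pattern (strings),',
    'trigger_phrases, anti_patterns, do_rules, dont_rules (string arrays),',
    'summary (string).',
    '',
    'Research data:',
    JSON.stringify(rawExtraction, null, 2),
  ].join('\n')
}
```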
The CRM Profile Editor (CRMProfileEditor.tsx) captures dreams/fears/suspicions/DO/DON'T manually. The new flow auto-generates this from research data and saves it persistently.
New table: contact_profiles
```sql
CREATE TABLE IF NOT EXISTS contact_profiles (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id UUID NOT NULL REFERENCES auth.users(id),
  team_id UUID REFERENCES teams(id),

  -- Identity
  full_name TEXT NOT NULL,
  email TEXT,
  company TEXT,
  title TEXT,
  linkedin_url TEXT,
  profile_image_url TEXT,

  -- Research links (all URLs used for research)
  research_links JSONB DEFAULT '[]',
  -- Format: [{ url, platform, added_at, extracted_data_summary }]

  -- Raw agent extraction (full output, for re-processing)
  raw_extraction JSONB,

  -- Psychographic profile (the synthesized intelligence)
  psychographic JSONB,
  -- Format: {
  --   implied_dreams: [],
  --   implied_fears: [],
  --   implied_suspicions: [],
  --   communication_style: "",
  --   decision_pattern: "",
  --   trigger_phrases: [],
  --   anti_patterns: [],
  --   do_rules: [],
  --   dont_rules: [],
  --   summary: ""
  -- }

  -- Usage tracking
  run_ids TEXT[] DEFAULT '{}',
  page_slugs TEXT[] DEFAULT '{}',
  last_researched_at TIMESTAMPTZ,

  -- Metadata
  tags TEXT[] DEFAULT '{}',
  notes TEXT,
  created_at TIMESTAMPTZ DEFAULT now(),
  updated_at TIMESTAMPTZ DEFAULT now()
);

-- Indexes for lookup
CREATE INDEX idx_contact_profiles_user ON contact_profiles(user_id);
CREATE INDEX idx_contact_profiles_team ON contact_profiles(team_id);
CREATE INDEX idx_contact_profiles_name ON contact_profiles(full_name);
CREATE INDEX idx_contact_profiles_email ON contact_profiles(email);
```
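The spec's dedup precedence (linkedin_url first, then email, then full_name + company fuzzy match) can be isolated as a pure matcher over candidate rows, which keeps it testable apart from the database client. A sketch with hypothetical names — and note the "fuzzy" step here is just normalized equality, a deliberate simplification:

```typescript
// Hypothetical dedup matcher for contact_profiles rows, following the
// precedence in this spec: linkedin_url → email → full_name + company.
type ContactRow = {
  id: string
  full_name: string
  email?: string
  company?: string
  linkedin_url?: string
}

function findExistingContact(
  candidates: ContactRow[],
  incoming: { full_name: string; email?: string; company?: string; linkedin_url?: string }
): ContactRow | undefined {
  const norm = (s?: string) => (s ?? '').trim().toLowerCase()
  return (
    candidates.find(
      (c) => c.linkedin_url && norm(c.linkedin_url) === norm(incoming.linkedin_url)
    ) ??
    candidates.find((c) => c.email && norm(c.email) === norm(incoming.email)) ??
    // Crude "fuzzy" match: same normalized name AND company.
    candidates.find(
      (c) =>
        norm(c.full_name) === norm(incoming.full_name) &&
        !!c.company && norm(c.company) === norm(incoming.company)
    )
  )
}
```

If a match is found the caller updates that row; otherwise it inserts a new one.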
| Trigger | Action |
|---|---|
| Reveal Live run completes | Auto-create or update contact profile from agent extraction |
| User manually creates CRM profile | Save as contact profile |
| User imports from CSV/spreadsheet | Batch create contact profiles |
| Re-run for same contact | Update existing profile with new research data |
Dedup matching: linkedin_url first, then email, then full_name + company fuzzy match. If a match is found, update rather than create a duplicate.
New section in Reveal dashboard (or sidebar):
When starting a new Reveal run, the user can pick a saved contact from the library instead of pasting research links again; the run then enriches that contact's profile with any new findings.
The contact library enables new template types that reference the saved profile:
| Template / Feature | Uses Profile For |
|---|---|
| Reveal page (existing) | Full psychographic-aware content generation |
| Avatar steelman review (new) | Pre-publish copy review as the prospect — line-by-line reaction, edit suggestions |
| Email funnel (future) | Personalized email sequence using dreams/fears/trigger phrases |
| Follow-up page (future) | Post-meeting summary using saved research + new transcript |
| Proposal (future) | Tailored proposal using company data + communication style |
| Conference recap (future) | Multi-contact summary from event |
The page IS the demo. If a single line doesn't land with the prospect, credibility breaks. The psychographic profile gives us everything we need to simulate how the prospect will actually read the page — before it goes live.
After page generation and before publish, the system spins up an avatar editor — an LLM persona constructed from the contact's psychographic profile. This avatar reads the proposed page copy as the prospect would and provides a line-by-line reaction with edit suggestions.
The avatar is built from the saved psychographic profile:
Contact Profile (from ICP Library)
├── implied_dreams → What they want to hear (lean into)
├── implied_fears → What makes them defensive (avoid)
├── implied_suspicions → What they'll fact-check first (substantiate)
├── communication_style → How they process information
├── decision_pattern → How they evaluate proposals
├── trigger_phrases → Language that resonates
├── anti_patterns → Language that repels
├── do_rules → Engagement guidelines
└── dont_rules → Known dealbreakers
System prompt for avatar:
You are {full_name}, {title} at {company}.
Your communication style: {communication_style}
Your decision-making pattern: {decision_pattern}
Things you care about deeply: {implied_dreams}
Things that make you skeptical: {implied_suspicions}
Things you've been burned by before: {implied_fears}
Language that resonates with you: {trigger_phrases}
Language that turns you off: {anti_patterns}
Read the following page that was created for you. React honestly.
For each section, note:
- Does this feel like it was written FOR you or AT you?
- What would you challenge?
- What makes you want to keep reading?
- What makes you want to close the tab?
Be specific. Quote the exact lines that work or don't work.
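Rendering that system prompt from a saved profile is mechanical. A sketch, with field names mirroring the psychographic JSONB format (the list formatting and truncated template are assumptions — the full prompt would append the reaction instructions above):

```typescript
// Hypothetical renderer: fills the avatar system prompt template from a
// saved contact profile. Field names mirror the psychographic JSONB format.
type Psychographic = {
  implied_dreams: string[]
  implied_fears: string[]
  implied_suspicions: string[]
  communication_style: string
  decision_pattern: string
  trigger_phrases: string[]
  anti_patterns: string[]
}

function buildAvatarSystemPrompt(
  contact: { full_name: string; title: string; company: string },
  p: Psychographic
): string {
  const list = (items: string[]) => items.join('; ')
  return `You are ${contact.full_name}, ${contact.title} at ${contact.company}.
Your communication style: ${p.communication_style}
Your decision-making pattern: ${p.decision_pattern}
Things you care about deeply: ${list(p.implied_dreams)}
Things that make you skeptical: ${list(p.implied_suspicions)}
Things you've been burned by before: ${list(p.implied_fears)}
Language that resonates with you: ${list(p.trigger_phrases)}
Language that turns you off: ${list(p.anti_patterns)}`
}
```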
Without a saved profile, the avatar is constructed from whatever the agent swarm extracted during this run — good but shallow. With a saved profile that's been enriched across multiple interactions, the avatar is far more accurate.
The contact library makes the avatar smarter over time. Each interaction adds signal.
The audit engine (lib/reveal/audit.ts) already does factual steelman — verifying claims against web sources and scoring confidence. The avatar editor is the psychographic steelman — verifying that the copy will land with this specific person. They're complementary:
| Layer | What It Checks | Source of Truth |
|---|---|---|
| Quote verification | Are quotes accurate? | Source transcript |
| Factual audit | Are claims true? | Web search + LLM verification |
| Avatar review | Will this person believe it? | Psychographic profile |
Three layers of steelman before publish.
Zero hallucination + factually verified + personally calibrated.
Immediate UX win
Files: Research links component in Reveal Live wizard
No schema changes.
Effort: 3-4 hours
Files: Research links component + new extraction utility
No schema changes.
Effort: 2-3 hours
contact_profiles table (migration)
run_ids and page_slugs for cross-reference
Files: New migration, new lib/contacts.ts, update run/route.ts
Effort: 3-4 hours
Files: New components/reveal/ContactLibrary.tsx, dashboard integration
Effort: 4-5 hours
Files: New components/reveal/AvatarReview.tsx, update publish flow
Effort: 4-5 hours
Files: Wizard updates, new template
Effort: 4-6 hours (template writing is the bulk)
| Excluded | Why |
|---|---|
| CRM integration (HubSpot/Salesforce sync) | Build the contact library first, integrate later when API demand exists |
| Bulk import from LinkedIn Sales Navigator | API restrictions make this unreliable |
| Auto-refresh profiles on schedule | Profiles don't change fast enough — re-research on demand is sufficient |