🔒 Internal Documentation
This is a comprehensive technical wiki documenting the complete LPA (Layered Prompt Architecture) storyboarding system - including methodology, commands, costs, and production examples.
Full Wiki Contains:
- Complete 7-layer LPA architecture with detailed examples
- 4-stage workflow chain with full implementation (see the command sketch after this list):
  - Stage 1: Video extraction - extract frames from reference videos with the extract_video_frames command (MOV/MP4/GIF/WEBM supported)
  - Stage 2: AI scene analysis - GPT-4 Vision or Claude analyzes each frame and automatically extracts a structured 7-layer LPA prompt
  - Stage 3: Narrative creation - transform the extracted LPA frames into a story JSON with temporal consistency validation
  - Stage 4: Image generation - generate a consistent image sequence with generate_lpa_story (edwin-avatar-v4 or nano-banana)
- The Narrative Archaeologist methodology (frame-to-story workflow)
- Vision AI automated LPA extraction with Claude/GPT-4
- Management commands reference (4 commands with full usage)
- Crystal Heist case study: 19 scenes, $0.25, production workflow
- Cost & performance analysis, optimization strategies
- Best practices, gotchas, and future roadmap
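The four stages chain together as commands. A minimal sketch, assuming a Django-style project: only extract_video_frames and generate_lpa_story are named in this wiki, so the Stage 2/3 command names and every argument below are illustrative placeholders, not the actual tooling.

```python
# Hedged sketch of the 4-stage chain as Django-style management commands.
from django.core.management import call_command

# Stage 1: pull frames from a reference video (MOV/MP4/GIF/WEBM)
call_command("extract_video_frames", "reference.mp4", output_dir="frames/")

# Stage 2: a vision model extracts a 7-layer LPA prompt per frame
# (command name and arguments are assumptions)
call_command("analyze_frames", "frames/", provider="gpt-4-vision")

# Stage 3: compile analyzed frames into a story JSON, validating continuity
# (command name and arguments are assumptions)
call_command("create_lpa_story", "frames/", output="story.json")

# Stage 4: generate the consistent image sequence
call_command("generate_lpa_story", "story.json", model="edwin-avatar-v4")
```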
Two Ways to Start
The workflow supports both reference-based and AI-brainstormed approaches:
Path 1: Reference-based. Draw inspiration from existing videos, films, or style experiments. Extract frames → analyze with AI → build story → generate. Perfect when you have visual references - including your own previously generated videos, when you want more content with the same characters or want to expand within the same universe.
Path 2: AI-brainstormed. Collaborate with AI to define the look, feel, and overall story narrative entirely in text - before any image generation. The AI agent builds a complete frame of reference around your concept, then structures it into 7-layer LPA prompts. Craft the story JSON, validate consistency, then generate. Turn a concept into a storyboard in minutes without reference footage - lower cost, faster iteration.
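Either path ends in the same story JSON. A hedged skeleton of what that file might look like - every field name here is an assumption about the shape, not the actual schema:

```python
# Story JSON skeleton, shown as a Python dict. Field names are assumptions.
story = {
    "title": "Crystal Heist",
    "scenes": [
        {
            "scene_id": 1,
            "timestamp": "4:15 PM",   # checked by temporal validation
            "lpa": {                  # one key per layer (see the FAQ below)
                "subject": "...",
                "composition": "...",
                "environment": "...",
                "lighting": "...",
                "color": "...",
                "camera": "...",
                "quality": "...",
            },
        },
        # ...one entry per scene
    ],
}
```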
LPA Workflow Extensions
The workflow supports powerful extensions beyond the core pipeline:
Story Brainstorming Mode
Create complete LPA stories from pure imagination - no reference footage needed. Interactive brainstorming workflow for narrative arc, character development, and scene-by-scene LPA construction.
Style Transformation Mode
Transform existing LPA stories into entirely different visual styles while preserving spatial choreography. Cast VFX scenes into documentary realism, or realistic scenes into absurdist satire - all automated. Read the full guide.
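As a rough illustration of how a style recast can preserve choreography, a minimal sketch - the layer names follow the seven categories in the FAQ below, and the function and its inputs are invented for illustration:

```python
# Hedged sketch: swap the style-bearing layers of one scene's LPA prompt
# while the spatial layers (subject, composition, camera) carry over
# untouched. Which layers count as "style" is itself a judgment call.
STYLE_LAYERS = ("lighting", "color", "quality")

def recast_scene(scene_lpa: dict, new_style: dict) -> dict:
    """Return a copy of a scene's LPA prompt recast into new_style."""
    recast = dict(scene_lpa)               # spatial choreography preserved
    for layer in STYLE_LAYERS:
        if layer in new_style:
            recast[layer] = new_style[layer]
    return recast

# e.g., cast a VFX scene into documentary realism
documentary = {
    "lighting": "available light, handheld feel",
    "color": "muted naturalistic palette",
    "quality": "16mm documentary grain, photorealistic",
}
```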
From Concept to Consistent Storyboard
Both paths converge into the same production-ready output
This content is restricted to preserve methodology and intellectual property.
Who Is This Intended For?
Access is granted to partners who can exchange knowledge at a similar level.
Technical Partners
Developers, AI engineers, pipeline architects building in the AI content generation space
Creative Partners
Brand strategists, art directors, visual storytellers, prompt engineers with unique aesthetic approaches
The Currency of Access
I do not sell access to this as premium content. If you want to learn how I do these things, your currency is showing me something I don't know how to do.
If you don't have that yet - work your way up to that first. Build your skills, create your own breakthroughs, document your learnings.
Then let's exchange knowledge as equals.
Frequently Asked Questions
What is LPA (Layered Prompt Architecture)?
Is this a new system or just structured prompting?
TL;DR: It's JSON prompting with battle-tested structure.
In simple terms, LPA is structured JSON prompting - organizing image generation instructions into distinct categories (subject, composition, environment, lighting, color, camera, quality) rather than one massive unstructured prompt.
Why call it "Layered Prompt Architecture" instead of just "JSON prompting"?
Because I've run extensive evaluations on better ways to structure prompts:
- Grouping common subjects for consistency
- Understanding where grouping produces worse results (e.g., VFX descriptions creating 3D-looking scenes when you want realistic scenes with 3D characters)
- Building automations to generate, compile, and intelligently group prompt elements
- Validating temporal consistency across multi-scene narratives
I see it as a "layered architecture" because each layer serves a specific purpose, and the order matters for generation quality. But yes - I didn't invent a new system. It's structured prompting refined through production use (19-scene stories with character consistency).
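To make the layers concrete, here is a hedged example of a single LPA prompt as a Python dict - the keys follow the seven categories named above, and every value is invented for illustration:

```python
# One scene's LPA prompt, layer by layer. Values are invented examples.
lpa_prompt = {
    "subject": "woman in her 20s, blonde, red jacket, holding a coffee cup",
    "composition": "medium shot, subject left of center, rule of thirds",
    "environment": "busy downtown coffee shop, rain visible through window",
    "lighting": "warm tungsten interior, overcast daylight through glass",
    "color": "warm amber interior against cool blue-gray exterior",
    "camera": "35mm lens, eye level, shallow depth of field",
    "quality": "photorealistic, high detail, natural skin texture",
}
```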
Do I need to be logged in to see the full workflow?
What's behind the authentication gate?
Yes. The complete internal documentation (command examples, implementation details, cost breakdowns, and future enhancements) is visible only to authenticated users.
This wiki serves as my exhausted-self documentation - a comprehensive reference when I'm burned out and need to remember how everything works. The public overview provides context, but the full methodology and tooling details are proprietary.
Interested in learning more? Contact me to discuss AI consulting or custom implementations.
Can I use LPA for my own image generation projects?
Is this methodology proprietary?
Absolutely. The concept of structured prompting is universal - you can organize prompts however makes sense for your use case.
What makes this implementation valuable is:
- Battle-tested 7-layer structure optimized for narrative consistency
- Automated workflow tools (frame extraction → analysis → story creation → generation)
- Temporal consistency validation preventing jarring transitions
- Production examples (19-scene Crystal Heist story)
Want help implementing this for your project? Let's talk about consulting or custom tooling development.
What are the costs for running this workflow?
API costs, tool expenses, etc.
- Vision analysis (Stage 2): $0.0075/image with GPT-4 Vision (recommended) - e.g., 20 video frames analyzed = $0.15
- Image generation (Stage 4): $0.011/scene (edwin-avatar-v4) or $0.035/scene (nano-banana) - e.g., a 19-scene story with v4 = $0.209
Crystal Heist example, total:
- 5 reference frames analyzed: $0.04
- 19 scenes generated (v4): $0.21
- Total: $0.25
Compare to hours of manual prompt writing and inconsistent results.
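The arithmetic, reproduced from the per-unit rates above:

```python
# Crystal Heist cost arithmetic, using the rates quoted above.
vision_per_image = 0.0075    # GPT-4 Vision, Stage 2
v4_per_scene = 0.011         # edwin-avatar-v4, Stage 4

analysis = 5 * vision_per_image     # 0.0375 -> quoted as $0.04
generation = 19 * v4_per_scene      # 0.209  -> quoted as $0.21
total = analysis + generation       # 0.2465 -> quoted as $0.25
```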
What is temporal consistency validation?
Why does it matter for storyboards?
Temporal consistency validation ensures your story makes logical sense across scenes - preventing jarring transitions that break viewer immersion.
✓ GOOD: Weather & Time Progression
- Scene 5: Light rain begins (4:15 PM, overcast)
- Scene 10: Heavy rain (4:45 PM, dark clouds)
- Scene 15: Rain subsides, wet surfaces (5:30 PM, clearing)
✗ BAD: Inconsistent Transitions
- Scene 5: Heavy snow (3:00 PM)
- Scene 6: Bright sunshine, dry (3:15 PM) ← 15 minutes later?
- Scene 7: Character suddenly in different clothes
✓ GOOD: Character Continuity Across Scenes
- Scene 4: Sarah (25, blonde, red jacket) at coffee shop
- Scene 40: Sarah (same appearance) waves hello at park
- Scene 50: Elderly man (70s, gray hair) from Scene 2 reappears
✗ BAD: Character Identity Confusion
- Scene 4: Young woman in 20s, professional attire
- Scene 40: Same name but now elderly (60s+) ← No time jump explanation
- Scene 45: Character from Scene 10 now has different hair color/style
✓ GOOD: Text & Sign Consistency
- Scene 8: Store sign reads "Joe's Coffee - Open 6AM-9PM"
- Scene 22: Same store, same sign text (revisited location)
- Scene 35: Newspaper headline consistent with story timeline
✗ BAD: Text & Sign Inconsistencies
- Scene 8: Store sign "Joe's Coffee"
- Scene 22: Same location but sign now reads "Mike's Diner" ← Different business
- Scene 30: Clock shows 3:00 PM but dialogue says "good morning"
What the validation checks:
- Weather progression: Rain can intensify or subside, but not disappear instantly
- Time flow: Timestamps must move forward (no time travel without explanation)
- Character state: Wet → dry requires time/explanation; injuries persist; age doesn't jump
- Character reappearance: Anyone from Scene 4 can appear in Scene 40, but appearance must match
- Environmental continuity: Indoor/outdoor transitions make sense; locations remain consistent
- Text & signage: Store names, signs, clocks, newspapers must be contextually accurate
Why it matters: Automated validation catches continuity errors before image generation, saving API costs and preventing unusable storyboards. Characters can reappear scenes later (Scene 4 → Scene 40), but their appearance, age, and identifying details must remain consistent throughout the narrative.
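For intuition, a minimal sketch of two of these checks (time flow and character reappearance), assuming the story-JSON field names used earlier - which are themselves assumptions, not the real schema:

```python
# Hedged sketch of two temporal-consistency checks: timestamps must move
# forward, and a returning character must match their first description.
from datetime import datetime

def validate_story(scenes: list[dict]) -> list[str]:
    errors: list[str] = []
    last_time = None
    first_seen: dict[str, str] = {}   # character name -> first appearance
    for scene in scenes:
        # Time flow: timestamps must not move backward (single-day sketch)
        t = datetime.strptime(scene["timestamp"], "%I:%M %p")
        if last_time is not None and t < last_time:
            errors.append(f"scene {scene['scene_id']}: time moves backward")
        last_time = t
        # Character reappearance: appearance must match the first description
        for name, look in scene.get("characters", {}).items():
            if name in first_seen and first_seen[name] != look:
                errors.append(f"scene {scene['scene_id']}: {name} changed appearance")
            first_seen.setdefault(name, look)
    return errors
```

A full pass would also cover weather progression, wet/dry state, and signage; the point is that each check is a cheap lookup performed before any API spend.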