Technical tangents
Documenting my journey with AI experiments
The only way to stay on the cutting edge of AI is to actually explore it and work with it. Below you'll find me rambling about that stuff in focused deep dives that may lead nowhere beyond getting better at building products with AI.
Let me be your guide • Start here
I don't know much, but I am happy to teach you what I know
Product & Personal branding with AI
Live case study: Training image models, generating videos, and testing what it takes to build a consistent AI brand avatar. From LoRA training to multi-angle generation and maintaining visual consistency.
AI Image Storyboarding
Why I storyboard with images ($0.001-$0.055 each) before committing to expensive video generation ($0.15-$0.40/second). Budget-conscious pre-production workflow with real examples.
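To make that budget gap concrete, here's the back-of-the-envelope math (prices from the ranges above; the frame and clip counts are my own illustrative assumptions):

```python
# Rough cost math behind storyboarding first. Prices use the upper
# bounds quoted above; frame and clip counts are assumptions.
image_cost_high = 0.055   # $ per storyboard image
video_cost_high = 0.40    # $ per second of generated video

frames = 20               # storyboard frames to explore a concept
clip_seconds = 30         # length of the video you'd otherwise generate blind

storyboard_total = frames * image_cost_high       # $1.10
video_total = clip_seconds * video_cost_high      # $12.00

print(f"Storyboard pass:      ${storyboard_total:.2f}")
print(f"One blind video take: ${video_total:.2f}")
```

A full storyboard pass costs an order of magnitude less than a single uninformed video take, which is the whole argument in one calculation.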
LPA Studio
Personal Workflow
Character IPs, scene development, and product placement through visual editing
What is LPA Studio?
LPA Studio is my personal workspace for managing character IP creation, scene development, and product placement. It's built around the visual Scene Editor - a tool for editing AI-generated images through box annotations. Instead of writing complex prompts, you draw boxes directly on images and describe changes in plain language.
Think of it like a creative studio where you can: develop character IPs with consistent appearance across scenes, build narrative sequences with temporal continuity, place products naturally in environments, and iterate on visual concepts through annotation-driven refinement.
Want the full details? See video demos and complete feature breakdown
What You Can Manage
Character IPs
Create and maintain character consistency across multiple scenes, poses, and environments
Scene Development
Build narrative sequences with temporal consistency - weather, time, character state progression
Product Placement
Integrate products naturally into scenes with proper lighting, perspective, and environmental context
See It In Action
Watch how simple annotations transform complex scenes - from adding explosions to changing entire environments
Scene Editor Workflow
The Scene Editor is the core tool within LPA Studio - here's how the visual annotation system works:
Draw Annotation Boxes
Click and drag to draw boxes around areas you want to modify. Each box becomes an editable annotation with specific instructions.
Choose Action Type
Select from Replace, Add, Remove, or Modify - each optimized for different transformation types.
Describe Your Changes
Type what you want in plain language. The system translates your instructions into structured prompts automatically.
Generate & Compare
Click generate to create variations. Compare results side-by-side and iterate with new annotations.
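For the technically curious, here's a minimal sketch of what one annotation could look like as data. The field names are hypothetical, not the actual LPA Studio schema:

```python
from dataclasses import dataclass
from typing import Literal

# Hypothetical shape of a Scene Editor annotation; the real schema
# is not published, so treat these field names as illustrative.
Action = Literal["replace", "add", "remove", "modify"]

@dataclass
class Annotation:
    box: tuple[int, int, int, int]  # (x, y, width, height) in image pixels
    action: Action                  # one of the four action types above
    instruction: str                # plain-language description of the change

# Up to 10 of these per image, applied in one generation pass:
annotations = [
    Annotation(box=(120, 340, 200, 180), action="add",
               instruction="add explosion here"),
    Annotation(box=(560, 80, 140, 220), action="replace",
               instruction="replace with neon sign"),
]
```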
Key Features
Visual Feedback
See exactly where changes will happen with color-coded boxes and numbered annotations
Multiple Annotations
Add up to 10 annotations per image - change multiple elements in one generation
Variation Management
Generate multiple variations, switch between them, and continue refining any version
State Persistence
Annotations are saved automatically - come back later and continue where you left off
Powered by LPA
Behind the scenes, the Scene Editor translates your visual annotations into Layered Prompt Architecture (LPA) - a structured JSON system for precise AI image generation. You get the benefits of structured prompting without needing to know the technical details. Learn more about LPA below.
What is LPA?
v0.1 Experimental
Understanding structured JSON prompting for consistent AI images
What is LPA?
Layered Prompt Architecture (LPA) is my approach to structured JSON prompting - organizing AI image instructions into separate categories rather than writing one massive unstructured prompt.
Instead of: "Generate a professional photo of a person in a modern office"
You break it down into organized sections: who's in the scene, where they're positioned, what the environment looks like, how the lighting works, color palette, camera settings, and quality standards.
Why Structure Matters
Unstructured prompt:
"Generate a professional photo of a person in a modern office with good lighting and professional quality"
- Vague "good lighting"
- No composition specs
- Mixed concerns

Structured prompt (one line per layer):
L1: TOK male, standing, looking at camera
L2: centered, full body, 1/3 of frame
L3: modern office, glass walls, concrete
L4: natural window light 5000K, soft shadows
- Explicit light source + color temperature
- Clear spatial positioning
- Separated concerns
Visual comparison examples
Unstructured Prompt: "a person in a retro futuristic Tokyo office from the 90s with old computers and good lighting"
Structured prompts consistently deliver 90-100% character likeness. Unstructured prompts drift unpredictably across scenes. The data is clear: separation works.
Unstructured prompts are like giving vague directions: "Drive to the store" - which store? Which route? What if there's traffic?
Structured prompts are like using GPS with specific waypoints. Each category (layer) handles one aspect without interfering with others. When you say "the lighting should be warm," it doesn't accidentally change the character's appearance or the camera angle.
This is v0.1 - But Getting More Confident
After generating 15,000+ images in October 2025 alone, the 7-layer structure is proving itself. While I'm still refining edge cases and discovering optimizations, the core separation principle consistently delivers 90-100% character consistency across multi-scene stories. Some layers might evolve, but the fundamental approach is settling into a reliable production system.
The 7 Categories (Current Structure)
These 7 layers emerged from testing - they might evolve as I discover better ways to organize prompts:
| Layer | What It Controls | Why Separate? |
|---|---|---|
| Layer 1 | Who/What (Identity) | Keep character pure - no mixed attributes |
| Layer 2 | Where in Frame | Composition doesn't affect identity |
| Layer 3 | The Scene (Environment) | Background detail independent of subject |
| Layer 4 | Light Sources | Lighting won't alter composition |
| Layer 5 | Palette (Color) | Color grading separate from lighting |
| Layer 6 | Camera Settings | Technical specs don't change scene |
| Layer 7 | Quality Standards | Realism tags preserve all other layers |
What 15,000+ Images Taught Me: Separating these concerns into distinct categories keeps each instruction from accidentally interfering with the others. Lighting instructions don't mess up character appearance. Camera settings don't accidentally change the environment. This isn't theory anymore - it's a repeatable production system that delivers consistent results across thousands of generations.
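To make the table concrete, here's one scene expressed across all 7 layers as JSON (built via a Python dict for printing). The key names are my shorthand for the table above, and the Layer 5-7 values are illustrative extensions of the earlier 4-layer example:

```python
import json

# One scene expressed across the 7 LPA layers. Layer names follow the
# table above; key names and the L5-L7 values are illustrative, not a spec.
prompt = {
    "layer_1_identity":    "TOK male, standing, looking at camera",
    "layer_2_framing":     "centered, full body, subject fills 1/3 of frame",
    "layer_3_environment": "modern office, glass walls, polished concrete",
    "layer_4_lighting":    "natural window light, 5000K, soft shadows",
    "layer_5_palette":     "muted neutrals with cool blue accents",
    "layer_6_camera":      "50mm lens, f/2.8, eye-level, shallow depth of field",
    "layer_7_quality":     "photorealistic, sharp focus, natural skin texture",
}

print(json.dumps(prompt, indent=2))
```

Each key owns exactly one concern, so editing the lighting string can never leak into identity or framing.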
What I'm Using This For
Story Sequences: Testing character consistency across 19+ scenes with different settings, times of day, and weather conditions.
Style Transformations: Taking existing scenes and recreating them in different visual styles while keeping composition intact.
Reference-Based Generation: Analyzing video frames or photos and generating new images based on that analysis.
Learning What Works: Each generation teaches me something new about how AI models interpret structured vs. unstructured instructions.
Visual Scene Editor
The Scene Editor bridges the gap between LPA's structured JSON and intuitive visual editing. Instead of writing complex prompts, you draw boxes directly on images to specify what should change. Each annotation gets mapped to the appropriate LPA layer automatically.
Draw & Annotate
Box an area, describe the change: "add explosion here," "remove this object," "replace with neon sign"
LPA Translation
Your visual edits get translated into proper LPA layers - environment changes go to Layer 3, lighting to Layer 4, etc.
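A sketch of how that routing could work. I'm assuming a simple keyword heuristic here, which almost certainly understates the real implementation:

```python
# Illustrative routing of an annotation's instruction to an LPA layer.
# A keyword heuristic like this is an assumption, not the real system.
LAYER_KEYWORDS = {
    "layer_3_environment": ["background", "wall", "building", "sky", "street"],
    "layer_4_lighting":    ["light", "shadow", "glow", "neon", "sunset"],
    "layer_5_palette":     ["color", "tone", "warm", "cool", "saturated"],
}

def route_to_layer(instruction: str) -> str:
    """Pick the first LPA layer whose keywords match the instruction."""
    text = instruction.lower()
    for layer, keywords in LAYER_KEYWORDS.items():
        if any(word in text for word in keywords):
            return layer
    return "layer_3_environment"  # default: treat untagged edits as scene changes

print(route_to_layer("replace with neon sign"))  # -> layer_4_lighting
```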
Why This Matters: The Scene Editor makes LPA accessible without requiring JSON knowledge. Draw what you want changed, and the system handles the structured prompt construction behind the scenes. Same precision, visual workflow.
In Plain English: LPA is my approach to organizing AI image prompts through structured JSON categories instead of unstructured text blocks. After 15,000+ images in production, the results are clear: separation of concerns produces dramatically more consistent, controllable results. Version 0.1, but the core principle is proven.
LPA Casting System
Transform existing LPA stories into entirely different visual styles while preserving spatial choreography. Cast VFX scenes into National Geographic photorealism, or realistic scenes into absurdist satire - all automated.
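In LPA terms, casting can be thought of as swapping the style-bearing layers while leaving identity and framing untouched. A minimal sketch, assuming the layer keys from the earlier example:

```python
# Hypothetical "casting": restyle a scene while preserving spatial
# choreography. Layer keys follow the earlier sketch; values are illustrative.
scene = {
    "layer_1_identity":    "TOK male, mid-stride, looking left",
    "layer_2_framing":     "wide shot, subject on left third",
    "layer_3_environment": "VFX battlefield, debris, smoke columns",
    "layer_5_palette":     "desaturated teal and orange",
    "layer_7_quality":     "cinematic VFX render",
}

natgeo_style = {
    "layer_3_environment": "savanna at golden hour, dust in the air",
    "layer_5_palette":     "earthy ochres, warm naturalistic grading",
    "layer_7_quality":     "National Geographic photorealism",
}

def cast(scene: dict, style: dict) -> dict:
    """Overwrite the style-bearing layers; identity (L1) and framing (L2) survive."""
    return {**scene, **style}

recast = cast(scene, natgeo_style)
assert recast["layer_2_framing"] == scene["layer_2_framing"]  # choreography kept
```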
Character Isolation System
Build persistent character libraries with dual 3D render + realistic portrait versions. Create once, reference by number across unlimited scenes.
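The create-once, reference-by-number idea might look like this as data. A sketch under my own assumptions about the library shape:

```python
# Hypothetical character library: create once, reference by number.
# Each entry pairs a 3D-render look with a realistic-portrait look.
characters = {
    1: {
        "name": "TOK male",
        "render_3d": "stylized 3D render, neutral grey studio",
        "realistic": "photorealistic portrait, natural window light",
    },
}

def identity_layer(char_id: int, version: str = "realistic") -> str:
    """Build a Layer 1 string from a numbered library entry."""
    char = characters[char_id]
    return f'{char["name"]}, {char[version]}'

print(identity_layer(1))  # drops straight into layer_1_identity of any scene
```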
Character Examples
Full details inside
LPA Data Model Architecture
From JSON chaos to type-safe clarity: A four-level hierarchy for managing complex AI image generation workflows with casting styles, scene variations, and generation metadata.
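The four levels aren't named here, so this sketch assumes a plausible story, scene, variation, generation nesting. All class and field names are hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical four-level hierarchy; level names and fields are my
# assumptions based on the description above, not the actual model.
@dataclass
class Generation:          # level 4: one model call and its metadata
    seed: int
    model: str
    image_path: str

@dataclass
class Variation:           # level 3: one casting style applied to a scene
    casting_style: str
    generations: list[Generation] = field(default_factory=list)

@dataclass
class Scene:               # level 2: one LPA prompt (the 7 layers)
    layers: dict[str, str]
    variations: list[Variation] = field(default_factory=list)

@dataclass
class Story:               # level 1: an ordered sequence of scenes
    title: str
    scenes: list[Scene] = field(default_factory=list)
```

Typed containers like these are one way to get from "JSON chaos" to something an editor can autocomplete and a validator can check.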