Edwin Genego
Developer ramblings & technical tangents

Documenting my journey with AI experiments

The only way to stay on the cutting edge of AI is to actually explore it and work with it. Below you'll find me rambling about that in focused deep dives that may lead nowhere beyond getting better at building products with AI.


I don't know much, but I am happy to teach you what I know

LPA Studio

Personal Workflow

Character IPs, scene development, and product placement through visual editing

What is LPA Studio?

LPA Studio is my personal workspace for managing character IP creation, scene development, and product placement. It's built around the visual Scene Editor - a tool for editing AI-generated images through box annotations. Instead of writing complex prompts, you draw boxes directly on images and describe changes in plain language.

Think of it like a creative studio where you can: develop character IPs with consistent appearance across scenes, build narrative sequences with temporal continuity, place products naturally in environments, and iterate on visual concepts through annotation-driven refinement.

Want the full details? See video demos and complete feature breakdown


What You Can Manage

Character IPs

Create and maintain character consistency across multiple scenes, poses, and environments

Scene Development

Build narrative sequences with temporal consistency - weather, time, character state progression

Product Placement

Integrate products naturally into scenes with proper lighting, perspective, and environmental context

See It In Action

Watch how simple annotations transform complex scenes - from adding explosions to changing entire environments

Scene Editor Workflow

The Scene Editor is the core tool within LPA Studio - here's how the visual annotation system works:

1. Draw Annotation Boxes

Click and drag to draw boxes around areas you want to modify. Each box becomes an editable annotation with specific instructions.

2. Choose Action Type

Select from Replace, Add, Remove, or Modify - each optimized for different transformation types.

3. Describe Your Changes

Type what you want in plain language. The system translates your instructions into structured prompts automatically.

4. Generate & Compare

Click generate to create variations. Compare results side-by-side and iterate with new annotations.
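
To make the flow concrete, here's a minimal sketch of what a single annotation might carry internally. The field names and types are my own illustration, not LPA Studio's actual schema:

```python
from dataclasses import dataclass
from typing import Literal

# Hypothetical sketch of a Scene Editor annotation; field names are
# assumptions for illustration, not LPA Studio's actual schema.
@dataclass
class Annotation:
    box: tuple[int, int, int, int]  # x, y, width, height in image pixels
    action: Literal["replace", "add", "remove", "modify"]
    instruction: str                # plain-language description of the change

MAX_ANNOTATIONS = 10  # the editor allows up to 10 annotations per image

annotations = [
    Annotation(box=(420, 180, 260, 200), action="add",
               instruction="add explosion here"),
    Annotation(box=(40, 300, 180, 140), action="replace",
               instruction="replace with neon sign"),
]
assert len(annotations) <= MAX_ANNOTATIONS
```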

Key Features

Visual Feedback

See exactly where changes will happen with color-coded boxes and numbered annotations

Multiple Annotations

Add up to 10 annotations per image - change multiple elements in one generation

Variation Management

Generate multiple variations, switch between them, and continue refining any version

State Persistence

Annotations are saved automatically - come back later and continue where you left off

Powered by LPA

Behind the scenes, the Scene Editor translates your visual annotations into Layered Prompt Architecture (LPA) - a structured JSON system for precise AI image generation. You get the benefits of structured prompting without needing to know the technical details. Learn more about LPA below.

What is LPA?

v0.1 Experimental

Understanding structured JSON prompting for consistent AI images


Layered Prompt Architecture (LPA) is my approach to structured JSON prompting - organizing AI image instructions into separate categories rather than writing one massive unstructured prompt.

Instead of: "Generate a professional photo of a person in a modern office"

You break it down into organized sections: who's in the scene, where they're positioned, what the environment looks like, how the lighting works, color palette, camera settings, and quality standards.

Why Structure Matters

Unstructured Prompt

"Generate a professional photo of a person in a modern office with good lighting and professional quality"

Vague "good lighting"

No composition specs

Mixed concerns

Structured (LPA)

L1: TOK male, standing, looking at camera

L2: centered, full body, 1/3 of frame

L3: modern office, glass walls, concrete

L4: natural window light 5000K, soft shadows

Explicit light source + temp

Clear spatial positioning

Separated concerns

Visual Results
Coming Soon

Visual comparison examples

Unstructured Prompt

"a person in a retro futuristic Tokyo office from the 90s with old computers and good lighting"

Vague details
Mixed concerns
No structure
Across 15,000+ images (Oct 2025), structured prompts consistently deliver 90-100% character likeness, while unstructured prompts drift unpredictably across scenes. The data is clear: separation works.

Unstructured prompts are like giving vague directions: "Drive to the store" - which store? Which route? What if there's traffic?

Structured prompts are like using GPS with specific waypoints. Each category (layer) handles one aspect without interfering with others. When you say "the lighting should be warm," it doesn't accidentally change the character's appearance or the camera angle.

This is v0.1 - But Getting More Confident

After generating 15,000+ images in October 2025 alone, the 7-layer structure is proving itself. While I'm still refining edge cases and discovering optimizations, the core separation principle consistently delivers 90-100% character consistency across multi-scene stories. Some layers might evolve, but the fundamental approach is settling into a reliable production system.

The 7 Categories (Current Structure)

These 7 layers emerged from testing - they might evolve as I discover better ways to organize prompts:

Layer | What It Controls | Why Separate?
Layer 1 | Who/What (Identity) | Keep character pure - no mixed attributes
Layer 2 | Where in Frame | Composition doesn't affect identity
Layer 3 | The Scene (Environment) | Background detail independent of subject
Layer 4 | Light Sources | Lighting won't alter composition
Layer 5 | Palette (Color) | Color grading separate from lighting
Layer 6 | Camera Settings | Technical specs don't change scene
Layer 7 | Quality Standards | Realism tags preserve all other layers

What 15,000+ Images Taught Me: By separating these concerns into distinct categories, each instruction doesn't accidentally interfere with the others. Lighting instructions don't mess up character appearance. Camera settings don't accidentally change the environment. This isn't theory anymore - it's a repeatable production system that delivers consistent results across thousands of generations.
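
To ground the table, here's a sketch of one scene's prompt with all seven layers filled in as JSON. Layers 1-4 reuse the office example above; the values in layers 5-7 and the key names are my own filler rather than a fixed LPA schema:

```python
import json

# Illustrative full 7-layer prompt as JSON. Layers 1-4 reuse the office
# example above; layer 5-7 values and all key names are assumptions,
# not a fixed LPA schema.
lpa_prompt = {
    "layer_1_identity": "TOK male, standing, looking at camera",
    "layer_2_framing": "centered, full body, 1/3 of frame",
    "layer_3_environment": "modern office, glass walls, concrete",
    "layer_4_lighting": "natural window light 5000K, soft shadows",
    "layer_5_palette": "cool neutrals, muted blues, natural skin tones",
    "layer_6_camera": "50mm lens, f/2.8, eye level, shallow depth of field",
    "layer_7_quality": "photorealistic, natural skin texture, no artifacts",
}
print(json.dumps(lpa_prompt, indent=2))
```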

What I'm Using This For

JSON Story → 7 Layers → Consistent Images

Story Sequences: Testing character consistency across 19+ scenes with different settings, times of day, and weather conditions.

Style Transformations: Taking existing scenes and recreating them in different visual styles while keeping composition intact.

Reference-Based Generation: Analyzing video frames or photos and generating new images based on that analysis.

Learning What Works: Each generation teaches me something new about how AI models interpret structured vs. unstructured instructions.

Visual Scene Editor

The Scene Editor bridges the gap between LPA's structured JSON and intuitive visual editing. Instead of writing complex prompts, you draw boxes directly on images to specify what should change. Each annotation gets mapped to the appropriate LPA layer automatically.

Draw & Annotate

Box an area, describe the change: "add explosion here," "remove this object," "replace with neon sign"

LPA Translation

Your visual edits get translated into proper LPA layers - environment changes go to Layer 3, lighting to Layer 4, etc.

Why This Matters: The Scene Editor makes LPA accessible without requiring JSON knowledge. Draw what you want changed, and the system handles the structured prompt construction behind the scenes. Same precision, visual workflow.
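
As a toy illustration of that routing step, the sketch below maps an annotation's wording to a layer with simple keyword checks. The real Scene Editor's translation is more sophisticated; the function and keyword lists here are hypothetical:

```python
# Hypothetical keyword-based routing of a visual edit to an LPA layer.
# Illustrates the idea only: environment edits -> Layer 3, lighting -> Layer 4.
LAYER_KEYWORDS = {
    "layer_3_environment": ("background", "wall", "building", "street", "sign"),
    "layer_4_lighting": ("light", "lighting", "shadow", "glow", "sunset"),
}

def route_to_layer(instruction: str) -> str:
    text = instruction.lower()
    for layer, keywords in LAYER_KEYWORDS.items():
        if any(word in text for word in keywords):
            return layer
    return "layer_3_environment"  # default: most box edits touch the scene

print(route_to_layer("make the lighting warmer"))  # -> layer_4_lighting
print(route_to_layer("replace with neon sign"))    # -> layer_3_environment
```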

In Plain English: LPA is my approach to organizing AI image prompts through structured JSON categories instead of unstructured text blocks. After 15,000+ images in production, the results are clear: separation of concerns produces dramatically more consistent, controllable results. Version 0.1, but the core principle is proven.

LPA Extensions
Active R&D
31/10/2025

LPA Casting System

Transform existing LPA stories into entirely different visual styles while preserving spatial choreography. Cast VFX scenes into National Geographic photorealism, or realistic scenes into absurdist satire - all automated.

Scene-by-scene LPA layer remixing
Preserve character identity + spatial composition
VFX → Photorealistic or Realistic → Absurdist
Prerequisites: LPA Workflow outputs, LPA Data Model
Runs after generate_lpa_story (remixes existing scenes)
cast_lpa_story --target-style
Examples: Feudal Japan Universe - Garden Bridge · Neo-Tokyo 1999 Universe - 7-Eleven
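
A minimal sketch of the remix idea, under my own assumptions about layer keys: keep Layers 1-2 (identity and spatial composition) and swap the remaining layers to the target style.

```python
# Sketch of scene-by-scene casting: preserve identity (L1) and spatial
# composition (L2), override the rest with the target style's layers.
# Layer keys and style values are illustrative assumptions.
PRESERVED = {"layer_1_identity", "layer_2_framing"}

def cast_scene(scene_layers: dict, target_style: dict) -> dict:
    return {
        key: value if key in PRESERVED else target_style.get(key, value)
        for key, value in scene_layers.items()
    }

vfx_scene = {
    "layer_1_identity": "TOK male, mid-stride",
    "layer_2_framing": "centered, full body, 1/3 of frame",
    "layer_3_environment": "collapsing bridge, VFX debris",
    "layer_4_lighting": "orange fireball glow, harsh rim light",
}
natgeo_style = {
    "layer_3_environment": "misty river gorge, documentary realism",
    "layer_4_lighting": "overcast natural light, soft diffusion",
}
recast = cast_scene(vfx_scene, natgeo_style)
# Layers 1-2 survive untouched; layers 3-4 now carry the target style.
```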
New
31/10/2025

Character Isolation System

Build persistent character libraries with dual 3D render + realistic portrait versions. Create once, reference by number across unlimited scenes.

Two versions per character: 3D + realistic
Auto-numbered sequential scenes
$0.070/character, reuse infinitely

Character Examples


Characters used by: LPA Workflow, LPA Casting (via Project library)
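
Here's a sketch of what one library entry might hold, with hypothetical field names; the dual versions and numeric references come from the feature list above:

```python
from dataclasses import dataclass

# Hypothetical character-library entry; field names are my assumptions.
# Each character keeps two versions (3D render + realistic portrait) and a
# stable number so scenes can reference it without re-describing appearance.
@dataclass
class CharacterEntry:
    number: int
    name: str
    render_3d: str           # path to the 3D render version
    portrait_realistic: str  # path to the realistic portrait version

COST_PER_CHARACTER = 0.070   # USD, one-time; entries are reused across scenes

library = {
    7: CharacterEntry(7, "Kenji", "chars/007_3d.png", "chars/007_real.png"),
}
scene_prompt = f"character #{library[7].number} behind a 7-Eleven counter, night"
```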
Architecture
11/01/2025

LPA Data Model Architecture

From JSON chaos to type-safe clarity: A four-level hierarchy for managing complex AI image generation workflows with casting styles, scene variations, and generation metadata.

4-level hierarchy: Project → Style → Variation → Layers
Pydantic validation with JSON-first philosophy
60% less duplication via base + overrides
Powers all LPA workflows with type-safe structure
📦 Project
├── 🎨 CastingStyle
│ ├── Base LPA Layers
│ └── Scene Variations
└── Generated Scenes
🔵 Foundation - Powers ALL LPA workflows (read this first)
Both feed into generate_lpa_story pipeline
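
A minimal Pydantic sketch of the four levels, assuming my own field names; it shows how base layers plus per-scene overrides cut duplication:

```python
from pydantic import BaseModel

# Minimal sketch of the 4-level hierarchy (Project -> CastingStyle ->
# SceneVariation -> layers). Field names are illustrative assumptions.
class SceneVariation(BaseModel):
    scene_number: int
    layer_overrides: dict[str, str] = {}  # only what differs from the base

class CastingStyle(BaseModel):
    name: str
    base_layers: dict[str, str]           # shared base cuts duplication
    variations: list[SceneVariation] = []

class Project(BaseModel):
    title: str
    styles: list[CastingStyle] = []

def resolve_layers(style: CastingStyle, variation: SceneVariation) -> dict[str, str]:
    # Effective layers for one scene = base layers + per-scene overrides.
    return {**style.base_layers, **variation.layer_overrides}
```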

New Opportunity

Now exploring: /character-ip-co-creation - Partnership options include 10-20% referral fee, qualified introductions, or 50% equity partnership