StableGen: Inside the Blender Add-on That Turns Words into 360° Textures

In one sentence: StableGen wires a ComfyUI server to Blender so you can texture entire scenes from natural-language prompts and bake the results to ordinary UV texture maps without ever leaving the viewport.


What This Article Answers

  1. What exactly is StableGen and which daily texturing pains does it remove?
  2. How do you go from a blank Blender file to a baked, export-ready texture in less than 15 minutes?
  3. How does the add-on maintain multi-view consistency, geometry fidelity, and style control at the same time?
  4. Where will it probably break, and what is the fastest recovery sequence?

1. The Pain Points StableGen Eliminates

Summary: Hand-painted PBR, seam-fixing, endless Photoshop round-trips and object-by-object texturing are the four chores the tool attacks first.

| Manual Misery | StableGen Counter-move | Real-world Pay-off (from supplied showcases) |
|---|---|---|
| UV-splitting + seam hunting | Automatic camera projection & weighted blend | 87-mesh subway entrance received cohesive stone cladding in one pass |
| Style drift across objects | Single prompt (or reference image) drives every visible mesh | Pontiac GTO body, rims and seats all inherited the same matte-black vibe |
| Perspective slip (side view does not match front) | Sequential mode + visibility mask re-uses previous views | Anime head’s cyan braid stayed continuous from ear to neck |
| Client “make it bluer” at 11 pm | Refine/Img2Img re-styles the existing texture in 2–3 min | Helmet plume colour changed without re-painting metal parts |

Author’s reflection
I used to budget one full day per hero asset for hand-painted wear. The first time I pressed “Generate” on a 12-camera sedan and got a bake-ready 4 K map in six minutes, my internal estimator died on the spot.


2. Ten-Minute Architecture Overview

Summary: Blender talks JSON to ComfyUI; heavy diffusion runs on the server, Blender only handles projection, blending and real-time preview.

Blender (StableGen panel)  <--HTTP/JSON-->  ComfyUI server  <--SDXL/FLUX-->  GPU
  • You keep working in the viewport; ComfyUI can live on the same box or a remote workstation
  • Generated images stream back as ordinary Blender Image data-blocks and are hot-swapped into material nodes
  • All parameters (prompts, ControlNet weights, IPAdapter image) are stored in the .blend file, so re-opening the scene later will replay the exact workflow
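
To make the hand-off concrete, here is a minimal sketch of the Blender side of that loop, assuming ComfyUI has already written a finished view render to disk. The function name and the file path are illustrative, not StableGen's actual code:

```python
import bpy

def hot_swap_view(obj, image_path):
    """Load a finished view render from disk and swap it into the object's
    materials as an ordinary Image data-block (illustrative sketch only)."""
    img = bpy.data.images.load(image_path, check_existing=True)
    for slot in obj.material_slots:
        if not slot.material or not slot.material.use_nodes:
            continue
        for node in slot.material.node_tree.nodes:
            if node.type == 'TEX_IMAGE':
                node.image = img  # the Rendered viewport picks this up immediately
    return img

# Hypothetical path; in practice the add-on streams results into your output dir.
hot_swap_view(bpy.context.active_object, "/tmp/stablegen/view_000.png")
```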

3. Install & First Launch Checklist

Summary: Three layers—ComfyUI, dependencies, Blender add-on—must be satisfied in order; the supplied installer.py script automates the middle layer.

| Step | Purpose | Command / Action | Typical Trip-wire |
|---|---|---|---|
| 1. Grab ComfyUI | Provide inference runtime | git clone https://github.com/comfyanonymous/ComfyUI.git | Forgot to launch main.py once to verify CUDA |
| 2. Run installer | Download custom nodes + core models | python installer.py <ComfyUI_path>, pick “Recommended” | Git not in PATH on Windows → script dies silently |
| 3. Install add-on | Bring UI into Blender | Edit > Preferences > Add-ons > Install…, choose StableGen.zip | ZIP extracted one folder too deep → panel missing |
| 4. Fill paths | Tell Blender where to save and whom to call | Output dir, server address (default 127.0.0.1:8188) | Forgetting to enable System > Network > Online Access blocks localhost calls |
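
Before running the installer, a ten-second pre-flight check saves most of the trip-wires above. The snippet below is a convenience sketch, not part of StableGen; run it with the same Python environment that will drive ComfyUI:

```python
# Convenience pre-flight check (not part of StableGen).
import shutil
import urllib.request

print("git on PATH:", bool(shutil.which("git")))          # installer needs git

try:
    import torch
    print("CUDA available:", torch.cuda.is_available())   # ComfyUI inference backend
except ImportError:
    print("torch not importable from this interpreter")

try:
    urllib.request.urlopen("http://127.0.0.1:8188/system_stats", timeout=3)
    print("ComfyUI reachable on 127.0.0.1:8188")
except OSError:
    print("ComfyUI not reachable - has main.py been launched?")
```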

Author’s reflection
I keep a portable ComfyUI on an external NVMe; moving between desktop and laptop only needs one path change in the add-on prefs.


4. UI Walk-Through: Ten Buttons You Really Use

Summary: Most day-to-day work happens inside a single N-panel tab; master these controls and you can ignore the rest for weeks.

  1. Generate / Cancel – big orange button; becomes red “Cancel” while ComfyUI is busy
  2. Bake Textures – converts stacked projection materials into ordinary UV textures for export
  3. Add Cameras – ring or sphere array; interactive radius adjustment before confirming (a placement sketch follows this list)
  4. Collect Camera Prompts – lets you type per-view hints (“close-up rust on door handle”)
  5. Preset Selector – Default, Characters, Quick Draft plus your own saved JSON configs
  6. Target Objects – All Visible or Selected only; prevents background stage from receiving texture
  7. Generation Mode – Separate / Sequential / Grid / Refine / UV-Inpaint (see next section)
  8. ControlNet Stack – Depth, Canny, Normal; each gets its own weight, start/end step
  9. IPAdapter – drop an image to guide style; strength 0–1 and step range available
  10. Export GIF/MP4 – spins the active object 360°; handy for ArtStation posts
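
For reference, the Add Cameras ring can be reproduced by hand with a few lines of bpy. This is a simplified stand-in for the button, not the add-on's own operator; eight cameras keep the adjacent angle at 45°, matching the FAQ guidance further down:

```python
import math
import bpy

def add_camera_ring(target, count=8, radius=4.0, height=1.5):
    """Manual stand-in for the Add Cameras button (not the add-on's operator):
    place `count` cameras on a ring around `target`, each aimed at it."""
    for i in range(count):
        angle = 2 * math.pi * i / count
        cam = bpy.data.objects.new(f"ring_cam_{i}", bpy.data.cameras.new(f"ring_cam_{i}"))
        cam.location = (
            target.location.x + radius * math.cos(angle),
            target.location.y + radius * math.sin(angle),
            target.location.z + height,
        )
        bpy.context.collection.objects.link(cam)
        # Aim at the target; cameras look down their local -Z axis.
        track = cam.constraints.new(type='TRACK_TO')
        track.target = target
        track.track_axis = 'TRACK_NEGATIVE_Z'
        track.up_axis = 'UP_Y'

add_camera_ring(bpy.context.active_object, count=8)
```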

5. Generation Modes Explained with Scenarios

Summary: Pick the mode that matches your tolerance for speed vs. consistency; each creates a different ComfyUI workflow under the hood.

| Mode | Best Use-case | Speed | Consistency Trick | Showcase Example |
|---|---|---|---|---|
| Separate | Quick look-dev slap | ★★★★☆ | None (every view independent) | Default cube “marble” test |
| Sequential | Hero asset, final quality | ★★☆☆☆ | In-paints new views using previous ones + visibility mask | Anime head’s colour-consistent braid |
| Grid | Entire scene preview | ★★★★★ | Single pass; optional 2-step refinement | Subway station stone look |
| Refine | Change style without losing detail | ★★★☆☆ | Img2Img on existing texture | Helmet plume turned red on request |
| UV-Inpaint | Fill unseen islands after unwrap | ★★☆☆☆ | Context from surrounding texels | Chair bottom finished without re-render |

Author’s reflection
Sequential feels like watching a painter walk around the model—each new stroke respects what was already there. The first time I saw the mask update live in the viewport I finally “got” why multi-view consistency is a solved problem here.


6. Geometry Control: Three ControlNets at Once

Summary: You can stack Depth, Canny and Normal simultaneously; the add-on merges their influence maps before sending to ComfyUI.

  • Depth – good for large-scale displacement, rock cliffs, car tyres
  • Canny – keeps hard edges crisp, perfect for panel gaps, engravings
  • Normal – prevents fine surface detail from swimming away

Typical recipe for a hard-surface vehicle:

Depth 0.6 (start 0, end 0.4)  
Canny 0.4 (start 0, end 0.6)  
Normal 0.3 (full range)  

If ornamental detail starts to feel “stuck on” instead of “built in”, lower Depth and raise Normal.
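
Expressed as data, that recipe looks like the snippet below. The field names simply mirror the panel controls, not StableGen's internal schema or the ComfyUI node inputs; the helper shows the "lower Depth, raise Normal" adjustment:

```python
# Illustrative data only; field names mirror the panel, not the add-on's internals.
controlnet_stack = [
    {"type": "depth",  "weight": 0.6, "start": 0.0, "end": 0.4},
    {"type": "canny",  "weight": 0.4, "start": 0.0, "end": 0.6},
    {"type": "normal", "weight": 0.3, "start": 0.0, "end": 1.0},
]

def rebalance(stack, depth_delta=-0.2, normal_delta=0.2):
    """Shift influence from Depth to Normal when detail looks 'stuck on'."""
    for unit in stack:
        if unit["type"] == "depth":
            unit["weight"] = max(0.0, unit["weight"] + depth_delta)
        elif unit["type"] == "normal":
            unit["weight"] = min(1.0, unit["weight"] + normal_delta)
    return stack
```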


7. Style Control: IPAdapter Without the Voodoo

Summary: Drop any square image into the IPAdapter slot; CLIP vision encoders convert it into style tokens that survive even when the prompt is only two words.

  • Strength 0.3–0.5 = subtle mood
  • 0.6–0.8 = obvious brush or colour transfer
  • 1.0+ = almost a texture clone; micro-details may overpower geometry

Pro tip: leave End step at 0.5 so the second half of denoising can re-focus on your text prompt—keeps logos readable.
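
Because non-square references trip up the CLIP vision encoder (see the troubleshooting table later), it is worth centre-cropping the reference first. A small Pillow helper, with placeholder file names:

```python
# Centre-crop a reference to 1:1 before dropping it into the IPAdapter slot.
# Requires Pillow (pip install pillow); file names are placeholders.
from PIL import Image

def center_crop_square(src_path, dst_path, size=1024):
    img = Image.open(src_path)
    side = min(img.size)
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img.crop((left, top, left + side, top + side)).resize((size, size)).save(dst_path)

center_crop_square("reference.jpg", "reference_square.png")
```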


8. Baking & Export: From Procedural Projection to Game-Ready UV

Summary: The Bake operator unwraps a new UV map, renders all accumulated projections into one image and rebuilds a vanilla Principled BSDF—no special nodes left.

Workflow (also printed in the panel tool-tip):

  1. Select meshes → Bake Textures → choose resolution (1–4 K) and margin pixels
  2. Wait for the progress bar; a new image named <object>_baked appears in the .blend file
  3. File → Export → FBX; enable “Embed Textures” or copy the *_baked.png manually
  4. Unity/UE import: assign the baked image to Base Color; you’re done
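
If you ever need to reproduce the bake by hand, the rough bpy equivalent looks like this. It is a sketch of a standard Cycles bake, not the add-on's Bake operator, and assumes the mesh already has a UV map and node-based materials:

```python
import bpy

def bake_base_color(obj, size=2048, margin=16):
    """Rough manual equivalent of the Bake Textures button (a sketch only)."""
    img = bpy.data.images.new(f"{obj.name}_baked", width=size, height=size)

    # Cycles bakes into the active Image Texture node of each material.
    for slot in obj.material_slots:
        nodes = slot.material.node_tree.nodes
        tex = nodes.new("ShaderNodeTexImage")
        tex.image = img
        nodes.active = tex

    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'            # baking requires Cycles
    scene.render.bake.margin = margin         # pixels of bleed past UV islands
    bpy.context.view_layer.objects.active = obj
    obj.select_set(True)

    # Colour only, so scene lighting is not burned into the map.
    bpy.ops.object.bake(type='DIFFUSE', pass_filter={'COLOR'})

    img.filepath_raw = f"//{obj.name}_baked.png"
    img.file_format = 'PNG'
    img.save()
```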

Author’s reflection
I still do a quick “shade-smooth + autosmooth 30°” before baking; it prevents faceted gradients on curved fenders without adding geometry.


9. Output Folder Structure (Keep Your House in Order)

<your_output_dir>/
  <scene_name>/
    2025-10-27T14-33-02/
      generated/          # raw view renders
      controlnet/         # depth, canny, normal passes
      baked/              # stand-alone Bake button result
      generated_baked/    # if “Bake while Generate” was on
      inpaint/            # masks & context for Sequential
      uv_inpaint/         # UV space masks
      misc/               # temp files
      prompt.json         # exact ComfyUI API payload for replay

Delete older time-stamps freely; the JSON file lets you re-create any version in ComfyUI standalone.
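
Replaying a run is a single HTTP call. The sketch below assumes prompt.json holds the request body exactly as the add-on sent it and that ComfyUI listens on the default port; adjust if your copy only stores the workflow graph:

```python
# Replay sketch: re-submit a saved prompt.json to a running ComfyUI server.
import json
import urllib.request

with open("prompt.json", "r", encoding="utf-8") as f:
    payload = json.load(f)

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())  # a prompt_id comes back on success
```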


10. Troubleshooting Top 8 (Console Messages Translated)

Summary: Most errors appear either in Blender’s System Console or the ComfyUI terminal—look there first.

| Symptom | Root Cause | Fast Fix |
|---|---|---|
| Generate button greyed out | Output path empty | Add-on prefs → pick any writable folder |
| “Connection refused” | ComfyUI not running | python main.py in <ComfyUI> |
| “Model not found: sdxl_base_1.0” | Forgot installer step | Re-run installer.py or place safetensors in ComfyUI/models/checkpoints |
| GPU OOM at 4K | VRAM < 8 GB | Enable Auto-Rescale or drop to 2K preview |
| Texture invisible | Viewport still Solid | Switch to Rendered shading, Cycles |
| Seams after bake | UV margin too small | Re-bake with 8–16 px margin |
| IPAdapter ignored | Image not square | Crop to 1:1, otherwise CLIP vision chokes |
| All meshes textured | Target = All Visible | Switch to Selected before pressing Generate |

11. Author’s Short Reflection on Presets

I mocked the built-in presets until a junior teammate produced a better car paint in “Quick Draft” than I managed with 20 manual parameters. The takeaway: start from presets, add nuance only when the image tells you to.


Action Checklist / Implementation Steps

  • [ ] Install ComfyUI and launch once to verify CUDA
  • [ ] Run python installer.py <path> choosing “Recommended”
  • [ ] Install StableGen.zip in Blender 4.2+ and point add-on to output folder
  • [ ] Test default cube → Add Cameras 6 → Generate → visible in Rendered view
  • [ ] Move to real asset: Apply modifiers, set Target = Selected, choose preset
  • [ ] Grid mode for look-dev, Sequential for finals
  • [ ] Bake at export resolution, save PNG, embed in FBX
  • [ ] Store the time-stamped JSON if client may want re-runs

One-page Overview

StableGen couples Blender to a ComfyUI back-end so artists can texture entire scenes with text prompts while keeping multi-view consistency. Cameras are auto-placed, projections are blended with visibility masks, and results can be baked to standard UV textures for game engines. Depth, Canny and Normal ControlNets lock texture to geometry; IPAdapter injects reference-image style. Five generation modes trade speed against consistency. Installation needs ComfyUI, an installer script for dependencies, and a small Blender add-on. Common pitfalls are missing paths, OOM, and un-applied modifiers—all quick to fix. The whole flow from blank mesh to export-ready 4 K texture can run in under 15 minutes on 8 GB VRAM.


FAQ

  1. Can I use my existing SD 1.5 checkpoints?
    No—StableGen communicates via SDXL/FLUX nodes; use SDXL safetensors files.

  2. Does the add-on work on macOS?
    Blender side yes, but ComfyUI needs CUDA or MPS; many ControlNet extensions still require NVIDIA.

  3. Is internet access mandatory?
    Only for the initial dependency download; generation runs fully local.

  4. How many cameras are “enough”?
    For fist-sized props 8–10, for cars 12–16, for buildings 20–24; keep the angle between adjacent cameras under 45°.

  5. Can I feed my own UV layout instead of auto-unwrap?
    Bake respects existing UVs; the projection phase always uses an automatic spherical unwrap internally.

  6. Why is my baked PNG black?
    You likely baked with no lights; add an HDRI or temporarily tick the “Emit” pass in the shader.

  7. Is the generated texture tileable?
    Not by default—use the UV-Inpaint mode on seams or bake and run an external tile maker.

  8. Can the remote ComfyUI server live in the cloud?
    Yes, open the port and set the public IP in add-on prefs; watch bandwidth if you send 4 K images.