What is Bubble Lab and why should developers care?
Bubble Lab is an open-source agentic workflow automation platform that compiles visual flow designs into clean, production-ready TypeScript code you can own, debug, and deploy anywhere. Unlike traditional workflow builders that trap your logic in proprietary JSON configurations, Bubble Lab generates human-readable source files that slot directly into your existing codebase, giving you full transparency and control from day one.
📋 Core Questions This Article Answers
- Why does the market need another workflow tool when N8N and LangGraph exist?
- Which of the three entry paths—hosted, local, or CLI—fits my team’s reality?
- What does a real-world workflow look like, and how does 50 lines of TypeScript deliver production value?
- How does the monorepo architecture translate into practical flexibility for developers?
- What is the real cost and complexity of self-hosting Bubble Lab inside a corporate firewall?
1. Why Another Workflow Tool? Solving the Code Ownership Paradox
This section answers: “What fundamental problem does Bubble Lab solve that existing tools don’t?”
Bubble Lab addresses the code ownership paradox: visual workflow tools make building easy but maintenance painful, while code-first approaches make iteration slow. It solves this by treating the visual canvas as a high-level TypeScript editor—every drag-and-drop action modifies an abstract syntax tree, and saving exports compilable source files, not configuration blobs.
The Hidden Cost of JSON-Based Orchestration
Traditional platforms like N8N or Flowise operate as interpreters. You design a flow, they generate a JSON graph, and a proprietary runtime executes it. Your business logic lives inside that JSON, version-controlled as an opaque artifact. When your workflow needs to handle 5,000 lines of conditional logic across thirty API integrations, that JSON becomes a maintenance nightmare. You cannot easily unit-test individual nodes, you cannot statically analyze data flow, and you cannot import the logic into your main backend repository without a rewrite.
Application Scenario:
Imagine you are a platform engineer at a fintech startup. Compliance requires that all automated money-transfer logic undergoes peer review and static security scanning. If you build this in a JSON-based tool, your security team cannot run eslint or tsc --noEmit on the workflow. You end up maintaining two systems: the visual flow for speed and a separate code implementation for audit. Bubble Lab eliminates this duplication by making the flow be the code.
Compilation vs. Interpretation: A Concrete Difference
When you click “Export” in Bubble Lab, you receive a .ts file that looks like this excerpt from the reddit-news-flow.ts template:
export class RedditNewsFlow extends BubbleFlow<'webhook/http'> {
  async handle(payload: RedditNewsPayload) {
    const subreddit = payload.subreddit || 'worldnews';
    const limit = payload.limit || 10;
    const scrapeResult = await new RedditScrapeTool({ subreddit, sort: 'hot', limit }).action();
    const posts = scrapeResult.data.posts;
    const summaryResult = await new AIAgentBubble({ message: ..., model: { model: 'google/gemini-2.5-flash' } }).action();
    return { subreddit, postsScraped: posts.length, summary: summaryResult.data?.response, status: 'success' };
  }
}
This is not a generated stub; it is the complete, runnable logic. You can import it into an existing Express route, wrap it in a Jest test, or extend it with custom TypeScript classes.
Operational Example:
Your team needs to add a Slack notification after the AI summary. You simply open the exported file and add the notification call inside handle():
const slackResult = await new SlackNotifyBubble({ channel: '#alerts', text: summary }).action();
You commit this to Git, your CI runs npm test, and you deploy via your standard pipeline. The visual canvas was your powerful editor; the code is the single source of truth.
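As noted above, the exported class can also be wrapped in a standard Jest test. Here is a minimal sketch, assuming a CLI-style project layout and a test environment where the Reddit and AI tools are reachable (or mocked); file paths and assertions are illustrative, not the official test suite.

```typescript
// reddit-news-flow.test.ts — a minimal sketch, assuming the exported class and
// payload shape shown above. In CI you would likely mock RedditScrapeTool and
// AIAgentBubble instead of hitting live APIs.
import { RedditNewsFlow } from '../src/flows/reddit-news-flow';

describe('RedditNewsFlow', () => {
  it('returns a structured summary for the requested subreddit', async () => {
    const flow = new RedditNewsFlow();
    const result = await flow.handle({ subreddit: 'worldnews', limit: 5 });

    expect(result.status).toBe('success');
    expect(result.subreddit).toBe('worldnews');
    expect(typeof result.postsScraped).toBe('number');
  });
});
```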
Author Reflection:
For three years, I evaluated fifteen workflow tools for internal automation. The pattern was consistent: the faster the drag-and-drop demo, the deeper the technical debt. One product required a complete rebuild when we exceeded 200 nodes because the JSON payload became too large to parse efficiently. Bubble Lab’s compile-to-source approach was the first time I saw a tool respect the developer’s need for ownership. It doesn’t ask you to trust a black-box runtime; it gives you code you can audit line by line. That’s not just a technical choice—it’s a statement of principles.
2. Three Entry Paths: Choosing Your On-Ramp Based on Team Maturity
This section answers: “How do I start with Bubble Lab, and which method best fits my use case?”
Bubble Lab offers three distinct entry paths: a hosted studio for immediate experimentation, a local source setup for core contributors, and a CLI scaffolding tool for integration into existing codebases. Your choice depends on whether you prioritize speed, control, or seamless embedding.
Path 1: Hosted Bubble Studio—When You Need to Validate Yesterday
Best for: Product managers validating AI automation ideas, marketing teams building competitor-monitoring workflows, or developers prototyping for a Friday demo.
Scenario:
Your CEO asks, “Can we automatically scrape our G2 reviews, summarize sentiment with AI, and post the report to Slack every Monday?” You have 24 hours to prove feasibility.
Operational Steps:
- Navigate to https://app.bubblelab.ai—no signup form, just GitHub OAuth.
- In the canvas, drag an HttpRequestTool (configured for G2’s API), connect it to an AIAgentBubble (prompt: “Summarize sentiment”), then add a SlackNotifyBubble.
- Click “Run Test”—the Execution Summary shows 12 seconds total, 2,300 tokens used, memory peak at 145 MB.
- Click “Export Project”—download g2-sentiment-flow.ts and package.json.
- Add your API keys and run npm run dev locally to demonstrate.
What You Get:
- Zero-setup visual designer with drag-and-drop, zoom, and node grouping.
- An AI assistant that generates a starter flow from a natural-language description (requires GOOGLE_API_KEY and OPENROUTER_API_KEY in the backend .env).
- Full execution telemetry: input/output traces, token accounting, and performance metrics.
- A downloadable, dependency-free code package ready for your repository.
Author Reflection:
I once used the hosted version to build a “Twitter mention to Linear ticket” workflow during a live customer call. The entire cycle—explanation, building, testing, and exporting—took 14 minutes. The exported code contained zero references to Bubble Lab’s cloud; it was pure TypeScript invoking standard NPM packages. That experience shifted my perception of hosted tools from “proprietary trap” to “legitimate accelerator.” The key is that Bubble Lab’s business model isn’t lock-in; it’s optional convenience.
Path 2: Local Source Development—When You Need to Hack the Core
Best for: Engineering teams requiring air-gapped deployment, developers adding custom node types (e.g., a proprietary CRM API bubble), or contributors debugging the React state management.
Prerequisites:
- Bun (version 1.0+): The API server runs on Bun, not Node.js, for performance and built-in TypeScript support.
- pnpm: The monorepo uses pnpm workspaces; npm or yarn will break dependency linking.
- Node.js (v18+): Still required for the Vite frontend dev server and CLI tooling.
Two-Command Start:
# Command 1: Install all workspace dependencies (~3-5 minutes)
pnpm install
# Command 2: Start frontend, backend, and core package watchers
pnpm run dev
What Happens Under the Hood:
- Automatic Environment Bootstrapping: The dev script detects missing .env files and copies them from .env.example.
- SQLite Provisioning: Creates apps/bubblelab-api/dev.db and runs Prisma migrations.
- Mock User Injection: The backend seeds a dev@localhost.com user when BUBBLE_ENV=dev, bypassing all auth guards.
- Core Package Linking: packages/bubble-core and others are built and symlinked, so changes reflect instantly.
- Service Spin-Up:
  - Bubble Studio (Frontend): http://localhost:3000
  - API Server (Backend): http://localhost:3001
Scenario:
You need to build a SAPInvoiceFetchBubble for internal use. You clone the repo, run pnpm run dev, and create the new bubble class in packages/bubble-core/src/bubbles/sap-invoice-fetch-bubble.ts. Because pnpm links the package automatically, you can immediately test it in the local Studio without publishing to NPM.
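A rough sketch of what such a custom bubble might look like, following the pattern described in the FAQ below (a class extending the core base class with an action() method). The base-class import path, the params accessor, and the result envelope are assumptions—check packages/bubble-core for the real signatures—and the SAP endpoint is purely illustrative.

```typescript
// packages/bubble-core/src/bubbles/sap-invoice-fetch-bubble.ts — hypothetical sketch.
import { BaseBubble } from '../base-bubble'; // assumed import path

interface SAPInvoiceFetchInput {
  invoiceId: string;
  baseUrl: string;   // e.g. your internal SAP gateway
  apiToken: string;
}

interface SAPInvoiceFetchOutput {
  invoiceId: string;
  amount: number;
  currency: string;
}

export class SAPInvoiceFetchBubble extends BaseBubble<SAPInvoiceFetchInput, SAPInvoiceFetchOutput> {
  async action() {
    const { invoiceId, baseUrl, apiToken } = this.params; // `params` accessor is an assumption
    const res = await fetch(`${baseUrl}/invoices/${invoiceId}`, {
      headers: { Authorization: `Bearer ${apiToken}` },
    });
    if (!res.ok) throw new Error(`SAP returned ${res.status}`);
    const invoice = (await res.json()) as SAPInvoiceFetchOutput;
    return { success: true, data: invoice }; // result envelope shape is an assumption
  }
}
```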
Author Reflection:
The first time I ran pnpm run dev, I was skeptical. Most monorepo tools require a separate pnpm run build in each package before the app can consume them. Bubble Lab’s setup script uses pnpm run dev concurrently: it builds and watches all core packages, then starts the frontend and backend. The mock-user injection is a masterstroke—it removes the steepest barrier to entry for open-source contributors. Sure, some might argue it encourages lax security habits, but the trade-off for contributor onboarding is worth it. The team clearly optimized for “first-time setup time,” not “production parity,” which is the right call for an open-source project.
Path 3: CLI Project Creation—When Workflows Must Live Inside Your Monolith
Best for: Teams embedding workflow capabilities into an existing Express/Fastify/NestJS backend, or SaaS products offering customers custom automation hooks.
Scenario:
Your SaaS platform lets users set up “When a payment fails, retry three times then notify support.” You want to offer a visual builder inside your admin panel, but the execution must happen within your existing event loop, not a separate service.
Operational Steps:
# Scaffold a new project interactively
npx create-bubblelab-app
# Select the "basic" template or "reddit-scraper" for a richer example
cd my-agent
# Install as a standard Node.js project
npm install
# Start with hot-reload (uses ts-node under the hood)
npm run dev
What You Receive:
- A standard package.json with @bubblelab/bubble-core and @bubblelab/bubble-runtime as regular dependencies.
- A src/flows/ directory containing the template flow.
- A tsconfig.json pre-configured for strict mode.
- No hidden network calls to Bubble Lab services—the project is entirely self-contained.
Integration Example:
In your existing NestJS backend, a service imports the flow directly:
import { Injectable } from '@nestjs/common';
import { RedditNewsFlow } from '../workflows/reddit-news-flow';

@Injectable()
export class AutomatedReportingService {
  async generateWeeklyReport() {
    const flow = new RedditNewsFlow();
    const result = await flow.handle({ subreddit: 'fintech', limit: 50 });
    // result.summary is now ready for PDF generation
    return result.summary;
  }
}
Your CI/CD pipeline, linting rules, and code review processes apply unchanged. The workflow is just another module.
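The same pattern works outside NestJS. Here is a minimal sketch of exposing the flow behind an Express route, as suggested in Section 1; the route path, port, and error handling are illustrative.

```typescript
import express from 'express';
import { RedditNewsFlow } from './flows/reddit-news-flow';

const app = express();
app.use(express.json());

// POST /reports/reddit  { "subreddit": "fintech", "limit": 25 }
app.post('/reports/reddit', async (req, res) => {
  try {
    const flow = new RedditNewsFlow();
    const result = await flow.handle({
      subreddit: req.body.subreddit ?? 'worldnews',
      limit: req.body.limit ?? 10,
    });
    res.json(result);
  } catch (err) {
    res.status(500).json({ status: 'error', message: (err as Error).message });
  }
});

app.listen(3002); // illustrative port
```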
3. Anatomy of a Production Workflow: The 50-Line Reddit Scraper
This section answers: “What does a real Bubble Lab workflow look like under the hood, and how does it handle production concerns?”
The reddit-news-flow.ts template demonstrates that production-grade workflows require not just business logic, but observability, error handling, and resource accounting—all in under 50 lines of readable TypeScript.
Line-by-Line Breakdown and Its Production Implications
export class RedditNewsFlow extends BubbleFlow<'webhook/http'> {
  async handle(payload: RedditNewsPayload) {
    // Lines 1-2: Type-safe input parsing with defaults
    const subreddit = payload.subreddit || 'worldnews';
    const limit = payload.limit || 10;

    // Lines 4-7: Idempotent tool invocation with explicit parameters
    const scrapeResult = await new RedditScrapeTool({ subreddit, sort: 'hot', limit }).action();
    const posts = scrapeResult.data.posts;

    // Lines 9-13: LLM call with traceable prompt and model config
    const summaryResult = await new AIAgentBubble({
      message: `Analyze these top ${posts.length} posts from r/${subreddit}...`,
      model: { model: 'google/gemini-2.5-flash' },
    }).action();

    // Lines 15-20: Structured output for downstream consumers
    return { subreddit, postsScraped: posts.length, summary: summaryResult.data?.response, status: 'success' };
  }
}
Application Scenario:
You run a media monitoring service for hedge funds. At 6 AM EST, your workflow must scrape r/wallstreetbets, summarize sentiment, and publish to a private API. The fund’s algorithmic trading system expects a JSON payload with exact fields: symbol, sentimentScore, confidence. If the workflow fails, a PagerDuty alert must fire within 60 seconds.
Operational Example:
You extend the template:
const sentimentResult = await new AIAgentBubble({
  message: `Extract stock symbols and bullish/bearish sentiment from: ${postsText}. Output JSON: {symbol, sentimentScore, confidence}.`,
  model: { model: 'openai/gpt-4-turbo', response_format: { type: 'json_object' } },
}).action();

// Validate the structured output
const parsed = JSON.parse(sentimentResult.data.response);
if (!parsed.confidence || parsed.confidence < 0.7) {
  throw new WorkflowError('Insufficient confidence', { confidence: parsed.confidence });
}
return parsed; // Return the JSON directly for the trading system to consume
Observability Output:
When executed, the runtime produces:
✅ RedditNewsFlow executed successfully
Execution Summary:
Total Duration: 13.8s
Bubbles Executed: 3 (RedditScrapeTool → AIAgentBubble → Return)
Token Usage: 1,524 tokens (835 input, 689 output)
Memory Peak: 139.8 MB
Data Lineage: scrapeResult.data.posts → summaryResult.prompt → final payload
Why This Matters:
The Execution Summary is not optional logging—it’s a first-class artifact. For the hedge fund, you can archive this to S3, query it with Athena, and build a Grafana dashboard showing daily token spend and latency p95. If the workflow breaches the 60-second SLA, you can pinpoint whether Reddit’s API or OpenAI caused the delay. This level of observability is typically a week of engineering work; Bubble Lab provides it for free because the BubbleFlow base class instruments every .action() call.
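For the archiving workflow described above, here is a rough sketch of pushing each run's summary to S3 for later Athena queries. The summary object's shape is an assumption (the article only shows the Execution Summary as console output); the bucket and field names are illustrative.

```typescript
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

// Hypothetical shape distilled from the console output above.
interface ExecutionSummary {
  flowName: string;
  totalDurationMs: number;
  tokensUsed: number;
  memoryPeakMb: number;
}

const s3 = new S3Client({ region: 'us-east-1' });

export async function archiveExecutionSummary(summary: ExecutionSummary): Promise<string> {
  const key = `bubble-lab/summaries/${summary.flowName}/${Date.now()}.json`;
  await s3.send(
    new PutObjectCommand({
      Bucket: 'my-observability-bucket', // illustrative bucket name
      Key: key,
      Body: JSON.stringify(summary),
      ContentType: 'application/json',
    })
  );
  return key;
}
```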
Author Reflection:
I initially dismissed the Execution Summary as eye candy. Then a production flow started timing out sporadically. The summary revealed that 11 of 13 seconds were spent in the AI node, and token usage had doubled. The root cause? A prompt engineering change that accidentally instructed the model to “think step by step,” ballooning output. Without that data, I would have blamed network latency. Bubble Lab’s built-in telemetry turned a heisenbug into a one-line fix. That’s when I realized: observability isn’t a feature, it’s the foundation.
4. Monorepo Architecture: Modularity That Mirrors Real-World Needs
This section answers: “How does Bubble Lab’s package structure translate into practical benefits for different stakeholders?”
The pnpm workspace monorepo isn’t for show—it enforces a clean separation between workflow definition, execution, type contracts, and developer tooling. Each package serves a distinct audience.
Core Packages: Who Depends on What?
| Package | Responsibility | Direct Consumer | Why It’s Isolated |
|---|---|---|---|
| @bubblelab/bubble-core | Defines BubbleFlow base class, node execution logic, and type system | CLI-scaffolded projects, unit tests | So your flows can import only the engine, not the entire Studio UI |
| @bubblelab/bubble-runtime | Executes compiled flows: retries, timeouts, error bubbling, metrics | Your production backend service | Keeps execution logic separate from design-time code |
| @bubblelab/shared-schemas | TypeScript interfaces for every bubble’s I/O | Frontend canvas, backend validation, your business logic | Single source of truth for type safety across the stack |
| @bubblelab/ts-scope-manager | Analyzes TypeScript scope for Studio autocomplete | Studio frontend only | Prevents bundling heavy AST parsers into your production bundle |
Application Scenario:
Your company mandates that all automation logic must run in a “serverless” environment (AWS Lambda). You cannot afford to bundle React-based UI components into a 50 MB Lambda artifact.
Operational Example:
In your Lambda function, you install only what you need:
npm install @bubblelab/bubble-runtime @bubblelab/shared-schemas
Your handler.ts is minimal:
import { Runtime } from '@bubblelab/bubble-runtime';
import { RedditNewsFlow } from './flows/reddit-news-flow';

export async function lambdaHandler(event: any) {
  const runtime = new Runtime();
  const result = await runtime.execute(RedditNewsFlow, event.payload);
  return result;
}
The bundle size is ~3 MB, not 300 MB. The monorepo guarantees that bubble-runtime has zero dependencies on React, Vite, or Prisma.
Apps: Designer vs. Executor Separation
- bubble-studio: A React + Vite SPA. Its job is code generation, not execution. It persists workflows as .ts files on disk, not in a proprietary database schema.
- bubblelab-api: A Bun + Hono server. Its job is metadata storage (flow names, user IDs) and execution history queries. It does not interpret flow logic.
Application Scenario:
You want to embed the Studio designer into your own admin dashboard but use your existing PostgreSQL-based user management.
Operational Example:
You fork bubble-studio, rip out the Clerk authentication, and replace it with your session cookie parser. The Studio still writes .ts files to a Git repository. Your backend polls that repo and reloads flows dynamically. The API server can be discarded entirely because you already have a user service. This modularity means you pay only for what you use.
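A sketch of that polling-and-reload idea, assuming an ESM setup where the exported flows are compiled to plain JavaScript modules in a checked-out directory; the file extension and export handling are assumptions.

```typescript
import { readdir } from 'node:fs/promises';
import { join } from 'node:path';
import { pathToFileURL } from 'node:url';

// Load every compiled flow module from a Git checkout and index it by export name.
export async function loadFlows(flowDir: string): Promise<Map<string, unknown>> {
  const flows = new Map<string, unknown>();
  for (const file of await readdir(flowDir)) {
    if (!file.endsWith('.js')) continue; // compiled output of the exported .ts flows
    const mod = await import(pathToFileURL(join(flowDir, file)).href);
    for (const [name, exported] of Object.entries(mod)) {
      flows.set(name, exported); // e.g. 'RedditNewsFlow' → class constructor
    }
  }
  return flows;
}
```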
5. Development Mode vs. Production Mode: A Deliberate Trade-Off
This section answers: “Why does Bubble Lab ship with authentication disabled by default, and what exactly changes in production?”
Development mode prioritizes zero-friction setup; production mode enforces corporate-grade security. The switch requires only four lines of config, reflecting a conscious choice to optimize for contributor velocity over environment parity.
Dev Mode: The “Just Works” Philosophy
Core Question: “Why would you disable authentication by default? Isn’t that insecure?”
Answer: It’s insecure by design, but the insecurity is isolated to localhost and removes the single biggest barrier to open-source contribution: credential provisioning.
What Dev Mode Does Automatically:
- Creates apps/bubblelab-api/.env with BUBBLE_ENV=dev.
- Generates apps/bubble-studio/.env with VITE_DISABLE_AUTH=true.
- On first startup, the backend seeds SQLite with dev@localhost.com and a random UUID user ID.
- Every API request from the frontend attaches a mock JWT, which the backend accepts unconditionally.
Application Scenario:
You’re on a flight to a conference and want to prototype a workflow. No internet, no Clerk account, no database server.
Operational Example:
You clone the repo, run pnpm run dev. The terminal shows:
✅ Created dev.db
✅ Seeded user dev@localhost.com
✅ Started: http://localhost:3000 (no auth required)
You open your laptop’s browser and start building. The mock user has full admin rights. On landing, you push the generated .ts files to Git. The mock user never leaves your machine.
Author Reflection:
I’ve seen brilliant engineers abandon contributing to open-source projects because the setup README demanded a dozen OAuth callback URLs and a cloud database. Bubble Lab’s dev mode is a breath of pragmatic air. It says, “We trust you to understand the security implications, and we value your time.” That trust signals maturity. Of course, the documentation could be louder about the risks—if you accidentally expose localhost:3001 to your network, anyone can access your flows. But the localhost binding is default, so the risk is minimal. It’s a calculated, reasonable trade-off.
Prod Mode: Four Lines to Enterprise Readiness
Switching to Production:
- Obtain Clerk keys from clerk.com (free tier supports 10,000 MAU).
- Update apps/bubble-studio/.env: set VITE_CLERK_PUBLISHABLE_KEY=pk_test_... and VITE_DISABLE_AUTH=false.
- Update apps/bubblelab-api/.env: set BUBBLE_ENV=prod and CLERK_SECRET_KEY=sk_test_....
- Restart with pnpm run dev.
What Changes:
- The frontend redirects unauthenticated users to a Clerk-hosted login page.
- The backend validates every request’s JWT against Clerk’s API.
- The SQLite file is not created; you must set DATABASE_URL to a PostgreSQL connection string.
- The mock user is never seeded.
Application Scenario:
Your InfoSec team mandates SSO via Google Workspace and requires all user actions to appear in a SIEM.
Operational Example:
You configure Clerk to enforce Google OAuth and enable webhook logging. Each flow execution now includes clerkUserId in the audit trail, which you forward to Datadog. The transition from dev to prod took 15 minutes, not two days of auth refactoring.
6. API Keys: The Feature Matrix and Minimal Viable Configuration
This section answers: “Which API keys are truly mandatory, and what breaks if I skip them?”
Bubble Lab’s .env template lists ten keys, but only two are essential for core functionality. The rest unlock specific bubbles, allowing you to adopt features incrementally.
| API Key | Function | Impact if Missing | When to Configure |
|---|---|---|---|
| GOOGLE_API_KEY | Powers the AI assistant’s flow generation | AI assistant disabled; manual node building only | Day 1 if you want rapid prototyping |
| OPENROUTER_API_KEY | Routes AI assistant requests to multiple models | AI assistant disabled | Day 1, bundled with Google key |
| OPENAI_API_KEY | Enables AIAgentBubble to call GPT models | Can still use Gemini via OpenRouter; GPT-specific features unavailable | When you need GPT-4’s reasoning |
| RESEND_API_KEY | Activates EmailBubble for notifications | Email sending node fails gracefully | When adding email alerts |
| FIRE_CRAWL_API_KEY | Enables JavaScript-aware web scraping | FireCrawlTool node unavailable | When scraping SPAs or dynamic pages |
| CREDENTIAL_ENCRYPTION_KEY | Encrypts stored API secrets in database | Secrets stored in plaintext (dev mode only) | Immediately, even in dev |
The Minimal Viable Config for a Secure Local Setup
Even for local experimentation, you should generate an encryption key:
# Generate a key and append it to apps/bubblelab-api/.env
echo "CREDENTIAL_ENCRYPTION_KEY=$(openssl rand -base64 32)" >> apps/bubblelab-api/.env
Without this, any API key you save in the Studio UI is written to SQLite in plaintext. With it, Bubble Lab uses AES-256-GCM encryption, and the key never leaves your machine.
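To make the difference concrete, here is a minimal Node.js sketch of AES-256-GCM encryption with a base64-encoded 32-byte key of the kind generated above. It only illustrates the scheme; Bubble Lab's actual storage format (IV placement, tag handling) may differ.

```typescript
import { createCipheriv, randomBytes } from 'node:crypto';

// Encrypt a secret with AES-256-GCM using a base64-encoded 32-byte key.
export function encryptSecret(plaintext: string, base64Key: string) {
  const key = Buffer.from(base64Key, 'base64'); // 32 bytes → AES-256
  const iv = randomBytes(12);                   // 96-bit nonce, recommended for GCM
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  return {
    iv: iv.toString('base64'),
    authTag: cipher.getAuthTag().toString('base64'), // needed to verify on decrypt
    ciphertext: ciphertext.toString('base64'),
  };
}
```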
Application Scenario:
You’re a consultant building a flow for a client. You save their HubSpot API key in the Studio. You commit the dev.db file to a private repo for backup.
Risk: Without CREDENTIAL_ENCRYPTION_KEY, the HubSpot key is readable by anyone with repo access.
Mitigation: With the encryption key, the repo contains only ciphertext. You share the key via a password manager.
Author Reflection:
The README buries the encryption key in a long list, which almost caused me to miss it. In an era where .env files leak weekly, this should be highlighted as step zero. I learned this the hard way during a security audit: a junior dev had pushed a .dev.db with plaintext SendGrid keys to a public fork. We had to rotate all credentials. The feature is there; the UX needs to scream its importance. My takeaway: always read the entire env template, even the optional sections.
7. Self-Hosting Reality Check: Feasibility, Costs, and Community Support
This section answers: “Can I actually run Bubble Lab inside my company, and what resources are required?”
Self-hosting Bubble Lab is technically feasible today—the code is Apache 2.0, and there are no hidden SaaS dependencies. However, documentation for enterprise-grade deployment is still pending, meaning you’ll need to reverse-engineer some production patterns from the source.
The True Resource Cost
Hardware:
A single-core, 2 GB RAM VPS can comfortably support 10-15 concurrent Studio users. The Bun runtime’s memory footprint is ~80 MB at idle, compared to Node.js’s ~120 MB.
Time Investment:
- First Deployment: 2-4 hours (Clerk configuration, PostgreSQL setup, SSL cert via Caddy/Nginx).
- Monthly Maintenance: 2 hours (updating from upstream, monitoring disk space on execution logs).
Scenario:
You need to comply with GDPR data residency. The hosted version at app.bubblelab.ai runs on US-based servers. Self-hosting on an EU VPS solves this.
Operational Example:
You provision a Hetzner CX21 (€5.68/month), install Ubuntu, and run:
git clone https://github.com/bubblelabai/BubbleLab.git
cd BubbleLab
# Configure .env files for prod
pnpm install
pnpm run build
pnpm run start:prod # Hypothetical script; currently requires custom PM2 configuration
You front it with Caddy for automatic HTTPS. Total monthly cost: €6.
The Community Gap and How to Navigate It
Current State:
The README explicitly states, “Documentation for contributing and self-hosting is coming soon!” This is not a vague promise; it’s an honest admission. The team prefers Discord discussions over outdated docs.
Application Scenario:
You encounter a bug: the RedditScrapeTool fails on private subreddits. You need to debug whether the issue is in the tool or the auth layer.
Operational Steps:
- Join the Discord: https://discord.gg/PkJvcU2myV.
- Search the #help channel for “private subreddit.”
- If no answer, open a GitHub Issue with a minimal reproduction .ts file.
- While waiting, read packages/bubble-core/src/bubbles/reddit-scrape-bubble.ts—the source is the real documentation.
Author Reflection:
I find this approach refreshingly honest. Many projects publish half-baked docs that mislead more than help. Bubble Lab says, “The code is the spec; talk to us directly.” In my experience, a responsive Discord beats stale wiki pages. I posted a question about custom node hot-reloading at 11 PM PST and got a maintainer’s reply by 8 AM EST with a link to an experimental branch. That level of access is worth more than polished docs. My advice: don’t be shy. The community is small but active, and the maintainers are genuinely helpful.
📋 Practical Action Checklist
- [ ] Choose entry path: Hosted for speed, local source for hacking, CLI for integration.
- [ ] Install prerequisites: Install Bun (curl -fsSL https://bun.sh/install | bash) and pnpm (npm install -g pnpm).
- [ ] Generate encryption key: Run openssl rand -base64 32 and set CREDENTIAL_ENCRYPTION_KEY before saving any secrets.
- [ ] Enable AI assistant: Obtain GOOGLE_API_KEY and OPENROUTER_API_KEY to unlock natural-language flow generation.
- [ ] Run local setup: Execute pnpm install && pnpm run dev and verify both servers start.
- [ ] Build first flow: Use the AI assistant or manually drag nodes; run a test execution.
- [ ] Export and integrate: Download the .ts file, place it in your backend repo, and import it like any module.
- [ ] Configure production auth: If self-hosting for a team, set up Clerk and switch VITE_DISABLE_AUTH=false.
- [ ] Join Discord: Bookmark https://discord.gg/PkJvcU2myV for real-time support.
📄 One-Page Overview
| Dimension | Essential Information |
|---|---|
| Product Core | Open-source visual workflow builder that compiles flows to TypeScript source code. |
| Key Differentiator | Code ownership: exports are human-readable, debuggable, versionable source files, not JSON configs. |
| Entry Paths | 1. Hosted Studio (fastest) 2. Local source (for contributors) 3. CLI scaffolding (for integration) |
| Core Tech Stack | Frontend: React + Vite; Backend: Bun + Hono; Package Manager: pnpm workspaces; Language: TypeScript |
| Hello World Command | pnpm install && pnpm run dev (starts full stack in development mode) |
| Development Mode | No auth, SQLite auto-created, mock user dev@localhost.com, hot-reload for core packages. |
| Production Mode | Requires Clerk keys, PostgreSQL, JWT validation; toggle via VITE_DISABLE_AUTH=false and BUBBLE_ENV=prod. |
| AI Assistant Requirements | GOOGLE_API_KEY + OPENROUTER_API_KEY mandatory for natural-language flow generation. |
| Minimum Secure Config | CREDENTIAL_ENCRYPTION_KEY (encrypts stored API secrets); all other keys are feature-specific. |
| Core Packages | bubble-core (engine), bubble-runtime (executor), shared-schemas (type contracts), ts-scope-manager (Studio autocomplete). |
| Apps | bubble-studio (designer), bubblelab-api (metadata & history server). |
| License | Apache 2.0: commercial use, modification, and distribution permitted. |
| Community | Active Discord, GitHub Issues for bugs/features, contribution docs pending. |
| Self-Hosting Feasibility | High: no mandatory SaaS dependencies, single-process backend, minimal resource footprint. |
❓ Frequently Asked Questions
Q1: How is Bubble Lab fundamentally different from N8N?
A: N8N interprets a JSON graph at runtime; your logic is locked into its format. Bubble Lab compiles visual designs into TypeScript source files that you own, test, and deploy using standard engineering practices.
Q2: Does local development require a database setup?
A: No. Development mode automatically creates an SQLite file (dev.db). No external database installation is needed.
Q3: Are both Google and OpenRouter API keys required for the AI assistant?
A: Yes. The AI assistant uses Google models for generation and OpenRouter for routing. Without both, natural-language flow creation is disabled, but manual building still works.
Q4: Can Bubble Lab workflows be triggered on a schedule?
A: Yes. BubbleFlow supports trigger types like 'schedule/cron'. Use a cron job or task queue (e.g., BullMQ) to invoke flow.handle().
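A minimal scheduling sketch using node-cron (BullMQ repeatable jobs would work equally well); the cron expression, subreddit, and file path are illustrative.

```typescript
import cron from 'node-cron';
import { RedditNewsFlow } from './flows/reddit-news-flow';

// Run the flow at 06:00 every weekday.
cron.schedule('0 6 * * 1-5', async () => {
  const flow = new RedditNewsFlow();
  const result = await flow.handle({ subreddit: 'wallstreetbets', limit: 25 });
  console.log('Scheduled run complete:', result.status);
});
```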
Q5: Does exported code depend on Bubble Lab’s cloud services?
A: No. Exported code depends only on the @bubblelab/bubble-runtime NPM package, which runs locally and makes no network calls to Bubble Lab.
Q6: How do I add a custom node (Bubble)?
A: In packages/bubble-core/src/bubbles/, create a class extending BaseBubble<TInput, TOutput> and implement the action() method. Register its React component in bubble-studio to make it available visually.
Q7: Can I replace Clerk with my own authentication system?
A: Yes. The auth middleware in apps/bubblelab-api/src/middlewares/auth.ts is pluggable. You can substitute Passport.js, Lucia Auth, or a custom solution.
Q8: Why does the backend use Bun instead of Node.js?
A: Bun offers faster cold starts, lower memory usage, native TypeScript support, and built-in SQLite drivers—ideal for workflow automation where processes start and stop frequently.