LLM Council: Leverage Collective Wisdom from Multiple LLMs


Instead of relying on a single LLM provider—like OpenAI GPT 5.1, Google Gemini 3.0 Pro, Anthropic Claude Sonnet 4.5, or xAI Grok 4—what if you could gather them into your own “LLM Council”? This repo introduces a simple, local web app that works like ChatGPT but with a twist: it uses OpenRouter to send your query to multiple LLMs, lets them review and rank each other’s outputs, and finally lets a “Chairman LLM” craft a polished final response.

How It Works: The 3-Stage Process

When you submit a query, here’s what unfolds behind the scenes:

  1. Stage 1: Initial Perspectives
    Your question is sent to each LLM in the council individually. All responses are collected and displayed in a tabbed view, so you can easily check out each LLM’s take one by one.

  2. Stage 2: Peer Review
    Each LLM then gets to see the responses from the others—with identities anonymized to ensure unbiased judgment. The LLMs are asked to rank these outputs based on accuracy and insight.

  3. Stage 3: Final Synthesis
    A designated Chairman LLM takes all the individual responses and compiles them into a single, comprehensive final answer to present to you.
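The peer-review stage above hinges on anonymization: reviewers should not know which model wrote which answer. A minimal sketch of how responses could be relabeled before review (the function and label scheme here are hypothetical, not taken from the repo):

```python
import random

def anonymize_responses(responses: dict[str, str]) -> tuple[dict[str, str], dict[str, str]]:
    """Relabel model responses as 'Response A', 'Response B', ... in random order.

    Returns the anonymized view plus a mapping from label back to model name,
    so rankings collected in Stage 2 can be de-anonymized afterwards.
    """
    models = list(responses)
    random.shuffle(models)  # randomize order to hide any positional bias
    labels = [f"Response {chr(ord('A') + i)}" for i in range(len(models))]
    anonymized = {label: responses[model] for label, model in zip(labels, models)}
    key = {label: model for label, model in zip(labels, models)}
    return anonymized, key
```

Keeping the label-to-model mapping on the server side lets the app show you which model each ranked response actually came from once the votes are in.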

A Note on the Project’s Vibe

This project was 99% “vibe-coded” as a fun Saturday hack! I built it to explore and compare multiple LLMs while reading books with AI. It’s handy to see side-by-side responses and cross-LLM feedback, but I won’t be providing ongoing support. Think of it as a starting point—feel free to tweak it with your LLM of choice to fit your needs.

Setup Guide

1. Install Dependencies

The project uses uv for Python package management.

Backend:

uv sync

Frontend:

cd frontend
npm install
cd ..

2. Configure Your API Key

Create a .env file in the project root and add your OpenRouter API key:

OPENROUTER_API_KEY=sk-or-v1-...

Get your API key from openrouter.ai. Don’t forget to add credits or set up automatic top-ups to keep things running.
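The backend presumably reads this key from the environment at startup (many FastAPI projects use python-dotenv for this). As a stdlib-only illustration of what that loading step amounts to, not the repo's actual code:

```python
import os
from pathlib import Path

def load_dotenv_minimal(path: str = ".env") -> None:
    """Parse simple KEY=VALUE lines from a .env file into os.environ.

    A toy stand-in for python-dotenv: skips comments and blank lines,
    and does not handle quoting or multi-line values.
    """
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # setdefault so a key already exported in the shell wins over the file
        os.environ.setdefault(key.strip(), value.strip())
```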

3. Customize the Council (Optional)

Edit backend/config.py to tweak which LLMs are in your council and who serves as Chairman:

COUNCIL_MODELS = [
    "openai/gpt-5.1",
    "google/gemini-3-pro-preview",
    "anthropic/claude-sonnet-4.5",
    "x-ai/grok-4",
]

CHAIRMAN_MODEL = "google/gemini-3-pro-preview"
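Stage 1 fans the same query out to every model in COUNCIL_MODELS concurrently. A rough sketch of that pattern with asyncio (the repo's backend uses async httpx against the OpenRouter API; `ask_model` below is a local stand-in for that HTTP call, so this snippet runs without network access):

```python
import asyncio

COUNCIL_MODELS = [
    "openai/gpt-5.1",
    "google/gemini-3-pro-preview",
    "anthropic/claude-sonnet-4.5",
    "x-ai/grok-4",
]

async def ask_model(model: str, query: str) -> str:
    # Stand-in for an async httpx POST to OpenRouter; swap in a real call here.
    await asyncio.sleep(0)
    return f"[{model}] response to: {query}"

async def stage_one(query: str) -> dict[str, str]:
    """Send the query to all council models in parallel and collect answers."""
    answers = await asyncio.gather(*(ask_model(m, query) for m in COUNCIL_MODELS))
    return dict(zip(COUNCIL_MODELS, answers))
```

Running all requests through asyncio.gather means the slowest model, not the sum of all models, sets the latency of Stage 1.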

Running the App

Option 1: Use the Start Script

./start.sh

Option 2: Run Manually

  • Terminal 1 (Backend):

    uv run python -m backend.main
    
  • Terminal 2 (Frontend):

    cd frontend
    npm run dev
    

Then open http://localhost:5173 in your browser to start using the app.

Tech Stack

  • Backend: FastAPI (Python 3.10+), async httpx, OpenRouter API
  • Frontend: React + Vite, react-markdown for rendering
  • Storage: JSON files in data/conversations/
  • Package Management: uv (Python), npm (JavaScript)
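Since storage is plain JSON files under data/conversations/, persistence needs no database. A minimal sketch of what reading and writing that folder could look like (the file layout and field names are assumptions for illustration, not taken from the repo):

```python
import json
from pathlib import Path

def save_conversation(conv_id: str, messages: list[dict],
                      root: str = "data/conversations") -> Path:
    """Write one conversation as a pretty-printed JSON file named <conv_id>.json."""
    folder = Path(root)
    folder.mkdir(parents=True, exist_ok=True)
    path = folder / f"{conv_id}.json"
    path.write_text(json.dumps({"id": conv_id, "messages": messages}, indent=2))
    return path

def load_conversation(conv_id: str, root: str = "data/conversations") -> dict:
    """Read a conversation back; raises FileNotFoundError if it was never saved."""
    return json.loads((Path(root) / f"{conv_id}.json").read_text())
```

One file per conversation keeps things greppable and easy to back up, at the cost of no cross-conversation queries, a reasonable trade-off for a local single-user app.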