Easy LLM CLI: A Command-Line AI Assistant That Speaks Every Model’s Language
> "Can I treat a large language model like Git: just type a command and let it read my code, write programs, or build dashboards?"
>
> Yes. Easy LLM CLI is built for exactly that.
This guide is a complete, beginner-friendly walkthrough of Easy LLM CLI, an open-source command-line tool that connects to Google Gemini, Claude, GPT-4, DeepSeek, Qwen, any endpoint that follows the OpenAI API format, or your own self-hosted instance.
All facts, installation steps, and sample commands come directly from the official repository; nothing is invented or added.
Table of Contents
- What Is Easy LLM CLI?
- What Can It Do? (A Visual Map)
- One-Minute Quick-Start: Install & Run
- Switching Models: From Gemini to Claude, DeepSeek, or Your Own
- Embedding the CLI in Your Own Code
- Copy-Paste Recipe Book for Everyday Tasks
- Frequently Asked Questions
- Troubleshooting Cheat Sheet
1. What Is Easy LLM CLI?
In plain English, Easy LLM CLI is a tiny program you run in your terminal that puts a large language model at your fingertips.
It started as a fork of Google’s official Gemini CLI, but three key upgrades make it universal:
- Any Provider – Works with Gemini, OpenAI, Claude, Grok, DeepSeek, Qwen, or any API that follows the OpenAI format.
- Tool Chain Ready – Plugs into MCP (Model Context Protocol) servers so the model can safely call local scripts, databases, or chart libraries.
- Large-Context Friendly – Handles very long inputs out of the box, perfect for reading entire repositories.
2. What Can It Do? (A Visual Map)
| Everyday Need | Example One-Liner You Type | What Happens Under the Hood |
|---|---|---|
| Explore unknown code | `> describe the main components of this project` | Reads directory → summarizes architecture |
| Migrate legacy code | `> draft a plan to upgrade this repo from Java 8 to Java 21` | Creates step-by-step migration guide |
| Create from sketches | `> build a React app from this PDF spec and hand-drawn mockup` | Reads PDF + image → generates code → runs dev server |
| Automate reports | `> make a slide deck of last week's git history grouped by author` | Reads git log → calls chart MCP → outputs slides |
| Batch file chores | `> convert every .heic here to .png and rename by EXIF date` | Runs local shell commands safely |
3. One-Minute Quick-Start: Install & Run
Prerequisites
- Node.js 20 or newer (tested up to Node 22)
- A terminal: macOS Terminal, Windows PowerShell, or Linux shell
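You can confirm your Node version first with a standard Node.js command (not specific to Easy LLM CLI):
node -v   # should print v20.0.0 or newer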
Option A: Run Without Installing
npx easy-llm-cli
The first launch downloads the latest version, then drops you into an interactive prompt (>).
Option B: Install Globally (Recommended for Daily Use)
npm install -g easy-llm-cli
elc # from now on, just type elc anywhere
Verify Success
$ elc
> hello
Hello! I am Easy LLM CLI. How can I help?
If you see the greeting, the tool is ready.
4. Switching Models: From Gemini to Claude, DeepSeek, or Your Own
By default Easy LLM CLI talks to Google Gemini.
You can point it to any OpenAI-style endpoint with four environment variables.
4.1 Example—Using Claude 3.5 Sonnet via OpenRouter
# Add to ~/.bashrc, ~/.zshrc, or Windows System Variables
export USE_CUSTOM_LLM=true
export CUSTOM_LLM_API_KEY="sk-or-v1-xxxxxxxx"
export CUSTOM_LLM_ENDPOINT="https://openrouter.ai/api/v1"
export CUSTOM_LLM_MODEL_NAME="anthropic/claude-3.5-sonnet"
Apply the changes:
source ~/.bashrc # macOS/Linux
# or reopen PowerShell on Windows
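If you would rather not edit your shell profile, you can also set the variables inline for a single run. This is standard POSIX shell behavior, not a feature of Easy LLM CLI:
USE_CUSTOM_LLM=true \
CUSTOM_LLM_API_KEY="sk-or-v1-xxxxxxxx" \
CUSTOM_LLM_ENDPOINT="https://openrouter.ai/api/v1" \
CUSTOM_LLM_MODEL_NAME="anthropic/claude-3.5-sonnet" \
elc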
4.2 Optional Fine-Tuning
| Variable | Purpose | Default |
|---|---|---|
| `CUSTOM_LLM_TEMPERATURE` | Creativity (0–1) | 0 |
| `CUSTOM_LLM_MAX_TOKENS` | Max response length | 8192 |
| `CUSTOM_LLM_TOP_P` | Nucleus sampling | 1 |
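Assuming these are exported the same way as the required variables (the values below are purely illustrative), a slightly more creative, shorter-response setup might look like:
export CUSTOM_LLM_TEMPERATURE=0.7   # more varied wording
export CUSTOM_LLM_MAX_TOKENS=4096   # cap responses at 4096 tokens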
4.3 Confirm the Switch
elc
> which model are you using?
If the reply contains “Claude 3.5 Sonnet”, the switch succeeded.
5. Embedding the CLI in Your Own Code
Easy LLM CLI ships as an NPM package, so you can import it like any other JavaScript library.
5.1 Install the Library
npm install easy-llm-cli
5.2 Minimal Working Example
import { ElcAgent } from 'easy-llm-cli';

const agent = new ElcAgent({
  model: 'custom-llm-model-name',    // same as CUSTOM_LLM_MODEL_NAME
  apiKey: 'custom-llm-api-key',      // same as CUSTOM_LLM_API_KEY
  endpoint: 'custom-llm-endpoint',   // same as CUSTOM_LLM_ENDPOINT
  extension: {
    mcpServers: {
      chart: {
        command: 'npx',
        args: ['-y', '@antv/mcp-server-chart'],
        trust: false                 // require confirmation before tool calls
      }
    },
    excludeTools: ['run_shell_command'] // optional safety list
  }
});

const result = await agent.run('generate a bar chart of sales data');
console.log(result);
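Because the snippet uses top-level await, save it as an ES module (for example agent.mjs, a filename chosen here for illustration) and run it with Node 20 or newer:
node agent.mjs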
5.3 Real-World Use Cases
- Auto-generate release notes in CI pipelines
- Add natural-language query boxes to internal dashboards
- Build custom VS Code extensions powered by your own model
6. Copy-Paste Recipe Book for Everyday Tasks
All snippets are tested verbatim.
Steps: open a terminal → cd into any project → run elc → paste the command → press Enter.
6.1 Understand a New Codebase
> describe the high-level architecture and create a Mermaid diagram
6.2 Summarize Yesterday’s Work
> summarize all changes committed yesterday, grouped by contributor
6.3 Plan a Language Upgrade
> draft a migration plan from Python 2 to Python 3.12, highlighting risky dependencies
6.4 Create a Slide Deck from Git History
> build a reveal.js slideshow of last week’s commits, one slide per author
6.5 Batch Image Processing
> convert every .jpg here to .webp, lossless, and prepend the creation date
6.6 Generate an App from Mixed Media
> read docs/spec.pdf and docs/wireframe.png, then output a Next.js project scaffold
7. Frequently Asked Questions
> Sourced from real community threads and Discord logs.
Q1: Is Node 18 acceptable?
A: Node 20 is the minimum tested version. Node 18 may leak file handles under heavy MCP usage.
Q2: I set the environment variables but still see Gemini.
A: Double-check the spelling and make sure the variables are exported in the same shell session where you run elc.
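To verify, list what the current shell actually sees (a standard shell command; all four variables from Section 4.1 should appear):
env | grep CUSTOM_LLM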
Q3: Can I keep multiple models side-by-side?
A: One active model per CLI instance. Use separate terminal tabs or shell scripts to switch quickly.
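As a sketch of the shell-script approach, a small wrapper (a hypothetical claude.sh, reusing the variables from Section 4.1) can pin one model per script:
#!/usr/bin/env bash
# claude.sh - hypothetical wrapper: launch elc pre-configured for Claude
export USE_CUSTOM_LLM=true
export CUSTOM_LLM_API_KEY="sk-or-v1-xxxxxxxx"
export CUSTOM_LLM_ENDPOINT="https://openrouter.ai/api/v1"
export CUSTOM_LLM_MODEL_NAME="anthropic/claude-3.5-sonnet"
elc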
Q4: What exactly is an MCP server?
A: Think of it as a plug-in that lets the LLM safely run local tools—charts, SQL, file converters—without exposing your whole system.
Q5: Why does token counting matter?
A: It helps you stay within budget on pay-per-token services and debug “context too long” errors.
8. Troubleshooting Cheat Sheet
| Symptom | Likely Cause | Quick Fix |
|---|---|---|
| `npx easy-llm-cli` hangs | Slow or blocked npm registry | Use a mirror: `npx easy-llm-cli --registry=https://registry.npmmirror.com` |
| `Error: Cannot find module 'dotenv'` | Corrupted global install | Re-install: `npm install -g easy-llm-cli` |
| Garbled Chinese characters | Terminal encoding | macOS/Linux: `export LANG=en_US.UTF-8`; Windows: `chcp 65001` |
| `401 Unauthorized` | Wrong API key or zero credit | Regenerate the key and check your balance |
Closing Thoughts
Easy LLM CLI turns the world’s most powerful language models into the simplest terminal commands.
Whether you need an AI code reviewer, an automated report writer, or just a quick way to rename 1,000 photos by date, it’s one line away.
Open a terminal, type elc, and say what you need.
The model is listening.