Daili Code: An Open-Source AI Agent CLI Compatible with Multiple LLMs

An open-source AI Agent CLI compatible with multiple Large Language Models (LLMs), forked from Gemini CLI.
This repository contains Daili Code, a fork of Gemini CLI. It is a command-line AI tool that connects to your tools, understands your code, and accelerates your workflow. It supports multiple LLM providers, including Gemini, OpenAI, and any custom LLM API that follows the OpenAI API format.
What Can You Do with Daili Code?
With Daili Code, you can enjoy a wide range of benefits:
1. Query and Edit Large Codebases
Leverage the advanced capabilities of LLMs to query and edit large codebases. It supports large context windows, allowing you to handle complex coding tasks more efficiently. For example, you can ask Daili Code to analyze the structure of a large codebase and identify potential areas for optimization.
2. Generate New Applications from PDFs or Sketches
Take advantage of the multi-modal capabilities of Daili Code to generate new applications from PDFs or sketches. This can be a great way to quickly prototype new ideas or convert existing documents into functional applications. For instance, if you have a PDF with a design for a mobile app, you can use Daili Code to generate the initial code for that app.
3. Automate Operational Tasks
Automate various operational tasks, such as querying Pull Requests or handling complex rebase operations. This can save you a significant amount of time and effort, especially when dealing with large projects. For example, you can set up Daili Code to automatically query for new Pull Requests and notify you when there are changes that need your attention.
4. Connect New Features with Tools and MCP Servers
Use tools and MCP servers to connect new features to Daili Code. This allows you to extend the functionality of the tool and integrate it with your existing workflow. For instance, you can connect Daili Code to a project management tool to automatically update tasks based on the progress of your code development.
5. Configure and Use Your Preferred LLM Provider
Through simple environment variable configuration, you can easily use your preferred LLM provider. This gives you the flexibility to choose the LLM that best suits your needs and preferences. For example, if you prefer to use a specific model from a particular provider, you can configure Daili Code to use that model.
6. Seamlessly Switch Between Different LLM Providers
Switch between different LLM providers without changing your workflow. This is particularly useful if you want to compare the performance of different models or if you need to use a different provider for specific tasks. For instance, you can switch from using a Google Gemini model to an OpenAI model with just a few simple configuration changes.
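Concretely, switching providers is just a matter of flipping environment variables. Here is a small sketch using the variables described in the Custom LLM Configuration section; the endpoint, key, and model values are placeholders, and the helper functions themselves are not part of Daili Code:

```shell
# Sketch: two shell helpers for switching providers by toggling the
# environment variables Daili Code reads. Values below are placeholders.
use_custom_llm() {
  export USE_CUSTOM_LLM=true
  export CUSTOM_LLM_API_KEY="your-api-key"
  export CUSTOM_LLM_ENDPOINT="https://api.your-llm-provider.com/v1"
  export CUSTOM_LLM_MODEL_NAME="your-model-name"
}

use_default_gemini() {
  export USE_CUSTOM_LLM=false
}

use_custom_llm
echo "USE_CUSTOM_LLM=$USE_CUSTOM_LLM"   # prints USE_CUSTOM_LLM=true
```

Running `dlc` in the same shell after calling one of these helpers picks up whichever provider you selected.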
Test Results of Different Models
This solution has been tested across several dimensions (thinking process, simple task completion, tool invocation, MCP invocation, complex task handling, multi-modal capability, and token counting) for various providers, models, and locally deployed models. Here are the test results:
Model | Thinking Process | Simple Task | Tool Invocation | MCP Invocation | Complex Task | Multi-modal | Token Count |
---|---|---|---|---|---|---|---|
【Google】Gemini-2.5-pro | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
【OpenRouter】Claude Sonnet 4 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
【OpenRouter】Gpt-4.1 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
【OpenRouter】Grok-4 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
【Volcengine】Doubao-Seed-1.6 | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
【Bailian】Qwen3-Plus | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
【Moonshot】kimi-k2 | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ |
【Volcengine】DeepSeek-R1 | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ |
【Siliconflow】DeepSeek-R1 | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ |
【Volcengine】Doubao-1.5-Pro | ❌ | ✅ | ✅ | ✅ | ⚠️ | ❌ | ✅ |
【Volcengine】DeepSeek-V3 | ❌ | ✅ | ✅ | ✅ | ⚠️ | ❌ | ✅ |
【Bailian】Qwen3-235b-a22b | ✅ | ✅ | ✅ | ✅ | ⚠️ | ❌ | ✅ |
【vLLM】Qwen2.5-7B-Instruct | ❌ | ✅ | ✅ | ✅ | ⚠️ | ❌ | ✅ |
【vLLM】DeepSeek-R1-32B | ✅ | ✅ | ✅ | ✅ | ⚠️ | ❌ | ✅ |
【Ollama】Qwen2.5-7B-Instruct | ❌ | ✅ | ✅ | ✅ | ⚠️ | ❌ | ✅ |
(✅ = passed, ⚠️ = partially passed or unstable, ❌ = not supported)
Getting Started
Prerequisites
Make sure you have Node.js version 20 or higher installed. Node.js is a JavaScript runtime that lets you run JavaScript outside a web browser; it is required to run the Daili Code CLI.
Running the CLI
You can run the Daili Code CLI in two ways:
Method 1: Using npx
In your terminal, execute the following command:
npx daili-code
The `npx` command ships with npm (Node Package Manager) and lets you run packages without installing them globally. When you run `npx daili-code`, it downloads the latest version of the Daili Code package and runs it.
Method 2: Installing Globally
Alternatively, you can install the Daili Code package globally using the following commands:
npm install -g daili-code
dlc
The `npm install -g daili-code` command installs the Daili Code package globally on your system, so you can run the `dlc` command from anywhere in your terminal.
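As a quick sanity check, a snippet like the following (a sketch, not part of the official tooling) confirms whether `dlc` is actually on your PATH before you rely on it:

```shell
# Sketch: check whether dlc is on the PATH before using it
if command -v dlc >/dev/null 2>&1; then
  echo "dlc is installed"
else
  echo "dlc not found; run: npm install -g daili-code"
fi
```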
Custom LLM Configuration
Daili Code’s command-line tool supports connecting to any LLM API that is compatible with OpenAI. You can use the following environment variables to configure your preferred LLM:
# Enable custom LLM support
export USE_CUSTOM_LLM=true
export CUSTOM_LLM_API_KEY="your-api-key" # Your LLM provider API key
export CUSTOM_LLM_ENDPOINT="https://api.your-llm-provider.com/v1" # API endpoint
export CUSTOM_LLM_MODEL_NAME="your-model-name" # Model name
# Optional parameters
export CUSTOM_LLM_TEMPERATURE=0.7 # Temperature parameter (default value: 0)
export CUSTOM_LLM_MAX_TOKENS=8192 # Maximum number of tokens (default value: 8192)
export CUSTOM_LLM_TOP_P=1 # Top P parameter (default value: 1)
When these variables are set, the Daili Code command-line tool will use your custom LLM instead of the default Gemini model. `USE_CUSTOM_LLM` enables custom LLM support. `CUSTOM_LLM_API_KEY` is the API key provided by your LLM provider and authenticates your requests. `CUSTOM_LLM_ENDPOINT` is the URL of your provider's API endpoint. `CUSTOM_LLM_MODEL_NAME` is the name of the model you want to use. The optional parameters `CUSTOM_LLM_TEMPERATURE`, `CUSTOM_LLM_MAX_TOKENS`, and `CUSTOM_LLM_TOP_P` let you fine-tune the model's behavior.
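Because a missing variable usually surfaces only as a failed API call, a small pre-flight check can save a debugging round trip. This is a sketch (bash-specific, not part of Daili Code itself):

```shell
# Sketch: fail fast if a required variable is unset before launching dlc.
# Bash-specific (uses ${!v} indirect expansion); not part of Daili Code.
check_config() {
  local missing=0
  for v in CUSTOM_LLM_API_KEY CUSTOM_LLM_ENDPOINT CUSTOM_LLM_MODEL_NAME; do
    if [ -z "${!v}" ]; then
      echo "missing: $v"
      missing=1
    fi
  done
  if [ "$missing" -eq 0 ]; then
    echo "configuration looks complete"
  fi
  return "$missing"
}

check_config || echo "set the variables above before running dlc"
```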
Examples
Starting a New Project
Once the CLI is running, you can interact with your configured LLM from the shell. You can start a project from a new directory like this:
cd new-project/
dlc
> Help me write a Discord bot that can answer questions using the FAQ.md file I will provide.
In this example, you first navigate to a new project directory with `cd`, then run `dlc` to start the Daili Code CLI, and finally ask it to write a Discord bot that answers questions from the FAQ.md file you provide.
Working on an Existing Project
Or you can work on an existing project:
git clone https://github.com/nearmetips/DailiCode.git
cd DailiCode
dlc
> Summarize all the changes from yesterday for me.
Here, you clone an existing project with `git clone`, navigate into it with `cd`, start the Daili Code CLI with `dlc`, and ask it to summarize the changes made to the project the previous day.
Running in Code
Daili Code can also be imported and used directly in your code via NPM. Here is an example:
import { ElcAgent } from 'daili-code';

const agent = new ElcAgent({
  model: 'custom-llm-model-name',
  apiKey: 'custom-llm-api-key',
  endpoint: 'custom-llm-endpoint',
  extension: {
    mcpServers: {
      chart: {
        command: 'npx',
        args: ['-y', '@antv/mcp-server-chart'],
        trust: false,
      },
    },
    excludeTools: ['run_shell_command'],
  },
});

const result = await agent.run('Please help me generate a bar chart of sales data.');
console.log(result);
In this code, you import the `ElcAgent` class from the `daili-code` package and create an instance with the necessary configuration (model name, API key, and endpoint). You also configure extensions, such as the MCP server for generating charts and the tools to exclude. Finally, you call the agent's `run` method with a specific task and log the result to the console.
Viewing API Call Documentation
For detailed documentation on API calls, please refer to Programmatic API. This documentation provides in-depth information on how to use the Daili Code API in your code, including the available methods, parameters, and return values.
Next Steps
Contributing or Building from Source
Learn how to contribute code or build from source. This guide provides instructions on how to contribute to the Daili Code project, including how to set up a development environment, make changes to the code, and submit pull requests.
Exploring CLI Commands
Explore the available CLI commands. These commands allow you to perform various tasks using the Daili Code CLI, such as interacting with LLMs, managing projects, and running custom scripts.
Troubleshooting
If you encounter any problems, please refer to the troubleshooting guide. This guide provides solutions to common issues that you may encounter when using Daili Code, such as installation problems, API errors, and configuration issues.
Reading the Full Documentation
For more comprehensive documentation, please refer to the full documentation. This documentation covers all aspects of Daili Code, including its features, installation, configuration, and usage.
Getting Inspiration from Popular Tasks
Check out some popular tasks for more inspiration. These tasks demonstrate the various ways you can use Daili Code to solve real-world problems, such as exploring new codebases, processing existing code, automating workflows, and interacting with systems.
Popular Tasks
Exploring New Codebases
First, enter an existing or newly cloned repository and run `dlc`. You can then ask questions like:
> Describe the main components of this system architecture.
This question allows you to understand the overall structure of the codebase and the key components that make it up. It can help you get familiar with a new codebase quickly.
> What security mechanisms are in operation?
This question helps you identify the security measures implemented in the codebase, such as authentication, authorization, and encryption. It is important for ensuring the security of your applications.
Processing Existing Code
You can also use Daili Code to process existing code. For example:
> Implement a first draft for GitHub issue number 123.
This command asks Daili Code to generate an initial implementation for a specific GitHub issue. It can save you time and effort in writing the code from scratch.
> Help me migrate this codebase to the latest version of Java. First, create a plan.
This command requests Daili Code to create a migration plan for upgrading the codebase to the latest version of Java. A well-planned migration can minimize the risk of errors and ensure a smooth transition.
Automating Workflows
Use the MCP server to integrate local system tools with enterprise collaboration suites. For instance:
> Create a slide presentation showing the git history of the past 7 days, grouped by feature and team member.
This command uses Daili Code to automate the creation of a slide presentation based on the git history. It can save you time in gathering and presenting data.
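Under the hood, a prompt like this reduces to a git query. Here is a sketch of the raw data the agent would start from; the format string and the grouping pipeline are illustrative, not what Daili Code literally runs:

```shell
# Sketch: the raw data behind such a deck -- one line per commit from the
# past 7 days, with author name and subject separated by a tab.
recent_commits() {
  git log --since="7 days ago" --pretty=format:'%an%x09%s'
}

# Rough per-author commit counts:
#   recent_commits | cut -f1 | sort | uniq -c
```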
> Create a full-screen web application for a wall display that shows our most interacted-with GitHub issues.
This command asks Daili Code to generate a web application that displays the most interacted-with GitHub issues. It can help you keep track of important issues in your project.
Interacting with the System
You can also use Daili Code to interact with the system. For example:
> Convert all images in this directory to the png format and rename them according to the date in the exif data.
This command uses Daili Code to automate the conversion and renaming of images in a directory. It can save you time in managing your image files.
> Organize my PDF invoices by the month of expenditure.
This command asks Daili Code to sort your PDF invoices based on the month of expenditure. It can help you keep your financial records organized.
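To make this concrete, here is a sketch of the kind of script such a prompt could produce. It groups PDFs by file modification month as a stand-in for the real expenditure date, which the agent would normally read from the invoice contents:

```shell
# Sketch: move each PDF into a YYYY-MM folder based on its modification
# time (a stand-in for the true expenditure date inside the invoice).
organize_by_month() {
  dir="$1"
  for f in "$dir"/*.pdf; do
    [ -e "$f" ] || continue
    month=$(date -r "$f" +%Y-%m)   # GNU date: -r reads the file's mtime
    mkdir -p "$dir/$month"
    mv "$f" "$dir/$month/"
  done
}
```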
This project is based on https://github.com/ConardLi/easy-llm-cli and https://github.com/google-gemini/gemini-cli. These are the original repositories that Daili Code was forked from. By building on these existing projects, Daili Code can leverage their features and functionality while adding its own unique capabilities.
In conclusion, Daili Code is a powerful and flexible AI Agent CLI that offers a wide range of features and benefits. Whether you are a developer looking to accelerate your workflow, a data scientist working with large codebases, or an IT professional managing complex projects, Daili Code can help you achieve your goals more efficiently. With its support for multiple LLMs, easy configuration, and rich set of commands, Daili Code is a valuable tool in your development toolkit. So why not give it a try and see how it can transform your work?