CLI Proxy API: Seamlessly Integrate CLI Models into Your Applications

In today’s fast-changing world of technology, artificial intelligence (AI) is everywhere, shaping how we build smart apps and improve daily tasks. For developers, tapping into AI’s power often means wrestling with complex tools or command-line setups. That’s where the CLI Proxy API comes in—a handy tool that lets you bring the strengths of CLI models into your projects using a simple API interface. No more being stuck with just a command line! This guide walks you through what the CLI Proxy API offers, how to set it up, and how to use it effectively in your work.

What is the CLI Proxy API?

At its core, the CLI Proxy API is a bridge: a proxy server that wraps CLI models behind the standard API formats used by OpenAI, Gemini, and Claude. Instead of typing commands into a terminal, you call these models through ordinary HTTP requests. This opens up a world of possibilities, whether you’re creating a chatbot, automating tasks, or building something more advanced.

Here’s what makes it stand out:

  • Works with Many Platforms: It supports API endpoints for OpenAI, Gemini, and Claude, so you can switch between them easily.
  • Flexible Responses: Choose between streaming responses (great for real-time updates) or non-streaming ones, depending on your needs.
  • Smart Tools: It allows function calls and tool integration, making your interactions with models more dynamic.
  • Handles More Than Text: Beyond text, it supports image inputs, broadening what you can do.
  • Multiple Accounts: It manages several accounts at once, balancing the load to keep things running smoothly.

Whether you’re keeping it simple or tackling complex projects, the CLI Proxy API has your back.

How to Install the CLI Proxy API

Getting started is easy, but you’ll need a couple of things in place first. Let’s break it down.

What You Need Before Starting

Before diving in, make sure you have:

  • Go Language Setup: You’ll need Go version 1.24 or higher installed. Think of Go as the toolkit that builds the CLI Proxy API.
  • Google Account: A Google account with access to CLI models is essential for logging in and using the system.

Installing Step-by-Step

The CLI Proxy API is built from source code, which you’ll download and set up yourself. Here’s how:

  1. Get the Code
    Open your terminal (the command-line window on your computer) and type:

    git clone https://github.com/luispater/CLIProxyAPI.git
    cd CLIProxyAPI
    

    This grabs the project files from GitHub and moves you into the project folder.

  2. Build the Program
    Now, turn the code into a working app by running:

    go build -o cli-proxy-api ./cmd/server
    

    After this, you’ll see a new file called cli-proxy-api in the folder. That’s the program you’ll use.

It’s like putting together a puzzle—two steps, and you’re done! If something goes wrong (like Go not working), double-check that Go is installed and set up correctly on your system.

How to Use the CLI Proxy API: Basic Steps

Once it’s installed, you’re ready to put it to work. Using it involves three main steps: logging in, starting the server, and making API calls.

Step 1: Log In

You need to sign in with your Google account to prove you’re allowed to use the CLI models. In your terminal, run:

./cli-proxy-api --login

A login page will pop up in your browser. Follow the steps there to sign in. If you’re an existing Gemini Code Assist user, you might need to specify a project ID like this:

./cli-proxy-api --login --project_id <your_project_id>

Once you’re logged in, the system saves your login details for next time.

Step 2: Start the Server

With login done, start the proxy server by running:

./cli-proxy-api

It’ll launch on port 8317 by default (think of ports like channels on a TV). If everything’s working, you’ll see some startup messages in the terminal showing it’s ready.

Step 3: Make API Calls

Now the fun part—talking to the API! It offers a few key endpoints (ways to interact with it). Here are two common ones:

  • See Available Models
    To check which models you can use, send this request:

    GET http://localhost:8317/v1/models
    

    You’ll get back a list, like gemini-2.5-pro or gemini-2.5-flash.

  • Start a Chat
    To chat with a model, send a POST request to:

    POST http://localhost:8317/v1/chat/completions
    

    Here’s an example of what to send:

    {
      "model": "gemini-2.5-pro",
      "messages": [
        {
          "role": "user",
          "content": "Hello, how are you?"
        }
      ],
      "stream": true
    }
    

    Setting "stream": true means you’ll get replies in real time, perfect for live chats. If you don’t need that, change it to false.

With these steps, you’re up and running, ready to use CLI models in your projects.
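The three steps above can also be scripted. Here’s a minimal sketch using only Python’s standard library; it assumes the proxy is running locally on the default port 8317, and the helper names are my own, not part of the project:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8317/v1"  # default port; adjust if you changed it

def build_chat_payload(prompt, model="gemini-2.5-pro", stream=False):
    """Build the request body for POST /v1/chat/completions."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }

def list_models(base_url=BASE_URL):
    """Return the model IDs the proxy currently exposes."""
    with urllib.request.urlopen(f"{base_url}/models", timeout=30) as resp:
        data = json.load(resp)
    return [m["id"] for m in data.get("data", [])]

def chat(prompt, base_url=BASE_URL):
    """Send a non-streaming chat request and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

# Example usage (requires the proxy to be running):
#   print(list_models())
#   print(chat("Hello, how are you?"))
```

Nothing here is specific to Python; any HTTP client that can send JSON will do.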

Adding the CLI Proxy API to Your Code

One of the best things about the CLI Proxy API is how easily it fits into your existing tools, especially if you use OpenAI-compatible libraries. Let’s look at how to do this in Python and JavaScript.

Using It in Python

If you like Python, you can use the OpenAI library. Here’s a simple example:

from openai import OpenAI

client = OpenAI(
    api_key="dummy",  # Placeholder; only checked if you configure api-keys
    base_url="http://localhost:8317/v1"
)

response = client.chat.completions.create(
    model="gemini-2.5-pro",
    messages=[
        {"role": "user", "content": "Hello, how are you?"}
    ]
)

print(response.choices[0].message.content)

Run this, and it’ll show the model’s reply. The api_key value is just a placeholder; the proxy ignores it unless you’ve configured api-keys in its configuration file.
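When you set "stream": true instead, the reply arrives as Server-Sent Events, the chunked format used by OpenAI-compatible APIs. If you’d rather not depend on a client library, here’s a rough standard-library sketch of reading such a stream. The chunk layout (choices[0].delta.content plus a data: [DONE] sentinel) is the usual OpenAI-style convention, so treat it as an assumption rather than a guarantee:

```python
import json
import urllib.request

def iter_stream_deltas(lines):
    """Yield reply fragments from SSE lines of the form b'data: {...}'."""
    for raw in lines:
        line = raw.strip()
        if not line.startswith(b"data:"):
            continue  # skip blank keep-alive lines
        payload = line[len(b"data:"):].strip()
        if payload == b"[DONE]":  # sentinel that ends the stream
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {}).get("content")
        if delta:
            yield delta

def stream_chat(prompt, base_url="http://localhost:8317/v1"):
    """Print a streamed reply as it arrives (proxy must be running)."""
    body = json.dumps({
        "model": "gemini-2.5-pro",
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/chat/completions", data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        for delta in iter_stream_deltas(resp):  # HTTPResponse iterates by line
            print(delta, end="", flush=True)
    print()
```

In practice the OpenAI client’s stream=True option does all of this for you; the sketch just shows what is happening on the wire.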

Using It in JavaScript

For web developers, JavaScript works just as well. Try this:

import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'dummy', // Placeholder, not actually needed
  baseURL: 'http://localhost:8317/v1',
});

const response = await openai.chat.completions.create({
  model: 'gemini-2.5-pro',
  messages: [
    { role: 'user', content: 'Hello, how are you?' }
  ],
});

console.log(response.choices[0].message.content);

This code waits for the model’s answer and prints it. Whether you’re coding in Python or JavaScript, it’s straightforward and quick to set up.

Customizing the CLI Proxy API

Want to tweak how it works? You can adjust settings using a configuration file. By default, it looks for a file called config.yaml in the project folder. To use a different file, start the server like this:

./cli-proxy-api --config /path/to/your/config.yaml

What Can You Change?

The configuration file lets you set things like:

  • port: The channel the server uses (default is 8317).
  • auth-dir: Where login tokens are saved (default is ~/.cli-proxy-api).
  • proxy-url: An optional proxy address, like socks5://user:pass@192.168.1.1:1080/. Leave it blank if you don’t need one.
  • debug: Set to true for extra logs to help troubleshoot.
  • api-keys: A list of keys for securing API requests.
  • generative-language-api-key: Keys for Gemini AIStudio access.
  • quota-exceeded: Rules for what happens when you hit usage limits, like switching projects or models.

Sample Configuration

Here’s an example config file:

port: 8317
auth-dir: "~/.cli-proxy-api"
debug: false
proxy-url: ""
quota-exceeded:
  switch-project: true
  switch-preview-model: true
api-keys:
  - "your-api-key-1"
  - "your-api-key-2"
generative-language-api-key:
  - "AIzaSy...01"
  - "AIzaSy...02"
  - "AIzaSy...03"
  - "AIzaSy...04"

This setup switches projects or models when limits are hit and uses multiple API keys. Change it to fit your needs.

How Login and Keys Work

  • Login Folder: The auth-dir stores your Google login tokens as JSON files, letting you use multiple accounts.
  • API Keys: If you set api-keys, add Authorization: Bearer your-api-key-1 to your API request headers.
  • Gemini Keys: The generative-language-api-key is for direct Gemini AIStudio calls.

With the right settings, the CLI Proxy API runs just the way you want it.
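If you’ve set api-keys, every request must carry that bearer header. With the OpenAI client shown earlier, just pass one of your configured keys as api_key instead of "dummy". For raw HTTP calls, here’s a small standard-library sketch; the key value is the placeholder from the sample config above, and the helper names are my own:

```python
import json
import urllib.request

def auth_headers(api_key):
    """Build headers for a proxy secured with the `api-keys` setting."""
    return {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",  # must match a configured key
    }

def chat_with_key(prompt, api_key, base_url="http://localhost:8317/v1"):
    """Send an authenticated chat request (proxy must be running)."""
    body = json.dumps({
        "model": "gemini-2.5-pro",
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    req = urllib.request.Request(f"{base_url}/chat/completions",
                                 data=body, headers=auth_headers(api_key))
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example usage (requires the proxy to be running):
#   print(chat_with_key("Hello!", "your-api-key-1"))
```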

Trying Out Advanced Features

Beyond the basics, the CLI Proxy API has some neat extras to explore.

Managing Multiple Accounts with Load Balancing

If you’ve logged in with several accounts, the proxy can spread requests across them for better reliability. To route the Gemini CLI through the proxy and take advantage of this, set this environment variable before launching it:

export CODE_ASSIST_ENDPOINT="http://127.0.0.1:8317"

The proxy will rotate through your saved accounts automatically. For now, this only works locally (127.0.0.1) for safety.

Running It with Docker

Prefer containers? You can run the CLI Proxy API with Docker. Here’s how:

  • Log In:

    docker run --rm -p 8085:8085 -v /path/to/your/config.yaml:/CLIProxyAPI/config.yaml -v /path/to/your/auth-dir:/root/.cli-proxy-api eceasy/cli-proxy-api:latest /CLIProxyAPI/CLIProxyAPI --login
    
  • Start the Server:

    docker run --rm -p 8317:8317 -v /path/to/your/config.yaml:/CLIProxyAPI/config.yaml -v /path/to/your/auth-dir:/root/.cli-proxy-api eceasy/cli-proxy-api:latest
    

Docker makes it simple to deploy and manage, especially for teams or live systems.

Joining the Community and Helping Out

The CLI Proxy API is open-source, meaning anyone can help make it better. Want to pitch in? Here’s how to contribute:

  1. Fork the project on GitHub.
  2. Create a new branch for your idea:

    git checkout -b feature/amazing-feature
    
  3. Save your changes:

    git commit -m 'Add some amazing feature'
    
  4. Send your work to GitHub:

    git push origin feature/amazing-feature
    
  5. Open a Pull Request on the project page.

It’s licensed under the MIT license—check the LICENSE file for details. Contributing is a great way to sharpen your skills and connect with others who love tech.

Wrapping Up: Start Your AI Adventure

The CLI Proxy API is a straightforward, powerful way to bring CLI models into your projects. From setting it up to tweaking it to coding with it, this guide covers everything you need to get going. Whether you’re building something smart or improving what you’ve got, it’s a tool worth having.

We hope this sparks your curiosity to dive into tech and try the CLI Proxy API for yourself. Got questions or ideas? Share them with the community!

Why This Matters for Developers

For anyone building apps or tools, the CLI Proxy API saves time and effort. It takes the complexity of CLI models and wraps it in an easy-to-use package. You don’t need to be an expert in command lines or AI—just follow the steps, and you’re ready to roll. Plus, its flexibility means it grows with your projects, whether you’re starting small or aiming big.

Tips for Success

  • Test as You Go: Try out each step (login, server start, API calls) to catch issues early.
  • Keep It Simple: Start with basic calls before jumping into advanced features like load balancing.
  • Check the Docs: The GitHub page has more details if you get stuck.

Real-World Uses

Imagine building a chatbot that answers questions in real time, a tool that analyzes images, or an app that automates boring tasks. The CLI Proxy API makes all of that possible without forcing you to rewrite everything from scratch. It’s like a shortcut to smarter software.


This guide is just the beginning. The CLI Proxy API is yours to explore—take it, use it, and see where it leads you in the world of AI and development!