How to Deploy CoPaw on a Linux Server: A Complete Guide from Installation to Public Access

This guide walks through the full process of deploying CoPaw on a Linux server (tested on OpenCloudOS / CentOS-compatible environments), covering installation, dependency troubleshooting, Nginx reverse proxy configuration, and running the service as a background daemon. If you have basic Linux experience, this guide is for you.


Table of Contents

  - What Is CoPaw?
  - Installing CoPaw
  - Fixing Common Errors
  - Troubleshooting Public Access Issues
  - Setting Up Nginx as a Reverse Proxy
  - Running CoPaw as a Background Service
  - Comparison: Which Method Should You Use?
  - FAQ
  - Summary


What Is CoPaw?

CoPaw is a self-hosted AI assistant platform that runs on your local machine or cloud server and exposes a web interface for interacting with AI models. It’s built on the Python ecosystem, uses uv for virtual environment management, and relies on ChromaDB for vector storage — making it a lightweight option for teams or individual developers who want to run private AI workflows on their own infrastructure.

If you’ve been looking for an alternative to fully managed AI services and want control over your own data and runtime environment, CoPaw is worth exploring.


Installing CoPaw

The official installation is a one-line command that downloads and runs the install script, and works on most Linux distributions:

curl -fsSL https://copaw.agentscope.io/install.sh | bash

Behind the scenes, this script does three things:

  1. Installs CoPaw into /root/.copaw
  2. Creates an isolated Python virtual environment using uv
  3. Installs all required dependencies

Once the installation finishes, initialize with default settings:

copaw init --defaults

Heads up: If your server is in a restricted network environment where outbound connections are blocked, curl will return an empty response and the installation will fail. In that case, confirm your server has outbound HTTPS access (port 443) before proceeding, or download the installer from a machine with internet access and transfer it manually.
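If you want to sanity-check outbound access before running the installer, a small helper like the following works. The function name is made up for this sketch; point it at whichever host you need to reach.

```shell
# Quick outbound-HTTPS probe. The helper name is illustrative; pass the
# host you actually need to reach (here, the CoPaw install host).
check_outbound() {
  local host="$1"
  if curl -fsS --connect-timeout 5 -o /dev/null "https://${host}"; then
    echo "outbound ok"
  else
    echo "outbound blocked"
  fi
}

check_outbound copaw.agentscope.io
```

If this prints "outbound blocked", fix the network path (firewall, security group, proxy) before retrying the install script.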


Fixing Common Errors

In practice, getting from a fresh install to a working CoPaw instance usually involves working through a few predictable issues. Here’s what you’re likely to hit and how to resolve each one.


Error 1: SQLite3 Version Not Supported

When running copaw init --defaults, you might see:

RuntimeError: Your system has an unsupported version of sqlite3. Chroma
requires sqlite3 >= 3.35.0.

Why this happens

ChromaDB — the vector database CoPaw uses for memory and file storage — requires SQLite3 version 3.35.0 or higher. Many Linux distributions, particularly CentOS and its derivatives (including OpenCloudOS), ship with an older SQLite3 (often around 3.26.0) that doesn’t meet this requirement.
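You can check in advance whether your system is affected. The version that matters is the SQLite library Python's sqlite3 module links against, not whatever the standalone sqlite3 CLI reports:

```shell
# Print the SQLite library version as seen by Python's sqlite3 module.
# This is the version ChromaDB checks; the sqlite3 CLI (if installed)
# may report a different number.
python3 -c "import sqlite3; print(sqlite3.sqlite_version)"
```

Anything below 3.35.0 (a stock CentOS-family image often reports around 3.26.0) means you will hit this error.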

The fix: install pysqlite3-binary and patch Python’s module loader

pysqlite3-binary is a pre-compiled Python package that bundles a modern SQLite3 build. Once installed, you can use a Python .pth file to tell the interpreter to use it instead of the system version — no need to touch system packages.

Step 1: Locate uv

CoPaw manages its virtual environment with uv, not traditional pip. Find where uv is installed:

which uv
# Typically: /root/.local/bin/uv

Step 2: Install pysqlite3-binary into the CoPaw venv

/root/.local/bin/uv pip install pysqlite3-binary --python /root/.copaw/venv/bin/python

Step 3: Create a .pth patch file

A .pth file placed in site-packages is processed automatically at interpreter startup: ordinary lines are added to sys.path, and any line that begins with import is executed as code. We use that second behavior to swap out the sqlite3 module:

# Note: adjust "python3.12" to match the Python version inside your venv
cat > /root/.copaw/venv/lib/python3.12/site-packages/pysqlite3_patch.pth << 'EOF'
import sys; import pysqlite3; sys.modules["sqlite3"] = pysqlite3
EOF

This one-liner replaces the system sqlite3 module with pysqlite3 at startup, allowing ChromaDB to use the newer SQLite3 version transparently.
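If you want to see this mechanism in isolation, independent of CoPaw, here is a throwaway demo showing that import lines in a .pth file execute when their site directory is registered. The sys._pth_demo attribute is invented purely for the demo.

```shell
# Throwaway demo of the .pth mechanism: a line beginning with "import"
# inside a .pth file is executed when the containing site directory is
# processed. The _pth_demo attribute exists only for this demo.
tmp=$(mktemp -d)
echo 'import sys; sys._pth_demo = "loaded"' > "$tmp/demo.pth"
python3 -c "import site, sys; site.addsitedir('$tmp'); print(getattr(sys, '_pth_demo', 'not loaded'))"
# → loaded
rm -rf "$tmp"
```

The real patch works the same way, except the site-packages directory is processed automatically at startup, so no explicit addsitedir call is needed.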

Now run initialization again:

copaw init --defaults

Error 2: pip Command Not Found

You might run into this:

-bash: /root/.copaw/venv/bin/pip: No such file or directory

Why this happens

CoPaw’s virtual environment is managed by uv, which by design does not generate a standalone pip executable inside the venv. Instead, uv has its own package management commands.

The fix

Use either of these instead:

# Option A: uv pip (recommended)
/root/.local/bin/uv pip install <package-name> --python /root/.copaw/venv/bin/python

# Option B: python -m pip
/root/.copaw/venv/bin/python -m pip install <package-name>

Error 3: pysqlite3 Module Not Found

After creating the .pth file, you might see this on startup:

ModuleNotFoundError: No module named 'pysqlite3'
Remainder of file ignored

Why this happens

This means pysqlite3-binary wasn’t actually installed into CoPaw’s virtual environment — the .pth file references a module that doesn’t exist yet.

How to diagnose and fix it

First, confirm where uv is located:

find / -name "uv" -type f 2>/dev/null

Then re-run the install command, making sure you’re pointing at the correct Python interpreter:

/root/.local/bin/uv pip install pysqlite3-binary --python /root/.copaw/venv/bin/python

Verify the installation worked:

/root/.copaw/venv/bin/python -c "import pysqlite3; print(pysqlite3.sqlite_version)"
# Should print a version >= 3.35.0

Once confirmed, recreate the .pth file and restart CoPaw.


Troubleshooting Public Access Issues

This is the most common category of deployment problems. The symptom: curl 127.0.0.1:8088 works fine locally, but accessing the server’s public IP returns a 502 error or refuses to connect entirely.


Works Locally but Returns 502 Over the Internet

A 502 Bad Gateway means the request reached your server, but the reverse proxy failed to forward it to the upstream application. This usually points to Nginx being present but misconfigured, or the upstream service being unreachable.

Start your investigation here:

# Check whether Nginx is running
systemctl status nginx

# Look at recent error output
tail -50 /var/log/nginx/error.log

Service Only Listens on 127.0.0.1

Check what address CoPaw is actually bound to:

ss -tlnp | grep 8088

If the output looks like this:

LISTEN 0  2048  127.0.0.1:8088  0.0.0.0:*  users:(("copaw",...))

CoPaw is only accepting connections from localhost. Public requests can’t reach it at all — they’re being rejected before they even get to the application layer.

Why does CoPaw bind to 127.0.0.1 by default?

This is a deliberate security pattern used by most web services: bind only to the loopback interface, then let a reverse proxy handle public-facing traffic. This keeps the application layer off the public internet and lets Nginx handle concerns like SSL termination, access control, and request filtering.

To confirm there’s no config file or startup flag to change this behavior:

# Check the CoPaw directory for config files
find ~/.copaw -not -path "*/venv/*" | head -50

# Check the startup command
ps aux | grep copaw
cat /proc/<PID>/cmdline | tr '\0' ' '

If CoPaw is launched with just copaw app and there’s no standalone config file, the bind address isn’t configurable — you’ll need Nginx as a reverse proxy.


Setting Up Nginx as a Reverse Proxy

Step 1: Install Nginx

yum install -y nginx

Step 2: Create a reverse proxy config

Write the following to /etc/nginx/conf.d/copaw.conf:

cat > /etc/nginx/conf.d/copaw.conf << 'EOF'
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8088;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
EOF

The Upgrade and Connection headers are required to support WebSocket connections, which CoPaw uses for real-time communication.
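One refinement worth knowing about: hardcoding Connection "upgrade" puts every proxied request into upgrade mode, including plain HTTP ones. The pattern recommended in the Nginx WebSocket proxying documentation uses a map so the header is only sent when the client actually requested an upgrade. If you want that behavior, add this above the server block in copaw.conf (map must live at the http level, which is where conf.d files are included):

```nginx
# From the Nginx WebSocket proxying docs: forward "upgrade" only when the
# client sent an Upgrade header, and "close" otherwise.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# ...then inside the location block, replace the hardcoded header with:
#     proxy_set_header Connection $connection_upgrade;
```

The hardcoded version in the config above works fine for CoPaw; the map is simply the more general-purpose form.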

Step 3: Enable and start Nginx

systemctl enable nginx && systemctl start nginx

Step 4: Open port 80 in your cloud provider’s firewall

On most cloud platforms (AWS security groups, Tencent Cloud security policies, Alibaba Cloud ECS security groups, etc.), inbound traffic is blocked by default at the network level regardless of your server’s local firewall settings. You need to explicitly allow TCP port 80 inbound in your cloud console.

The exact steps vary by provider, but the goal is the same: add an inbound rule allowing TCP traffic on port 80 from any source (0.0.0.0/0).


Nginx Shows the Default Welcome Page Instead of CoPaw

After configuring Nginx, you visit the server’s public IP and see:

Welcome to nginx on OpenCloudOS Linux!

This means conf.d/copaw.conf was loaded, but nginx.conf still contains a server {} block marked default_server. Since your copaw.conf block declares no server_name, requests addressed to the bare IP don’t match it by name, so Nginx routes them to the default block instead.

Confirm the conflict:

grep -n "server {" /etc/nginx/nginx.conf

Check the content of the default server block (usually around line 38):

sed -n '35,60p' /etc/nginx/nginx.conf

It typically looks like this:

server {
    listen       80 default_server;
    listen       [::]:80 default_server;
    server_name  _;
    root         /usr/share/nginx/html;
    include /etc/nginx/default.d/*.conf;
    location / {
    }
    ...
}

Fix: remove the default server block

Back up the config, then delete the offending lines (adjust line numbers to match your file):

cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
sed -i '38,57d' /etc/nginx/nginx.conf

Validate the syntax and reload:

nginx -t && systemctl reload nginx

Visit the public IP again — you should now see CoPaw’s web interface.


Running CoPaw as a Background Service

By default, copaw app runs in the foreground. Close your terminal and the process dies. For any real deployment, you need it to run persistently in the background — and ideally restart automatically if the server reboots or the process crashes.

Here are three approaches, each suited to a different situation.


Option 1: nohup (Quick and Simple)

Good for quick tests or situations where you don’t need automatic restarts:

nohup copaw app > ~/.copaw/copaw.log 2>&1 &
echo $!  # Print the PID — note it down so you can stop the service later

Monitor the logs:

tail -f ~/.copaw/copaw.log

Stop the service:

kill <PID>

Limitation: If the server reboots, you’ll need to start CoPaw manually again.


Option 2: systemd (Recommended for Production)

systemd is the standard service manager on modern Linux systems. It handles startup ordering, automatic restarts on failure, and centralized log management — everything you need for a stable production deployment.

Create a service unit file:

cat > /etc/systemd/system/copaw.service << 'EOF'
[Unit]
Description=CoPaw App
After=network.target

[Service]
Type=simple
User=root
ExecStart=/root/.copaw/venv/bin/copaw app
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Enable and start the service:

systemctl daemon-reload
systemctl enable copaw
systemctl start copaw

Useful management commands:

# Check service status
systemctl status copaw

# Stream live logs
journalctl -u copaw -f

# Restart the service
systemctl restart copaw

# Stop the service
systemctl stop copaw

The Restart=always and RestartSec=5 directives tell systemd to wait 5 seconds and automatically restart CoPaw whenever the process exits unexpectedly.


Option 3: screen (Useful for Debugging)

screen is a terminal multiplexer that keeps a session alive even after you disconnect from SSH:

# Install screen
yum install -y screen

# Start a named session
screen -S copaw

# Launch CoPaw inside the session
copaw app

# Detach (keep running in background): press Ctrl+A, then D

Reattach to the session later:

screen -r copaw

List all active sessions:

screen -ls

Limitation: screen sessions don’t survive a server reboot, so this isn’t suitable for long-running production use.


Comparison: Which Method Should You Use?

Method   | Auto-start on Boot | Auto-restart on Crash | Best For
---------|--------------------|-----------------------|----------------------------------
nohup    | No                 | No                    | Quick tests, temporary runs
systemd  | Yes                | Yes                   | Production, long-term deployment
screen   | No                 | No                    | Debugging, watching live output

FAQ

What does “Empty reply from server” mean when running the install script?

It means your server’s outbound network access is blocked. The curl request never reached the installation server. Check whether your firewall or cloud security group allows outbound HTTPS traffic on port 443.


Is there an alternative to pysqlite3-binary for fixing the SQLite3 version issue?

Yes — you can compile and install a newer version of SQLite3 from source. But that’s time-consuming and carries a risk of affecting other system components that depend on the system SQLite3. pysqlite3-binary is isolated to the Python virtual environment and doesn’t touch system packages, making it the lower-risk option in almost every case.


How do I verify that the pysqlite3 patch is actually working?

Run this command:

/root/.copaw/venv/bin/python -c "import sqlite3; print(sqlite3.sqlite_version)"

If the output shows version 3.35.0 or higher, the patch is working correctly.


Nginx is configured but I’m still getting a 502. What should I check?

Work through this checklist in order:

  1. Is CoPaw actually running? → ss -tlnp | grep 8088
  2. Is Nginx running? → systemctl status nginx
  3. Are there errors in the Nginx log? → tail -50 /var/log/nginx/error.log
  4. Is port 80 open in your cloud provider’s security group?

What port does CoPaw listen on by default?

CoPaw binds to 127.0.0.1:8088 by default, accepting only local connections. You need Nginx (or another reverse proxy) to forward external traffic to that local address.


My systemd service fails to start. How do I diagnose it?

journalctl -u copaw -n 50 --no-pager

Common causes include: the Python executable path being wrong in the service file, missing dependencies, or port 8088 already being occupied by another process.


Can I access CoPaw over HTTPS?

Yes. Add an SSL certificate to your Nginx configuration. The most common approach is to use a free certificate from Let’s Encrypt via certbot. You’ll need a domain name pointing to your server’s IP address before setting this up.
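As a rough sketch of where you end up (certbot’s --nginx plugin can generate an equivalent block for you), the HTTPS server block looks something like this. The domain example.com is a placeholder, and the certificate paths follow Let’s Encrypt’s default layout for that domain:

```nginx
server {
    listen 443 ssl;
    server_name example.com;  # placeholder — use your actual domain

    # Default Let's Encrypt paths for a certificate issued to example.com
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8088;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Remember to also open TCP port 443 in your cloud provider’s security group, just as you did for port 80.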


Summary

Deploying CoPaw on a Linux server follows a clear progression of stages:

Installation is straightforward — a single curl command handles everything, with uv managing the virtual environment.

Dependency troubleshooting is where most people get stuck. The SQLite3 version conflict with ChromaDB is the most common blocker, but it’s solvable without modifying system packages: install pysqlite3-binary into the virtual environment and create a .pth patch file to redirect Python’s module loader.

Network configuration requires understanding that CoPaw only listens locally by design. Setting up Nginx as a reverse proxy is the standard solution. The one gotcha to watch out for: Nginx’s default server {} block in nginx.conf will intercept port 80 traffic before your custom config gets a chance to handle it — delete that default block after creating your reverse proxy config.

Process management should use systemd for any deployment you care about. It handles auto-start on boot and crash recovery automatically, and journalctl gives you a reliable way to inspect logs.

The errors you’ll encounter along the way follow consistent patterns. The error messages usually point directly at the cause — working through them systematically is generally enough to reach a running deployment.