CoWork-OSS: A Comprehensive Guide to Local-First AI Agents on macOS
In the modern digital workflow, managing files, generating reports, and organizing data across multiple directories can be a tedious and time-consuming process. While cloud-based AI solutions offer convenience, they often come at the cost of privacy and data control. Enter CoWork-OSS, an open-source, local-first agent workbench designed specifically for macOS that brings the power of AI directly to your desktop.
This tool allows you to automate multi-step tasks within a folder-scoped workspace, ensuring that your data stays local while leveraging advanced Large Language Models (LLMs). Whether you are generating complex spreadsheets, organizing your downloads, or automating web research, CoWork-OSS provides a secure, transparent, and powerful environment to get the job done.
This guide will explore the architecture, features, safety protocols, and practical usage of CoWork-OSS, demonstrating why it is becoming an essential tool for developers, data analysts, and privacy-conscious professionals.
What is CoWork-OSS?
CoWork-OSS is an independent, open-source project that implements a local, folder-scoped agent workflow pattern. It is currently available as a macOS desktop app built on Electron, with cross-platform support planned for the future. The core philosophy of CoWork-OSS is “Local-first,” meaning your tasks, events, and artifacts are stored locally in a SQLite database. It operates on a BYOK (Bring Your Own Key) model, allowing you to use your own Anthropic API or AWS Bedrock credentials, or to run completely offline using local models via Ollama.
Unlike many cloud-based agents that act as black boxes, CoWork-OSS features a terminal-inspired user interface (CLI-style) that uses monospace fonts and status indicators. This design choice prioritizes readability and efficiency for technical users who prefer direct control over flashy graphical interfaces.

Figure 1: The Terminal-inspired UI of CoWork-OSS emphasizes transparency and efficiency.
Core Philosophy
- Local-First State: By default, no telemetry is sent. Task metadata, timeline events, and workspace configurations reside on your machine.
- Folder-Scoped Security: File operations are strictly constrained to the workspace folder you select, using path traversal protection to prevent unauthorized access to system files (see the sketch after this list).
- Permissioned Execution: The agent cannot perform destructive actions like deletion or bulk renaming without explicit user approval.
- Transparent Runtime: Every step, tool call, and decision is visible in a real-time timeline, allowing you to audit the agent’s logic.
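As a rough illustration of the folder-scoping idea, a path check along these lines keeps every resolved path inside the selected workspace. This is a minimal sketch in TypeScript, not the project’s actual implementation, and the function name is hypothetical:
import * as path from "path";

// Hypothetical helper: resolve a requested path and refuse anything that
// escapes the selected workspace root (for example via "../" segments).
function resolveInsideWorkspace(workspaceRoot: string, requested: string): string {
  const root = path.resolve(workspaceRoot);
  const resolved = path.resolve(root, requested);
  if (resolved !== root && !resolved.startsWith(root + path.sep)) {
    throw new Error(`Path escapes workspace: ${requested}`);
  }
  return resolved;
}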
⚠️ Safety & Data Loss Warning
Before diving into installation, it is crucial to understand the safety implications of running an AI agent with file system access.
CoWork-OSS can modify, move, overwrite, or delete files as part of its normal operation. Furthermore, bugs, misuse, or unexpected AI behavior can lead to unintended consequences. The maintainers explicitly state that they are not responsible for any data loss resulting from the use of this software.
To mitigate risks, follow these strict guidelines:
- Use a Separate Environment: Whenever possible, run CoWork-OSS on a dedicated Mac, a separate user account, or within a virtual machine to isolate it from your primary data.
- Work in Non-Critical Folders: If isolation is not possible, only point CoWork-OSS at folders you can afford to lose. Never direct the agent toward important personal files, system folders, or production data.
- Enable and Verify Backups: Ensure Time Machine or another backup solution is active and has been verified. Test your ability to restore files before relying on them during agent operations.
- Review Approvals Carefully: The agent will ask for permission before destructive operations. Read exactly what the agent intends to do before clicking “Approve.”
- Expect the Unexpected: AI systems are probabilistic and can behave unpredictably. Treat every workspace as potentially at risk.
Architecture and Technology Stack
Understanding how CoWork-OSS is built helps in appreciating its reliability and extensibility. The application is structured using a modern web technology stack, orchestrated by Electron.
The architecture is divided into four distinct layers:
- React UI (Renderer): The frontend layer responsible for the Task List, Task Timeline, Approval Dialogs, and Workspace Selector. It communicates with the backend via Inter-Process Communication (IPC).
- Agent Daemon (Main Process): A Node.js layer that handles task orchestration. It manages the Agent Executor, which runs the plan-execute-observe loops, and oversees the Tool Registry and Permission Manager.
- Execution Layer: Where the actual work happens. It handles File Operations, Document Creation Skills, LLM Providers (connecting to Anthropic, Bedrock, or Ollama), and Search Providers.
- SQLite Local Database: A persistent storage layer embedded in the application that holds tasks, events, artifacts, and workspace configurations.
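To make the layering concrete, here is a hypothetical sketch of how the renderer might hand a task to the main process over Electron IPC. The channel name, payload shape, and createTask helper are assumptions for illustration, not the project’s actual API:
import { ipcMain } from "electron";

// Hypothetical orchestrator entry point; the real daemon would enqueue the
// task, run the plan-execute-observe loop, and persist events to SQLite.
async function createTask(workspaceId: string, prompt: string): Promise<string> {
  console.log(`Planning task for workspace ${workspaceId}: ${prompt}`);
  return `task-${Date.now()}`;
}

// Main process: receive a task request from the React UI over IPC.
ipcMain.handle("task:create", async (_event, payload: { workspaceId: string; prompt: string }) => {
  const taskId = await createTask(payload.workspaceId, payload.prompt);
  return { taskId };
});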
The tech stack includes:
- Frontend: React 19, TypeScript, Vite.
- Backend: Electron 40, Node.js.
- Database: better-sqlite3.
- Build Tool: electron-builder.
Key Features and Capabilities
CoWork-OSS is more than just a chat interface; it is a comprehensive workbench with specific skills designed to handle office automation.
1. Skill System: Document Generation
One of the standout features is the built-in skill system for creating professional outputs. The agent isn’t just generating text; it is generating files.
- Excel Spreadsheets (.xlsx): The agent can generate complex spreadsheets with multiple sheets, auto-fit columns, formatting, and filters. This is ideal for summarizing data or creating reports.
- Word Documents (.docx): It can structure documents with headings, paragraphs, lists, tables, and even code blocks.
- PDF Documents: Generate professionally formatted PDFs with custom fonts for official reports.
- PowerPoint Presentations (.pptx): Create presentations with multiple layouts, themes, and speaker notes automatically.
- Folder Organization: Automatically organize directories by file type, date, or custom rules, cleaning up messy folders like “Downloads.”
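To make the spreadsheet skill tangible, here is a short sketch using the exceljs npm package. CoWork-OSS may use a different library internally, so treat this purely as an illustration of the kind of file the skill produces:
import ExcelJS from "exceljs";

// Illustrative only: write a small summary sheet with a header row,
// fixed column widths, and one data row per entry.
async function writeSummarySheet(outPath: string, rows: Array<{ name: string; count: number }>) {
  const workbook = new ExcelJS.Workbook();
  const sheet = workbook.addWorksheet("Summary");
  sheet.columns = [
    { header: "Name", key: "name", width: 30 },
    { header: "Count", key: "count", width: 12 },
  ];
  rows.forEach((row) => sheet.addRow(row));
  await workbook.xlsx.writeFile(outPath);
}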
2. Browser Automation
By integrating Playwright, CoWork-OSS gains full web browser control capabilities. This transforms the agent from a passive text processor to an active web navigator. It can:
- Navigate to URLs, take screenshots, and save pages as PDFs.
- Interact with web pages: click buttons, fill forms, type text, and press keys.
- Extract data: Pull out page content, links, and form data.
- Dynamic interaction: Scroll pages, wait for elements to load, and execute JavaScript.
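The underlying Playwright calls are standard. A minimal sketch of the kind of navigation the agent performs (the URL and output file names are placeholders) looks like this:
import { chromium } from "playwright";

// Minimal sketch: open a page, take a full-page screenshot, and export a PDF.
async function capturePage(url: string) {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);
  await page.screenshot({ path: "page.png", fullPage: true });
  await page.pdf({ path: "page.pdf" }); // PDF export works in headless Chromium
  await browser.close();
}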
3. Remote Control via Telegram and Discord
CoWork-OSS bridges the gap between desktop automation and remote communication platforms.
- Telegram Bot: You can run tasks remotely via Telegram. The bot supports streaming responses, workspace selection, and task management commands like /status and /cancel.
- Discord Bot: Integration with Discord allows for slash commands and direct messages. You can mention the bot in a server or DM it to trigger tasks.
Both integrations support security modes such as “Pairing” (default), “Allowlist,” and “Open,” though “Open” is not recommended for security reasons.
4. Web Search Integration
To ensure the agent has access to current information, CoWork-OSS includes a multi-provider web search system. It supports:
- Tavily: AI-optimized search results.
- Brave Search: Privacy-focused search.
- SerpAPI: Comprehensive Google results via API.
- Google Custom Search: Direct Google integration.
The system features a fallback mechanism; if the primary provider fails, the backup provider is automatically used.
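Conceptually, the fallback reduces to a try/catch around the primary provider. The interface below is a hypothetical sketch, not the project’s real types:
// Hypothetical provider interface and fallback wrapper.
interface SearchProvider {
  name: string;
  search(query: string): Promise<string[]>;
}

async function searchWithFallback(query: string, primary: SearchProvider, fallback?: SearchProvider): Promise<string[]> {
  try {
    return await primary.search(query);
  } catch (err) {
    if (!fallback) throw err;
    console.warn(`${primary.name} failed, falling back to ${fallback.name}`);
    return fallback.search(query);
  }
}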
Providers and Costs (BYOK)
CoWork-OSS is free and open-source, licensed under MIT. However, running tasks requires model inference, and that cost depends on the provider you choose.
Using Anthropic API or AWS Bedrock
These cloud options provide high-speed reasoning and advanced capabilities. You must configure your own API keys. Usage is billed directly by the provider; CoWork-OSS acts purely as a client interface and does not proxy or resell access.
Using Ollama (Local LLMs)
For users who prioritize privacy or wish to avoid API costs, Ollama is the recommended solution. This allows the agent to run completely offline.
Recommended Models for Ollama:
- llama3.2: Fast and efficient, suitable for general tasks.
- qwen2.5:14b: Offers a balance of performance and reasoning capabilities.
- deepseek-r1:14b: Strong capabilities for coding and technical tasks.
Note: While local models offer total privacy, they generally run slower than cloud APIs and require significant RAM (e.g., 14B models need ~16GB RAM).
Installation and Setup
This section details the step-by-step process to get CoWork-OSS running on your macOS machine.
Prerequisites
- Node.js 18+ and npm: The runtime environment for the application.
- macOS: Required for the Electron native features (the app is currently macOS-only).
- Model Access: An Anthropic API key, AWS Bedrock access, or a local installation of Ollama.
Step 1: Clone the Repository
Open your terminal and clone the source code from GitHub:
git clone https://github.com/mesutfelat/cowork-oss.git
cd cowork-oss
Step 2: Install Dependencies
Use npm to install the necessary JavaScript packages:
npm install
Step 3: Configure API Credentials
You need to tell the application how to connect to the AI model. Copy the example environment file and add your specific key.
cp .env.example .env
Edit the .env file in a text editor and add your ANTHROPIC_API_KEY. If you are using Ollama, you may skip this step or leave it blank, configuring the provider later in the app settings.
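For reference, a minimal .env for the Anthropic provider needs only the single line below; the value is a placeholder, and you should only set variables for the provider you actually use:
ANTHROPIC_API_KEY=sk-ant-your-key-here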
Step 4: Run in Development Mode
To launch the application for testing or development:
npm run dev
Step 5: Building for Production
If you want a standalone application file:
npm run build
npm run package
The packaged app will be generated in the release/ directory.
How to Use CoWork-OSS
Once the application is running, the workflow is designed to be intuitive yet secure.

Figure 2: The Welcome Screen features AI disclosure and quick commands.
1. Select a Workspace
On first launch, you will be prompted to select a folder. This folder becomes your “Workspace.” All file operations performed by the agent will be restricted to this directory. It is mounted for read/write access and protected by permission boundaries.
2. Create a Task
Click the “New Task” button. Here, you describe your objective in natural language. Because the agent uses a “Plan-Execute-Observe” loop, you can give complex instructions.
Example Tasks:
- “Organize my Downloads folder by file type and move images into an ‘Images’ subfolder.”
- “Create a quarterly report spreadsheet with Q1-Q4 sales data based on these CSV files.”
- “Generate a presentation about our product roadmap using the data in the project folder.”
- “Analyze these log files and create a summary document listing the top 10 errors.”
3. Monitor Execution
As the agent works, you will see a real-time timeline. This is the “Transparent Runtime” in action. You can watch as it:
- Creates an execution plan: Breaking your request into steps.
- Executes steps: Using tools like file readers or web browsers.
- Requests approvals: If it needs to delete a file or run a shell command, it pauses.
- Produces artifacts: Shows you the files it has created or modified.

Figure 3: Real-time task execution showing the plan steps and tool calls.
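In pseudocode terms, the loop driving this timeline looks roughly like the sketch below; the Step type and function names are illustrative, not the project’s actual internals:
// Illustrative plan-execute-observe loop.
interface Step { description: string; tool: string; args: Record<string, unknown>; }

async function runTask(
  goal: string,
  plan: (goal: string) => Promise<Step[]>,
  execute: (step: Step) => Promise<string>,
) {
  const steps = await plan(goal);                  // break the request into steps
  for (const step of steps) {
    const observation = await execute(step);       // run one tool call (may pause for approval)
    console.log(`[${step.tool}] ${observation}`);  // observe and record to the timeline
  }
}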
4. Approve Requests
When a destructive action is required, a dialog will appear. For example, if the agent decides to delete 50 temporary files to clean up the folder, it must ask you first. Review the details carefully and choose to Approve or Deny.

Figure 4: Task completion screen with verification and file tracking.
Configuring Ollama for Local Privacy
For those who want to ensure zero data leaves their machine, setting up Ollama is the best path.
- Install Ollama: Download it from the official website or use Homebrew (brew install ollama).
- Pull a Model: Open your terminal and pull a model, for example: ollama pull llama3.2
- Start the Server: Run ollama serve. The server typically runs at http://localhost:11434.
- Configure in App:
  - Open CoWork-OSS Settings (gear icon).
  - Select Ollama (Local) as your provider.
  - Click Refresh Models.
  - Select your model (e.g., llama3.2).
  - Test the connection and save.
You can also set environment variables like OLLAMA_BASE_URL if you need to point to a remote server running Ollama.
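If you want to confirm the server is reachable independently of the app, a quick call against Ollama’s standard /api/generate endpoint works. This snippet assumes Node.js 18+ for the built-in fetch:
// Quick connectivity check against a local Ollama server.
async function testOllama(baseUrl = "http://localhost:11434") {
  const res = await fetch(`${baseUrl}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama3.2", prompt: "Say hello in one word.", stream: false }),
  });
  const data = await res.json();
  console.log(data.response); // the model's completion text
}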
Configuring Telegram Bot for Remote Access
Running tasks from your phone is a powerful feature. Here is how to set it up:
- Create Bot: In Telegram, search for @BotFather. Send /newbot, follow the prompts, and copy the bot token.
- Configure App: In CoWork-OSS, go to Settings > Channels. Paste the token and click Add Telegram Channel. Test the connection.
- Pairing: For security, use the “Pairing” mode. Click Generate Pairing Code in the app, then send this code to your bot in Telegram to authenticate.
- Usage: Send /workspaces to see folders, /workspace 1 to select one, and then type your task. The bot will stream the results back to you.
Common Bot Commands:
- /workspaces: List available folders.
- /status: Check current session status.
- /cancel: Stop a running task.
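For context, any Telegram integration ultimately calls the Telegram Bot API over HTTPS. The snippet below illustrates a single reply via the standard sendMessage method; it is not the channel code CoWork-OSS actually ships:
// Illustration only: send one message through the Telegram Bot API.
async function sendTelegramReply(botToken: string, chatId: number, text: string) {
  await fetch(`https://api.telegram.org/bot${botToken}/sendMessage`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ chat_id: chatId, text }),
  });
}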
Security Model and Permissions
The security architecture of CoWork-OSS is designed to give the user control. The permissions are defined by a WorkspacePermissions interface which includes:
- Read: Ability to read files.
- Write: Ability to create or modify files.
- Delete: Ability to delete files (always requires approval).
- Network: Network access (future feature).
- Shell: Execute shell commands (always requires approval).
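A TypeScript shape consistent with the list above might look like this; the exact field names are assumptions, since only the permission categories are documented here:
// Assumed shape, derived from the permission categories listed above.
interface WorkspacePermissions {
  read: boolean;    // read files inside the workspace
  write: boolean;   // create or modify files
  delete: boolean;  // delete files (always requires approval)
  network: boolean; // network access (planned)
  shell: boolean;   // shell commands (always require approval)
}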
Approval Requirements:
Besides deletion and shell commands, the following actions also trigger approval gates:
- Bulk renaming (affecting more than 10 files).
- Network access beyond a predefined allowlist.
- External service calls.
This ensures that even if the AI hallucinates a harmful command, it cannot execute it without your intervention.
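A gate like this boils down to a simple predicate evaluated before each tool call. The following is a minimal, hypothetical sketch of that check, not the project’s actual code:
// Hypothetical approval gate: decide whether a tool call must pause for the user.
type ActionKind = "read" | "write" | "rename" | "delete" | "shell" | "network";

function requiresApproval(kind: ActionKind, fileCount = 1, onAllowlist = false): boolean {
  if (kind === "delete" || kind === "shell") return true;  // always gated
  if (kind === "rename" && fileCount > 10) return true;    // bulk rename threshold
  if (kind === "network" && !onAllowlist) return true;     // egress outside the allowlist
  return false;
}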
Web Search Configuration
To enable web searching capabilities, you must configure at least one provider in your .env file.
Example Configuration:
TAVILY_API_KEY=tvly-your-api-key-here
BRAVE_API_KEY=BSA-your-api-key-here
Once configured, go to Settings > Web Search. Select your Primary Provider. You can also choose a Fallback Provider to ensure that if your primary search service goes down or hits a rate limit, the agent can still retrieve information from the web.
Troubleshooting Common Issues
Even experts run into issues. Here are solutions to common problems:
- “ANTHROPIC_API_KEY not found”: Ensure you have created the .env file in the root directory and that the variable name is spelled correctly. You may need to export it in your shell: export ANTHROPIC_API_KEY=your_key_here
- Electron won’t start: Sometimes cached files cause issues. Clear the cache and rebuild by running rm -rf node_modules dist, then npm install and npm run dev.
- Database locked: If the app crashed previously, the SQLite database might be locked. Close all instances of the app and delete the journal file: rm ~/Library/Application\ Support/cowork-oss/cowork-oss.db-journal
Compliance and Trademark Notice
CoWork-OSS requires users to comply with the terms of service of their chosen model providers (Anthropic, AWS, etc.). For consumer-facing use, Anthropic’s policy requires disclosing that users are interacting with AI. CoWork-OSS handles this by showing an explicit “AI system” disclosure at the beginning of every session.
It is important to note that “Cowork” is an Anthropic product name. CoWork-OSS is an independent project and is not affiliated with, endorsed by, or sponsored by Anthropic. The developers have committed to updating the branding if requested by the rights holder to avoid confusion.
Roadmap
The project is actively developed. Future plans include:
- VM Sandbox: Using the macOS Virtualization.framework for stronger isolation.
- MCP Connector: Support for the Model Context Protocol.
- Sub-agent Coordination: Enabling parallel task execution.
- Network Egress Controls: Better proxy and network management.
Conclusion
CoWork-OSS represents a significant step forward for user-controlled AI automation. By combining the raw power of LLMs with a local-first architecture, strict folder-scoping, and robust permission controls, it solves the “black box” problem inherent in many cloud agents. Whether you are a developer looking to automate log analysis or a manager needing to generate reports from local data, CoWork-OSS provides the tools to do so safely and efficiently.
By following the installation and safety guidelines outlined above, you can leverage this powerful agent workbench to streamline your workflow while maintaining complete ownership of your data.