

Building Your Private AI Workflow in Obsidian: The Complete Guide to ChatGPT MD

Have you ever imagined having a direct conversation with the world’s most powerful language models, right inside your trusted, private note-taking space? Whether it’s accessing the latest GPT-5 from the cloud or running a model completely offline, all traces of your dialogue and thinking remain securely on your own device.

This is no longer a fantasy. The ChatGPT MD plugin for Obsidian is turning this experience into reality. It’s more than just a “chat plugin”; it’s a bridge that deeply integrates cutting-edge AI capabilities into your personal knowledge management system.

Why Should You Pay Attention to ChatGPT MD?

In an age of information overload, our note-taking tools need to evolve. Traditional note-taking software acts as a “warehouse” for information, while tools infused with AI can become a “catalyst” and “collaborator” for thought. The core value of ChatGPT MD lies in three words: Privacy, Seamlessness, and Flexibility.


  • Privacy: By supporting local large language models (like those via Ollama), all your conversations, drafts, and inspirations can be processed entirely offline, eliminating concerns about data privacy or incurring API costs.

  • Seamless Integration: It operates directly within your Obsidian Markdown notes. AI responses become part of the note itself—editable, linkable, and reorganizable just like any other text—truly integrating AI into your thinking workflow.

  • Flexibility: You can freely switch between dozens of cloud models from OpenAI and OpenRouter.ai, and models running on your own computer, choosing the most suitable “brain” for the task at hand.

In a typical session, you converse directly inside a note, and the plugin can pull in context from a web link you provide, such as a travel guide when planning a vacation.

Next, I’ll take you through a comprehensive look at this tool, from quick starts to advanced techniques, helping you build your own efficient, AI-assisted thinking system.

Latest Developments: Welcoming the GPT-5 Era

ChatGPT MD is constantly updated to support the latest models. The newest v2.8.0 version now fully supports OpenAI’s recently released GPT-5 family, including:


  • gpt-5: The flagship model with enhanced reasoning capabilities.

  • gpt-5-mini: Optimized for speed and efficiency, a balanced choice.

  • gpt-5-nano: An ultra-lightweight model for rapid responses.

  • gpt-5-chat-latest: An always-updated chat model.

Beyond new models, this version includes optimizations in token management, message service architecture, and API integration, making conversations more stable and reliable.

The Five-Minute Quick Start Guide

Getting started with ChatGPT MD is remarkably simple, requiring just three steps:

  1. Install the Plugin: Inside Obsidian, go to Settings -> Community Plugins -> Browse, search for “ChatGPT MD,” and click Install. Remember to enable it in your list of Community Plugins.
  2. Configure Your Key or Local Model: Go to the plugin settings. Here, you can choose to enter your OpenAI or OpenRouter.ai API key. Alternatively, if privacy is your priority, you can skip to the next step and set up a local model via Ollama.
  3. Start a Conversation: In any note, press Cmd+P (Mac) or Ctrl+P (Windows/Linux) to open the Command Palette, type “ChatGPT MD: Chat,” and execute it to start a chat based on the content of the current note.

A crucial tip for enhancing your experience: Set a hotkey for the chat command. Go to Settings -> Hotkeys, search for “ChatGPT MD: Chat,” and bind a convenient key combination like Cmd+J. This allows you to instantly summon the AI conversation interface from within any note.
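Once the command runs, the plugin writes the AI's reply straight into the note, separated by role markers. The exact markers below are illustrative and can vary by plugin version and settings, but a chat note typically ends up looking something like this:

```markdown
---
model: gpt-5-mini
---
What are the key ideas behind the Zettelkasten method?

<hr class="__chatgpt_plugin">

role::assistant

Zettelkasten is built around small, atomic notes that are densely linked...

<hr class="__chatgpt_plugin">

role::user
```

Because the whole exchange is plain Markdown, you can edit, link, or delete any part of it like normal note text.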

Diving Deeper: A Detailed Look at the Three Core Usage Modes

The power of ChatGPT MD lies in the multiple pathways it provides to access AI, allowing you to choose based on your needs.

Mode 1: Using Mainstream Cloud Models (OpenAI & OpenRouter.ai)

This is the most direct method, giving you access to top-tier models including GPT-5, Claude, Gemini, DeepSeek, and more.


  • OpenAI: Enter your API key in the plugin settings to use all official OpenAI models.

  • OpenRouter.ai: This is a model aggregation platform. After obtaining and entering your API key here, you can choose from models by Anthropic (Claude), Google (Gemini), Meta (Llama), DeepSeek, and even Perplexity’s online search models—all from a single interface. This is especially useful for tasks requiring up-to-date web information.
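By analogy with the ollama@ prefix the plugin uses for local models (covered later in this guide), OpenRouter models are selected with an openrouter@ prefix followed by the provider/model path. The model identifiers below are examples only; check OpenRouter's model list for current names:

```yaml
---
model: openrouter@anthropic/claude-3.5-sonnet  # provider/model path from OpenRouter
# model: openrouter@google/gemini-pro-1.5      # swap by commenting/uncommenting
---
```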

How Do You Specify a Model for a Single Note?
You can configure this using “frontmatter” at the top of your note, offering tremendous flexibility.

```markdown
---
model: gpt-5  # Specify the latest GPT-5 model
system_commands: ['You are a senior software architect.']
temperature: 0.7  # Controls creativity of responses
max_tokens: 2000  # Increase output length for complex analysis
---
# My note content
Here is my question for the AI...
```

Mode 2: Fully Private Local Models (The Ollama Approach)

If you don’t want any conversation data leaving your computer, or wish to avoid API costs, Ollama is the best choice. It allows you to easily run various open-source large language models locally.

Setup Steps:

  1. Install Ollama: Visit ollama.ai to download and install the version for your operating system.
  2. Pull a Model: Open your terminal and run a command to download a model you’re interested in. For example:

     ```bash
     ollama pull llama3.2        # Meta's powerful, lightweight model
     ollama pull qwen2.5         # Alibaba's excellent Qwen model
     ollama pull deepseek-r1:7b  # DeepSeek's reasoning-specialized model
     ```
  3. Configure the Plugin: In the ChatGPT MD settings, find the “Ollama Defaults” section. The URL should typically remain http://localhost:11434. In the “Default Model” field, enter the model you want to use regularly, formatted as ollama@model-name, e.g., ollama@llama3.2.
  4. Start a Local Conversation: Now, when you use the chat command, it will default to your chosen local model. You can also override this in a single note:

     ```yaml
     ---
     model: ollama@deepseek-r1:7b
     temperature: 0.1  # Lower randomness for reasoning tasks
     ---
     ```

How Do I Know Which Local Models I Have Installed?
Run the command ollama list in your terminal. All downloaded models will be displayed for your reference when configuring.
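Under the hood, a chat request to Ollama is just a JSON POST to its /api/chat endpoint. The sketch below builds such a payload in Python; the function name is illustrative and the plugin's internal code may differ, but the field names follow Ollama's documented API:

```python
import json

def build_ollama_chat_payload(model, messages, temperature=0.7, stream=False):
    """Build the JSON body for a POST to http://localhost:11434/api/chat."""
    return {
        "model": model,            # e.g. "llama3.2" (no "ollama@" prefix on the wire)
        "messages": messages,      # [{"role": "user", "content": "..."}]
        "stream": stream,          # False: return one complete response
        "options": {"temperature": temperature},
    }

payload = build_ollama_chat_payload(
    "llama3.2",
    [{"role": "user", "content": "Summarize my note in two sentences."}],
    temperature=0.1,
)
print(json.dumps(payload, indent=2))
```

Note that the ollama@ prefix exists only on the plugin side to route the request; the model name sent to Ollama itself is the bare name from ollama list.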

Mode 3: Another Local Approach (LM Studio)

LM Studio is another popular tool for running local models, offering a graphical interface to manage and load models.

  1. Install LM Studio: Download it from lmstudio.ai.
  2. Download and Load a Model: In LM Studio’s graphical interface, browse, download your preferred model, and load it into memory.
  3. Start the Local Server: In LM Studio, switch to the “Local Server” tab and click to start the server.
  4. Configure the Plugin: In the ChatGPT MD settings under “LM Studio Defaults,” set the URL to http://localhost:1234 and your default model, formatted like lmstudio@your-model-name.
  5. Usage: Once configured, usage is identical to Ollama.

Unlocking Advanced Features: Using It Like a Pro

Once you’re comfortable with basic conversations, the following features will significantly boost your productivity.

1. System Commands: Setting the AI’s Role

This is key to obtaining high-quality answers. You can use the system_commands parameter to give the AI a clear directive before the conversation even begins.

```yaml
---
system_commands: ['You are a strict academic editor. Check the following text for grammar and logic, and provide revision suggestions.']
model: gpt-5-mini
---
```

2. Link Context: Letting the AI Read Your Other Notes

This is what fundamentally differentiates ChatGPT MD from a standard chatbox. You can reference other notes directly in the conversation using Obsidian’s internal link syntax [[Note Name]] or Markdown links. The AI will use the content of these links as reference context when formulating its response, enabling true “conversations based on your personal knowledge base.”
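For example, a note that asks the AI to work from two other notes in your vault might look like this (the note names are placeholders):

```markdown
Compare the arguments in [[Deep Work - Book Notes]] with my own observations
in [[2024 Focus Experiments]], and list where they disagree.
```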

3. Comment Blocks: Telling the AI What to Ignore

If your note contains private records or temporary drafts you don’t want processed by the AI, you can wrap them in a specific comment block syntax. The AI will automatically ignore these sections.

```markdown
This is normal text the AI will see.

%% This is a comment block
Everything inside here will be completely ignored by the AI during conversation.
It will not be included as part of the context.
%%

The conversation can continue from here.
```
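Conceptually, stripping these blocks before the note reaches the model is a small text transformation. A minimal Python sketch of the behavior described above (this mirrors the described behavior, not the plugin's actual source):

```python
import re

def strip_comment_blocks(note_text: str) -> str:
    """Remove %% ... %% blocks so they never reach the model."""
    # Non-greedy match across lines: each %% ... %% pair is dropped entirely.
    return re.sub(r"%%.*?%%", "", note_text, flags=re.DOTALL)

note = "Visible text.\n%% secret draft\nmore secrets %%\nStill visible."
print(strip_comment_blocks(note))
```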
4. Chat Templates: Streamlining Recurring Workflows

If you frequently engage in certain types of dialogues (e.g., code review, book summaries, weekly report generation), you can create note templates with specific frontmatter and save them to a designated folder. Later, use the ChatGPT MD: New Chat From Template command to quickly create a new conversation based on that template, saving you from repetitive configuration.
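For instance, a reusable code-review template saved in your templates folder might look like this (the wording and parameter values are just examples):

```markdown
---
model: gpt-5
system_commands: ['You are a meticulous code reviewer. Point out bugs, risky patterns, and missing tests.']
temperature: 0.2
---
Please review the following code:
```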

5. Smart Titles and Utility Tools

  • Infer Title: After a few exchanges in a conversation, you can use the Infer Title command to prompt the AI to automatically generate a concise note title based on the dialogue content.

  • Add Divider: Use the Add Divider command to insert a horizontal rule into your note, making the structure of long conversations clearer.

  • Clear Chat: The Clear Chat command empties all message history from the current note while retaining the frontmatter configuration, allowing you to easily start a brand-new conversation.

Configuration Deep Dive: Fine-Grained Control from Global to Local

ChatGPT MD configuration works on two levels: Global Settings and Note-Level Frontmatter Settings. Note-level settings override global ones, giving you maximum flexibility.

Global Default Configuration

The plugin comes with sensible defaults. You can modify these in the plugin settings interface, affecting all notes without specific overrides.

Note-Level Frontmatter Configuration

This is the core configuration method. Typing --- on the first line of a note begins the frontmatter configuration area. Here, you can set all parameters specifically for that note.

A comprehensive frontmatter configuration example:

```yaml
---
# Model & Basic Parameters
model: gpt-5-mini   # Specify the model
system_commands: ['You are a helpful assistant.']
temperature: 0.3    # Creativity (0-2)
top_p: 1
max_tokens: 300     # Maximum length of the reply
presence_penalty: 0.5
frequency_penalty: 0.5

# Service Endpoint Configuration (optional; falls back to global settings)
openaiUrl: https://api.openai.com
# openrouterUrl: https://openrouter.ai
# ollamaUrl: http://localhost:11434
---
```

Important Note: The max_tokens parameter controls the length of the AI’s response. For simple Q&A, 300 might suffice. However, for complex tasks like code generation or long-form analysis, it’s recommended to increase this to 2048 or 4096 to receive complete answers.
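The two-level override can be pictured as a simple dictionary merge, with note-level frontmatter winning over global defaults. This is a conceptual sketch, not the plugin's actual code:

```python
def effective_settings(global_defaults: dict, note_frontmatter: dict) -> dict:
    """Note-level keys override global defaults; everything else falls through."""
    merged = dict(global_defaults)
    merged.update(note_frontmatter)
    return merged

global_defaults = {"model": "gpt-5-mini", "temperature": 0.3, "max_tokens": 300}
note_frontmatter = {"max_tokens": 4096}  # this note needs long-form answers

print(effective_settings(global_defaults, note_frontmatter))
# The note keeps the global model and temperature but gets its own max_tokens.
```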

Privacy, Security, and Your Data

ChatGPT MD is designed with a philosophy of returning control to the user:


  • Data Storage: All conversation history and configuration are saved locally within your Obsidian vault files. The plugin itself does not perform any data collection or tracking.

  • API Calls: When using cloud services (OpenAI/OpenRouter), the plugin communicates directly only with the API endpoints you specify.

  • The Ultimate Privacy Solution: When using Ollama or LM Studio to run local models, all data processing occurs internally on your computer, achieving complete offline privacy and zero API cost.

Frequently Asked Questions (FAQ)

Q: How do I start my first chat?
A: In any note, open the Obsidian Command Palette (Cmd+P / Ctrl+P), type and select the “ChatGPT MD: Chat” command. We strongly recommend setting a hotkey for this command.

Q: Can I use multiple AI providers at once?
A: Absolutely. You can configure multiple API keys in the global settings. For a specific conversation, use the model parameter in the frontmatter to specify which one to use. For example, model: gpt-5 calls OpenAI, while model: ollama@llama3.2 calls your local Ollama model.
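The routing rule in this answer boils down to: if the model value carries a service prefix before @, use that service; otherwise default to OpenAI. A simplified Python illustration of the convention (not the plugin's source):

```python
def resolve_provider(model: str) -> tuple:
    """Split 'ollama@llama3.2' into ('ollama', 'llama3.2'); bare names go to OpenAI."""
    if "@" in model:
        provider, name = model.split("@", 1)
        return (provider, name)
    return ("openai", model)

print(resolve_provider("gpt-5"))            # ('openai', 'gpt-5')
print(resolve_provider("ollama@llama3.2"))  # ('ollama', 'llama3.2')
```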

Q: How do I use a model deployed by my company or a third-party service compatible with the OpenAI API?
A: You’ll need the service’s endpoint URL and API key. In the global settings or note frontmatter, use the openaiUrl parameter to specify your custom endpoint URL, and enter the corresponding key in the API key field. This works for services like Azure OpenAI Service or many other hosting services offering compatible APIs.

Q: What happened to the url parameter in the note frontmatter?
A: Starting from v2.2.0, to more clearly distinguish between different services, the single url parameter was deprecated in favor of service-specific parameters: openaiUrl, openrouterUrl, and ollamaUrl. Please update your old notes and templates accordingly.

Q: How can I try upcoming features before they’re officially released?
A: You can join the Beta testing program using the BRAT plugin for Obsidian. However, please be aware that beta versions can be unstable. Always test on a new, non-essential vault. Never use beta versions directly on your main knowledge base to prevent potential data loss.

Conclusion: Let AI Become an Extension of Your Mind

ChatGPT MD is more than a tool; it represents a new way of working—seamlessly integrating external, powerful intelligence with your internal, private thinking space. Whether you choose the cloud route for ultimate efficiency and access to the latest models, or the local route for privacy control and cost savings, it provides a robust technical implementation.

Its appeal lies in the fact that AI responses are no longer transient text floating on a webpage. Instead, they become directly integrated into your personal knowledge base—editable, linkable, and reusable. Every conversation with AI tangibly enriches and constructs your own digital brain.

Now is the time to open Obsidian, install ChatGPT MD, and begin exploring this next-generation note-taking experience that blends human intuition with machine intelligence. Start with a simple question, and you’ll discover that your approach to knowledge management will be forever changed.


About the Developers: ChatGPT MD was created by Bram in 2023, with Deniz joining in 2024 to co-maintain it. They are dedicated to building tools that enhance personal productivity. If you encounter issues or have ideas for improvement, the project’s GitHub repository welcomes contributions from everyone.
