Promptomatix: A Powerful LLM Prompt Optimization Framework to Boost Your AI Interactions
Summary
Promptomatix is an AI-driven LLM prompt optimization framework powered by DSPy and advanced optimization techniques. It automatically analyzes tasks, generates tailored data, iteratively refines prompts, supports multiple LLM providers, and offers flexible CLI/API access—reducing manual trial-and-error while enhancing output quality and efficiency.
Getting to Know Promptomatix: Why You Need This Prompt Optimization Framework
Have you ever struggled with large language models (LLMs) where your input doesn’t yield the desired output? Spent hours tweaking prompts with little success? If so, Promptomatix might be the tool you’ve been searching for.
In simple terms, Promptomatix is a framework specifically designed to optimize LLM prompts. Unlike rigid prompt templates, it adapts to your unique task by automatically analyzing requirements, generating relevant data, and iteratively refining prompts—ultimately delivering results that align with your expectations. Whether you’re a researcher exploring LLM capabilities or a developer building production-grade applications, this framework provides a structured solution that takes the guesswork out of prompt engineering.
Promptomatix Core Architecture: How Does It Work?
To fully appreciate Promptomatix’s advantages, let’s break down its “backbone”—the architectural design. The framework operates like a well-oiled assembly line, with distinct components working in harmony to transform your raw requirements into optimized prompts.

1. Input Processing Module: Understanding Your Needs
First, the Input Processing Module “analyzes” your raw request. For example, if you say “Summarize this article,” it identifies the task as “summarization” and extracts key requirements—laying the groundwork for subsequent optimization.
2. Synthetic Data Generation Module: Building Task-Specific Data
Optimization can’t happen without data. This module automatically generates training and testing datasets tailored to your task. For instance, if you’re working on sentiment analysis, it creates text samples labeled with positive/negative sentiments—providing the framework with the data it needs to learn how to refine prompts effectively.
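As a concrete, hand-written illustration of the kind of data this module produces, here is what a tiny synthetic sentiment dataset and train/test split might look like. The samples and the 75/25 split are assumptions for the sketch, not the framework’s actual output:

```python
import random

# Illustrative labeled samples of the kind a synthetic data module might
# produce for sentiment analysis. In practice Promptomatix generates
# these automatically; here they are hand-written for clarity.
samples = [
    {"text": "The battery lasts all day, I love it.", "label": "positive"},
    {"text": "Shipping took three weeks and the box was crushed.", "label": "negative"},
    {"text": "Crisp display and a fair price.", "label": "positive"},
    {"text": "The app crashes every time I open it.", "label": "negative"},
]

# Shuffle and split into train/test sets for optimization and evaluation.
random.seed(42)
random.shuffle(samples)
split = int(len(samples) * 0.75)
train_set, test_set = samples[:split], samples[split:]
print(len(train_set), len(test_set))  # 3 1
```

The training portion drives prompt refinement, while the held-out portion lets the evaluation system score each candidate prompt on unseen data.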
3. Optimization Engine: The Prompt Polisher
This is the framework’s driving force. Leveraging DSPy or meta-prompt backends, it fine-tunes prompts repeatedly—similar to revising a draft countless times based on feedback. Each iteration builds on the previous one, gradually improving prompt performance.
4. Evaluation System: Scoring Prompt Effectiveness
Blind optimization is ineffective. The Evaluation System uses task-specific metrics to assess prompt performance: for summarization, it checks if core information is retained; for classification, it measures accuracy. This ensures every optimization step has a clear purpose.
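To make the idea concrete, here is a minimal sketch of two task-specific scores of the sort such a system might apply. These functions are illustrative stand-ins, not Promptomatix’s actual metric implementations:

```python
# Illustrative task-specific scores (not Promptomatix's real metric code):
# exact-match accuracy for classification, and a crude word-overlap proxy
# for information retention in summarization.

def classification_accuracy(predictions, labels):
    """Fraction of predictions that exactly match the gold labels."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

def retention_score(summary, source):
    """Share of the source's unique words that survive in the summary."""
    source_words = set(source.lower().split())
    summary_words = set(summary.lower().split())
    return len(source_words & summary_words) / len(source_words)

print(classification_accuracy(["pos", "neg", "pos"], ["pos", "neg", "neg"]))
print(round(retention_score("cats sleep a lot", "cats sleep for most of a day"), 2))  # 0.43
```

Real frameworks use stronger metrics (e.g., embedding-based similarity for summaries), but the principle is the same: each task type gets a score the optimizer can climb.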
5. Feedback Integration Module: Incorporating Your Insights
Machine evaluations don’t always capture subjective preferences—like “too formal” or “too verbose.” This module lets you input direct feedback, which is then integrated into the next optimization cycle. For example, if you find a product description too technical, the framework adjusts the prompt to be more accessible.
6. Session Management Module: Tracking Every Step
Think of this as a “digital logbook” that records every detail of the optimization process—including intermediate prompts, evaluation scores, and your feedback. If you pause mid-optimization, you can resume right where you left off. It also makes it easy to review and replicate successful workflows.
Key Features of Promptomatix: What Problems Does It Solve?
Now that we’ve covered the architecture, let’s explore Promptomatix’s standout features—these are the tools that will directly boost your LLM efficiency.
1. Zero-Configuration Intelligence: Start Without Complex Setup
You don’t need to be a technical expert to use Promptomatix. Simply input your request, and the framework automatically:
- Analyzes task type (e.g., translation, classification)
- Selects optimal optimization techniques
- Configures parameters
For example, inputting “Translate this English text to Chinese” triggers automatic recognition of a translation task and deploys the appropriate optimization strategy—no manual configuration required.
2. Automated Dataset Generation: No Need to Source Data Yourself
Many tasks lack pre-existing training data. Promptomatix solves this by generating synthetic datasets tailored to your use case. For medical question-answering tasks, it creates common medical queries and corresponding answers—equipping the framework to optimize prompts for medical domain specifics.
3. Task-Specific Optimization: Tailored Strategies for Every Goal
Not all tasks benefit from the same optimization approach. Promptomatix selects DSPy modules and evaluation metrics based on task type:
- Classification tasks prioritize accuracy metrics
- Summarization tasks focus on conciseness and information retention
- Translation tasks emphasize fluency and fidelity
This targeted approach ensures optimization aligns with your specific objectives.
4. Real-Time Human Feedback: Your Input Drives Improvement
Machine evaluations miss subjective nuances. Promptomatix lets you provide real-time feedback (e.g., “too long,” “tone is off”) that directly shapes the next optimization cycle. This human-in-the-loop approach ensures prompts align with your unique preferences.
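The control flow can be sketched in a few lines. The optimize function below is a hypothetical stand-in for the framework’s real optimization step, included only to show how each round of feedback shapes the next cycle:

```python
# Schematic human-in-the-loop refinement cycle. optimize() is a stand-in
# for the framework's real optimize/evaluate step, not its actual API;
# the point is the control flow: feedback folds into each new round.
def optimize(prompt, feedback=None):
    # Stand-in behavior: fold the feedback into the prompt as a constraint.
    if feedback:
        prompt = f"{prompt} (Constraint: {feedback})"
    return prompt

prompt = "Write a product description."
for feedback in ["too long", "tone is off"]:
    prompt = optimize(prompt, feedback)
print(prompt)
```

Each pass carries the accumulated constraints forward, so later cycles optimize against everything you have said so far rather than starting from scratch.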
5. Comprehensive Session Management: Traceable Workflows
Every optimization cycle is saved as a “session,” storing:
- Optimized prompts
- Model used
- Evaluation scores
- User feedback
You can revisit historical sessions, compare strategies, and build on past successes—eliminating redundant work.
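A session record of this kind can be pictured as a small serializable structure. The field names below are assumptions chosen for illustration, not Promptomatix’s actual schema:

```python
import json
from dataclasses import dataclass, asdict, field

# Hypothetical sketch of a session record; field names are illustrative,
# not Promptomatix's actual on-disk schema.
@dataclass
class OptimizationSession:
    session_id: str
    optimized_prompt: str
    model_name: str
    evaluation_score: float
    user_feedback: list = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self))

session = OptimizationSession(
    session_id="abc123",
    optimized_prompt="Classify the sentiment of the text as positive or negative.",
    model_name="gpt-3.5-turbo",
    evaluation_score=0.92,
    user_feedback=["too verbose"],
)

# Serializing and restoring round-trips the full record.
restored = OptimizationSession(**json.loads(session.to_json()))
print(restored == session)  # True
```

Because every record is self-describing, comparing two sessions or resuming an interrupted run is just a matter of loading the saved structures.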
6. Framework-Agnostic Design: Support for Multiple LLM Providers
Promptomatix works seamlessly with leading LLM providers, including:
- OpenAI (GPT series)
- Anthropic (Claude)
- Cohere
This flexibility means you won’t need to learn a new tool if you switch providers—reducing adoption friction.
7. Flexible Usage Options: CLI and API Access
Choose the interface that best fits your workflow:
- CLI (Command Line Interface): Ideal for quick tasks or batch processing
- REST API: Perfect for integrating prompt optimization into your applications
This versatility makes Promptomatix suitable for both casual users and professional developers.
How to Install Promptomatix: Step-by-Step Guide
Ready to get started? The installation process is straightforward—follow these steps to set up Promptomatix on your system.
Quick Install (Recommended)
1. Clone the repository. Open your terminal and run:

   git clone https://github.com/airesearch-emu/promptomatix.git
   cd promptomatix

   This downloads the framework code to your machine and navigates to the project folder.

2. Run the installation script:

   ./install.sh

   The script automates these critical steps:

   - ✅ Checks for Python 3.8+ installation
   - ✅ Creates a virtual environment (promptomatix_env) to avoid dependency conflicts
   - ✅ Initializes git submodules (including DSPy, the framework’s core dependency)
   - ✅ Installs all required libraries

3. Activate the virtual environment.
Important: You must activate the virtual environment every time you use Promptomatix to ensure dependencies are loaded correctly.
In your terminal, run:
source promptomatix_env/bin/activate
When activated, you’ll see (promptomatix_env) at the start of your terminal prompt:
(promptomatix_env) yourusername@yourcomputer:~$
To deactivate the environment when you’re done:
deactivate
The (promptomatix_env) label will disappear from your prompt.
Pro Tip: Set Up Auto-Activation
Save time by creating an alias to auto-activate the environment. Add this line to your ~/.bashrc or ~/.zshrc file (depending on your shell); note that the path is relative, so either run the alias from the project directory or replace it with the absolute path to promptomatix_env:
alias promptomatix='source promptomatix_env/bin/activate && promptomatix'
After saving, reload your shell configuration (e.g., source ~/.bashrc) or open a new terminal, then type promptomatix to activate the environment.
Configure API Keys
To use LLMs, you’ll need API keys from your chosen provider (e.g., OpenAI, Anthropic). You can set them in two ways:
Option 1: Temporary Setup (Session-Only)
Run these commands in your terminal (keys will reset when you close the terminal):
export OPENAI_API_KEY="your_openai_api_key"
export ANTHROPIC_API_KEY="your_anthropic_api_key"
Option 2: Permanent Setup (Recommended)
1. Copy the example environment file:

   cp .env.example .env

2. Open the .env file with a text editor and replace the placeholder text with your actual API keys.
3. Save the file; your keys will persist across sessions.
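Under the hood, loading a .env file amounts to reading key=value pairs into the process environment. The framework may well use a library such as python-dotenv for this; the sketch below shows the underlying idea with no extra dependencies:

```python
import os

# Minimal sketch of .env loading (the framework itself may use a library
# such as python-dotenv; this shows only the underlying idea).
def load_env_file(path=".env"):
    if not os.path.exists(path):
        return
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # Skip blanks, comments, and malformed lines.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Existing environment variables take precedence.
            os.environ.setdefault(key.strip(), value.strip().strip('"'))

# After loading, API keys are read the usual way:
# load_env_file()
# api_key = os.environ.get("OPENAI_API_KEY")
```

Existing environment variables win over file values here, which matches the common convention that explicit exports override the .env defaults.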
Test Your Installation
Verify that Promptomatix is working correctly by running a test command:
python -m src.promptomatix.main --raw_input "Given a question about human anatomy answer it in two words" --model_name "gpt-3.5-turbo" --backend "simple_meta_prompt" --synthetic_data_size 10 --model_provider "openai"
If the command runs without errors and outputs an optimized prompt or result, your installation is successful!
How to Use Promptomatix: From Basic to Advanced Examples
Now that you’ve installed Promptomatix, let’s explore how to use it. Whether you’re a beginner or an experienced developer, there’s a workflow that fits your needs.
1. Interactive Notebooks: Learn by Doing
If you prefer visual, step-by-step learning, Jupyter notebooks are the best starting point. They guide you through the optimization process with hands-on examples.
Steps to Use Notebooks:
1. Navigate to the examples directory:

   cd examples/notebooks

2. Launch the introductory notebook:

   jupyter notebook 01_basic_usage.ipynb
Notebook Guide (Start with the Basics!):
- 01_basic_usage.ipynb: Introduction to the core prompt optimization workflow; perfect for beginners.
- 02_prompt_optimization.ipynb: Advanced optimization techniques for complex tasks.
- 03_metrics_evaluation.ipynb: How to measure prompt performance with task-specific metrics.
- 04_advanced_features.ipynb: Customization options and advanced framework capabilities.
2. Command Line Interface (CLI): Quick Tasks & Batch Processing
The CLI is ideal for users who prefer terminal-based workflows or need to process multiple tasks efficiently.
Basic Optimization Example
Optimize a prompt with minimal configuration—just provide your raw request:
python -m src.promptomatix.main --raw_input "Classify text sentiment into positive or negative"
The framework automatically analyzes the task (“sentiment classification”) and generates an optimized prompt.
Custom Model & Parameters
Specify the model, temperature, and task type for more control:
python -m src.promptomatix.main --raw_input "Summarize this article" \
--model_name "gpt-4" \
--temperature 0.3 \
--task_type "summarization"
This command uses GPT-4 with a low temperature (for more deterministic outputs) and explicitly defines the task as summarization.
Advanced Configuration
Customize the optimization backend, synthetic data size, and provider:
python -m src.promptomatix.main --raw_input "Given a question about human anatomy answer it in two words" \
--model_name "gpt-3.5-turbo" \
--backend "simple_meta_prompt" \
--synthetic_data_size 10 \
--model_provider "openai"
This example targets a “two-word answer for human anatomy questions” task, using 10 synthetic data points and OpenAI’s GPT-3.5-turbo.
Using Your Own CSV Data
Leverage pre-existing datasets by pointing Promptomatix to your local CSV files:
python -m src.promptomatix.main --raw_input "Classify the given IMDb rating" \
--model_name "gpt-3.5-turbo" \
--backend "simple_meta_prompt" \
--model_provider "openai" \
--load_data_local \
--local_train_data_path "/path/to/your/train_data.csv" \
--local_test_data_path "/path/to/your/test_data.csv" \
--train_data_size 50 \
--valid_data_size 20 \
--input_fields rating \
--output_fields category
Key parameters:
- --load_data_local: Enables local data usage
- --local_train_data_path / --local_test_data_path: Paths to your training/test CSV files
- --train_data_size / --valid_data_size: Number of samples to use for training/validation
- --input_fields / --output_fields: Column names for input (e.g., “rating”) and output (e.g., “category”) data
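For reference, a minimal train_data.csv matching these flags would have one column per input field and one per output field. The rows below are invented examples, and the file name is an assumption for the sketch:

```python
import csv

# Illustrative CSV shape for the flags above: one column per input field
# ("rating") and one per output field ("category"). Rows are invented.
rows = [
    {"rating": "9/10, an instant classic", "category": "positive"},
    {"rating": "2/10, barely watchable", "category": "negative"},
    {"rating": "8/10, great performances", "category": "positive"},
]

with open("train_data.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["rating", "category"])
    writer.writeheader()
    writer.writerows(rows)

# Read it back to confirm the structure round-trips.
with open("train_data.csv", newline="") as fh:
    loaded = list(csv.DictReader(fh))
print(loaded[0]["category"])  # positive
```

The header row is what --input_fields and --output_fields refer to, so the column names in the file must match the values you pass on the command line.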
3. Python API: Integrate into Your Applications
Developers can embed Promptomatix’s optimization capabilities directly into their code using the Python API.
Basic Optimization Example
from promptomatix import process_input

# Optimize a sentiment classification prompt
result = process_input(
    raw_input="Classify text sentiment",
    model_name="gpt-3.5-turbo",
    task_type="classification"
)

# Access the optimized prompt and session details
print("Optimized Prompt:", result['result'])
print("Session ID:", result['session_id'])
Generate Feedback & Iterate
Improve prompts further using human or automated feedback:
from promptomatix import generate_feedback, optimize_with_feedback

# Generate feedback on the optimized prompt
feedback = generate_feedback(
    optimized_prompt=result['result'],
    input_fields=result['input_fields'],
    output_fields=result['output_fields'],
    model_name="gpt-3.5-turbo"
)
print("Generated Feedback:", feedback)

# Optimize the prompt using the feedback
improved_result = optimize_with_feedback(result['session_id'])
print("Improved Prompt:", improved_result['result'])
Using Local CSV Data with the API
from promptomatix import process_input

# Optimize an IMDb rating classification prompt with local data
result = process_input(
    raw_input="Classify the given IMDb rating",
    model_name="gpt-3.5-turbo",
    backend="simple_meta_prompt",
    model_provider="openai",
    load_data_local=True,
    local_train_data_path="/path/to/your/train_data.csv",
    local_test_data_path="/path/to/your/test_data.csv",
    train_data_size=50,
    valid_data_size=20,
    input_fields=["rating"],
    output_fields=["category"]
)
Promptomatix Project Structure: Understand Its Inner Workings
If you’re interested in deepening your knowledge or contributing to the framework, understanding the project structure is essential. The files are organized logically, with each directory serving a specific purpose:
promptomatix/
├── images/              # Project images (logos, architecture diagrams)
├── libs/                # External libraries and submodules (e.g., DSPy)
├── logs/                # Log files for tracking runtime activity
├── promptomatix_env/    # Python virtual environment (dependencies)
├── sessions/            # Saved optimization sessions (for traceability)
├── dist/                # Distribution files (if applicable)
├── build/               # Build artifacts (if applicable)
├── examples/            # Tutorial notebooks and sample scripts
├── src/
│   └── promptomatix/        # Core Python package
│       ├── cli/             # Command Line Interface implementation
│       ├── core/            # Core functionality (optimization engine, evaluation)
│       ├── metrics/         # Task-specific evaluation metrics
│       ├── utils/           # Utility functions (data processing, logging)
│       ├── __init__.py      # Package initialization
│       ├── main.py          # Main application entry point
│       ├── lm_manager.py    # LLM provider integration (API calls)
│       └── logger.py        # Logging configuration
├── .env.example         # Template for API key configuration
├── .gitignore           # Files excluded from Git version control
├── .gitmodules          # Git submodule configuration (e.g., DSPy)
├── .python-version      # Specified Python version requirement
├── CODEOWNERS           # Code ownership for collaborative development
├── CODE_OF_CONDUCT.md   # Community guidelines for contributors
├── CONTRIBUTING.md      # Contribution instructions
├── LICENSE.txt          # Apache License documentation
├── README.md            # Project overview and basic usage
├── SECURITY.md          # Security policies and vulnerability reporting
├── how_to_license.md    # License compliance guidelines
├── install.sh           # One-click installation script
├── requirements.txt     # Dependencies list
└── setup.py             # Package installation configuration
Frequently Asked Questions (FAQ)
1. Which LLM providers does Promptomatix support?
Promptomatix is framework-agnostic and currently supports leading providers including OpenAI, Anthropic, and Cohere. Its flexible design enables future expansion to additional providers.
2. Can I use Promptomatix without programming experience?
Yes! The interactive Jupyter notebooks provide step-by-step guidance, and the CLI supports simple, no-code commands. Beginners can start with basic workflows and gradually explore advanced features.
3. How long does synthetic data generation take?
Generation time depends on the task type and synthetic_data_size parameter. For example, generating 10 samples for a simple classification task typically takes a few seconds (varies based on network speed and LLM response time).
4. Can I use my own datasets with Promptomatix?
Absolutely. Use the --load_data_local flag (CLI) or load_data_local=True (API) to point the framework to your local CSV files. Specify input/output columns and data sizes for seamless integration.
5. Will optimized prompts always be better than manually written ones?
Promptomatix’s data-driven, iterative approach reduces manual trial-and-error, especially for complex tasks or high-stakes applications. However, results may vary based on task complexity, data quality, and LLM capabilities.
6. How can I access previous optimization sessions?
Session data is stored in the sessions/ directory. You can also retrieve historical sessions using the session ID returned by the CLI or API.
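If you want to reload a session programmatically, the pattern is straightforward. Note that the JSON-file-per-session-ID layout assumed below is an illustration, not Promptomatix’s documented on-disk format:

```python
import json
from pathlib import Path

# Hypothetical sketch: if sessions are stored as JSON files named by
# session ID under sessions/, they can be reloaded like this. The
# on-disk layout here is an assumption, not the documented format.
def load_session(session_id, sessions_dir="sessions"):
    path = Path(sessions_dir) / f"{session_id}.json"
    if not path.exists():
        raise FileNotFoundError(f"No session file for id {session_id!r}")
    return json.loads(path.read_text())

# Usage (assuming a previously saved session with id "abc123"):
# session = load_session("abc123")
# print(session.get("result"))
```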
7. What should I do if I get a “Python version too old” error during installation?
Promptomatix requires Python 3.8 or higher. Update your Python installation to the latest compatible version and re-run the install.sh script.
Citing Promptomatix
If Promptomatix contributes to your research or work, please consider citing the associated paper:
@misc{murthy2025promptomatixautomaticpromptoptimization,
title={Promptomatix: An Automatic Prompt Optimization Framework for Large Language Models},
author={Rithesh Murthy and Ming Zhu and Liangwei Yang and Jielin Qiu and Juntao Tan and Shelby Heinecke and Caiming Xiong and Silvio Savarese and Huan Wang},
year={2025},
eprint={2507.14241},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2507.14241},
}
Further Reading
For detailed guidance on effective prompt engineering, refer to Appendix B (Page 17) of the official paper:
Promptomatix: An Automatic Prompt Optimization Framework for Large Language Models
Contact
For questions, suggestions, or contributions, reach out to:
Rithesh Murthy
Email: rithesh.murthy@salesforce.com
Final Thoughts: Unlock LLM Potential with Promptomatix
Whether you’re looking to streamline daily LLM usage or build robust, production-grade applications, Promptomatix offers a comprehensive toolchain for prompt optimization. Its core strength lies in transforming complex prompt engineering into a structured, automated process—freeing you from the frustration of trial-and-error.
By leveraging task-specific optimization, synthetic data generation, and human feedback integration, Promptomatix helps you get the most out of your LLM investments. Whether you’re a researcher, developer, or casual user, this framework empowers you to create prompts that deliver consistent, high-quality results.
Ready to elevate your LLM interactions? Install Promptomatix today and experience the power of automated prompt optimization.

