Exploring WriteHERE: An Open-Source Framework for Adaptive Long-Form Writing

Have you ever wondered how AI can mimic the way humans approach writing long pieces, like novels or detailed reports? Traditional AI tools often stick to rigid plans, creating outlines first and then filling them in without much flexibility. But what if the tool could adjust on the fly, just like a real writer who changes direction mid-sentence? That’s where WriteHERE comes in. This open-source framework uses recursive planning to make AI writing more adaptive and human-like. If you’re into AI, writing, or just curious about how technology can enhance creativity, stick around as I break it down step by step.

WriteHERE stands out because it integrates retrieval, reasoning, and composition in a dynamic way. It’s built on the idea that writing isn’t linear—it’s a process of breaking down tasks, gathering info, thinking through ideas, and putting words on the page, all while adapting to new insights. The framework has been evaluated on fiction and technical reports, showing better results than other methods. Plus, it’s fully open-source under the MIT license, so you can tweak it as needed.

Let’s start with the basics. What makes WriteHERE different from other AI writing assistants?

Understanding the Core of WriteHERE

WriteHERE is designed to handle long-form writing by simulating human adaptive planning. Instead of following a fixed workflow, it recursively decomposes tasks into smaller ones and executes them in an interleaved manner. This means the AI can plan, act, and replan as it goes, much like how you might outline a chapter, research a fact, revise your thoughts, and write—all without being locked into an initial structure.

The framework draws from hierarchical task network (HTN) planning, where complex goals are achieved through primitive tasks. In writing terms, these primitives fall into three categories:

  1. Retrieval: Gathering information from external sources, like searching for facts or references.
  2. Reasoning: Planning content, organizing ideas, or ensuring consistency.
  3. Composition: Actually generating the text, such as drafting paragraphs.
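To make the three primitive types concrete, here is a minimal Python sketch of how a task tree built from them might look. The class and field names are illustrative only, not WriteHERE’s actual data structures:

```python
from dataclasses import dataclass, field
from enum import Enum


class TaskType(Enum):
    RETRIEVAL = "retrieval"      # gather information from external sources
    REASONING = "reasoning"      # plan content, organize, check consistency
    COMPOSITION = "composition"  # generate the actual text


@dataclass
class WritingTask:
    goal: str
    task_type: TaskType
    subtasks: list = field(default_factory=list)

    def is_primitive(self) -> bool:
        # A task with no subtasks is executed directly;
        # otherwise it is decomposed further.
        return not self.subtasks


# A composition goal decomposed into all three primitive types.
chapter = WritingTask("Draft chapter 1", TaskType.COMPOSITION, subtasks=[
    WritingTask("Find period-accurate details", TaskType.RETRIEVAL),
    WritingTask("Outline the scene beats", TaskType.REASONING),
    WritingTask("Write the opening scene", TaskType.COMPOSITION),
])
print(chapter.is_primitive())         # False: it decomposes further
print(chapter.subtasks[0].task_type)  # TaskType.RETRIEVAL
```

The point of the sketch is that every node, whatever its type, supports the same decompose-or-execute decision, which is what lets the three types interleave freely.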

This heterogeneous approach allows for seamless integration. For example, while composing a technical report, the AI might pause to retrieve data, reason about its relevance, and then incorporate it. Evaluations show this leads to more coherent, higher-quality output than state-of-the-art tools.

The project’s philosophy emphasizes openness: all code is available on GitHub at principia-ai/WriteHERE, and it’s geared toward research and education. No commercial strings attached—just pure innovation driven by the community.

WriteHERE Architecture Overview

This overview image from the project illustrates how tasks form a directed acyclic graph (DAG), with nodes representing different task types and edges showing dependencies.

Recent Developments in WriteHERE

As of November 2025, WriteHERE has gained traction in the AI community. The associated paper was accepted for an oral presentation at EMNLP 2025 in Suzhou, China. This is a big deal because EMNLP is a top conference for natural language processing, and an oral slot means the work is considered particularly impactful. If you’re attending, it’s a chance to see the creators discuss it live.

The paper, titled “Beyond Outlining: Heterogeneous Recursive Planning for Adaptive Long-form Writing with Language Models,” is available on arXiv (2503.08275). It details how WriteHERE outperforms baselines in metrics like coherence, factuality, and adaptability.

Why Choose WriteHERE for Your Writing Needs?

If you’re asking, “Is WriteHERE right for me?” consider this: traditional tools like simple prompt-based generators or outline-first systems often fall short for complex writing. They can’t easily handle mid-process changes, such as incorporating new research or shifting plot directions in a story. WriteHERE’s dynamic adaptation fixes that.

Key benefits include:

  • Flexibility: Adjusts plans in real-time based on context.
  • Integration: Combines different task types without rigid boundaries.
  • Transparency: You can visualize the entire process, seeing how tasks decompose.
  • Versatility: Works for creative fiction and factual reports alike.

For instance, in fiction writing, it can recursively plan plot points, retrieve inspirational elements, and compose scenes. In reports, it searches for data, reasons through arguments, and structures the document.

The open-source nature means you can customize prompts, models, or even the workflow. If you’re a developer, this is gold for building custom AI writing apps.

Diving Deeper into How WriteHERE Works

To really get WriteHERE, let’s explore its mechanics. The framework views writing as a planning problem. The top-level goal is a composition task with an initial workspace state and memory. It decomposes this goal into subtasks across the three types, recursively, until every remaining subtask is a directly executable primitive.

This is inspired by cognitive models of agents and HTN planning. The interleaving of planning and execution is key—execute primitive tasks immediately, update states, and proceed. This avoids the pitfalls of pre-fixed outlines.

The state-based hierarchical task scheduling algorithm manages everything. Tasks live in a DAG, each in one of three states: Active (ready to run), Suspended (waiting on dependencies), or Silent (finished). At each step, the scheduler executes the Active task closest to the root, found by breadth-first search.
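The scheduling rule can be sketched in a few lines of Python. The `Node` class and `next_task` function are hypothetical, written only to illustrate the "closest Active task to the root" selection, not WriteHERE’s actual API:

```python
from collections import deque

# Task states used in the sketch.
ACTIVE, SUSPENDED, SILENT = "active", "suspended", "silent"


class Node:
    def __init__(self, name, state=SUSPENDED):
        self.name = name
        self.state = state
        self.children = []


def next_task(root):
    """Return the Active node closest to the root (breadth-first order)."""
    queue = deque([root])
    while queue:
        node = queue.popleft()
        if node.state == ACTIVE:
            return node
        queue.extend(node.children)
    return None  # nothing runnable: either done or everything is waiting


# A tiny task DAG: the shallow "intro" task wins over the deeper one.
root = Node("write report")
intro, body = Node("intro", ACTIVE), Node("body")
body.children = [Node("gather sources", ACTIVE)]
root.children = [intro, body]

task = next_task(root)
print(task.name)  # "intro": shallower than "gather sources"
```

In the real framework, finishing a task updates states (a Suspended parent can become Active once its dependencies turn Silent), so the same loop naturally interleaves planning and execution.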

Memory management uses the task graph and workspace, with context control pulling relevant info for each task. This keeps things efficient, avoiding overload from full history.
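The context-control idea above can be illustrated with a short sketch: each task sees only the results it depends on, trimmed to a bound, rather than the full generation history. The record layout and function name here are hypothetical, not WriteHERE’s actual memory API:

```python
def build_context(task, memory, max_items=5):
    """Pull a bounded slice of memory relevant to one task."""
    relevant = [m for m in memory if m["task_id"] in task["depends_on"]]
    # Keep only the most recent few entries to bound prompt size.
    return relevant[-max_items:]


memory = [
    {"task_id": "retrieve-1", "result": "GDP figures for 2024"},
    {"task_id": "reason-1", "result": "Outline: intro, trends, outlook"},
    {"task_id": "retrieve-2", "result": "Unrelated folklore notes"},
]
compose = {"task_id": "compose-1", "depends_on": {"retrieve-1", "reason-1"}}

ctx = build_context(compose, memory)
print([m["task_id"] for m in ctx])  # ['retrieve-1', 'reason-1']
```

The unrelated retrieval result never reaches the composition prompt, which is exactly how the framework avoids overload from full history.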

In practice, this means better handling of long-form challenges like maintaining consistency over thousands of words or integrating diverse sources.

Illustration of the WriteHERE framework

Here, you see the DAG representation, with the scheduling algorithm managing adaptive planning.

The abstract flow of tasks

This shows information flow: retrieval updates memory, reasoning transforms it, composition alters the workspace.

Setting Up WriteHERE: A Step-by-Step Guide

Ready to try it? Installation is straightforward, but let’s walk through it carefully. Whether you’re a beginner or experienced coder, these steps will get you running.

Prerequisites

You’ll need:

  • Python 3.6 or later.
  • Node.js 14 or higher for the frontend.
  • API keys from OpenAI (for GPT models), Anthropic (for Claude), and SerpAPI (for searches in reports).

Store keys in an api_key.env file for security.
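An api_key.env file holds one key per line. The authoritative variable names are in recursive/api_key.env.example in the repo; the names below follow common conventions and are shown only as an illustration:

```
OPENAI_API_KEY=your-openai-key-here
ANTHROPIC_API_KEY=your-anthropic-key-here
SERPAPI_API_KEY=your-serpapi-key-here
```

Keep this file out of version control so your keys never end up in a public repo.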

Quick Start Without Visualization

For simple use or batch jobs:

  1. Create and activate a virtual environment:

    python -m venv venv
    source venv/bin/activate
    
  2. Install dependencies:

    pip install -v -e .
    
  3. Copy and edit the env file:

    cp recursive/api_key.env.example recursive/api_key.env
    nano recursive/api_key.env
    
  4. Run the engine:

    cd recursive
    python engine.py --filename <input> --output-filename <output> --done-flag-file <done> --model <model> --mode <story|report>
    

Example for a story:

    python engine.py --filename ../test_data/meta_fiction.jsonl --output-filename ./project/story/output.jsonl --done-flag-file ./project/story/done.txt --model gpt-4o --mode story

For a report:

    python engine.py --filename ../test_data/qa_test.jsonl --output-filename ./project/qa/result.jsonl --done-flag-file ./project/qa/done.txt --model claude-3-sonnet --mode report

Running with the Visualization Interface

For real-time monitoring:

  1. Run the setup script (one-time):

    ./setup_env.sh
    
  2. Start the app:

    ./start.sh
    

This launches the backend on port 5001 and frontend on 3000, opening http://localhost:3000 in your browser.

Customize ports:

    ./start.sh --backend-port 8080 --frontend-port 8000

For Anaconda users with conflicts:

    ./run_with_anaconda.sh

Or with custom ports:

    ./run_with_anaconda.sh --backend-port 8080 --frontend-port 8000

Manual Installation for Advanced Users

If you want control:

Backend:

  1. Virtual env:

    python -m venv venv
    source venv/bin/activate
    
  2. Install:

    pip install -v -e .
    pip install -r backend/requirements.txt
    
  3. Start:

    cd backend
    python server.py --port 8080
    

Frontend:

  1. Install:

    cd frontend
    npm install
    
  2. Start:

    PORT=8000 npm start
    

Trouble? Check TROUBLESHOOTING.md for fixes.

Key Features of WriteHERE

WriteHERE packs features that make it powerful:

  • Recursive task decomposition for handling complexity.
  • Dynamic integration of retrieval, reasoning, and composition.
  • Adaptive workflows that flex with needs.
  • Support for fiction and reports.
  • Intuitive web interface.
  • Real-time visualization of task hierarchy, status, and types.
  • Full customization.

The visualization lets you see the agent’s thinking: task breakdown, current work, statuses (ready, in progress, completed), and types.

Project Structure Explained

Understanding the files helps if you’re contributing or modifying:

  • backend/: Flask server for API.
  • frontend/: React app for UI.
  • recursive/: Core engine.

    • agent/: Agents and prompts.
    • executor/: Task runners.
    • llm/: Model integrations.
    • utils/: Helpers.
    • cache.py: Efficiency boosts.
    • engine.py: Main planning engine.
    • graph.py: DAG handling.
    • memory.py: State management.
    • test_run_report.sh: Report script.
    • test_run_story.sh: Story script.
  • test_data/: Samples.
  • start.sh: Easy launch.

This modular setup makes it easy to extend.

Contributing to WriteHERE

Community is key. How can you help?

Code Contributions:

  1. Fork from main.
  2. Set up dev env per install guide.
  3. Make changes, match style.
  4. Add tests.
  5. Run tests.
  6. PR with clear description.

Bugs/Features:

  • Use Issues for reports or ideas.
  • Bugs: Steps to reproduce, expected vs actual.
  • Features: Describe benefit.

Docs:

  • Fix errors, add examples via PR.

Support:

  • Answer Issues.
  • Share use cases.

Guidelines: Follow style, document new code, clear commits, focused PRs. Contributions under MIT.

Citing the Work

Use this BibTeX for research:

@misc{xiong2025heterogeneousrecursiveplanning,
      title={Beyond Outlining: Heterogeneous Recursive Planning for Adaptive Long-form Writing with Language Models}, 
      author={Ruibin Xiong and Yimeng Chen and Dmitrii Khizbullin and Mingchen Zhuge and Jürgen Schmidhuber},
      year={2025},
      eprint={2503.08275},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2503.08275}
}

License Details

MIT—free for use, modification, distribution in research, education, personal projects.

Insights from the Paper

The paper goes deeper. The abstract frames WriteHERE as enabling adaptive writing through recursive task decomposition and heterogeneous integration, outperforming baseline methods.

The introduction argues that long-form writing is hard because of coherence demands, and that fixed pre-planning limits adaptability; WriteHERE unifies retrieval, reasoning, and composition under a single goal-driven framework.

The related-work section compares against Plan-Write, STORM, and Agent’s Room, highlighting WriteHERE’s greater generality.

The formulation section defines the agent system, the three task types, and the planning problem.

Heterogeneous recursive planning (HRP) decomposes tasks recursively, annotating each subtask with one of the three types.

The framework section details the scheduling algorithm and memory management.

Experiments show gains on Tell me a Story (fiction) and WildSeek (reports). One example story follows Emily’s mountain trip, weaving in her anxiety and a dream sequence.

On risks: hallucination is mitigated by retrieval, though outputs should still be verified, and bias calls for added ethics checks.

Dataset licenses: CC BY-SA 4.0 for WildSeek, CC BY 4.0 for the others.

Real-World Applications and Examples

Let’s see WriteHERE in action. For fiction, you feed in a prompt (such as a meta-fiction setup) and get back a full story whose plot adapts as it unfolds.

For reports: QA input leads to structured document with retrieved facts.

Extend to blogs, essays, or code docs by customizing modes.

Case study: Generating a technical report on AI ethics—retrieves sources, reasons pros/cons, composes sections.

Another: Novel chapter—plans arc, retrieves folklore, writes dialogue.

Potential Challenges and Solutions

Environment conflicts during setup? Use the Anaconda script, ./run_with_anaconda.sh.

Model choice: GPT for creativity, Claude for reasoning.

Scale: For longer texts, monitor API costs.

FAQ Section

What is recursive planning in WriteHERE?

It’s the process of breaking a task into subtasks, recursively, and executing each subtask once it is primitive.

How does WriteHERE handle adaptation?

It interleaves planning and execution, updating its plan based on the outcomes of completed tasks.

Can I use it without coding?

With the web interface, yes: run the start scripts and watch the process unfold in your browser.

What models work best?

The bundled examples use gpt-4o for stories and claude-3-sonnet for reports; experiment to see what suits your task.

Is it free?

Yes, the framework itself is free and open-source; the API calls it makes (OpenAI, Anthropic, SerpAPI) are billed by those providers.

How to debug?

Check TROUBLESHOOTING.md first, then open a GitHub Issue if you’re still stuck.

How-To: Generate a Custom Story

  1. Prepare jsonl input with prompts.
  2. Set env, keys.
  3. Run engine with story mode.
  4. Review output.jsonl.

For reports, swap mode.
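If you’re unsure what the JSONL input should look like, a short Python snippet can generate one. The field names below are hypothetical; the authoritative schema is whatever the samples in test_data/ (such as meta_fiction.jsonl) use, so inspect one of those files and mirror its keys:

```python
import json

# Hypothetical record: mirror the keys used by the files in test_data/.
# This only demonstrates the JSONL shape: one JSON object per line.
record = {
    "id": "my-story-1",
    "prompt": "Write a meta-fictional short story about a cartographer "
              "who discovers her maps edit reality.",
}

with open("my_input.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")

# Each line must parse independently:
with open("my_input.jsonl", encoding="utf-8") as f:
    for line in f:
        assert json.loads(line)["prompt"]
```

Point --filename at the resulting file when you run the engine.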

Expanding Your Use of WriteHERE

Try integrating with other tools, like adding voice mode or API services. Experiment with prompts for genres like sci-fi or academic papers.

Community contributions could add new task types or visualizations.

In summary, WriteHERE revolutionizes AI writing by making it adaptive and human-like. With its open nature, it’s perfect for tinkerers and researchers. Dive in, and see how it transforms your writing process.