HanaVerse: Interactive Live2D Anime Character Chat WebUI for Ollama

As local large language model (LLM) applications grow increasingly versatile, enhancing the interactivity and usability of local LLMs has become a key focus for developers and users alike. HanaVerse stands out as a unique tool that combines Ollama’s powerful local LLM capabilities with Live2D anime character interaction, creating a web chat interface that balances functionality and engagement. This article comprehensively breaks down HanaVerse’s features, installation process, usage tips, and configuration details, helping users of all technical backgrounds get started with ease.

I. Core Experience: More Than Just Chat—Immersive Interaction

HanaVerse is fundamentally an “interactive web UI for Ollama,” but it differentiates itself from traditional text-based chat tools by integrating a dynamic Live2D anime character named Hana, alongside professional content display and flexible customization. Below are its standout features, explained with real-world use cases:

1. Interactive Chat Interface: Lightweight Integration with Local Ollama Models

For users who regularly deploy local LLMs via Ollama, HanaVerse eliminates the need for cloud services by enabling direct, user-friendly web-based conversations with local models. Unlike typing commands in a terminal, HanaVerse’s interface mirrors everyday chat applications—complete with an input box, send button, and organized chat history—lowering the barrier to using local LLMs for casual and professional tasks alike.

2. Live2D Animation: Adding Warmth to Interactions

HanaVerse’s most distinctive feature is its animated Live2D character, Hana, who responds to chat interactions with expressive movements and facial expressions. Powered by Live2D’s Cubism SDK, Hana is not a static image but a “virtual conversationalist” that reacts in real time to your messages. Whether you’re sending a query or waiting for a response, Hana’s dynamic feedback transforms cold text exchanges into engaging, immersive experiences—ideal for users seeking to make local LLM interactions more enjoyable.

3. Professional Content Display: Dual Support for Markdown & LaTeX

Many users leverage local LLMs for specialized tasks like code explanations, mathematical derivations, or academic writing. HanaVerse caters to these needs with:

  • Markdown Support: Model responses automatically render Markdown formatting (headings, lists, code blocks, bold/italic text), with syntax highlighting for code snippets via the Prism library—making technical content easier to read.
  • LaTeX Math Rendering: Powered by KaTeX, complex mathematical equations (e.g., calculus formulas, linear algebra expressions) are displayed accurately, solving the problem of unreadable math content in plain text interfaces.

4. Highly Customizable: Adaptable to Diverse Use Cases

HanaVerse doesn’t lock users into a one-size-fits-all experience. Key customization options include:

  • Modify the Ollama server URL (default: http://localhost:11434), enabling connections to Ollama instances deployed on other devices within your local network.
  • Select from any Ollama model installed locally (e.g., llama3:8b, codellama:7b), allowing you to match the model to your specific task.
  • Customize system prompts to guide the model’s output style (e.g., specialized math helper, coding assistant).

5. Responsive Design & Real-Time Streaming Responses

  • Responsive Layout: The interface automatically adjusts to screen sizes, working seamlessly on desktops, smartphones, and tablets—no more distorted layouts or unclickable buttons on mobile devices.
  • Real-Time Streaming: Responses appear word-by-word as the model generates them, rather than waiting for the full output. This matches the user experience of mainstream AI chat tools, reducing waiting anxiety.
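The streaming behavior maps onto Ollama's HTTP API: when a request to /api/chat sets "stream": true, the server answers with one JSON object per line, each carrying a small chunk of the reply and a "done" flag on the final chunk. A minimal Python sketch of how such chunks can be reassembled (the sample lines below are illustrative, not captured from a real server):

```python
import json

def collect_stream(lines):
    """Reassemble a reply from Ollama-style streaming chunks.

    Each line is a JSON object; the text lives in message.content and
    the final chunk carries "done": true.
    """
    reply = []
    for line in lines:
        chunk = json.loads(line)
        reply.append(chunk.get("message", {}).get("content", ""))
        if chunk.get("done"):
            break
    return "".join(reply)

# Illustrative chunks in the shape /api/chat emits when streaming:
sample = [
    '{"message": {"role": "assistant", "content": "Hel"}, "done": false}',
    '{"message": {"role": "assistant", "content": "lo!"}, "done": false}',
    '{"message": {"role": "assistant", "content": ""}, "done": true}',
]
print(collect_stream(sample))  # Hello!
```

A UI that appends each chunk to the chat window as it arrives is what produces the word-by-word effect described above.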

II. HanaVerse Interface & Demo: See the Interaction in Action

1. Interface Screenshots: All-in-One Chat & Settings

HanaVerse’s design balances simplicity and functionality, with core areas including the chat window, input box, Live2D character display, and collapsible settings menu:

[Screenshot: HanaVerse chat interface]

As shown, the Live2D character occupies the left (or top) panel, while the central area displays chat history (with rendered Markdown content) and the bottom features the input box and control buttons. The intuitive layout ensures first-time users can quickly navigate key functions.

2. Online Demo: Experience the Basic Skeleton

To test HanaVerse’s interface and interaction logic without local installation, visit the skeleton demo: https://hanaverse.vercel.app/. Note that this is a UI-only demonstration—you won’t connect to a local Ollama model, but it lets you preview the layout and character animations.

[Screenshot: HanaVerse online demo]

III. Installation Guide: From Prerequisites to Launch

Installing HanaVerse involves setting up a local Flask server and connecting it to Ollama. Below is a step-by-step guide with explanations and troubleshooting tips to avoid common pitfalls.

1. Prerequisites: Ensure Your Environment Meets Requirements

Before starting, verify your system has the following (all are mandatory):

  • Python 3.8 or Higher: HanaVerse’s backend is built with Flask, and Python 3.8 is the minimum compatible version. Versions 3.9–3.11 are recommended for stability.
  • Ollama Installed & Running: HanaVerse relies on Ollama’s local models. Download Ollama from the official website (https://ollama.ai/), install it, and start the service (it runs on port 11434 by default).
  • Git: Required to clone the HanaVerse repository. Install Git from https://git-scm.com/ (compatible with Windows, macOS, and Linux).

2. Step 1: Clone the Repository

First, download the HanaVerse source code to your local machine. Open a terminal (CMD/PowerShell for Windows, Terminal for macOS/Linux) and run:

git clone https://github.com/Ashish-Patnaik/HanaVerse.git
cd HanaVerse

Frequently Asked Questions (FAQ):

  • “Git not found” error: Confirm Git is installed and added to your system’s PATH. Restart the terminal (Windows) or run git --version (macOS/Linux) to verify.
  • Slow cloning speed: Try switching to a Git mirror or download the ZIP package directly from the GitHub repository page.

3. Step 2: Install Python Dependencies

Once in the HanaVerse directory, install the required Python libraries with:

pip install -r requirements.txt

Important Notes:

  • Use a virtual environment (e.g., venv, conda) to avoid conflicts with your system’s Python environment:

    # Create a virtual environment (example: venv)
    python -m venv hanaverse-env
    # Activate the environment (Windows)
    hanaverse-env\Scripts\activate
    # Activate the environment (macOS/Linux)
    source hanaverse-env/bin/activate
    # Install dependencies after activation
    pip install -r requirements.txt
    
  • “Permission denied” error: On Windows, run the terminal as an administrator. On macOS/Linux, prefer activating a virtual environment first (avoid sudo pip, which can pollute the system Python); if you must install globally, use pip install --user -r requirements.txt.

4. Step 3: Start the Flask Backend Server

With dependencies installed, launch the local Flask server:

python server.py

If successful, the terminal will display a message like “Running on http://localhost:5000”—the server is now active on the default port 5000.

Common Issues:

  • Port 5000 is occupied: Modify the port in server.py (find app.run(host='0.0.0.0', port=5000) and change 5000 to an unused port like 5001).
  • “Module not found” error: Ensure dependencies are installed and the virtual environment is activated.
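If you prefer not to hard-code a new port, the same edit can be expressed as a small helper. This is a sketch, not HanaVerse's actual code—the stock server.py hard-codes port=5000, and the HANAVERSE_PORT variable name here is purely illustrative:

```python
import os

def pick_port(default: int = 5000) -> int:
    """Resolve the Flask server port, allowing an override via an
    environment variable instead of editing server.py each time.

    HANAVERSE_PORT is an illustrative name, not part of the project.
    """
    return int(os.environ.get("HANAVERSE_PORT", default))

# server.py's launch line would then read:
#   app.run(host="0.0.0.0", port=pick_port())
```

Setting HANAVERSE_PORT=5001 before launching would then move the server off the occupied port without further code changes.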

5. Step 4: Access the HanaVerse Interface

With the server running, open the index.html file (located in the HanaVerse directory) in your web browser. As long as Ollama is running, you’re ready to start chatting with your local LLM.

IV. Usage Guide: From Basic Chat to Advanced Configuration

Once installed, mastering HanaVerse’s features unlocks its full potential. Below is a breakdown of basic operations and advanced settings:

1. Basic Chat: Send Your First Message

  • After opening the interface, type your query (e.g., “Explain Python decorators”) in the input box at the bottom.
  • Click the “Send” button or press Enter to submit your message. The model will generate a response, and Hana will animate in real time.
  • To stop the response mid-generation (e.g., if the output is irrelevant), click the “Stop” button.

2. Core Settings: Adjust Ollama & Interaction Parameters

Click the hamburger menu (☰) to open the settings panel—your hub for customizing HanaVerse. Key configurations include:

(1) Ollama Server Configuration

The default URL is http://localhost:11434 (Ollama’s standard port). If Ollama is running on another device in your local network (e.g., a computer with IP 192.168.1.100), update the URL to http://192.168.1.100:11434 to connect.

(2) Model Selection

HanaVerse works with any Ollama model installed locally. Below are popular options and their use cases:

| Model Name | Core Use Cases | Key Features |
| --- | --- | --- |
| llama3:8b | General conversation, daily Q&A | Balances performance and resource usage |
| codellama:7b | Coding, debugging, explanations | Optimized for multi-language programming tasks |
| mistral:latest | Efficient Q&A, lightweight tasks | Fast response times, low resource consumption |
| phi3:latest | Lightweight professional tasks | Compact size with strong reasoning capabilities (Microsoft-developed) |

How to check installed models?
Run ollama list in the terminal to see all locally downloaded Ollama models—only these models will appear in HanaVerse’s model selector.
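The same list is available over Ollama's HTTP API: GET /api/tags returns the installed models as JSON, which is presumably how a model selector like HanaVerse's populates itself. A hedged sketch of querying it from Python (parsing split out so it can be exercised without a running server):

```python
import json
import urllib.request

def parse_tags(payload: dict) -> list:
    """Extract model names from the JSON body of Ollama's GET /api/tags."""
    return [m["name"] for m in payload.get("models", [])]

def installed_models(base_url: str = "http://localhost:11434") -> list:
    """Ask a running Ollama server which models are installed locally."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
        return parse_tags(json.load(resp))
```

Calling installed_models() while Ollama is up should return the same names that ollama list prints, e.g. ["llama3:8b", "phi3:latest"].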

(3) Custom System Prompts

System prompts guide the model’s output style and focus. Here are practical examples you can reuse:

  • Math Helper: “Format math using LaTeX. Show step-by-step solutions.”
  • Coding Assistant: “Provide code examples with detailed explanations. Use appropriate syntax highlighting.”
  • Recipe Generator: “Present ingredients as bullet points and steps as numbered lists.”
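Under the hood, a system prompt is just the first message in the request body sent to Ollama's POST /api/chat endpoint, with role "system". A minimal sketch of how such a payload can be built (the function name is illustrative, not HanaVerse's actual code):

```python
def build_chat_payload(model: str, system_prompt: str,
                       user_message: str, stream: bool = True) -> dict:
    """Request body for Ollama's POST /api/chat endpoint.

    The system prompt rides along as the first message with role
    "system"; the user's text follows with role "user".
    """
    return {
        "model": model,
        "stream": stream,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_payload(
    "llama3:8b",
    "Format math using LaTeX. Show step-by-step solutions.",
    "Integrate x^2 from 0 to 1.",
)
```

Changing the system prompt in HanaVerse's settings panel effectively swaps out that first message for every subsequent request.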

3. Live2D Model Customization: Add Your Own Characters

HanaVerse supports custom Live2D models, but they must meet two key requirements:

  1. Only compatible with Cubism 4 models.
  2. Models must support motionsync3 (motion synchronization 3) (reference: https://docs.live2d.com/en/cubism-editor-manual/motion-sync/).

How to add a custom model?
Place the model files (complete with animations and textures) in the models folder of the HanaVerse directory. Restart the Flask server, and the new model will load automatically.

Common Issues:

  • No animation for custom models: Verify the model supports motionsync3 and all files (model, animations, textures) are complete.
  • Model display errors: Ensure the model is a Cubism 4 version—Cubism 3 and earlier are not compatible.

V. Project Structure: Understand the Code Organization

For users interested in secondary development or troubleshooting, understanding HanaVerse’s file structure is essential. Below is a breakdown of core files and folders:

| Path | Type | Core Functionality |
| --- | --- | --- |
| server.py | File | Flask backend server—handles API interactions with Ollama and request routing. |
| index.html | File | Main frontend interface—includes chat window, Live2D display, and input box. |
| style.css | File | UI styling—controls colors, layout, and responsive design. |
| script.js | File | Core Live2D interaction logic—manages character animations and responses. |
| chat.js | File | Chat functionality—handles message sending, receiving, and streaming responses. |
| sdk/ | Folder | Live2D Cubism SDK components—supports character rendering and animation. |
| prism/ | Folder | Prism syntax highlighting library—renders code blocks with color coding. |
| katex/ | Folder | KaTeX math rendering library—displays LaTeX formulas accurately. |
| models/ | Folder | Stores Live2D model files (default Hana model + custom models). |
| requirements.txt | File | Python dependency list—includes Flask and other backend libraries. |

VI. Frequently Asked Questions (FAQ)

1. Which operating systems does HanaVerse support?

HanaVerse works on Windows (7/10/11), macOS (10.15+), and Linux (Ubuntu 18.04+, CentOS 8+, etc.). Any system that can run Python 3.8+ and Ollama is compatible.

2. “Failed to connect to Ollama” error—what should I do?

  • Confirm Ollama is running (run ollama serve in the terminal to start it manually).
  • Verify the Ollama port is 11434 (default). If you changed the port, update it in HanaVerse’s settings.
  • Temporarily disable firewalls/antivirus software (add ports 5000 and 11434 to your whitelist afterward).
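A quick programmatic check can also tell the three failure modes apart: the root endpoint of a running Ollama server answers with the plain-text string “Ollama is running”. A small sketch of such a health check:

```python
import urllib.request

def check_ollama(base_url: str = "http://localhost:11434") -> bool:
    """Return True if an Ollama server answers at base_url.

    A running Ollama instance replies to GET / with the plain-text
    body "Ollama is running"; a refused connection or timeout raises
    an OSError, which we treat as "not reachable".
    """
    try:
        with urllib.request.urlopen(base_url, timeout=3) as resp:
            return b"Ollama is running" in resp.read()
    except OSError:
        return False
```

If check_ollama() returns False while ollama serve is running, the problem is most likely a changed port or a firewall rule rather than Ollama itself.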

3. Why is the mobile interface distorted?

HanaVerse’s responsive design works with most modern browsers (Chrome, Safari, Edge). Older or niche browsers may have compatibility issues—use a mainstream browser for best results.

4. How do I back up chat history?

Chat history is stored only in your browser’s local cache. To back it up:

  • Manually copy the conversation text.
  • Export browser local storage data (e.g., Chrome: Settings → Privacy and security → Site Settings → View all site data and permissions → Find localhost → Export data).

5. The model’s response is slow—how can I speed it up?

  • Use lightweight models (e.g., phi3:latest, mistral:latest) to reduce resource usage.
  • Close high-resource applications (e.g., video editors, games) to free up CPU/RAM.
  • Shrink the context window if your hardware is limited—e.g., set num_ctx to 2048 via the options field of an Ollama API request, or add a PARAMETER num_ctx 2048 line to a Modelfile—to reduce memory pressure.
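The context-window setting can also be attached per request: Ollama's chat API accepts an options object alongside the model and messages. A sketch of what that looks like (the helper name is illustrative; 2048 is just an example value):

```python
def with_small_context(payload: dict, num_ctx: int = 2048) -> dict:
    """Attach Ollama per-request options; num_ctx caps the context
    window, trading conversation memory for lower RAM/CPU usage."""
    payload = dict(payload)  # shallow copy so the original is untouched
    payload["options"] = {"num_ctx": num_ctx}
    return payload

request_body = with_small_context(
    {"model": "llama3:8b", "messages": [{"role": "user", "content": "Hi"}]}
)
```

Sending request_body to POST /api/chat then runs that single exchange with the reduced window, without permanently reconfiguring the model.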

6. Can I deploy HanaVerse to a server?

Yes. Ensure the server has Python 3.8+, Ollama installed, and ports 5000 (Flask) and 11434 (Ollama) open. Update the Ollama server URL in index.html to the server’s public IP.

VII. Contributing: Help Improve HanaVerse

HanaVerse is an open-source project—contributions are welcome! Follow these steps to submit your changes:

  1. Fork the repository: Click the “Fork” button on the GitHub page to copy the repo to your account.
  2. Create a feature branch: Clone your forked repo locally and create a new branch (e.g., feature/mobile-optimize):

    git checkout -b feature/mobile-optimize
    
  3. Commit your changes: Make your edits and submit a clear commit message:

    git commit -m "Optimize mobile input box compatibility"
    
  4. Push the branch: Upload your changes to your GitHub repo:

    git push origin feature/mobile-optimize
    
  5. Open a Pull Request: On GitHub, click “Compare & pull request” to submit your changes for review.

VIII. License & Usage Terms

HanaVerse is licensed under a Custom Non-Commercial Use License. Key terms:

  • You may use, copy, and run the software for personal or educational purposes only.
  • Commercial use (e.g., offering paid chat services, embedding in commercial products) is prohibited.
  • Modifying the software for commercial purposes is not allowed.
  • For commercial use authorization, contact the project author.

IX. Acknowledgments: The Technology Behind HanaVerse

HanaVerse’s functionality relies on the following open-source tools and platforms:

  1. Ollama: Provides the local LLM runtime—HanaVerse’s core conversational engine.
  2. Live2D Cubism SDK: Powers Live2D character rendering and animation.
  3. pixi-live2d-display: WebGL-based Live2D renderer—ensures smooth character performance in browsers.
  4. KaTeX: Lightweight LaTeX renderer for accurate math formula display.
  5. Prism: Lightweight syntax highlighting library for readable code snippets.
  6. Live2D motionsync Library: Enables real-time animation synchronization with chat interactions.

Conclusion

HanaVerse’s strength lies in merging “practical local LLM tools” with “engaging Live2D interactions.” It preserves the privacy and flexibility of Ollama’s local deployment while solving the monotony of traditional text-only interfaces. Whether you’re using it for learning (code debugging, math problem-solving), entertainment, or as a base for open-source development, HanaVerse offers a clear installation path, flexible configuration, and user-friendly experience.

If you’re looking to enhance your local LLM interactions, follow the steps in this guide to install HanaVerse. Customize the model and interaction style to match your needs, and enjoy a more engaging, functional way to use local AI.