# Gnomly: Your AI-Powered Web & Video Content Analysis Assistant

Transform complex content into clear insights.

## Why You Need This Tool
Do these scenarios sound familiar?

- Facing 20-page research reports when you only need the core findings
- Saving 3-hour tutorial videos with no time to watch them
- Comparing perspectives across websites amid information overload
- Struggling with technical documentation that needs plain-language explanations
Meet Gnomly – the Chrome extension that solves these problems through three core capabilities:

- Intelligent extraction of web and video content
- Precise summarization and analysis
- Real-time Q&A for deeper exploration
Performance tests: processes a 300-page PDF in 2 minutes and achieves 92% accuracy on YouTube video summarization (Llama2 model).
## Core Feature Breakdown

### 🌐 Universal Content Processing
| Content Type | Handling Method | Key Advantage | 
|---|---|---|
| Standard Webpages | Smart body extraction | Filters ads/navigation clutter | 
| YouTube Videos | Full transcript capture | Works on all public videos | 
| Long Articles | Chunk processing | Handles 10,000+ word documents | 
| Specific Elements | Precision targeting | Extracts only what you need | 
### 🚀 Six Technical Breakthroughs

1. **Adaptive Chunking System** – splits long texts while preserving logical flow
2. **Cross-Platform Format Retention** – maintains tables, code blocks, and special characters
```mermaid
graph LR
A[Original Content] --> B{Length Check}
B -->|Exceeds Limit| C[Smart Chunking]
B -->|Within Limit| D[Direct Processing]
C --> E[Sequential AI Processing]
E --> F[Progress Tracking]
F --> G[Result Compilation]
```
3. **URL-Triggered Automation** – auto-applies presets:
   - `reddit.com/*` → focuses on comment analysis
   - `youtube.com/watch` → prioritizes transcripts
   - `github.com/*` → highlights code segments
4. **Real-Time Token Monitoring** – displays in the sidebar:
   - Token consumption
   - Remaining capacity
   - Limit warnings
5. **Element Targeting** – visually select page areas:

   ```js
   // Example: extract Reddit comments
   document.querySelector('[data-testid="comment"]')
   ```
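The chunking flow in the diagram above can be sketched as a small splitter that breaks text on paragraph boundaries while staying under a token budget. This is a minimal illustration, not Gnomly's actual implementation; the ~4-characters-per-token estimate and the default budget are assumptions.

```javascript
// Sketch of adaptive chunking: split on paragraph boundaries so each
// chunk stays under a token budget. The 4-chars-per-token heuristic
// and the 512-token default are illustrative assumptions.
const estimateTokens = (text) => Math.ceil(text.length / 4);

function chunkText(text, maxTokens = 512) {
  const chunks = [];
  let current = "";
  for (const para of text.split(/\n\s*\n/)) {
    const candidate = current ? current + "\n\n" + para : para;
    if (current && estimateTokens(candidate) > maxTokens) {
      chunks.push(current); // flush before the budget would be exceeded
      current = para;
    } else {
      current = candidate;
    }
  }
  if (current) chunks.push(current);
  return chunks; // note: a single oversized paragraph still becomes one chunk
}
```

Splitting on paragraph boundaries (rather than at a fixed character offset) is what preserves the "logical flow" mentioned above: each chunk ends at a natural break.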
## Step-by-Step Installation Guide

### Prerequisites
- Essential software:
  - Latest Chrome browser
  - Ollama local service
  - Recommended models: `mistral` (speed) or `llama2` (accuracy)
- Terminal commands:

  ```shell
  git clone https://github.com/your/gnomly-repo.git
  cd gnomly-repo
  npm install && npm run build
  ```
### Browser Setup

1. Navigate to `chrome://extensions/`
2. Enable Developer mode (top-right)
3. Click Load unpacked
4. Select the generated `/dist` folder
### Initial Configuration

```mermaid
sequenceDiagram
    User->>Gnomly: Click settings icon ⚙️
    Gnomly->>User: Display configuration panel
    User->>Gnomly: Enter server address
    Gnomly->>Ollama: Connection test
    Ollama-->>Gnomly: Return model list
    Gnomly->>User: Show model dropdown
    User->>Gnomly: Select model → Save
```
Pro tip: use `http://localhost:11434` for local setups.
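The connection test in the diagram above can be sketched with Ollama's `/api/tags` endpoint, which returns the locally installed models. A minimal sketch, assuming Node 18+ for the global `fetch`; error handling is simplified and the function names are illustrative.

```javascript
// Sketch of the connection test: ask the Ollama server for its
// installed models via the /api/tags endpoint and return their names.
async function listModels(server = "http://localhost:11434") {
  const res = await fetch(`${server}/api/tags`);
  if (!res.ok) throw new Error(`Ollama unreachable: ${res.status}`);
  return parseModelNames(await res.json());
}

// Separated out so the response shape is easy to see:
// { "models": [ { "name": "mistral:latest", ... }, ... ] }
function parseModelNames(payload) {
  return (payload.models ?? []).map((m) => m.name);
}
```

If this request fails, the extension has nothing to populate the model dropdown with – which is why the settings panel tests the connection before offering model selection.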
## Practical Usage Scenarios

### Case 1: Academic Paper Analysis

1. Open the research paper
2. Click the toolbar fox icon 🦊
3. Select Get Page Content
4. Click AI Summary
5. Ask follow-ups: "Explain the methodology in simple terms"
### Case 2: YouTube Learning

1. Open an educational video
2. Activate the Gnomly sidebar
3. Click Get Transcript
4. The system auto-detects the video ID
5. Generate timestamped key moments
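The video-ID auto-detection in step 4 can be sketched as a small URL parser. This is a simplified illustration covering common YouTube URL shapes; the function name is hypothetical and Gnomly's real logic may differ.

```javascript
// Sketch: pull the 11-character video ID out of common YouTube URL forms
// (watch pages, youtu.be short links, embed and shorts paths).
function detectVideoId(url) {
  const u = new URL(url);
  if (u.hostname === "youtu.be") return u.pathname.slice(1) || null;
  if (u.hostname.endsWith("youtube.com")) {
    if (u.pathname === "/watch") return u.searchParams.get("v");
    const m = u.pathname.match(/^\/(embed|shorts)\/([\w-]{11})/);
    if (m) return m[2];
  }
  return null; // not a recognized YouTube URL
}
```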
### Case 3: Custom Prompts
| Step                   | Example                          |
|------------------------|----------------------------------|
| 1. Open prompt manager | Creating recipe analyzer         |
| 2. Click "Get URL"     | Auto-fills: `cooking-site.com/*` |
| 3. Write prompt        | "Extract ingredient list"        |
| 4. Element targeting   | Lock ingredient table            |
| 5. Set as default      | Applies to all recipes           |
Advanced: use `*.gov/*` for government sites.
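The URL patterns above (`cooking-site.com/*`, `*.gov/*`) can be matched by converting each glob into a regular expression. A minimal sketch, assuming `*` matches any run of characters and that patterns are compared against the URL without its scheme; the function name is illustrative.

```javascript
// Sketch: match a preset pattern like "reddit.com/*" or "*.gov/*"
// against a page URL. "*" matches any run of characters; every other
// character is escaped and taken literally.
function matchesPattern(pattern, url) {
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
  const regex = new RegExp("^" + escaped.replace(/\*/g, ".*") + "$");
  // Compare against the URL without its scheme, so presets stay short.
  return regex.test(url.replace(/^https?:\/\//, ""));
}
```

Note that a pattern like `reddit.com/*` will not match `www.reddit.com/...` under this sketch; a production matcher would also need to handle subdomains.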
## Technical Architecture

```mermaid
graph LR
    A[Webpage] --> B{Gnomly}
    B --> C[Content Extractor]
    C --> D[Chunk Processor]
    D --> E[Ollama Connector]
    E --> F[AI Processing]
    F --> G[Result Generator]
    G --> H[Chat Interface]
```
Core technologies:

- Extraction: enhanced Readability.js
- Chunking: dynamic token calculation
- Connectivity: custom HTTP headers
- Output: original format preservation
## Frequently Asked Questions

### ❓ Is this free?

Completely open-source! It requires a self-hosted Ollama instance (local or cloud).
### ❓ Supported AI models?

All Ollama-compatible models, for example:

- `mistral`: speed/accuracy balance
- `llama2-uncensored`: unfiltered version
- `deepseek-coder`: technical specialist
### ❓ Content length limits?

Limits depend on the model's context window (4K–32K tokens). Longer content is automatically chunked.
### ❓ Privacy concerns?

Processing is local by default. Remote servers require your own infrastructure.
### ❓ Deepseek integration?

A free tier is available:

1. Register at https://platform.deepseek.com/
2. Get an API key
3. Switch the provider in settings
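After switching providers, requests go to Deepseek's OpenAI-compatible API instead of the local Ollama server. A minimal sketch of building such a request; the base URL `https://api.deepseek.com` and model name `deepseek-chat` come from Deepseek's public documentation, while the helper name is illustrative.

```javascript
// Sketch: build an OpenAI-style chat-completions request for Deepseek.
function buildDeepseekRequest(apiKey, prompt, model = "deepseek-chat") {
  return {
    url: "https://api.deepseek.com/chat/completions",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`, // the key from step 2
      },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}

// Usage: const { url, options } = buildDeepseekRequest(key, "Summarize this page");
//        const res = await fetch(url, options);
```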
## Development Roadmap

### UI Enhancements

- Light/dark mode toggle
- Auto-scroll content loading
- Cross-page transcript saving
### Feature Priorities

```mermaid
pie
    title User-Requested Features
    "Multi-AI Support" : 45
    "Enhanced Interaction" : 30
    "Auto-Scraping" : 15
    "Model Switching" : 10
```
### Code Improvements

- Expanded test coverage
- Stricter linting rules
- Architectural optimization
## Get Started Now
1. [Install Ollama](https://ollama.ai/download)
2. Run: `ollama pull mistral`
3. [Get extension](https://github.com/your/repo)
4. Follow the Step-by-Step Installation Guide above
License: Apache 2.0 (commercial use permitted)
