FastScheduler: A Clean, Decorator-First Task Scheduler for Python
If you’ve ever needed to run background tasks in a Python application—whether it’s sending daily reports, checking API health every few minutes, syncing data on weekdays, or generating monthly summaries—you’ve probably wrestled with scheduling libraries.
Options like schedule, APScheduler, Celery (with beat), or even plain cron all work, but each comes with trade-offs: too basic, too heavy, complex setup, poor async support, or state loss on restart.
FastScheduler takes a different approach. It’s a lightweight, modern Python library that prioritizes:
- Extremely readable decorator-based syntax
- Native async/await support
- Built-in persistence (survives restarts)
- Timezone-aware scheduling
- A clean, real-time FastAPI dashboard (optional)
- Production-friendly features like timeouts, retries, and a dead-letter queue
All without forcing you to manage brokers, external queues, or verbose configuration classes.
This guide walks through what FastScheduler offers, how to use it, and when it makes sense for your project. (Content is based entirely on the official project documentation as of early 2026.)
Core Features at a Glance
- Decorator-first API — define jobs with one line above a function
- Full async support — works natively with `async def`
- Multiple scheduling styles: intervals, daily/weekly times, standard cron expressions, one-shot delays
- Timezone handling built-in (crucial for distributed or international teams)
- Persistence — JSON file by default, or SQL databases (SQLite, PostgreSQL, MySQL) via SQLModel
- Automatic retries with exponential backoff
- Timeouts to kill hung jobs
- Dead-letter queue for failed executions
- Beautiful real-time dashboard with Server-Sent Events (SSE) updates
- Pause/resume/cancel jobs programmatically or from the UI
- Execution history and basic statistics
Quick Start – 60 Seconds to Your First Scheduled Task
Install the base package:

```bash
pip install fastscheduler
```

For the dashboard, add:

```bash
pip install "fastscheduler[fastapi]"
```

For cron expressions:

```bash
pip install "fastscheduler[cron]"
```

Or get everything:

```bash
pip install "fastscheduler[all]"
```

(The quotes around the extras keep shells like zsh from treating the brackets as glob patterns.)
Minimal working example:

```python
import time

from fastscheduler import FastScheduler

scheduler = FastScheduler(quiet=True)

@scheduler.every(10).seconds
def heartbeat():
    print(f"[{time.strftime('%H:%M:%S')}] Heartbeat")

scheduler.start()  # blocks until interrupted
```
More realistic patterns you’ll actually use:

```python
# Run every 5 minutes
@scheduler.every(5).minutes
def check_stock(): ...

# Every day at 14:30 local time
@scheduler.daily.at("14:30")
async def generate_report(): ...

# Weekdays at 09:00 (cron style)
@scheduler.cron("0 9 * * MON-FRI")
def open_market_check(): ...

# Only once, 60 seconds after start
@scheduler.once(60)
def warm_up_cache(): ...

# Every Monday at 10:00
@scheduler.weekly.monday.at("10:00")
def team_sync(): ...
```
Handling Timezones Properly
One of the nicest touches is first-class timezone support.
Two clean ways to set it:
```python
# Option 1 – pass tz directly
@scheduler.daily.at("09:00", tz="America/New_York")
def nyc_daily(): ...

# Option 2 – chainable (often more readable)
@scheduler.weekly.friday.tz("Asia/Tokyo").at("17:00")
def tokyo_eod(): ...
```
Common strings you’ll reach for:
- `UTC`
- `Asia/Shanghai`
- `Asia/Tokyo`
- `America/Los_Angeles`
- `Europe/London`
- `Europe/Paris`
- `Australia/Sydney`
No more mental math converting between UTC and local time.
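To see what the scheduler is sparing you, here is the same conversion done by hand with nothing but the standard library. This assumes the strings above are resolved as IANA zone names the way `zoneinfo` resolves them (a reasonable reading, since they are all IANA keys); the specific date is just an illustration.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A 09:00 job in New York on a winter date (EST, UTC-5)...
local = datetime(2026, 1, 15, 9, 0, tzinfo=ZoneInfo("America/New_York"))

# ...actually fires at 14:00 UTC — and the offset shifts again when DST starts
print(local.astimezone(ZoneInfo("UTC")).strftime("%H:%M"))  # 14:00
```

Letting the library carry the zone per job means DST transitions and offset changes are handled for you instead of being baked into a hand-computed UTC time.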
Production-Ready Safeguards
These features separate “toy scheduler” from something you can trust in real applications.
Job Timeouts
Prevent one slow task from blocking the worker pool:

```python
@scheduler.every(1).minutes.timeout(45)  # kill after 45 seconds
def maybe_slow_query(): ...
```
Automatic Retries + Backoff
Flaky third-party APIs? Let it retry automatically:
```python
@scheduler.every(10).minutes.retries(4)
def call_payment_gateway(): ...
```
Delays follow exponential backoff (2s → 4s → 8s → 16s …).
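Those delays follow a plain doubling rule. As a quick sketch (the 2-second base is inferred from the example sequence above, not documented as a tunable):

```python
def backoff_delay(attempt: int, base: float = 2.0) -> float:
    """Delay before retry N, doubling each time: 2s, 4s, 8s, 16s, ..."""
    return base * 2 ** (attempt - 1)

print([backoff_delay(n) for n in (1, 2, 3, 4)])  # [2.0, 4.0, 8.0, 16.0]
```

With `retries(4)`, a persistently failing job therefore spends roughly 30 seconds in retry delays before landing in the dead-letter queue.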
Skip Missed Executions After Restart
By default FastScheduler calculates and runs any missed jobs after a restart. Turn this off when you don’t want catch-up:
```python
@scheduler.every(1).hours.no_catch_up()
def collect_metrics(): ...
```
Dead-Letter Queue for Failures
Jobs that exhaust retries land here (with full error trace, timestamp, attempt count).
Inspect them:

```python
failed = scheduler.get_dead_letters(limit=50)
for entry in failed:
    print(entry.error_message, entry.executed_at, entry.attempts)
```

Or clear them:

```python
scheduler.clear_dead_letters()
```
Beautiful Real-Time Dashboard (FastAPI)
If your project already uses FastAPI (very common in 2025–2026), adding monitoring takes ~10 lines.
```python
from contextlib import asynccontextmanager

from fastapi import FastAPI
from fastscheduler import FastScheduler
from fastscheduler.fastapi_integration import create_scheduler_routes

scheduler = FastScheduler(quiet=True)

@asynccontextmanager
async def lifespan(app: FastAPI):
    scheduler.start()
    yield
    await scheduler.stop(wait=True)  # graceful shutdown

app = FastAPI(lifespan=lifespan)

# Mount dashboard (default path: /scheduler/)
app.include_router(create_scheduler_routes(scheduler))
```
Visit http://localhost:8000/scheduler/ and you get:
- Live job list with status indicators, next-run countdowns, and the last 5 run results
- One-click Run / Pause / Resume / Cancel
- Execution history tab (filterable)
- Dead-letter tab showing failed jobs + stack traces
- Overall stats (success rate, uptime, active jobs)
- SSE-powered real-time updates (no polling)
Here’s how the dashboard looks in action:
(Imagine a clean, dark-mode-friendly table with green/red status dots, countdown timers, and quick-action buttons — official GIF and screenshot show exactly this flow.)
Persistence Options Compared
| Backend | Best For | Pros | Cons | Setup Example |
|---|---|---|---|---|
| JSON (default) | Development, single-process apps | Zero config, no dependencies | Not safe for high concurrency | FastScheduler() |
| SQLite | Small–medium production, single server | Transactional, single file, easy backup | Write contention under load | storage="sqlmodel", database_url="sqlite:///sched.db" |
| PostgreSQL | Serious production, multi-instance | Excellent concurrency, strong consistency | Requires DB maintenance | database_url="postgresql://user:pass@host/db" |
| MySQL | Teams already using MySQL | Good ecosystem integration | Slightly less performant than PG | database_url="mysql://user:pass@host/db" |
For most real deployments in 2026, PostgreSQL + SQLModel storage is the sweet spot.
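Reading the table's setup columns together, a PostgreSQL-backed scheduler would be constructed like this. This is a sketch assembled from the parameter names shown in the table; the credentials and database name are placeholders:

```python
from fastscheduler import FastScheduler

# SQLModel storage backend with a PostgreSQL URL (placeholder credentials)
scheduler = FastScheduler(
    storage="sqlmodel",
    database_url="postgresql://user:pass@host/db",
)
```

Swapping the URL for `sqlite:///sched.db` gives the single-file SQLite option from the same table, with no other code changes.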
Common Questions Developers Ask
Does state survive restarts?
Yes — job definitions, run history, and counters persist automatically.
How do I run a job right now?
UI button, scheduler.run_job_now(job_id), or API POST /scheduler/api/jobs/{id}/run.
Can I pause without deleting?
Yes — scheduler.pause_job(id) / scheduler.resume_job(id).
How real-time is the dashboard?
SSE stream → near-instant updates (typically sub-second).
Can one function have multiple schedules?
Yes — each decorator creates a separate job.
What happens on unhandled exceptions?
Logged → retried if configured → moved to dead-letter queue after max attempts.
Graceful shutdown?
scheduler.stop(wait=True) waits for running jobs (configurable timeout).
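Circling back to the "run a job right now" answer: the HTTP route quoted there can be exercised with just the standard library. A sketch, where the job id is hypothetical (use whatever id the dashboard shows for your job):

```python
from urllib.request import Request

def trigger_job(base_url: str, job_id: str) -> Request:
    # Build a POST to the route quoted above: /scheduler/api/jobs/{id}/run
    return Request(f"{base_url}/scheduler/api/jobs/{job_id}/run", method="POST")

req = trigger_job("http://localhost:8000", "heartbeat")  # hypothetical job id
print(req.method, req.full_url)
```

Passing `req` to `urllib.request.urlopen` would fire the request against a running app; handy for triggering jobs from deploy scripts or cron-adjacent tooling.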
When to Choose FastScheduler
Pick it when you want:
- Clean, Pythonic decorator syntax
- A nice built-in dashboard without extra services
- Async-first code (very common in FastAPI projects)
- Timezone correctness out of the box
- Restart resilience without adding Redis / RabbitMQ / Celery
- Enough guardrails (timeouts, retries, DLQ) for production comfort
If your needs are extremely simple → plain schedule library might still win.
If you already run a full distributed task queue → stick with Celery / Dramatiq / RQ.
But for the majority of web backends, data pipelines, and automation scripts in 2026, FastScheduler hits a very appealing balance of simplicity and capability.
Final Thoughts
Scheduling doesn’t have to be painful or over-engineered.
FastScheduler gives you readable code, production safety nets, and visibility — all in a package that installs with pip and runs inside your existing application.
Official repository (active development as of January 2026):
https://github.com/MichielMe/fastscheduler
Give it a try on your next background-task feature. Many developers who’ve switched report that it quickly becomes one of those “quietly indispensable” dependencies.
Questions or real-world usage stories? Drop them in the comments — happy scheduling!

