
Developers have long been able to use Cloudflare Workflows to construct sophisticated, long-running, multi-step applications on the Workers platform. This powerful tool for orchestrating complex processes has been a game-changer for many. However, there was a significant barrier: it was exclusively available in TypeScript. Today, that changes. Python Workflows are now in beta, empowering you to orchestrate these intricate applications using the language you know and love.
With Workflows, you can automate a sequence of idempotent steps within your application, complete with built-in error handling and retry behaviors. This ensures your processes are reliable and resilient. The initial support for only TypeScript created friction for a vast community of developers, particularly since Python has become the de facto language for data pipelines, artificial intelligence, machine learning, and task automation—all domains that heavily rely on robust orchestration.
Over the years, Cloudflare has steadily invested in bringing Python to its developer platform. The journey began in 2020 with Python support via Transcrypt, followed by the direct integration of Python into the workerd runtime in 2024. Earlier this year, we expanded this to include support for CPython and a wide array of packages built on Pyodide, such as the ever-popular matplotlib and pandas. Now, with the introduction of Python Workflows, the circle is complete, giving developers the freedom to create robust, orchestrated applications using their language of choice.
Why Python for Workflows? The Perfect Fit for Modern Applications
To truly appreciate the impact of Python Workflows, let’s explore some real-world scenarios where this combination shines. These are areas where Python already dominates, and the addition of native workflow orchestration unlocks new levels of productivity and efficiency.
Orchestrating Machine Learning Model Training
Imagine you’re in the process of training a Large Language Model (LLM). This isn’t a single action but a complex, iterative cycle of steps. You need to:
- Label the dataset.
- Feed the data into the model.
- Wait for the training run to complete.
- Evaluate the loss and performance metrics.
- Adjust the model’s parameters based on the evaluation.
- Repeat the entire process until the model reaches the desired performance.
Without automation, this is a painstakingly manual process. You would have to initiate each step, constantly monitor its progress until it finishes, and then manually trigger the next one. This is not only inefficient but also prone to human error.
With a Python Workflow, you can orchestrate this entire training pipeline. The workflow can automatically trigger each step upon the successful completion of its predecessor. For steps that require human intervention, such as evaluating the loss and making manual adjustments, you can implement a step that notifies you (e.g., via Slack or email) and then pauses, waiting for your input before proceeding. This transforms a tedious manual chore into a streamlined, automated, and observable process.
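To make this concrete, here is a minimal sketch using the Python SDK described later in this post. The step names, the dataset_id payload field, and the training.reviewed event type are hypothetical placeholders rather than a real training setup.

```python
from workers import WorkflowEntrypoint


class TrainingWorkflow(WorkflowEntrypoint):
    async def run(self, event, step):
        @step.do("label dataset")
        async def label_dataset():
            # Placeholder for the real labelling job; returns a dataset reference.
            return {"dataset_id": event["payload"]["dataset_id"], "labelled": True}

        dataset = await label_dataset()

        @step.do("run training")
        async def run_training():
            # Placeholder for kicking off a training run and collecting metrics.
            return {"dataset_id": dataset["dataset_id"], "loss": 0.42}

        metrics = await run_training()

        # Pause until a human has reviewed the metrics and emitted a
        # "training.reviewed" event (hypothetical event type), for example
        # after a notification step pinged Slack or email.
        await step.wait_for_event(
            "wait for human review",
            "training.reviewed",
            timeout="24 hours",
        )
        return metrics
```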
Automating Data Pipelines
Data pipelines are arguably one of the most common and critical use cases for Python. They involve ingesting data from various sources, transforming it, cleaning it, and then loading it into a destination like a data warehouse or analytics platform. These pipelines are often composed of multiple, distinct stages that must execute in a specific order.
By automating a data pipeline through a defined set of idempotent steps (steps that produce the same result every time they run), developers can deploy a workflow that handles the entire process from start to finish. If a step fails—perhaps due to a temporary network issue or a source API being unavailable—the workflow’s built-in retry mechanism can automatically attempt to rerun it. This ensures data reliability and reduces the need for manual intervention, allowing data engineers to focus on building better pipelines rather than babysitting them.
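As a rough sketch, a tiny extract-transform-load pipeline could look like the following, using the decorator-based step API covered later in this post. The records payload field and the step bodies are illustrative placeholders; a real pipeline would read from and write to actual sources and sinks.

```python
from workers import WorkflowEntrypoint


class DataPipeline(WorkflowEntrypoint):
    async def run(self, event, step):
        @step.do("extract records")
        async def extract():
            # Placeholder for reading from the real source (API, queue, bucket, ...).
            return event["payload"]["records"]

        records = await extract()

        @step.do("transform records")
        async def transform():
            # Idempotent transformation: the same input always yields the same output.
            return [{**r, "value": r["value"] * 2} for r in records]

        cleaned = await transform()

        @step.do("load records")
        async def load():
            # Placeholder for writing to a warehouse or analytics platform.
            # If this step fails, the engine can retry it without re-running
            # the extract and transform steps above.
            return {"loaded": len(cleaned)}

        return await load()
```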
Building Intelligent AI Agents
The rise of AI agents presents another exciting frontier for Python Workflows. An AI agent is an autonomous program that can perceive its environment, make decisions, and take actions to achieve a goal. Let’s consider a practical example: an AI agent designed to manage your weekly grocery shopping.
Each week, you provide the agent with a list of recipes you plan to cook. The agent’s job is to handle everything else. Using a Python Workflow, this process could be elegantly structured as follows:
- await step.wait_for_event(): The workflow begins by pausing and waiting for you to input your list of recipes for the week.
- step.do(): Once the list is received, a step compiles a comprehensive list of all necessary ingredients.
- step.do(): Another step checks this list against your pantry inventory to see what ingredients you already have left over from previous weeks.
- step.do(): A step then calculates the difference: the ingredients you need to buy.
- step.do(): The agent makes an API call to your local grocery store’s delivery service to place an order for the required items.
- step.do(): Finally, a step processes the payment to complete the order.
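Sketched with the Python SDK introduced later in this post, that flow might look roughly like the following. The step names, the recipes.submitted event type, and the hard-coded ingredient and pantry data are hypothetical placeholders rather than a real integration.

```python
from workers import WorkflowEntrypoint


class GroceryAgent(WorkflowEntrypoint):
    async def run(self, event, step):
        # Pause until this week's recipes arrive. "recipes.submitted" is a
        # hypothetical event type; the returned event would carry the recipe list.
        await step.wait_for_event(
            "wait for recipes",
            "recipes.submitted",
            timeout="7 days",
        )

        @step.do("compile ingredients")
        async def compile_ingredients():
            # Placeholder: derive the full ingredient list from the recipes.
            return ["eggs", "flour", "milk"]

        ingredients = await compile_ingredients()

        @step.do("check pantry")
        async def check_pantry():
            # Placeholder: look up what is already in the pantry.
            return ["flour"]

        in_stock = await check_pantry()

        @step.do("place order and pay")
        async def place_order():
            # Placeholder: call the grocery delivery API and process payment.
            to_buy = [i for i in ingredients if i not in in_stock]
            return {"ordered": to_buy}

        return await place_order()
```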
Using workflows as the backbone for building agents on Cloudflare significantly simplifies their architecture. The individual step retries and state persistence inherent in Workflows dramatically improve the agent’s chances of successfully completing its task, even in the face of transient failures. With native support for Python Workflows, building these intelligent agents is now more accessible and straightforward than ever before.
How Python Workflows Work: A Look Under the Hood
Cloudflare Workflows leverages the same robust underlying infrastructure we created for durable execution, but it provides a way for Python users to write their workflows that feels natural and idiomatic. A key goal was to achieve complete feature parity between the JavaScript and Python SDKs. This was made possible because Cloudflare Workers supports Python directly within the runtime itself, eliminating the need for clumsy transpilation or emulation layers.
The Foundation: Workers and Durable Objects
At its core, every Cloudflare Workflow is fully built on top of two key platform primitives: Workers and Durable Objects.
- **Workers** provide the serverless compute environment where your workflow logic runs.
- **Durable Objects** are responsible for storing the metadata for the workflow itself and the state information for each running instance.
This combination ensures that your workflow can run for extended periods—days, weeks, or even longer—while maintaining its state, even if the underlying Worker instance is recycled. For a deeper dive into the mechanics of the Workflows platform, you can refer to the detailed explanation in our original announcement post.
The Workflow Entry Point
At the heart of every workflow is the user-defined code, which resides in a Worker. This code is structured as a class that extends WorkflowEntrypoint. When a new instance of your workflow is ready to run, the Workflow engine initiates a Remote Procedure Call (RPC) to the run method of your user Worker. In the case of Python Workflows, this run method is part of a Python Worker.
A basic skeleton for a Workflow declaration looks like this:
```python
from workers import WorkflowEntrypoint


class MyWorkflow(WorkflowEntrypoint):
    async def run(self, event, step):
        # Your workflow steps will be defined here
        pass
```
The run method is where the magic happens. It receives two crucial parameters: event and step. The event parameter contains the data that triggered the workflow, while the step parameter is an object that implements the durable execution APIs. This WorkflowStep object is what you, as a developer, rely on to ensure that a successfully completed step is never executed again, a core principle of durable execution.
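For context, workflow instances are typically created from another Worker through a Workflows binding. Below is a minimal, hypothetical sketch assuming a Python Worker entrypoint and a binding named MY_WORKFLOW pointing at this workflow in your Wrangler configuration; the binding name and response text are illustrative only.

```python
from workers import Response, WorkerEntrypoint


class Default(WorkerEntrypoint):
    async def fetch(self, request):
        # MY_WORKFLOW is an assumed binding name pointing at MyWorkflow.
        # create() starts a new workflow instance; any params passed to it
        # become event["payload"] inside the run method.
        instance = await self.env.MY_WORKFLOW.create()
        return Response(f"Started workflow instance {instance.id}")
```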
Overcoming the Language Barrier: Bridging Python and JavaScript
These durable execution APIs are implemented in JavaScript, but they need to be accessible from within the context of a Python Worker. This presents a fascinating technical challenge: how do you make a Python script seamlessly interact with JavaScript objects and functions?
The solution lies in a multi-layered approach involving RPC and a foreign function interface.
- **RPC Communication**: The WorkflowStep object must cross the RPC boundary. The workflow engine (the caller) exposes this object as an RpcTarget. This setup allows the user’s workflow code (the callee) to receive a special stub object in place of the actual WorkflowStep. This stub then communicates back to the engine via RPC whenever you call a method like step.do() or step.sleep(). This is how the durable execution capabilities are invoked.
- **The Python-JavaScript Bridge**: This RPC mechanism works for both Python and JavaScript Workflows. However, for Python, there’s an additional layer: the language bridge between the Python script and the JavaScript module that handles the RPC request. When an RPC call targets a Python Worker, a JavaScript entrypoint module acts as a proxy. It receives the request, translates it into a format the Python script can understand, waits for the script to process it, and then translates the result back to be returned to the original caller.
- **Pyodide and FFI**: Python Workers run on Pyodide, which is a port of the CPython interpreter to WebAssembly. Crucially, Pyodide provides a Foreign Function Interface (FFI) that allows Python code to call JavaScript methods and vice versa. This is the fundamental mechanism that enables Python packages and bindings to function within the Workers environment. We leverage this FFI layer not only to allow the direct use of the Workflow binding but also to expose the WorkflowStep methods in a Python-friendly way. By treating WorkflowEntrypoint as a special class for the runtime, the run method is manually wrapped so that the step parameter is exposed as a JsProxy. This proxy object allows Python to interact with the underlying JavaScript object without a full, complex type translation, providing a direct and efficient bridge.
Making the Python Workflows SDK Feel “Pythonic”
Simply exposing the JavaScript SDK to Python wouldn’t provide an optimal experience. A huge part of porting Workflows to Python involved creating an interface that feels familiar and intuitive to Python developers. This required thoughtful adaptation of the API to align with Python’s conventions and best practices.
Let’s compare a TypeScript workflow definition with its Python equivalent to see the difference.
**TypeScript Example:**
```typescript
import {
  WorkflowEntrypoint,
  WorkflowStep,
  WorkflowEvent,
} from "cloudflare:workers";

export class MyWorkflow extends WorkflowEntrypoint {
  async run(event: WorkflowEvent<YourEventType>, step: WorkflowStep) {
    let state = await step.do("my first step", async () => {
      // Access your properties via event.payload
      let userEmail = event.payload.userEmail;
      let createdTimestamp = event.payload.createdTimestamp;
      return { userEmail: userEmail, createdTimestamp: createdTimestamp };
    });

    await step.sleep("my first sleep", "30 minutes");

    await step.waitForEvent<EventType>("receive example event", {
      type: "simple-event",
      timeout: "1 hour",
    });

    const developerWeek = Date.parse("22 Sept 2025 13:00:00 UTC");
    await step.sleepUntil("sleep until X times out", developerWeek);
  }
}
```
Notice how the step.do() method in TypeScript takes a name and an anonymous async function as a callback. Python doesn’t handle anonymous callbacks in the same idiomatic way. To solve this, the Python SDK uses decorators, a powerful and well-understood feature in Python.
The decorator allows us to intercept the function definition and use it as the callback for the step, all while keeping the syntax clean and readable. All other parameters maintain their original order and meaning.
**Python Equivalent:**
```python
from workers import WorkflowEntrypoint
from datetime import datetime, timezone


class MyWorkflow(WorkflowEntrypoint):
    async def run(self, event, step):
        @step.do("my first step")
        async def my_first_step():
            user_email = event["payload"]["userEmail"]
            created_timestamp = event["payload"]["createdTimestamp"]
            return {
                "userEmail": user_email,
                "createdTimestamp": created_timestamp,
            }

        await my_first_step()

        await step.sleep("my first sleep", "30 minutes")

        await step.wait_for_event(
            "receive example event",
            "simple-event",
            timeout="1 hour",
        )

        developer_week = datetime(2025, 9, 22, 13, 0, 0, tzinfo=timezone.utc)
        await step.sleep_until("sleep until X times out", developer_week)
```
Other methods like waitForEvent, sleep, and sleepUntil retain their original signatures, but their names have been converted to Python’s preferred snake_case convention (wait_for_event, sleep_until) to feel more natural to Python developers.
Advanced Workflows: Managing Concurrency with DAGs
When designing complex workflows, you often need to manage dependencies between steps, even when some of those tasks could be executed at the same time. Many workflows, whether you explicitly design them this way or not, follow a Directed Acyclic Graph (DAG) execution flow. A DAG is a conceptual representation of a series of activities, where each activity is a node and the flow from one activity to another is represented by a directed edge. The “acyclic” part means you can’t loop back to a previous step, ensuring a clear path from start to finish.
Concurrency is fully supported in Python Workflows. Pyodide is clever enough to capture JavaScript Promises (or “thenables”) and proxy them into Python awaitables. This means that the standard asyncio.gather() function works perfectly as a Python equivalent to JavaScript’s Promise.all(), allowing you to run multiple steps concurrently and wait for all of them to complete.
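For example, two independent steps can be kicked off and awaited together with asyncio.gather(); this is a minimal sketch with hypothetical step names and placeholder return values.

```python
import asyncio

from workers import WorkflowEntrypoint


class ConcurrentWorkflow(WorkflowEntrypoint):
    async def run(self, event, step):
        @step.do("fetch customers")
        async def fetch_customers():
            # Placeholder for the first independent task.
            return ["alice", "bob"]

        @step.do("fetch orders")
        async def fetch_orders():
            # Placeholder for the second independent task.
            return [{"id": 1}, {"id": 2}]

        # Both steps run concurrently; execution resumes once both complete,
        # just like Promise.all() in the JavaScript SDK.
        customers, orders = await asyncio.gather(fetch_customers(), fetch_orders())
```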
While this imperative approach is powerful, the Python SDK also supports a more declarative approach to defining DAGs, which can make complex workflows much easier to read and manage.
One of the advantages of using the decorator pattern for the do method is that it allows us to provide further abstractions on top of the original API. Here’s an example of a Python API that leverages these DAG capabilities:
```python
from workers import WorkflowEntrypoint


class PythonWorkflowDAG(WorkflowEntrypoint):
    async def run(self, event, step):
        @step.do('dependency 1')
        async def dep_1():
            # This step performs its task
            print('executing dep1')

        @step.do('dependency 2')
        async def dep_2():
            # This step performs its task
            print('executing dep2')

        @step.do('final step', depends=[dep_1, dep_2], concurrent=True)
        async def final_step(res1=None, res2=None):
            # This step runs only after dep_1 and dep_2 are complete
            print('executing final step')

        await final_step()
```
In this example:
- dep_1 and dep_2 are defined as independent steps.
- final_step is declared with a depends parameter, explicitly stating that it cannot run until both dep_1 and dep_2 have successfully completed.
- The concurrent=True parameter on the final step indicates that its dependencies (dep_1 and dep_2) can be run in parallel.
This declarative approach makes the workflow’s structure and dependencies immediately clear, leaving the complex state management and execution logic to the Workflows engine and the Python Workers wrapper. It’s worth noting that even if multiple steps are given the same name, the engine will automatically adjust each name to ensure uniqueness within the workflow instance. In Python Workflows, a dependency is considered resolved as soon as the step it depends on has successfully completed.
Frequently Asked Questions About Python Workflows
As you explore this new capability, you might have some questions. Here are answers to some of the most common ones, based on the information available.
**Are Python Workflows and TypeScript Workflows functionally identical?**
Yes, a primary goal was to achieve complete feature parity between the Python and JavaScript SDKs. All the core capabilities, including error handling, retry mechanisms, state persistence, and the various step types (do, sleep, wait_for_event, etc.), are available in both versions.
**How do I handle tasks that take a long time to run?**
The Workflows engine is designed specifically for long-running tasks. It automatically handles the persistence of your workflow’s state. This means that even if a Worker instance running your step is shut down, the workflow’s state is safely stored. When the step is ready to run again (either after a sleep or following a failure), the engine will spin up a new Worker and resume execution right where it left off. You just need to ensure your steps are idempotent, meaning they produce the same result even if run multiple times.
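As a small, hypothetical illustration of that principle, a step that charges a customer could derive its deduplication key from the workflow instance, so a retried execution reuses the same key instead of charging twice. The payment call itself is omitted, and the instanceId field access mirrors the payload access shown earlier.

```python
from workers import WorkflowEntrypoint


class BillingWorkflow(WorkflowEntrypoint):
    async def run(self, event, step):
        @step.do("charge customer")
        async def charge_customer():
            # Derive the idempotency key from the workflow instance so that a
            # retried execution of this step reuses the same key rather than
            # issuing a second charge. The actual payment call is omitted.
            idempotency_key = f"charge-{event['instanceId']}"
            return {"idempotency_key": idempotency_key, "status": "submitted"}

        return await charge_customer()
```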
**Can I use external Python libraries like pandas or requests inside my workflow?**
Yes, you can. Python Workers support any packages that are compatible with Pyodide. This includes a rich ecosystem of scientific computing and data analysis libraries like pandas, numpy, and matplotlib. You can simply import them into your code and use them as you normally would.
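For instance, a step might use pandas for a small aggregation along these lines. This is a minimal sketch with hard-coded placeholder data, and it assumes pandas has been made available to your Worker as described in the Python Workers documentation.

```python
from workers import WorkflowEntrypoint


class ReportWorkflow(WorkflowEntrypoint):
    async def run(self, event, step):
        @step.do("summarize sales")
        async def summarize_sales():
            import pandas as pd  # Pyodide-compatible package

            # Placeholder data; a real workflow would pull this from a binding
            # or from the event payload.
            df = pd.DataFrame({"region": ["eu", "us", "eu"], "sales": [10, 20, 5]})
            return df.groupby("region")["sales"].sum().to_dict()

        return await summarize_sales()
```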
**What exactly is a DAG workflow, and why would I need one?**
A DAG, or Directed Acyclic Graph, is a way of modeling a workflow where some steps can happen in parallel, while others must wait for certain dependencies to be met. You would use a DAG pattern whenever your process has multiple independent tasks that don’t rely on each other. By running them concurrently, you can significantly reduce the total execution time of your workflow. The declarative depends and concurrent syntax in the Python SDK makes defining these complex flows simple and intuitive.
**How can I monitor my running workflows and debug them if something goes wrong?**
Cloudflare provides a dashboard where you can monitor the status of your workflow instances, view their execution history, and inspect logs. Each step’s execution time, input, output, and result are recorded, making it easier to pinpoint where things might be failing. For debugging, you can use standard print() statements within your steps, and the output will appear in the workflow’s logs. You can also use the wrangler dev command to run and test your workflows in a local development environment.
**Is there a limit to how long a workflow can run?**
While individual steps within a workflow have a maximum execution time (e.g., 30 minutes for CPU-bound work), the workflow itself can run for a much longer duration—days, weeks, or even longer. This is because the state is persisted between steps, and the workflow isn’t consuming resources while it’s sleeping or waiting for an event.
Start Building with Python Workflows Today
The beta release of Python Workflows marks a significant milestone in making Cloudflare’s developer platform more accessible and powerful for the global community of Python developers. Whether you’re building complex data pipelines, training sophisticated AI models, or creating the next generation of intelligent agents, you can now do so with the language you prefer, on a global, resilient platform.
Ready to get started? You can learn more about writing Workers in Python and dive into the documentation to create your first Python Workflow right now. As this is a beta release, your feedback is incredibly valuable. If you have any feature requests or encounter any bugs, please share them directly with the Cloudflare team by joining the Cloudflare Developers community on Discord.

