# Cron Jobs in Cloudflare Workers

How to add flexible cron scheduling, task queues, and retry logic to Cloudflare Workers.
## Prerequisites
- A Cloudflare Workers project with Wrangler configured
- A Runlater account (free tier works)
- Your Worker deployed to a public URL
## SDK overview

The `runlater-js` SDK methods used in this guide:

| Method | Description |
|---|---|
| `rl.send(url, opts)` | Queue a task for immediate execution |
| `rl.delay(url, opts)` | Execute after a delay (e.g. `"5m"`, `"1h"`) |
| `rl.cron(name, opts)` | Create or update a recurring cron job (idempotent) |
| `rl.sync(opts)` | Declare all tasks at once — creates, updates, and removes to match |
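Of these methods, `rl.delay()` is the only one this guide doesn't otherwise demonstrate, so here is a sketch of the call shape. The endpoint URL, payload, and the `delay` option name are assumptions based on the table above; to keep the snippet self-contained it uses a tiny in-memory stand-in for the client rather than importing `runlater-js`.

```typescript
// Minimal stand-in for the runlater-js client, used only to show the
// call shape. In a real project: import Runlater from "runlater-js"
type DelayOpts = { delay: string; body?: unknown }

class FakeRunlater {
  scheduled: Array<{ url: string; opts: DelayOpts }> = []
  async delay(url: string, opts: DelayOpts) {
    // Records the task instead of calling the Runlater API
    this.scheduled.push({ url, opts })
  }
}

const rl = new FakeRunlater()

// Send a reminder one hour from now (URL and body are hypothetical)
await rl.delay("https://my-worker.username.workers.dev/send-reminder", {
  delay: "1h",
  body: { user_id: 42 },
})
```

With the real client, the call is identical except that `rl` comes from `new Runlater(...)` as shown in the setup below.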
## Why you need this
Cloudflare Workers are great for request handling, but their scheduling options are basic. Cron Triggers only support simple intervals, have no built-in retries, and there's no way to queue work for later execution. If a trigger fails, the only way to find out is to check logs in the Cloudflare dashboard. There's no alerting, no execution history, and no retry mechanism.
Runlater handles the scheduling and calls your Worker endpoint via HTTP. You get retries, monitoring, failure alerts, and execution history — without adding any infrastructure to your Cloudflare account. Your Workers stay lightweight; Runlater handles the orchestration.
## Setup

Install the SDK in your Worker project:

```sh
npm install runlater-js
```

Add your API key as a Worker secret (this makes it available via the `env` parameter in your Worker):

```sh
npx wrangler secret put RUNLATER_KEY
```

For setup scripts that run locally (not inside a Worker), your API key should be in your shell environment or a local `.env` file as `RUNLATER_KEY`.
## Example: Nightly KV cleanup
Clean up expired keys from KV every night at 2 AM. Create the cron task using the SDK — run this once from a local script or as part of your deploy:
```ts
import Runlater from "runlater-js"

// Setup script runs locally — uses process.env
const rl = new Runlater(process.env.RUNLATER_KEY!)

await rl.cron("kv-cleanup", {
  url: "https://my-worker.username.workers.dev/cron/cleanup",
  schedule: "0 2 * * *",
})
```
Then handle the request in your Worker:
```ts
export default {
  async fetch(request: Request, env: Env) {
    const url = new URL(request.url)
    if (url.pathname === "/cron/cleanup" && request.method === "POST") {
      return handleCleanup(env)
    }
    return new Response("Not found", { status: 404 })
  },
}

async function handleCleanup(env: Env) {
  let deleted = 0
  let cursor: string | undefined

  // KV list() returns max 1000 keys — paginate to get all
  do {
    const result = await env.SESSIONS.list({ cursor })
    for (const key of result.keys) {
      if (key.expiration && key.expiration < Date.now() / 1000) {
        await env.SESSIONS.delete(key.name)
        deleted++
      }
    }
    cursor = result.list_complete ? undefined : result.cursor
  } while (cursor)

  return Response.json({ deleted })
}
```
If the cleanup fails (KV timeout, Worker error), Runlater retries automatically. You'll also get an alert via email or Slack if it keeps failing.
## Example: Queue background work from a request

When a user places an order, you need to process a payment, send a confirmation email, and update inventory. Instead of doing all of that synchronously in the Worker (risking timeouts), queue each task with `rl.send()`:
```ts
import Runlater from "runlater-js"

export default {
  async fetch(request: Request, env: Env) {
    const url = new URL(request.url)
    if (url.pathname === "/orders" && request.method === "POST") {
      return handleOrder(request, env)
    }
    // ... other routes
  },
}

async function handleOrder(request: Request, env: Env) {
  const order = await request.json()

  // In Workers, create the SDK with env (not process.env)
  const rl = new Runlater(env.RUNLATER_KEY)
  const base = "https://my-worker.username.workers.dev"

  // Queue confirmation email
  await rl.send(`${base}/send-confirmation`, {
    body: { order_id: order.id, email: order.email },
    retries: 3,
  })

  // Queue inventory update with serial execution
  await rl.send(`${base}/update-inventory`, {
    body: { items: order.items },
    queue: "inventory", // One at a time
  })

  return Response.json({ status: "accepted" }, { status: 202 })
}
```
The `queue: "inventory"` parameter ensures inventory updates run one at a time — no race conditions when multiple orders come in simultaneously.
Note: In Cloudflare Workers, secrets are accessed via the `env` parameter passed to your fetch handler (not `process.env`). The SDK is instantiated per request, since each request receives its own `env`.
## Example: Hourly usage reports
Aggregate usage data from your D1 database every hour and store it in R2:
```ts
await rl.cron("hourly-usage-report", {
  url: "https://my-worker.username.workers.dev/reports/usage",
  schedule: "0 * * * *",
  method: "POST",
})
```
Handle it in your Worker's fetch handler by routing to a dedicated function:
```ts
// In your main fetch handler, add a route for this:
// if (url.pathname === "/reports/usage") return handleUsageReport(env)

async function handleUsageReport(env: Env) {
  const hourAgo = new Date(Date.now() - 3600000).toISOString()

  // Query D1 for usage data
  const { results } = await env.DB.prepare(
    "SELECT user_id, COUNT(*) as requests FROM api_logs WHERE created_at > ? GROUP BY user_id"
  ).bind(hourAgo).all()

  // Store report in R2
  const key = `reports/usage/${new Date().toISOString()}.json`
  await env.REPORTS_BUCKET.put(key, JSON.stringify(results))

  return Response.json({ rows: results.length, key })
}
```
## Example: Queued webhook processing

When your Worker receives a webhook from a third party and needs to forward it to a downstream service, use `rl.send()` to queue the processing with retries:
```ts
import Runlater from "runlater-js"

async function handleWebhook(request: Request, env: Env) {
  const payload = await request.json()
  const rl = new Runlater(env.RUNLATER_KEY)

  // Queue the webhook for reliable processing with retries
  await rl.send("https://my-worker.username.workers.dev/process-webhook", {
    body: payload,
    retries: 5,
    timeout: 30000,
  })

  // Respond immediately to the sender
  return new Response("OK", { status: 200 })
}
```
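The guide doesn't show the `/process-webhook` route itself, so here is a sketch of what that handler might look like. The handler name and the downstream URL are assumptions; the key point is returning a non-2xx status on failure so Runlater knows to retry.

```typescript
// Hypothetical /process-webhook handler: forward the queued payload
// downstream, and report failure via the status code so Runlater retries.
async function processWebhook(request: Request): Promise<Response> {
  const payload = await request.json()

  // Downstream URL is illustrative
  const res = await fetch("https://downstream.example.com/ingest", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(payload),
  })

  if (!res.ok) {
    // Non-2xx tells Runlater this attempt failed and should be retried
    return new Response("Downstream error", { status: 502 })
  }
  return new Response("OK", { status: 200 })
}
```

Because Runlater retries on non-2xx, the handler doesn't need its own retry loop; it only needs to report failure honestly.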
## Advanced: Declarative task management

Use `rl.sync()` to define all your cron tasks in code. Run it on deploy to keep tasks in sync:
```ts
import Runlater from "runlater-js"

// Setup script runs locally — uses process.env
const rl = new Runlater(process.env.RUNLATER_KEY!)

const BASE = "https://my-worker.username.workers.dev"

await rl.sync({
  tasks: [
    {
      name: "kv-cleanup",
      url: `${BASE}/cron/cleanup`,
      schedule: "0 2 * * *",
    },
    {
      name: "hourly-usage-report",
      url: `${BASE}/reports/usage`,
      schedule: "0 * * * *",
    },
    {
      name: "daily-billing-check",
      url: `${BASE}/billing/reconcile`,
      schedule: "0 6 * * *",
    },
  ],
  deleteRemoved: true,
})
```
## Best practices

- **Authenticate your cron endpoints.** Anyone can call a public Worker URL. Verify requests using Runlater's webhook signatures (the `x-runlater-signature` header), or set a shared secret in the task headers and check it in your Worker:

  ```ts
  const secret = request.headers.get("x-cron-secret")
  if (secret !== env.CRON_SECRET) {
    return new Response("Unauthorized", { status: 401 })
  }
  ```

- **Keep handlers idempotent.** Retries mean your endpoint may be called more than once. Use KV's `put()` (which overwrites) instead of append-only patterns.
- **Return 2xx for success, 4xx/5xx for failure.** Runlater uses the HTTP status code to determine if the task succeeded. A non-2xx response triggers a retry (up to the configured limit). Retries use exponential backoff.
- **Use queues for serial work.** If two tasks shouldn't run at the same time (e.g. inventory updates), use the `queue` parameter to serialize them.
- **Set up notifications.** Configure email or Slack alerts in your organization settings so you know immediately when a task fails.
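To make the idempotency point concrete, here is a sketch of a handler that stays correct when retried. The in-memory `Map` stands in for a KV namespace and the report key scheme is hypothetical; the point is that an overwrite-style `put` makes a retry recompute and store the same value, rather than appending a duplicate.

```typescript
// Toy KV stand-in (a real Worker would use a binding like env.REPORTS).
// Because writes overwrite, running the handler twice is harmless.
const kv = new Map<string, string>()

async function storeDailyReport(date: string, rows: number[]) {
  const key = `reports/${date}` // hypothetical key scheme
  const total = rows.reduce((sum, n) => sum + n, 0)

  // Overwrite-style write: a retry recomputes and stores the same value
  kv.set(key, JSON.stringify({ total, count: rows.length }))
}

await storeDailyReport("2024-01-15", [3, 5, 8])
await storeDailyReport("2024-01-15", [3, 5, 8]) // retry: same state, no duplicates
```

An append-only version (say, pushing rows onto a list on every call) would double-count after a retry; keying writes by a stable identifier avoids that.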