
Cutting OpenNext KV Storage Costs on Cloudflare Workers

2026-05-11 · 5 min read

Learn English Sounds started spending $0.50 a day on Cloudflare KV storage. That does not sound like much until you realize it is for a static-ish content site that previously cost almost nothing to host. I dug in and found a classic OpenNext on Cloudflare pitfall. If you are running Next.js on Workers via the @opennextjs/cloudflare adapter, this one likely affects you too.

What is Cloudflare KV?

Cloudflare KV (short for Key-Value) is a globally distributed key-value store designed for read-heavy workloads. You write a value once and it eventually replicates to data centers worldwide so reads are fast from anywhere. Pricing has two components: operations (reads, writes, deletes) and storage (charged per GB-month). The first 1 GB of storage is included, then it is roughly $0.50 per additional GB-month. That second number is what gets you when something quietly fills the namespace with stale data.
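To make the pricing concrete, the storage side of the bill can be sketched like this (the $0.50 and 1 GB figures are the ones quoted above; check Cloudflare's current pricing before relying on them):

```javascript
// Back-of-envelope KV storage cost, using the rates quoted above.
const INCLUDED_GB = 1;          // first GB-month is included
const PRICE_PER_GB_MONTH = 0.5; // each additional GB-month

function monthlyStorageCost(storedGb) {
  const billableGb = Math.max(0, storedGb - INCLUDED_GB);
  return billableGb * PRICE_PER_GB_MONTH;
}

console.log(monthlyStorageCost(30)); // 30 GB stored → 14.5 ($/month)
```

At $0.50 a day the site was on track for roughly that order of monthly spend, which is how a "free" cache layer turns into a line item.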

For OpenNext, KV is where rendered pages get stashed so future requests do not have to re-render. It is fast, cheap per operation, and ideal for caching HTML and JSON. The trap is that nothing expires on its own: KV does support per-key TTLs, but the incremental cache writes its keys without one, so each key stays until something deletes it.

The setup

The site uses OpenNext to deploy Next.js to Cloudflare Workers. The incremental cache (the storage layer behind ISR and full route caching) is configured to use KV with a regional cache wrapper. The relevant config looks like this:

import { defineCloudflareConfig } from "@opennextjs/cloudflare";
import kvIncrementalCache from "@opennextjs/cloudflare/overrides/incremental-cache/kv-incremental-cache";
import { withRegionalCache } from "@opennextjs/cloudflare/overrides/incremental-cache/regional-cache";
import d1TagCache from "@opennextjs/cloudflare/overrides/tag-cache/d1-next-tag-cache";

export default defineCloudflareConfig({
  incrementalCache: withRegionalCache(kvIncrementalCache, {
    mode: "long-lived",
    bypassTagCacheOnCacheHit: true,
  }),
  tagCache: d1TagCache,
});

This is exactly what the OpenNext docs recommend. Static pages get rendered at build time, written to KV, and served from a regional Cache API layer on the way out. CPU usage drops because the worker rarely has to render anything. Storage cost was supposed to be a rounding error.

What is the regional cache?

This is worth pausing on, because it explains why pruning KV does not hurt performance. Cloudflare gives every Worker access to the Cache API, an ephemeral cache local to the edge data center the request hit. OpenNext's withRegionalCache wraps the KV incremental cache and uses that Cache API as a first layer.

The lookup order for any cached page is: regional Cache API first, then KV, then re-render. The first request in a region pulls the page out of KV and stores it in the local cache. Every subsequent request in that region is served locally at the edge without touching KV or running the worker's render code. With mode: "long-lived" the regional copies stick around as long as Cloudflare keeps them, which on a warm node is typically minutes to hours.

So KV is cold storage. It only gets read when a region's cache is cold or evicted. Pruning unreferenced KV keys does not change how often KV is read; it only changes how much you store. CPU does not move because nothing about the hot path changed.
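A minimal sketch of that tiering, with plain Maps standing in for the regional Cache API and KV, and a render callback standing in for Next.js; none of these names come from the OpenNext adapter:

```javascript
// Sketch of the tiered lookup order: regional cache → KV → re-render.
const regionalCache = new Map(); // per-data-center, fast, ephemeral
const kv = new Map();            // global, durable until deleted

function getPage(key, render) {
  // 1) Regional Cache API: served at the edge without touching KV.
  if (regionalCache.has(key)) return regionalCache.get(key);
  // 2) KV cold storage: read once, then promote into the regional layer.
  if (kv.has(key)) {
    const page = kv.get(key);
    regionalCache.set(key, page);
    return page;
  }
  // 3) Full miss: re-render, then populate both layers.
  const page = render();
  kv.set(key, page);
  regionalCache.set(key, page);
  return page;
}
```

Note what happens if a KV entry disappears while the regional copy is warm: step 1 still hits, so deleting cold keys never slows down a hot page.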

What I actually found in KV

I listed the keys in the namespace. There were 276,637 of them. Every single one started with incremental-cache/. Grouping by the second path segment told the real story:

$ wrangler kv key list --namespace-id <id> --remote \
    | jq -r '.[].name' | awk -F/ '{print $2}' \
    | sort | uniq -c | sort -rn | head

17896 Mqxc5oa6LB-nPqyGFrVbV
13068 zMp2Qi0YY-gVHmaXRB9Ce
 9108 zMlI9hb_vFtk705L8GDKL
 8748 -KcEM0U2hAvoQH7n5dyV5
  ...
 5895 W-RYs65XPM1m0Dg899-cz

Thirty-nine distinct prefixes. Each one is a Next.js BUILD_ID from a past deploy. Every next build generates a fresh build ID. Every deploy writes a fresh universe of cache entries scoped to that ID. The live worker only ever reads its own build's prefix. Old prefixes are pure dead weight, and nothing in OpenNext or Cloudflare cleans them up for you.

That is how an active site quietly accumulates 250,000+ orphaned keys.
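For scripting rather than eyeballing, the same grouping the shell pipeline does can be written in Node (an illustrative helper, not part of OpenNext):

```javascript
// Count KV keys per BUILD_ID (the second path segment of each key name),
// sorted descending — the same output shape as the jq/awk/sort pipeline.
function countByBuildId(keyNames) {
  const counts = new Map();
  for (const name of keyNames) {
    const parts = name.split("/");
    if (parts[0] !== "incremental-cache" || parts.length < 2) continue;
    const buildId = parts[1];
    counts.set(buildId, (counts.get(buildId) ?? 0) + 1);
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}
```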

Why your active build keeps growing too

There is a second pattern worth noticing in that list. The largest prefix had 17,896 keys, the next-largest 13,068, and most of the rest hovered around 6,500. The big one is the currently active build. It has more keys than the others because real traffic keeps populating new long-tail pages into it over time. That is normal and fine. The problem is everything below it.

The fix

The active worker only reads from incremental-cache/<current-BUILD_ID>/. So after every successful deploy, delete every key that does not start with that prefix. The keys you are deleting are guaranteed to be unreferenced by any live code path. This is not a cache invalidation that costs CPU; it is sweeping up garbage that never gets read.
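The whole fix reduces to one predicate. A sketch (the function name is mine, not OpenNext's):

```javascript
// A key is safe to delete iff it is a cache key that belongs
// to a build other than the currently deployed one.
function isStale(keyName, activeBuildId) {
  if (!keyName.startsWith("incremental-cache/")) return false; // not ours, leave alone
  return !keyName.startsWith(`incremental-cache/${activeBuildId}/`);
}
```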

A prune script that runs after deploy

I added a Node script to the project and chained it into the deploy command. It reads the build ID from the OpenNext build output, lists all KV keys, and bulk-deletes anything outside the active prefix in 10,000-key chunks (the wrangler bulk-delete limit).

// scripts/prune-kv-cache.mjs
// ---------------------------------------------------------------
// Deletes stale OpenNext incremental-cache entries from Cloudflare KV
// after every successful deploy. Run as a post-deploy step.
// ---------------------------------------------------------------

import { readFileSync, writeFileSync, mkdtempSync, rmSync } from "node:fs";
import { spawnSync } from "node:child_process";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Cloudflare KV namespace that backs OpenNext's incremental cache.
// Find this ID in wrangler.toml under [[kv_namespaces]] for NEXT_INC_CACHE_KV.
const KV_NAMESPACE_ID = "<your-namespace-id>";

// OpenNext writes cache keys under `incremental-cache/<BUILD_ID>/...`.
// The active worker only reads its own BUILD_ID, so any other prefix is garbage.
// We read the freshly-built BUILD_ID from disk so the script always matches
// the deploy that just happened.
const KEEP_BUILD_ID = readFileSync(".open-next/assets/BUILD_ID", "utf8").trim();

// wrangler's `kv bulk delete` accepts at most 10,000 keys per call.
const CHUNK_SIZE = 10000;

// 1) List every key in the namespace.
// maxBuffer is bumped to 512 MB because large sites can return tens of MB of JSON.
const list = spawnSync(
  "npx",
  ["wrangler", "kv", "key", "list", "--namespace-id", KV_NAMESPACE_ID, "--remote"],
  { encoding: "utf8", maxBuffer: 1024 * 1024 * 512 }
);
if (list.status !== 0) {
  console.error(list.stderr);
  throw new Error("wrangler kv key list failed; aborting prune");
}

// 2) Decide what to keep vs. delete.
// Keep:   incremental-cache/<KEEP_BUILD_ID>/...   (the live worker reads from here)
// Delete: incremental-cache/<anything else>/...  (orphaned from previous deploys)
// Any non-cache keys are left alone by the second `startsWith` check.
const keepPrefix = `incremental-cache/${KEEP_BUILD_ID}/`;
const toDelete = JSON.parse(list.stdout)
  .map((k) => k.name)
  .filter((n) => n.startsWith("incremental-cache/") && !n.startsWith(keepPrefix));

// 3) Bulk-delete the orphaned keys in 10k-key chunks.
// wrangler reads each chunk from a JSON file on disk, so we use a temp directory
// and clean it up at the end regardless of success or failure.
const workDir = mkdtempSync(join(tmpdir(), "kv-prune-"));
for (let i = 0; i < toDelete.length; i += CHUNK_SIZE) {
  const chunkPath = join(workDir, `chunk_${i}.json`);
  // wrangler expects a JSON array of key names, e.g. ["key1", "key2", ...]
  writeFileSync(chunkPath, JSON.stringify(toDelete.slice(i, i + CHUNK_SIZE)));
  const result = spawnSync(
    "npx",
    ["wrangler", "kv", "bulk", "delete", "--namespace-id", KV_NAMESPACE_ID, "--remote", chunkPath],
    { stdio: "inherit" } // stream wrangler's progress straight to the terminal
  );
  if (result.status !== 0) {
    rmSync(workDir, { recursive: true, force: true });
    throw new Error(`wrangler kv bulk delete failed on chunk at offset ${i}`);
  }
}

// 4) Clean up the temp directory and report what we did.
rmSync(workDir, { recursive: true, force: true });
console.log(`Pruned ${toDelete.length} stale cache keys.`);

Then I wired it into the deploy chain in package.json so it runs every time, automatically:

"deploy": "opennextjs-cloudflare build && opennextjs-cloudflare deploy --cacheChunkSize 1000 && npm run prune-kv-cache",
"prune-kv-cache": "node scripts/prune-kv-cache.mjs"

The order matters. Build first so the new BUILD_ID exists on disk. Deploy second so the new worker is live and reading from the new prefix. Prune third so the old prefix becomes safe to delete the moment the rollover completes.

The result

On the site I cleaned up, KV went from 276,637 keys across 39 builds to 7,619 keys in the single active build, a roughly 97% reduction. Projected daily storage cost dropped from $0.50 to a few cents. No code path changes, no caching strategy changes, no CPU regression. Regional cache still wraps KV exactly as before, so hot reads come from the Cache API and never touch storage.

When you should not prune

Pruning is the right default for single-version deploys, but there are real situations where it can hurt you.

Instant rollback as a safety net. If you rely on wrangler rollback to flip back to a prior worker version in seconds, the prior version's cache is gone after a prune. The rollback still works, but the first wave of requests after rollback will all miss and re-render. CPU spikes for a few minutes. Modify the script to keep the last N build IDs instead of just the active one if this matters.
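A sketch of that keep-last-N variant, assuming you record build IDs somewhere at deploy time (a KV listing alone does not tell you deploy order; the names and the newest-last log format here are hypothetical):

```javascript
// Prune filter that spares the last N deploys, so an instant rollback
// still lands on a warm cache. `buildLog` is newest-last, e.g. read from
// a file you append the BUILD_ID to on every deploy.
function staleKeys(keyNames, buildLog, keepLast = 3) {
  const keep = new Set(buildLog.slice(-keepLast));
  return keyNames.filter((name) => {
    const [root, buildId] = name.split("/");
    return root === "incremental-cache" && !keep.has(buildId);
  });
}
```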

Rolling releases or canary deploys. If Cloudflare Rolling Releases or any kind of gradual rollout is in play, multiple worker versions are serving traffic at the same time. Each one needs its own build prefix intact. Do not prune until the rollout reaches 100%.

In-flight requests during deploy. There is always a brief window where the old worker may still serve a few stragglers right as the new one comes up. They will miss the cache and re-render once. For most sites this is invisible. Worth knowing if you handle bursty traffic.

A note on R2 as an alternative

OpenNext also ships an R2-backed incremental cache. R2 storage is roughly 33 times cheaper than KV (about $0.015 vs $0.50 per GB-month at the time of writing), and the regional cache wrapper sits in front of it the same way it sits in front of KV. So your hot-path performance does not change, but cold storage gets dramatically cheaper. Pruning still makes sense for hygiene, but cost pressure mostly disappears. If your site is large enough that even the active build's cache is sizeable, switching to R2 is worth a look.
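For the cost comparison itself, using the rates quoted above (verify against Cloudflare's current pricing pages before acting on them):

```javascript
// Where the "roughly 33x cheaper" figure comes from.
const KV_PER_GB_MONTH = 0.5;   // KV, beyond the included 1 GB
const R2_PER_GB_MONTH = 0.015; // R2 standard storage

function storageCost(gb, pricePerGb) {
  return gb * pricePerGb;
}

const ratio = KV_PER_GB_MONTH / R2_PER_GB_MONTH; // ≈ 33.3
```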
