Blobs after EIP-4844: The New L2 Cost Curve and Its Trade-Offs

EIP-4844 moved data availability for rollups into a new lane: blobs. The result is a different cost curve for every L2, especially once traffic starts to spike and batches compete for the same limited blob space. Cheap most of the time, sometimes spiky, and highly sensitive to batch sizing and timing—that’s the new reality.
What actually changed with blobs
Before EIP-4844, rollups used calldata for data availability. That meant competing with every other L1 transaction at 16 gas per non-zero byte (4 per zero byte). Blobs decouple most L2 data from the L1 gas market and price it with a separate, EIP-1559-like mechanism. The chain targets three blobs per block (with a cap of six at launch) and nudges the blob basefee up or down depending on recent usage.
In practice, this splits L2 costs into two buckets: a small, steady fee for posting commitments and metadata to L1, and a variable, sometimes volatile fee for the blob data itself. When blocks are near the blob target, prices stay low. When a few rollups push big batches at once (think major token launches or heavier-than-usual proof activity), the blob basefee ratchets up for a short window.
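The update rule itself is compact. The sketch below follows the pseudocode and constants in the EIP-4844 specification: the basefee is an integer approximation of an exponential in the "excess" blob gas accumulated above target.

```python
# Constants from the EIP-4844 specification.
MIN_BLOB_BASE_FEE = 1                      # wei per blob gas
BLOB_BASE_FEE_UPDATE_FRACTION = 3_338_477  # bounds the per-block change (~12.5% when blocks are full)
TARGET_BLOB_GAS_PER_BLOCK = 393_216        # 3 blobs x 131,072 blob gas each

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer Taylor-series approximation of factor * e^(numerator/denominator),
    as defined in EIP-4844."""
    i, output, accum = 1, 0, factor * denominator
    while accum > 0:
        output += accum
        accum = accum * numerator // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    """Basefee rises exponentially with blob gas consumed above target."""
    return fake_exponential(MIN_BLOB_BASE_FEE, excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

def next_excess(parent_excess: int, blob_gas_used: int) -> int:
    """Per-block update: excess accumulates above target and floors at zero."""
    return max(parent_excess + blob_gas_used - TARGET_BLOB_GAS_PER_BLOCK, 0)
```

Sustained above-target usage compounds the excess block after block, which is why spikes ratchet quickly and then decay once demand falls back under target.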
The new L2 cost curve, in plain terms
Rollup cost per transaction now moves through three regimes as load rises:
- Low load: blobs are cheap, big batches amortize overheads, per-tx cost drops.
- Moderate load: prices nudge up smoothly; batching helps, but savings taper.
- High load: blob basefee rises sharply; latency/batch size choices dominate outcomes.
Two tiny scenarios show the dynamics. First: a quiet hour where an L2 posts roughly 100 kB every 12 seconds; each batch fits in a single 128 kB blob with headroom, and per-user fees fall. Second: a hyped NFT mint draws multiple L2s to post multi-blob batches at once; for 15–30 minutes, the blob basefee jumps and smaller batches can cost less per tx than jumbo ones, because the marginal blob is disproportionately expensive.
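The second-blob penalty in the mint scenario is easy to see with toy numbers. The fee level and batch sizes below are illustrative, not market data:

```python
BLOB_GAS_PER_BLOB = 131_072      # blob gas units per blob (EIP-4844)
blob_base_fee_wei = 40 * 10**9   # assumed spiky blob basefee: 40 gwei per blob gas

def blob_cost_eth(n_blobs: int) -> float:
    """Total DA cost in ETH for a batch occupying n_blobs blobs."""
    return n_blobs * BLOB_GAS_PER_BLOB * blob_base_fee_wei / 10**18

# One blob carrying 1,000 txs vs. two blobs carrying 1,400 txs:
per_tx_one_blob = blob_cost_eth(1) / 1_000
per_tx_two_blobs = blob_cost_eth(2) / 1_400
# The two-blob batch is more expensive per tx because the second
# blob is paid in full while only partly used.
```

The break-even point is simply where the extra transactions fully fill the marginal blob; below that fill level, a smaller batch wins during spikes.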
What drives your actual bill
Every L2 pays four big items, even if accounting varies by stack:
- Data availability: blob basefee times blob size; occasionally calldata fallback.
- Inclusion overhead: L1 gas for headers, commitments, and on-chain bookkeeping.
- Proof costs: proving and verification (dominant for many ZK systems).
- Sequencing and infra: mempools, state growth, and any MEV kickbacks or rebates.
Blobs slash the first item relative to calldata, but they don’t erase the others. If your system’s proof pipeline is heavy, the savings mainly show up as extra headroom for users or more frequent finality proofs.
Typical numbers, without hand-waving
Calldata costs 16 gas per non-zero byte. At a 20 gwei L1 gas price, a 1 MB batch would cost roughly 0.32 ETH in calldata alone (assuming mostly non-zero bytes). With blobs, that same payload spreads across eight 128 kB blobs priced by the blob basefee. Even during a busy period where each blob cost 0.002 ETH, the whole payload would come to about 0.016 ETH, an order-of-magnitude reduction versus calldata. When the blob basefee is calm, that gap widens considerably.
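The back-of-envelope numbers are easy to reproduce. The busy-period blob basefee below is an assumption for illustration:

```python
# Calldata side: 16 gas per non-zero byte, pessimistically assuming
# every byte is non-zero.
CALLDATA_GAS_PER_NONZERO_BYTE = 16
payload_bytes = 1_000_000            # the 1 MB batch
gas_price_wei = 20 * 10**9           # 20 gwei L1 gas price

calldata_cost_eth = (payload_bytes * CALLDATA_GAS_PER_NONZERO_BYTE
                     * gas_price_wei / 10**18)

# Blob side: ceil-divide the payload into 128 kB blobs.
BLOB_GAS_PER_BLOB = 131_072
blobs_needed = -(-payload_bytes // BLOB_GAS_PER_BLOB)   # 8 blobs
blob_base_fee_wei = 15 * 10**9       # assumed busy-period blob basefee
blob_cost_eth = blobs_needed * BLOB_GAS_PER_BLOB * blob_base_fee_wei / 10**18
```

With these inputs the calldata path lands at 0.32 ETH and the blob path near 0.016 ETH, about a twenty-fold gap even at an elevated blob basefee.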
The catch: blobs are capped per block. When demand crowds into a narrow time window, the basefee steps up, then decays as pressure eases. That means cost predictability improves on average but not in every minute of the day.
Cost components after EIP-4844
The table below summarises where costs sit now and how sensitive they are to traffic and design choices.
| Component | Main driver | Volatility | Design levers |
|---|---|---|---|
| Data availability (blob) | Blob basefee × blob count | Medium–high during spikes | Batch sizing, scheduling, compression |
| L1 inclusion overhead | Fixed gas for commitments/headers | Low | Efficient contracts, metadata packing |
| Proof generation | Prover hardware/time | Low–medium | Circuits, recursion, proof cadence |
| Proof verification | L1 gas for verifier | Low | Verifier optimisations, recursion |
| Operational costs | Infra and state growth | Low | Pruning, state rent policies |
For optimistic systems, proof generation is lighter, so blob price dominates. For ZK systems, the blob savings matter, but proof cadence and recursion have equal or greater impact on end-user fees.
Design trade-offs that actually change your bill
With blobs, the classic batching vs latency trade-off has sharper edges, and new levers appear. The most impactful choices tend to be these:
- Batch sizing: Larger batches amortize fixed overhead, but if you spill into an extra blob at a high basefee, you can raise average cost. Many teams cap batches just under common blob price inflection points.
- Scheduling: Posting during low-traffic windows can materially reduce spend. Simple heuristics—e.g., “wait one slot if blob basefee doubled”—often pay for themselves.
- Compression: Domain-aware compression (e.g., dictionary schemes for repetitive storage keys) keeps you inside one blob more often than generic gzip-only pipelines.
- Fallback logic: If blobs get too expensive, do you fall back to calldata, delay, or split across blocks? Each path moves the needle differently on UX and security assumptions.
- Proof cadence: ZK stacks can prove frequently for faster finality or batch proofs for lower average verification cost. Recursion lets you post many small proofs but pay once on L1.
- Data retention: Blobs are pruned after roughly 18 days (4096 epochs). If your protocol needs longer availability, you either re-post summaries or use an external archival layer.
None of these are purely theoretical. A rollup that posts every two minutes, uncompressed, can coast at low basefee and then get hammered during coordinated launches. Another that compresses and holds batches for 24–36 seconds may stay inside one blob consistently without users noticing added latency.
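A hold-or-post rule combining the batch-sizing and scheduling levers can be sketched in a few lines. The function name, thresholds, and maximum hold time below are illustrative choices, not values from any production sequencer:

```python
BLOB_BYTES = 131_072  # usable payload per blob is slightly less after field-element encoding

def should_post(batch_bytes: int, blob_base_fee: int, calm_fee: int,
                age_seconds: float, max_age: float = 36.0) -> bool:
    """Hold a batch briefly when posting now would spill into a barely-used
    extra blob at an elevated basefee; always post once max_age is reached."""
    if age_seconds >= max_age:
        return True                          # latency guardrail wins
    blobs_needed = -(-batch_bytes // BLOB_BYTES)          # ceil division
    last_blob_fill = batch_bytes - (blobs_needed - 1) * BLOB_BYTES
    spilling = blobs_needed > 1 and last_blob_fill < BLOB_BYTES // 4
    spiky = blob_base_fee > 2 * calm_fee     # "wait if basefee doubled"
    return not (spilling and spiky)
```

This encodes the heuristic from the text directly: the only case worth delaying is a marginal blob that is both barely filled and expensively priced, and the `max_age` cap keeps user-visible latency bounded.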
A practical way to model your curve
Teams that forecast spend with blobs tend to follow a simple modelling loop and update it weekly as traffic shifts.
- Collect a month of blob basefee data and build a distribution (quiet, typical, spiky windows).
- Simulate batch sizes against that distribution with and without compression, noting how often you spill into extra blobs.
- Overlay proof cadence options (e.g., per-batch vs recursive daily) to see verification cost per tx.
- Run sensitivity checks: double traffic, add a 30-minute spike, and test a 2× gas environment.
- Pick guardrails: a max blob count per batch, a delay threshold, and a compression floor.
This doesn’t need exotic math. Even a simple spreadsheet surfaces where the curve kinks, which is where most savings hide.
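The simulation step of the loop fits in a few lines of Python. The toy basefee distribution and per-tx byte size below are illustrative stand-ins for the month of collected data:

```python
import random

BLOB_GAS = 131_072  # blob gas per blob (EIP-4844)

def avg_da_cost_per_tx(txs_per_batch: int, bytes_per_tx: int,
                       basefee_samples: list[int]) -> float:
    """Mean per-tx DA cost (in blob-gas-price units) for one batch size,
    averaged over an empirical blob-basefee distribution."""
    batch_bytes = txs_per_batch * bytes_per_tx
    blobs = -(-batch_bytes // BLOB_GAS)  # ceil: spilling means a whole extra blob
    per_batch = [blobs * BLOB_GAS * fee for fee in basefee_samples]
    return sum(per_batch) / len(per_batch) / txs_per_batch

# Toy distribution: calm 90% of windows, 50x spikes otherwise.
random.seed(0)
samples = [1 if random.random() < 0.9 else 50 for _ in range(10_000)]

just_under = avg_da_cost_per_tx(870, 150, samples)  # 130,500 B: one blob
just_over = avg_da_cost_per_tx(900, 150, samples)   # 135,000 B: spills into two
```

Even this toy version surfaces the kink the text describes: the slightly larger batch pays for a whole second blob, so its per-tx cost jumps rather than falling.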
Optimistic vs ZK under blob pricing
Optimistic rollups benefit directly and immediately. Lower DA cost drops per-tx fees, especially for high-throughput apps like games and DEX routers. Their main risk is short, spiky windows that coincide with user bursts. Clipping batch sizes and scheduling around peaks avoids paying for a second blob at a premium.
ZK rollups see a more nuanced picture. Blobs cut DA costs, but proving still dominates at lower volumes. As usage grows, recursion plus blobs compounds: you can post frequent small proofs and roll them up later, keeping UX snappy while paying a reasonable L1 verification bill.
Micro-optimisations that punch above their weight
Three small tweaks often deliver outsized returns:
- Deterministic padding: Pack batches to just under a pre-set limit so the second blob never triggers by accident due to minor variance.
- Content-aware delta encoding: Many state updates share prefixes; custom codecs shrink blobs by 10–25% beyond gzip in live systems.
- Two-tier posting: Publish commitments immediately, defer bulk data by one or two slots when basefee jumps; the user sees fast confirmation while you avoid peak pricing.
These aren’t exotic, and they respect the security model—data still lands on Ethereum within minutes, just not always in the most expensive slot.
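The deterministic-padding tweak amounts to a clipping pass over the pending queue. The safety margin here is an illustrative choice; a real sequencer would tune it to its observed size variance:

```python
BLOB_BYTES = 131_072
SAFETY_MARGIN = 4_096  # illustrative headroom below the blob boundary

def clip_batch(pending: list[bytes]) -> tuple[list[bytes], list[bytes]]:
    """Fill a batch to just under one blob so minor size variance can
    never trigger a surprise second blob; the remainder rolls forward."""
    limit = BLOB_BYTES - SAFETY_MARGIN
    batch, total = [], 0
    for tx in pending:
        if total + len(tx) > limit:
            break
        batch.append(tx)
        total += len(tx)
    return batch, pending[len(batch):]
```

Because the cutoff is deterministic, the batch size distribution has a hard ceiling rather than a tail that occasionally crosses the blob boundary at the worst possible basefee.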
What to watch going forward
Blobs are transitional toward full danksharding, but they’re already a durable primitive. The shape of the cost curve will keep evolving with:
- Application surges that temporarily saturate blob targets.
- Better compression libraries tuned for rollup traces and storage keys.
- More aggressive recursion strategies reducing verifier costs.
- Tooling that forecasts blob basefee and auto-schedules batches.
The net effect is clear: L2s become more price-efficient on average, with brief periods where design discipline matters. Teams that respect the curve—plan for spill points, compress well, and schedule—will deliver steadier user fees without sacrificing finality or data integrity.


