Disclosure: The opinions and perspectives shared here are solely those of the author and do not reflect the viewpoints of crypto.news’ editorial team.
The second quarter of 2025 has served as a wake-up call for blockchain scaling. As investment continues to flow into rollups and sidechains, the weaknesses in the layer-2 model are becoming more evident. The initial promise of L2 solutions was straightforward: to scale L1s. However, escalating costs, delays, and liquidity fragmentation are increasingly problematic.
Summary
- L2s were designed to scale Ethereum, but they have introduced new challenges, relying on centralized sequencers that can become points of failure.
- At their essence, L2s manage sequencing and state computation, employing either Optimistic or ZK Rollups for settlement on L1. Each comes with trade-offs: lengthy finality in Optimistic Rollups and substantial computational costs in ZK Rollups.
- The future of efficiency lies in separating computation from verification — using centralized supercomputers for computation while decentralized networks handle parallel verification, enabling scalability without compromising security.
- The outdated “total order” model of blockchains can be transformed; shifting to local, account-based ordering could unleash significant parallelism, resolving the “L2 compromise” and establishing a robust web3 foundation.
Emerging use cases, such as stablecoin payments, are beginning to challenge the L2 paradigm, raising questions about whether L2s are genuinely secure and whether their sequencers act as single points of failure and censorship. This often leads to the skeptical view that fragmentation may simply be an unavoidable cost of web3.
Are we constructing a future on a stable foundation or merely a fragile structure? L2 solutions must confront these challenges. If Ethereum’s (ETH) core consensus layer were inherently fast, cost-effective, and infinitely scalable, the current L2 ecosystem would be redundant. Numerous rollups and sidechains have been proposed as “L1 enhancements” to alleviate the fundamental limitations of the underlying L1s. This constitutes a form of technical debt, a complex, fragmented solution burdening web3 users and developers.
To address these issues, it is essential to deconstruct the L2 concept down to its fundamental components, revealing a path toward a more resilient and efficient design.
An anatomy of L2s
Structure determines function, a principle that applies equally to biology and computer systems. To determine the right structure and architecture for L2s, we must scrutinize their functions in detail.
Each L2 fundamentally performs two critical functions: Sequencing—ordering transactions—and computing and proving the new state. A sequencer, whether centralized or decentralized, gathers, orders, and batches user transactions. This batch is executed, generating an updated state (e.g., new token balances), which must ultimately be settled on L1 for security via either Optimistic or ZK Rollups.
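The two functions above can be sketched in a few lines. This is a toy model, not any real rollup's implementation; the names (`Tx`, `sequence`, `compute_state`) are illustrative.

```python
# Toy model of the two core L2 functions: sequencing and state computation.
# All names here are illustrative, not any real rollup's API.

from dataclasses import dataclass

@dataclass
class Tx:
    sender: str
    recipient: str
    amount: int

def sequence(mempool: list[Tx]) -> list[Tx]:
    """The sequencer's job: pick an order and batch transactions.
    This sketch simply keeps arrival order (FIFO)."""
    return list(mempool)

def compute_state(balances: dict[str, int], batch: list[Tx]) -> dict[str, int]:
    """Execute the batch to produce the new state (token balances).
    Transfers exceeding the sender's balance are skipped."""
    state = dict(balances)
    for tx in batch:
        if state.get(tx.sender, 0) >= tx.amount:
            state[tx.sender] -= tx.amount
            state[tx.recipient] = state.get(tx.recipient, 0) + tx.amount
    return state

genesis = {"alice": 100, "bob": 0}
batch = sequence([Tx("alice", "bob", 30), Tx("bob", "alice", 10)])
new_state = compute_state(genesis, batch)
# A commitment to new_state is what ultimately gets settled on L1,
# via either an optimistic fraud-proof window or a ZK validity proof.
```

The point of the sketch is that both steps are ordinary computation; only the settlement of the resulting state commitment involves the L1.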
Optimistic Rollups presuppose all state transitions are legitimate and involve a challenge period (commonly 7 days) during which anyone can present fraud proofs. This introduces a significant user experience trade-off due to prolonged finality times. In contrast, ZK Rollups utilize zero-knowledge proofs to mathematically verify the accuracy of each state transition before it reaches L1, enabling near-instant finality. However, they require intensive computational resources and are complex to develop. ZK provers may also exhibit bugs, leading to severe repercussions, and formal verification, if feasible, is very costly.
Sequencing is a governance and design decision for every L2. Some opt for a centralized sequencer for operational efficiency, accepting the risk of censorship, while others choose decentralization for fairness and robustness. Ultimately, each L2 decides how to implement sequencing.
In terms of state claim generation and verification, there is significant room for efficiency gains. Once a transaction batch is sequenced, computing the next state is a purely computational task that could be performed by a single supercomputer optimized for speed, free of the burdens of decentralization. This supercomputer could even be shared among multiple L2s.
After claiming the new state, its verification can be executed in parallel. A vast network of verifiers can work concurrently to confirm the claim. This parallelization is also the core philosophy behind Ethereum’s stateless clients and high-performance implementations like MegaETH.
Parallel verification is infinitely scalable
Regardless of how quickly L2s (and that supercomputer) produce claims, the verification network can always keep pace by adding verifiers. Latency is bounded only by the time to verify a single claim, which is minimal and fixed. This approach epitomizes the theoretical optimum of effective decentralization: verifying rather than computing.
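The compute/verify split can be illustrated with a small sketch: one fast (possibly centralized) prover produces claims, while a pool of verifiers re-checks them concurrently. The function names and the toy "state" (a sum) are assumptions for illustration only.

```python
# Sketch of the compute/verify split: a single fast prover produces state
# claims; independent verifiers re-check them in parallel.

from concurrent.futures import ThreadPoolExecutor

def compute_claim(batch: list[int]) -> int:
    """The 'supercomputer': computes the claimed next state (here, a sum)."""
    return sum(batch)

def verify_claim(batch: list[int], claimed: int) -> bool:
    """A verifier independently recomputes and compares the claim.
    Each batch is independent, so verification parallelizes trivially."""
    return sum(batch) == claimed

batches = [[1, 2, 3], [10, 20], [5, 5, 5]]
claims = [compute_claim(b) for b in batches]      # fast, sequential prover

with ThreadPoolExecutor(max_workers=3) as pool:   # parallel verifiers
    results = list(pool.map(verify_claim, batches, claims))

# Throughput scales with the number of verifiers; latency stays fixed at
# the cost of verifying one batch.
```

Adding verifiers raises throughput without touching latency, which is the scalability property the argument above relies on.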
Following sequencing and state verification, the L2’s responsibilities are nearly fulfilled. The final task is to publish the verified state to a decentralized network, the L1, for ultimate settlement and security.
This final step highlights a major concern: blockchains are inadequate settlement layers for L2s. The primary computational tasks are off-chain, yet L2s incur considerable costs to finalize on an L1. They endure dual overhead: the L1’s limited throughput, hampered by its entirely linear transaction ordering, results in congestion and high data posting costs. Additionally, they must tolerate the inherent latency of the L1 finality.
For ZK Rollups, this latency can be measured in minutes. For Optimistic Rollups, it’s compounded by a week-long challenge period, a necessary yet costly security trade-off.
Farewell, the “total order” myth in web3
Ever since Bitcoin (BTC), there has been a strenuous effort to fit all blockchain transactions into a single total order. Yet this "total order" model is an expensive fallacy and is excessive for L2 settlements. It is ironic that a leading decentralized network behaves as if it were a single-threaded desktop computer!
It’s time for a change. The future lies in local, account-based ordering, where only the transactions interacting with the same account require sequential ordering, unlocking substantial parallelism and genuine scalability.
Global ordering does imply local ordering, but imposing a global order everywhere is naive and overly costly. After 15 years of blockchains, it is high time we reassess and build something better. The field of distributed systems has evolved from the strong consistency models of the 1980s (which blockchains still follow today) to the strong eventual consistency models formalized in the 2010s, which embrace parallelism and concurrency. The web3 sector must adapt, leaving behind outdated notions to embrace these scientific advances.
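A minimal sketch of local, account-based ordering: transactions are partitioned into per-account queues, each ordered locally (here by nonce), with no global order imposed across accounts. The helper name and data shapes are illustrative assumptions.

```python
# Sketch of local, account-based ordering: only transactions touching the
# same account need a sequential order; disjoint accounts are independent.

from collections import defaultdict

def group_by_account(txs: list[tuple[str, int]]) -> dict[str, list[int]]:
    """Partition (account, nonce) transactions into per-account queues;
    each queue is ordered locally, with no order imposed across queues."""
    queues: dict[str, list[int]] = defaultdict(list)
    for account, nonce in txs:
        queues[account].append(nonce)
    for q in queues.values():
        q.sort()  # local order per account (e.g., by nonce)
    return dict(queues)

txs = [("alice", 2), ("bob", 1), ("alice", 1), ("carol", 1), ("bob", 2)]
queues = group_by_account(txs)
# The per-account queues are causally independent and could execute
# concurrently, which is the parallelism a total order forfeits.
```

Because the queues share no accounts, an executor could process all of them in parallel, whereas a total order would serialize every transaction behind every other.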
The era of the L2 compromise is finished. We need to establish a foundation tailored for the future, facilitating the next wave of web3 acceptance.