Merging incomparable linearizations

Posted on: December 2, 2023 22:12 UTC

In the domain of optimizing transaction ordering for blockchains, specifically Bitcoin, a new approach to merging linearizations has been proposed and is under scrutiny.

The concept revolves around reordering transactions within blocks to create more efficient structures without decreasing the overall feerate, a measure of the transaction fee relative to its size. A proof sketch suggests that a prefix-intersection merging algorithm can improve, or at least not worsen, the ordering of transactions.

This algorithm operates by comparing two orderings of the same set of transactions, known as linearizations. It identifies transactions common to the leading parts of the two sequences and reorganizes them in a way that maintains or improves feerate efficiency. The restructuring moves certain transactions to the front to form a new group, or "chunk," while ensuring that this new chunk's feerate is greater than or equal to the feerate of the first chunk of either linearization. This condition is critical because it preserves or improves the economic incentives for miners.
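For concreteness, here is a minimal Python sketch of one common way a linearization can be split into such chunks: transactions are scanned in order, and a group is merged into the group before it whenever it pays a higher feerate, so that chunk feerates end up non-increasing. The `Tx` class, its fee and size fields, and the `chunks` helper are illustrative names, not definitions taken from the post.

```python
from dataclasses import dataclass
from fractions import Fraction


@dataclass(frozen=True)
class Tx:
    name: str
    fee: int    # total fee paid (illustrative units)
    size: int   # transaction size or weight (illustrative units)


def feerate(fee: int, size: int) -> Fraction:
    return Fraction(fee, size)


def chunks(linearization: list[Tx]) -> list[list[Tx]]:
    """Split a linearization into chunks with non-increasing feerates.

    Each transaction starts as its own group; whenever a group pays a higher
    feerate than the group before it, the two are merged. The cumulative
    (size, fee) points at the resulting chunk boundaries form the
    linearization's feerate diagram.
    """
    groups: list[tuple[list[Tx], int, int]] = []  # (transactions, total fee, total size)
    for tx in linearization:
        cur = ([tx], tx.fee, tx.size)
        while groups and feerate(cur[1], cur[2]) > feerate(groups[-1][1], groups[-1][2]):
            prev = groups.pop()
            cur = (prev[0] + cur[0], prev[1] + cur[1], prev[2] + cur[2])
        groups.append(cur)
    return [group for group, _, _ in groups]
```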

The algorithm also relies on the observation that rearranging transactions within an individual chunk does not degrade the feerate diagram: at worst the diagram is unchanged, and at best the chunk subdivides into smaller chunks, potentially increasing efficiency. Each step in this iterative process is designed to move some transactions to the front of both linearizations being merged without degrading either feerate diagram.
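The "does not degrade the diagram" condition can be made concrete by treating the feerate diagram as the piecewise-linear curve through the cumulative (size, fee) points at chunk boundaries, and checking that one curve is nowhere below the other. The sketch below does this under that assumption, reusing the `Tx` and `chunks` helpers from above; `diagram`, `fee_at`, and `at_least_as_good` are illustrative names.

```python
from fractions import Fraction

# Builds on the Tx and chunks() helpers sketched earlier.


def diagram(linearization: list[Tx]) -> list[tuple[int, int]]:
    """Cumulative (size, fee) points at the chunk boundaries of a linearization."""
    points = [(0, 0)]
    size = fee = 0
    for chunk in chunks(linearization):
        size += sum(tx.size for tx in chunk)
        fee += sum(tx.fee for tx in chunk)
        points.append((size, fee))
    return points


def fee_at(points: list[tuple[int, int]], size: int) -> Fraction:
    """Fee accumulated after `size` units, interpolating linearly between points."""
    for (s0, f0), (s1, f1) in zip(points, points[1:]):
        if size <= s1:
            return Fraction(f0) + Fraction(f1 - f0, s1 - s0) * (size - s0)
    return Fraction(points[-1][1])


def at_least_as_good(a: list[Tx], b: list[Tx]) -> bool:
    """True if a's diagram is nowhere below b's (both linearize the same transactions)."""
    pa, pb = diagram(a), diagram(b)
    sizes = sorted({s for s, _ in pa} | {s for s, _ in pb})
    # Both curves are piecewise linear, so comparing at every breakpoint suffices.
    return all(fee_at(pa, s) >= fee_at(pb, s) for s in sizes)
```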

The core logic is that when the moved set is drawn from the first chunk of one linearization, those transactions can be reordered according to the other linearization without any negative consequences. And because the set moved to the front has a feerate higher than or equal to that of either original first chunk, while the relative order within both the moved and the remaining transactions is preserved, each step stays valid and does not degrade either diagram.
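One way to read the step described above is sketched below, using the same illustrative `Tx` and `chunks` helpers: intersect the first chunk of each linearization with every prefix of the other, keep the highest-feerate intersection found, and move it to the front of both, ordered as it appears in the linearization whose prefixes were scanned. Because each first chunk is itself one of the candidate intersections, the moved set's feerate is at least that of either first chunk. This is an interpretation of the post's prefix-intersection step, not a verbatim implementation; `merge_step` and `set_feerate` are made-up names.

```python
from fractions import Fraction

# Builds on the Tx and chunks() helpers sketched earlier.


def set_feerate(txs: list[Tx]) -> Fraction:
    return Fraction(sum(tx.fee for tx in txs), sum(tx.size for tx in txs))


def merge_step(lin1: list[Tx], lin2: list[Tx]) -> tuple[list[Tx], list[Tx], list[Tx]]:
    """One prefix-intersection step on two non-empty linearizations of the same set.

    Returns (moved, rest1, rest2): the transactions moved to the front, plus the
    two linearizations with those transactions removed.
    """
    first1 = {tx.name for tx in chunks(lin1)[0]}
    first2 = {tx.name for tx in chunks(lin2)[0]}

    best: list[Tx] | None = None
    for chunk_names, other in ((first1, lin2), (first2, lin1)):
        picked: list[Tx] = []
        for tx in other:           # grow prefixes of `other` one transaction at a time
            if tx.name in chunk_names:
                picked.append(tx)  # intersection of the chunk with the current prefix
                if best is None or set_feerate(picked) > set_feerate(best):
                    best = list(picked)

    assert best is not None  # both inputs are non-empty linearizations of the same set
    moved_names = {tx.name for tx in best}
    rest1 = [tx for tx in lin1 if tx.name not in moved_names]
    rest2 = [tx for tx in lin2 if tx.name not in moved_names]
    return best, rest1, rest2
```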

As the algorithm progresses, the two linearizations gradually become identical, since at every step the selected transactions are moved to the front of both. Consequently, the final linearization is argued to be at least as good as, and possibly better than, either of the original sequences. Further details and a formal proof are available for scrutiny and discussion in the original Delving Bitcoin thread.
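As a closing illustration, the step above can be iterated until both linearizations are exhausted. The sketch below builds on the earlier helpers; the four-transaction cluster (B spends A, D spends C) and its fees and sizes are hypothetical numbers chosen so that the two input diagrams are incomparable, while the merged result is at least as good as both.

```python
# Builds on the Tx, chunks(), merge_step(), and at_least_as_good() helpers sketched earlier.


def merge(lin1: list[Tx], lin2: list[Tx]) -> list[Tx]:
    """Iterate merge_step until both linearizations of the same set are exhausted."""
    out: list[Tx] = []
    while lin1:
        moved, lin1, lin2 = merge_step(lin1, lin2)
        out.extend(moved)
    return out


# Hypothetical cluster: B spends A, D spends C; fees and sizes are made up.
A, B = Tx("A", fee=1, size=100), Tx("B", fee=7, size=100)
C, D = Tx("C", fee=5, size=100), Tx("D", fee=1, size=100)
lin1 = [A, B, C, D]   # stronger in the later part of the diagram
lin2 = [C, D, A, B]   # stronger at the start of the diagram
merged = merge(lin1, lin2)
print([tx.name for tx in merged])           # ['C', 'A', 'B', 'D'] with these numbers
print(at_least_as_good(merged, lin1),       # True
      at_least_as_good(merged, lin2))       # True
```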