delvingbitcoin

Merging incomparable linearizations

Posted on: December 4, 2023 21:36 UTC

In the field of blockchain transaction processing, developing optimal linearizations for clusters is a critical task for improving efficiency.

An interesting challenge arises when one has an existing linearization and then receives another from a peer. The key question is how to combine these two potentially different solutions to create a superior linearization. Linearizations are compared by their fee-size diagrams, which plot the cumulative (size, fee) points at each chunk boundary, connected by line segments: one linearization is better than another if its diagram lies on or above the other's everywhere, and strictly above it somewhere. When each diagram is above the other in places, the two linearizations are incomparable.
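This comparison can be sketched in Python, modeling a chunked linearization as a list of (size, fee) chunks. All names here are illustrative rather than taken from any actual implementation, and exact rational arithmetic sidesteps rounding issues:

```python
from fractions import Fraction

def diagram(chunks):
    """Cumulative (size, fee) points of a chunked linearization."""
    pts = [(0, 0)]
    for size, fee in chunks:
        pts.append((pts[-1][0] + size, pts[-1][1] + fee))
    return pts

def fee_at(pts, x):
    """Fee of the diagram at total size x, interpolating linearly."""
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x <= x1:
            return Fraction(y0) + Fraction(y1 - y0, x1 - x0) * (x - x0)
    return Fraction(pts[-1][1])

def compare(a, b):
    """Compare two diagrams over the same cluster: 'better', 'worse',
    'equal', or 'incomparable' (each above the other somewhere)."""
    xs = sorted({x for x, _ in a} | {x for x, _ in b})
    above = any(fee_at(a, x) > fee_at(b, x) for x in xs)
    below = any(fee_at(a, x) < fee_at(b, x) for x in xs)
    if above and below:
        return 'incomparable'
    return 'better' if above else ('worse' if below else 'equal')
```

Sampling both diagrams only at the union of their chunk-boundary sizes is sufficient: between consecutive sampled sizes both curves are straight lines, so neither can cross the other there.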

The simplest merging technique known thus far, best-chunk merging, repeatedly compares the highest-feerate prefix (the "best chunk") of the unprocessed remainder of each linearization and appends the higher-feerate of the two to the output. The algorithm is straightforward but computationally expensive, with quadratic time complexity, and it does not guarantee a result better than both inputs: in practice, the output can be no better than simply picking one of the input linearizations.
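A minimal sketch of this greedy scheme, assuming both inputs are topologically valid orderings of the same transactions; the helper names and the `txs` size/fee map are hypothetical, not the actual implementation:

```python
from fractions import Fraction

def best_chunk(lin, txs):
    """Highest-feerate prefix of lin (txs maps txid -> (size, fee)).
    Ties are broken toward the longer prefix."""
    best, best_rate = [], Fraction(-1)
    size = fee = 0
    cur = []
    for tx in lin:
        s, f = txs[tx]
        size += s
        fee += f
        cur.append(tx)
        rate = Fraction(fee, size)
        if rate >= best_rate:
            best, best_rate = list(cur), rate
    return best, best_rate

def best_chunk_merge(lin1, lin2, txs):
    """Repeatedly append the higher-feerate of the two remaining
    best chunks (a sketch of best-chunk merging, not real code)."""
    lin1, lin2 = list(lin1), list(lin2)
    out = []
    while lin1:
        c1, r1 = best_chunk(lin1, txs)
        c2, r2 = best_chunk(lin2, txs)
        chunk = c1 if r1 >= r2 else c2
        out.extend(chunk)
        included = set(chunk)
        # Prefixes of topological orders are ancestor-closed, so
        # removing a chosen chunk from both inputs keeps them valid.
        lin1 = [t for t in lin1 if t not in included]
        lin2 = [t for t in lin2 if t not in included]
    return out
```

Each round scans both remaining linearizations in full, which is where the quadratic cost comes from.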

An extension of this method additionally considers the intersection of the two highest-feerate prefixes as a candidate. While this intersection merging algorithm maintains the same computational complexity, it still fails to consistently yield a better solution.
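One step of this variant might look as follows; the only change from best-chunk merging is the extra candidate set. Again, this is a hypothetical sketch, with all names invented for illustration:

```python
from fractions import Fraction

def feerate(sel, txs):
    """Aggregate feerate of a nonempty set of transactions
    (txs maps txid -> (size, fee))."""
    size = sum(txs[t][0] for t in sel)
    fee = sum(txs[t][1] for t in sel)
    return Fraction(fee, size)

def best_prefix(lin, txs):
    """Highest-feerate prefix of a linearization, as a set."""
    return max((set(lin[:i]) for i in range(1, len(lin) + 1)),
               key=lambda s: feerate(s, txs))

def intersection_merge_step(lin1, lin2, txs):
    """Candidates are the two best chunks plus their intersection
    (when nonempty); return the highest-feerate one."""
    p1, p2 = best_prefix(lin1, txs), best_prefix(lin2, txs)
    cands = [p1, p2]
    if p1 & p2:
        # The intersection of two ancestor-closed sets is itself
        # ancestor-closed, so it is also a valid chunk to include.
        cands.append(p1 & p2)
    return max(cands, key=lambda s: feerate(s, txs))
```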

To address these shortcomings, a more comprehensive approach called prefix-intersection merging has been proposed. This technique examines all intersections between prefixes of the two input linearizations, specifically those involving the best chunk of either input. By incrementally computing these intersections and retaining the highest-feerate set of transactions found, the method appears to find a linearization that is strictly better than both inputs when they are incomparable, and at least as good as the better input when they are comparable, without requiring the inputs to meet any quality standard or to have connected chunks. Although the approach seems promising, it currently lacks a formal proof of its efficacy. Further details and discussion can be found in the dedicated linearization post-processing thread.
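A single step of prefix-intersection merging could be sketched like this, growing each prefix incrementally so that every prefix/best-chunk intersection is evaluated once. The names and structure are assumptions for illustration, not the proposal's actual code:

```python
from fractions import Fraction

def feerate(sel, txs):
    """Aggregate feerate of a set of transactions."""
    size = sum(txs[t][0] for t in sel)
    fee = sum(txs[t][1] for t in sel)
    return Fraction(fee, size) if size else Fraction(0)

def best_prefix(lin, txs):
    """Highest-feerate prefix of a linearization, as a set."""
    return max((set(lin[:i]) for i in range(1, len(lin) + 1)),
               key=lambda s: feerate(s, txs))

def prefix_intersection_step(lin1, lin2, txs):
    """Intersect every prefix of each input with the best chunk of
    the other, keeping the highest-feerate nonempty result."""
    best = None
    for la, lb in ((lin1, lin2), (lin2, lin1)):
        chunk = best_prefix(lb, txs)
        running = set()
        for tx in la:
            # Growing the prefix one transaction at a time updates the
            # intersection incrementally instead of recomputing it.
            if tx in chunk:
                running.add(tx)
                if best is None or feerate(running, txs) > feerate(best, txs):
                    best = set(running)
    return best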