How to linearize your cluster

Posted by sipa

Feb 1, 2025/18:06 UTC

The discussion centers on reusing a computational state for further analysis after the highest-feerate closure has been removed, in order to discover subsequent closures. The initial concern is that removing a subset changes the graph structure and would complicate that reuse. The reply addresses this by observing that the graph's fundamental structure does not need to change: removing a previously found subset can instead be modeled as setting the fee and size of all its transactions to zero. This preserves the monotonicity-of-capacities requirement, which is crucial for the integrity of the graph-based analysis. While a transaction remains, its capacity is $f - \lambda s$; once it is considered removed or extracted, its capacity effectively becomes zero. This offers a potentially efficient way to iterate: successive high-feerate closures can be found within the same framework without reconstructing the graph for each iteration.
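The reuse idea can be illustrated with a minimal sketch. The code below is hypothetical, not the implementation under discussion: the names `TxEntry`, `Capacity`, and `ExtractClosure` are made up for illustration, and the real analysis operates on a graph rather than a flat list of transactions. It only demonstrates the point above: zeroing a removed transaction's fee and size makes its capacity $f - \lambda s$ evaluate to zero for every $\lambda$, so nothing needs to be rebuilt between iterations.

```cpp
// Hypothetical sketch (not the actual code under discussion): instead of
// deleting the transactions of a found closure from the graph, their fee
// and size are set to zero, so the capacity f - lambda*s they contribute
// becomes zero for every lambda and monotonicity of capacities is preserved.
#include <cstdint>
#include <iostream>
#include <vector>

struct TxEntry {
    int64_t fee{0};   // fee in satoshis
    int32_t size{0};  // size in vbytes
    bool removed{false};
};

// Capacity contributed by a transaction for a given feerate parameter lambda.
// Once a transaction is "removed" (fee = size = 0), this is exactly 0 for
// every lambda, so the surrounding graph structure can stay untouched.
double Capacity(const TxEntry& tx, double lambda) {
    return tx.fee - lambda * tx.size;
}

// Mark every transaction of a previously found closure as extracted by
// zeroing its fee and size, rather than erasing it.
void ExtractClosure(std::vector<TxEntry>& txs, const std::vector<size_t>& closure) {
    for (size_t idx : closure) {
        txs[idx].fee = 0;
        txs[idx].size = 0;
        txs[idx].removed = true;
    }
}

int main() {
    std::vector<TxEntry> txs{{3000, 200}, {1000, 400}, {5000, 250}};
    double lambda = 10.0;  // example feerate parameter (sat/vB)

    // Suppose the highest-feerate closure found in this round was {2}.
    ExtractClosure(txs, {2});

    for (size_t i = 0; i < txs.size(); ++i) {
        std::cout << "tx " << i << ": capacity = " << Capacity(txs[i], lambda)
                  << (txs[i].removed ? " (extracted)" : "") << '\n';
    }
}
```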


Thread Summary (73 replies): Dec 20 - Apr 24, 2025

