Posted by stefanwouldgo
Feb 10, 2025/16:50 UTC
The discussion compares two algorithmic approaches: PBFS, as described in a paper, and GGT. The theoretical complexity bounds of PBFS appear to be worse than those of GGT. However, experimental evidence suggests that PBFS outperforms both FP (referred to as DS in the PBFS paper) and GGT in practice. These findings are significant, especially considering that the experiments were conducted on instances much larger than what would typically be needed; the smallest example involved around 5,000 nodes and was processed in 11 milliseconds. Extrapolating from these results, an instance with 500 nodes could potentially be processed in under 50 microseconds, assuming cubic runtime.
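The extrapolation above can be checked with a quick back-of-envelope calculation. This is only a sketch of the scaling argument; it assumes runtime grows as n³ and takes the 5,000-node/11 ms data point from the experiments discussed in the post:

```python
def extrapolate_cubic(t_ref_ms: float, n_ref: int, n_target: int) -> float:
    """Scale a reference runtime (in ms) by (n_target / n_ref)**3,
    assuming cubic growth in the number of nodes."""
    return t_ref_ms * (n_target / n_ref) ** 3

# 5,000 nodes took 11 ms; estimate the runtime for 500 nodes.
t_ms = extrapolate_cubic(11.0, 5_000, 500)
print(f"{t_ms * 1000:.1f} microseconds")  # prints "11.0 microseconds"
```

At a tenth of the node count, cubic scaling divides the runtime by 1,000, giving roughly 11 µs, comfortably under the 50 µs figure quoted.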
This leads to the conclusion that while asymptotic analysis is valuable, the actual performance of these algorithms in practice can diverge significantly from theoretical predictions. The divergence is largely attributable to the constants hidden in the complexity bounds, which depend heavily on implementation details. Therefore, for smaller instances, which are more relevant to the current interest, a straightforward implementation of DS/FP coupled with a well-tuned min-cut algorithm may be the best solution. This underscores the importance of practical experimentation and per-use-case tuning rather than relying solely on theoretical complexity bounds.