Posted by rustyrussell
Nov 29, 2025/00:01 UTC
The discussion addresses how to bound the computational cost of decoding, particularly for network messages where a peer could flood a node with oversized or malicious data. One practical mitigation mentioned is to trim inputs to a manageable size before processing them: by capping how many bytes the decoder will look at, a node limits the damage a griefing attempt can do and protects its computational resources.
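A minimal sketch of that idea in C, assuming a caller-supplied parser; the names MAX_DECODE_BYTES and decode_capped are illustrative and do not come from the thread or any BOLT:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_DECODE_BYTES 65535  /* assumed per-message upper bound */

/* decode_fn stands in for whatever parser the caller actually uses. */
typedef bool (*decode_fn)(const uint8_t *buf, size_t len, void *out);

static bool decode_capped(const uint8_t *buf, size_t len,
                          decode_fn decode, void *out)
{
        /* Trim before parsing: anything past the cap is ignored, so the
         * worst-case CPU spent per message stays constant no matter how
         * large a payload a peer sends. */
        if (len > MAX_DECODE_BYTES)
                len = MAX_DECODE_BYTES;
        return decode(buf, len, out);
}
```

The point is simply that the cap is enforced once, up front, rather than relying on every parser to defend itself against unbounded input.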
The post also argues for more precise specification, for example in a BOLT (Basis of Lightning Technology) document, along with concrete benchmarks to guide how the data is handled. Such benchmarks would ideally cover both the initial build time of a data set containing all required gossip and its update time after each block, giving a more systematic view of the cost of creating and maintaining the data over time.
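A rough timing-harness sketch for those two benchmarks; build_from_full_gossip and update_after_block are hypothetical placeholders for the real work, not functions defined in the thread or in any BOLT:

```c
#include <stdio.h>
#include <time.h>

/* Stand-ins for the actual work being benchmarked. */
static void build_from_full_gossip(void) { /* build data set from all gossip */ }
static void update_after_block(void)     { /* apply one block's worth of updates */ }

static double elapsed_ms(struct timespec a, struct timespec b)
{
        return (b.tv_sec - a.tv_sec) * 1000.0
             + (b.tv_nsec - a.tv_nsec) / 1e6;
}

int main(void)
{
        struct timespec t0, t1;

        /* One-off cost: building the structure from the full gossip store. */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        build_from_full_gossip();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("initial build: %.1f ms\n", elapsed_ms(t0, t1));

        /* Recurring cost: updating the structure after each block. */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        update_after_block();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("per-block update: %.1f ms\n", elapsed_ms(t0, t1));
        return 0;
}
```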
Thread Summary (19 replies)
Nov 14 - Dec 18, 2025
20 messages