New BitDevs

2023-07-30

Announcements

Please join us for our next Socratic Seminar. A special thank you to our sponsors CardCoins, Chaincode Labs and Wolf NYC for food, refreshments and event space.

If you can't make it to the main event, please join us at PUBKEY around 9:30PM. Learn about this awesome new establishment here.

Presentation

Adam Jonas: BitcoinSearch.xyz

Mailing Lists, Meetings and Bitcoin Optech

Mailing Lists

bitcoin-dev

On the experiment of the Bitcoin Contracting Primitives WG and marking this community process "up for grabs"

This message is a detailed update on the progress and future plans for the development of Bitcoin consensus changes. The author begins by referencing past discussions about covenant proposals and the need for a new community process to specify covenants. Their stated goals include building a consistent framework to evaluate covenant proposals, finding common ground between proposals, expanding the consensus-change development process beyond Bitcoin Core, and maintaining a high-quality technical archive. The author acknowledges that other initiatives, such as the bitcoin-inquisition fork and the archiving of covenant proposals under the Optech umbrella, have also been undertaken during this period. They mention the Bitcoin Contracting Primitives Working Group, which has held monthly meetings and documented various primitives and protocols related to Bitcoin contracting. The author explains that they launched the effort as an experiment, devoting 20% of their time to it, but has come to the realization that their time and energy would be better allocated to working on Lightning Network robustness. They express their belief that the scalability and smooth operation of the Lightning Network are more critical for Bitcoin's survival than extended covenant capabilities. The author encourages others who are working on covenant change proposals to continue their work, noting that the Taproot and Schnorr soft forks have proven to be beneficial for self-custody solutions, and mentions their own plans to focus on R&D work related to CoinPool, particularly on addressing interactivity issues and designing advanced Bitcoin contracts. The author concludes by acknowledging that they may have overpromised with the launch of the new process for consensus-change development. They emphasize the importance of having technical historians and archivists to assess, collect, and preserve consensus change proposals, as well as QA devs to ensure proper testing before deployment, and they invite others to continue the maintenance of the Bitcoin Contracting Primitives Working Group or to collaborate with other organizations.

In this message, the author discusses their involvement in a community process related to Bitcoin development. They had introduced the idea of a new process to specify covenants, which are restrictions on how coins can be spent in future transactions. The author explains that they will not be actively pursuing this process further, having decided to focus on other Bitcoin projects. The goals of the process were to build a consistent framework for evaluating covenant proposals, identify commonalities between proposals, open up the consensus development process beyond Bitcoin Core, and maintain a high-quality technical archive. The author also mentions other initiatives undertaken during the same period, such as a fork of Bitcoin Core called bitcoin-inquisition and the archiving of covenant proposals under the Optech umbrella. They provide some details about the Bitcoin Contracting Primitives Working Group, a group of individuals who have been documenting and archiving various Bitcoin contract primitives and protocols; monthly meetings have been held, with in-depth discussions on topics related to contract primitives and protocols. The author explains that they started this effort as an experiment and initially committed to dedicating 20% of their time to it. However, they have realized that there is still a lot of work to be done in other areas, such as improving the Lightning Network, Bitcoin's second-layer scaling solution, and they believe that scaling Bitcoin and improving its robustness is more critical for its survival than advanced contract capabilities. The author acknowledges that they may have overpromised with the new community process but believes enough progress has been made to demonstrate its value. In their view, what Bitcoin needs is not necessarily more technical proposals but rather people focused on assessing, collecting, and preserving consensus change proposals and on ensuring thorough testing before deployment. They invite others to continue the work of the Bitcoin Contracting Primitives Working Group if they are willing to commit the necessary resources and effort.

Blinded 2-party Musig2

This text describes the implementation of a version of the 2-of-2 Schnorr Musig2 protocol for statechains in which the server (referred to as party 1) is "blinded": it holds a private key that is necessary to generate an aggregate signature on an aggregate public key, but it does not learn 1) the aggregate public key, 2) the aggregate signature, or 3) the message being signed (denoted "m"). The security of this model relies on party 1 being trusted to report the number of partial signatures it has generated for a particular key, rather than (as in the unblinded case) being trusted to enforce rules on what it has signed; the full set of signatures generated is verified on the client side. The implementation is based on the 2-of-2 Musig2 protocol, which operates as follows: 1. Party 1 generates a private key "x1" and the corresponding public key "X1 = x1G", where G is the generator point, point multiplication is written as X = xG, and point addition is written additively (for example, R = R1 + R2). 2. Party 2 generates a private key "x2" and the corresponding public key "X2 = x2G". 3. The set of public keys is L = {X1, X2}. 4. The key aggregation coefficient is KeyAggCoef(L, X) = H(L, X), where H is a hash function. It is used to compute the shared (aggregate) public key X = a1X1 + a2X2, where a1 = KeyAggCoef(L, X1) and a2 = KeyAggCoef(L, X2). 5. To sign a message "m", party 1 generates a nonce "r1" and derives a point "R1 = r1G"; party 2 generates a nonce "r2" and derives "R2 = r2G". These points are aggregated into "R = R1 + R2". 6. Party 1 computes the challenge "c" as the hash of the concatenation of X, R, and m, i.e., c = H(X||R||m), and calculates s1 = c.a1.x1 + r1. 7. Party 2 computes the same challenge c = H(X||R||m) and calculates s2 = c.a2.x2 + r2. 8. The final signature is (R, s1 + s2). To blind party 1 so that it learns neither the full public key nor the final signature, the steps are modified as follows: 1. Key aggregation is performed solely by party 2; party 1 only sends its own public key, X1, to party 2. 2. Nonce aggregation is performed solely by party 2; party 1 only sends its own nonce, R1, to party 2. 3. Party 2 computes the challenge "c" and sends it to party 1, which uses it to compute s1 = c.a1.x1 + r1. 4. Party 1 never learns the final value of (R, s1 + s2) or the message "m". This design keeps party 1 blinded from the full public key, the final signature, and the signed message. Any feedback or potential issues with this approach would be appreciated.

In this implementation, we are using a cryptographic protocol called 2-of-2 Schnorr Musig2 for statechains. In this protocol, there are two parties involved - party 1 and party 2. The goal is to create an aggregate signature on an aggregate public key, while ensuring that party 1 remains fully "blinded" and does not learn certain information. Blinding refers to the process of preventing party 1 from gaining knowledge of the aggregate public key, the aggregate signature, and the message being signed. In this model of blinded statechains, the security relies on party 1 being trusted to report the number of partial signatures it has generated for a specific key. The actual verification of the signatures is done on the client side. Now, let's break down how the 2-of-2 musig2 protocol operates and how blinding is achieved: 1. Key Generation: - Party 1 generates a private key (x1) and a corresponding public key (X1 = x1G), where G is the generator point. - Party 2 does the same, generating a private key (x2) and a public key (X2 = x2G). - The set of public keys is represented by L = {X1, X2}. 2. Key Aggregation: - The key aggregation coefficient is calculated using the set of public keys (L) and the aggregate public key (X). - KeyAggCoef(L, X) = H(L, X), where H is a hash function. - The shared (aggregate) public key is calculated as X = a1X1 + a2X2, where a1 = KeyAggCoef(L, X1) and a2 = KeyAggCoef(L, X2). 3. Message Signing: - To sign a message (m), party 1 generates a nonce (r1) and calculates R1 = r1G. - Party 2 also generates a nonce (r2) and calculates R2 = r2G. - These nonces are aggregated to obtain R = R1 + R2. - Party 1 computes the 'challenge' (c) as c = H(X || R || m) and calculates s1 = c.a1.x1 + r1. - Party 2 also computes the 'challenge' (c) as c = H(X || R || m) and calculates s2 = c.a2.x2 + r2. - The final signature is (R, s1 + s2). Now, let's focus on the blinding aspect for party 1: To prevent party 1 from learning the full public key or the final signature, the following steps are taken: 1) Key aggregation is performed only by party 2. Party 1 simply sends its public key X1 to party 2. 2) Nonce aggregation is performed only by party 2. Party 1 sends its generated nonce R1 to party 2. 3) Party 2 computes the 'challenge' (c) as c = H(X || R || m) and sends it back to party 1. Party 1 then computes s1 = c.a1.x1 + r1. - Party 1 does not need to independently compute and verify the challenge (c) since it is already blinded from the message. By following these steps, party 1 never learns the final value of (R, s1 + s2) or the message (m). In terms of potential issues, it is important to carefully evaluate the trustworthiness of the statechain server that reports the number of partial signatures. Additionally, the full set of signatures should be verified on the client side to ensure their validity. Any comments or concerns regarding this implementation would be highly appreciated.
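A minimal Python sketch may help make the message flow concrete. It uses ordinary modular arithmetic over a toy additive group rather than secp256k1, so it is not secure and not the actual Musig2 specification; all names and values are illustrative only:

import hashlib

# Toy parameters: a prime-order additive group Z_p with "generator" G.
# This is NOT secp256k1 and NOT secure -- it only illustrates the flow.
p = 2**61 - 1          # prime modulus used for both scalars and "points"
G = 5                  # arbitrary nonzero "generator"

def point(x):          # stands in for point multiplication X = xG
    return (x * G) % p

def H(*parts):         # hash-to-scalar stand-in for the Musig2 hash functions
    data = b"".join(str(v).encode() for v in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % p

# Party 1 (blinded server): knows only x1, r1 and the challenge c it is sent.
x1, r1 = 1111, 2222
X1, R1 = point(x1), point(r1)

# Party 2 (client): knows x2, r2, the message m, and does all aggregation.
x2, r2, m = 3333, 4444, "statechain transfer tx"
X2, R2 = point(x2), point(r2)

L = (X1, X2)                      # ordered public key set
a1, a2 = H(L, X1), H(L, X2)       # key aggregation coefficients
X = (a1 * X1 + a2 * X2) % p       # aggregate public key (party 2 only)
R = (R1 + R2) % p                 # aggregate nonce (party 2 only)
c = H(X, R, m)                    # challenge; the only value sent to party 1

# Party 1 returns a partial signature without learning X, R, m or the final sig.
s1 = (c * a1 * x1 + r1) % p
# Party 2 completes the signature locally.
s2 = (c * a2 * x2 + r2) % p
s = (s1 + s2) % p

# Client-side verification: sG == R + cX (standard Schnorr check).
assert point(s) == (R + c * X) % p
print("toy blinded 2-of-2 signature verifies")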

Computing Blinding Factors in a PTLC and Trampoline World

This passage presents a mathematical demonstration of a method for computing blinding factors with certain properties: only one blinding factor is needed for each intermediate node and the receiver, and Trampoline nodes can provide blinding factors to sub-routes without the intermediate nodes being aware they are on a Trampoline route. The demonstration begins by establishing that the ultimate receiver has a secret value "r" and shares a point "R" with the ultimate sender, where R = r * G (G is a point on an elliptic curve). In the simplest case, where the ultimate sender and receiver are directly connected, the ultimate sender chooses a random scalar "e" as the blinding ("error") factor and constructs an onion with "e" encrypted to the ultimate receiver. Along with the onion, the ultimate sender offers a Point Time-Locked Contract (PTLC) with the point e * G + R. The ultimate receiver can claim this PTLC by revealing e + r. Next, the scenario is modified to include an intermediate node named Carol. The ultimate sender still chooses a random scalar "e" as the final error factor but also generates two scalars "c" and "d" such that c + d = e, by selecting a random "d" and computing c = e - d. The onion encrypts e to the ultimate receiver, and that ciphertext, together with d, is encrypted to Carol. The PTLC is sent to Carol with the point c * G + R. Carol adds her per-hop blinding factor times G to the input point and sends a modified PTLC with the point c * G + R + d * G to the next hop. This equals (c + d) * G + R, which is equivalent to e * G + R since e = c + d. The ultimate receiver therefore cannot tell whether the PTLC came via Carol or over a direct source-to-destination route, because both cases result in the same point e * G + R. When the ultimate receiver reveals e + r, Carol can compute c + r by taking e + r - d (since c = e - d, e + r - d = c + r) and claim the incoming c * G + R with the scalar c + r. Carol only knows d, not c or r, so she cannot compute r. Lastly, the scenario is extended so that Carol is a Trampoline node and the ultimate sender does not provide the detailed route from Carol to the next Trampoline hop. The ultimate sender learns R, selects a random e, and computes c and d such that c + d = e. The Trampoline-level onion includes e encrypted to the ultimate receiver, and that ciphertext, together with d and the next Trampoline hop, encrypted to Carol. The PTLC with the onion is sent to Carol with the point c * G + R. Carol decrypts the onion, obtains d, and searches for a route from herself to the ultimate receiver; assume the route found is Carol -> Alice -> ultimate receiver. Carol selects two scalars a and b such that a + b = d, creates a new onion with the ciphertext copied from the ultimate sender and b encrypted to Alice, and sends the PTLC with the point c * G + R + a * G to Alice. Alice decrypts the onion, learns b, and forwards the PTLC with the point c * G + R + a * G + b * G to the next hop, the ultimate receiver. Since a + b = d, a * G + b * G = d * G, and since c + d = e, c * G + d * G = e * G. Therefore c * G + R + a * G + b * G = c * G + d * G + R = (c + d) * G + R = e * G + R. The ultimate receiver receives the same e * G + R and cannot determine whether it was reached via a Trampoline hop, a non-Trampoline intermediate, or directly. Each intermediate node, Trampoline and non-Trampoline alike, has enough data to claim its incoming PTLC, and only the ultimate sender knows c, allowing it to recover r by subtracting c from the revealed c + r.

In this explanation, we break down a mathematical demonstration involving the computation of blinding factors. The purpose is to achieve certain goals, such as minimizing the number of blinding factors that intermediate nodes need to know and allowing trampoline nodes to provide blinding factors to sub-routes without revealing that they are trampoline nodes. Let's start with the basic setup. We have a sender (ultimate sender) and a receiver (ultimate receiver). The ultimate receiver has a secret value 'r' and shares a point 'R' with the ultimate sender, where 'R' equals 'r' multiplied by a specific point 'G'. In the simplest case, if the ultimate sender can pay the ultimate receiver directly, it chooses a random scalar 'e' as the blinding factor. It constructs an onion with 'e' encrypted to the ultimate receiver and sends it along with a payment (PTLC) that contains the point 'e * G + R'. The ultimate receiver can claim this payment by revealing 'e + r', since it learns 'e' from the onion and knows 'r' (the secret value); the contract requires the ultimate receiver to provide this scalar in exchange for payment. Now consider a scenario where an intermediate node, Carol, sits between the ultimate sender and the ultimate receiver. The ultimate sender still chooses a final blinding factor 'e' at random, but it also generates two scalars 'c' and 'd' such that 'c + d = e', by selecting a random 'd' and computing 'c = e - d'. The ultimate sender then builds the onion as follows: 'e' is encrypted to the ultimate receiver, and that ciphertext, along with 'd', is encrypted to Carol. The ultimate sender sends the payment (PTLC) with the point 'c * G + R' to Carol. Each intermediate non-trampoline node (such as Carol) takes the input point, adds its per-hop blinding factor multiplied by 'G', and uses the result as the output point to the next hop. So Carol receives 'c * G + R', adds 'd * G' (the 'd' obtained from the onion), and sends a PTLC with the point 'c * G + R + d * G' to the next hop. Since 'e = c + d', the PTLC Carol sends toward the ultimate receiver can be rearranged as '(c + d) * G + R', which is equivalent to 'e * G + R', the same as the direct case with no intermediate node. The ultimate receiver therefore cannot distinguish whether the payment came via Carol or directly from the sender, since it sees 'e * G + R' in both cases. When the ultimate receiver releases 'e + r', Carol can compute 'c + r' by taking 'e + r - d' (since 'c = e - d', 'e + r - d = c + r') and claim the incoming 'c * G + R' with the scalar 'c + r'. Note that Carol does not know 'c'; she only knows 'd' and therefore cannot compute 'r'. Now consider the scenario where Carol is a trampoline node and the ultimate sender does not provide a detailed route from Carol to the next trampoline hop. Here the ultimate receiver happens to be the trampoline hop after Carol, but Carol does not and cannot learn this. The ultimate sender still learns 'R', selects a random 'e' as the blinding factor, and generates 'c' and 'd' such that 'c + d = e', using the same technique as before. The ultimate sender then creates a trampoline-level onion with the following encrypted components: 'e' encrypted to the ultimate receiver, and that ciphertext, 'd', and the next trampoline hop (the node ID of the ultimate receiver) encrypted to Carol. The ultimate sender sends the payment (PTLC) with this onion and the point 'c * G + R' to Carol. Carol decrypts the onion and obtains 'd'. Carol now needs to find a route from itself to the next trampoline hop, which in this case is the ultimate receiver; suppose it finds the route Carol -> Alice -> ultimate receiver. Carol needs to make 'c * G + d * G + R' reach the ultimate receiver, so it selects two scalars 'a' and 'b' such that 'a + b = d' (Carol knows 'd', so it picks a random 'b' and computes 'a = d - b'). Carol builds the onion as follows: it copies the ciphertext from the ultimate sender ('e' encrypted to the ultimate receiver), and encrypts that ciphertext plus 'b' to Alice. Carol sends the PTLC with the point 'c * G + R + a * G' to Alice. Alice decrypts the onion, learns 'b', and forwards the PTLC with the point 'c * G + R + a * G + b * G' to the next hop, the ultimate receiver. Now 'a + b = d', so 'a * G + b * G = d * G', and 'c + d = e', so 'c * G + d * G = e * G'. Therefore: c * G + R + a * G + b * G = c * G + (a + b) * G + R = c * G + d * G + R = (c + d) * G + R = e * G + R. Thus the ultimate receiver receives the same 'e * G + R' and cannot tell whether it was reached via a trampoline, a non-trampoline intermediate, or directly. Similarly, when claiming, every intermediate node, both trampoline and non-trampoline, has enough data to claim its incoming PTLC, and only the ultimate sender knows 'c', which allows it to recover 'r'.
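The algebra above can be checked mechanically. The following Python sketch uses a toy additive group in place of a real elliptic curve; the variable names (r, R, e, c, d, a, b) follow the text, and the script only verifies the arithmetic identities, not any actual onion or PTLC construction:

import secrets

# Toy additive group: scalars and "points" are integers mod a prime,
# with point(x) = x*G mod p standing in for elliptic-curve multiplication.
p = 2**61 - 1
G = 7

def point(x):
    return (x * G) % p

def rand():
    return secrets.randbelow(p)

r = rand()                 # receiver's secret
R = point(r)               # shared with the ultimate sender

# Sender: pick the final factor e, then split it as c + d = e for trampoline Carol.
e = rand()
d = rand()
c = (e - d) % p

# Carol (trampoline): splits her d across the sub-route Carol -> Alice -> receiver.
b = rand()
a = (d - b) % p

# PTLC points seen along the route.
to_carol    = (point(c) + R) % p
to_alice    = (to_carol + point(a)) % p
to_receiver = (to_alice + point(b)) % p

# The receiver always sees e*G + R, whatever the route looked like.
assert to_receiver == (point(e) + R) % p

# Claims propagate backwards: each node subtracts its own factor.
receiver_claim = (e + r) % p            # revealed by the receiver
alice_claim    = (receiver_claim - b) % p
carol_claim    = (alice_claim - a) % p
assert point(receiver_claim) == to_receiver
assert point(alice_claim)    == to_alice
assert point(carol_claim)    == to_carol

# Only the sender, who knows c, can recover r from the claim scalar it sees.
assert (carol_claim - c) % p == r
print("blinding-factor algebra checks out in the toy group")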

Potential vulnerability in Lightning backends: BOLT-11 "payment hash" does not commit to payment!

In this message, the sender, Calle, informs the list about an exploit discovered by their team at LNbits. The exploit allowed an attacker to create balances by taking advantage of a quirk in how invoices are handled internally. The team has already patched the issue in LNbits version 0.10.5 and recommends that everyone update as soon as possible. Calle describes the attack in detail because similar exploits may be possible in other Lightning applications, and the information is particularly relevant to people working on custodial wallets, payment processors, account management software, and so on. The attack involves the attacker manipulating two payments, A and B, and tricking the backend into thinking that B is A. The steps are: 1. The attacker creates an invoice A with an amount of 1000 sat in LNbits. 2. The attacker creates a separate invoice B' with an amount of 1 sat on their own node. 3. The attacker modifies B' by inserting the payment hash of payment A into it. 4. The attacker re-signs the modified invoice so it looks legitimate again and serializes it, producing the malicious invoice B. 5. The attacker creates a new account in LNbits and pays invoice B. 6. When processing the payment, the LNbits backend uses the payment hash of B to determine whether it is an internal payment or a payment via the Lightning Network. 7. Since the backend assumes that the payment hash of A commits to A, it finds A in its database. 8. The backend then settles the payment internally, crediting the full 1000 sat of invoice A while only the 1 sat of invoice B is debited. 9. As a result, the attacker has effectively "created" 999 sat by manipulating the payment process. To mitigate this exploit, the recommended approach is for backends to either use unique "checking ids" that they generate themselves for looking up internal payments, or to implement additional checks ensuring that the invoice details have not been tampered with, for example verifying that the amount of A equals the amount of B. Calle highlights two lessons learned. First, the attack illustrates the sophistication of attackers familiar with the Lightning Network: it required a deep understanding of the underlying technology and the ability to create custom tools. Second, it underscores that the "payment hash" in an invoice is really just a hash of the preimage and does not commit to payment details such as the amount or the payee pubkey; Calle suggests calling it the "preimage hash" going forward to avoid implicit assumptions.

In simpler terms: LNbits is software that handles Lightning Network invoices, and it contained a loophole that allowed an attacker to create fake balances by exploiting how invoices are processed internally. The team fixed the issue in version 0.10.5 and urges everyone to update as soon as possible. They are sharing the details because similar exploits might be possible in other Lightning applications, so the information is relevant to anyone building custodial wallets, payment processors, or account management software. The attack works as follows. The attacker first creates an invoice, Invoice A, for 1000 sat in LNbits. They then create another invoice, Invoice B', for 1 sat on their own node, and modify it by inserting the payment hash of Invoice A (the payment hash is the identifier the backend uses to look up a payment). By re-signing and re-serializing the modified invoice, the attacker produces a legitimate-looking Invoice B that carries A's payment hash. Next, the attacker creates a new account in LNbits and pays Invoice B. The LNbits backend, which checks the payment hash to decide whether this is an internal payment or one going out over the Lightning Network, finds Invoice A in its database, because it assumes the payment hash commits to Invoice A. The critical point is that a payment hash does not commit to payment details like the amount; it commits only to the preimage (a secret value linked to the payment). As a result, the backend settles the payment by crediting the amount of Invoice A while debiting only the amount of Invoice B, and the attacker has effectively "created" 999 sat. To prevent such attacks, backends should use identifiers they generate themselves, or perform additional checks when looking up internal payments, to ensure the invoice details have not been tampered with. There are two lessons here. First, attackers who know the Lightning Network well can be quite sophisticated; this attack required a deep understanding of the protocol and custom tooling. Second, the term "payment hash" is misleading because it suggests a commitment to payment details such as the amount or a public key, when in reality it commits only to the preimage; the author suggests renaming it the "preimage hash".
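To make the mitigation concrete, here is a schematic Python sketch of the vulnerable lookup versus the extra check. The data structures and helper names are hypothetical and are not LNbits code; they only illustrate the idea that the stored invoice must be compared against the invoice actually being paid:

from dataclasses import dataclass

@dataclass
class Invoice:
    payment_hash: str     # hash of the preimage -- it does NOT commit to the amount
    amount_msat: int
    wallet: str

# Hypothetical in-memory storage, standing in for the backend's database.
invoices_by_payment_hash: dict[str, Invoice] = {}
balances: dict[str, int] = {}

def credit(wallet: str, amount_msat: int) -> None:
    balances[wallet] = balances.get(wallet, 0) + amount_msat

def settle_vulnerable(paid_hash: str, paid_amount_msat: int) -> None:
    # Vulnerable pattern: the payment hash of the incoming invoice is trusted
    # to identify the internal invoice being settled.
    internal = invoices_by_payment_hash.get(paid_hash)
    if internal is not None:
        credit(internal.wallet, internal.amount_msat)   # credits 1000 sat for a 1 sat payment

def settle_fixed(paid_hash: str, paid_amount_msat: int) -> None:
    internal = invoices_by_payment_hash.get(paid_hash)
    if internal is None:
        return                                          # would be forwarded over Lightning instead
    # Extra check: the invoice actually being paid must match the stored one
    # (amount here; a stricter backend compares the full signed BOLT11 string
    # or relies only on a checking id it generated itself).
    if internal.amount_msat != paid_amount_msat:
        raise ValueError("payment hash reuse with mismatched amount; rejecting")
    credit(internal.wallet, internal.amount_msat)

# Replay of the attack: invoice A (1000 sat) exists internally; the attacker pays
# a doctored 1 sat invoice B carrying A's payment hash.
invoices_by_payment_hash["hash_of_A"] = Invoice("hash_of_A", 1_000_000, "attacker_wallet")
settle_vulnerable("hash_of_A", 1_000)    # attacker wallet is credited the full 1000 sat
print(balances)
try:
    settle_fixed("hash_of_A", 1_000)
except ValueError as err:
    print("fixed backend rejects:", err)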

An Open Source Framework to Collect Lightning Network Metrics

In this message, the author introduces a side project they have been working on: collecting data on the Lightning Network, a payment protocol built on top of Bitcoin for faster and cheaper transactions. The main objective is to monitor the evolution of the network and gather relevant data that can then be used to evaluate proposals or ideas related to it. One example given is the work around "channel jamming", an attack in which a malicious user ties up channel capacity to disrupt payments; real data helps evaluate the proposed mitigations. The author stresses that collecting real data matters because it provides tangible insight into the network's behavior and allows for more informed evaluations: simulations only give theoretical results, whereas real data more accurately reflects the network's dynamics. The project also aims to support university research that may not have access to real data, so that researchers can analyze and evaluate their own ideas without relying solely on simulations. The author provides links to further information: [1] a detailed document outlining the idea and methodology behind the data collection, [2] an experimental explorer where users can browse and visualize the collected data, and [3] a public GraphQL API exposing the collected data for developers and researchers. The author hopes the project will be useful to anyone interested in studying, evaluating, or proposing solutions for the Lightning Network.

In simpler terms: the author has a side project for gathering data on the Lightning Network, a system built on top of Bitcoin that allows for faster and cheaper transactions. The goal is to track how the network evolves over time in order to evaluate different proposals for improving it; mitigations for "channel jamming" are one example they are interested in investigating. By collecting real data on the network, they can see how such proposals actually affect the network in practice instead of relying only on simulation results. They also want to support university research that may not have access to this kind of data, enabling more research and experimentation in the field. To achieve this, the author has defined a way to collect information that can later be shared with others, and provides links to a more detailed description of the idea, an experimental explorer where the collected data can be browsed, and a public GraphQL API that lets others access the data as well. The author, Vincent, hopes the project will be useful for anyone who wants to study or improve the Lightning Network.

option_simple_close for "unfailable" closing

The link provided is a pull request on the GitHub repository for the "bolts" project. The pull request is labeled as #1096. The description of the pull request indicates that it is a "can't fail!" close protocol, which was discussed at the NY Summit and on @Roasbeef's wishlist. The protocol aims to be as simple as possible, with the only complexity arising from allowing each side to indicate whether they want to omit their own output. The protocol is "taproot ready" in the sense that shutdown is always sent to trigger it, allowing nonces to be included without any persistence requirement. The pull request consists of three commits to the repository. The first commit introduces the new protocol, the second commit removes the requirement that shutdown not be sent multiple times (which was already nonsensical), and the third commit removes the older protocols. The pull request includes changes to the file "02-peer-protocol.md". The changes in this file introduce the new protocol, describe the closing negotiation process, and specify the requirements for each step of the negotiation. The file includes a section on "Closing Negotiation" that explains that once shutdown is complete and the channel is empty of Hashed Time Locked Contracts (HTLCs), each peer says what fee it will pay and the other side simply signs off on that transaction. The section includes details on the message types, the data they contain, and the requirements for each peer in the negotiation. The pull request also includes changes to the file "03-transactions.md". The changes in this file provide details on the closing transactions used in the negotiation process. The file describes the different variants of the closing transaction and outlines the requirements for each variant. Finally, the pull request includes changes to the file "09-features.md". The changes in this file add a new feature called "option_simple_close" which is related to the simplified closing negotiation described in the pull request. Overall, the pull request introduces a new closing protocol for the bolts project, provides specifications for the negotiation process, and makes changes to related files to support the new protocol.

This is a pull request on a GitHub repository called "bolts" that proposes a new protocol for closing a channel in the Lightning Network. The pull request is numbered 1096. The new protocol is called "can't fail!" close protocol and it was discussed at the NY Summit and on the wishlist of a person named Roasbeef. The goal of this protocol is to make the closing process as simple as possible, with the only complexity being the option for each side to indicate whether they want to omit their own output. The protocol is "taproot ready" in the sense that the shutdown message is always sent to trigger the closing process, and this message can contain the necessary data without requiring any persistence. The pull request is split into three commits for cleanliness and organization. The first commit introduces the new protocol, the second removes a requirement that no longer makes sense, and the third removes older protocols that are no longer needed. The pull request includes changes to the "02-peer-protocol.md" and "03-transactions.md" files. In the "02-peer-protocol.md" file, there are several sections that describe the closing process, including the closing initiation, closing negotiation, and normal operation. The "closing negotiation" section is further divided into two parts: "closing_complete" and "closing_sig". In the "closing_complete" part, each peer says what fee it is willing to pay, and the other side simply signs that transaction. The complexity arises from allowing each side to omit its own output if it is not economically viable. This process can be repeated every time a shutdown message is received, allowing for re-negotiation. The "closing_sig" part describes the requirements for this message, including the transaction data that needs to be included and the signatures that need to be provided. The requirements differ depending on whether the sender of the message is the closer or the closee. The receiver of the closing_sig message needs to validate the signatures and select one of the transactions to respond to. The "03-transactions.md" file includes the details of the closing transactions, including the classic closing transaction variant and the closing transaction variant used for closing_complete and closing_sig messages. Overall, this pull request proposes a new protocol for closing a channel in the Lightning Network that simplifies the process and allows for negotiation between the peers involved. It includes changes to the protocol specification files to describe the new protocol in detail.
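As a rough illustration of why this flow "can't fail", here is a schematic Python sketch of the negotiation described above. The message fields and helper names are hypothetical and do not reflect the actual BOLT wire format; the sketch only captures the idea that the closer picks its own fee (and whether to omit its output) and the closee simply signs that transaction:

from dataclasses import dataclass
from typing import Optional

@dataclass
class ClosingComplete:
    closer: str
    fee_sat: int
    omit_own_output: bool          # e.g. when the closer's balance is uneconomical
    closer_signature: str          # placeholder for the closer's signature

@dataclass
class ClosingSig:
    closee_signature: str          # placeholder for the closee's signature

def propose_close(closer: str, fee_sat: int, balance_sat: int, dust_limit_sat: int) -> ClosingComplete:
    # The only real choices the closer makes: its fee and whether to keep its output.
    return ClosingComplete(
        closer=closer,
        fee_sat=fee_sat,
        omit_own_output=(balance_sat - fee_sat) <= dust_limit_sat,
        closer_signature=f"sig({closer}, fee={fee_sat})",
    )

def respond(msg: ClosingComplete) -> Optional[ClosingSig]:
    # The closee never negotiates the fee: it only checks the signature and,
    # if valid, signs the same transaction -- so there is nothing to fail over.
    if not msg.closer_signature.startswith("sig("):
        return None                # invalid: ignore; the closer can retry after another shutdown
    return ClosingSig(closee_signature=f"sig(closee, fee={msg.fee_sat})")

proposal = propose_close("alice", fee_sat=300, balance_sat=50_000, dust_limit_sat=546)
reply = respond(proposal)
print(proposal, reply, sep="\n")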

LN Summit 2023 Notes

The text is a detailed summary of a discussion about various topics related to the Lightning Network (LN) specification. Here is a breakdown of the key points discussed: 1. Package Relay: The discussion focused on the proposal for package relay, which involves grouping transactions into packages for more efficient processing. The current proposal is to use ancestor package relay, which allows up to 24 ancestors for each child transaction. Other topics discussed include base package relay, commitment transactions, ephemeral anchors, HTLCs (Hashed Timelock Contracts), and mempool policy changes. 2. Taproot: The discussion touched on the latest developments in taproot channels. Specific points discussed include the changes related to anchors and revocation paths, as well as the handling of nonces. 3. Gossip V1.5 vs V2: The participants discussed the differences between Gossip V1.5 and V2 in terms of script binding and amount binding. They debated whether to fully bind to the script or allow any taproot output to be a channel, and the potential implications for pathfinding and capacity graphs. 4. Reputation System for Channel Jamming: The participants explored the idea of using a reputation system to mitigate channel jamming attacks. The discussion revolved around resource bucketing, reputation scores, endorsement signals, and the impact on network quality of service. 5. Simplified Commitments: The participants discussed the concept of simplified commitments, which aims to simplify the LN state machine by introducing turn taking and a refined protocol for updates, commitments, and revocations. They also discussed the possibility of introducing NACK messages for rejecting updates and the benefits of a more streamlined state machine. 6. Meta Spec Process: The participants debated the best approach to managing the LN specification as it evolves over time: the pros and cons of a living document vs. versioning, the need for modularization, and the importance of maintaining backward compatibility. The role of extensions, cleaning up the specification, and improving communication among developers were also discussed. 7. Async Payments/Trampoline: The participants briefly discussed the use of blinded payments for trampoline payments, where nodes in the network help route payments to their destination. The concepts of trampolines, radius-based gossip, and splitting multi-path payments over trampoline were mentioned. In summary, the discussion covered package relay, taproot, gossip protocols, reputation systems, simplified commitments, the meta spec process, and trampoline payments, with participants sharing ideas and debating the pros and cons of various proposals.

During the annual specification meeting in New York City at the end of June, the attendees attempted to take transcript-style notes. These notes are available in a Google Docs document at the link provided, and the full set of notes is included at the end of the email, although the formatting may be affected. The discussions at the summit covered several topics, including: 1. Package Relay: The current proposal for package relay is ancestor package relay, which allows one child to have up to 24 ancestors. Currently, only mempool transactions are scored by ancestry, so there isn't much point in other types of packages. Commitment transactions still require the minimum relay fee for base package relay. Batch bumping is not allowed, to prevent pinning attacks. With a single anchor, package RBF becomes possible. V3 transactions will allow dropping minimum relay fees, with the restriction of one child paying for one parent transaction. 2. HTLCs with anchors: Changes are being made to HTLC transactions, where SIGHASH_ANYONECANPAY allows the counterparty to inflate the size of a transaction; the discussion revolved around how much the system should be changed. The proposed changes would allow for zero-fee commitment transactions and one ephemeral anchor per transaction. The use of ephemeral anchors would eliminate the need for the delay and ensure eviction of the parent transaction. 3. Mempool Policy: The mempool can be organized into clusters of transactions, allowing for easier sorting and reasoning. The mining algorithm will pick one "vertical" within the mempool using the ancestor fee rate. The discussion explored the possibility of adding cluster mempool to enable package RBF. 4. Taproot: The main change in taproot channels is around anchors, which become more complicated with this update. The discussion covered various aspects, including revocation paths, NUMS points, and co-op close negotiation. 5. Gossip V1.5 vs. V2: The discussion revolved around script binding and amount binding in gossip. The participants debated whether to bind to the script or allow any taproot output. The consensus was to allow any taproot output to be a channel and let people experiment. 6. Multi-Sig Channel Parties: The discussion focused on different ways to implement multi-sig for one channel party, such as using scripts, FROSTy, or recursive musig. 7. PTLCs (Point Time Locked Contracts): The conversation explored different approaches to PTLCs, such as regular musig or adaptor signatures. The potential for redundant overpayment (stuckless payments) and different options for achieving it were also discussed. 8. Hybrid Approach to Channel Jamming: The discussion centered on different approaches to mitigating jamming attacks in the Lightning Network, including monetary solutions (unconditional fees), reputation-based solutions, and the use of scarce resources (proof of work, stake, tokens). The participants discussed the need to combine multiple solutions for effective mitigation and the challenges associated with each approach. 9. Reputation for Channel Jamming: The participants explored reputation-based mitigation of jamming attacks, focusing on resource bucketing, reputation scores, and the allocation of protected and general slots for HTLCs based on reputation and endorsement signals. 10.
Simplified Commitments: The conversation revolved around simplifying the state machine for Lightning Network by implementing turn-taking and introducing the concepts of revoke and NACK. The participants explored the implications of these changes and the benefits of simplified commitments. 11. Meta Spec Process: The participants discussed the idea of moving away from a single "living document" to a versioning system for the Lightning Network specification. The proposal was to have extensions that can be added and removed as needed, allowing for modularity and easier maintenance. The participants also discussed the need for better communication channels and the importance of recommitting to the lightning-dev mailing list. 12. Async Payments/Trampoline: The final discussion focused on trampoline payments and the potential for async (asynchronous) payments. The participants explored the concept of light nodes, trampoline routing, and the ability to split MPP (multi-part payments) over trampoline. In summary, the discussions covered a wide range of topics related to Lightning Network and its specifications. The participants delved into technical details, proposed solutions, and debated the benefits and challenges of various approaches.

The ultimate sender then creates a trampoline-level onion with the following encrypted components: - 'e' encrypted to the ultimate receiver. - The above ciphertext, 'd', and the next trampoline hop (the node ID of the ultimate receiver) encrypted to Carol. The ultimate sender sends the payment (PTLC) with the above onion, containing the point 'c * G + R', to Carol. Carol decrypts the onion and obtains 'd'. Now, Carol needs to find a route from itself to the ultimate receiver, which, in this case, is the next trampoline hop. Suppose Carol finds a route Carol -> Alice -> ultimate receiver. Carol needs to make 'c * G + d * G + R' reach the ultimate receiver. It can do this by selecting two scalars, 'a' and 'b', such that 'a + b = d'. Carol knows 'd', so it randomly selects 'b' and computes 'a = d - b'. Carol creates the onion as follows: - It copies the ciphertext from the ultimate sender: 'e' encrypted to the ultimate receiver. - The above ciphertext and 'b' encrypted to Alice. Carol sends the PTLC with the point 'c * G + R + a * G' to Alice. Alice decrypts the onion and learns 'b'. Then, Alice forwards the PTLC with the point 'c * G + R + a * G + b * G' to the next hop, the ultimate receiver. Now, 'a + b = d', so 'a * G + b * G = d * G'. Also, 'c + d = e', so 'c * G + d * G = e * G'. Therefore: c * G + R + a * G + b * G = c * G + a * G + b * G + R (commutative property) = c * G + (a + b) * G + R (associative property) = c * G + d * G + R (d = a + b by construction) = (c + d) * G + R (associative property) = e * G + R (e = c + d by construction) Thus, the ultimate receiver receives the same 'e * G + R' and cannot differentiate whether it was reached via a trampoline, a non-trampoline intermediate, or directly. Similarly, when claiming, every intermediate node, both trampoline and non-trampoline, has enough data to claim its incoming PTLC. And only the ultimate sender knows 'c', which allows it to recover 'r'. I hope this detailed explanation helps you understand the mathematical demonstration and its implications.

Potential vulnerability in Lightning backends: BOLT-11 "payment hash" does not commit to payment!

In this message, the sender, Calle, is informing a list of recipients about an exploit that was discovered by their team at LNbits. The exploit allowed an attacker to create balances by taking advantage of a quirk in how invoices are handled internally. The team has already patched this issue in LNbits version 0.10.5 and recommends that everyone update as soon as possible. Calle wants to describe the attack in detail because they believe that similar exploits may be possible in other Lightning applications. They specifically mention that this information would be important for people working on custodial wallets, payment processors, account management software, and so on. The attack involves an attacker manipulating two payments, A and B, and tricking the backend into thinking that B is equal to A. Here are the steps involved: 1. The attacker creates an invoice A with an amount of 1000 sat (satoshi) in LNbits. 2. The attacker also creates a separate invoice B' with an amount of 1 sat on their own node. 3. The attacker then modifies B' by inserting the payment hash of payment A into it, effectively making B with manipulated payment details. 4. The attacker re-signs the invoice to make it look legitimate again and serializes it, creating the malicious invoice B. 5. Next, the attacker creates a new account in LNbits and pays invoice B. 6. The LNbits backend, when processing the payment, uses the payment hash of B to determine whether it's an internal payment or a payment via the Lightning Network. 7. Since the backend assumes that the payment hash of A commits to A, it finds A in its database. 8. The backend then settles the payment internally by crediting A and debiting B. 9. As a result, the attacker has effectively "created" 999 sat in their account by manipulating the payment process. To mitigate this exploit, the recommended approach is for backends to either use unique "checking ids" that they generate themselves for looking up internal payments or implement additional checks to ensure that the invoice details haven't been tampered with. For example, they could verify that the amount of A is equal to the amount of B. Calle also highlights two lessons learned from this attack. Firstly, it emphasizes the level of sophistication of attackers familiar with the Lightning Network. This particular exploit required a deep understanding of the underlying technology and the ability to create custom tools. Secondly, it underscores the importance of understanding that the "payment hash" in an invoice is actually just a "preimage" hash and doesn't commit to payment details such as amount or pubkey. Calle suggests calling it the "preimage hash" going forward to avoid any implicit assumptions. Overall, this message serves as a detailed explanation of the discovered exploit, the steps involved in carrying it out, the recommended mitigation, and the lessons learned from this experience.

Dear 15-year-old, Recently, a team called LNbits discovered an interesting issue in their software that could allow someone to exploit it. Let me explain it to you in detail. LNbits is a software that handles invoices related to Lightning Network, which is a technology used for quick and low-cost transactions of cryptocurrencies like Bitcoin. In this software, there was a loophole that allowed an attacker to create fake balances by taking advantage of how invoices are processed internally. The team at LNbits fixed this issue in their latest version, 0.10.5, and they are urging everyone to update their software as soon as possible if they haven't done so already. They are sharing the details of the attack because they believe that similar exploits might be possible in other Lightning Network applications. If you are involved in developing custodial wallets, payment processors, or account management software, this information is relevant to you. Now, let's talk about how the attack works. The attacker first creates an invoice, let's call it Invoice A, with an amount of 1000 sat (satoshis, the smallest unit of Bitcoin). Then, they create another invoice, Invoice B', with an amount of 1 sat on their own node. The attacker then modifies Invoice B' by inserting the payment hash of Invoice A into it. The payment hash is a unique identifier for each payment. By doing this, the attacker tricks the LNbits backend, the system that handles the invoices, into thinking that Invoice B is actually Invoice A. They do this by reshaping the invoice and making it look like a legitimate payment. Next, the attacker creates a new account in LNbits and pays Invoice B. The LNbits backend, which checks the payment hash to determine whether it's an internal payment or a payment through Lightning Network, finds Invoice A in its database. This is because the backend assumes that the payment hash commits to Invoice A. However, the critical part here is that payment hashes do not commit to payment details like the amount, but only to the preimage (a unique code linked to the payment). As a result, the LNbits backend settles the payment by crediting Invoice A and debiting Invoice B. By doing this, the attacker has effectively "created" 999 sat. To prevent such attacks, it is important for backends to use unique identifiers or additional checks when looking up internal payments. This ensures that the invoice details have not been tampered with. There are two lessons to learn from this incident. Firstly, it is crucial to understand that attackers who are knowledgeable about Lightning Network can be quite sophisticated. This attack required a deep understanding of technical concepts and custom tools to carry it out. Secondly, the term "payment hash" is misleading because it suggests that it commits to payment details like the amount of money or the public key. In reality, it only commits to the preimage. To mitigate confusion, the author suggests renaming it as the "preimage hash." I hope this explanation helps you understand the issue and the importance of keeping software secure and updated. Best, Calle

An Open Source Framework to Collect Lightning Network Metrics

In this message, the author is introducing a side project they have been working on. The project involves collecting data on the Lightning Network, which is a protocol built on top of blockchain technology for conducting faster and cheaper transactions. The main objective of the project is to monitor the evolution of the Lightning Network and gather relevant data. This collected data can then be used to evaluate different proposals or ideas related to the network. One specific proposal mentioned is "channel jamming," which refers to a scenario where a malicious user intentionally overloads a channel to disrupt transactions. The author highlights that collecting real data is important as it provides tangible insights into the network's behavior and allows for more informed evaluations. Simulations can only provide theoretical results, whereas real data offers a more accurate representation of the network's dynamics. Additionally, the author mentions that their project aims to support University Research that may not have access to real data. By providing this collected information, researchers can analyze and evaluate their own ideas without having to rely solely on simulations. The author provides links to further information about the project. [1] leads to a detailed document outlining the idea and methodology behind the data collection. [2] directs to an experimental explorer, a platform where users can explore and visualize the collected data. Finally, [3] is a public Graphql API (Application Programming Interface) that exposes the collected data for developers or researchers to access. In conclusion, the author hopes that their project will be useful to someone interested in studying, evaluating, or proposing solutions for the Lightning Network.

Hello! I'm happy to explain this to you in great detail. So, it seems like the person who wrote this message has a side project where they're trying to gather data on something called the lightning network. The lightning network is a system built on top of the Bitcoin blockchain that allows for faster and cheaper transactions. The goal of this project is to track how the lightning network evolves over time. They want to do this to evaluate different proposals or ideas for improving the network. They mention something called "channel jamming," which is one proposal they're interested in investigating. By collecting real data on the network, they can see how these proposals actually affect the network in practice, instead of just relying on simulation results. Additionally, they mention that they want to support university research that may not have access to this real data. By providing this data, they hope to enable more research and experimentation in the field. To achieve this, the person has come up with a way to define and collect information that can later be shared with others. They've provided links to a more detailed description of their idea, an experimental explorer where you can see the data they've collected, and a public Graphql API that allows others to access this data as well. The hope is that this project will be useful for someone who wants to study or improve the lightning network. The person who wrote this message goes by the name Vincent and they're excited about the potential impact of their project. I hope that helps! Let me know if you have any further questions.

option_simple_close for "unfailable" closing

The link provided is a pull request on the GitHub repository for the "bolts" project. The pull request is labeled as #1096. The description of the pull request indicates that it is a "can't fail!" close protocol, which was discussed at the NY Summit and on @Roasbeef's wishlist. The protocol aims to be as simple as possible, with the only complexity arising from allowing each side to indicate whether they want to omit their own output. The protocol is "taproot ready" in the sense that shutdown is always sent to trigger it, allowing nonces to be included without any persistence requirement. The pull request consists of three commits to the repository. The first commit introduces the new protocol, the second commit removes the requirement that shutdown not be sent multiple times (which was already nonsensical), and the third commit removes the older protocols. The pull request includes changes to the file "02-peer-protocol.md". The changes in this file introduce the new protocol, describe the closing negotiation process, and specify the requirements for each step of the negotiation. The file includes a section on "Closing Negotiation" that explains that once shutdown is complete and the channel is empty of Hashed Time Locked Contracts (HTLCs), each peer says what fee it will pay and the other side simply signs off on that transaction. The section includes details on the message types, the data they contain, and the requirements for each peer in the negotiation. The pull request also includes changes to the file "03-transactions.md". The changes in this file provide details on the closing transactions used in the negotiation process. The file describes the different variants of the closing transaction and outlines the requirements for each variant. Finally, the pull request includes changes to the file "09-features.md". The changes in this file add a new feature called "option_simple_close" which is related to the simplified closing negotiation described in the pull request. Overall, the pull request introduces a new closing protocol for the bolts project, provides specifications for the negotiation process, and makes changes to related files to support the new protocol.

This is a pull request on a GitHub repository called "bolts" that proposes a new protocol for closing a channel in the Lightning Network. The pull request is numbered 1096. The new protocol is called "can't fail!" close protocol and it was discussed at the NY Summit and on the wishlist of a person named Roasbeef. The goal of this protocol is to make the closing process as simple as possible, with the only complexity being the option for each side to indicate whether they want to omit their own output. The protocol is "taproot ready" in the sense that the shutdown message is always sent to trigger the closing process, and this message can contain the necessary data without requiring any persistence. The pull request is split into three commits for cleanliness and organization. The first commit introduces the new protocol, the second removes a requirement that no longer makes sense, and the third removes older protocols that are no longer needed. The pull request includes changes to the "02-peer-protocol.md" and "03-transactions.md" files. In the "02-peer-protocol.md" file, there are several sections that describe the closing process, including the closing initiation, closing negotiation, and normal operation. The "closing negotiation" section is further divided into two parts: "closing_complete" and "closing_sig". In the "closing_complete" part, each peer says what fee it is willing to pay, and the other side simply signs that transaction. The complexity arises from allowing each side to omit its own output if it is not economically viable. This process can be repeated every time a shutdown message is received, allowing for re-negotiation. The "closing_sig" part describes the requirements for this message, including the transaction data that needs to be included and the signatures that need to be provided. The requirements differ depending on whether the sender of the message is the closer or the closee. The receiver of the closing_sig message needs to validate the signatures and select one of the transactions to respond to. The "03-transactions.md" file includes the details of the closing transactions, including the classic closing transaction variant and the closing transaction variant used for closing_complete and closing_sig messages. Overall, this pull request proposes a new protocol for closing a channel in the Lightning Network that simplifies the process and allows for negotiation between the peers involved. It includes changes to the protocol specification files to describe the new protocol in detail.

LN Summit 2023 Notes

This is a detailed summary of a discussion about various topics related to the Lightning Network (LN) specification. Here is a breakdown of the key points discussed: 1. Package Relay: The discussion focused on the proposal for package relay, which involves grouping transactions into packages for more efficient processing. The current proposal is to use ancestor package relay, which allows for up to 24 ancestors for each child transaction. Other topics discussed include base package relay, commitment transactions, ephemeral anchors, HTLCs (hashed timelock contracts), and mempool policy changes. 2. Taproot: The discussion touched on the latest developments in the Taproot privacy and scalability improvement proposal. Specific points discussed include the changes related to anchors and revocation paths, as well as the implementation of nonces. 3. Gossip V1.5 vs V2: The participants discussed the differences between Gossip V1.5 and V2 in terms of script binding and amount binding. They debated whether to fully bind to the script or allow any taproot output to be a channel. The potential implications for pathfinding and capacity graphs were also discussed. 4. Reputation System for Channel Jamming: The participants explored the idea of using a reputation system to mitigate channel jamming attacks. The discussion revolved around resource bucketing, reputation scores, endorsement signals, and the impact on network quality of service. 5. Simplified Commitments: The participants discussed the concept of simplified commitments, which aims to simplify the LN state machine by introducing turn-taking and a refined protocol for updates, commitments, and revocations. They also discussed the possibility of introducing NACK messages for rejecting updates and the benefits of a more streamlined state machine. 6. Meta Spec Process: The participants debated the best approach to managing the LN specification as it evolves over time. They discussed the pros and cons of a living document vs. versioning, the need for modularization, and the importance of maintaining backward compatibility. The role of extensions, cleaning up the specification, and improving communication among developers were also discussed. 7. Async Payments/Trampoline: The participants briefly discussed the use of blinded payments for trampoline payments, where nodes in the network help route payments to their destination. The concept of trampolines, radius-based gossip, and splitting multi-path payments over trampoline were mentioned. In summary, the discussion covered a range of topics related to the LN specification, including package relay, Taproot, gossip protocols, reputation systems, simplified commitments, meta spec processes, and trampoline payments. The participants provided detailed insights, shared ideas, and debated the pros and cons of various proposals and approaches.

During the annual specification meeting in New York City at the end of June, the attendees attempted to take transcript-style notes. These notes are available in a Google Docs document, which you can find at the link provided. Additionally, the full set of notes is included at the end of the email, although the formatting may be affected. The discussions at the summit covered several topics, including: 1. Package Relay: The current proposal for package relay is ancestor package relay, which allows one child to have up to 24 ancestors. Currently, only mempool transactions are scored by ancestry, so there isn't much point in other types of packages. Commitment transactions still require the minimum relay fee for base package relay. Batch bumping is not allowed to prevent pinning attacks. With one anchor, package RBF becomes possible. V3 transactions will allow for dropping minimum relay fees and will restrict packages to one child paying for one parent transaction. 2. HTLCs and anchors: There are changes being made to HTLC transactions, whose use of SIGHASH_ANYONECANPAY allows the counterparty to inflate the size of a transaction. The discussion revolved around how much the system should be changed. The proposed changes would allow for zero-fee commitment transactions and one ephemeral anchor per transaction. The use of ephemeral anchors would eliminate the need for delay and ensure eviction of the parent transaction. 3. Mempool Policy: The mempool can be organized into clusters of transactions, allowing for easier sorting and reasoning. The mining algorithm will pick one "vertical" within the mempool using the ancestor fee rate. The discussion explored the possibility of adding cluster mempool to enable package RBF. 4. Taproot: The main change in taproot is around anchors, which become more complicated with this update. The discussion covered various aspects of taproot, including revocation paths, NUMS points, and co-op close negotiation. 5. Gossip V1.5 vs. V2: The discussion revolved around the script binding and amount binding in gossip. The participants debated whether to bind to the script or allow any taproot output. The consensus was to allow any taproot output to be a channel and let people experiment. 6. Multi-Sig Channel Parties: The discussion focused on different ways to implement multi-sig for one channel party, such as using scripts, FROSTy, or recursive musig. 7. PTLCs (Point Time Locked Contracts): The conversation explored different approaches to PTLCs, such as regular musig or adaptor signatures. The potential for redundant overpayment (stuckless payments) and different options for achieving it were also discussed. 8. Hybrid Approach to Channel Jamming: The discussion centered around different approaches to mitigate jamming attacks in Lightning Network, including monetary solutions (unconditional fees), reputation-based solutions, and utilizing scarce resources (PoW, stake, tokens). The participants discussed the need to combine multiple solutions for effective mitigation and the challenges associated with each approach. 9. Reputation for Channel Jamming: The participants explored the concept of reputation-based mitigation for jamming attacks. The discussion focused on resource bucketing, reputation scores, and the allocation of protected and general slots for HTLCs based on reputation and endorsement signals. 10.
Simplified Commitments: The conversation revolved around simplifying the state machine for Lightning Network by implementing turn-taking and introducing the concepts of revoke and NACK. The participants explored the implications of these changes and the benefits of simplified commitments. 11. Meta Spec Process: The participants discussed the idea of moving away from a single "living document" to a versioning system for the Lightning Network specification. The proposal was to have extensions that can be added and removed as needed, allowing for modularity and easier maintenance. The participants also discussed the need for better communication channels and the importance of recommitting to the lightning-dev mailing list. 12. Async Payments/Trampoline: The final discussion focused on trampoline payments and the potential for async (asynchronous) payments. The participants explored the concept of light nodes, trampoline routing, and the ability to split MPP (multi-part payments) over trampoline. In summary, the discussions covered a wide range of topics related to Lightning Network and its specifications. The participants delved into technical details, proposed solutions, and debated the benefits and challenges of various approaches.

lightning-dev

Blinded 2-party Musig2

This text describes the implementation of a version of the 2-of-2 Schnorr Musig2 protocol for statechains. Statechains involve a server (referred to as party 1) that is "blinded," meaning it holds a private key necessary to generate an aggregate signature on an aggregate public key, but it does not have access to certain information. The information that party 1 is not supposed to learn includes: 1) the aggregate public key, 2) the aggregate signature, and 3) the message being signed (denoted as "m" in the text). The security of this implementation relies on party 1 being trusted to report the number of partial signatures it has generated for a particular key, rather than being trusted to enforce rules on what it has signed in the unblinded case. The full set of signatures generated is verified on the client side. The implementation is based on the 2-of-2 musig2 protocol, which operates as follows: 1. Party 1 generates a private key, denoted as "x1," and the corresponding public key, denoted as "X1 = x1G". G is the generator point, and point multiplication is denoted as X = xG, while point addition is denoted as A = G + G. 2. Party 2 generates a private key, denoted as "x2," and the corresponding public key, denoted as "X2 = x2G". 3. The set of public keys is denoted as L = {X1, X2}. 4. The key aggregation coefficient is given by KeyAggCoef(L, X) = H(L, X), where H is a hash function. This coefficient is used to calculate the shared (aggregate) public key, denoted as X = a1X1 + a2X2, where a1 = KeyAggCoef(L, X1) and a2 = KeyAggCoef(L, X2). 5. To sign a message "m," party 1 generates a nonce "r1" and derives a point "R1 = r1G". Party 2 generates a nonce "r2" and derives a point "R2 = r2G". These points are aggregated into "R = R1 + R2". 6. Party 1 computes the challenge "c" as the hash of the concatenation of X, R, and m, i.e., c = H(X||R||m), and calculates s1 = c.a1.x1 + r1. 7. Party 2 also computes the challenge "c" using the same formula, c = H(X||R||m), and calculates s2 = c.a2.x2 + r2. 8. The final signature is represented as (R, s1 + s2). In the case of blinding party 1, the steps to prevent it from learning the full public key or final signature are as follows: 1. Key aggregation is performed solely by party 2. Party 1 only needs to send its own public key, X1, to party 2. 2. Nonce aggregation is performed solely by party 2. Party 1 only needs to send its own nonce, R1, to party 2. 3. Party 2 computes the challenge "c" using the same formula and sends it to party 1, which uses it to compute s1 = c.a1.x1 + r1. 4. Party 1 never learns the final value of (R, s1 + s2) or the message "m". This implementation keeps party 1 blind to that information, ensuring that it cannot determine the full public key, the final signature, or the signed message. Any feedback or potential issues with this approach would be appreciated.

In this implementation, we are using a cryptographic protocol called 2-of-2 Schnorr Musig2 for statechains. In this protocol, there are two parties involved - party 1 and party 2. The goal is to create an aggregate signature on an aggregate public key, while ensuring that party 1 remains fully "blinded" and does not learn certain information. Blinding refers to the process of preventing party 1 from gaining knowledge of the aggregate public key, the aggregate signature, and the message being signed. In this model of blinded statechains, the security relies on party 1 being trusted to report the number of partial signatures it has generated for a specific key. The actual verification of the signatures is done on the client side. Now, let's break down how the 2-of-2 musig2 protocol operates and how blinding is achieved: 1. Key Generation: - Party 1 generates a private key (x1) and a corresponding public key (X1 = x1G), where G is the generator point. - Party 2 does the same, generating a private key (x2) and a public key (X2 = x2G). - The set of public keys is represented by L = {X1, X2}. 2. Key Aggregation: - The key aggregation coefficient is calculated using the set of public keys (L) and the aggregate public key (X). - KeyAggCoef(L, X) = H(L, X), where H is a hash function. - The shared (aggregate) public key is calculated as X = a1X1 + a2X2, where a1 = KeyAggCoef(L, X1) and a2 = KeyAggCoef(L, X2). 3. Message Signing: - To sign a message (m), party 1 generates a nonce (r1) and calculates R1 = r1G. - Party 2 also generates a nonce (r2) and calculates R2 = r2G. - These nonces are aggregated to obtain R = R1 + R2. - Party 1 computes the 'challenge' (c) as c = H(X || R || m) and calculates s1 = c.a1.x1 + r1. - Party 2 also computes the 'challenge' (c) as c = H(X || R || m) and calculates s2 = c.a2.x2 + r2. - The final signature is (R, s1 + s2). Now, let's focus on the blinding aspect for party 1: To prevent party 1 from learning the full public key or the final signature, the following steps are taken: 1) Key aggregation is performed only by party 2. Party 1 simply sends its public key X1 to party 2. 2) Nonce aggregation is performed only by party 2. Party 1 sends its generated nonce R1 to party 2. 3) Party 2 computes the 'challenge' (c) as c = H(X || R || m) and sends it back to party 1. Party 1 then computes s1 = c.a1.x1 + r1. - Party 1 does not need to independently compute and verify the challenge (c) since it is already blinded from the message. By following these steps, party 1 never learns the final value of (R, s1 + s2) or the message (m). In terms of potential issues, it is important to carefully evaluate the trustworthiness of the statechain server that reports the number of partial signatures. Additionally, the full set of signatures should be verified on the client side to ensure their validity. Any comments or concerns regarding this implementation would be highly appreciated.
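
For readers who want to sanity-check the algebra, below is a minimal Python sketch of the blinded 2-of-2 flow described above. It is not the statechain code from the post: the "group" is deliberately degenerate (a point x*G is just the scalar x mod n), and the hash-to-scalar helper stands in for MuSig2's real tagged hashes, so it only demonstrates that the partial signatures combine as claimed while party 1 sees nothing but the challenge.

```python
import hashlib
import secrets

n = 2**255 - 19           # a known prime, used here only as a toy scalar field
G = 1                     # toy generator: the "point" x*G is just x mod n

def pt(x):
    """Toy point multiplication x*G (no real elliptic-curve security)."""
    return (x * G) % n

def H(*vals):
    """Hash-to-scalar; stands in for MuSig2's tagged hashes."""
    data = b"".join(v.to_bytes(32, "big") for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

# Key setup
x1, x2 = secrets.randbelow(n), secrets.randbelow(n)  # party 1 / party 2 private keys
X1, X2 = pt(x1), pt(x2)
a1, a2 = H(X1, X2, X1), H(X1, X2, X2)                # KeyAggCoef(L, X1), KeyAggCoef(L, X2)
X = (a1 * X1 + a2 * X2) % n                          # aggregate key, known only to party 2

# Signing with party 1 blinded
m = int.from_bytes(hashlib.sha256(b"statechain tx").digest(), "big") % n
r1, r2 = secrets.randbelow(n), secrets.randbelow(n)
R1, R2 = pt(r1), pt(r2)
R = (R1 + R2) % n                                    # nonce aggregation, by party 2 only
c = H(X, R, m)                                       # challenge, computed by party 2 only
s1 = (c * a1 * x1 + r1) % n                          # party 1: receives only c, returns s1
s2 = (c * a2 * x2 + r2) % n                          # party 2 completes the signature
s = (s1 + s2) % n

# Schnorr-style check in the toy group: s*G == R + c*X
assert pt(s) == (R + c * X) % n
print("aggregate signature verifies; party 1 never saw X, R, or m")
```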

Computing Blinding Factors in a PTLC and Trampoline World

This passage describes a mathematical demonstration of a method for computing blinding factors in a specific way. The goal is to achieve certain properties, such as ensuring that only one blinding factor is needed for each intermediate node and the receiver, and allowing Trampoline nodes to provide blinding factors to sub-routes without the intermediate nodes being aware they are on a Trampoline route. The demonstration begins by establishing that the ultimate receiver has a secret value "r" and shares a point "R" with the ultimate sender, where R = r * G (G represents a point on an elliptic curve). In the simplest case, where the ultimate sender and receiver are directly connected, the ultimate sender chooses a random scalar "e" as the blinding factor and constructs an onion with "e" encrypted for the ultimate receiver. Along with the onion, the ultimate sender offers a Point Time-Locked Contract (PTLC) with the point e * G + R. The ultimate receiver can claim this PTLC by revealing e + r. Next, the scenario is slightly modified to include an intermediate node named Carol. In this case, the ultimate sender still chooses a random scalar "e" as the final blinding factor but also generates two scalars "c" and "d" such that c + d = e. This is achieved by selecting a random "d" and computing c = e - d. The onion then contains e encrypted to the ultimate receiver, and that ciphertext, along with d, encrypted to Carol. The PTLC is sent to Carol with the point c * G + R. Carol adds her per-hop blinding factor times G to the input point and sends a modified PTLC with the point c * G + R + d * G to the next hop. This results in (c + d) * G + R, which is equivalent to e * G + R, as e = c + d. The ultimate receiver cannot differentiate whether the PTLC came from Carol or a direct source-to-destination route because both cases result in the same point e * G + R. When the ultimate receiver reveals e + r, Carol can compute c + r by taking e + r - d. Since c = e - d, e + r - d = e - d + r = c + r. Carol can then claim the incoming c * G + R with the scalar c + r. Carol only knows d, not c or r, so it cannot compute r. Lastly, the scenario is extended to include Carol as a Trampoline node, and the ultimate sender does not provide the detailed route from Carol to the next Trampoline hop. The ultimate sender learns R, selects a random e, and computes c and d such that c + d = e. The Trampoline-level onion contains e encrypted to the ultimate receiver, and that ciphertext, along with d and the next Trampoline hop, encrypted to Carol. The PTLC with the onion is sent to Carol with the point c * G + R. Carol decrypts the onion and obtains d. Carol then needs to search for a route from herself to the ultimate receiver. Let's assume the route found is Carol -> Alice -> ultimate receiver. Carol selects two scalars, a and b, such that a + b = d. It creates a new onion with the ciphertext copied from the ultimate sender and b encrypted for Alice. The PTLC with the point c * G + R + a * G is sent to Alice. Alice decrypts the onion and learns b. Alice forwards the PTLC with the point c * G + R + a * G + b * G to the next hop, the ultimate receiver. Since a + b = d, a * G + b * G = d * G. Also, c + d = e, so c * G + d * G = e * G. Therefore, c * G + R + a * G + b * G = c * G + a * G + b * G + R = c * G + d * G + R = (c + d) * G + R = e * G + R. The ultimate receiver receives the same e * G + R and cannot determine whether it was reached via a Trampoline, non-Trampoline intermediate, or direct route.
Each intermediate node, both Trampoline and non-Trampoline, can claim its incoming PTLC, and only the ultimate sender knows c, allowing the recovery of r.
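
Since the whole argument is scalar arithmetic, it can be checked mechanically. The sketch below is a toy, not Lightning code: "points" are plain integers modulo a prime (so x * G is just x), which preserves the additive relations used in the post while ignoring all cryptographic hardness. It walks through the single-intermediate (Carol) case and the claim chain back to the sender.

```python
import secrets

n = 2**255 - 19                      # known prime, used only as a toy scalar field
pt = lambda x: x % n                 # toy "point multiplication" x*G
add = lambda *ps: sum(ps) % n        # toy "point addition"

r = secrets.randbelow(n)             # receiver's secret
R = pt(r)                            # point shared with the ultimate sender

e = secrets.randbelow(n)             # sender's final blinding factor
d = secrets.randbelow(n)             # per-hop factor delivered to Carol in the onion
c = (e - d) % n                      # so that c + d = e

ptlc_to_carol = add(pt(c), R)                  # sender -> Carol: c*G + R
ptlc_to_recv = add(ptlc_to_carol, pt(d))       # Carol -> receiver: + d*G

# The receiver sees e*G + R either way, so the route is indistinguishable.
assert ptlc_to_recv == add(pt(e), R)

# Claims: a scalar t "opens" a point P iff t*G == P.
t_recv = (e + r) % n                 # receiver reveals e + r
assert pt(t_recv) == ptlc_to_recv

t_carol = (t_recv - d) % n           # Carol: (e + r) - d = c + r
assert pt(t_carol) == ptlc_to_carol

r_recovered = (t_carol - c) % n      # the sender knows c and recovers r
assert r_recovered == r
print("all PTLCs claimable; sender recovered r")
```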

In this explanation, we will break down a mathematical demonstration that involves the computation of blinding factors. The purpose of this computation is to achieve certain goals, such as minimizing the number of blinding factors that intermediate nodes need to know and allowing trampoline nodes to provide blinding factors to sub-routes without revealing that they are trampoline nodes. Let's start by understanding the basic setup. We have a sender (ultimate sender) and a receiver (ultimate receiver). The ultimate receiver has a secret value called 'r'. The ultimate receiver shares a point called 'R' with the ultimate sender, where 'R' is equal to 'r' multiplied by a specific point 'G'. In the simplest case, if the ultimate sender can directly communicate with the ultimate receiver, it chooses a random value (scalar) called 'e' as the blinding factor. It constructs an onion with 'e' encrypted to the ultimate receiver and sends it along with a payment (PTLC) that contains the point 'e * G + R'. The ultimate receiver can claim this payment by revealing 'e + r' since it learns 'e' from the onion and knows 'r' (the secret value). This is possible because the contract between them requires the ultimate receiver to reveal the scalar 'e + r', from which the sender can recover 'r', in exchange for payment. Now, let's consider a scenario where an intermediate node, Carol, exists between the ultimate sender and the ultimate receiver. In this case, the ultimate sender still needs to choose a final blinding factor 'e' randomly. However, the sender also needs to generate two other scalars, 'c' and 'd,' such that 'c + d = e'. This can be achieved by selecting a random scalar 'd' and computing 'c = e - d'. The ultimate sender then encrypts the onion in the following way: - 'e' is encrypted to the ultimate receiver. - The above ciphertext, along with 'd', is encrypted to the intermediate node Carol. The ultimate sender sends the payment (PTLC) with the point 'c * G + R' to Carol. At this point, each intermediate non-Trampoline node (such as Carol) takes the input point, adds its per-hop blinding factor multiplied by 'G', and uses the result as the output point to the next hop. So Carol receives 'c * G + R'. Carol then adds 'd * G' (the 'd' value obtained from the onion) and sends a PTLC with the point 'c * G + R + d * G' to the next hop. Note that 'e = c + d', so the point of the PTLC sent by Carol to the ultimate receiver can be rearranged as '(c + d) * G + R'. This is equivalent to 'e * G + R', which is the same as the direct case where there is no intermediate node. Therefore, the ultimate receiver cannot distinguish whether the PTLC came via Carol or directly from the ultimate sender, since it sees 'e * G + R' in both cases. When the ultimate receiver reveals 'e + r', Carol can compute 'c + r' by taking 'e + r - d'. Since 'c = e - d', 'e + r - d = e - d + r = c + r'. Carol can then claim the incoming 'c * G + R' with the scalar 'c + r'. It's important to note that Carol does not know 'c'; it only knows 'd' and, therefore, cannot compute 'r'. Now let's consider another scenario where Carol is a trampoline node, and the ultimate sender does not provide a detailed route from Carol to the next trampoline hop. In this case, the ultimate receiver is actually the next trampoline hop after Carol, but Carol cannot tell whether that node is the final destination. The ultimate sender learns 'R' and selects a random 'e' as the blinding factor. It generates 'c' and 'd' such that 'c + d = e', following the same technique as before.
The ultimate sender then creates a trampoline-level onion with the following encrypted components: - 'e' encrypted to the ultimate receiver. - The above ciphertext, 'd', and the next trampoline hop (the node ID of the ultimate receiver) encrypted to Carol. The ultimate sender sends the payment (PTLC) with the above onion, containing the point 'c * G + R', to Carol. Carol decrypts the onion and obtains 'd'. Now, Carol needs to find a route from itself to the ultimate receiver, which, in this case, is the next trampoline hop. Suppose Carol finds a route Carol -> Alice -> ultimate receiver. Carol needs to make 'c * G + d * G + R' reach the ultimate receiver. It can do this by selecting two scalars, 'a' and 'b', such that 'a + b = d'. Carol knows 'd', so it randomly selects 'b' and computes 'a = d - b'. Carol creates the onion as follows: - It copies the ciphertext from the ultimate sender: 'e' encrypted to the ultimate receiver. - The above ciphertext and 'b' encrypted to Alice. Carol sends the PTLC with the point 'c * G + R + a * G' to Alice. Alice decrypts the onion and learns 'b'. Then, Alice forwards the PTLC with the point 'c * G + R + a * G + b * G' to the next hop, the ultimate receiver. Now, 'a + b = d', so 'a * G + b * G = d * G'. Also, 'c + d = e', so 'c * G + d * G = e * G'. Therefore: c * G + R + a * G + b * G = c * G + a * G + b * G + R (commutative property) = c * G + (a + b) * G + R (scalar multiplication distributes over scalar addition) = c * G + d * G + R (d = a + b by construction) = (c + d) * G + R (scalar multiplication distributes over scalar addition) = e * G + R (e = c + d by construction) Thus, the ultimate receiver receives the same 'e * G + R' and cannot differentiate whether it was reached via a trampoline, a non-trampoline intermediate, or directly. Similarly, when claiming, every intermediate node, both trampoline and non-trampoline, has enough data to claim its incoming PTLC. And only the ultimate sender knows 'c', which allows it to recover 'r'. I hope this detailed explanation helps you understand the mathematical demonstration and its implications.
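
The same kind of toy check covers the Trampoline case, where Carol splits her 'd' into 'a + b' for the sub-route she found. As before, this is only an algebra check in a deliberately degenerate group (a "point" x * G is just x mod n); it is not an implementation of PTLCs or Trampoline onions.

```python
import secrets

n = 2**255 - 19
pt = lambda x: x % n                 # toy "point multiplication" x*G
add = lambda *ps: sum(ps) % n        # toy "point addition"

r = secrets.randbelow(n)
R = pt(r)                            # receiver secret / point shared with the sender

# Ultimate sender: e = c + d
e = secrets.randbelow(n)
d = secrets.randbelow(n)
c = (e - d) % n

# Carol (Trampoline node): d = a + b for the discovered sub-route Carol -> Alice -> receiver
b = secrets.randbelow(n)
a = (d - b) % n

p_carol = add(pt(c), R)              # sender -> Carol
p_alice = add(p_carol, pt(a))        # Carol  -> Alice
p_recv = add(p_alice, pt(b))         # Alice  -> receiver

assert p_recv == add(pt(e), R)       # the receiver cannot tell the routes apart

# Claim chain (a scalar t opens point P iff t*G == P):
t_recv = (e + r) % n
assert pt(t_recv) == p_recv
t_alice = (t_recv - b) % n           # = c + a + r
assert pt(t_alice) == p_alice
t_carol = (t_alice - a) % n          # = c + r
assert pt(t_carol) == p_carol
assert (t_carol - c) % n == r        # only the sender, who knows c, recovers r
print("trampoline sub-route claims are all consistent")
```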

Potential vulnerability in Lightning backends: BOLT-11 "payment hash" does not commit to payment!

In this message, the sender, Calle, is informing a list of recipients about an exploit that was discovered by their team at LNbits. The exploit allowed an attacker to create balances by taking advantage of a quirk in how invoices are handled internally. The team has already patched this issue in LNbits version 0.10.5 and recommends that everyone update as soon as possible. Calle wants to describe the attack in detail because they believe that similar exploits may be possible in other Lightning applications. They specifically mention that this information would be important for people working on custodial wallets, payment processors, account management software, and so on. The attack involves an attacker manipulating two payments, A and B, and tricking the backend into thinking that B is equal to A. Here are the steps involved: 1. The attacker creates an invoice A with an amount of 1000 sat (satoshi) in LNbits. 2. The attacker also creates a separate invoice B' with an amount of 1 sat on their own node. 3. The attacker then modifies B' by inserting the payment hash of payment A into it, effectively making B with manipulated payment details. 4. The attacker re-signs the invoice to make it look legitimate again and serializes it, creating the malicious invoice B. 5. Next, the attacker creates a new account in LNbits and pays invoice B. 6. The LNbits backend, when processing the payment, uses the payment hash of B to determine whether it's an internal payment or a payment via the Lightning Network. 7. Since the backend assumes that the payment hash of A commits to A, it finds A in its database. 8. The backend then settles the payment internally by crediting A and debiting B. 9. As a result, the attacker has effectively "created" 999 sat in their account by manipulating the payment process. To mitigate this exploit, the recommended approach is for backends to either use unique "checking ids" that they generate themselves for looking up internal payments or implement additional checks to ensure that the invoice details haven't been tampered with. For example, they could verify that the amount of A is equal to the amount of B. Calle also highlights two lessons learned from this attack. Firstly, it emphasizes the level of sophistication of attackers familiar with the Lightning Network. This particular exploit required a deep understanding of the underlying technology and the ability to create custom tools. Secondly, it underscores the importance of understanding that the "payment hash" in an invoice is actually just a "preimage" hash and doesn't commit to payment details such as amount or pubkey. Calle suggests calling it the "preimage hash" going forward to avoid any implicit assumptions. Overall, this message serves as a detailed explanation of the discovered exploit, the steps involved in carrying it out, the recommended mitigation, and the lessons learned from this experience.
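
To make the failure mode concrete, here is a deliberately simplified toy backend in Python. It is not LNbits code, and all names (ToyBackend, checking_id, the invoice fields) are invented for illustration. It only contrasts an internal-settlement lookup keyed on the payment hash alone with one that re-checks data the payment hash does not commit to.

```python
import secrets

class ToyBackend:
    def __init__(self):
        self.invoices = {}   # payment_hash -> invoice record
        self.balances = {}   # account -> satoshi balance

    def create_invoice(self, account, amount_sat):
        payment_hash = secrets.token_hex(32)   # stand-in for sha256(preimage)
        checking_id = secrets.token_hex(16)    # generated by the backend, never user-supplied
        self.invoices[payment_hash] = {"account": account, "amount_sat": amount_sat,
                                       "checking_id": checking_id, "paid": False}
        return payment_hash

    def pay_invoice_vulnerable(self, payer, decoded_invoice):
        # BUG: trusts that the payment hash commits to the rest of the invoice.
        inv = self.invoices.get(decoded_invoice["payment_hash"])
        if inv and not inv["paid"]:
            self._settle(payer, inv, credited=inv["amount_sat"],            # credits 1000
                         debited=decoded_invoice["amount_sat"])             # debits only 1
            return "settled internally"
        return "forwarded over Lightning"

    def pay_invoice_fixed(self, payer, decoded_invoice):
        inv = self.invoices.get(decoded_invoice["payment_hash"])
        if inv and not inv["paid"]:
            # Re-check a field the payment hash does NOT commit to (the amount);
            # looking payments up by the backend's own checking_id is the other option.
            if decoded_invoice["amount_sat"] != inv["amount_sat"]:
                raise ValueError("invoice does not match our records")
            self._settle(payer, inv, credited=inv["amount_sat"], debited=inv["amount_sat"])
            return "settled internally"
        return "forwarded over Lightning"

    def _settle(self, payer, inv, credited, debited):
        inv["paid"] = True
        self.balances[inv["account"]] = self.balances.get(inv["account"], 0) + credited
        self.balances[payer] = self.balances.get(payer, 0) - debited

backend = ToyBackend()
h = backend.create_invoice("attacker_wallet_1", 1000)        # invoice A, 1000 sat
forged_b = {"payment_hash": h, "amount_sat": 1}               # B: A's hash, 1 sat, re-signed elsewhere
backend.pay_invoice_vulnerable("attacker_wallet_2", forged_b)
print(backend.balances)   # wallet 1 gains 1000, wallet 2 loses only 1: 999 sat from nothing

fixed = ToyBackend()
h2 = fixed.create_invoice("victim", 1000)
try:
    fixed.pay_invoice_fixed("attacker", {"payment_hash": h2, "amount_sat": 1})
except ValueError as err:
    print("fixed backend rejects the forged invoice:", err)
```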

Dear 15-year-old, Recently, a team called LNbits discovered an interesting issue in their software that could allow someone to exploit it. Let me explain it to you in detail. LNbits is a software that handles invoices related to Lightning Network, which is a technology used for quick and low-cost transactions of cryptocurrencies like Bitcoin. In this software, there was a loophole that allowed an attacker to create fake balances by taking advantage of how invoices are processed internally. The team at LNbits fixed this issue in their latest version, 0.10.5, and they are urging everyone to update their software as soon as possible if they haven't done so already. They are sharing the details of the attack because they believe that similar exploits might be possible in other Lightning Network applications. If you are involved in developing custodial wallets, payment processors, or account management software, this information is relevant to you. Now, let's talk about how the attack works. The attacker first creates an invoice, let's call it Invoice A, with an amount of 1000 sat (satoshis, the smallest unit of Bitcoin). Then, they create another invoice, Invoice B', with an amount of 1 sat on their own node. The attacker then modifies Invoice B' by inserting the payment hash of Invoice A into it. The payment hash is a unique identifier for each payment. By doing this, the attacker tricks the LNbits backend, the system that handles the invoices, into thinking that Invoice B is actually Invoice A. They do this by reshaping the invoice and making it look like a legitimate payment. Next, the attacker creates a new account in LNbits and pays Invoice B. The LNbits backend, which checks the payment hash to determine whether it's an internal payment or a payment through Lightning Network, finds Invoice A in its database. This is because the backend assumes that the payment hash commits to Invoice A. However, the critical part here is that payment hashes do not commit to payment details like the amount, but only to the preimage (a unique code linked to the payment). As a result, the LNbits backend settles the payment by crediting Invoice A and debiting Invoice B. By doing this, the attacker has effectively "created" 999 sat. To prevent such attacks, it is important for backends to use unique identifiers or additional checks when looking up internal payments. This ensures that the invoice details have not been tampered with. There are two lessons to learn from this incident. Firstly, it is crucial to understand that attackers who are knowledgeable about Lightning Network can be quite sophisticated. This attack required a deep understanding of technical concepts and custom tools to carry it out. Secondly, the term "payment hash" is misleading because it suggests that it commits to payment details like the amount of money or the public key. In reality, it only commits to the preimage. To mitigate confusion, the author suggests renaming it as the "preimage hash." I hope this explanation helps you understand the issue and the importance of keeping software secure and updated. Best, Calle

An Open Source Framework to Collect Lightning Network Metrics

In this message, the author is introducing a side project they have been working on. The project involves collecting data on the Lightning Network, which is a protocol built on top of Bitcoin for conducting faster and cheaper transactions. The main objective of the project is to monitor the evolution of the Lightning Network and gather relevant data. This collected data can then be used to evaluate different proposals or ideas related to the network. One specific proposal mentioned is "channel jamming," which refers to a scenario where a malicious user intentionally overloads a channel to disrupt transactions. The author highlights that collecting real data is important as it provides tangible insights into the network's behavior and allows for more informed evaluations. Simulations can only provide theoretical results, whereas real data offers a more accurate representation of the network's dynamics. Additionally, the author mentions that their project aims to support university research that may not have access to real data. By providing this collected information, researchers can analyze and evaluate their own ideas without having to rely solely on simulations. The author provides links to further information about the project. [1] leads to a detailed document outlining the idea and methodology behind the data collection. [2] directs to an experimental explorer, a platform where users can explore and visualize the collected data. Finally, [3] is a public GraphQL API (Application Programming Interface) that exposes the collected data for developers or researchers to access. In conclusion, the author hopes that their project will be useful to someone interested in studying, evaluating, or proposing solutions for the Lightning Network.

Hello! I'm happy to explain this to you in great detail. So, it seems like the person who wrote this message has a side project where they're trying to gather data on something called the lightning network. The lightning network is a system built on top of the Bitcoin blockchain that allows for faster and cheaper transactions. The goal of this project is to track how the lightning network evolves over time. They want to do this to evaluate different proposals or ideas for improving the network. They mention something called "channel jamming," which is one proposal they're interested in investigating. By collecting real data on the network, they can see how these proposals actually affect the network in practice, instead of just relying on simulation results. Additionally, they mention that they want to support university research that may not have access to this real data. By providing this data, they hope to enable more research and experimentation in the field. To achieve this, the person has come up with a way to define and collect information that can later be shared with others. They've provided links to a more detailed description of their idea, an experimental explorer where you can see the data they've collected, and a public Graphql API that allows others to access this data as well. The hope is that this project will be useful for someone who wants to study or improve the lightning network. The person who wrote this message goes by the name Vincent and they're excited about the potential impact of their project. I hope that helps! Let me know if you have any further questions.
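
As a rough idea of how such a GraphQL endpoint might be consumed, here is a hypothetical Python query. The endpoint URL, query fields, and metric names are all made up for illustration, since the real schema is only described in the linked documents; treat this as a shape, not as the project's actual API.

```python
import requests

ENDPOINT = "https://example.org/lnmetrics/graphql"   # placeholder, not the real URL

QUERY = """
query NodeUptime($network: String!) {
  metrics(network: $network) {
    nodeId
    uptimePercent
    forwardsObserved
  }
}
"""

resp = requests.post(ENDPOINT,
                     json={"query": QUERY, "variables": {"network": "bitcoin"}},
                     timeout=30)
resp.raise_for_status()
for node in resp.json()["data"]["metrics"]:
    print(node["nodeId"], node["uptimePercent"], node["forwardsObserved"])
```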

option_simple_close for "unfailable" closing

The link provided is a pull request on the GitHub repository for the "bolts" project. The pull request is labeled as #1096. The description of the pull request indicates that it is a "can't fail!" close protocol, which was discussed at the NY Summit and on @Roasbeef's wishlist. The protocol aims to be as simple as possible, with the only complexity arising from allowing each side to indicate whether they want to omit their own output. The protocol is "taproot ready" in the sense that shutdown is always sent to trigger it, allowing nonces to be included without any persistence requirement. The pull request consists of three commits to the repository. The first commit introduces the new protocol, the second commit removes the requirement that shutdown not be sent multiple times (which was already nonsensical), and the third commit removes the older protocols. The pull request includes changes to the file "02-peer-protocol.md". The changes in this file introduce the new protocol, describe the closing negotiation process, and specify the requirements for each step of the negotiation. The file includes a section on "Closing Negotiation" that explains that once shutdown is complete and the channel is empty of Hashed Time Locked Contracts (HTLCs), each peer says what fee it will pay and the other side simply signs off on that transaction. The section includes details on the message types, the data they contain, and the requirements for each peer in the negotiation. The pull request also includes changes to the file "03-transactions.md". The changes in this file provide details on the closing transactions used in the negotiation process. The file describes the different variants of the closing transaction and outlines the requirements for each variant. Finally, the pull request includes changes to the file "09-features.md". The changes in this file add a new feature called "option_simple_close" which is related to the simplified closing negotiation described in the pull request. Overall, the pull request introduces a new closing protocol for the bolts project, provides specifications for the negotiation process, and makes changes to related files to support the new protocol.

This is a pull request on a GitHub repository called "bolts" that proposes a new protocol for closing a channel in the Lightning Network. The pull request is numbered 1096. The new protocol is called "can't fail!" close protocol and it was discussed at the NY Summit and on the wishlist of a person named Roasbeef. The goal of this protocol is to make the closing process as simple as possible, with the only complexity being the option for each side to indicate whether they want to omit their own output. The protocol is "taproot ready" in the sense that the shutdown message is always sent to trigger the closing process, and this message can contain the necessary data without requiring any persistence. The pull request is split into three commits for cleanliness and organization. The first commit introduces the new protocol, the second removes a requirement that no longer makes sense, and the third removes older protocols that are no longer needed. The pull request includes changes to the "02-peer-protocol.md" and "03-transactions.md" files. In the "02-peer-protocol.md" file, there are several sections that describe the closing process, including the closing initiation, closing negotiation, and normal operation. The "closing negotiation" section is further divided into two parts: "closing_complete" and "closing_sig". In the "closing_complete" part, each peer says what fee it is willing to pay, and the other side simply signs that transaction. The complexity arises from allowing each side to omit its own output if it is not economically viable. This process can be repeated every time a shutdown message is received, allowing for re-negotiation. The "closing_sig" part describes the requirements for this message, including the transaction data that needs to be included and the signatures that need to be provided. The requirements differ depending on whether the sender of the message is the closer or the closee. The receiver of the closing_sig message needs to validate the signatures and select one of the transactions to respond to. The "03-transactions.md" file includes the details of the closing transactions, including the classic closing transaction variant and the closing transaction variant used for closing_complete and closing_sig messages. Overall, this pull request proposes a new protocol for closing a channel in the Lightning Network that simplifies the process and allows for negotiation between the peers involved. It includes changes to the protocol specification files to describe the new protocol in detail.
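
To make the shape of the negotiation easier to follow, here is a schematic sketch of the exchange summarized above. It is not the BOLT wire format: the message classes, field names, and signing helpers are placeholders, not the TLV layout defined in the pull request itself.

```python
# Schematic sketch of the option_simple_close exchange described above.
# Field names and helpers are placeholders, not the actual BOLT TLV layout.

from dataclasses import dataclass

@dataclass
class ClosingComplete:
    channel_id: str
    fee_satoshis: int        # fee the closer pays out of its own output
    omit_own_output: bool    # closer may drop its output if uneconomical
    closer_signature: str

@dataclass
class ClosingSig:
    channel_id: str
    closee_signature: str


def closer_propose(channel_id: str, fee: int, own_output_uneconomical: bool, sign) -> ClosingComplete:
    """After shutdown has been exchanged and the channel has no HTLCs left,
    the closer proposes a closing tx paying `fee` from its own output."""
    return ClosingComplete(channel_id, fee, own_output_uneconomical,
                           closer_signature=sign("closing tx at closer's feerate"))


def closee_respond(msg: ClosingComplete, verify, sign) -> ClosingSig:
    """The closee just checks the signature and signs the same transaction;
    there is nothing to negotiate over, so the close cannot 'fail'."""
    if not verify(msg.closer_signature):
        raise ValueError("invalid closing_complete signature")
    return ClosingSig(msg.channel_id, closee_signature=sign("same closing tx"))


if __name__ == "__main__":
    # Toy stand-ins for real signing/verification.
    sign = lambda what: f"sig({what})"
    verify = lambda sig: sig.startswith("sig(")

    proposal = closer_propose("chan-0", fee=500, own_output_uneconomical=False, sign=sign)
    reply = closee_respond(proposal, verify=verify, sign=sign)
    # With both signatures the closing transaction can be broadcast; a later
    # shutdown from either side simply restarts this exchange (RBF-style).
    print(proposal, reply)
```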

LN Summit 2023 Notes

The text you provided is a detailed summary of a discussion about various topics related to the Lightning Network (LN) specification. Here is a breakdown of the key points discussed: 1. Package Relay: The discussion focused on the proposal for package relay, which involves grouping transactions into packages for more efficient processing. The current proposal is to use ancestor package relay, which allows for up to 24 ancestors for each child transaction. Other topics discussed include base package relay, commitment transactions, ephemeral anchors, HTLCs (hashed timelock contracts), and mempool policy changes. 2. Taproot: The discussion touched on the latest developments in taproot channel support, a privacy and scalability improvement. Specific points discussed include the changes related to anchors and revocation paths, as well as the implementation of nonces. 3. Gossip V1.5 vs V2: The participants discussed the differences between Gossip V1.5 and V2 in terms of script binding and amount binding. They debated whether to fully bind to the script or allow any taproot output to be a channel. The potential implications for pathfinding and capacity graphs were also discussed. 4. Reputation System for Channel Jamming: The participants explored the idea of using a reputation system to mitigate channel jamming attacks. The discussion revolved around resource bucketing, reputation scores, endorsement signals, and the impact on network quality of service. 5. Simplified Commitments: The participants discussed the concept of simplified commitments, which aims to simplify the LN state machine by introducing turn-taking and a refined protocol for updates, commitments, and revocations. They also discussed the possibility of introducing NACK messages for rejecting updates and the benefits of a more streamlined state machine. 6. Meta Spec Process: The participants debated the best approach to managing the LN specification as it evolves over time. They discussed the pros and cons of a living document vs. versioning, the need for modularization, and the importance of maintaining backward compatibility. The role of extensions, cleaning up the specification, and improving communication among developers were also discussed. 7. Async Payments/Trampoline: The participants briefly discussed the use of blinded payments for trampoline payments, where nodes in the network help route payments to their destination. The concept of trampolines, radius-based gossip, and splitting multi-path payments over trampoline were mentioned. In summary, the discussion covered a range of topics related to the LN specification, including package relay, taproot channels, gossip protocols, reputation systems, simplified commitments, the meta spec process, and trampoline payments. The participants provided detailed insights, shared ideas, and debated the pros and cons of various proposals and approaches.

During the annual specification meeting in New York City at the end of June, the attendees attempted to take transcript-style notes. These notes are available in a Google Docs document, which you can find at the link provided. Additionally, the full set of notes is included at the end of the email, although the formatting may be affected. The discussions at the summit covered several topics, including: 1. Package Relay: The current proposal for package relay is ancestor package relay, which allows one child to have up to 24 ancestors. Currently, only mempool transactions are scored by ancestry, so there isn't much point in other types of packages. Commitment transactions still require the minimum relay fee for base package relay. Batch bumping is not allowed, to prevent pinning attacks. With one anchor, package RBF becomes possible. V3 transactions would allow the minimum relay fee to be dropped, with the restriction that one child pays for one parent transaction. 2. HTLCs and Anchors: Changes are being made to HTLC transactions around SIGHASH_ANYONECANPAY, which allows the counterparty to inflate the size of a transaction. The discussion revolved around how much the system should be changed. The proposed changes would allow for zero-fee commitment transactions and one ephemeral anchor per transaction. The use of ephemeral anchors would eliminate the need for a delay and ensure eviction of the parent transaction. 3. Mempool Policy: The mempool can be organized into clusters of transactions, allowing for easier sorting and reasoning. The mining algorithm picks one "vertical" within the mempool using the ancestor fee rate. The discussion explored the possibility of adding the cluster mempool to enable package RBF. 4. Taproot: The main change in taproot channels is around anchors, which become more complicated with this update. The discussion covered various aspects of taproot, including revocation paths, NUMS points, and co-op close negotiation. 5. Gossip V1.5 vs. V2: The discussion revolved around script binding and amount binding in gossip. The participants debated whether to bind to the script or allow any taproot output. The consensus was to allow any taproot output to be a channel and let people experiment. 6. Multi-Sig Channel Parties: The discussion focused on different ways to implement multi-sig for one channel party, such as using scripts, FROSTy, or recursive musig. 7. PTLCs (Point Time-Locked Contracts): The conversation explored different approaches to PTLCs, such as regular musig or adaptor signatures. The potential for redundant overpayment (stuckless payments) and different options for achieving it were also discussed. 8. Hybrid Approach to Channel Jamming: The discussion centered around different approaches to mitigating jamming attacks in the Lightning Network, including monetary solutions (unconditional fees), reputation-based solutions, and scarce resources (PoW, stake, tokens). The participants discussed the need to combine multiple solutions for effective mitigation and the challenges associated with each approach. 9. Reputation for Channel Jamming: The participants explored the concept of reputation-based mitigation for jamming attacks. The discussion focused on resource bucketing, reputation scores, and the allocation of protected and general slots for HTLCs based on reputation and endorsement signals (a toy sketch of this bucketing idea follows below). 10. Simplified Commitments: The conversation revolved around simplifying the state machine for the Lightning Network by implementing turn-taking and introducing the concepts of revoke and NACK. The participants explored the implications of these changes and the benefits of simplified commitments. 11. Meta Spec Process: The participants discussed the idea of moving away from a single "living document" to a versioning system for the Lightning Network specification. The proposal was to have extensions that can be added and removed as needed, allowing for modularity and easier maintenance. The participants also discussed the need for better communication channels and the importance of recommitting to the lightning-dev mailing list. 12. Async Payments/Trampoline: The final discussion focused on trampoline payments and the potential for async (asynchronous) payments. The participants explored the concept of light nodes, trampoline routing, and the ability to split MPP (multi-part payments) over trampoline. In summary, the discussions covered a wide range of topics related to the Lightning Network and its specification. The participants delved into technical details, proposed solutions, and debated the benefits and challenges of various approaches.
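
The resource-bucketing idea mentioned under "Reputation for Channel Jamming" is easy to sketch. The split between "protected" and "general" slots below, and the simple reputation/endorsement admission rule, are illustrative choices for demonstration only, not the parameters discussed at the summit.

```python
# Toy sketch of reputation-based HTLC slot bucketing; the 50/50 split and the
# admission rule are illustrative, not the summit's actual proposal.

class HtlcSlots:
    def __init__(self, total_slots: int = 483, protected_fraction: float = 0.5):
        # 483 is the protocol ceiling for max_accepted_htlcs per direction.
        self.protected_limit = int(total_slots * protected_fraction)
        self.general_limit = total_slots - self.protected_limit
        self.protected_in_flight = 0
        self.general_in_flight = 0

    def try_add(self, endorsed: bool, peer_reputation: float,
                reputation_threshold: float = 0.8) -> bool:
        """Admit an HTLC into the protected bucket only if the upstream peer
        endorsed it AND has good reputation; everything else competes for the
        general bucket, which a jammer can fill without harming good traffic."""
        if endorsed and peer_reputation >= reputation_threshold:
            if self.protected_in_flight < self.protected_limit:
                self.protected_in_flight += 1
                return True
        if self.general_in_flight < self.general_limit:
            self.general_in_flight += 1
            return True
        return False  # no capacity left for unendorsed / low-reputation traffic


if __name__ == "__main__":
    slots = HtlcSlots()
    print(slots.try_add(endorsed=True, peer_reputation=0.95))   # lands in a protected slot
    print(slots.try_add(endorsed=False, peer_reputation=0.95))  # lands in a general slot
```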

On the experiment of the Bitcoin Contracting Primitives WG and marking this community process "up for grabs"

This message is a detailed update on the progress and future plans for the development of Bitcoin consensus changes. The author begins by referencing past discussions about covenant proposals and the need for a new community process to specify covenants. They mention their goals, which include building a consistent framework to evaluate covenant proposals, finding common ground between proposals, expanding the consensus changes development process beyond Bitcoin Core, and maintaining a high-quality technical archive. The author acknowledges that other initiatives, such as the bitcoin-inquisition fork and the archiving of covenant proposals under the Optech umbrella, have also been undertaken. They mention the Bitcoin Contracting Primitives Working Group, which has held monthly meetings and documented various primitives and protocols related to Bitcoin contracting. The author explains that they launched the effort as an experiment, devoting 20% of their time to it. However, they have come to the realization that their time and energy would be better allocated to working on Lightning Network robustness. They express their belief that the scalability and smooth operation of the Lightning Network are more critical for Bitcoin's survival than extended covenant capabilities. The author encourages others who are working on covenant changes proposals to continue their work, noting that Taproot and Schnorr soft forks have proven to be beneficial for self-custody solutions. They also mention their own plans to focus on R&D works related to CoinPool, particularly in addressing interactivity issues and designing advanced Bitcoin contracts. The author concludes by acknowledging that they may have overpromised with the launch of the new process for Bitcoin consensus changes development. They emphasize the importance of having technical historians and archivists to assess, collect, and preserve consensus changes proposals, as well as QA devs to ensure proper testing before deployment. They invite others to continue the maintenance of the Bitcoin Contracting Primitives Working Group or collaborate with other organizations. Overall, this message provides detailed information about the progress, challenges, and future plans related to Bitcoin consensus changes.

In this message, the author is discussing their involvement in a community process related to Bitcoin development. They introduced the idea of a new process to specify covenants, which are conditions or agreements that can be added to Bitcoin transactions. The author explains that they will not be actively pursuing this process further, as they have decided to focus more on other Bitcoin projects. They mention that the goals of this process were to build a consistent framework for evaluating covenant proposals, identify commonalities between proposals, open up the consensus development process beyond Bitcoin Core, and maintain a high-quality technical archive. The author also mentions other initiatives that have been undertaken during the same period, such as a fork of Bitcoin Core called bitcoin-inquisition and the archiving of covenant proposals under the Optech umbrella. The author provides some details about the Bitcoin Contracting Primitives Working Group, which is a group of individuals who have been working on documenting and archiving various Bitcoin contract primitives and protocols. They mention that monthly meetings have been held, and there have been in-depth discussions on topics related to contract primitives and protocols. The author explains that they started this effort as an experiment and initially committed to dedicating 20% of their time to it. However, they have realized that there is still a lot of work to be done in other areas, such as improving the Lightning Network, which is a second-layer scaling solution for Bitcoin. They believe that working on scaling Bitcoin and improving its robustness is more critical for the survival of Bitcoin than focusing on advanced contract capabilities. The author acknowledges that they may have overpromised with the new community process but believes that enough progress has been made to demonstrate its value. They express that what Bitcoin needs is not necessarily more technical proposals but rather a focus on assessing, collecting, and preserving consensus change proposals and ensuring thorough testing before deployment. They invite others to continue the work of the Bitcoin Contracting Primitives Working Group if they are willing to commit resources and effort to it. Overall, the author is reflecting on their involvement in the community process related to Bitcoin covenant proposals and discussing their decision to shift their focus to other Bitcoin projects. They believe that there is still much work to be done in scaling and improving Bitcoin's robustness and express the need for dedicated individuals to assess and preserve consensus change proposals.

Blinded 2-party Musig2

This text describes the implementation of a version of the 2-of-2 Schnorr Musig2 protocol for statechains. Statechains involve a server (referred to as party 1) that is "blinded," meaning it holds a private key necessary to generate an aggregate signature on an aggregate public key, but it does not have access to certain information. The information that party 1 is not supposed to learn includes: 1) the aggregate public key, 2) the aggregate signature, and 3) the message being signed (denoted as "m" in the text). The security of this implementation relies on party 1 being trusted to report the number of partial signatures it has generated for a particular key, rather than being trusted to enforce rules on what it has signed, as in the unblinded case. The full set of signatures generated is verified on the client side. The implementation is based on the 2-of-2 musig2 protocol, which operates as follows: 1. Party 1 generates a private key, denoted as "x1," and the corresponding public key, denoted as "X1 = x1G". G is the generator point, point multiplication is denoted as X = xG, and point addition is denoted as A = G + G. 2. Party 2 generates a private key, denoted as "x2," and the corresponding public key, denoted as "X2 = x2G". 3. The set of public keys is denoted as L = {X1, X2}. 4. The key aggregation coefficient is given by KeyAggCoef(L, X) = H(L, X), where H is a hash function. This coefficient is used to calculate the shared (aggregate) public key, denoted as X = a1X1 + a2X2, where a1 = KeyAggCoef(L, X1) and a2 = KeyAggCoef(L, X2). 5. To sign a message "m," party 1 generates a nonce "r1" and derives a point "R1 = r1G". Party 2 generates a nonce "r2" and derives a point "R2 = r2G". These points are aggregated into "R = R1 + R2". 6. Party 1 computes the challenge "c" as the hash of the concatenation of X, R, and m, i.e., c = H(X||R||m), and calculates s1 = c.a1.x1 + r1. 7. Party 2 also computes the challenge "c" using the same formula, c = H(X||R||m), and calculates s2 = c.a2.x2 + r2. 8. The final signature is (R, s1 + s2). In the case of blinding party 1, the steps to prevent it from learning the full public key or final signature are as follows: 1. Key aggregation is performed solely by party 2; party 1 only needs to send its own public key, X1, to party 2. 2. Nonce aggregation is performed solely by party 2; party 1 only needs to send its own nonce, R1, to party 2. 3. Party 2 computes the challenge "c" using the same formula and sends it to party 1, which uses it to compute s1 = c.a1.x1 + r1. 4. Party 1 never learns the final value of (R, s1 + s2) or the message "m". This design keeps party 1 blinded from that information, ensuring it cannot determine the full public key, the final signature, or the signed message. The author asks for feedback on any potential issues with this approach.

In this implementation, we are using a cryptographic protocol called 2-of-2 Schnorr Musig2 for statechains. In this protocol, there are two parties involved - party 1 and party 2. The goal is to create an aggregate signature on an aggregate public key, while ensuring that party 1 remains fully "blinded" and does not learn certain information. Blinding refers to the process of preventing party 1 from gaining knowledge of the aggregate public key, the aggregate signature, and the message being signed. In this model of blinded statechains, the security relies on party 1 being trusted to report the number of partial signatures it has generated for a specific key. The actual verification of the signatures is done on the client side. Now, let's break down how the 2-of-2 musig2 protocol operates and how blinding is achieved: 1. Key Generation: - Party 1 generates a private key (x1) and a corresponding public key (X1 = x1G), where G is the generator point. - Party 2 does the same, generating a private key (x2) and a public key (X2 = x2G). - The set of public keys is represented by L = {X1, X2}. 2. Key Aggregation: - The key aggregation coefficient is calculated using the set of public keys (L) and the aggregate public key (X). - KeyAggCoef(L, X) = H(L, X), where H is a hash function. - The shared (aggregate) public key is calculated as X = a1X1 + a2X2, where a1 = KeyAggCoef(L, X1) and a2 = KeyAggCoef(L, X2). 3. Message Signing: - To sign a message (m), party 1 generates a nonce (r1) and calculates R1 = r1G. - Party 2 also generates a nonce (r2) and calculates R2 = r2G. - These nonces are aggregated to obtain R = R1 + R2. - Party 1 computes the 'challenge' (c) as c = H(X || R || m) and calculates s1 = c.a1.x1 + r1. - Party 2 also computes the 'challenge' (c) as c = H(X || R || m) and calculates s2 = c.a2.x2 + r2. - The final signature is (R, s1 + s2). Now, let's focus on the blinding aspect for party 1: To prevent party 1 from learning the full public key or the final signature, the following steps are taken: 1) Key aggregation is performed only by party 2. Party 1 simply sends its public key X1 to party 2. 2) Nonce aggregation is performed only by party 2. Party 1 sends its generated nonce R1 to party 2. 3) Party 2 computes the 'challenge' (c) as c = H(X || R || m) and sends it back to party 1. Party 1 then computes s1 = c.a1.x1 + r1. - Party 1 does not need to independently compute and verify the challenge (c) since it is already blinded from the message. By following these steps, party 1 never learns the final value of (R, s1 + s2) or the message (m). In terms of potential issues, it is important to carefully evaluate the trustworthiness of the statechain server that reports the number of partial signatures. Additionally, the full set of signatures should be verified on the client side to ensure their validity. Any comments or concerns regarding this implementation would be highly appreciated.
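
To check the algebra of the blinded flow end to end, here is a toy model in which a group point xG is represented simply by the scalar x modulo a prime "order" n, purely to verify the bookkeeping. It has none of the security properties of a real secp256k1 MuSig2 implementation (in particular, real MuSig2 uses two nonces per signer and the standardized key-aggregation and nonce-coefficient hashes), and all key and nonce values below are arbitrary toy numbers.

```python
# Toy model of the blinded 2-of-2 flow above: a point x*G is represented by
# the scalar x mod n, purely to check the algebra. Not a real implementation.

import hashlib

n = 2**255 - 19  # placeholder prime used as the toy group order

def H(*parts) -> int:
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

# --- Party 1 (the blinded server): knows only x1, r1, and the challenge c ---
x1, r1 = 1111, 2222           # private key and nonce (toy values)
X1, R1 = x1 % n, r1 % n       # "points" sent to party 2

# --- Party 2 (the client): performs all aggregation ---
x2, r2 = 3333, 4444
X2, R2 = x2 % n, r2 % n
L = (X1, X2)
a1, a2 = H(L, X1), H(L, X2)            # key aggregation coefficients
X = (a1 * X1 + a2 * X2) % n            # aggregate public key (hidden from party 1)
R = (R1 + R2) % n                      # aggregate nonce (hidden from party 1)
m = "message party 1 never sees"
c = H(X, R, m)                         # challenge; only c is sent to party 1

# Party 1 produces its partial signature knowing only c:
s1 = (c * a1 * x1 + r1) % n
# Party 2 completes the signature:
s2 = (c * a2 * x2 + r2) % n
s = (s1 + s2) % n

# Verification s*G == R + c*X becomes, in the toy model, s == R + c*X (mod n):
assert s == (R + c * X) % n
print("aggregate signature verifies; party 1 saw only X1, R1 and c")
```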

Computing Blinding Factors in a PTLC and Trampoline World

This passage describes a mathematical demonstration of a method for computing blinding factors in a specific way. The goal is to achieve certain properties, such as ensuring that only one blinding factor is needed for each intermediate node and the receiver, and allowing Trampoline nodes to provide blinding factors to sub-routes without the intermediate nodes being aware they are on a Trampoline route. The demonstration begins by establishing that the ultimate receiver has a secret value "r" and shares a point "R" with the ultimate sender, where R = r * G (G represents a point on an elliptic curve). In the simplest case, where the ultimate sender and receiver are directly connected, the ultimate sender chooses a random scalar "e" as the error blinding factor and constructs an onion with "e" encrypted for the ultimate receiver. Along with the onion, the ultimate sender offers a Point Time-Locked Contract (PTLC) with the point e * G + R. The ultimate receiver can claim this PTLC by revealing e + r. Next, the scenario is slightly modified to include an intermediate node named Carol. In this case, the ultimate sender still chooses a random scalar "e" as the final error factor but also generates two scalars "c" and "d" such that c + d = e. This is achieved by selecting a random "d" and computing c = e - d. The onion then carries e encrypted for the ultimate receiver, with that ciphertext, together with d, encrypted for Carol. The PTLC is sent to Carol with the point c * G + R. Carol adds her per-hop blinding factor times G to the input point and sends a modified PTLC with the point c * G + R + d * G to the next hop. This results in (c + d) * G + R, which is equivalent to e * G + R, since e = c + d. The ultimate receiver cannot tell whether the PTLC came via Carol or over a direct source-to-destination route, because both cases result in the same point e * G + R. When the ultimate receiver reveals e + r, Carol can compute c + r by taking e + r - d: since c = e - d, e + r - d = e - d + r = c + r. Carol can then claim the incoming c * G + R with the scalar c + r. Carol only knows d, not c or r, so it cannot compute r. Lastly, the scenario is extended to include Carol as a Trampoline node, where the ultimate sender does not provide the detailed route from Carol to the next Trampoline hop. The ultimate sender learns R, selects a random e, and computes c and d such that c + d = e. The Trampoline-level onion includes e encrypted for the ultimate receiver, with that ciphertext, together with d and the next Trampoline hop, encrypted for Carol. The PTLC with the onion is sent to Carol with the point c * G + R. Carol decrypts the onion and obtains d. Carol then needs to search for a route from herself to the ultimate receiver. Let's assume the route found is Carol -> Alice -> ultimate receiver. Carol selects two scalars, a and b, such that a + b = d. It creates a new onion with the ciphertext copied from the ultimate sender and b encrypted for Alice. The PTLC with the point c * G + R + a * G is sent to Alice. Alice decrypts the onion and learns b. Alice forwards the PTLC with the point c * G + R + a * G + b * G to the next hop, the ultimate receiver. Since a + b = d, a * G + b * G = d * G. Also, c + d = e, so c * G + d * G = e * G. Therefore, c * G + R + a * G + b * G = c * G + a * G + b * G + R = c * G + d * G + R = (c + d) * G + R = e * G + R. The ultimate receiver receives the same e * G + R and cannot determine whether it was reached via a Trampoline, a non-Trampoline intermediate, or a direct route. Each intermediate node, both Trampoline and non-Trampoline, can claim its incoming PTLC, and only the ultimate sender knows c, allowing it to recover r.

In this explanation, we will break down a mathematical demonstration that involves the computation of blinding factors. The purpose of this computation is to achieve certain goals, such as minimizing the number of blinding factors that intermediate nodes need to know and allowing trampoline nodes to provide blinding factors to sub-routes without revealing that they are trampoline nodes. Let's start by understanding the basic setup. We have a sender (ultimate sender) and a receiver (ultimate receiver). The ultimate receiver has a secret value called 'r'. The ultimate receiver shares a point called 'R' with the ultimate sender, where 'R' is equal to 'r' multiplied by a specific point 'G'. In the simplest case, if the ultimate sender can directly communicate with the ultimate receiver, it chooses a random value (scalar) called 'e' as the blinding factor. It constructs an onion with 'e' encrypted to the ultimate receiver and sends it along with a payment (PTLC) that contains the point 'e * G + R'. The ultimate receiver can claim this payment by revealing 'e + r', since it learns 'e' from the onion and knows 'r' (the secret value). This is possible because the contract between them requires the ultimate receiver to reveal the secret scalar corresponding to the payment point in exchange for payment. Now, let's consider a scenario where an intermediate node, Carol, exists between the ultimate sender and the ultimate receiver. In this case, the ultimate sender still chooses a final blinding factor 'e' randomly. However, the sender also needs to generate two other scalars, 'c' and 'd,' such that 'c + d = e'. This can be achieved by selecting a random scalar 'd' and computing 'c = e - d'. The ultimate sender then encrypts the onion in the following way: - 'e' is encrypted to the ultimate receiver. - The above ciphertext, together with 'd', is encrypted to the intermediate node Carol. The ultimate sender sends the payment (PTLC) with the point 'c * G + R' to Carol. At this point, each intermediate non-Trampoline node (such as Carol) takes the input point, adds its per-hop blinding factor multiplied by 'G', and uses the result as the output point to the next hop. So Carol receives 'c * G + R'. Carol then adds 'd * G' (the blinding factor 'd' obtained from the onion) and sends a PTLC with the point 'c * G + R + d * G' to the next hop. Note that 'e = c + d', so the PTLC sent onward by Carol can be rewritten as '(c + d) * G + R'. This is equivalent to 'e * G + R', which is the same as the direct case where there is no intermediate node. Therefore, the ultimate receiver cannot distinguish whether the payment came directly from the sender or via Carol, since it sees 'e * G + R' in both cases. When the ultimate receiver releases 'e + r', Carol can compute 'c + r' by taking 'e + r - d'. Since 'c = e - d', 'e + r - d = e - d + r = c + r'. Carol can then claim the incoming 'c * G + R' with the scalar 'c + r'. It's important to note that Carol does not know 'c'; it only knows 'd' and, therefore, cannot compute 'r'. Now let's consider another scenario where Carol is a trampoline node, and the ultimate sender does not provide a detailed route from Carol to the next trampoline hop. In this case, the ultimate receiver is actually the final trampoline hop after Carol, but Carol is unaware of this fact and cannot learn it. The ultimate sender still learns 'R' and selects a random 'e' as the blinding factor. It generates 'c' and 'd' such that 'c + d = e', following the same technique as before. The ultimate sender then creates a trampoline-level onion with the following encrypted components: - 'e' encrypted to the ultimate receiver. - The above ciphertext, 'd', and the next trampoline hop (the node ID of the ultimate receiver) encrypted to Carol. The ultimate sender sends the payment (PTLC) with the above onion, containing the point 'c * G + R', to Carol. Carol decrypts the onion and obtains 'd'. Now, Carol needs to find a route from itself to the ultimate receiver, which, in this case, is the next trampoline hop. Suppose Carol finds a route Carol -> Alice -> ultimate receiver. Carol needs to make 'c * G + d * G + R' reach the ultimate receiver. It can do this by selecting two scalars, 'a' and 'b', such that 'a + b = d'. Carol knows 'd', so it randomly selects 'b' and computes 'a = d - b'. Carol creates the onion as follows: - It copies the ciphertext from the ultimate sender: 'e' encrypted to the ultimate receiver. - The above ciphertext and 'b' are encrypted to Alice. Carol sends the PTLC with the point 'c * G + R + a * G' to Alice. Alice decrypts the onion and learns 'b'. Then, Alice forwards the PTLC with the point 'c * G + R + a * G + b * G' to the next hop, the ultimate receiver. Now, 'a + b = d', so 'a * G + b * G = d * G'. Also, 'c + d = e', so 'c * G + d * G = e * G'. Therefore: c * G + R + a * G + b * G = c * G + a * G + b * G + R (commutative property) = c * G + (a + b) * G + R (distributive property) = c * G + d * G + R (d = a + b by construction) = (c + d) * G + R (distributive property) = e * G + R (e = c + d by construction) Thus, the ultimate receiver receives the same 'e * G + R' and cannot differentiate whether it was reached via a trampoline, a non-trampoline intermediate, or directly. Similarly, when claiming, every intermediate node, both trampoline and non-trampoline, has enough data to claim its incoming PTLC. And only the ultimate sender knows 'c', which allows it to recover 'r'. I hope this detailed explanation helps you understand the mathematical demonstration and its implications.
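
The scalar bookkeeping in this construction can be checked mechanically. In the sketch below a point kG is again modeled by the scalar k modulo a prime order, purely to confirm that every hop's claim works out; the variable names mirror the explanation above and the values are random placeholders, not a real PTLC implementation.

```python
# Toy check of the trampoline blinding-factor arithmetic above: a point k*G
# is modeled by the scalar k mod n, just to confirm every hop can claim.

import secrets

n = 2**255 - 19   # placeholder prime used as the toy group order
rand = lambda: secrets.randbelow(n)

r = rand()                      # ultimate receiver's secret; R = r*G is shared with the sender
e = rand()                      # sender's final error/blinding factor
d = rand(); c = (e - d) % n     # sender splits e = c + d; d goes to Carol
b = rand(); a = (d - b) % n     # Carol (the trampoline) splits d = a + b; b goes to Alice

# Points offered along the route sender -> Carol -> Alice -> receiver (modeled as scalars):
to_carol = (c + r) % n                 # c*G + R
to_alice = (to_carol + a) % n          # c*G + R + a*G
to_receiver = (to_alice + b) % n       # c*G + R + a*G + b*G

# The receiver sees e*G + R regardless of the route taken:
assert to_receiver == (e + r) % n

# Claims propagate backwards: the receiver reveals e + r ...
receiver_reveals = (e + r) % n
# ... Alice claims her incoming PTLC with (e + r) - b ...
alice_claims = (receiver_reveals - b) % n
assert alice_claims == (c + a + r) % n           # matches the point she was offered
# ... Carol claims with Alice's scalar minus a, i.e. (e + r) - d = c + r ...
carol_claims = (alice_claims - a) % n
assert carol_claims == (c + r) % n
# ... and the sender, who alone knows c, recovers the receiver's secret r.
assert (carol_claims - c) % n == r
print("every hop can claim its incoming PTLC; only the sender recovers r")
```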

Potential vulnerability in Lightning backends: BOLT-11 "payment hash" does not commit to payment!

In this message, the sender, Calle, is informing a list of recipients about an exploit that was discovered by their team at LNbits. The exploit allowed an attacker to create balances by taking advantage of a quirk in how invoices are handled internally. The team has already patched this issue in LNbits version 0.10.5 and recommends that everyone update as soon as possible. Calle wants to describe the attack in detail because they believe that similar exploits may be possible in other Lightning applications. They specifically mention that this information would be important for people working on custodial wallets, payment processors, account management software, and so on. The attack involves an attacker manipulating two payments, A and B, and tricking the backend into thinking that B is equal to A. Here are the steps involved: 1. The attacker creates an invoice A with an amount of 1000 sat (satoshi) in LNbits. 2. The attacker also creates a separate invoice B' with an amount of 1 sat on their own node. 3. The attacker then modifies B' by inserting the payment hash of payment A into it, effectively making B with manipulated payment details. 4. The attacker re-signs the invoice to make it look legitimate again and serializes it, creating the malicious invoice B. 5. Next, the attacker creates a new account in LNbits and pays invoice B. 6. The LNbits backend, when processing the payment, uses the payment hash of B to determine whether it's an internal payment or a payment via the Lightning Network. 7. Since the backend assumes that the payment hash of A commits to A, it finds A in its database. 8. The backend then settles the payment internally by crediting A and debiting B. 9. As a result, the attacker has effectively "created" 999 sat in their account by manipulating the payment process. To mitigate this exploit, the recommended approach is for backends to either use unique "checking ids" that they generate themselves for looking up internal payments or implement additional checks to ensure that the invoice details haven't been tampered with. For example, they could verify that the amount of A is equal to the amount of B. Calle also highlights two lessons learned from this attack. Firstly, it emphasizes the level of sophistication of attackers familiar with the Lightning Network. This particular exploit required a deep understanding of the underlying technology and the ability to create custom tools. Secondly, it underscores the importance of understanding that the "payment hash" in an invoice is actually just a "preimage" hash and doesn't commit to payment details such as amount or pubkey. Calle suggests calling it the "preimage hash" going forward to avoid any implicit assumptions. Overall, this message serves as a detailed explanation of the discovered exploit, the steps involved in carrying it out, the recommended mitigation, and the lessons learned from this experience.

Dear 15-year-old, Recently, a team called LNbits discovered an interesting issue in their software that could allow someone to exploit it. Let me explain it to you in detail. LNbits is a software that handles invoices related to Lightning Network, which is a technology used for quick and low-cost transactions of cryptocurrencies like Bitcoin. In this software, there was a loophole that allowed an attacker to create fake balances by taking advantage of how invoices are processed internally. The team at LNbits fixed this issue in their latest version, 0.10.5, and they are urging everyone to update their software as soon as possible if they haven't done so already. They are sharing the details of the attack because they believe that similar exploits might be possible in other Lightning Network applications. If you are involved in developing custodial wallets, payment processors, or account management software, this information is relevant to you. Now, let's talk about how the attack works. The attacker first creates an invoice, let's call it Invoice A, with an amount of 1000 sat (satoshis, the smallest unit of Bitcoin). Then, they create another invoice, Invoice B', with an amount of 1 sat on their own node. The attacker then modifies Invoice B' by inserting the payment hash of Invoice A into it. The payment hash is a unique identifier for each payment. By doing this, the attacker tricks the LNbits backend, the system that handles the invoices, into thinking that Invoice B is actually Invoice A. They do this by reshaping the invoice and making it look like a legitimate payment. Next, the attacker creates a new account in LNbits and pays Invoice B. The LNbits backend, which checks the payment hash to determine whether it's an internal payment or a payment through Lightning Network, finds Invoice A in its database. This is because the backend assumes that the payment hash commits to Invoice A. However, the critical part here is that payment hashes do not commit to payment details like the amount, but only to the preimage (a unique code linked to the payment). As a result, the LNbits backend settles the payment by crediting Invoice A and debiting Invoice B. By doing this, the attacker has effectively "created" 999 sat. To prevent such attacks, it is important for backends to use unique identifiers or additional checks when looking up internal payments. This ensures that the invoice details have not been tampered with. There are two lessons to learn from this incident. Firstly, it is crucial to understand that attackers who are knowledgeable about Lightning Network can be quite sophisticated. This attack required a deep understanding of technical concepts and custom tools to carry it out. Secondly, the term "payment hash" is misleading because it suggests that it commits to payment details like the amount of money or the public key. In reality, it only commits to the preimage. To mitigate confusion, the author suggests renaming it as the "preimage hash." I hope this explanation helps you understand the issue and the importance of keeping software secure and updated. Best, Calle

An Open Source Framework to Collect Lightning Network Metrics

In this message, the author is introducing a side project they have been working on. The project involves collecting data on the Lightning Network, which is a protocol built on top of blockchain technology for conducting faster and cheaper transactions. The main objective of the project is to monitor the evolution of the Lightning Network and gather relevant data. This collected data can then be used to evaluate different proposals or ideas related to the network. One specific proposal mentioned is "channel jamming," which refers to a scenario where a malicious user intentionally overloads a channel to disrupt transactions. The author highlights that collecting real data is important as it provides tangible insights into the network's behavior and allows for more informed evaluations. Simulations can only provide theoretical results, whereas real data offers a more accurate representation of the network's dynamics. Additionally, the author mentions that their project aims to support University Research that may not have access to real data. By providing this collected information, researchers can analyze and evaluate their own ideas without having to rely solely on simulations. The author provides links to further information about the project. [1] leads to a detailed document outlining the idea and methodology behind the data collection. [2] directs to an experimental explorer, a platform where users can explore and visualize the collected data. Finally, [3] is a public Graphql API (Application Programming Interface) that exposes the collected data for developers or researchers to access. In conclusion, the author hopes that their project will be useful to someone interested in studying, evaluating, or proposing solutions for the Lightning Network.

Hello! I'm happy to explain this to you in great detail. So, it seems like the person who wrote this message has a side project where they're trying to gather data on something called the lightning network. The lightning network is a system built on top of the Bitcoin blockchain that allows for faster and cheaper transactions. The goal of this project is to track how the lightning network evolves over time. They want to do this to evaluate different proposals or ideas for improving the network. They mention something called "channel jamming," which is one proposal they're interested in investigating. By collecting real data on the network, they can see how these proposals actually affect the network in practice, instead of just relying on simulation results. Additionally, they mention that they want to support university research that may not have access to this real data. By providing this data, they hope to enable more research and experimentation in the field. To achieve this, the person has come up with a way to define and collect information that can later be shared with others. They've provided links to a more detailed description of their idea, an experimental explorer where you can see the data they've collected, and a public Graphql API that allows others to access this data as well. The hope is that this project will be useful for someone who wants to study or improve the lightning network. The person who wrote this message goes by the name Vincent and they're excited about the potential impact of their project. I hope that helps! Let me know if you have any further questions.

option_simple_close for "unfailable" closing

The link provided is a pull request on the GitHub repository for the "bolts" project. The pull request is labeled as #1096. The description of the pull request indicates that it is a "can't fail!" close protocol, which was discussed at the NY Summit and on @Roasbeef's wishlist. The protocol aims to be as simple as possible, with the only complexity arising from allowing each side to indicate whether they want to omit their own output. The protocol is "taproot ready" in the sense that shutdown is always sent to trigger it, allowing nonces to be included without any persistence requirement. The pull request consists of three commits to the repository. The first commit introduces the new protocol, the second commit removes the requirement that shutdown not be sent multiple times (which was already nonsensical), and the third commit removes the older protocols. The pull request includes changes to the file "02-peer-protocol.md". The changes in this file introduce the new protocol, describe the closing negotiation process, and specify the requirements for each step of the negotiation. The file includes a section on "Closing Negotiation" that explains that once shutdown is complete and the channel is empty of Hashed Time Locked Contracts (HTLCs), each peer says what fee it will pay and the other side simply signs off on that transaction. The section includes details on the message types, the data they contain, and the requirements for each peer in the negotiation. The pull request also includes changes to the file "03-transactions.md". The changes in this file provide details on the closing transactions used in the negotiation process. The file describes the different variants of the closing transaction and outlines the requirements for each variant. Finally, the pull request includes changes to the file "09-features.md". The changes in this file add a new feature called "option_simple_close" which is related to the simplified closing negotiation described in the pull request. Overall, the pull request introduces a new closing protocol for the bolts project, provides specifications for the negotiation process, and makes changes to related files to support the new protocol.

This is a pull request on a GitHub repository called "bolts" that proposes a new protocol for closing a channel in the Lightning Network. The pull request is numbered 1096. The new protocol is called "can't fail!" close protocol and it was discussed at the NY Summit and on the wishlist of a person named Roasbeef. The goal of this protocol is to make the closing process as simple as possible, with the only complexity being the option for each side to indicate whether they want to omit their own output. The protocol is "taproot ready" in the sense that the shutdown message is always sent to trigger the closing process, and this message can contain the necessary data without requiring any persistence. The pull request is split into three commits for cleanliness and organization. The first commit introduces the new protocol, the second removes a requirement that no longer makes sense, and the third removes older protocols that are no longer needed. The pull request includes changes to the "02-peer-protocol.md" and "03-transactions.md" files. In the "02-peer-protocol.md" file, there are several sections that describe the closing process, including the closing initiation, closing negotiation, and normal operation. The "closing negotiation" section is further divided into two parts: "closing_complete" and "closing_sig". In the "closing_complete" part, each peer says what fee it is willing to pay, and the other side simply signs that transaction. The complexity arises from allowing each side to omit its own output if it is not economically viable. This process can be repeated every time a shutdown message is received, allowing for re-negotiation. The "closing_sig" part describes the requirements for this message, including the transaction data that needs to be included and the signatures that need to be provided. The requirements differ depending on whether the sender of the message is the closer or the closee. The receiver of the closing_sig message needs to validate the signatures and select one of the transactions to respond to. The "03-transactions.md" file includes the details of the closing transactions, including the classic closing transaction variant and the closing transaction variant used for closing_complete and closing_sig messages. Overall, this pull request proposes a new protocol for closing a channel in the Lightning Network that simplifies the process and allows for negotiation between the peers involved. It includes changes to the protocol specification files to describe the new protocol in detail.

LN Summit 2023 Notes

The text you provided is a detailed summary of a discussion about various topics related to the Lightning Network (LN) specification. Here is a breakdown of the key points discussed: 1. Package Relay: The discussion focused on the proposal for package relay, which involves grouping transactions into packages for more efficient processing. The current proposal is to use ancestor package relay, which allows for up to 24 ancestors for each child transaction. Other topics discussed include base package relay, commitment transactions, ephemeral anchors, HLTCs (hashed timelock contracts), and mempool policy changes. 2. Taproot: The discussion touched on the latest developments in the Taproot privacy and scalability improvement proposal. Specific points discussed include the changes related to anchors and revocation paths, as well as the implementation of nonces. 3. Gossip V1.5 vs V2: The participants discussed the differences between Gossip V1.5 and V2 in terms of script binding and amount binding. They debated whether to fully bind to the script or allow any taproot output to be a channel. The potential implications for pathfinding and capacity graphs were also discussed. 4. Reputation System for Channel Jamming: The participants explored the idea of using a reputation system to mitigate channel jamming attacks. The discussion revolved around resource bucketing, reputation scores, endorsement signals, and the impact on network quality of service. 5. Simplified Commitments: The participants discussed the concept of simplified commitments, which aims to simplify the LN state machine by introducing turn taking and a refined protocol for updates, commitments, and revocations. They also discussed the possibility of introducing NACK messages for rejecting updates and the benefits of a more streamlined state machine. 6. Meta Spec Process: The participants debated the best approach to managing the LN specification as it evolves over time. They discussed the pros and cons of a living document vs. versioning, the need for modularization, and the importance of maintaining backward compatibility. The role of extensions, cleaning up the specification, and improving communication among developers were also discussed. 7. Async Payments/Trampoline: The participants briefly discussed the use of blinded payments for trampoline payments, where nodes in the network help route payments to their destination. The concept of trampolines, radius-based gossip, and splitting multi-path payments over trampoline were mentioned. In summary, the discussion covered a range of topics related to the LN specification, including package relay, Taproot, gossip protocols, reputation systems, simplified commitments, meta spec processes, and trampoline payments. The participants provided detailed insights, shared ideas, and debated the pros and cons of various proposals and approaches.

During the annual specification meeting in New York City at the end of June, the attendees attempted to take transcript-style notes. These notes are available in a Google Docs document, which you can find at the link provided. Additionally, the full set of notes is included at the end of the email, although the formatting may be affected. The discussions at the summit covered several topics, including: 1. Package Relay: The current proposal for package relay is ancestor package relay, which allows one child to have up to 24 ancestors. Currently, only mempool transactions are scored by ancestry, so there isn't much point in other types of packages. Commitment transactions still require the minimum relay fee for base package relay. Batch bumping is not allowed to prevent pinning attacks. With one anchor, RBF can be packaged. V3 transactions will allow for dropping minimum relay fees and the restriction of one child paying for one parent transaction. 2. HLTCs (HTLCs with anchors): There are changes being made to HLTCs with the introduction of SIGHASH_ANYONECANPAY, which allows the counterparty to inflate the size of a transaction. The discussion revolved around how much the system should be changed. The proposed changes would allow for zero-fee commitment transactions and one ephemeral anchor per transaction. The use of ephemeral anchors would eliminate the need for delay and ensure eviction of the parent transaction. 3. Mempool Policy: The mempool can be organized into clusters of transactions, allowing for easier sorting and reasoning. The mining algorithm will pick one "vertical" within the mempool using the ancestor fee rate. The discussion explored the possibility of adding cluster mempool to enable package RBF. 4. Taproot: The main change in taproot is around anchors, which become more complicated with this update. The discussion covered various aspects of taproot, including revocation paths, NUMS points, and co-op close negotiation. 5. Gossip V1.5 vs. V2: The discussion revolved around the script binding and amount binding in gossip. The participants debated whether to bind to the script or allow any taproot output. The consensus was to allow any taproot output to be a channel and let people experiment. 6. Multi-Sig Channel Parties: The discussion focused on different ways to implement multi-sig for one channel party, such as using scripts, FROSTy, or recursive musig. 7. PTLCs (Point Time Locked Contracts): The conversation explored different approaches to PTLCs, such as regular musig or adaptor signatures. The potential for redundant overpayment (stuckless payments) and different options for achieving it were also discussed. 8. Hybrid Approach to Channel Jamming: The discussion centered around different approaches to mitigate jamming attacks in Lightning Network, including monetary solutions (unconditional fees), reputation-based solutions, and utilizing scarce resources (POW, stake, tokens). The participants discussed the need to combine multiple solutions for effective mitigation and the challenges associated with each approach. 9. Reputation for Channel Jamming: The participants explored the concept of reputation-based mitigation for jamming attacks. The discussion focused on resource bucketing, reputation scores, and the allocation of protected and general slots for HTLCs based on reputation and endorsement signals. 10. 
Simplified Commitments: The conversation revolved around simplifying the state machine for Lightning Network by implementing turn-taking and introducing the concepts of revoke and NACK. The participants explored the implications of these changes and the benefits of simplified commitments. 11. Meta Spec Process: The participants discussed the idea of moving away from a single "living document" to a versioning system for the Lightning Network specification. The proposal was to have extensions that can be added and removed as needed, allowing for modularity and easier maintenance. The participants also discussed the need for better communication channels and the importance of recommitting to the lightning-dev mailing list. 12. Async Payments/Trampoline: The final discussion focused on trampoline payments and the potential for async (asynchronous) payments. The participants explored the concept of light nodes, trampoline routing, and the ability to split MPP (multi-part payments) over trampoline. In summary, the discussions covered a wide range of topics related to Lightning Network and its specifications. The participants delved into technical details, proposed solutions, and debated the benefits and challenges of various approaches.

Blinded 2-party Musig2

This text describes an implementation of the 2-of-2 Schnorr Musig2 protocol for statechains in which the server (referred to as party 1) is "blinded": it holds a private key that is required to generate an aggregate signature on an aggregate public key, but it is not supposed to learn 1) the aggregate public key, 2) the aggregate signature, or 3) the message being signed (denoted "m"). The security of this model relies on party 1 being trusted to report the number of partial signatures it has generated for a particular key, rather than being trusted to enforce rules on what it signs, as in the unblinded case. The full set of signatures generated is verified on the client side. The implementation is based on the 2-of-2 Musig2 protocol, which operates as follows: 1. Party 1 generates a private key "x1" and the corresponding public key "X1 = x1G", where G is the generator point; point multiplication is denoted X = xG and point addition A = G + G. 2. Party 2 generates a private key "x2" and the corresponding public key "X2 = x2G". 3. The set of public keys is L = {X1, X2}. 4. The key aggregation coefficient is KeyAggCoef(L, X) = H(L, X), where H is a hash function. It is used to compute the shared (aggregate) public key X = a1X1 + a2X2, where a1 = KeyAggCoef(L, X1) and a2 = KeyAggCoef(L, X2). 5. To sign a message "m", party 1 generates a nonce "r1" and derives the point "R1 = r1G"; party 2 generates a nonce "r2" and derives "R2 = r2G". These points are aggregated into "R = R1 + R2". 6. Party 1 computes the challenge "c" as the hash of the concatenation of X, R, and m, i.e. c = H(X||R||m), and calculates s1 = c.a1.x1 + r1. 7. Party 2 computes the same challenge c = H(X||R||m) and calculates s2 = c.a2.x2 + r2. 8. The final signature is (R, s1 + s2). To blind party 1 so that it cannot learn the full public key or the final signature, the protocol is modified as follows: 1. Key aggregation is performed solely by party 2; party 1 only sends its own public key, X1, to party 2. 2. Nonce aggregation is performed solely by party 2; party 1 only sends its own nonce, R1, to party 2. 3. Party 2 computes the challenge "c" and sends it to party 1, which uses it to compute s1 = c.a1.x1 + r1. 4. Party 1 therefore never learns the final value of (R, s1 + s2) or the message "m". The aim is to keep party 1 blinded from this information so that it cannot determine the full public key, the final signature, or the signed message. The author asks for feedback on any potential issues with this approach.

In this implementation, a cryptographic protocol called 2-of-2 Schnorr Musig2 is used for statechains. Two parties are involved, party 1 and party 2. The goal is to create an aggregate signature on an aggregate public key while ensuring that party 1 remains fully "blinded" and does not learn certain information: the aggregate public key, the aggregate signature, and the message being signed. In this model of blinded statechains, security relies on party 1 being trusted to report the number of partial signatures it has generated for a specific key; the actual verification of the signatures is done on the client side. Here is how the 2-of-2 Musig2 protocol operates and how blinding is achieved: 1. Key generation: party 1 generates a private key (x1) and the corresponding public key (X1 = x1G), where G is the generator point; party 2 does the same, generating a private key (x2) and a public key (X2 = x2G); the set of public keys is L = {X1, X2}. 2. Key aggregation: the key aggregation coefficient is calculated from the set of public keys, KeyAggCoef(L, X) = H(L, X), where H is a hash function, and the shared (aggregate) public key is X = a1X1 + a2X2, where a1 = KeyAggCoef(L, X1) and a2 = KeyAggCoef(L, X2). 3. Message signing: to sign a message (m), party 1 generates a nonce (r1) and calculates R1 = r1G; party 2 generates a nonce (r2) and calculates R2 = r2G; the nonces are aggregated to obtain R = R1 + R2; each party computes the challenge c = H(X || R || m) and its partial signature, s1 = c.a1.x1 + r1 for party 1 and s2 = c.a2.x2 + r2 for party 2; the final signature is (R, s1 + s2). To blind party 1 so that it learns neither the full public key nor the final signature: 1) key aggregation is performed only by party 2, with party 1 simply sending its public key X1 to party 2; 2) nonce aggregation is performed only by party 2, with party 1 sending its nonce R1 to party 2; 3) party 2 computes the challenge c = H(X || R || m) and sends it to party 1, which then computes s1 = c.a1.x1 + r1. Note that party 1 cannot independently compute or verify this challenge: it never sees the message, the aggregate key, or the aggregate nonce, so it must simply sign the value it is given. As a result, party 1 never learns the final value of (R, s1 + s2) or the message (m). In terms of potential issues, the trustworthiness of the statechain server that reports the number of partial signatures must be carefully evaluated, and the full set of signatures should be verified on the client side to ensure their validity. Any comments or concerns regarding this implementation would be appreciated.
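To make the data flow concrete, here is a minimal, scalar-only sketch in Python of the blinded signing round described above. It is an illustration under stated assumptions, not the statechain implementation being discussed: secp256k1 points are replaced by plain integers modulo the group order (so "xG" is represented by x itself), which preserves the linear signature algebra but provides no security, and all names are ours.

    # Toy, scalar-only sketch of the blinded 2-of-2 Musig2 flow described above.
    # Assumption: instead of secp256k1 points we work with scalars mod n, so the
    # group equation (s1 + s2)G == R + c*X becomes
    #   s1 + s2 == (r1 + r2) + c*(a1*x1 + a2*x2)  (mod n).
    # This is NOT secure; it only shows who computes what and what party 1 sees.
    import hashlib
    import secrets

    n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 order

    def H(*parts) -> int:
        h = hashlib.sha256()
        for p in parts:
            h.update(str(p).encode())
        return int.from_bytes(h.digest(), "big") % n

    # --- Party 1 (blinded server): only ever handles X1, R1 and the challenge c ---
    x1 = secrets.randbelow(n); X1 = x1          # stands in for "X1 = x1*G"
    r1 = secrets.randbelow(n); R1 = r1          # stands in for "R1 = r1*G"

    # --- Party 2 (client): does all aggregation and knows the message m ---
    x2 = secrets.randbelow(n); X2 = x2
    r2 = secrets.randbelow(n); R2 = r2
    L = (X1, X2)
    a1, a2 = H(L, X1), H(L, X2)                 # KeyAggCoef(L, Xi)
    X = (a1 * X1 + a2 * X2) % n                 # aggregate key (party 1 never sees it)
    R = (R1 + R2) % n                           # aggregate nonce (party 1 never sees it)
    m = "statechain transfer tx"
    c = H(X, R, m)                              # challenge, sent to party 1

    # Party 1 returns its partial signature without learning X, R or m.
    s1 = (c * a1 * x1 + r1) % n
    # Party 2 completes the signature.
    s2 = (c * a2 * x2 + r2) % n
    s = (s1 + s2) % n

    # Verification "in the exponent": sG == R + cX becomes s == R + c*X (mod n) here.
    assert s == (R + c * X) % n
    print("blinded 2-of-2 signature verifies (toy model)")

The assertion at the end corresponds to checking sG == R + cX in the real protocol; note that party 1 only ever handles X1, R1, and the challenge it is handed, which is exactly the blinding property the post describes.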

Computing Blinding Factors in a PTLC and Trampoline World

This passage works through the mathematics of a method for computing blinding factors. The goal is to achieve certain properties, such as ensuring that only one blinding factor is needed for each intermediate node and the receiver, and allowing Trampoline nodes to provide blinding factors to sub-routes without the intermediate nodes being aware they are on a Trampoline route. The demonstration begins by establishing that the ultimate receiver has a secret value "r" and shares a point "R" with the ultimate sender, where R = r * G (G is a point on an elliptic curve). In the simplest case, where the ultimate sender and receiver are directly connected, the ultimate sender chooses a random scalar "e" as the blinding factor and constructs an onion with "e" encrypted to the ultimate receiver. Along with the onion, the ultimate sender offers a Point Time Locked Contract (PTLC) with the point e * G + R. The ultimate receiver can claim this PTLC by revealing e + r. Next, the scenario is modified to include an intermediate node named Carol. The ultimate sender still chooses a random scalar "e" as the final blinding factor but also generates two scalars "c" and "d" such that c + d = e, by selecting a random "d" and computing c = e - d. The onion then contains "e" encrypted to the ultimate receiver, and that ciphertext together with "d" encrypted to Carol. The PTLC is sent to Carol with the point c * G + R. Carol adds her per-hop blinding factor times G to the input point and sends a modified PTLC with the point c * G + R + d * G to the next hop. This results in (c + d) * G + R, which is equivalent to e * G + R, since e = c + d. The ultimate receiver cannot tell whether the PTLC came via Carol or over a direct source-to-destination route, because both cases result in the same point e * G + R. When the ultimate receiver reveals e + r, Carol can compute c + r by taking e + r - d: since c = e - d, e + r - d = e - d + r = c + r. Carol can then claim the incoming c * G + R with the scalar c + r. Carol only knows d, not c or r, so she cannot compute r. Lastly, the scenario is extended so that Carol is a Trampoline node and the ultimate sender does not provide the detailed route from Carol to the next Trampoline hop. The ultimate sender learns R, selects a random e, and computes c and d such that c + d = e. The Trampoline-level onion contains e encrypted to the ultimate receiver, and that ciphertext together with d and the next Trampoline hop encrypted to Carol. The PTLC with the onion is sent to Carol with the point c * G + R. Carol decrypts the onion, obtains d, and must then find a route from herself to the ultimate receiver. Suppose the route found is Carol -> Alice -> ultimate receiver. Carol selects two scalars, a and b, such that a + b = d. She creates a new onion with the ciphertext copied from the ultimate sender and b encrypted to Alice, and sends the PTLC with the point c * G + R + a * G to Alice. Alice decrypts the onion, learns b, and forwards the PTLC with the point c * G + R + a * G + b * G to the next hop, the ultimate receiver. Since a + b = d, a * G + b * G = d * G; and since c + d = e, c * G + d * G = e * G. Therefore, c * G + R + a * G + b * G = c * G + a * G + b * G + R = c * G + d * G + R = (c + d) * G + R = e * G + R. The ultimate receiver receives the same e * G + R and cannot determine whether it was reached via a Trampoline route, a non-Trampoline intermediate, or directly.
Each intermediate node, both Trampoline and non-Trampoline, can claim its incoming PTLC, and only the ultimate sender knows c, allowing the recovery of r.

In this explanation, we break down a mathematical demonstration involving the computation of blinding factors. The purpose is to achieve certain goals, such as minimizing the number of blinding factors that intermediate nodes need to know and allowing trampoline nodes to provide blinding factors to sub-routes without the nodes on those sub-routes learning that they are part of a trampoline route. Let's start with the basic setup. We have a sender (ultimate sender) and a receiver (ultimate receiver). The ultimate receiver has a secret value called 'r' and shares a point called 'R' with the ultimate sender, where 'R' is equal to 'r' multiplied by a specific point 'G'. In the simplest case, if the ultimate sender can pay the ultimate receiver directly, it chooses a random value (scalar) called 'e' as the blinding factor. It constructs an onion with 'e' encrypted to the ultimate receiver and sends it along with a payment (PTLC) that contains the point 'e * G + R'. The ultimate receiver can claim this payment by revealing 'e + r', since it learns 'e' from the onion and knows 'r' (the secret value). This works because the contract between them requires the ultimate receiver to provide the secret in exchange for payment. Now consider a scenario where an intermediate node, Carol, sits between the ultimate sender and the ultimate receiver. The ultimate sender still chooses a final blinding factor 'e' at random, but it also generates two other scalars, 'c' and 'd', such that 'c + d = e'. This can be done by selecting a random scalar 'd' and computing 'c = e - d'. The ultimate sender then builds the onion as follows: 'e' is encrypted to the ultimate receiver, and that ciphertext, together with 'd', is encrypted to the intermediate node Carol. The ultimate sender sends the payment (PTLC) with the point 'c * G + R' to Carol. Each intermediate non-trampoline node (such as Carol) takes the input point, adds its per-hop blinding factor multiplied by 'G', and uses the result as the output point to the next hop. So Carol receives 'c * G + R', adds 'd * G' (the 'd' value obtained from the onion), and sends a PTLC with the point 'c * G + R + d * G' to the next hop. Note that 'e = c + d', so the PTLC Carol sends toward the ultimate receiver can be rearranged as '(c + d) * G + R'. This is equivalent to 'e * G + R', exactly as in the direct case with no intermediate node. Therefore, the ultimate receiver cannot distinguish whether the payment came through an intermediate node such as Carol or directly from the sender, since it sees 'e * G + R' in both cases. When the ultimate receiver releases 'e + r', Carol can compute 'c + r' by taking 'e + r - d': since 'c = e - d', 'e + r - d = e - d + r = c + r'. Carol can then claim the incoming 'c * G + R' with the scalar 'c + r'. It's important to note that Carol does not know 'c'; it only knows 'd' and therefore cannot compute 'r'. Now consider another scenario in which Carol is a trampoline node and the ultimate sender does not provide a detailed route from Carol to the next trampoline hop. In this case, the next trampoline hop after Carol happens to be the ultimate receiver, but Carol is unaware of this fact and cannot learn it. The ultimate sender still learns 'R', selects a random 'e' as the blinding factor, and generates 'c' and 'd' such that 'c + d = e', using the same technique as before.
The ultimate sender then creates a trampoline-level onion with the following encrypted components: 'e' encrypted to the ultimate receiver, and that ciphertext, 'd', and the next trampoline hop (the node ID of the ultimate receiver) encrypted to Carol. The ultimate sender sends the payment (PTLC) with this onion, carrying the point 'c * G + R', to Carol. Carol decrypts the onion and obtains 'd'. Now Carol needs to find a route from itself to the ultimate receiver, which in this case is the next trampoline hop. Suppose Carol finds the route Carol -> Alice -> ultimate receiver. Carol needs to make 'c * G + d * G + R' reach the ultimate receiver. It can do this by selecting two scalars, 'a' and 'b', such that 'a + b = d': Carol knows 'd', so it randomly selects 'b' and computes 'a = d - b'. Carol creates the onion as follows: it copies the ciphertext from the ultimate sender ('e' encrypted to the ultimate receiver) and encrypts that ciphertext together with 'b' to Alice. Carol sends the PTLC with the point 'c * G + R + a * G' to Alice. Alice decrypts the onion, learns 'b', and forwards the PTLC with the point 'c * G + R + a * G + b * G' to the next hop, the ultimate receiver. Now, 'a + b = d', so 'a * G + b * G = d * G'; and 'c + d = e', so 'c * G + d * G = e * G'. Therefore: c * G + R + a * G + b * G = c * G + a * G + b * G + R (point addition is commutative) = c * G + (a + b) * G + R (scalar multiplication distributes over point addition) = c * G + d * G + R (d = a + b by construction) = (c + d) * G + R (distributivity again) = e * G + R (e = c + d by construction). Thus, the ultimate receiver receives the same 'e * G + R' and cannot differentiate whether it was reached via a trampoline, a non-trampoline intermediate, or directly. Similarly, when claiming, every intermediate node, both trampoline and non-trampoline, has enough data to claim its incoming PTLC, and only the ultimate sender knows 'c', which allows it to recover 'r'.
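The scalar algebra above is easy to check mechanically. The following Python snippet verifies the identities for the direct, single-intermediate, and trampoline cases; it assumes, purely for illustration, that 'x * G' can be modelled by the scalar x modulo the secp256k1 group order, so it demonstrates only the arithmetic, not the onion construction or any privacy property.

    # Toy check of the blinding-factor algebra above, treating "x*G" as the
    # scalar x mod n. This verifies the arithmetic identities only.
    import secrets

    n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

    def rand() -> int:
        return secrets.randbelow(n)

    r = rand()                     # receiver's secret; R = r*G is shared with the sender
    e = rand()                     # sender's final blinding factor

    # Non-trampoline hop Carol: sender splits e into c + d.
    d = rand(); c = (e - d) % n

    # Trampoline case: Carol further splits her d into a + b for the route via Alice.
    b = rand(); a = (d - b) % n

    # Point offered to the receiver after every hop adds its factor "times G":
    direct    = (e + r) % n                  # e*G + R
    via_carol = (c + d + r) % n              # c*G + R + d*G
    via_tramp = (c + a + b + r) % n          # c*G + R + a*G + b*G

    assert direct == via_carol == via_tramp  # receiver always sees e*G + R

    # Claiming: receiver reveals e + r; each hop peels off its own factor.
    claim_recv  = (e + r) % n
    claim_alice = (claim_recv - b) % n       # scalar matching c*G + R + a*G
    claim_carol = (claim_alice - a) % n      # equals c + r, matching c*G + R
    assert claim_carol == (c + r) % n
    # Only the sender, who knows e (and c), can recover r from the revealed scalar.
    assert (claim_recv - e) % n == r
    print("blinding-factor algebra checks out")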

Potential vulnerability in Lightning backends: BOLT-11 "payment hash" does not commit to payment!

In this message, the sender, Calle, is informing a list of recipients about an exploit that was discovered by their team at LNbits. The exploit allowed an attacker to create balances by taking advantage of a quirk in how invoices are handled internally. The team has already patched this issue in LNbits version 0.10.5 and recommends that everyone update as soon as possible. Calle wants to describe the attack in detail because they believe that similar exploits may be possible in other Lightning applications. They specifically mention that this information would be important for people working on custodial wallets, payment processors, account management software, and so on. The attack involves an attacker manipulating two payments, A and B, and tricking the backend into thinking that B is equal to A. Here are the steps involved: 1. The attacker creates an invoice A with an amount of 1000 sat (satoshi) in LNbits. 2. The attacker also creates a separate invoice B' with an amount of 1 sat on their own node. 3. The attacker then modifies B' by inserting the payment hash of payment A into it, effectively making B with manipulated payment details. 4. The attacker re-signs the invoice to make it look legitimate again and serializes it, creating the malicious invoice B. 5. Next, the attacker creates a new account in LNbits and pays invoice B. 6. The LNbits backend, when processing the payment, uses the payment hash of B to determine whether it's an internal payment or a payment via the Lightning Network. 7. Since the backend assumes that the payment hash of A commits to A, it finds A in its database. 8. The backend then settles the payment internally by crediting A and debiting B. 9. As a result, the attacker has effectively "created" 999 sat in their account by manipulating the payment process. To mitigate this exploit, the recommended approach is for backends to either use unique "checking ids" that they generate themselves for looking up internal payments or implement additional checks to ensure that the invoice details haven't been tampered with. For example, they could verify that the amount of A is equal to the amount of B. Calle also highlights two lessons learned from this attack. Firstly, it emphasizes the level of sophistication of attackers familiar with the Lightning Network. This particular exploit required a deep understanding of the underlying technology and the ability to create custom tools. Secondly, it underscores the importance of understanding that the "payment hash" in an invoice is actually just a "preimage" hash and doesn't commit to payment details such as amount or pubkey. Calle suggests calling it the "preimage hash" going forward to avoid any implicit assumptions. Overall, this message serves as a detailed explanation of the discovered exploit, the steps involved in carrying it out, the recommended mitigation, and the lessons learned from this experience.

Recently, the team behind LNbits discovered an interesting issue in their software that could allow someone to exploit it. LNbits is software that handles invoices and accounts for the Lightning Network, a technology used for quick and low-cost Bitcoin transactions. The software contained a loophole that allowed an attacker to create fake balances by taking advantage of how invoices are processed internally. The team fixed this issue in their latest version, 0.10.5, and urges everyone to update as soon as possible if they haven't done so already. They are sharing the details of the attack because they believe that similar exploits might be possible in other Lightning Network applications; if you are involved in developing custodial wallets, payment processors, or account management software, this information is relevant to you. Now, how does the attack work? The attacker first creates an invoice, call it invoice A, with an amount of 1000 sat (satoshis, the smallest unit of Bitcoin) in LNbits. Then, they create another invoice, invoice B', with an amount of 1 sat on their own node. The attacker modifies B' by inserting the payment hash of invoice A into it; the payment hash is the identifier used to look up each payment. They then re-sign and re-serialize the modified invoice so that it looks legitimate again, producing the malicious invoice B. Next, the attacker creates a new account in LNbits and pays invoice B. The LNbits backend, which checks the payment hash to determine whether the payment is internal or goes out over the Lightning Network, finds invoice A in its database, because it assumes that the payment hash commits to invoice A. The critical point is that the payment hash does not commit to payment details like the amount; it commits only to the preimage (the secret that is revealed when the payment completes). As a result, the LNbits backend settles the payment internally, crediting the wallet behind invoice A with 1000 sat while debiting only the 1 sat of invoice B. The attacker has effectively "created" 999 sat. To prevent such attacks, backends should use unique identifiers that they generate themselves, or perform additional checks (for example, that the amounts match) when looking up internal payments, to ensure that the invoice being paid has not been tampered with. There are two lessons here. First, attackers who know the Lightning Network well can be quite sophisticated: this attack required a deep understanding of the underlying technology and custom tooling to carry out. Second, the term "payment hash" is misleading, because it suggests a commitment to payment details such as the amount or a public key when in reality it commits only to the preimage. To reduce that confusion, the author suggests calling it the "preimage hash" instead.
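To see why the hash-only lookup is dangerous, here is a deliberately simplified, hypothetical backend in Python. None of this is LNbits code: the classes, fields, and checks are invented, and real BOLT-11 decoding and signature verification are omitted. It only shows that settling internal payments keyed on the payment hash alone lets a forged 1 sat invoice settle against a stored 1000 sat invoice, while an amount comparison (or a backend-generated checking id) rejects it.

    # Minimal sketch (hypothetical, in-memory) of the lookup flaw described above
    # and one possible mitigation. Nothing here is LNbits' actual code.
    from dataclasses import dataclass

    @dataclass
    class Invoice:
        payment_hash: str
        amount_sat: int
        account: str

    class NaiveBackend:
        """Settles 'internal' payments purely by payment hash -- vulnerable."""
        def __init__(self):
            self.invoices = {}            # payment_hash -> Invoice
            self.balances = {}            # account -> sats

        def create_invoice(self, account, amount_sat, payment_hash):
            self.invoices[payment_hash] = Invoice(payment_hash, amount_sat, account)

        def pay_bolt11(self, payer, decoded_amount_sat, decoded_payment_hash):
            internal = self.invoices.get(decoded_payment_hash)
            if internal:
                # BUG: trusts the hash alone; credits the *stored* invoice amount
                # even though the invoice actually being paid says 1 sat.
                self.balances[internal.account] = (
                    self.balances.get(internal.account, 0) + internal.amount_sat)
                self.balances[payer] = self.balances.get(payer, 0) - decoded_amount_sat
                return "settled internally"
            return "pay over Lightning"

    class SaferBackend(NaiveBackend):
        def pay_bolt11(self, payer, decoded_amount_sat, decoded_payment_hash):
            internal = self.invoices.get(decoded_payment_hash)
            # Mitigation: reject when the presented invoice does not match what we
            # originally issued (an amount check here; a backend-generated
            # checking id would work as well).
            if internal and internal.amount_sat != decoded_amount_sat:
                raise ValueError("invoice does not match stored internal invoice")
            return super().pay_bolt11(payer, decoded_amount_sat, decoded_payment_hash)

    # The attack against the naive backend: invoice A for 1000 sat, forged B for
    # 1 sat reusing A's payment hash.
    naive = NaiveBackend()
    naive.create_invoice("attacker_wallet_A", 1000, "hashA")
    print(naive.pay_bolt11("attacker_wallet_B", 1, "hashA"))   # credits 1000, debits 1
    print(naive.balances)                                      # net +999 sat for the attacker

    safer = SaferBackend()
    safer.create_invoice("attacker_wallet_A", 1000, "hashA")
    try:
        safer.pay_bolt11("attacker_wallet_B", 1, "hashA")
    except ValueError as err:
        print("rejected:", err)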

An Open Source Framework to Collect Lightning Network Metrics

In this message, the author introduces a side project they have been working on: collecting data about the Lightning Network, a second-layer protocol built on top of Bitcoin for faster and cheaper payments. The main objective of the project is to monitor the evolution of the Lightning Network and gather relevant data, which can then be used to evaluate different proposals or ideas related to the network. One example given is channel jamming, an attack in which a malicious user deliberately ties up a channel's resources to disrupt payments; evaluating the proposed mitigations benefits from real measurements. The author highlights that collecting real data is important because it provides tangible insight into the network's behavior and allows for more informed evaluations: simulations can only provide theoretical results, whereas real data offers a more accurate representation of the network's dynamics. The project also aims to support university research that may not have access to real data; by providing this collected information, researchers can analyze and evaluate their own ideas without relying solely on simulations. The author provides links to further information: [1] leads to a document describing the idea and methodology behind the data collection, [2] points to an experimental explorer where users can explore and visualize the collected data, and [3] is a public GraphQL API that exposes the collected data to developers and researchers. The author hopes the project will be useful to anyone interested in studying, evaluating, or proposing solutions for the Lightning Network.

The author of this message has a side project for gathering data on the Lightning Network, a system built on top of the Bitcoin blockchain that allows for faster and cheaper transactions. The goal of the project is to track how the Lightning Network evolves over time, in order to evaluate different proposals or ideas for improving the network. They mention channel jamming as one problem they are interested in investigating. By collecting real data on the network, they can see how proposals actually affect the network in practice instead of relying only on simulation results. They also want to support university research that may not have access to this kind of real data; by providing it, they hope to enable more research and experimentation in the field. To achieve this, the author has defined a way to collect information that can later be shared with others. They provide links to a more detailed description of the idea, an experimental explorer where you can see the data they have collected, and a public GraphQL API that allows others to access the data as well. The hope is that this project will be useful for anyone who wants to study or improve the Lightning Network. The author signs the message as Vincent and is excited about the project's potential impact.
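Since the project exposes its collected data over a public GraphQL API, querying it looks like any other GraphQL request. The sketch below is purely illustrative: the endpoint URL and the query fields are placeholders, because the message summarized above does not quote the actual schema; the linked API documentation has the real field names.

    # Hypothetical example of querying a GraphQL endpoint for collected Lightning
    # metrics. The URL and the fields inside the query are placeholders, not the
    # project's real schema.
    import json
    import urllib.request

    ENDPOINT = "https://example.org/graphql"  # placeholder endpoint
    QUERY = """
    {
      nodes {          # placeholder field names
        alias
        lastUpdate
      }
    }
    """

    def run_query(endpoint: str, query: str) -> dict:
        payload = json.dumps({"query": query}).encode()
        req = urllib.request.Request(
            endpoint, data=payload, headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    if __name__ == "__main__":
        try:
            print(run_query(ENDPOINT, QUERY))
        except Exception as err:  # the placeholder endpoint will not return data
            print("request failed (placeholder endpoint):", err)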

option_simple_close for "unfailable" closing

The link provided is a pull request on the GitHub repository for the "bolts" project. The pull request is labeled as #1096. The description of the pull request indicates that it is a "can't fail!" close protocol, which was discussed at the NY Summit and on @Roasbeef's wishlist. The protocol aims to be as simple as possible, with the only complexity arising from allowing each side to indicate whether they want to omit their own output. The protocol is "taproot ready" in the sense that shutdown is always sent to trigger it, allowing nonces to be included without any persistence requirement. The pull request consists of three commits to the repository. The first commit introduces the new protocol, the second commit removes the requirement that shutdown not be sent multiple times (which was already nonsensical), and the third commit removes the older protocols. The pull request includes changes to the file "02-peer-protocol.md". The changes in this file introduce the new protocol, describe the closing negotiation process, and specify the requirements for each step of the negotiation. The file includes a section on "Closing Negotiation" that explains that once shutdown is complete and the channel is empty of Hashed Time Locked Contracts (HTLCs), each peer says what fee it will pay and the other side simply signs off on that transaction. The section includes details on the message types, the data they contain, and the requirements for each peer in the negotiation. The pull request also includes changes to the file "03-transactions.md". The changes in this file provide details on the closing transactions used in the negotiation process. The file describes the different variants of the closing transaction and outlines the requirements for each variant. Finally, the pull request includes changes to the file "09-features.md". The changes in this file add a new feature called "option_simple_close" which is related to the simplified closing negotiation described in the pull request. Overall, the pull request introduces a new closing protocol for the bolts project, provides specifications for the negotiation process, and makes changes to related files to support the new protocol.

This is a pull request on a GitHub repository called "bolts" that proposes a new protocol for closing a channel in the Lightning Network. The pull request is numbered 1096. The new protocol is called "can't fail!" close protocol and it was discussed at the NY Summit and on the wishlist of a person named Roasbeef. The goal of this protocol is to make the closing process as simple as possible, with the only complexity being the option for each side to indicate whether they want to omit their own output. The protocol is "taproot ready" in the sense that the shutdown message is always sent to trigger the closing process, and this message can contain the necessary data without requiring any persistence. The pull request is split into three commits for cleanliness and organization. The first commit introduces the new protocol, the second removes a requirement that no longer makes sense, and the third removes older protocols that are no longer needed. The pull request includes changes to the "02-peer-protocol.md" and "03-transactions.md" files. In the "02-peer-protocol.md" file, there are several sections that describe the closing process, including the closing initiation, closing negotiation, and normal operation. The "closing negotiation" section is further divided into two parts: "closing_complete" and "closing_sig". In the "closing_complete" part, each peer says what fee it is willing to pay, and the other side simply signs that transaction. The complexity arises from allowing each side to omit its own output if it is not economically viable. This process can be repeated every time a shutdown message is received, allowing for re-negotiation. The "closing_sig" part describes the requirements for this message, including the transaction data that needs to be included and the signatures that need to be provided. The requirements differ depending on whether the sender of the message is the closer or the closee. The receiver of the closing_sig message needs to validate the signatures and select one of the transactions to respond to. The "03-transactions.md" file includes the details of the closing transactions, including the classic closing transaction variant and the closing transaction variant used for closing_complete and closing_sig messages. Overall, this pull request proposes a new protocol for closing a channel in the Lightning Network that simplifies the process and allows for negotiation between the peers involved. It includes changes to the protocol specification files to describe the new protocol in detail.
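To illustrate the property the negotiation is built around, that each peer pays whatever fee it chooses out of its own output and may drop that output entirely if it is uneconomical, here is a small Python sketch. The function, fields, and the 546 sat dust figure are ours for illustration and do not mirror the BOLT wire messages or the PR's exact rules.

    # Sketch of the fee logic in the simplified close described above: the peer
    # that sends closing_complete pays the fee from its own output, and either
    # side's output may be omitted if it is not economical. Illustrative only.
    from dataclasses import dataclass
    from typing import Optional

    DUST_LIMIT = 546  # sat, illustrative threshold

    @dataclass
    class ClosingTx:
        closer_output_sat: Optional[int]
        closee_output_sat: Optional[int]
        fee_sat: int

    def build_closing_tx(closer_balance, closee_balance, closer_fee,
                         closer_omits_own_output=False):
        """The closer deducts its chosen fee from its own balance only."""
        if closer_fee > closer_balance:
            raise ValueError("closer cannot pay more fee than its own balance")
        closer_out = closer_balance - closer_fee
        closer_output = None if (closer_omits_own_output or closer_out < DUST_LIMIT) else closer_out
        closee_output = None if closee_balance < DUST_LIMIT else closee_balance
        # If the closer's output is omitted, its remainder also goes to fees.
        fee = closer_fee + (closer_out if closer_output is None else 0)
        return ClosingTx(closer_output, closee_output, fee)

    tx = build_closing_tx(closer_balance=80_000, closee_balance=20_000, closer_fee=1_200)
    print(tx)  # ClosingTx(closer_output_sat=78800, closee_output_sat=20000, fee_sat=1200)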

LN Summit 2023 Notes

The text is a detailed summary of a discussion about various topics related to the Lightning Network (LN) specification. Here is a breakdown of the key points: 1. Package Relay: The discussion focused on the proposal for package relay, which involves grouping transactions into packages for more efficient processing. The current proposal is ancestor package relay, which allows up to 24 ancestors for each child transaction. Other topics discussed include base package relay, commitment transactions, ephemeral anchors, HTLCs (hashed timelock contracts), and mempool policy changes. 2. Taproot: The discussion touched on the latest developments in the proposal for taproot-based channels. Specific points discussed include the changes related to anchors and revocation paths, as well as the handling of nonces. 3. Gossip V1.5 vs V2: The participants discussed the differences between Gossip V1.5 and V2 in terms of script binding and amount binding. They debated whether to fully bind to the script or allow any taproot output to be a channel, and considered the implications for pathfinding and capacity graphs. 4. Reputation System for Channel Jamming: The participants explored the idea of using a reputation system to mitigate channel jamming attacks. The discussion revolved around resource bucketing, reputation scores, endorsement signals, and the impact on network quality of service. 5. Simplified Commitments: The participants discussed the concept of simplified commitments, which aims to simplify the LN state machine by introducing turn-taking and a refined protocol for updates, commitments, and revocations. They also discussed the possibility of introducing NACK messages for rejecting updates and the benefits of a more streamlined state machine. 6. Meta Spec Process: The participants debated the best approach to managing the LN specification as it evolves over time, weighing a living document against versioning, the need for modularization, and the importance of maintaining backward compatibility. The role of extensions, cleaning up the specification, and improving communication among developers were also discussed. 7. Async Payments/Trampoline: The participants briefly discussed the use of blinded payments for trampoline payments, where nodes in the network help route payments to their destination. The concepts of trampolines, radius-based gossip, and splitting multi-path payments over trampoline were mentioned. In summary, the discussion covered a range of topics related to the LN specification, including package relay, taproot, gossip protocols, reputation systems, simplified commitments, the meta spec process, and trampoline payments. The participants provided detailed insights, shared ideas, and debated the pros and cons of various proposals and approaches.

During the annual specification meeting in New York City at the end of June, the attendees attempted to take transcript-style notes. These notes are available in a Google Docs document at the link provided; the full set of notes is also included at the end of the email, although the formatting may be affected. The discussions at the summit covered several topics, including: 1. Package Relay: The current proposal for package relay is ancestor package relay, which allows one child to have up to 24 ancestors. Currently, only mempool transactions are scored by ancestry, so there isn't much point in other types of packages. Commitment transactions still require the minimum relay fee for base package relay. Batch bumping is not allowed, to prevent pinning attacks. With a single anchor, package RBF becomes possible. V3 transactions would allow the minimum relay fee requirement to be dropped, with the topology restricted to one child paying for one parent. 2. HTLC transactions and anchors: Changes are being discussed because, with anchors, HTLC transactions are signed using SIGHASH_ANYONECANPAY, which allows the counterparty to inflate the size of a transaction. The discussion revolved around how much the system should be changed. The proposed changes would allow for zero-fee commitment transactions and one ephemeral anchor per transaction. The use of ephemeral anchors would eliminate the need for a delay and ensure eviction of the parent transaction. 3. Mempool Policy: The mempool can be organized into clusters of transactions, allowing for easier sorting and reasoning. The mining algorithm will pick one "vertical" within the mempool using the ancestor fee rate. The discussion explored the possibility of adding cluster mempool to enable package RBF. 4. Taproot: The main change in taproot channels is around anchors, which become more complicated with this update. The discussion covered various aspects, including revocation paths, NUMS points, and co-op close negotiation. 5. Gossip V1.5 vs. V2: The discussion revolved around script binding and amount binding in gossip. The participants debated whether to bind to the script or allow any taproot output. The consensus was to allow any taproot output to be a channel and let people experiment. 6. Multi-Sig Channel Parties: The discussion focused on different ways to implement multi-sig for one channel party, such as using scripts, FROSTy, or recursive musig. 7. PTLCs (Point Time Locked Contracts): The conversation explored different approaches to PTLCs, such as regular musig or adaptor signatures. The potential for redundant overpayment (stuckless payments) and different options for achieving it were also discussed. 8. Hybrid Approach to Channel Jamming: The discussion centered around different approaches to mitigating jamming attacks in the Lightning Network, including monetary solutions (unconditional fees), reputation-based solutions, and the use of scarce resources (POW, stake, tokens). The participants discussed the need to combine multiple solutions for effective mitigation and the challenges associated with each approach. 9. Reputation for Channel Jamming: The participants explored the concept of reputation-based mitigation for jamming attacks. The discussion focused on resource bucketing, reputation scores, and the allocation of protected and general slots for HTLCs based on reputation and endorsement signals (a toy sketch of this bucketing idea follows this summary). 10.
Simplified Commitments: The conversation revolved around simplifying the state machine for Lightning Network by implementing turn-taking and introducing the concepts of revoke and NACK. The participants explored the implications of these changes and the benefits of simplified commitments. 11. Meta Spec Process: The participants discussed the idea of moving away from a single "living document" to a versioning system for the Lightning Network specification. The proposal was to have extensions that can be added and removed as needed, allowing for modularity and easier maintenance. The participants also discussed the need for better communication channels and the importance of recommitting to the lightning-dev mailing list. 12. Async Payments/Trampoline: The final discussion focused on trampoline payments and the potential for async (asynchronous) payments. The participants explored the concept of light nodes, trampoline routing, and the ability to split MPP (multi-part payments) over trampoline. In summary, the discussions covered a wide range of topics related to Lightning Network and its specifications. The participants delved into technical details, proposed solutions, and debated the benefits and challenges of various approaches.
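As a toy illustration of the resource-bucketing idea from the jamming discussion (points 8-10 above), the sketch below admits an incoming HTLC to a small protected bucket only if the upstream peer endorsed it and has a good local reputation, and otherwise lets it compete for the general bucket. The thresholds, fields, and policy are invented for illustration and are not taken from the summit notes or any spec.

    # Toy model of reputation-based bucketing for incoming HTLCs (illustrative
    # only; thresholds and fields are invented, not from the spec discussions).
    from dataclasses import dataclass

    @dataclass
    class ChannelSlots:
        protected_limit: int = 10      # slots reserved for endorsed, reputable peers
        general_limit: int = 40        # slots anyone can use
        protected_used: int = 0
        general_used: int = 0

    @dataclass
    class PeerReputation:
        successful_forwards: int = 0
        failed_or_held: int = 0

        def is_good(self) -> bool:
            total = self.successful_forwards + self.failed_or_held
            return total > 0 and self.successful_forwards / total >= 0.8

    def admit_htlc(slots: ChannelSlots, peer: PeerReputation, endorsed: bool) -> str:
        # Endorsed HTLCs from peers with good reputation may use protected slots.
        if endorsed and peer.is_good() and slots.protected_used < slots.protected_limit:
            slots.protected_used += 1
            return "accepted (protected bucket)"
        if slots.general_used < slots.general_limit:
            slots.general_used += 1
            return "accepted (general bucket)"
        return "rejected (no slots left)"

    slots = ChannelSlots()
    good_peer = PeerReputation(successful_forwards=95, failed_or_held=5)
    print(admit_htlc(slots, good_peer, endorsed=True))           # protected bucket
    print(admit_htlc(slots, PeerReputation(), endorsed=False))   # general bucket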

On the experiment of the Bitcoin Contracting Primitives WG and marking this community process "up for grabs"

This message is a detailed update on the progress and future plans for the development of Bitcoin consensus changes. The author begins by referencing past discussions about covenant proposals and the need for a new community process to specify covenants. They mention their goals, which include building a consistent framework to evaluate covenant proposals, finding common ground between proposals, expanding the consensus changes development process beyond Bitcoin Core, and maintaining a high-quality technical archive. The author acknowledges that other initiatives, such as the bitcoin-inquisition fork and the archiving of covenant proposals under the Optech umbrella, have also been undertaken. They mention the Bitcoin Contracting Primitives Working Group, which has held monthly meetings and documented various primitives and protocols related to Bitcoin contracting. The author explains that they launched the effort as an experiment, devoting 20% of their time to it. However, they have come to the realization that their time and energy would be better allocated to working on Lightning Network robustness. They express their belief that the scalability and smooth operation of the Lightning Network are more critical for Bitcoin's survival than extended covenant capabilities. The author encourages others who are working on covenant changes proposals to continue their work, noting that Taproot and Schnorr soft forks have proven to be beneficial for self-custody solutions. They also mention their own plans to focus on R&D works related to CoinPool, particularly in addressing interactivity issues and designing advanced Bitcoin contracts. The author concludes by acknowledging that they may have overpromised with the launch of the new process for Bitcoin consensus changes development. They emphasize the importance of having technical historians and archivists to assess, collect, and preserve consensus changes proposals, as well as QA devs to ensure proper testing before deployment. They invite others to continue the maintenance of the Bitcoin Contracting Primitives Working Group or collaborate with other organizations. Overall, this message provides detailed information about the progress, challenges, and future plans related to Bitcoin consensus changes.

In this message, the author is discussing their involvement in a community process related to Bitcoin development. They introduced the idea of a new process to specify covenants, which are conditions or agreements that can be added to Bitcoin transactions. The author explains that they will not be actively pursuing this process further, as they have decided to focus more on other Bitcoin projects. They mention that the goals of this process were to build a consistent framework for evaluating covenant proposals, identify commonalities between proposals, open up the consensus development process beyond Bitcoin Core, and maintain a high-quality technical archive. The author also mentions other initiatives that have been undertaken during the same period, such as a fork of Bitcoin Core called bitcoin-inquisition and the archiving of covenant proposals under the Optech umbrella. The author provides some details about the Bitcoin Contracting Primitives Working Group, which is a group of individuals who have been working on documenting and archiving various Bitcoin contract primitives and protocols. They mention that monthly meetings have been held, and there have been in-depth discussions on topics related to contract primitives and protocols. The author explains that they started this effort as an experiment and initially committed to dedicating 20% of their time to it. However, they have realized that there is still a lot of work to be done in other areas, such as improving the Lightning Network, which is a second-layer scaling solution for Bitcoin. They believe that working on scaling Bitcoin and improving its robustness is more critical for the survival of Bitcoin than focusing on advanced contract capabilities. The author acknowledges that they may have overpromised with the new community process but believes that enough progress has been made to demonstrate its value. They express that what Bitcoin needs is not necessarily more technical proposals but rather a focus on assessing, collecting, and preserving consensus change proposals and ensuring thorough testing before deployment. They invite others to continue the work of the Bitcoin Contracting Primitives Working Group if they are willing to commit resources and effort to it. Overall, the author is reflecting on their involvement in the community process related to Bitcoin covenant proposals and discussing their decision to shift their focus to other Bitcoin projects. They believe that there is still much work to be done in scaling and improving Bitcoin's robustness and express the need for dedicated individuals to assess and preserve consensus change proposals.

Blinded 2-party Musig2

This text describes the implementation of a version of the 2-of-2 Schnorr Musig2 protocol for statechains. Statechains involve a server (referred to as party 1) that is "blinded," meaning it holds a private key necessary to generate an aggregate signature on an aggregate public key, but it does not have access to certain information. The information that party 1 is not supposed to learn includes: 1) the aggregate public key, 2) the aggregate signature, and 3) the message being signed (denoted as "m" in the text). The security of this implementation relies on party 1 being trusted to report the number of partial signatures it has generated for a particular key, rather than being trusted to enforce rules on what it has signed in the unblinded case. The full set of signatures generated is verified on the client side. The implementation is based on the 2-of-2 musig2 protocol, which operates as follows: 1. Party 1 generates a private key, denoted as "x1," and the corresponding public key, denoted as "X1 = x1G". G is the generator point, and point multiplication is denoted as X = xG, while point addition is denoted as A = G + G. 2. Party 2 generates a private key, denoted as "x2," and the corresponding public key, denoted as "X2 = x2G". 3. The set of public keys is denoted as L = {X1, X2}. 4. The key aggregation coefficient is given by KeyAggCoef(L, X) = H(L, X), where H is a hash function. This coefficient is used to calculate the shared (aggregate) public key, denoted as X = a1X1 + a2X2, where a1 = KeyAggCoef(L, X1) and a2 = KeyAggCoef(L, X2). 5. To sign a message "m," party 1 generates a nonce "r1" and derives a point "R1 = r1G". Party 2 generates a nonce "r2" and derives a point "R2 = r2G". These points are aggregated into "R = R1 + R2". 6. Party 1 computes the challenge "c" as the hash of the concatenation of X, R, and m, i.e., c = H(X||R||m), and calculates s1 = c.a1.x1 + r1. 7. Party 2 also computes the challenge "c" using the same formula, c = H(X||R||m), and calculates s2 = c.a2.x2 + r2. 8. The final signature is represented as (R, s1 + s2). In the case of blinding party 1, the steps to prevent it from learning the full public key or final signature are as follows: 1. Key aggregation is performed solely by party 2. Party 1 only needs to send its own public key, X1, to party 2. 2. Nonce aggregation is performed solely by party 2. Party 1 only needs to send its own nonce, R1, to party 2. 3. Party 2 computes the challenge "c" using the same formula and sends it to party 1 in order to compute s1 = c.a1.x1 + r1. 4. Party 1 never learns the final value of (R, s1 + s2) or the message "m". This implementation aims to provide confidentiality for party 1 by blinding it from certain information, thereby ensuring that party 1 cannot determine the full public key, final signature, or the signed message. Any feedback or potential issues with this approach would be appreciated. The attached HTML part of the message was likely removed due to its content being irrelevant or not accessible through the text format.

In this implementation, we are using a cryptographic protocol called 2-of-2 Schnorr Musig2 for statechains. In this protocol, there are two parties involved - party 1 and party 2. The goal is to create an aggregate signature on an aggregate public key, while ensuring that party 1 remains fully "blinded" and does not learn certain information. Blinding refers to the process of preventing party 1 from gaining knowledge of the aggregate public key, the aggregate signature, and the message being signed. In this model of blinded statechains, the security relies on party 1 being trusted to report the number of partial signatures it has generated for a specific key. The actual verification of the signatures is done on the client side. Now, let's break down how the 2-of-2 musig2 protocol operates and how blinding is achieved: 1. Key Generation: - Party 1 generates a private key (x1) and a corresponding public key (X1 = x1G), where G is the generator point. - Party 2 does the same, generating a private key (x2) and a public key (X2 = x2G). - The set of public keys is represented by L = {X1, X2}. 2. Key Aggregation: - The key aggregation coefficient is calculated using the set of public keys (L) and the aggregate public key (X). - KeyAggCoef(L, X) = H(L, X), where H is a hash function. - The shared (aggregate) public key is calculated as X = a1X1 + a2X2, where a1 = KeyAggCoef(L, X1) and a2 = KeyAggCoef(L, X2). 3. Message Signing: - To sign a message (m), party 1 generates a nonce (r1) and calculates R1 = r1G. - Party 2 also generates a nonce (r2) and calculates R2 = r2G. - These nonces are aggregated to obtain R = R1 + R2. - Party 1 computes the 'challenge' (c) as c = H(X || R || m) and calculates s1 = c.a1.x1 + r1. - Party 2 also computes the 'challenge' (c) as c = H(X || R || m) and calculates s2 = c.a2.x2 + r2. - The final signature is (R, s1 + s2). Now, let's focus on the blinding aspect for party 1: To prevent party 1 from learning the full public key or the final signature, the following steps are taken: 1) Key aggregation is performed only by party 2. Party 1 simply sends its public key X1 to party 2. 2) Nonce aggregation is performed only by party 2. Party 1 sends its generated nonce R1 to party 2. 3) Party 2 computes the 'challenge' (c) as c = H(X || R || m) and sends it back to party 1. Party 1 then computes s1 = c.a1.x1 + r1. - Party 1 does not need to independently compute and verify the challenge (c) since it is already blinded from the message. By following these steps, party 1 never learns the final value of (R, s1 + s2) or the message (m). In terms of potential issues, it is important to carefully evaluate the trustworthiness of the statechain server that reports the number of partial signatures. Additionally, the full set of signatures should be verified on the client side to ensure their validity. Any comments or concerns regarding this implementation would be highly appreciated.

Computing Blinding Factors in a PTLC and Trampoline World

This passage describes a mathematical demonstration of a method for computing blinding factors in a specific way. The goal is to achieve certain properties, such as ensuring that only one blinding factor is needed for each intermediate node and the receiver, and allowing Trampoline nodes to provide blinding factors to sub-routes without the intermediate nodes being aware they are on a Trampoline route. The demonstration begins by establishing that the ultimate receiver has a secret value "r" and shares a point "R" with the ultimate sender, where R = r * G (G represents a point on an elliptic curve). In the simplest case, where the ultimate sender and receiver are directly connected, the ultimate sender chooses a random scalar "e" as the error blinding factor and constructs an onion with "e" encrypted for the ultimate receiver. Along with the onion, the ultimate sender offers a Payment-Triggered Lightning Contract (PTLC) with the point e * G + R. The ultimate receiver can claim this PTLC by revealing e + r. Next, the scenario is slightly modified to include an intermediate node named Carol. In this case, the ultimate sender still chooses a random scalar "e" as the final error factor but also generates two scalars "c" and "d" such that c + d = e. This is achieved by selecting a random "d" and computing c = e - d. The onion is then encrypted with e for the ultimate receiver and the ciphertext, along with d encrypted for Carol. The PTLC is sent to Carol with the point c * G + R. Carol adds her per-hop blinding factor times G to the input point and sends a modified PTLC with the point c * G + R + d * G to the next hop. This results in (c + d) * G + R, which is equivalent to e * G + R, as e = c + d. The ultimate receiver cannot differentiate whether the PTLC came from Carol or a direct source-to-destination route because both cases result in the same point e * G + R. When the ultimate receiver reveals e + r, Carol can compute c + r by taking e + r - d. Since c = e - d, e + r - d = e - d + r = c + r. Carol can then claim the incoming c * G + R with the scalar c + r. Carol only knows d, not c or r, so it cannot compute r. Lastly, the scenario is extended to include Carol as a Trampoline node, and the ultimate sender does not provide the detailed route from Carol to the next Trampoline hop. The ultimate sender learns R, selects a random e, and computes c and d such that c + d = e. The Trampoline-level onion includes e encrypted for the ultimate receiver and the ciphertext, along with d and the next Trampoline hop encrypted for Carol. The PTLC with the onion is sent to Carol with the point c * G + R. Carol decrypts the onion and obtains d. Carol then needs to search for a route from herself to the ultimate receiver. Let's assume the route found is Carol -> Alice -> ultimate receiver. Carol selects two scalars, a and b, such that a + b = d. It creates a new onion with the ciphertext copied from the ultimate sender and b encrypted for Alice. The PTLC with the point c * G + R + a * G is sent to Alice. Alice decrypts the onion and learns b. Alice forwards the PTLC with the point c * G + R + a * G + b * G to the next hop, the ultimate receiver. Since a + b = d, a * G + b * G = d * G. Also, c + d = e, so c * G + d * G = e * G. Therefore, c * G + R + a * G + b * G = c * G + a * G + b * G + R = c * G + d * G + R = (c + d) * G + R = e * G + R. The ultimate receiver receives the same e * G + R and cannot determine whether it was reached via a Trampoline, non-Trampoline intermediate, or direct route. 
Each intermediate node, both Trampoline and non-Trampoline, can claim its incoming PTLC, and only the ultimate sender knows c, allowing the recovery of r.

In this explanation, we will break down a mathematical demonstration that involves the computation of blinding factors. The purpose of this computation is to achieve certain goals, such as minimizing the number of blinding factors that intermediate nodes need to know and allowing trampoline nodes to provide blinding factors to sub-routes without revealing that they are trampoline nodes. Let's start by understanding the basic setup. We have a sender (ultimate sender) and a receiver (ultimate receiver). The ultimate receiver has a secret value called 'r'. The ultimate receiver shares a point called 'R' with the ultimate sender, where 'R' is equal to 'r' multiplied by a specific point 'G'. In the simplest case, if the ultimate sender can directly communicate with the ultimate receiver, it chooses a random value (scalar) called 'e' as the blinding factor. It constructs an onion with 'e' encryptable by the ultimate receiver and sends it along with a payment (PTLC) that contains the point 'e * G + R'. The ultimate receiver can claim this payment by revealing 'e + r' since it learns 'e' from the onion and knows 'r' (the secret value). This is possible because the contract between them requires the ultimate receiver to provide 'r' in exchange for payment. Now, let's consider a scenario where an intermediate node, Carol, exists between the ultimate sender and the ultimate receiver. In this case, the ultimate sender still needs to choose a final blinding factor 'e' randomly. However, the sender also needs to generate two other scalars, 'c' and 'd,' such that 'c + d = e'. This can be achieved by selecting a random scalar 'd' and computing 'c = e - d'. The ultimate sender then encrypts the onion in the following way: - 'e' is encrypted to the ultimate receiver. - The above ciphertext, along with 'd' encrypted, is sent to intermediate node Carol. The ultimate sender sends the payment (PTLC) with the point 'c * G + R' to Carol. At this point, each intermediate non-Trampoline node (such as Carol) takes the input point, adds its per-hop blinding factor multiplied by 'G', and uses the result as the output point to the next hop. So Carol receives 'c * G + R'. Carol then adds 'd * G' (the 'd' error obtained from the onion) and sends a PTLC with the point 'c * G + R + d * G' to the next hop. Note that 'e = c + d', so the PTLC sent by Carol to the ultimate sender can be rearranged as '(c + d) * G + R'. This is equivalent to 'e * G + R', which is the same as the direct case where there is no intermediate node. Therefore, the ultimate receiver cannot distinguish whether it received from Carol or from a further node since it sees 'e * G + R' in both cases. When the ultimate receiver releases 'e + r', Carol can compute 'c + r' by taking 'e + r - d'. Since 'c = e - d', 'e + r - d = e - d + r = c + r'. Carol can then claim the incoming 'c * G + R' with the scalar 'c + r'. It's important to note that Carol does not know 'c'; it only knows 'd' and, therefore, cannot compute 'r'. Now let's consider another scenario where Carol is a trampoline node, and the ultimate sender does not provide a detailed route from Carol to the next trampoline hop. In this case, the ultimate receiver is actually the final trampoline hop after Carol, but Carol is unaware of this fact and cannot learn it. The ultimate sender still learns 'R' but selects a random 'e' as the blinding factor. It generates 'c' and 'd' such that 'c + d = e', following the same technique as before. 
The ultimate sender then creates a trampoline-level onion with the following encrypted components: - 'e' encrypted to the ultimate receiver. - The above ciphertext, 'd', and the next trampoline hop (the node ID of the ultimate receiver) encrypted to Carol. The ultimate sender sends the payment (PTLC) with the above onion, containing the point 'c * G + R', to Carol. Carol decrypts the onion and obtains 'd'. Now, Carol needs to find a route from itself to the ultimate receiver, which, in this case, is the next trampoline hop. Suppose Carol finds a route Carol -> Alice -> ultimate receiver. Carol needs to make 'c * G + d * G + R' reach the ultimate receiver. It can do this by selecting two scalars, 'a' and 'b', such that 'a + b = d'. Carol knows 'd', so it randomly selects 'b' and computes 'a = d - b'. Carol creates the onion as follows: - It copies the ciphertext from the ultimate sender: 'e' encrypted to the ultimate receiver. - The above ciphertext and 'b' encrypted to Alice. Carol sends the PTLC with the point 'c * G + R + a * G' to Alice. Alice decrypts the onion and learns 'b'. Then, Alice forwards the PTLC with the point 'c * G + R + a * G + b * G' to the next hop, the ultimate receiver. Now, 'a + b = d', so 'a * G + b * G = d * G'. Also, 'c + d = e', so 'c * G + d * G = e * G'. Therefore: c * G + R + a * G + b * G = c * G + a * G + b * G + R (commutativity of point addition) = c * G + (a + b) * G + R (scalar multiplication distributes over addition) = c * G + d * G + R (d = a + b by construction) = (c + d) * G + R (scalar multiplication distributes over addition) = e * G + R (e = c + d by construction). Thus, the ultimate receiver receives the same 'e * G + R' and cannot differentiate whether it was reached via a trampoline, a non-trampoline intermediate, or directly. Similarly, when claiming, every intermediate node, both trampoline and non-trampoline, has enough data to claim its incoming PTLC. And only the ultimate sender knows 'c', which allows it to recover 'r'. I hope this detailed explanation helps you understand the mathematical demonstration and its implications.
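
To make the scalar bookkeeping above easier to follow, here is a small, self-contained sketch in Python. It only tracks the discrete logs of the PTLC points (since x * G = y * G exactly when x = y modulo the group order, checking the scalar identities is enough to see why the receiver always observes e * G + R). The group order used is secp256k1's; the variable names and the toy Carol/Alice route are illustrative and not taken from the original post.

```python
# A minimal sketch of the blinding-factor arithmetic described above, working only
# with the discrete logs (scalars) of the PTLC points.  Everything here is
# illustrative; it is not an implementation of the Lightning onion or PTLC format.

import secrets

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 order

def rand_scalar() -> int:
    return secrets.randbelow(N - 1) + 1

# Ultimate receiver's secret r; the "point" R = r * G is tracked as the scalar r.
r = rand_scalar()

# --- Sender -> Carol -> receiver (Carol is a plain intermediate hop) ---
e = rand_scalar()            # final blinding factor, chosen by the sender
d = rand_scalar()            # Carol's per-hop factor, delivered in the onion
c = (e - d) % N              # so that c + d == e

incoming_to_carol = (c + r) % N                       # discrete log of c*G + R
outgoing_to_receiver = (incoming_to_carol + d) % N    # Carol adds d*G

assert outgoing_to_receiver == (e + r) % N   # receiver sees e*G + R, as in the direct case

receiver_reveals = (e + r) % N               # receiver claims with e + r
carol_claims = (receiver_reveals - d) % N    # Carol derives c + r without learning r
assert carol_claims == (c + r) % N
sender_recovers_r = (carol_claims - c) % N   # only the sender knows c
assert sender_recovers_r == r

# --- Carol as a trampoline hop, delegating the sub-route Carol -> Alice -> receiver ---
b = rand_scalar()
a = (d - b) % N                              # Carol splits its budget d into a + b
to_alice = (c + r + a) % N                   # c*G + R + a*G
to_receiver = (to_alice + b) % N             # Alice adds b*G
assert to_receiver == (e + r) % N            # still indistinguishable from the direct case
```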

Potential vulnerability in Lightning backends: BOLT-11 "payment hash" does not commit to payment!

In this message, the sender, Calle, is informing a list of recipients about an exploit that was discovered by their team at LNbits. The exploit allowed an attacker to create balances by taking advantage of a quirk in how invoices are handled internally. The team has already patched this issue in LNbits version 0.10.5 and recommends that everyone update as soon as possible. Calle wants to describe the attack in detail because they believe that similar exploits may be possible in other Lightning applications. They specifically mention that this information would be important for people working on custodial wallets, payment processors, account management software, and so on. The attack involves an attacker manipulating two payments, A and B, and tricking the backend into thinking that B is equal to A. Here are the steps involved: 1. The attacker creates an invoice A with an amount of 1000 sat (satoshi) in LNbits. 2. The attacker also creates a separate invoice B' with an amount of 1 sat on their own node. 3. The attacker then modifies B' by inserting the payment hash of payment A into it, effectively making B with manipulated payment details. 4. The attacker re-signs the invoice to make it look legitimate again and serializes it, creating the malicious invoice B. 5. Next, the attacker creates a new account in LNbits and pays invoice B. 6. The LNbits backend, when processing the payment, uses the payment hash of B to determine whether it's an internal payment or a payment via the Lightning Network. 7. Since the backend assumes that the payment hash of A commits to A, it finds A in its database. 8. The backend then settles the payment internally by crediting A and debiting B. 9. As a result, the attacker has effectively "created" 999 sat in their account by manipulating the payment process. To mitigate this exploit, the recommended approach is for backends to either use unique "checking ids" that they generate themselves for looking up internal payments or implement additional checks to ensure that the invoice details haven't been tampered with. For example, they could verify that the amount of A is equal to the amount of B. Calle also highlights two lessons learned from this attack. Firstly, it emphasizes the level of sophistication of attackers familiar with the Lightning Network. This particular exploit required a deep understanding of the underlying technology and the ability to create custom tools. Secondly, it underscores the importance of understanding that the "payment hash" in an invoice is actually just a "preimage" hash and doesn't commit to payment details such as amount or pubkey. Calle suggests calling it the "preimage hash" going forward to avoid any implicit assumptions. Overall, this message serves as a detailed explanation of the discovered exploit, the steps involved in carrying it out, the recommended mitigation, and the lessons learned from this experience.

Dear 15-year-old, Recently, a team called LNbits discovered an interesting issue in their software that could allow someone to exploit it. Let me explain it to you in detail. LNbits is a software that handles invoices related to the Lightning Network, which is a technology used for quick and low-cost transactions of cryptocurrencies like Bitcoin. In this software, there was a loophole that allowed an attacker to create fake balances by taking advantage of how invoices are processed internally. The team at LNbits fixed this issue in their latest version, 0.10.5, and they are urging everyone to update their software as soon as possible if they haven't done so already. They are sharing the details of the attack because they believe that similar exploits might be possible in other Lightning Network applications. If you are involved in developing custodial wallets, payment processors, or account management software, this information is relevant to you. Now, let's talk about how the attack works. The attacker first creates an invoice in LNbits, let's call it Invoice A, with an amount of 1000 sat (satoshis, the smallest unit of Bitcoin). Then, they create another invoice, Invoice B', with an amount of 1 sat on their own node. The attacker then modifies Invoice B' by inserting the payment hash of Invoice A into it. The payment hash is a unique identifier for each payment. The attacker re-signs the modified invoice so that it looks legitimate again and serializes it, producing the malicious Invoice B. By doing this, the attacker tricks the LNbits backend, the system that handles the invoices, into treating Invoice B as if it were Invoice A. Next, the attacker creates a new account in LNbits and pays Invoice B. The LNbits backend, which checks the payment hash to determine whether it's an internal payment or a payment through the Lightning Network, finds Invoice A in its database. This is because the backend assumes that the payment hash commits to Invoice A. However, the critical part here is that payment hashes do not commit to payment details like the amount, but only to the preimage (a unique code linked to the payment). As a result, the LNbits backend settles the payment by crediting Invoice A and debiting Invoice B. By doing this, the attacker has effectively "created" 999 sat. To prevent such attacks, it is important for backends to use unique identifiers they generate themselves, or additional checks, when looking up internal payments. This ensures that the invoice details have not been tampered with. There are two lessons to learn from this incident. Firstly, it is crucial to understand that attackers who are knowledgeable about the Lightning Network can be quite sophisticated. This attack required a deep understanding of technical concepts and custom tools to carry it out. Secondly, the term "payment hash" is misleading because it suggests that it commits to payment details like the amount of money or the public key. In reality, it only commits to the preimage. To avoid confusion, the author suggests renaming it the "preimage hash." I hope this explanation helps you understand the issue and the importance of keeping software secure and updated. Best, Calle
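
To illustrate the class of bug, here is a deliberately simplified sketch; it is not LNbits' actual code, and the Invoice type, the checking_id field, and the function names are invented for this example. It only contrasts keying internal settlement on the attacker-influenced BOLT-11 payment hash with re-checking fields the backend recorded itself when it issued the invoice.

```python
# Toy sketch of the lookup logic the post warns about.  The fix shown is one of the
# mitigations suggested above (re-checking the stored invoice's amount); a backend
# could equally key internal settlement on its own generated checking_id.

from dataclasses import dataclass

@dataclass
class Invoice:
    checking_id: str      # generated by the backend when the invoice was created
    payment_hash: str     # copied from the BOLT-11 invoice (attacker-influenced)
    amount_sat: int

db: dict[str, Invoice] = {}   # indexed by checking_id

def settle_internal_vulnerable(paid_bolt11_hash: str, paid_amount_sat: int) -> Invoice | None:
    # Vulnerable pattern: trust the payment hash alone to identify "the" invoice.
    for inv in db.values():
        if inv.payment_hash == paid_bolt11_hash:
            return inv            # may credit 1000 sat against a 1 sat payment
    return None

def settle_internal_checked(paid_bolt11_hash: str, paid_amount_sat: int) -> Invoice | None:
    # Mitigation sketch: the hash is only a hint; the amount (and ideally the full
    # invoice) must match what the backend originally issued.
    for inv in db.values():
        if inv.payment_hash == paid_bolt11_hash and inv.amount_sat == paid_amount_sat:
            return inv
    return None
```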

An Open Source Framework to Collect Lightning Network Metrics

In this message, the author is introducing a side project they have been working on. The project involves collecting data on the Lightning Network, which is a protocol built on top of blockchain technology for conducting faster and cheaper transactions. The main objective of the project is to monitor the evolution of the Lightning Network and gather relevant data. This collected data can then be used to evaluate different proposals or ideas related to the network. One specific proposal mentioned is "channel jamming," which refers to a scenario where a malicious user intentionally overloads a channel to disrupt transactions. The author highlights that collecting real data is important as it provides tangible insights into the network's behavior and allows for more informed evaluations. Simulations can only provide theoretical results, whereas real data offers a more accurate representation of the network's dynamics. Additionally, the author mentions that their project aims to support University Research that may not have access to real data. By providing this collected information, researchers can analyze and evaluate their own ideas without having to rely solely on simulations. The author provides links to further information about the project. [1] leads to a detailed document outlining the idea and methodology behind the data collection. [2] directs to an experimental explorer, a platform where users can explore and visualize the collected data. Finally, [3] is a public Graphql API (Application Programming Interface) that exposes the collected data for developers or researchers to access. In conclusion, the author hopes that their project will be useful to someone interested in studying, evaluating, or proposing solutions for the Lightning Network.

Hello! I'm happy to explain this to you in great detail. So, it seems like the person who wrote this message has a side project where they're trying to gather data on something called the lightning network. The lightning network is a system built on top of the Bitcoin blockchain that allows for faster and cheaper transactions. The goal of this project is to track how the lightning network evolves over time. They want to do this to evaluate different proposals or ideas for improving the network. They mention something called "channel jamming," which is one proposal they're interested in investigating. By collecting real data on the network, they can see how these proposals actually affect the network in practice, instead of just relying on simulation results. Additionally, they mention that they want to support university research that may not have access to this real data. By providing this data, they hope to enable more research and experimentation in the field. To achieve this, the person has come up with a way to define and collect information that can later be shared with others. They've provided links to a more detailed description of their idea, an experimental explorer where you can see the data they've collected, and a public Graphql API that allows others to access this data as well. The hope is that this project will be useful for someone who wants to study or improve the lightning network. The person who wrote this message goes by the name Vincent and they're excited about the potential impact of their project. I hope that helps! Let me know if you have any further questions.
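
The post does not include the API's schema, so the endpoint URL, query, and field names below are purely hypothetical placeholders; the snippet only shows the generic pattern a researcher might use to pull data from a public GraphQL endpoint for offline analysis.

```python
# Generic GraphQL fetch using only the standard library.  The endpoint and the
# query fields are placeholders, not the project's real schema.

import json
import urllib.request

ENDPOINT = "https://example.org/graphql"   # placeholder URL

QUERY = """
{
  nodes(first: 10) {
    alias
    channels { capacitySat }
  }
}
"""

def fetch(endpoint: str, query: str) -> dict:
    payload = json.dumps({"query": query}).encode()
    req = urllib.request.Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# result = fetch(ENDPOINT, QUERY)  # run against a real endpoint to get data
```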

option_simple_close for "unfailable" closing

The link provided is a pull request on the GitHub repository for the "bolts" project. The pull request is labeled as #1096. The description of the pull request indicates that it is a "can't fail!" close protocol, which was discussed at the NY Summit and on @Roasbeef's wishlist. The protocol aims to be as simple as possible, with the only complexity arising from allowing each side to indicate whether they want to omit their own output. The protocol is "taproot ready" in the sense that shutdown is always sent to trigger it, allowing nonces to be included without any persistence requirement. The pull request consists of three commits to the repository. The first commit introduces the new protocol, the second commit removes the requirement that shutdown not be sent multiple times (which was already nonsensical), and the third commit removes the older protocols. The pull request includes changes to the file "02-peer-protocol.md". The changes in this file introduce the new protocol, describe the closing negotiation process, and specify the requirements for each step of the negotiation. The file includes a section on "Closing Negotiation" that explains that once shutdown is complete and the channel is empty of Hashed Time Locked Contracts (HTLCs), each peer says what fee it will pay and the other side simply signs off on that transaction. The section includes details on the message types, the data they contain, and the requirements for each peer in the negotiation. The pull request also includes changes to the file "03-transactions.md". The changes in this file provide details on the closing transactions used in the negotiation process. The file describes the different variants of the closing transaction and outlines the requirements for each variant. Finally, the pull request includes changes to the file "09-features.md". The changes in this file add a new feature called "option_simple_close" which is related to the simplified closing negotiation described in the pull request. Overall, the pull request introduces a new closing protocol for the bolts project, provides specifications for the negotiation process, and makes changes to related files to support the new protocol.

This is a pull request on a GitHub repository called "bolts" that proposes a new protocol for closing a channel in the Lightning Network. The pull request is numbered 1096. The new protocol is called "can't fail!" close protocol and it was discussed at the NY Summit and on the wishlist of a person named Roasbeef. The goal of this protocol is to make the closing process as simple as possible, with the only complexity being the option for each side to indicate whether they want to omit their own output. The protocol is "taproot ready" in the sense that the shutdown message is always sent to trigger the closing process, and this message can contain the necessary data without requiring any persistence. The pull request is split into three commits for cleanliness and organization. The first commit introduces the new protocol, the second removes a requirement that no longer makes sense, and the third removes older protocols that are no longer needed. The pull request includes changes to the "02-peer-protocol.md" and "03-transactions.md" files. In the "02-peer-protocol.md" file, there are several sections that describe the closing process, including the closing initiation, closing negotiation, and normal operation. The "closing negotiation" section is further divided into two parts: "closing_complete" and "closing_sig". In the "closing_complete" part, each peer says what fee it is willing to pay, and the other side simply signs that transaction. The complexity arises from allowing each side to omit its own output if it is not economically viable. This process can be repeated every time a shutdown message is received, allowing for re-negotiation. The "closing_sig" part describes the requirements for this message, including the transaction data that needs to be included and the signatures that need to be provided. The requirements differ depending on whether the sender of the message is the closer or the closee. The receiver of the closing_sig message needs to validate the signatures and select one of the transactions to respond to. The "03-transactions.md" file includes the details of the closing transactions, including the classic closing transaction variant and the closing transaction variant used for closing_complete and closing_sig messages. Overall, this pull request proposes a new protocol for closing a channel in the Lightning Network that simplifies the process and allows for negotiation between the peers involved. It includes changes to the protocol specification files to describe the new protocol in detail.
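
As a rough illustration of the flow described in these summaries, here is a sketch of the message shapes involved. The field names (fee_sat, omit_own_output, and so on) are stand-ins rather than the exact fields and TLVs defined in bolts #1096, and the dust threshold is illustrative.

```python
# Shape-only sketch of the "can't fail" close: whoever wants to close sends
# shutdown, then closing_complete with the fee it will pay from its own output,
# and the peer answers with closing_sig over the transaction it accepted.

from dataclasses import dataclass

DUST_LIMIT_SAT = 546  # illustrative threshold for an uneconomical output

@dataclass
class Shutdown:
    channel_id: str
    scriptpubkey: bytes        # where the sender wants its channel funds paid

@dataclass
class ClosingComplete:
    channel_id: str
    fee_sat: int               # fee the closer pays, taken from its own output
    omit_own_output: bool      # the one piece of complexity: drop an uneconomical output
    signature: bytes           # closer's signature on the closing tx it proposes

@dataclass
class ClosingSig:
    channel_id: str
    signature: bytes           # closee's signature on the transaction it accepted

def propose_close(channel_id: str, own_balance_sat: int, fee_sat: int) -> ClosingComplete:
    """Build the closer's offer; it can be re-run whenever shutdown is re-sent,
    which is what allows re-negotiation at a different feerate later."""
    remaining = own_balance_sat - fee_sat
    return ClosingComplete(
        channel_id=channel_id,
        fee_sat=fee_sat,
        omit_own_output=remaining < DUST_LIMIT_SAT,
        signature=b"<sig over closing tx>",  # placeholder; real code signs the tx
    )
```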

LN Summit 2023 Notes

The text you provided is a detailed summary of a discussion about various topics related to the Lightning Network (LN) specification. Here is a breakdown of the key points discussed: 1. Package Relay: The discussion focused on the proposal for package relay, which involves grouping transactions into packages for more efficient processing. The current proposal is to use ancestor package relay, which allows for up to 24 ancestors for each child transaction. Other topics discussed include base package relay, commitment transactions, ephemeral anchors, HTLCs (hashed timelock contracts), and mempool policy changes. 2. Taproot: The discussion touched on the latest developments in the Taproot privacy and scalability improvement proposal. Specific points discussed include the changes related to anchors and revocation paths, as well as the implementation of nonces. 3. Gossip V1.5 vs V2: The participants discussed the differences between Gossip V1.5 and V2 in terms of script binding and amount binding. They debated whether to fully bind to the script or allow any taproot output to be a channel. The potential implications for pathfinding and capacity graphs were also discussed. 4. Reputation System for Channel Jamming: The participants explored the idea of using a reputation system to mitigate channel jamming attacks. The discussion revolved around resource bucketing, reputation scores, endorsement signals, and the impact on network quality of service. 5. Simplified Commitments: The participants discussed the concept of simplified commitments, which aims to simplify the LN state machine by introducing turn-taking and a refined protocol for updates, commitments, and revocations. They also discussed the possibility of introducing NACK messages for rejecting updates and the benefits of a more streamlined state machine. 6. Meta Spec Process: The participants debated the best approach to managing the LN specification as it evolves over time. They discussed the pros and cons of a living document vs. versioning, the need for modularization, and the importance of maintaining backward compatibility. The role of extensions, cleaning up the specification, and improving communication among developers were also discussed. 7. Async Payments/Trampoline: The participants briefly discussed the use of blinded payments for trampoline payments, where nodes in the network help route payments to their destination. The concept of trampolines, radius-based gossip, and splitting multi-path payments over trampoline were mentioned. In summary, the discussion covered a range of topics related to the LN specification, including package relay, Taproot, gossip protocols, reputation systems, simplified commitments, meta spec processes, and trampoline payments. The participants provided detailed insights, shared ideas, and debated the pros and cons of various proposals and approaches.

During the annual specification meeting in New York City at the end of June, the attendees attempted to take transcript-style notes. These notes are available in a Google Docs document, which you can find at the link provided. Additionally, the full set of notes is included at the end of the email, although the formatting may be affected. The discussions at the summit covered several topics, including: 1. Package Relay: The current proposal for package relay is ancestor package relay, which allows one child to have up to 24 ancestors. Currently, only mempool transactions are scored by ancestry, so there isn't much point in other types of packages. Commitment transactions still require the minimum relay fee for base package relay. Batch bumping is not allowed to prevent pinning attacks. With one anchor, package RBF can be used. V3 transactions will allow minimum relay fees to be dropped, with the restriction of one child paying for one parent transaction. 2. HTLCs (with anchors): There are changes being made to HTLC transactions with the introduction of SIGHASH_ANYONECANPAY, which allows the counterparty to inflate the size of a transaction. The discussion revolved around how much the system should be changed. The proposed changes would allow for zero-fee commitment transactions and one ephemeral anchor per transaction. The use of ephemeral anchors would eliminate the need for delay and ensure eviction of the parent transaction. 3. Mempool Policy: The mempool can be organized into clusters of transactions, allowing for easier sorting and reasoning. The mining algorithm will pick one "vertical" within the mempool using the ancestor fee rate. The discussion explored the possibility of adding cluster mempool to enable package RBF. 4. Taproot: The main change in taproot is around anchors, which become more complicated with this update. The discussion covered various aspects of taproot, including revocation paths, NUMS points, and co-op close negotiation. 5. Gossip V1.5 vs. V2: The discussion revolved around the script binding and amount binding in gossip. The participants debated whether to bind to the script or allow any taproot output. The consensus was to allow any taproot output to be a channel and let people experiment. 6. Multi-Sig Channel Parties: The discussion focused on different ways to implement multi-sig for one channel party, such as using scripts, FROSTy, or recursive musig. 7. PTLCs (Point Time-Locked Contracts): The conversation explored different approaches to PTLCs, such as regular musig or adaptor signatures. The potential for redundant overpayment (stuckless payments) and different options for achieving it were also discussed. 8. Hybrid Approach to Channel Jamming: The discussion centered around different approaches to mitigate jamming attacks in the Lightning Network, including monetary solutions (unconditional fees), reputation-based solutions, and utilizing scarce resources (PoW, stake, tokens). The participants discussed the need to combine multiple solutions for effective mitigation and the challenges associated with each approach. 9. Reputation for Channel Jamming: The participants explored the concept of reputation-based mitigation for jamming attacks. The discussion focused on resource bucketing, reputation scores, and the allocation of protected and general slots for HTLCs based on reputation and endorsement signals. 10.
Simplified Commitments: The conversation revolved around simplifying the state machine for Lightning Network by implementing turn-taking and introducing the concepts of revoke and NACK. The participants explored the implications of these changes and the benefits of simplified commitments. 11. Meta Spec Process: The participants discussed the idea of moving away from a single "living document" to a versioning system for the Lightning Network specification. The proposal was to have extensions that can be added and removed as needed, allowing for modularity and easier maintenance. The participants also discussed the need for better communication channels and the importance of recommitting to the lightning-dev mailing list. 12. Async Payments/Trampoline: The final discussion focused on trampoline payments and the potential for async (asynchronous) payments. The participants explored the concept of light nodes, trampoline routing, and the ability to split MPP (multi-part payments) over trampoline. In summary, the discussions covered a wide range of topics related to Lightning Network and its specifications. The participants delved into technical details, proposed solutions, and debated the benefits and challenges of various approaches.

Blinded 2-party Musig2

This text describes the implementation of a version of the 2-of-2 Schnorr Musig2 protocol for statechains. Statechains involve a server (referred to as party 1) that is "blinded," meaning it holds a private key necessary to generate an aggregate signature on an aggregate public key, but it does not have access to certain information. The information that party 1 is not supposed to learn includes: 1) the aggregate public key, 2) the aggregate signature, and 3) the message being signed (denoted as "m" in the text). The security of this implementation relies on party 1 being trusted to report the number of partial signatures it has generated for a particular key, rather than being trusted to enforce rules on what it has signed, as in the unblinded case. The full set of signatures generated is verified on the client side. The implementation is based on the 2-of-2 musig2 protocol, which operates as follows: 1. Party 1 generates a private key, denoted as "x1," and the corresponding public key, denoted as "X1 = x1G". G is the generator point, and point multiplication is denoted as X = xG, while point addition is denoted as A = G + G. 2. Party 2 generates a private key, denoted as "x2," and the corresponding public key, denoted as "X2 = x2G". 3. The set of public keys is denoted as L = {X1, X2}. 4. The key aggregation coefficient is given by KeyAggCoef(L, X) = H(L, X), where H is a hash function. This coefficient is used to calculate the shared (aggregate) public key, denoted as X = a1X1 + a2X2, where a1 = KeyAggCoef(L, X1) and a2 = KeyAggCoef(L, X2). 5. To sign a message "m," party 1 generates a nonce "r1" and derives a point "R1 = r1G". Party 2 generates a nonce "r2" and derives a point "R2 = r2G". These points are aggregated into "R = R1 + R2". 6. Party 1 computes the challenge "c" as the hash of the concatenation of X, R, and m, i.e., c = H(X||R||m), and calculates s1 = c.a1.x1 + r1. 7. Party 2 also computes the challenge "c" using the same formula, c = H(X||R||m), and calculates s2 = c.a2.x2 + r2. 8. The final signature is represented as (R, s1 + s2). In the case of blinding party 1, the steps to prevent it from learning the full public key or final signature are as follows: 1. Key aggregation is performed solely by party 2. Party 1 only needs to send its own public key, X1, to party 2. 2. Nonce aggregation is performed solely by party 2. Party 1 only needs to send its own nonce, R1, to party 2. 3. Party 2 computes the challenge "c" using the same formula and sends it to party 1 in order to compute s1 = c.a1.x1 + r1. 4. Party 1 never learns the final value of (R, s1 + s2) or the message "m". This implementation aims to provide confidentiality for party 1 by blinding it from certain information, thereby ensuring that party 1 cannot determine the full public key, final signature, or the signed message. Any feedback or potential issues with this approach would be appreciated.

In this implementation, we are using a cryptographic protocol called 2-of-2 Schnorr Musig2 for statechains. In this protocol, there are two parties involved - party 1 and party 2. The goal is to create an aggregate signature on an aggregate public key, while ensuring that party 1 remains fully "blinded" and does not learn certain information. Blinding refers to the process of preventing party 1 from gaining knowledge of the aggregate public key, the aggregate signature, and the message being signed. In this model of blinded statechains, the security relies on party 1 being trusted to report the number of partial signatures it has generated for a specific key. The actual verification of the signatures is done on the client side. Now, let's break down how the 2-of-2 musig2 protocol operates and how blinding is achieved: 1. Key Generation: - Party 1 generates a private key (x1) and a corresponding public key (X1 = x1G), where G is the generator point. - Party 2 does the same, generating a private key (x2) and a public key (X2 = x2G). - The set of public keys is represented by L = {X1, X2}. 2. Key Aggregation: - The key aggregation coefficient is calculated using the set of public keys (L) and the aggregate public key (X). - KeyAggCoef(L, X) = H(L, X), where H is a hash function. - The shared (aggregate) public key is calculated as X = a1X1 + a2X2, where a1 = KeyAggCoef(L, X1) and a2 = KeyAggCoef(L, X2). 3. Message Signing: - To sign a message (m), party 1 generates a nonce (r1) and calculates R1 = r1G. - Party 2 also generates a nonce (r2) and calculates R2 = r2G. - These nonces are aggregated to obtain R = R1 + R2. - Party 1 computes the 'challenge' (c) as c = H(X || R || m) and calculates s1 = c.a1.x1 + r1. - Party 2 also computes the 'challenge' (c) as c = H(X || R || m) and calculates s2 = c.a2.x2 + r2. - The final signature is (R, s1 + s2). Now, let's focus on the blinding aspect for party 1: To prevent party 1 from learning the full public key or the final signature, the following steps are taken: 1) Key aggregation is performed only by party 2. Party 1 simply sends its public key X1 to party 2. 2) Nonce aggregation is performed only by party 2. Party 1 sends its generated nonce R1 to party 2. 3) Party 2 computes the 'challenge' (c) as c = H(X || R || m) and sends it back to party 1. Party 1 then computes s1 = c.a1.x1 + r1. - Party 1 does not need to independently compute and verify the challenge (c) since it is already blinded from the message. By following these steps, party 1 never learns the final value of (R, s1 + s2) or the message (m). In terms of potential issues, it is important to carefully evaluate the trustworthiness of the statechain server that reports the number of partial signatures. Additionally, the full set of signatures should be verified on the client side to ensure their validity. Any comments or concerns regarding this implementation would be highly appreciated.
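
For readers who want to sanity-check the algebra, here is a toy walkthrough in Python. It works with the scalars directly instead of curve points, so it is not secure and leaks everything; it only verifies that the blinded division of labour produces a signature satisfying s = c*(a1*x1 + a2*x2) + (r1 + r2) mod n, which is the scalar form of the verification equation s*G = R + c*X. The hash inputs below stand in for the serialized points the real protocol would hash, and the message string is made up.

```python
# Scalar-only toy of the blinded 2-of-2 musig2 flow described above.  Not an
# implementation of the statechain protocol; it only checks that the partial
# signatures computed by the two parties add up to a valid Schnorr-style signature.

import hashlib
import secrets

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 order

def h(*parts: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big") % N

def enc(x: int) -> bytes:
    return x.to_bytes(32, "big")

# Key generation: x1 stays on the blinded server (party 1), x2 on the client (party 2).
x1, x2 = secrets.randbelow(N), secrets.randbelow(N)

# Key aggregation is done by party 2 alone; party 1 only ever contributed X1.
# (Scalars stand in for the points X1, X2 that would really be hashed.)
a1 = h(enc(x1), enc(x2), enc(x1))          # KeyAggCoef(L, X1)
a2 = h(enc(x1), enc(x2), enc(x2))          # KeyAggCoef(L, X2)
x_agg = (a1 * x1 + a2 * x2) % N            # discrete log of the aggregate key X

# Nonces: each party picks one, party 2 alone aggregates them.
r1, r2 = secrets.randbelow(N), secrets.randbelow(N)
r_agg = (r1 + r2) % N                      # discrete log of R = R1 + R2

# Party 2 computes c = H(X || R || m) and sends it (with the coefficient a1) to
# party 1, which therefore never sees X, R, or the message m.
m = b"example message"
c = h(enc(x_agg), enc(r_agg), m)

s1 = (c * a1 * x1 + r1) % N                # computed by the blinded party 1
s2 = (c * a2 * x2 + r2) % N                # computed by party 2
s = (s1 + s2) % N                          # party 2 assembles the final signature (R, s)

assert s == (c * x_agg + r_agg) % N        # scalar form of s*G == R + c*X
```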

Computing Blinding Factors in a PTLC and Trampoline World

This passage describes a mathematical demonstration of a method for computing blinding factors in a specific way. The goal is to achieve certain properties, such as ensuring that only one blinding factor is needed for each intermediate node and the receiver, and allowing Trampoline nodes to provide blinding factors to sub-routes without the intermediate nodes being aware they are on a Trampoline route. The demonstration begins by establishing that the ultimate receiver has a secret value "r" and shares a point "R" with the ultimate sender, where R = r * G (G represents a point on an elliptic curve). In the simplest case, where the ultimate sender and receiver are directly connected, the ultimate sender chooses a random scalar "e" as the error blinding factor and constructs an onion with "e" encrypted for the ultimate receiver. Along with the onion, the ultimate sender offers a Payment-Triggered Lightning Contract (PTLC) with the point e * G + R. The ultimate receiver can claim this PTLC by revealing e + r. Next, the scenario is slightly modified to include an intermediate node named Carol. In this case, the ultimate sender still chooses a random scalar "e" as the final error factor but also generates two scalars "c" and "d" such that c + d = e. This is achieved by selecting a random "d" and computing c = e - d. The onion is then encrypted with e for the ultimate receiver and the ciphertext, along with d encrypted for Carol. The PTLC is sent to Carol with the point c * G + R. Carol adds her per-hop blinding factor times G to the input point and sends a modified PTLC with the point c * G + R + d * G to the next hop. This results in (c + d) * G + R, which is equivalent to e * G + R, as e = c + d. The ultimate receiver cannot differentiate whether the PTLC came from Carol or a direct source-to-destination route because both cases result in the same point e * G + R. When the ultimate receiver reveals e + r, Carol can compute c + r by taking e + r - d. Since c = e - d, e + r - d = e - d + r = c + r. Carol can then claim the incoming c * G + R with the scalar c + r. Carol only knows d, not c or r, so it cannot compute r. Lastly, the scenario is extended to include Carol as a Trampoline node, and the ultimate sender does not provide the detailed route from Carol to the next Trampoline hop. The ultimate sender learns R, selects a random e, and computes c and d such that c + d = e. The Trampoline-level onion includes e encrypted for the ultimate receiver and the ciphertext, along with d and the next Trampoline hop encrypted for Carol. The PTLC with the onion is sent to Carol with the point c * G + R. Carol decrypts the onion and obtains d. Carol then needs to search for a route from herself to the ultimate receiver. Let's assume the route found is Carol -> Alice -> ultimate receiver. Carol selects two scalars, a and b, such that a + b = d. It creates a new onion with the ciphertext copied from the ultimate sender and b encrypted for Alice. The PTLC with the point c * G + R + a * G is sent to Alice. Alice decrypts the onion and learns b. Alice forwards the PTLC with the point c * G + R + a * G + b * G to the next hop, the ultimate receiver. Since a + b = d, a * G + b * G = d * G. Also, c + d = e, so c * G + d * G = e * G. Therefore, c * G + R + a * G + b * G = c * G + a * G + b * G + R = c * G + d * G + R = (c + d) * G + R = e * G + R. The ultimate receiver receives the same e * G + R and cannot determine whether it was reached via a Trampoline, non-Trampoline intermediate, or direct route. 
Each intermediate node, both Trampoline and non-Trampoline, can claim its incoming PTLC, and only the ultimate sender knows c, allowing the recovery of r.

In this explanation, we will break down a mathematical demonstration that involves the computation of blinding factors. The purpose of this computation is to achieve certain goals, such as minimizing the number of blinding factors that intermediate nodes need to know and allowing trampoline nodes to provide blinding factors to sub-routes without revealing that they are trampoline nodes. Let's start by understanding the basic setup. We have a sender (ultimate sender) and a receiver (ultimate receiver). The ultimate receiver has a secret value called 'r'. The ultimate receiver shares a point called 'R' with the ultimate sender, where 'R' is equal to 'r' multiplied by a specific point 'G'. In the simplest case, if the ultimate sender can directly communicate with the ultimate receiver, it chooses a random value (scalar) called 'e' as the blinding factor. It constructs an onion with 'e' encryptable by the ultimate receiver and sends it along with a payment (PTLC) that contains the point 'e * G + R'. The ultimate receiver can claim this payment by revealing 'e + r' since it learns 'e' from the onion and knows 'r' (the secret value). This is possible because the contract between them requires the ultimate receiver to provide 'r' in exchange for payment. Now, let's consider a scenario where an intermediate node, Carol, exists between the ultimate sender and the ultimate receiver. In this case, the ultimate sender still needs to choose a final blinding factor 'e' randomly. However, the sender also needs to generate two other scalars, 'c' and 'd,' such that 'c + d = e'. This can be achieved by selecting a random scalar 'd' and computing 'c = e - d'. The ultimate sender then encrypts the onion in the following way: - 'e' is encrypted to the ultimate receiver. - The above ciphertext, along with 'd' encrypted, is sent to intermediate node Carol. The ultimate sender sends the payment (PTLC) with the point 'c * G + R' to Carol. At this point, each intermediate non-Trampoline node (such as Carol) takes the input point, adds its per-hop blinding factor multiplied by 'G', and uses the result as the output point to the next hop. So Carol receives 'c * G + R'. Carol then adds 'd * G' (the 'd' error obtained from the onion) and sends a PTLC with the point 'c * G + R + d * G' to the next hop. Note that 'e = c + d', so the PTLC sent by Carol to the ultimate sender can be rearranged as '(c + d) * G + R'. This is equivalent to 'e * G + R', which is the same as the direct case where there is no intermediate node. Therefore, the ultimate receiver cannot distinguish whether it received from Carol or from a further node since it sees 'e * G + R' in both cases. When the ultimate receiver releases 'e + r', Carol can compute 'c + r' by taking 'e + r - d'. Since 'c = e - d', 'e + r - d = e - d + r = c + r'. Carol can then claim the incoming 'c * G + R' with the scalar 'c + r'. It's important to note that Carol does not know 'c'; it only knows 'd' and, therefore, cannot compute 'r'. Now let's consider another scenario where Carol is a trampoline node, and the ultimate sender does not provide a detailed route from Carol to the next trampoline hop. In this case, the ultimate receiver is actually the final trampoline hop after Carol, but Carol is unaware of this fact and cannot learn it. The ultimate sender still learns 'R' but selects a random 'e' as the blinding factor. It generates 'c' and 'd' such that 'c + d = e', following the same technique as before. 
The ultimate sender then creates a trampoline-level onion with the following encrypted components: - 'e' encrypted to the ultimate receiver. - The above ciphertext, 'd', and the next trampoline hop (the node ID of the ultimate receiver) encrypted to Carol. The ultimate sender sends the payment (PTLC) with the above onion, containing the point 'c * G + R', to Carol. Carol decrypts the onion and obtains 'd'. Now, Carol needs to find a route from itself to the ultimate receiver, which, in this case, is the next trampoline hop. Suppose Carol finds a route Carol -> Alice -> ultimate receiver. Carol needs to make 'c * G + d * G + R' reach the ultimate receiver. It can do this by selecting two scalars, 'a' and 'b', such that 'a + b = d'. Carol knows 'd', so it randomly selects 'b' and computes 'a = d - b'. Carol creates the onion as follows: - It copies the ciphertext from the ultimate sender: 'e' encrypted to the ultimate receiver. - The above ciphertext and 'b' encrypted to Alice. Carol sends the PTLC with the point 'c * G + R + a * G' to Alice. Alice decrypts the onion and learns 'b'. Then, Alice forwards the PTLC with the point 'c * G + R + a * G + b * G' to the next hop, the ultimate receiver. Now, 'a + b = d', so 'a * G + b * G = d * G'. Also, 'c + d = e', so 'c * G + d * G = e * G'. Therefore: c * G + R + a * G + b * G = c * G + a * G + b * G + R (commutative property) = c * G + (a + b) * G + R (associative property) = c * G + d * G + R (d = a + b by construction) = (c + d) * G + R (associative property) = e * G + R (e = c + d by construction) Thus, the ultimate receiver receives the same 'e * G + R' and cannot differentiate whether it was reached via a trampoline, a non-trampoline intermediate, or directly. Similarly, when claiming, every intermediate node, both trampoline and non-trampoline, has enough data to claim its incoming PTLC. And only the ultimate sender knows 'c', which allows it to recover 'r'. I hope this detailed explanation helps you understand the mathematical demonstration and its implications.

Potential vulnerability in Lightning backends: BOLT-11 "payment hash" does not commit to payment!

In this message, the sender, Calle, is informing a list of recipients about an exploit that was discovered by their team at LNbits. The exploit allowed an attacker to create balances by taking advantage of a quirk in how invoices are handled internally. The team has already patched this issue in LNbits version 0.10.5 and recommends that everyone update as soon as possible. Calle wants to describe the attack in detail because they believe that similar exploits may be possible in other Lightning applications. They specifically mention that this information would be important for people working on custodial wallets, payment processors, account management software, and so on. The attack involves an attacker manipulating two payments, A and B, and tricking the backend into thinking that B is equal to A. Here are the steps involved: 1. The attacker creates an invoice A with an amount of 1000 sat (satoshi) in LNbits. 2. The attacker also creates a separate invoice B' with an amount of 1 sat on their own node. 3. The attacker then modifies B' by inserting the payment hash of payment A into it, effectively making B with manipulated payment details. 4. The attacker re-signs the invoice to make it look legitimate again and serializes it, creating the malicious invoice B. 5. Next, the attacker creates a new account in LNbits and pays invoice B. 6. The LNbits backend, when processing the payment, uses the payment hash of B to determine whether it's an internal payment or a payment via the Lightning Network. 7. Since the backend assumes that the payment hash of A commits to A, it finds A in its database. 8. The backend then settles the payment internally by crediting A and debiting B. 9. As a result, the attacker has effectively "created" 999 sat in their account by manipulating the payment process. To mitigate this exploit, the recommended approach is for backends to either use unique "checking ids" that they generate themselves for looking up internal payments or implement additional checks to ensure that the invoice details haven't been tampered with. For example, they could verify that the amount of A is equal to the amount of B. Calle also highlights two lessons learned from this attack. Firstly, it emphasizes the level of sophistication of attackers familiar with the Lightning Network. This particular exploit required a deep understanding of the underlying technology and the ability to create custom tools. Secondly, it underscores the importance of understanding that the "payment hash" in an invoice is actually just a "preimage" hash and doesn't commit to payment details such as amount or pubkey. Calle suggests calling it the "preimage hash" going forward to avoid any implicit assumptions. Overall, this message serves as a detailed explanation of the discovered exploit, the steps involved in carrying it out, the recommended mitigation, and the lessons learned from this experience.

Dear 15-year-old, Recently, a team called LNbits discovered an interesting issue in their software that could allow someone to exploit it. Let me explain it to you in detail. LNbits is a software that handles invoices related to Lightning Network, which is a technology used for quick and low-cost transactions of cryptocurrencies like Bitcoin. In this software, there was a loophole that allowed an attacker to create fake balances by taking advantage of how invoices are processed internally. The team at LNbits fixed this issue in their latest version, 0.10.5, and they are urging everyone to update their software as soon as possible if they haven't done so already. They are sharing the details of the attack because they believe that similar exploits might be possible in other Lightning Network applications. If you are involved in developing custodial wallets, payment processors, or account management software, this information is relevant to you. Now, let's talk about how the attack works. The attacker first creates an invoice, let's call it Invoice A, with an amount of 1000 sat (satoshis, the smallest unit of Bitcoin). Then, they create another invoice, Invoice B', with an amount of 1 sat on their own node. The attacker then modifies Invoice B' by inserting the payment hash of Invoice A into it. The payment hash is a unique identifier for each payment. By doing this, the attacker tricks the LNbits backend, the system that handles the invoices, into thinking that Invoice B is actually Invoice A. They do this by reshaping the invoice and making it look like a legitimate payment. Next, the attacker creates a new account in LNbits and pays Invoice B. The LNbits backend, which checks the payment hash to determine whether it's an internal payment or a payment through Lightning Network, finds Invoice A in its database. This is because the backend assumes that the payment hash commits to Invoice A. However, the critical part here is that payment hashes do not commit to payment details like the amount, but only to the preimage (a unique code linked to the payment). As a result, the LNbits backend settles the payment by crediting Invoice A and debiting Invoice B. By doing this, the attacker has effectively "created" 999 sat. To prevent such attacks, it is important for backends to use unique identifiers or additional checks when looking up internal payments. This ensures that the invoice details have not been tampered with. There are two lessons to learn from this incident. Firstly, it is crucial to understand that attackers who are knowledgeable about Lightning Network can be quite sophisticated. This attack required a deep understanding of technical concepts and custom tools to carry it out. Secondly, the term "payment hash" is misleading because it suggests that it commits to payment details like the amount of money or the public key. In reality, it only commits to the preimage. To mitigate confusion, the author suggests renaming it as the "preimage hash." I hope this explanation helps you understand the issue and the importance of keeping software secure and updated. Best, Calle

An Open Source Framework to Collect Lightning Network Metrics

In this message, the author is introducing a side project they have been working on. The project involves collecting data on the Lightning Network, which is a protocol built on top of blockchain technology for conducting faster and cheaper transactions. The main objective of the project is to monitor the evolution of the Lightning Network and gather relevant data. This collected data can then be used to evaluate different proposals or ideas related to the network. One specific proposal mentioned is "channel jamming," which refers to a scenario where a malicious user intentionally overloads a channel to disrupt transactions. The author highlights that collecting real data is important as it provides tangible insights into the network's behavior and allows for more informed evaluations. Simulations can only provide theoretical results, whereas real data offers a more accurate representation of the network's dynamics. Additionally, the author mentions that their project aims to support University Research that may not have access to real data. By providing this collected information, researchers can analyze and evaluate their own ideas without having to rely solely on simulations. The author provides links to further information about the project. [1] leads to a detailed document outlining the idea and methodology behind the data collection. [2] directs to an experimental explorer, a platform where users can explore and visualize the collected data. Finally, [3] is a public Graphql API (Application Programming Interface) that exposes the collected data for developers or researchers to access. In conclusion, the author hopes that their project will be useful to someone interested in studying, evaluating, or proposing solutions for the Lightning Network.

Hello! I'm happy to explain this to you in great detail. So, it seems like the person who wrote this message has a side project where they're trying to gather data on something called the lightning network. The lightning network is a system built on top of the Bitcoin blockchain that allows for faster and cheaper transactions. The goal of this project is to track how the lightning network evolves over time. They want to do this to evaluate different proposals or ideas for improving the network. They mention something called "channel jamming," which is one proposal they're interested in investigating. By collecting real data on the network, they can see how these proposals actually affect the network in practice, instead of just relying on simulation results. Additionally, they mention that they want to support university research that may not have access to this real data. By providing this data, they hope to enable more research and experimentation in the field. To achieve this, the person has come up with a way to define and collect information that can later be shared with others. They've provided links to a more detailed description of their idea, an experimental explorer where you can see the data they've collected, and a public Graphql API that allows others to access this data as well. The hope is that this project will be useful for someone who wants to study or improve the lightning network. The person who wrote this message goes by the name Vincent and they're excited about the potential impact of their project. I hope that helps! Let me know if you have any further questions.

option_simple_close for "unfailable" closing

The link provided is a pull request on the GitHub repository for the "bolts" project. The pull request is labeled as #1096. The description of the pull request indicates that it is a "can't fail!" close protocol, which was discussed at the NY Summit and on @Roasbeef's wishlist. The protocol aims to be as simple as possible, with the only complexity arising from allowing each side to indicate whether they want to omit their own output. The protocol is "taproot ready" in the sense that shutdown is always sent to trigger it, allowing nonces to be included without any persistence requirement. The pull request consists of three commits to the repository. The first commit introduces the new protocol, the second commit removes the requirement that shutdown not be sent multiple times (which was already nonsensical), and the third commit removes the older protocols. The pull request includes changes to the file "02-peer-protocol.md". The changes in this file introduce the new protocol, describe the closing negotiation process, and specify the requirements for each step of the negotiation. The file includes a section on "Closing Negotiation" that explains that once shutdown is complete and the channel is empty of Hashed Time Locked Contracts (HTLCs), each peer says what fee it will pay and the other side simply signs off on that transaction. The section includes details on the message types, the data they contain, and the requirements for each peer in the negotiation. The pull request also includes changes to the file "03-transactions.md". The changes in this file provide details on the closing transactions used in the negotiation process. The file describes the different variants of the closing transaction and outlines the requirements for each variant. Finally, the pull request includes changes to the file "09-features.md". The changes in this file add a new feature called "option_simple_close" which is related to the simplified closing negotiation described in the pull request. Overall, the pull request introduces a new closing protocol for the bolts project, provides specifications for the negotiation process, and makes changes to related files to support the new protocol.

This is a pull request on a GitHub repository called "bolts" that proposes a new protocol for closing a channel in the Lightning Network. The pull request is numbered 1096. The new protocol is called "can't fail!" close protocol and it was discussed at the NY Summit and on the wishlist of a person named Roasbeef. The goal of this protocol is to make the closing process as simple as possible, with the only complexity being the option for each side to indicate whether they want to omit their own output. The protocol is "taproot ready" in the sense that the shutdown message is always sent to trigger the closing process, and this message can contain the necessary data without requiring any persistence. The pull request is split into three commits for cleanliness and organization. The first commit introduces the new protocol, the second removes a requirement that no longer makes sense, and the third removes older protocols that are no longer needed. The pull request includes changes to the "02-peer-protocol.md" and "03-transactions.md" files. In the "02-peer-protocol.md" file, there are several sections that describe the closing process, including the closing initiation, closing negotiation, and normal operation. The "closing negotiation" section is further divided into two parts: "closing_complete" and "closing_sig". In the "closing_complete" part, each peer says what fee it is willing to pay, and the other side simply signs that transaction. The complexity arises from allowing each side to omit its own output if it is not economically viable. This process can be repeated every time a shutdown message is received, allowing for re-negotiation. The "closing_sig" part describes the requirements for this message, including the transaction data that needs to be included and the signatures that need to be provided. The requirements differ depending on whether the sender of the message is the closer or the closee. The receiver of the closing_sig message needs to validate the signatures and select one of the transactions to respond to. The "03-transactions.md" file includes the details of the closing transactions, including the classic closing transaction variant and the closing transaction variant used for closing_complete and closing_sig messages. Overall, this pull request proposes a new protocol for closing a channel in the Lightning Network that simplifies the process and allows for negotiation between the peers involved. It includes changes to the protocol specification files to describe the new protocol in detail.

LN Summit 2023 Notes

These notes are a detailed summary of a discussion of various topics related to the Lightning Network (LN) specification. A breakdown of the key points discussed:

1. Package Relay: The discussion focused on the proposal for package relay, which involves grouping transactions into packages for more efficient processing. The current proposal is ancestor package relay, which allows up to 24 ancestors for each child transaction. Other topics discussed include base package relay, commitment transactions, ephemeral anchors, HTLCs (hashed timelock contracts), and mempool policy changes.
2. Taproot: The discussion touched on the latest developments in taproot channels. Specific points included the changes related to anchors and revocation paths, as well as the implementation of nonces.
3. Gossip V1.5 vs V2: The participants discussed the differences between Gossip V1.5 and V2 in terms of script binding and amount binding. They debated whether to fully bind to the script or allow any taproot output to be a channel. The potential implications for pathfinding and capacity graphs were also discussed.
4. Reputation System for Channel Jamming: The participants explored the idea of using a reputation system to mitigate channel jamming attacks. The discussion revolved around resource bucketing, reputation scores, endorsement signals, and the impact on network quality of service (a toy sketch of the bucketing idea appears below).
5. Simplified Commitments: The participants discussed the concept of simplified commitments, which aims to simplify the LN state machine by introducing turn-taking and a refined protocol for updates, commitments, and revocations. They also discussed the possibility of introducing NACK messages for rejecting updates and the benefits of a more streamlined state machine.
6. Meta Spec Process: The participants debated the best approach to managing the LN specification as it evolves over time. They discussed the pros and cons of a living document vs. versioning, the need for modularization, and the importance of maintaining backward compatibility. The role of extensions, cleaning up the specification, and improving communication among developers were also discussed.
7. Async Payments/Trampoline: The participants briefly discussed the use of blinded payments for trampoline payments, where nodes in the network help route payments to their destination. The concept of trampolines, radius-based gossip, and splitting multi-path payments over trampoline were mentioned.

In summary, the discussion covered a range of topics related to the LN specification, including package relay, taproot, gossip protocols, reputation systems, simplified commitments, the meta spec process, and trampoline payments. The participants provided detailed insights, shared ideas, and debated the pros and cons of various proposals and approaches.
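
As a rough illustration of the resource-bucketing idea from the jamming discussion, the sketch below splits a channel's HTLC slots into a protected bucket (for endorsed traffic from high-reputation peers) and a general bucket for everyone else. The slot counts and threshold are made-up values, and the rule is a simplification of what was discussed, not a specified protocol.

```python
PROTECTED_SLOTS = 200       # hypothetical per-channel slot budgets
GENERAL_SLOTS = 283
REPUTATION_THRESHOLD = 0.8  # made-up score cutoff

def admit_htlc(peer_reputation: float, endorsed: bool,
               protected_in_use: int, general_in_use: int):
    """Return which bucket a new HTLC may occupy, or None to fail it."""
    if endorsed and peer_reputation >= REPUTATION_THRESHOLD:
        if protected_in_use < PROTECTED_SLOTS:
            return "protected"
    # Unendorsed or low-reputation traffic only competes for general slots,
    # so a jammer cannot exhaust the protected bucket.
    if general_in_use < GENERAL_SLOTS:
        return "general"
    return None
```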

During the annual specification meeting in New York City at the end of June, the attendees attempted to take transcript-style notes. These notes are available in a Google Docs document at the link provided, and the full set of notes is included at the end of the email, although the formatting may be affected. The discussions at the summit covered several topics:

1. Package Relay: The current proposal for package relay is ancestor package relay, which allows one child to have up to 24 ancestors. Currently, only mempool transactions are scored by ancestry, so there isn't much point in other types of packages. Commitment transactions still require the minimum relay fee for base package relay. Batch bumping is not allowed, to prevent pinning attacks. With one anchor, package RBF is possible. V3 transactions would allow dropping the minimum relay fee, with a restriction of one child paying for one parent transaction.
2. HTLCs with anchors: Changes are being made to HTLC transactions with anchors, where SIGHASH_ANYONECANPAY allows the counterparty to inflate the size of a transaction. The discussion revolved around how much the system should be changed. The proposed changes would allow for zero-fee commitment transactions and one ephemeral anchor per transaction. The use of ephemeral anchors would eliminate the need for the delay and ensure eviction of the parent transaction.
3. Mempool Policy: The mempool can be organized into clusters of transactions, allowing for easier sorting and reasoning. The mining algorithm picks one "vertical" within the mempool using the ancestor fee rate. The discussion explored the possibility of adding the cluster mempool to enable package RBF.
4. Taproot: The main change in taproot channels is around anchors, which become more complicated with this update. The discussion covered various aspects, including revocation paths, NUMS points, and co-op close negotiation.
5. Gossip V1.5 vs. V2: The discussion revolved around script binding and amount binding in gossip. The participants debated whether to bind to the script or allow any taproot output. The consensus was to allow any taproot output to be a channel and let people experiment.
6. Multi-Sig Channel Parties: The discussion focused on different ways to implement multi-sig for one channel party, such as using scripts, FROST, or recursive musig.
7. PTLCs (Point Time-Locked Contracts): The conversation explored different approaches to PTLCs, such as regular musig or adaptor signatures. The potential for redundant overpayment (stuckless payments) and different options for achieving it were also discussed.
8. Hybrid Approach to Channel Jamming: The discussion centered on different approaches to mitigating jamming attacks in the Lightning Network, including monetary solutions (unconditional fees), reputation-based solutions, and the use of scarce resources (PoW, stake, tokens). The participants discussed the need to combine multiple solutions for effective mitigation and the challenges associated with each approach.
9. Reputation for Channel Jamming: The participants explored the concept of reputation-based mitigation for jamming attacks. The discussion focused on resource bucketing, reputation scores, and the allocation of protected and general slots for HTLCs based on reputation and endorsement signals.
10. Simplified Commitments: The conversation revolved around simplifying the Lightning Network state machine by implementing turn-taking and introducing the concepts of revoke and NACK. The participants explored the implications of these changes and the benefits of simplified commitments (see the toy sketch after this list).
11. Meta Spec Process: The participants discussed the idea of moving away from a single "living document" to a versioning system for the Lightning Network specification. The proposal was to have extensions that can be added and removed as needed, allowing for modularity and easier maintenance. The participants also discussed the need for better communication channels and the importance of recommitting to the lightning-dev mailing list.
12. Async Payments/Trampoline: The final discussion focused on trampoline payments and the potential for async (asynchronous) payments. The participants explored the concept of light nodes, trampoline routing, and the ability to split MPP (multi-path payments) over trampoline.

In summary, the discussions covered a wide range of topics related to the Lightning Network and its specification. The participants delved into technical details, proposed solutions, and debated the benefits and challenges of various approaches.
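
The turn-taking idea behind simplified commitments (item 10 above) can be illustrated with the toy state machine below. It assumes that only one peer holds the "turn" at a time and that a NACK simply discards the pending batch of updates; this is a sketch of the concept, not the proposed wire protocol.

```python
class SimplifiedChannelState:
    """Toy turn-taking state machine: only the peer whose turn it is may
    propose updates; the other side either commits to or NACKs the batch."""

    def __init__(self, starts_with_turn: bool):
        self.our_turn = starts_with_turn
        self.pending_updates = []

    def propose_update(self, update) -> None:
        if not self.our_turn:
            raise RuntimeError("not our turn: queue locally and wait")
        self.pending_updates.append(update)

    def on_peer_response(self, acked: bool) -> None:
        # A NACK just drops the batch; the proposer may retry on a later
        # turn, avoiding the concurrent-update reconciliation of the
        # current commitment dance.
        if not acked:
            self.pending_updates.clear()
        self.our_turn = False   # the turn passes to the other peer

    def on_turn_received(self) -> None:
        self.our_turn = True
```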