> I'm happy to see thinking and discussion in this area.

Getting this discussion going was exactly my intent. I'm not presenting a solution so much as asking: if we want to do this at some point, what are the problems, and can we solve them?
> If it turns out to be the case that PQ schemes need more on-chain size, but
> have lower per-byte computation cost, a reasonable argument could be made
> that a higher discount factor for PQ data is acceptable.

I was focused on size because computation is pretty great for most PQ signature schemes. PQ signatures are far cheaper to validate per byte, and according to BIP-360 Falcon is cheaper than EdDSA per signature verification:

EdDSA cycles to verify: 130,000
FALCON-512 cycles to verify: 81,036

This is one of the reasons I am very optimistic that Bitcoin will move to post-quantum signatures. If research shows that these signature schemes are sufficiently JPEG resistant, and I think it will, then a discount is very attractive.

> I don't think pre-aggregation (beyond a single-transaction-wide one) is
> realistic, as it effectively breaks in-mempool transaction replacement,
> turning every pre-aggregated group of transactions that is being relayed
> together into an atomic package that must be taken or not as a whole.

In some circumstances aggregation is possible: you could aggregate (P+C1, P+C2) into (P+C1+C2) if you can prove that P is the same in both transactions, so that the balance and authentication properties are maintained. However, I think what you have described is the shape of the problem we need to solve.

Consider transactions T1, T1', T2, T3, T4, T5, where T1 and T1' are double spends, i.e., they spend the same output to different outputs. If half the mempool aggregates TA = (T1, T2, T3) and the other half aggregates TB = (T1', T4, T5), then TA and TB are mutually exclusive and transactions are needlessly dropped on the floor. This is an existing griefing vector with coinjoins today and is an issue with mimblewimble aggregation. I don't think we have seen it abused much, but that doesn't mean we can ignore it. I believe this is a solvable problem, but it requires careful thought, and I haven't seen a fully baked answer.
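To make the TA/TB conflict concrete: two candidate aggregates are mutually exclusive exactly when the sets of outpoints they would spend intersect. Here is a minimal sketch of that check, using a toy data model (the transaction dicts and outpoint names are hypothetical, purely for illustration):

```python
# Sketch: detecting mutually exclusive aggregates by intersecting the sets
# of outpoints (UTXOs) that each candidate aggregate would spend.

def spent_outpoints(aggregate):
    """Union of all outpoints spent by the transactions in an aggregate."""
    spent = set()
    for tx in aggregate:
        spent |= set(tx["inputs"])
    return spent

def conflicts(agg_a, agg_b):
    """Two aggregates conflict if any outpoint would be spent by both."""
    return bool(spent_outpoints(agg_a) & spent_outpoints(agg_b))

# T1 and T1' are double spends: both spend the same outpoint ("utxo0", 0).
t1  = {"name": "T1",  "inputs": [("utxo0", 0)]}
t1p = {"name": "T1'", "inputs": [("utxo0", 0)]}
t2  = {"name": "T2",  "inputs": [("utxo1", 0)]}
t3  = {"name": "T3",  "inputs": [("utxo2", 0)]}
t4  = {"name": "T4",  "inputs": [("utxo3", 0)]}
t5  = {"name": "T5",  "inputs": [("utxo4", 0)]}

TA = [t1, t2, t3]    # one half of the network aggregates these
TB = [t1p, t4, t5]   # the other half aggregates these

print(conflicts(TA, TB))        # True: TA and TB are mutually exclusive
print(conflicts([t2, t3], TB))  # False: these could coexist in a block
```

Approach one below amounts to running this intersection check against a shared map of to-be-spent outpoints before committing to an aggregation, so that T1/T1' is resolved first and T2-T5 are not dropped along with the losing double spend.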
What follows is my intuition on how this might be solved.

Approach one: have relay nodes share a map of which UTXOs would be spent by their mempool prior to performing an aggregation, to detect and resolve double spends.

Approach two: allow an aggregator to non-interactively aggregate a set of transactions only if they are the sender or receiver of funds in all the transactions they are aggregating.

My biggest concern here is a conflict between aggregator/relay incentives and miner incentives that either pushes miners to become aggregators or reduces the profitability of mining. This conflict arises because, unless prevented by the protocol, an aggregator can aggregate high-fee transactions with low-fee transactions in such a way as to reduce miner fees and possibly collect fees for themselves.

For the sake of example, assume the block size allows only two transactions per block:

T1 has a 100 sat/vB fee rate
T2 has a 100 sat/vB fee rate
T3 has a 50 sat/vB fee rate
T4 has a 50 sat/vB fee rate

If the miner were the aggregator, they would aggregate (T1 + T2) and mine it to get the highest fee. Instead, an aggregator who is not a miner could collect a fee from the creators of T3 and T4 and aggregate (T1 + T3) and (T2 + T4), thereby raising the average fee rate of T3 and T4. The miner loses out on fees. Approach two makes this less of an issue because the creator of T1, if they are aware T2 exists, is unlikely to consent to having T1 aggregated with T3, since it lowers the total fee.

This relay-vs-miner conflict isn't an entirely new issue in Bitcoin. Miners today could run relay nodes and keep the high-fee transactions for themselves. I assume this isn't done very much in 2025 because the block subsidy still dominates, but it is likely to be a bigger issue when fees dominate.

On Mon, Apr 14, 2025 at 9:47 AM Pieter Wuille <[email protected]> wrote:
>
> Hi Ethan,
>
> thank you bringing this up. I'm unconvinced about the practicality, but I'm
> happy to see thinking and discussion in this area.
>
> Two points addressed below:
>
> On Friday, April 4th, 2025 at 12:29 PM, Ethan Heilman <[email protected]> wrote:
>
> > If it is the case that we can
> > handle these extra bytes without degrading performance or
> > decentralization, then consider the head room we are giving up that
> > could be used for scalability.
>
> I don't disagree with the overall point raised here, but I do think it's
> worth distinguishing between the "size" (bandwidth/storage) and "computation"
> (CPU/IO) aspects of scalability.
>
> If it turns out to be the case that PQ schemes need more on-chain size, but
> have lower per-byte computation cost, a reasonable argument could be made
> that a higher discount factor for PQ data is acceptable. I don't know what
> the trade-off here ought to be, and this does not diminish your "JPEG
> resistance" argument, but I did want to point out that just counting size
> isn't the only constraint here.
>
> > Such a system would present scaling issues for the mempool because
> > prior to aggregation and compression, these transactions would be 2kb
> > to 100kb in size and there would be a lot more of them. It is likely
> > parties producing large numbers of transactions would want to
> > pre-aggregate and compress them in one big many input, many output
> > transactions. Aggregating prior to the miner may have privacy benefits
> > but also scalability benefits as it would enable cut-throughs and very
> > cheap consolidation transactions. ~87/txns a second does not include
> > these additional scalability benefits.
>
> I don't think pre-aggregation (beyond a single-transaction-wide one) is
> realistic, as it effectively breaks in-mempool transaction replacement,
> turning every pre-aggregated group of transactions that is being relayed
> together into an atomic package that must be taken or not as a whole.
> Consider for example the case where transactions P, C1, and C2 are relayed,
> with C1 and C2 depending on P.
> One node sees P and C1, but not C2, they may
> pre-aggregate prior to relay. Another node sees P and C2, but not C1, they
> may pre-aggregate those prior to relay. These two packages (P+C1, P+C2)
> cannot be combined, so we've effectively forced the network/miners to choose
> between one of C1 or C2, unless the individual transactions are still
> available somewhere.
>
> I fear this is a very fast way to cause mining without direct-to-miner
> transaction submission from users to become uncompetitive, making entering
> the mining business permissioned, and effectively removing the point of
> having a decentralized consensus mechanism in the first place.
>
> --
> Pieter

--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/CAEM%3Dy%2BVK2VwoTc3VbHFbARm9no6qJivrug%2BLPuGy_m8%2BPFELOA%40mail.gmail.com.
