Hi Ethan,

Thank you for bringing this up. I'm unconvinced about the practicality, but I'm 
happy to see thinking and discussion in this area.

Two points addressed below:

On Friday, April 4th, 2025 at 12:29 PM, Ethan Heilman <[email protected]> wrote:

> If it is the case that we can
> handle these extra bytes without degrading performance or
> decentralization, then consider the head room we are giving up that
> could be used for scalability.

I don't disagree with the overall point raised here, but I do think it's worth 
distinguishing between the "size" (bandwidth/storage) and "computation" 
(CPU/IO) aspects of scalability.

If it turns out that PQ schemes need more on-chain size but impose a lower 
per-byte computation cost, a reasonable argument could be made that a higher 
discount factor for PQ data is acceptable. I don't know where the trade-off 
ought to lie, and this does not diminish your "JPEG resistance" argument, but I 
did want to point out that raw size isn't the only constraint here.
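To illustrate what a discount factor means in practice, here is a toy sketch 
modeled on segwit's existing witness discount (base bytes cost 4 weight units, 
witness bytes cost 1). The PQ_DISCOUNT value and the idea of applying it to PQ 
signature bytes are purely hypothetical, not a proposal:

```python
# Sketch of weight accounting with a per-category discount factor.
# WITNESS_DISCOUNT matches segwit (BIP 141); PQ_DISCOUNT is a made-up
# illustration of discounting cheap-to-verify PQ bytes more steeply.

WITNESS_DISCOUNT = 4   # base bytes cost 4 weight units each
PQ_DISCOUNT = 8        # hypothetical: PQ bytes cost 4/8 = 0.5 weight units each

def tx_weight(base_bytes: int, witness_bytes: int, pq_bytes: int) -> int:
    """Transaction weight in weight units; a larger discount divisor
    makes those bytes cheaper against the block weight limit."""
    return (base_bytes * WITNESS_DISCOUNT
            + witness_bytes
            + pq_bytes * WITNESS_DISCOUNT // PQ_DISCOUNT)

# The same 2000 bytes consume less weight as discounted PQ data (1800)
# than as witness data (2800):
w_pq = tx_weight(base_bytes=200, witness_bytes=0, pq_bytes=2000)
w_wit = tx_weight(base_bytes=200, witness_bytes=2000, pq_bytes=0)
```

The point is only that "bytes on the wire" and "weight charged against the 
block limit" can be decoupled, so counting size alone understates the design 
space.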

> Such a system would present scaling issues for the mempool because
> prior to aggregation and compression, these transactions would be 2kb
> to 100kb in size and there would be a lot more of them. It is likely
> parties producing large numbers of transactions would want to
> pre-aggregate and compress them in one big many input, many output
> transactions. Aggregating prior to the miner may have privacy benefits
> but also scalability benefits as it would enable cut-throughs and very
> cheap consolidation transactions. ~87/txns a second does not include
> these additional scalability benefits.

I don't think pre-aggregation (beyond a single-transaction-wide one) is 
realistic, as it effectively breaks in-mempool transaction replacement, turning 
every pre-aggregated group of transactions that is being relayed together into 
an atomic package that must be taken or rejected as a whole. Consider for 
example the case where transactions P, C1, and C2 are relayed, with C1 and C2 
depending on P. One node sees P and C1, but not C2, and may pre-aggregate them 
prior to relay. Another node sees P and C2, but not C1, and may do the same. 
These two packages (P+C1, P+C2) cannot be combined, so we've effectively forced 
the network/miners to choose between one of C1 or C2, unless the individual 
transactions are still available somewhere.
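To make the atomicity concrete, here is a toy sketch of the situation above. 
The names and the greedy selection are illustrative only; the essential 
assumption is that an aggregated package cannot be split back into its 
component transactions:

```python
# Toy model: pre-aggregated packages are atomic. Two packages that
# share a transaction (P) cannot be merged, because merging would
# require a fresh aggregate signature over the union that no node
# ever produced, and P cannot be included twice.

def select_packages(packages):
    """Greedy miner: take each atomic package only if it does not
    re-include a transaction already confirmed via an earlier one."""
    included, confirmed = [], set()
    for pkg in packages:
        if pkg.isdisjoint(confirmed):
            included.append(pkg)
            confirmed |= pkg
    return included

# Node A saw P and C1; node B saw P and C2; each aggregated before relay.
pkg_a = frozenset({"P", "C1"})
pkg_b = frozenset({"P", "C2"})

# The miner is forced to pick one package and drop the other child.
chosen = select_packages([pkg_a, pkg_b])
```

Whichever package arrives first wins, and the losing child is simply dropped 
unless its standalone transaction is still relayed somewhere.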

I fear this is a very fast way to make mining uncompetitive for anyone who does 
not receive transactions submitted directly by users, making entry into the 
mining business effectively permissioned, and removing the point of having a 
decentralized consensus mechanism in the first place.

-- 
Pieter

-- 
You received this message because you are subscribed to the Google Groups 
"Bitcoin Development Mailing List" group.
To view this discussion visit 
https://groups.google.com/d/msgid/bitcoindev/p8kWp-qhHYIB-nMWGHI5GJ65j2Ve_apGJXG3QByimJrGHKcyrfZII1OG0I40KJMCyeV-HDuhLfg-29S3nfKu1k9cUbvtJ_N5n2x9jmopRxA%3D%40wuille.net.