On Tue, 7 Jan 2025 at 16:59, waxwing/ AdamISZ <[email protected]> wrote:

> What's clear is that this risk is far worse with a static central coordinator
> for all joins rather than the "each new participant coordinates" model. Also
> to correct a common imprecision (so not ofc addressed to you nothingmuch, but
> to the reader): the taker-maker model is *not* incompatible with coordinator
> blinding.
Nor is it even limited to a centralized coordinator (e.g. if instantiated with a threshold issuance scheme). Another misconception is that such a mechanism helps privacy, when all it can do is provide denial of service resistance, potentially without undermining privacy; it does nothing to actually improve privacy directly. The misconception is not accidental, as wabisabi credentials are often portrayed as privacy enhancing.

> 2/ the ability of the coordinator to tag a targeted user by shenanigans with
> the blinding key, roundID etc.
>
> The story you tell on this is interesting. In short, as per the "fundamental
> weakness" paragraph above, it's the nature of these systems that the user is
> anonymous and ephemeral and therefore the only "identity" they have is the
> coin they bring to the join. Given that, attestations being verifiable
> requires blockchain access as the ground truth. For similar things in
> Joinmarket's protocol (and rest assured, we had the same requirement,
> basically), we never had to bat an eye, because we could make calls on the
> utxo set at any time, since we *force* users to use full nodes. But as you
> say, it *should* still be fully possible with various kinds of light client
> ...

Indeed. Wasabi has optional full node support, and yet this check was never implemented. For light clients, various reduced-soundness mitigations are possible that would still make it significantly harder to do this successfully.

> so I am wondering why the people working on the Wasabi project didn't
> consider this a sine-qua-non.

Well, FWIW I did consider it one: it came up repeatedly, and I always assumed that it was supposed to be essential (see the footnote links in the initial email for lots of supporting evidence).
The oldest instance I found of me explicitly mentioning such consistency issues dates back to before the wabisabi design was in place. I would think a company with "zk" in the name, which repeatedly used phrases like "can't be evil" and "trustless" to describe its service, would care, but alas it did not work out that way. This is especially concerning since everyone involved knew that alternative coordinators were very likely in the future.

> Why even bother with blinding if you're not going to give the client a surety
> that the blinding is actually doing anything?

Ostensibly, denial of service protection. If being cynical, DoS protection only for the coordinator and not the users. But even that is empirically unmotivated: the first 3 wasabi protocols had cryptographic flaws in their denial of service protection, yet when DoS attacks finally arrived they exploited a misconfiguration of the Digital Ocean firewall (no datacenter-level firewall was configured), and the recourse was enabling Cloudflare with SSL termination. The simpler explanation is that it's an affinity scam, and regrettably I was tricked and exploited into effectively becoming a rubber stamp of approval, with the intent of deceiving non-technical users into paying the coordination fees. As just one supporting example, note how an audit of the code was announced https://github.com/orgs/WalletWasabi/discussions/7262 but it fails to mention that the audit only covers the protocol security protecting the coordinator against malicious users, and that the non-cryptographic but privacy-sensitive code protecting clients was not audited.

> On reflection, I can see at least one counter-argument: suppose user2 is
> looking at user1's signature on the context of the round, and they are given
> key P for user1's signature and just told "trust me" by the coordinator, and
> they go ahead, because user2 only has a light client and no merkle proofs.
> Well, if the coordinator lies about P, the user2 can find out later that day,
> using a full node or block explorer to check user1's utxo. Now, if the
> coordinator's message is *signed* so as to be non-repudiable, then user2 can
> prove to the world that the coordinator lied. Conditional on that signing, I
> think this counter-argument is strong; in the absence of signing, with
> repudiable messages instead, then I think it's weak.

Yep, publishing such signatures would have been a significant mitigation of the various tagging concerns. I don't remember if something like this was brought up after they decided not to provide clients with the ownership proofs at all in the initial release. Ownership proofs were later included, but only covering the threat model for stateless signers: https://github.com/WalletWasabi/WalletWasabi/pull/8708. Also note how this commit reverts a fix in the original PR, restoring dummy data in lieu of a meaningful commitment to the coordinator address, presumably with this rationale: https://github.com/WalletWasabi/WalletWasabi/issues/5992#issuecomment-1538230320 (i.e. it's already broken and released, so it's too late to fix anything).

> I guess all this comes into particularly sharp focus now that we have various
> different Wasabi coordinators. They should all be assumed to be run by the
> Feds, so to speak, and analyzed from that perspective. (not that that wasn't
> true with only 1, just that it's more true, now).

Yep... The most popular coordinator still in use is described by its operator as "free" and "trustless", an operator who has publicly admitted to being incapable of understanding these issues. As usual, there is a profit motive at play; see liquisabi.com for revenue estimates.

> My gut reaction is to do "permanent key tweaked with context" here, so the
> client could easily verify, based on remembering the permanent key, that the
> correct (hash of date plus round number plus whatever) had been applied.
> But that works in Schnorr family, I don't know if a key tweak can be applied
> to RSA? Perhaps this question is academic, but I want to know how easily this
> could have been fixed in practice. (I don't know why they were using RSA, but
> I could imagine various practical reasons; there were after all known attacks
> on Schnorr blinded signing).

Afaik RSA was just the obvious choice at the time for both (nopara is on the public record admitting he copied the RSA blind signing code from stack overflow... no clue about whirlpool's stated design rationale). In hindsight, this is arguably better than blind Schnorr, since Wagner attack mitigation is rather complex though not impossible (wasabi also had a nonce reuse issue with the blind signing key in the 1st iteration of blind Schnorr, introduced in this PR https://github.com/WalletWasabi/WalletWasabi/pull/1006, and the Wagner attack only became relevant after that).

A tweaked key would indeed work very well for this kind of mitigation, as the untweaked key could even have been hard coded in the client, and the client could locally tweak it with the round ID as the committed data. This should also be possible with blind DH e-cash / privacy pass tokens, and with the wabisabi issuer parameters, which are just an n-tuple of ECC public keys and similarly amenable to TR-style commitments.

> > 2. use of deterministic shuffling in the transaction, ensuring that
> > signatures can only be aggregated in the absence of equivocation (assuming
> > the corresponding Lehmer code has enough bits of entropy)
>
> That's an elegant idea; I presume it depends on tx size being large enough
> (yeah, bits of entropy), but that certainly isn't an issue for the Wa(bi)sabi
> design. Couldn't a similar trick be played with the coordinator's receiving
> address (assuming that wasabi still works like that, with a coordinator fee
> address in the tx)?

Yes. It could be an OP_RETURN of course, and not a very costly one.
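As an aside on the tweaked-key mitigation above, here is a minimal sketch of how such a commitment could work, assuming a Schnorr-family key: the client holds the hard-coded permanent key and derives the expected per-round key itself. All names, the tag string, and the derivation are illustrative assumptions, not Wasabi's actual code or API.

```python
# Hypothetical sketch (not Wasabi code): pay-to-contract style tweak that
# commits a round ID into a long-lived coordinator key, so clients can
# derive the expected per-round blind-signing key from public data alone.
import hashlib

# secp256k1 curve parameters
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def point_add(a, b):
    # affine point addition; None represents the point at infinity
    if a is None:
        return b
    if b is None:
        return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0:
        return None
    if a == b:
        lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, P) % P
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (lam * lam - a[0] - b[0]) % P
    return (x, (lam * (a[0] - x) - a[1]) % P)

def point_mul(k, pt):
    # double-and-add scalar multiplication
    r = None
    while k:
        if k & 1:
            r = point_add(r, pt)
        pt = point_add(pt, pt)
        k >>= 1
    return r

def round_tweak(perm_pub, round_id):
    # t = H(tag || tag || P.x || round_id), a BIP340-style tagged hash
    # (the tag string here is an assumption, purely illustrative)
    tag = hashlib.sha256(b"coordinator/round-commitment").digest()
    msg = tag + tag + perm_pub[0].to_bytes(32, "big") + round_id
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

# coordinator side: per-round signing key is d + t
d = 0xC0FFEE1234567890C0FFEE1234567890C0FFEE1234567890C0FFEE1234567890 % N
perm_pub = point_mul(d, G)                      # hard-coded in clients
round_id = hashlib.sha256(b"round 42").digest()
t = round_tweak(perm_pub, round_id)
round_priv = (d + t) % N

# client side: derive the expected round key from public data only, and
# reject blind signatures made under any other key
expected = point_add(perm_pub, point_mul(t, G))
assert expected == point_mul(round_priv, G)
```

A production version would additionally need x-only parity handling per BIP340; the point here is only that equivocating on the round key would require breaking the tagged hash, which, as the quoted text notes, has no obvious analogue for RSA keys.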
It could be a pay to contract, if the coordinator is willing to take on the risk of losing these transcripts, though if they are published that should not be a concern, just complexity in the coordinator's wallet. If the coordinator consolidates any inputs, it could be sign to contract, eliminating that risk. That said, coordinator fee support has been removed recently, partly because the fees were never enforced using the anonymous credential mechanism and client-side determination of the effective values of inputs after deducting such fees, which has led to abusive coordinators siphoning user funds.

Successfully equivocating transcripts reduces to a multi-collision attack. For the easiest case, that of a 2-collision to single out exactly one target user, two transcripts would need to be found such that the hashes encoded in the order collide. Even if n-1 users are honest and the parameters were fully validated, this is still not a 2nd preimage attack, since nothing prevents a malicious coordinator from contributing the last input, and grinding ownership proofs on it to collide with the transcript observed by the targeted user. log2(40!) is ~159, just shy of standard NIST recommendations for collision resistance. Note that this is for one list, but inputs and outputs are two separate ones, so with log2(24!) ~= 79 per list, the combined ordering would have cryptographic soundness if there are at least 25 inputs and 25 outputs.

The main benefit of this approach is that it saves a round trip: all clients can just sort the transaction locally and contribute signatures, knowing the transaction will go through only if they all agree. But this round-trip elimination comes with the liability of divulging to the potentially malicious coordinator what a client had intended to do.
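To make the mechanism concrete, here is a small sketch of the deterministic-shuffling idea, with the entropy figures above checked numerically. All names are illustrative assumptions, not Wasabi/GingerWallet code: the ordering is unranked from a hash of the transcript via its Lehmer code, so equivocating transcripts requires colliding hashes modulo n!.

```python
# Sketch: derive the output ordering from a hash of the round transcript,
# so the final permutation itself commits to what every client observed.
import hashlib
import math

def permutation_from_digest(digest: bytes, n: int):
    """Unrank (digest mod n!) into a permutation via its Lehmer code."""
    rank = int.from_bytes(digest, "big") % math.factorial(n)
    remaining = list(range(n))
    perm = []
    for i in range(n, 0, -1):
        # idx is the next Lehmer-code digit, in base (i-1)!
        idx, rank = divmod(rank, math.factorial(i - 1))
        perm.append(remaining.pop(idx))
    return perm

# each client hashes the transcript it observed and orders the outputs
# (represented here by indices) accordingly before contributing signatures
transcript = b"round id || all registered inputs || all registered outputs"
order = permutation_from_digest(hashlib.sha256(transcript).digest(), 24)
assert sorted(order) == list(range(24))

# entropy available in the ordering of one n-element list, matching the
# figures in the text: ~159 bits for 40 items, ~79 bits for 24
assert round(math.log2(math.factorial(40))) == 159
assert round(math.log2(math.factorial(24))) == 79
```

Since inputs and outputs are two independently ordered lists, their entropies add, which is where the "25 inputs and 25 outputs" threshold in the text comes from.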
That said, partitioning all clients is an n-collision, so harder to do than just finding a 2-collision, and doing "just" 2^80 work in between learning the final output set and signature aggregation is very generous to the adversary. But either way, the assumption was that there'd be on the order of 100 inputs as a starting condition (in practice that figure is even higher), so much so that even if the current sorting by amount is retained, just shuffling equivalent-valued outputs would suffice for Lehmer coding a hash image with security level ~2^160 (e.g. ~40 equivalence classes of 4 outputs each).

> > it seems to me that if it was controlled by a rational attacker it would
> > not use the overt key tagging attack when covert ways of deanonymizing are
> > available and just as effective.

Absolutely, and also note that using any kind of commit-to-the-transcript-in-the-transaction approach does not guarantee the coordinator will be caught unless users independently coordinate to check for consistency after the fact, nor does it prevent any privacy leaks arising from coin selection, or the choice of outputs.

> It seems I missed something, here. What covert attacks are possible except
> for Sybilling, excluding other users from the round? - which is only at best
> semi-covert. Maybe stuff like timing and tor?

You didn't miss anything; I only described the long-standing active attacker concern (which I had described before the release) because of the recent "vulnerability" and subsequent "fix": GingerWallet's round ID hash preimage validation was inexplicably lost in a refactor, and then restored. They claimed that this active adversary concern was a new attack, and that it was fully mitigated, neither of which is true. Usually when I criticized Wasabi in the past (though not here), it was over the passive deanonymization concerns, which IMO are much more severe than the active ones, for precisely the reasoning you gave above.

1.
The "optimistic" (ugh) approach to coin selection (first select coins, then try to register, with a high probability that not all can be registered due to the abrupt cutoff, then figure out how to decompose the resulting balance), plus some ad-hoc tweaks that de-randomize it in order to deal with some of the fragmentation issues that poor amount decomposition resulted in, is a major concern, especially since there's no accounting whatsoever for history intersection. If the adversary has some fore-knowledge of a wallet's cluster, then this informs the adversary of likely output value choices, and subsequent coinjoins or payment transactions can confirm and further undermine this through history intersection.

2. Initially the tor circuit management was highly problematic. More recently it has been partly mitigated, but there are still potential timing leaks, especially considering that the guard node will be fixed for a given client's circuits, reducing total variance across circuits. Before they switched to a clearnet coordinator with Cloudflare + SSL termination as a trusted middleman this was more severe, due to the 2x factor in the number of circuits required for hidden services.

3. The use of HTTP and JSON at the protocol level, neither of which is sufficiently rigid to be canonical, and the reliance on *varying* 3rd party implementations of both (i.e. https://github.com/WalletWasabi/WalletWasabi/pull/13339) between different versions of the client, presents another set of semantic leaks...

These independent (i.e. potentially compounding) leaks can additionally be covertly amplified by delaying or dropping coordinator responses (this in particular exploits the lack of request timing randomization during reissuance or output registration requests: by delaying responses to credential reissuance requests, if only one client is actually able to register an output when the first output registration is received, that's a deterministic link, for example).
If such leaks are insufficient for the adversary to conclude that the a posteriori outcome is deanonymizable, then it can of course just disrupt the session, forcing a blame round with plausible deniability. There are discussions of some of these concerns e.g. here https://github.com/WalletWasabi/WabiSabi/issues/83

Part of what's so frustrating about my experience is that, in addition to my criticisms seeming like a gish gallop simply because there are so many flaws, the "rebuttals" against these privacy leaks were always in the spirit of "oh yeah? so why don't you deanonymize this transaction? oh you can't?", which is morally bankrupt given the profit motive and vulnerable and misinformed users' lives potentially being at risk. More generally, it is well known e.g. from the mixnet literature that attackers generally gain an exponential advantage from every marginal bit of leaked information, never mind the fact that these concerns are specifically about a malicious coordinator, not a 3rd party observer (though for the latter the concerns about balance decomposition and coin selection are still relevant FWIW), which is why in privacy enhancing technologies the burden of proof usually rests with the defender, not the attacker.

--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/CAAQdECAg5W4a9_386FeGWBZnv7zje4gmXtAMcC8scWq_o2dEwg%40mail.gmail.com.
