> On 28 May 2020, at 09:21, Andrew Tunnell-Jones via dns-operations
> <[email protected]> wrote:
>
> From: Andrew Tunnell-Jones <[email protected]>
> Subject: Re: [dns-operations] At least 3 CloudFlare DNS-hosted domains with
> oddball TLSA lookup ServFail
> Date: 28 May 2020 at 09:21:00 AEST
> To: Christian Elmerot <[email protected]>
> Cc: [email protected]
>
> On Thu, May 28, 2020 at 3:18 AM Christian Elmerot <[email protected]>
> wrote:
>>
>> On 26/05/2020 12:00, Viktor Dukhovni wrote:
>>> On Thu, Apr 23, 2020 at 08:46:02AM -0400, Shumon Huque wrote:
>>>
>>>>> Great, thanks. Not yet resolved FWIW:
>>>>>
>>>>> http://dnssec-stats.ant.isi.edu/~viktor/dnsviz/cloudflare.com.html
>>>>
>>>> I didn't see the reason for the SERVFAIL in the dnsviz output, so I ran
>>>> my own debugging tool on these domains. All the CF servers for the zone
>>>> are unresponsive to DNS queries for the TLSA record at those names. I
>>>> assume that's why we get SERVFAIL. They respond fine to other queries
>>>> (such as apex SOA, A, etc.):
>>>
>>> I've rescanned the three domains, still broken (same URL, updated
>>> content), and yes, silence.
>>>
>>> @alla.ns.cloudflare.com.[173.245.58.62]
>>> ; <<>> DiG 9.16.2 <<>> +noidnout +nosearch +dnssec +noall +cmd +comment
>>> +qu +ans +auth +nocl +nottl +nosplit +norecur -t tlsa
>>> _25._tcp.mx01.mx-hosting.ch @173.245.58.62
>>> ;; connection timed out; no servers could be reached
>>>
>>> @guss.ns.cloudflare.com.[173.245.59.172]
>>> ; <<>> DiG 9.16.2 <<>> +noidnout +nosearch +dnssec +noall +cmd +comment
>>> +qu +ans +auth +nocl +nottl +nosplit +norecur -t tlsa
>>> _25._tcp.mx01.mx-hosting.ch @173.245.59.172
>>> ;; connection timed out; no servers could be reached
>>>
>>> Unclear why the TLSA queries are dropped, and by whom (is Cloudflare
>>> just proxying breakage at the customer's DNS?)
>>
>> I've looked into the error on our side, and the reason for those
>> SERVFAILs is malformed record content.
>> This is likely due to an
>> older version of our API not performing the correct validations for TLSA
>> records, and it is unfortunate the zone owners never checked the output.
>
> The web interface still allows creation of TLSA records with invalid
> data. As well, a "Certificate" field containing whitespace and hex is
> accepted by the web interface but results in the same failure. This is
> likely to catch people out, as whitespace is part of the presentation
> format of that field. The placeholder text for that field is also PEM
> rather than hex.
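For what it's worth, a tolerant parser for the certificate-association
field is a few lines of work: RFC 6698's presentation format allows the
hex data to be broken into whitespace-separated groups, so the parser
should compact it before validating. A minimal sketch in Python (the
function name and example digest are mine, not anyone's actual code):

```python
import binascii

def parse_tlsa_cert_data(text: str) -> bytes:
    """Parse the TLSA certificate-association field from presentation
    format, where the hex may be split into whitespace-separated groups
    (as dig prints it), per RFC 6698 section 2.2."""
    compact = "".join(text.split())  # drop all internal whitespace
    if len(compact) % 2 != 0:
        raise ValueError("odd number of hex digits in TLSA data")
    try:
        return binascii.unhexlify(compact)
    except binascii.Error as exc:
        raise ValueError(f"invalid hex in TLSA data: {exc}") from None

# A SHA-256 association (matching type 1) split across whitespace,
# the way dig commonly wraps it, parses to the expected 32 bytes:
data = parse_tlsa_cert_data(
    "d2abde24 0d7cd3ee 6b4b28c5 4df034b9"
    " 7983a1d1 6e8a410e 4561cb10 6618e971"
)
assert len(data) == 32
```

Rejecting the whitespace-laden form instead, as the web interface
apparently does, just penalises users for pasting dig output verbatim.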
This is reminiscent of DS parsing stupidities by web tools. DS also allows
whitespace in the hex encoding, and the introduction of type 2 DS records
exposed broken parsers. We ended up modifying dnssec-dsfromkey to omit the
space just so that cut-and-paste didn't include whitespace.

% dig dnskey . +noall +answer | dnssec-dsfromkey -f - .
. IN DS 20326 8 2 E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC683457104237C7F8EC8D
%

Not that DNS is alone in this UX idiocy. Lots of phone number parsers can't
handle whitespace or a plus (+). Similarly for credit card number parsers:
the numbers on credit cards are broken into groups precisely to PREVENT
user input errors, and UX developers defeat that by forcing space-less
entry.

> _______________________________________________
> dns-operations mailing list
> [email protected]
> https://lists.dns-oarc.net/mailman/listinfo/dns-operations

--
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742                 INTERNET: [email protected]
