On Fri, Nov 14, 2025 at 06:53:41PM +0100, Petr Menšík wrote:
> I agree that for the protocol itself nothing else is needed. But for
> tools presenting TXT records to humans, it does matter what encoding
> it is.
When that is the case, a more robust approach is to publish the desired
(HTML) text via a suitable HTTP(S) server and place an ASCII URL in the
RDATA. As appropriate, the HTTP headers and/or markup can describe the
language and character encoding of the content. If some application
desperately wants UTF-8 in DNS RDATA, TXT records are not, in my view,
the best vehicle for that.

> People also use different character sets with letters not present in
> US-ASCII. TXT records are unstructured and I think they should be
> easy for people to process. Some languages use Latin letters with
> some additions, like my native Czech. Other languages use a
> completely different alphabet. Current command-line tools escape
> UTF-8 into the \DDD form, which is definitely not easy for a human to
> read. I think it should be presented as UTF-8 text whenever it is
> valid UTF-8, and escaped only when it is not.

TXT records are a bit of a misnomer in that, as already noted in this
thread, the payload cannot be assumed to be "text". They are not
necessarily intended for presentation to a human reader.

> I created a bind9 feature request:
> https://gitlab.isc.org/isc-projects/bind9/-/issues/5643

If I were making the decision at ISC, I'd decline to adopt the proposed
change.

> But I think it should be clarified how this should be presented.
> DNS-SD can store quite a lot of information in those records. I think
> it makes sense to allow native speakers to insert text descriptions
> in whatever language is easiest for them to read. Current utilities
> do not make that simple.

All sorts of fun with BIDI, control characters, ... (a rough sketch of
the kinds of checks involved follows below the signature).

--
    Viktor.  🇺🇦 Слава Україні!
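
P.S. A rough sketch, for illustration only, of the two points above:
consuming a URL published in TXT RDATA while honouring the charset
declared in the HTTP headers, and deciding when raw RDATA is safe to
show as UTF-8 rather than in \DDD escaped form. Plain Python, standard
library only; the URL, record content, and function names are made up,
and none of this reflects what dig or BIND actually do.

    #!/usr/bin/env python3
    # Rough sketch only (made-up names and URL): how a presentation tool
    # *might* decide whether TXT RDATA is safe to show verbatim, and how
    # the URL-in-RDATA idea could be consumed.  Standard library only.
    import unicodedata
    import urllib.request


    def fetch_description(url: str) -> str:
        """Fetch human-readable text published at an HTTP(S) URL taken
        from TXT RDATA, honouring the charset declared in the response
        headers and falling back to UTF-8 when none is declared."""
        with urllib.request.urlopen(url) as resp:
            charset = resp.headers.get_content_charset() or "utf-8"
            return resp.read().decode(charset, errors="replace")


    def escape_master_file(rdata: bytes) -> str:
        """Roughly the RFC 1035 presentation form: printable ASCII kept,
        '"' and backslash escaped with a backslash, all other octets
        rendered as \\DDD decimal escapes."""
        out = []
        for b in rdata:
            if b in (0x22, 0x5C):              # '"' and '\'
                out.append("\\" + chr(b))
            elif 0x20 <= b <= 0x7E:            # printable ASCII
                out.append(chr(b))
            else:
                out.append("\\%03d" % b)       # e.g. 0xC3 -> \195
        return "".join(out)


    def present_txt(rdata: bytes) -> str:
        """Show the RDATA as text only if it decodes cleanly as UTF-8
        AND contains no category-C code points (controls and format
        characters, which include the BIDI overrides); otherwise escape
        it."""
        try:
            text = rdata.decode("utf-8")
        except UnicodeDecodeError:
            return escape_master_file(rdata)
        if any(unicodedata.category(ch).startswith("C") for ch in text):
            return escape_master_file(rdata)
        return text


    if __name__ == "__main__":
        print(present_txt("Ahoj, světe".encode("utf-8")))   # shown as UTF-8
        print(present_txt(b"\x01binary\xff"))               # falls back to \DDD
        # Hypothetical URL; a real record would carry whatever the zone
        # owner actually published:
        # print(fetch_description("https://www.example.com/description.txt"))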
