This is mostly off-topic, and relates primarily to one of my pet
peeves regarding everything cryptography-oriented on the Internet
today.  I also know that this is not the correct venue to try to make
any reforms.  However, since Mr. Ross has stated his view on the
topic, I feel that I must state mine.

My view is actually one more of administrative convenience.  One
reason that everything on the Internet has worked well up to now is
that the admins can actually look at the data streams coming in and
out, figure out what's going on, and see which piece of data is being
obtained from where.  (Look at SMTP, or ESMTP, or even the diagnostic
value of HTTP headers, for examples.)

At this point, requiring additional tools (such as OpenSSL, libpkix,
NSS, or any others) to figure out what a given data stream is simply
obscures the ability to troubleshoot.  It's still possible to identify
an X.509 certificate by standard ASCII printable-character scanning,
since the Subject's attribute values -- the strings a decoder renders
after S=, OU=, O=, C=, and so on -- are encoded as plain printable
strings.  It's not as easily possible to determine that a given data
stream is in actuality a CRL.
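
For troubleshooting purposes, the scan I'm describing is roughly what
'strings' does.  A minimal sketch in Python (the function name and the
4-character threshold are my own choices):

    import re, sys

    def printable_runs(blob, minlen=4):
        # pull out runs of printable ASCII, the way 'strings' would
        return re.findall(rb"[\x20-\x7e]{%d,}" % minlen, blob)

    blob = open(sys.argv[1], "rb").read()
    for run in printable_runs(blob):
        print(run.decode("ascii"))

Run against a DER certificate, this coughs up the Subject/Issuer
attribute values (country codes, organization names); run against a
DER CRL, you get far less to go on, because its body is mostly serial
numbers and revocation dates.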

If you look at the DER and BER encodings, you will see that they are
designed to keep the encoded form of any given data structure small
(PER pushes this even further).  If you look at the definition of XER
(X.693), it includes the following paragraph:

[quoted from X.693 (12/2001) section A.3]
The length of this encoding in BASIC-XER is 653 octets ignoring all
"white-space".  For comparison, the same PersonnelRecord value encoded
with the UNALIGNED variant of PER (see ITU-T Rec. X.691 | ISO/IEC
8825-2) is 84 octets, with the ALIGNED variant of PER it is 94 octets,
with BER (see ITU-T Rec. X.690 | ISO/IEC 8825-1) using the definite
length form it is a minimum of 136 octets, and with BER using the
indefinite length form it is a minimum of 161 octets.
[end quote]
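
To make the size difference concrete (my own toy example, not taken
from X.693): in BER/DER every value is one tag octet, a length, and
the content octets, so a small integer costs three octets where its
XML spelling costs several times that.  In Python terms (the XER
element name here is my own guess):

    # INTEGER 5 in DER: tag 0x02 (INTEGER), length 1, content 0x05
    der = bytes([0x02, 0x01, 0x05])
    xer = b"<value>5</value>"   # a plausible XER spelling of the same value
    print(len(der), len(xer))   # 3 octets vs. 16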

I understand that memory-constrained devices would do well to have
less data to process; however, I don't particularly think that
obscuring the data sent by the CA is the way to go.  If necessary,
code the gateway that the memory-constrained devices use (I'm thinking
primarily of mobile phones here, though I know that I'm thereby
ignoring other classes of memory-constrained devices that would not
necessarily use a gateway) to de-base64 the data, so that at the very
least the type of data can be identified without having to run it
through 'file' with its magic number structures -- especially since I
haven't seen a version of the 'magic' file that can properly identify
DER-encoded data as such.
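
The gateway-side normalization I have in mind is small.  A sketch in
Python (the regex, function name, and return convention are all mine):

    import base64, re

    PEM_RE = re.compile(
        rb"-----BEGIN ([A-Z0-9 ]+)-----(.*?)-----END \1-----", re.S)

    def to_der(blob):
        # Return (label, der_bytes); label is None if already binary.
        m = PEM_RE.search(blob)
        if m:
            label = m.group(1).decode("ascii")   # e.g. "X509 CRL"
            return label, base64.b64decode(m.group(2))
        return None, blob

The PEM label even tells you outright what the data claims to be,
which is exactly the diagnostic information that raw DER hides.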

As to the current case, the CA in question is not generating improper
certificates.  It is generating proper CRLs, and it is simply encoding
and transmitting them as PEM-encoded DER CRL structures, where
RFC 5280 (which, by the way, I've been repeatedly told that NSS does
*NOT* fully comply with) states that they must be sent DER-encoded.

I have asked why NSS insists on DER-encoded CRLs and throws ffffe009
(which, if I read the NSS error tables correctly, is
SEC_ERROR_BAD_DER) when the received data is PEM-encoded.  I have not
received a specific answer to my query: is "-----BEGIN X509 CRL-----"
a valid DER sequence?  If it is not, I would ask that the received
data be run through a base64 decoder and processing reattempted before
the error is thrown.
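
In code terms, the retry I'm asking for amounts to this (a sketch:
decode_crl() and BadDERError below are stand-ins for NSS's real DER
CRL decoder and its ffffe009 failure, not actual NSS entry points):

    import base64

    class BadDERError(Exception):
        pass

    def decode_crl(der):
        # stand-in: a real DER CertificateList always begins with a
        # SEQUENCE tag (0x30)
        if not der.startswith(b"\x30"):
            raise BadDERError("not DER")
        return der                      # real parsing would happen here

    def import_crl(blob):
        try:
            return decode_crl(blob)     # binary DER path, exactly as today
        except BadDERError:
            head = b"-----BEGIN X509 CRL-----"
            tail = b"-----END X509 CRL-----"
            if head in blob and tail in blob:
                b64 = blob.split(head, 1)[1].split(tail, 1)[0]
                return decode_crl(base64.b64decode(b64))
            raise                       # genuinely bad data still fails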

(Honestly, having NSS base64-decode anything it's handed that fails on
initial import would go a LONG way toward increasing the usability of
X.509 structures within common-use cryptography.  I know certain
software requires PEM-encoded certificates for import, while the
software we're discussing now requires non-PEM-encoded certificates.
This necessitates providing multiple links to multiple formats of a
root certificate, for example, and relying on the user to do the
bookkeeping that the computer is much better suited for.  The user
should be involved at the point of deciding whether to trust and what
to trust -- not in working out how to get the data into the software
before the trust decision can be made in the first place.)

-Kyle H

2009/2/25 David E. Ross <nob...@nowhere.not>:
> On 2/25/2009 2:04 PM, Kyle Hamilton wrote:
>> Postel's first rule of interoperability: be liberal in what you
>> accept, be conservative in what you send.
>>
>> Which RFC requires which?  (I had read somewhere, for example, that
>> wildcard certificates must be handled by HTTP over TLS servers in a
>> particular way -- it turns out that it wasn't part of PKIX, as I had
>> thought, but rather an Informational RFC regarding "HTTP over TLS".)
>>
>> -Kyle H
>>
>> On Wed, Feb 25, 2009 at 1:57 PM, Nelson B Bolyard <nel...@bolyard.me> wrote:
>>> Kyle Hamilton wrote, On 2009-02-25 13:56:
>>>> This is going to sound rather stupid of me, but I'm going to ask this 
>>>> anyway:
>>>> Why is Firefox insisting on a specific encoding of the data, rather
>>>> than being flexible to alternate, unconfusable, common encodings?
>>> The RFCs require conforming CAs to send binary DER CRLs.
>
> In the case of secure browsing at authenticated Web sites, I want to be
> conservative in what I accept.  If a CA is generating certificates that
> do not comply with accepted RFCs, what else is that CA doing wrong?  In
> other words, if a CA sends CRLs that are not binary DER, that should be
> a red flag that the CA might not be trustworthy in other respects.
>
> --
> David E. Ross
> <http://www.rossde.com/>
>
