> On Tue, Nov 13, 2018 at 11:26 AM Jakob Bohm via dev-security-policy <
> [email protected]> wrote:
>
>> Furthermore the start of the thread was off-list.  Also neither I, nor
>> some other participants have access to the audit reports etc. in CCADB.
>>
>
> Sure you do. That information is publicly available through
> https://wiki.mozilla.org/CA/Included_Certificates
>
>
>> This basic combination of noise and missing data is why I asked for a
>> one-stop overview of your complaints against TUVIT, similar to the lists
>> compiled for previous situations with multiple complaints against one
>> party.
>>
>
> Those are the output of these discussions, not the input or structure to
> them. There are certainly broader complaints, but, as you'll note, my focus
> has been on attempting to satisfactorily resolve the current set of issues.
> Several times you've attempted to move to the meta discussion, while I've
> tried to refocus on the specific lack of resolution for the initially
> identified issues. The reference to the other issues is precisely because
> the explanation and resolution of *these* issues can inform, or be compared
> with, the *past* issues, which would be used to build the list you seem to
> desire.
>
>
>> "Misconfiguration and misapplication of the relevant rules..." is so
>> broad as to describe the majority of CA failures without giving any
>> useful specifics to assess the situation.  It's like saying someone's
>> crime was to "violate and break the relevant laws" (which would apply to
>> anything from jaywalking to mass murder).
>>
>
> While sympathetic to your frustration, I think that's a rather extreme
> interpretation. For example, CAs seem to believe that the majority of their
> failures are "human error" and that human error is corrected by "additional
> training". Perhaps you would like to propose better wording to distinguish
> between configuration issues guaranteed to produce the wrong result, 100% of
> the time - in which a certificate profile is functionally unable to meet the
> stated configuration - and those which are tied to, for example, data
> validation issues (or the lack thereof). My intent was to capture the
> former, while acknowledging that the latter is something that is primarily
> accounted for through design review, sampling, and testing.
>
>
>> It would also be useful to quantify the word substantial: Of all the
>> certificates issued by the audited CA organization, how large a
>> percentage suffered from each flaw, how many from none.  This is a key
>> number when assessing if statistical sampling by the auditor should have
>> caught an issue.  It is also a key number when assessing the level of
>> incompetence of the CA (but the CA is not the subject of this thread).
>>
>
> I already responded to this previously, and again in my more recent
> messages. In the issue that started this thread, we can see it's 100%. In
> the past issuance examples, we can see that it was 100% of certificates
> going through certain systems. While that is less than 100% of total
> volume, sampling methodology must also consider variances and other
> factors. For example, if a CA issues DV, OV, and EV, a sampling methodology
> would approach each profile distinctly for sample selection, rather than
> sampling overall issuance. A sampling method for a CA may involve 100+ such
> samples (each representing a percentage), based on the design review that
> identifies the variations and permutations relevant to the service provided.
> Similarly, the 3% sampling figure applies primarily to CA self-audits.
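> To make the sampling point concrete, here's a hedged sketch - a toy model
> with made-up volumes, not the audit scheme's actual procedure - of why
> per-profile (stratified) sampling is guaranteed to surface a profile that is
> 100% misissued, while a flat sample of overall issuance can easily miss it:

```python
# Toy model (illustrative only): certificates labeled by profile, with a small,
# fully-misissued profile hiding inside a much larger overall population.
import random

random.seed(0)

# Hypothetical issuance volumes: the "EV" profile goes through a broken
# system, so every one of its certificates is misissued.
population = ([("DV", False)] * 10_000
              + [("OV", False)] * 9_500
              + [("EV", True)] * 50)

def flat_sample(certs, fraction=0.03):
    """Sample a fraction of overall issuance, ignoring profiles."""
    return random.sample(certs, int(len(certs) * fraction))

def stratified_sample(certs, per_stratum=3):
    """Sample each profile (stratum) separately, so none can be skipped."""
    strata = {}
    for profile, misissued in certs:
        strata.setdefault(profile, []).append((profile, misissued))
    picks = []
    for group in strata.values():
        picks.extend(random.sample(group, min(per_stratum, len(group))))
    return picks

# The stratified sample always includes the broken profile, and since 100%
# of that profile's certificates are misissued, the problem must surface.
strat = stratified_sample(population)
assert any(misissued for profile, misissued in strat if profile == "EV")
```

> A flat 3% sample, by contrast, draws each of the 50 EV certificates with
> only ~3% probability, so missing all of them is quite plausible - roughly
> the point about 100% misissuance within a system being compatible with a
> clean overall sample.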
>
> This is where the initial request for the discussion about methodology - a
> discussion about how a CAB can miss 100% of certificates being misissued -
> is relevant. And, as of yet, unaddressed.
>
>> Issue U1 (Qc-statement misencoding) apparently affected all certificates
>> from one issuing CA, and should thus have been caught by sampling by the
>> auditor.  The auditor has (according to earlier posts) admitted that the
>> bug was present in the sampled certificates from that issuing CA, but
>> that this was overlooked because that particular extension was not one
>> they had specific experience looking at.  Once the problem was pointed
>> out the auditor looked at the previously collected evidence and
>> confirmed the problem by checking that detail from first principles
>> (similar to software developer hand-executing a function with pen and
>> paper to confirm a bug).
>>
>
> I don't believe that is a correct summary. The auditor reported things
> were correct - i.e. no bug - and only after being pushed further, with a
> very clear statement that there was a bug, did the auditor confirm that, oh
> yes, there was a bug, they had just overlooked it. Now, I can understand
> that the favorable reading for the auditor is simply that they were busy
> and on the road and favoring expediency over correctness - but we've seen
> CAs using this same reasoning for years. Multiple times now, CAs have faced
> serious misissuances, confidently and repeatedly stated they've identified
> all of them, and then been presented with an example certificate, not
> identified by the CA, that demonstrates the exact problem. Do you disagree
> that auditors should be aware of how such responses are perceived, and of
> the harm to trust? Do you believe auditors should be held to a different
> standard?
>
>
>> S10. Then there is the issue of whether TUVIT was right or wrong in
>>   accepting a slow phase out of the certificates affected by U1.  This
>>   involves both the principled issue of whether there should be zero
>>   tolerance for incorrect certificates, the practical issue of how much
>>   harm this specific standards violation can cause, and how much time
>>   should be allowed for an orderly replacement process.  Multiple months
>>   seems quite a long time; 1 day quite a short time.
>>
>
> I've snipped the majority of your statements, mostly because I don't find
> them correct or helpful framing. To the extent I'm singling out specific
> things, it's because they are particularly egregious. I think Wayne has
> already responded in a way that conflicts with your statement of S10; you
> pose several extremes or absolutes, but that's not the issue. If we take a
> step back, the issue here is that TUVIT has taken the view that the ETSI
> specification supersedes the requirements of the Baseline Requirements;
> where the BRs are quite prescriptive in their requirements (24 hours to 5
> days), TUVIT has taken the position that any period not exceeding three
> months is acceptable. This, combined with the lack of reporting - which is
> not supported, in practice, by other CABs - creates a situation where, for
> audits conducted by TUVIT, there is zero community assurance that the
> provisions of BR 4.9.1.1 have been followed.
>
>
>> It is of course the purpose of any audit scheme to check for the absence
>> of irregularities, and to report if any were found.
>
>
> Except that's not the point of the ETSI audit, which is at least why some
> discussion of the scheme is relevant. The "report if any were found" is
> functionally absent. The 'reporting' is done based on whether or not the
> certificate was issued, and the certificate is issued provided that any
> irregularities were resolved, to the auditor's satisfaction, within 90
> days. This is something 'unique' to ETSI audits, and not part of the
> underlying ISO/IEC 17065.
>
> Other ETSI-based auditors have recognized this gap, and have thus ensured
> that they report on irregularities. That shows how they can go beyond the
> bare minimum required by ETSI and instead meet the expectations of the
> browsers consuming the reports.
>
>
>> But it is quite
>> rare for the audit to essentially redo every piece of administrative
>> work done by the audited company.
>>
>
> It's unclear what your intended remark is. During the sampling process,
> there is indeed a cross-check of all the administrative work - ensuring,
> with detailed analysis, that there is sufficient evidence that all of the
> controls that exist to ensure the correct functioning of that
> administrative work were followed.
>
>> The debate in bug #1391074 about the template used for ECDSA
>> certificates is a good example.  According to the bug, the ECDSA
>> certificate profile/template was correct, but some piece of software
>> mishandled approved ECDSA certificate requests and used the RSA
>> certificate profile/template, for at least some of the issued
>> certificates.  An incorrect ECDSA profile/template saying to set the
>> KeyEncryption bit should have been spotted by a configuration audit and
>> review (by TUVIT).  But code bugs are notoriously harder to spot.
>>
>
> I've got no idea where you got that summary from, but that's certainly not
> consistent with 3.3 of
> https://bug1391074.bmoattachments.org/attachment.cgi?id=8915934
>
> An EC profile/template was not configured. All certificates, regardless of
> key type, were configured to use the same profile.
>
> It is expected of all CAs that profiles are distinct per key type, not
> least because of the necessity to ensure both that the input keys are
> appropriate (e.g. of the correct strength) and that the output certificate
> is correct per RFC 5280.
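> As a hedged sketch of that expectation - hypothetical profile names and a
> made-up lint helper, not T-Systems' or any CA's actual configuration -
> profile selection should be keyed on the key type, failing closed when no
> matching profile exists, with a pre-issuance lint rejecting RSA-only
> keyUsage bits (such as keyEncipherment, the RSA key-transport bit from RFC
> 5280, section 4.2.1.3) in an EC profile:

```python
# Hypothetical profile table - illustrative only. Real CA software would carry
# far more fields (validity, EKUs, policy OIDs, key-strength checks, ...).
RSA_PROFILE = {"keyUsage": {"digitalSignature", "keyEncipherment"}}
EC_PROFILE = {"keyUsage": {"digitalSignature"}}

PROFILES = {"rsa": RSA_PROFILE, "ec": EC_PROFILE}

def select_profile(key_type: str) -> dict:
    """Pick the profile matching the key type; never fall back to another."""
    if key_type not in PROFILES:
        # Failing closed is the point: no EC profile configured should mean
        # no EC issuance, not silent reuse of the RSA profile.
        raise ValueError(f"no certificate profile configured for {key_type!r}")
    return PROFILES[key_type]

def lint_profile(key_type: str, profile: dict) -> list:
    """Flag keyUsage bits that contradict the key type (cf. RFC 5280)."""
    problems = []
    if key_type == "ec" and "keyEncipherment" in profile["keyUsage"]:
        problems.append("EC certificate asserts keyEncipherment (RSA key transport)")
    return problems

# The failure mode described above: every key type routed to one profile.
assert lint_profile("ec", RSA_PROFILE) == [
    "EC certificate asserts keyEncipherment (RSA key transport)"]
assert lint_profile("ec", select_profile("ec")) == []
```

> The point of the sketch is simply that both halves - a distinct profile per
> key type, and a lint over the resulting configuration - are plainly visible
> to design review, which is where such a 100% failure should be caught.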
>
>
>> > In the context of ETSI, each of these configuration changes -
>> particularly
>> > once qualified - undergoes some review; whether after the fact
>> > (pre-qualification) or prior to such change.
>>
>> This is why it is interesting to look at each issue to determine if it
>> was subject to such review by or notification to TUVIT.
>>
>
> I'm not sure what you're saying. The model of certification requires this.
> Is your framing of the question whether or not T-Systems notified TUVIT of
> these issues? If they didn't, they were contractually negligent under the
> ETSI model, and TUVIT should, as part of their explanation, indicate how
> they addressed those failures. I'm taking Occam's Razor here, and presuming
> that TUVIT was notified, as they were required to be, and that the failure
> rests with TUVIT.
>
>
>> > Similarly, misissuance
>> > involves a degree of notification to the CAB.
>>
>> Only once known (e.g. around the time of the bug reports).  Because I
>> don't think you expect a CA to notify the auditor that it is about to
>> misissue, and then proceed to actually misissue instead of stopping
>> itself.
>>
>
> Yes, for all configuration changes post-misissuance, the CAB would have
> been notified. One would thus reasonably expect that, as these
> notifications increase, the CAB would take a more careful look at both
> patterns of problems and specific root causes, to ensure that future issues
> are preemptively identified rather than retroactively remediated. This is
> why 100% misissuance is particularly concerning *given* past issues.
>
> Yet the substance of the discussion - the "current" issue, if you will -
> can be discussed without the lengthy debate and dissection taking place in
> your message here. That's because TUVIT can respond with regard to the
> methodology and approach they used. If that methodology *didn't* consider
> the past incidents, then we can go through that retroactive dissection. If
> it did, however, then we can allow TUVIT to respond as to what they did and
> what impact it had.
>
> As such, I see limited value in continuing this belabored dissection in
> the present conversation - we should instead focus on the methodology and
> approach used for the most recent issue and, based on that, analyze
> retroactively, to evaluate whether or not adequate assurance is being
> provided.
>
>
>>
>> > As such, it is entirely
>> > reasonable to expect a degree of supervision, as that is the value of
>> the
>> > certification scheme. All of this information would have been available
>> at
>> > the time of configuring qualified certificates, including the pattern of
>> > issues existing when configuring profiles and templates.
>>
>> Should have been available doesn't mean it was available.  There is
>> always a limit to the depth of audits, and thus we need to assess if
>> TUVIT was being sloppy, complacent, complicit or just unlucky.
>>
>
> As captured above, I disagree with that, not on principle, but in
> execution. We need more information from TUVIT, rather than attempting to
> divine through first principles as this thread tries to do. If no further
> information is provided, then we don't need to bother with that assessment
> at all - an auditor who is unwilling or unable to provide reasonable
> information is an auditor that should not be trusted.
>
>
>> It is relevant to ask, but it takes a considerable level of certainty
>> before starting formal proceedings to disqualify an auditor due to the
>> failings of a single audit subject.
>>
>> In comparison, E&Y was involved in auditing multiple bad CAs and RAs by
>> the time some E&Y branches were disqualified by Mozilla.
>
>
>> In the world of technical review and testing, TUV SUD is a major player,
>> reviewing the safety of many technical products far outside Germany,
>> they rank in the same league as UL in the US.
>>
>
> This gets so close to understanding the issue, but then radically misses
> the point. Multiple E&Y branches were disqualified on the basis of a single
> audit, but E&Y is not blanket disqualified.
>
> TUV IT is a specific entity, the equivalent of an E&Y "branch". Mentioning
> TUV SUD in the context of TUV IT is akin to mentioning KPMG in a discussion
> about E&Y. TUV IT is part of the TUV NORD group, which is itself distinct
> from TUV SUD.
>
_______________________________________________
dev-security-policy mailing list
[email protected]
https://lists.mozilla.org/listinfo/dev-security-policy