On Mon, Nov 25, 2013 at 6:59 PM, Lawrence Mandel <lman...@mozilla.com> wrote:
>> If not, is there a better way to do this than duplicating probes and
>> checking the pref to see which probe should be fed?
>
> A probe is not restricted to boolean values. You can define a histogram that 
> maps values to conditions. As such, you can have a single probe that captures 
> all of the required data. Depending on your use case, this structure may be 
> more difficult to read after the data is aggregated on the server.

This approach works when counting all occurrences of some event (for
example, counting stuff on every page load). Unfortunately, this
approach doesn't work when using telemetry flags, i.e. counting
whether something happened in a session, since the "didn't happen"
cases don't get bucketed by the pref value.
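
To make the counting gap concrete, here's an illustrative Python sketch (not actual Firefox telemetry code; the per-session tuples are made up). A flag-style probe only produces a record when the event occurs, so the denominator — total sessions per pref state — is lost, whereas a probe recorded unconditionally for every session would keep all four (pref, used) combinations:

```python
# Illustrative sketch only: why a flag probe can't be partitioned by
# pref value. The session data below is hypothetical.
from collections import Counter

# Hypothetical per-session observations: (pref_enabled, menu_used)
sessions = [
    (True, True),
    (True, False),
    (False, True),
    (False, False),
    (False, False),
]

# Flag-style probe: only sessions where the menu was used leave a record,
# so we can't compute the proportion of sessions per pref state.
flag_records = Counter(pref for pref, used in sessions if used)

# A probe recorded for every session keeps all four combinations,
# including the "didn't happen" cases, bucketed by pref value.
enum_records = Counter((pref, used) for pref, used in sessions)
```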

Concretely, I'd like to understand whether
https://bugzilla.mozilla.org/show_bug.cgi?id=910211 is a good idea.
(In theory, it is.) I think the way to measure success is to measure
the proportion of Firefox sessions in which the character encoding
menu is used. Measuring instances of use relative to total page loads
says less about user exposure to encoding bogosity: on one hand, a
user who encounters bogosity in every session is exposed to it even if
there is a huge number of non-bogus page loads and, on the other hand,
many uses of the encoding menu on a single site are not evidence of a
broad problem.

So to measure this, I'd like to have CHARSET_OVERRIDE_USED flag
telemetry partitioned by whether the feature from
https://bugzilla.mozilla.org/show_bug.cgi?id=910211 is enabled, with
whether the feature is enabled chosen at random.
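
As a sketch of what I have in mind (illustrative Python, not real Firefox code; the function names and the client-id-based assignment are my own assumptions, not an existing API):

```python
# Hypothetical sketch of the partitioned measurement: assign the pref
# randomly but deterministically per profile, then record one of four
# buckets per session so that "menu not used" sessions still carry the
# pref state. Names here are illustrative, not actual Firefox APIs.
import hashlib

def assign_pref(client_id: str) -> bool:
    """Stable 50/50 split based on a hash of the profile's client id."""
    digest = hashlib.sha256(client_id.encode("utf-8")).digest()
    return digest[0] % 2 == 0

def session_bucket(pref_enabled: bool, menu_used: bool) -> int:
    """Map a session to one of four buckets:
    0: pref off, menu unused   1: pref off, menu used
    2: pref on,  menu unused   3: pref on,  menu used
    """
    return (2 if pref_enabled else 0) + (1 if menu_used else 0)
```

Recording the bucket once per session (rather than only when the menu is used) is what makes the per-pref-state session proportions computable afterwards.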

Part of the problem, though, is that the experiment would need to be
run on a channel that has localizations and a lot of user diversity in
terms of geographic location, language skills and localizations. This
means that the experiment couldn't be run on Nightly or Aurora. If we
have policy reasons against randomizing configurations on Release and
Beta, maybe it doesn't even make sense to put effort into arranging
A/B testing, and the only option is to just land
https://bugzilla.mozilla.org/show_bug.cgi?id=910211 and compare
telemetry between the release with the feature and the previous
release. (It might also make sense to just count on the feature making
sense in theory and not worry too much about testing the theory.)

Do we have policy reasons that preclude A/B testing on Release or Beta?

-- 
Henri Sivonen
hsivo...@hsivonen.fi
http://hsivonen.fi/
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
