Dear Peter,

On Sun, May 03, 2015 at 07:50:44AM +0200, Peter Bienstman wrote:
> > It is rule 5 of the SM2 algorithm that is not being executed at all for 
> > cards
> > graded 0 or 1
> 
> But rule 6 says not to apply rule 5 for failed cards:
> 
> "If the quality response was lower than 3 then start repetitions for the item 
> from the beginning without changing the E-Factor"

It doesn't say that.  It says that rule 6 applies only to cards with grades 0
and 1.  Rule 6 doesn't, and shouldn't, say anything about whether rule 5
applies!  That information belongs in rule 5, which says nothing about treating
cards with different grades differently, apart from the value of the EF
increment.

In fact, the page with the algorithm that you linked to has a link at the
bottom to an example implementation.  In that implementation, rule 5 is applied
to all cards, regardless of the grade.  Mnemosyne is really not using rule 5
correctly.
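
For reference, the reading argued above can be sketched in Python as follows.
This is a minimal sketch, not Mnemosyne's actual code; the function name and
signature are my own.  The point is that the E-Factor update (rule 5) runs for
every graded repetition, while a failing grade (below 3) only restarts the
repetition sequence (rule 6):

```python
def sm2_review(quality, repetitions, easiness, interval):
    """One SM-2 review step (sketch).  quality is the 0-5 grade."""
    # Rule 5: adjust the E-Factor for every graded repetition,
    # regardless of whether the card was recalled.
    easiness += 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)
    easiness = max(easiness, 1.3)  # EF is never allowed below 1.3

    if quality < 3:
        # Rule 6: failed recall -> start repetitions from the beginning.
        repetitions = 0
        interval = 1
    else:
        repetitions += 1
        if repetitions == 1:
            interval = 1
        elif repetitions == 2:
            interval = 6
        else:
            interval = round(interval * easiness)
    return repetitions, easiness, interval
```

With this version, failing a mature card both resets its interval to 1 day and
lowers its EF, which is exactly the behaviour under discussion.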

> I guess the reasoning behind this was that after a lot of repetitions,
> increasing the difficulty as well as resetting the interval was considered
> too big of a penalty. I agree that in the corner case you mention (immediate
> failure), this seems suboptimal, but remember that this is not an exact
> science, and the idea is that after many repetitions and corrections by the
> user, the intervals and easiness factors converge to something which is
> roughly OK.

Failing to recall a card after many repetitions indicates that the intervals
were too long.  Starting again without decreasing the easiness will just repeat
this.  Since the intervals grow very rapidly anyway (exponentially), an EF that
is a bit too low is not very problematic.  A failure to recall a card, however,
already carries a serious penalty: the interval is reset to 1 day.  That's much
worse than a slightly lower EF.  So learning will be more efficient if a card
is rated a bit too difficult and never forgotten, rather than a bit too easy
and occasionally forgotten.
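
To make the comparison concrete, here is a small illustration with hypothetical
numbers, using the standard SM-2 interval recurrence (I1 = 1, I2 = 6,
In = I(n-1) * EF).  Lowering the EF from 2.5 to 2.3 only slows interval growth
modestly, whereas a failed recall sends the card back to 1 day:

```python
def intervals(ef, n):
    """Successive SM-2 intervals in days for n reviews, at a fixed EF."""
    out = []
    i = 0.0
    for rep in range(1, n + 1):
        if rep == 1:
            i = 1
        elif rep == 2:
            i = 6
        else:
            i = i * ef  # exponential growth from the third review on
        out.append(round(i, 1))
    return out

print(intervals(2.5, 6))  # [1, 6, 15.0, 37.5, 93.8, 234.4]
print(intervals(2.3, 6))  # [1, 6, 13.8, 31.7, 73.0, 167.9]
```

After six reviews the "penalised" card still reaches an interval of about five
and a half months instead of nearly eight; a forgotten card, by contrast, is
back at day one.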

Since, as you say, the EF should converge to a roughly OK value in the very
long term anyway, the EF penalty incurred by a failure to recall has no serious
consequences for the interval lengths in the very long term.  But after a grade
of 0 or 1, the card has to be learned again from scratch, and once the
intervals have grown again, the EF will converge.  The EF penalty that goes
with the reset therefore only affects the short- and intermediate-term
intervals.  And that is exactly where it improves the learning efficiency.

> I'm hesitant to change the scheduler now after so many years without detailed
> statistical analysis to back up any change. The data is there in the
> collected learning logs, but analyzing it has not yet made it to the top of
> my list.

By fixing the scheduler now and implementing the algorithm as designed, you
could actually gather some interesting statistics for a comparison.  Of course,
putting a comment in the Mnemosyne documentation to explain the situation is
much easier than analysing the data.  But with the current scheduler you cannot
claim, as the docs currently do, that you are using SM2, and you cannot make
that claim in any scientific publication based on the data that comes out of
Mnemosyne.  If that is OK, because the precise algorithm is not important for
what you want to do with the data, then there is no reason not to fix the
scheduler now.

Best regards,

Astrid

