It will be helpful to get our own measurements of database failures.
Carlos just added something along those lines.

Huan

On Tue, Oct 6, 2009 at 3:49 PM, John Abd-El-Malek <[email protected]> wrote:
> Saw this on
> slashdot: http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf
> The conclusion is "an average of 25,000–75,000 FIT (failures in time per
> billion hours of operation) per Mbit".
> On my machine the browser process is usually > 100MB, so that averages out
> to 176 to 493 errors per year, though those numbers have big variance
> depending on the machine.  Since most users don't have ECC, this will lead
> to corruption.  Sqlite is a heavy user of memory, so even if it accounts for
> 1/4 of the 100MB, we'll still see an average of 40-120 errors per year just
> from faulty DIMMs.
> Given that sqlite corruption means (repeated) crashing of the browser
> process, this data heavily suggests we should move the sqlite code into a
> separate process.  The IPC overhead is negligible compared to disk access.
> My hunch is that the complexity is also not that high, since the code that
> deals with it is already asynchronous (we don't use sqlite on the UI/IO
> threads).
> What do others think?
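
For a rough check of the FIT arithmetic quoted above, here is a sketch in
Python; the 100MB figure, the MB-to-Mbit conversion, and the hours-per-year
value are assumptions, so the exact endpoints shift with them:

    # Back-of-the-envelope FIT -> errors/year arithmetic.
    # 1 FIT = 1 failure per 10^9 device-hours; the paper quotes
    # 25,000-75,000 FIT per Mbit. 100MB of browser memory ~= 800 Mbit.
    HOURS_PER_YEAR = 24 * 365          # 8760, ignoring leap years
    MBITS = 100 * 8                    # assumed resident size in Mbit

    for fit_per_mbit in (25000, 75000):
        errors_per_year = fit_per_mbit * MBITS / 1e9 * HOURS_PER_YEAR
        print(fit_per_mbit, round(errors_per_year))
    # Prints roughly 175 and 526 -- the same ballpark as the 176-493 range
    # quoted above; the high end presumably reflects slightly different
    # assumptions in the original estimate.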
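
To illustrate the separate-process idea, here is a minimal sketch (not
Chromium's IPC machinery) of running the database work in a child process so
that a crash or corruption there does not take down the parent; the names and
the queue-based protocol are made up for illustration:

    # Sketch: run SQLite in a worker process and talk to it over a queue,
    # so a failure in the DB process leaves the main process alive.
    # Illustrative only -- not Chromium code.
    import multiprocessing as mp
    import sqlite3

    def db_worker(db_path, requests, responses):
        conn = sqlite3.connect(db_path)
        for sql, params in iter(requests.get, None):   # None = shutdown
            try:
                rows = conn.execute(sql, params).fetchall()
                conn.commit()
                responses.put(("ok", rows))
            except sqlite3.Error as e:
                responses.put(("error", str(e)))
        conn.close()

    if __name__ == "__main__":
        requests, responses = mp.Queue(), mp.Queue()
        worker = mp.Process(target=db_worker,
                            args=("history.db", requests, responses))
        worker.start()
        requests.put(("CREATE TABLE IF NOT EXISTS urls (url TEXT)", ()))
        print(responses.get())          # ("ok", [])
        requests.put(None)              # tell the worker to exit
        worker.join()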

