On Sun, 5 Jan 2014 19:30:39 +1100
Peter Jeremy <pe...@rulingia.com> wrote:

> On 2014-Jan-05 09:11:38 +0100, "O. Hartmann"
> <ohart...@zedat.fu-berlin.de> wrote:
> >On Sun, 5 Jan 2014 10:14:26 +1100
> >Peter Jeremy <pe...@rulingia.com> wrote:
> >
> >> On 2014-Jan-04 23:26:42 +0100, "O. Hartmann"
> >> <ohart...@zedat.fu-berlin.de> wrote:
> >> >zfs list -r BACKUP00
> >> >NAME              USED  AVAIL  REFER  MOUNTPOINT
> >> >BACKUP00         1.48T  1.19T   144K  /BACKUP00
> >> >BACKUP00/backup  1.47T  1.19T  1.47T  /backup
> >> 
> >> Well, that at least shows it's making progress - it's gone from
> >> 2.5T to 1.47T used (though I gather that has taken several days).
> >> Can you please post the result of
> >> zfs get all BACKUP00/backup
> 
> >BACKUP00/backup  dedup                on                    local
> 
> This is your problem.  Before it can free any block, it has to check
> for other references to the block via the DDT and I suspect you don't
> have enough RAM to cache the DDT.
> 
> Your options are:
> 1) Wait until the delete finishes.
> 2) Destroy the pool with extreme prejudice: Forcibly export the pool
>    (probably by booting to single user and not starting ZFS) and write
>    zeroes to the first and last MB of ada3p1.
> 
> BTW, this problem will occur on any filesystem where you've ever
> enabled dedup - once there are any dedup'd blocks in a filesystem,
> all deletes need to go via the DDT.
> 

As I stated earlier in this thread, the box in question has 32 GB of
RAM, which should be sufficient.
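
For reference, a rough way to check whether the pool's DDT actually fits in
core (a sketch using the pool name from this thread; the ~320 bytes per DDT
entry figure is only a rule of thumb, not an exact number):

# Dedup table statistics for the pool (the pool must be importable).
zdb -DD BACKUP00

# The same summary is also printed by:
zpool status -D BACKUP00

# Both report the number of DDT entries; multiplying by roughly 320 bytes
# per entry gives the approximate in-core footprint, e.g.
# 10,000,000 entries * 320 B is about 3 GB of RAM for the DDT alone.

Note also that by default only a fraction of the ARC is used for metadata,
so a DDT smaller than physical RAM is not necessarily fully cached.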

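If waiting is not an option, option 2 above might look roughly like the
following (a destructive sketch: it assumes the device really is ada3p1,
that the pool is not imported, and SIZE_MB is a helper variable introduced
here, not a standard tool):

# Partition size in 1 MB blocks (field 3 of diskinfo is mediasize in bytes).
SIZE_MB=$(( $(diskinfo /dev/ada3p1 | awk '{print $3}') / 1048576 ))

# Zero the first and the last MB of the partition to wipe the ZFS labels.
dd if=/dev/zero of=/dev/ada3p1 bs=1m count=1
dd if=/dev/zero of=/dev/ada3p1 bs=1m count=1 oseek=$(( SIZE_MB - 1 ))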