On Fri, Apr 25, 2014 at 08:26:52PM -0500 I heard the voice of
Matthew D. Fuller, and lo! it spake thus:
> On Mon, Mar 03, 2014 at 11:17:05AM +0200 I heard the voice of
> Andriy Gapon, and lo! it spake thus:
> >
> > I noticed that on some of our systems we were getting a clearly
> > abnormal number of l2arc checksum errors accounted in l2_cksum_bad.
> > The hardware appeared to be in good health.
> > [...]
> > I propose the following patch which has been tested

FWIW, I have a
- Original Message -
From: "Andriy Gapon"

on 18/10/2013 17:57 Steven Hartland said the following:
> I think we may well need the following patch to set the minblock
> size based on the vdev ashift and not SPA_MINBLOCKSIZE.
>
> svn diff -x -p sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
> Index: sys/cddl/contrib/opensolaris/uts/
Ok, just up to now no errors on l2arc:

L2 ARC Summary: (HEALTHY)
    Passed Headroom:     1.99m
    Tried Lock Failures: 144.53m
    IO In Progress:      130.15k
    Low Memory Aborts:   7
Free on W
Steven Hartland wrote:
SH> So previously you only started seeing l2 errors after there was
SH> a significant amount of data in l2arc? That's interesting in itself
SH> if that's the case.

Yes, something around 200+GB.

SH> I wonder if it's the type of data, or something similar. Do you
SH> run compres
So previously you only started seeing l2 errors after there was
a significant amount of data in l2arc? That's interesting in itself
if that's the case.

I wonder if it's the type of data, or something similar. Do you
run compression on any of your volumes?
zfs get compression
Regards
Steve
---
Just now I cannot say: to trigger the problem we need at least 200+GB in
l2arc, which usually takes one production day to grow.
But for some reason the server was rebooted this morning, so the cache was
flushed and is now only 100GB.
Need to wait some more time.
At least for now, no errors on l2.
S
How's things looking Vitalij?

- Original Message -
From: "Vitalij Satanivskij"

Ok. The system was just rebooted with your patch.
Trim enabled again.
Will wait some time until the size of used cache grows.
Steven Hartland wrote:
SH> Looking at the l2arc compression code I believe t
- Original Message -
From: "Vitalij Satanivskij"
To: "Steven Hartland"
Cc: "Dmitriy Makarov"; "Justin T. Gibbs"; "Borja Marcos"
Sent: Friday, October 18, 2013 3:45 PM
Subject: Re: ZFS secondarycache on SSD problem on r255173

Just right now the stats are not actual because of s
- Original Message -
From: "Vitalij Satanivskij"
To: "Steven Hartland"
Cc: "Justin T. Gibbs"; "Borja Marcos"; "Dmitriy Makarov"
Sent: Friday, October 18, 2013 9:01 AM
Subject: Re: ZFS secondarycache on SSD problem on r255173
Hello.

Yesterday the system was rebooted with vfs.zfs.trim.enabled=0.
System version: 10.0-BETA1 FreeBSD 10.0-BETA1 #6 r256669, without any changes
in code.
Uptime: 10:51 up 16:41

sysctl vfs.zfs.trim.enabled
vfs.zfs.trim.enabled: 0

Around 2 hours ago the error counters
kstat.zfs.misc.arcstats.l2_ck
Correct.
- Original Message -
From: "Vitalij Satanivskij"
Just to be sure I understand you correctly, I need to test the following configuration:

1) System with the ashift patch, i.e. just the latest stable/10 revision
2) vfs.zfs.trim.enabled=0 in /boot/loader.conf

So really the only difference is in default
- Original Message -
From: "Vitalij Satanivskij"
To: "Steven Hartland"
Cc: "Justin T. Gibbs"; "Borja Marcos"; "Dmitriy Makarov"
Sent: Thursday, October 17, 2013 7:12 AM
Subject: Re: ZFS secondarycache on SSD problem on r255173
Hello.
SSD is Intel SSD 530 series (INTEL SSDSC2BW180A4 DC12)
Controller is the onboard Intel SATA controller; motherboard is a Supermicro X9SRL-F, so
it's the Intel C602 chipset.
All cache SSDs are connected to SATA 2 ports.
System has an LSI MPS controller (SAS2308) with firmware version 16.00.00.00, but
only h
Hello.
Problem description is in -
http://lists.freebsd.org/pipermail/freebsd-current/2013-October/045088.html
As we found later, the problem first begins with the error counters in arcstats,
then the size of l2 grows abnormally.
After rolling back the patch everything is OK.
Justin T. Gibbs wrote:
JTG> You'll hav
Ohh, stupid question: what hardware are you running this on,
specifically what SSDs and what controller and, if relevant,
what controller firmware version?

I wonder if you might have bad HW / FW, such as older LSI
mps firmware, which is known to cause corruption with
some delete methods.
Without t
- Original Message -
From: "Justin T. Gibbs"
I took a quick look at arc.c and see that the trim_map_free() calls don't take
into account ashift. I don't know if that has anything to do with your problem
though. I would expect this would just make the trim less efficient, but I
need to dig further.
--
Justin
On Oct 16, 2013, at 4:42 PM,
You'll have to be more specific. I don't have that email or know what list on
which to search.
Thanks,
Justin
On Oct 16, 2013, at 2:01 AM, Vitalij Satanivskij wrote:
> Hello.
>
> Patch broke cache functionality.
>
> Look at Dmitriy's mail from Mon, 07 Oct 2013 21:09:06 +0300
>
> With s
Steven Hartland wrote:
SH> I'm not clear what you rolled back there as r255173 has nothing to do
SH> with this. Could you clarify

r255173 with your patch from email dated Tue, 17 Sep 2013 23:53:12 +0100 with
subject Re: ZFS secondarycache on SSD problem on r255173

Errors which we
255753
2. Set vfs.zfs.max_auto_ashift=9
Regards
Steve
- Original Message -
From: "Vitalij Satanivskij"
To: "Steven Hartland"
Cc: "Vitalij Satanivskij" ; "Dmitriy Makarov" ; "Justin T. Gibbs" ; "Borja
Marcos" ;
Sent: Wednesday, October 1
- Original Message -
From: "Vitalij Satanivskij"
To: "Dmitriy Makarov"
Cc: "Steven Hartland"; "Justin T. Gibbs"; "Borja Marcos"
Sent: Wednesday, October 16, 2013 9:01 AM
Subject: Re: ZFS secondarycache on SSD problem on r255173
Hello.

Patch broke cache functionality.

Look at Dmitriy's mail from Mon, 07 Oct 2013 21:09:06 +0300
with subject "ZFS L2ARC - incorrect size and abnormal system load on r255173".

As the patch is already in head and BETA it's not good.
Yesterday we updated one machine up to BETA1 and forgot about the patch
On Sep 17, 2013, at 4:53 PM, Steven Hartland wrote:
> - Original Message - From: "Justin T. Gibbs"
>
>
>> Sorry for being slow to chime in on this thread. I live in Boulder, CO and
>> we've had a bit of rain. :-)
>
> Hope all is well your side, everyone safe and sound if may be litt
The attached patch by Steven Hartland fixes issue for me too. Thank you!
--- Original message ---
From: "Steven Hartland" < kill...@multiplay.co.uk >
Date: September 18, 2013, 01:53:10

- Original Message -
From: "Justin T. Gibbs" <

---
Dmitriy Makarov
- Original Message -
From: "Justin T. Gibbs"
Sorry for being slow to chime in on this thread. I live in Boulder, CO and
we've had a bit of rain. :-)
Hope all is well on your side, everyone safe and sound, if maybe a little wetter
than usual.
As Steven pointed out, the warning is ben
Sent: Monday, September 16, 2013 1:29 PM
Subject: Re[3]: ZFS secondarycache on SSD problem on r255173
And I have to say that the ashift of the main pool doesn't matter.
I've tried to create the pool with ashift 9 (the default value) and with ashift 12 by
creating gnops over the gpart devices, exporting the pool, destroying the gnops,
and importing the pool.
There is the same problem with the cache device.
There is no problem with ZIL devices, they report ashift: 12
children[1]:
type: 'disk'
id: 1
guid: 6986664094649753344
path: '/dev/gpt/zil1'
phys_path: '/dev/gpt/zil1'
whole_disk: 1
metaslab_array:
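For reference, the gnop procedure Dmitriy describes can be sketched as follows (a standard FreeBSD trick; the pool name `tank` and provider `/dev/gpt/disk0` are placeholders, not from this thread). The gnop layer temporarily reports a 4K sector size so `zpool create` selects ashift=12, which the pool then keeps after the gnop is gone:

```shell
# Present the real provider with a 4K sector size via a transparent gnop node
gnop create -S 4096 /dev/gpt/disk0

# Create the pool against the .nop device so it is formatted with ashift=12
zpool create tank /dev/gpt/disk0.nop

# Export, destroy the gnop, and re-import; the on-disk ashift is unchanged
zpool export tank
gnop destroy /dev/gpt/disk0.nop
zpool import tank

# Verify the resulting ashift
zdb -C tank | grep ashift
```

As Dmitriy notes, this affects the main pool's vdevs only; the cache device in his tests misbehaved regardless of which ashift the pool was created with.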
"Dmitryy Makarov"; "Justin T. Gibbs"
Sent: Monday, September 16, 2013 12:06 PM
Subject: Re: ZFS secondarycache on SSD problem on r255173
On Sep 13, 2013, at 2:18 PM, Steven Hartland wrote:
> This is a recent bit of code by Justin cc'ed, so he's likely the best person
> to investigate this one.
Hmm. There is still a lot of confusion surrounding all this, and it's a time
bomb waiting to explode.
A friend had serious problems o
This is a recent bit of code by Justin cc'ed, so he's likely the best person to
investigate this one.
Regards
Steve
- Original Message -
From: "Dmitryy Makarov"
To:
Sent: Friday, September 13, 2013 12:16 PM
Subject: ZFS secondarycache on SSD problem on r255173
Hello, FreeBSD Current.
Having some trouble with adding an SSD drive as a secondarycache device on
10.0-CURRENT FreeBSD 10.0-CURRENT #1 r255173:
SSD INTEL SSDSC2BW180A4
dmesg:
ada1 at ahcich2 bus 0 scbus4 target 0 lun 0
ada1: ATA-9 SATA 3.x device
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, P