> On 01 Aug 2016, at 19:30, Michelle Sullivan wrote:
>
> There are reasons for using either…
Indeed, but my decision was to run ZFS. And getting an HBA in some
configurations can be difficult because vendors insist on using
RAID adapters. After all, that’s what most of their customers demand.
If anyone is interested, as Michelle Sullivan just mentioned, one problem I
found when looking for an HBA is that they are not so easy to find. Scouring
the internet for a backup HBA, I came across these -
http://www.avagotech.com/products/server-storage/host-bus-adapters/#tab-12Gb1
Can only speak f
> On 01 Aug 2016, at 15:12, O. Hartmann wrote:
>
> First, thanks for responding so quickly.
>
>> - The third option is to make the driver expose the SAS devices like an HBA
>> would do, so that they are visible to the CAM layer, and disks are handled by
>> the stock “da” driver, which is the ide
On Mon, 1 Aug 2016 11:48:30 +0200
Borja Marcos wrote:
Hello.
First, thanks for responding so quickly.
> > On 01 Aug 2016, at 08:45, O. Hartmann wrote:
> >
> > On Wed, 22 Jun 2016 08:58:08 +0200
> > Borja Marcos wrote:
> >
> >> There is an option you can use (I do it all the time!) to make
> On 01 Aug 2016, at 08:45, O. Hartmann wrote:
>
> On Wed, 22 Jun 2016 08:58:08 +0200
> Borja Marcos wrote:
>
>> There is an option you can use (I do it all the time!) to make the card
>> behave as a plain HBA so that the disks are handled by the “da” driver.
>>
>> Add this to /boot/loader.c
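The quoted line is cut off by the archive, but the file presumably being referred
to is /boot/loader.conf. A minimal sketch of the sort of entries involved, assuming
the mechanism is the mfi(4) CAM pass-through path together with the mfip module
mentioned later in this thread (the exact tunable names are an assumption worth
checking against mfi(4) and mfip(4) for the release in use):

    # /boot/loader.conf -- hand the physical disks to CAM so the stock da(4)
    # driver attaches to them instead of the controller's mfid volumes.
    # (tunable name per mfi(4); verify on your FreeBSD release)
    hw.mfi.allow_cam_disk_passthrough="1"
    mfip_load="YES"

After a reboot the disks should show up in camcontrol devlist as da/pass devices.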
On Wed, 22 Jun 2016 08:58:08 +0200
Borja Marcos wrote:
> > On 22 Jun 2016, at 04:08, Jason Zhang wrote:
> >
> > Mark,
> >
> > Thanks
> >
> > We have the same RAID settings on both FreeBSD and CentOS, including the
> > cache setting. In FreeBSD, I enabled the write cache but the performance is the
> >
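On the FreeBSD side the controller's cache policy can be checked and changed with
mfiutil(8); a rough sketch (the volume name mfid0 is illustrative):

    # list the logical volumes the controller exports
    mfiutil show volumes
    # display the current cache settings of the first volume
    # (write-back vs. write-through, read-ahead, etc.)
    mfiutil cache mfid0

mfiutil cache also takes a setting argument to change the policy; the exact
keywords are documented in mfiutil(8).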
On Thursday, September 13, 2012 4:48:20 pm matt wrote:
> On 09/13/12 13:13, Garrett Cooper wrote:
> > On Thu, Sep 13, 2012 at 12:54 PM, matt wrote:
> >> On 09/10/12 19:31, Garrett Cooper wrote:
> > ...
> >
> >> It seems hw.mfi.max_cmds is read only. The performance is pretty close to
> >> expected
On 09/13/12 13:13, Garrett Cooper wrote:
On Thu, Sep 13, 2012 at 12:54 PM, matt wrote:
On 09/10/12 19:31, Garrett Cooper wrote:
...
It seems hw.mfi.max_cmds is read only. The performance is pretty close to
expected with no nvram or bbu on this card and commodity disks from 1.5
years ago, as
On 09/13/12 13:13, Garrett Cooper wrote:
> On Thu, Sep 13, 2012 at 12:54 PM, matt wrote:
>> On 09/10/12 19:31, Garrett Cooper wrote:
> ...
>
>> It seems hw.mfi.max_cmds is read only. The performance is pretty close to
>> expected with no nvram or bbu on this card and commodity disks from 1.5
>> ye
On Thu, Sep 13, 2012 at 12:54 PM, matt wrote:
> On 09/10/12 19:31, Garrett Cooper wrote:
...
> It seems hw.mfi.max_cmds is read only. The performance is pretty close to
> expected with no nvram or bbu on this card and commodity disks from 1.5
> years ago, as far as I'm concerned. I'd love better
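hw.mfi.max_cmds being read-only at runtime suggests it is a boot-time loader
tunable rather than a writable sysctl. If so, raising it would look roughly like
the following (the value 254 is only an illustrative guess, not a recommendation):

    # /boot/loader.conf -- command queue depth for mfi(4); read-only once booted
    # (254 is an assumed example value)
    hw.mfi.max_cmds="254"

    # after a reboot, confirm what the driver picked up
    sysctl hw.mfi.max_cmds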
On 09/10/12 19:31, Garrett Cooper wrote:
On Mon, Sep 10, 2012 at 7:15 PM, matt wrote:
...
mfip was necessary, and allowed smartctl to work with '-d sat'
bonnie++ comparison. Run with no options immediately after system boot. In
both cases the same disks are used, two Seagate Barracuda 1TB 3G
On Mon, Sep 10, 2012 at 7:15 PM, matt wrote:
...
> mfip was necessary, and allowed smartctl to work with '-d sat'
>
> bonnie++ comparison. Run with no options immediately after system boot. In
> both cases the same disks are used, two Seagate Barracuda 1TB 3G/S (twin
> platter) and a Barracuda 5
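A sketch of the mfip/smartctl combination matt describes (the pass device number
is illustrative and depends on the system):

    # load the mfi pass-through module so the physical disks appear to CAM
    kldload mfip
    # find the pass device that corresponds to each disk
    camcontrol devlist
    # query SMART data through the controller, treating the disk as SATA
    smartctl -a -d sat /dev/pass2

Without mfip there is no pass device for smartctl to talk to, which matches the
note above that mfip was necessary for '-d sat' to work.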
On 09/10/12 11:35, Andrey Zonov wrote:
On 9/10/12 9:14 PM, matt wrote:
On 09/10/12 05:38, Achim Patzner wrote:
Hi!
We’re testing a new Intel S2600GL-based server with their recommended RAID adapter
("Intel(R) Integrated RAID Module RMS25CB080”) which is identified as
mfi0: port 0x2000-0x20f
On 09/10/12 11:35, Andrey Zonov wrote:
> On 9/10/12 9:14 PM, matt wrote:
>> On 09/10/12 05:38, Achim Patzner wrote:
>>> Hi!
>>>
>>> We’re testing a new Intel S2600GL-based server with their recommended RAID
>>> adapter ("Intel(R) Integrated RAID Module RMS25CB080”) which is identified
>>> as
>>>
On 9/10/12 9:14 PM, matt wrote:
> On 09/10/12 05:38, Achim Patzner wrote:
>> Hi!
>>
>> We’re testing a new Intel S2600GL-based server with their recommended RAID
>> adapter ("Intel(R) Integrated RAID Module RMS25CB080”) which is identified as
>>
>> mfi0: port 0x2000-0x20ff mem
>> 0xd0c6-0xd0
On 09/10/12 05:38, Achim Patzner wrote:
> Hi!
>
> We’re testing a new Intel S2600GL-based server with their recommended RAID
> adapter ("Intel(R) Integrated RAID Module RMS25CB080”) which is identified as
>
> mfi0: port 0x2000-0x20ff mem
> 0xd0c6-0xd0c63fff,0xd0c0-0xd0c3 irq 34 at de
Might be worth testing anyway, as that would help prove controller or driver
issue.
----- Original Message -----
From: "Achim Patzner"
To: "Steven Hartland"
Cc:
Sent: Monday, September 10, 2012 2:19 PM
Subject: Re: mfi driver performance
On 10.09.2012 at 14:57, Steven Hartland wrote:
On 10.09.2012 at 14:57, Steven Hartland wrote:
> How are you intending to use the controller?
As a “launch-and-forget” RAID 1+0 sub-system using UFS (which reminds me to
complain about sysinstall on volumes > 2 TB later).
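The sysinstall complaint presumably comes from its MBR-based partitioning, which
tops out at 2 TB; a GPT scheme created by hand with gpart(8) avoids the limit. A
rough sketch (mfid0 is illustrative):

    # GPT has no 2 TB limit, unlike the MBR scheme sysinstall sets up
    gpart create -s gpt mfid0
    gpart add -t freebsd-ufs -a 1m mfid0
    newfs -U /dev/mfid0p1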
> If you're looking to put a load of disk in for say ZFS have you tried
Sent: Monday, September 10, 2012 1:38 PM
Subject: mfi driver performance
Hi!
We’re testing a new Intel S2600GL-based server with their recommended RAID adapter ("Intel(R) Integrated RAID Module RMS25CB080”)
which is identified as
mfi0: port 0x2000-0x20ff mem
0xd0c6-0xd0c63fff,0xd0c0-0xd0c3 irq
Hi!
We’re testing a new Intel S2600GL-based server with their recommended RAID
adapter ("Intel(R) Integrated RAID Module RMS25CB080”) which is identified as
mfi0: port 0x2000-0x20ff mem
0xd0c6-0xd0c63fff,0xd0c0-0xd0c3 irq 34 at device 0.0 on pci5
mfi0: Using MSI
mfi0: Megaraid SAS