On Sun, Mar 7, 2021 at 7:03 AM Dave Sherohman wrote:
> On Wed, Mar 03, 2021 at 10:28:34AM -0500, Marc Auslander wrote:
> > One potential gotcha. When you boot from an mdadm file system containing
> > /boot/grub, grub will not write to the file system. In particular, it will
> > not update grub/grubenv even if you have a save_env line in grub.cfg. So if
On Wed, Mar 03, 2021 at 10:28:34AM -0500, Marc Auslander wrote:
> One potential gotcha. When you boot from an mdadm file system containing
> /boot/grub, grub will not write to the file system. In particular, it will
> not update grub/grubenv even if you have a save_env line in grub.cfg. So if
>
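For reference, the machinery Marc describes looks roughly like the fragment below in a generated grub.cfg (an illustrative sketch; the real file is written by update-grub and varies by configuration):

    if [ -s $prefix/grubenv ]; then
      load_env
    fi
    set default="${saved_entry}"
    # Entries using "savedefault" call save_env to record the last-booted
    # entry. That write silently fails when /boot/grub sits on an mdadm
    # array, because grub will not write to the file system there.
    function savedefault {
      save_env saved_entry
    }

You can check what actually got recorded from the running system with
"grub-editenv /boot/grub/grubenv list".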
On Wed, 03 Mar 21, 17:16:14, Felix Miata wrote:
> Andrei POPESCU composed on 2021-03-03 17:50 (UTC+0200):
>
> > Felix Miata wrote:
>
> >> To start with, RAID1 is marginally slower than ordinary filesystems on
> >> partitions.
>
> > This is true for some workloads, for others it can be significantly faster.
Hello,
On Fri, Mar 05, 2021 at 10:04:34AM +0000, Darac Marjal wrote:
> So, if your file is small, then yes, you won't see any performance
> benefit. But if your file is larger than a block, or if you want
> to access more than one file at once, then RAID can read the
> second block from a different drive.
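To see the effect Darac describes, here is a minimal fio sketch (the device name /dev/md0 is an assumption; --readonly keeps it non-destructive, but verify the target before pointing any tool at a block device):

    # one sequential reader: roughly single-drive throughput
    fio --name=seqread --filename=/dev/md0 --readonly --direct=1 \
        --rw=read --bs=1M --size=4g --numjobs=1

    # two readers at different offsets: the md layer can serve each
    # from a different mirror member, so throughput can approach 2x
    fio --name=seqread --filename=/dev/md0 --readonly --direct=1 \
        --rw=read --bs=1M --size=4g --numjobs=2 \
        --offset_increment=4g --group_reporting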
On 05/03/2021 09:25, Anssi Saari wrote:
> Greg Wooledge writes:
>
>> to...@tuxteam.de (to...@tuxteam.de) wrote:
>>> > 1x 4TB, single drive,  3.7 TB, w=108MB/s , rw=50MB/s , r=204MB/s
>>> > 2x 4TB, mirror (raid1), 3.7 TB, w=106MB/s , rw=50MB/s , r=488MB/s
>>> Thanks. Real data :)
Greg Wooledge writes:
> to...@tuxteam.de (to...@tuxteam.de) wrote:
>> > 1x 4TB, single drive, 3.7 TB, w=108MB/s , rw=50MB/s , r=204MB/s
>> > 2x 4TB, mirror (raid1), 3.7 TB, w=106MB/s , rw=50MB/s , r=488MB/s
>
>> Thanks. Real data :)
>>
>> The doubling in read throughput is somewhat surprising to me. Some
On Thu, Mar 04, 2021 at 08:09:46AM -0500, Greg Wooledge wrote:
> to...@tuxteam.de (to...@tuxteam.de) wrote:
> > > 1x 4TB, single drive,  3.7 TB, w=108MB/s , rw=50MB/s , r=204MB/s
> > > 2x 4TB, mirror (raid1), 3.7 TB, w=106MB/s , rw=50MB/s , r=488MB/s
>
> > Thanks. Real data :)
to...@tuxteam.de (to...@tuxteam.de) wrote:
> > 1x 4TB, single drive, 3.7 TB, w=108MB/s , rw=50MB/s , r=204MB/s
> > 2x 4TB, mirror (raid1), 3.7 TB, w=106MB/s , rw=50MB/s , r=488MB/s
> Thanks. Real data :)
>
> The doubling in read throughput is somewhat surprising to me. Some
>
Dave Sherohman writes:
> Any tips on making use of the grub shell to make further progress, such
> as getting the system to boot in non-rescue mode (i.e., not chrooted
> from the installer)? The help information available in the grub shell
> itself isn't terribly useful because it scrolls off the screen.
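(On the scrolling: the grub shell has a built-in pager. A rough sketch of poking around and booting manually from the grub prompt, assuming /boot is on an mdadm array; the kernel and initrd file names below are illustrative:

    grub> set pager=1        # page long output instead of letting it scroll
    grub> insmod mdraid1x    # support for mdadm metadata 1.x arrays
    grub> ls                 # list the devices grub can see, e.g. (md/0)
    grub> linux (md/0)/boot/vmlinuz-4.19.0-14-amd64 root=/dev/md0 ro
    grub> initrd (md/0)/boot/initrd.img-4.19.0-14-amd64
    grub> boot

Drop the /boot/ from the paths if /boot is its own array, and use tab completion to find the exact file names.)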
On Wed, Mar 03, 2021 at 05:40:39PM -0500, Dan Ritter wrote:
> Felix Miata wrote:
> > Andrei POPESCU composed on 2021-03-03 17:50 (UTC+0200):
> >
> > > Felix Miata wrote:
> > [...] Do you know of, or can you provide a reference to, any way RAID1
> > performance can be better than single drive?
On Wed, Mar 03, 2021 at 05:16:14PM -0500, Felix Miata wrote:
> Andrei POPESCU composed on 2021-03-03 17:50 (UTC+0200):
>
> > Felix Miata wrote:
>
> >> To start with, RAID1 is marginally slower than ordinary filesystems on
> >> partitions.
>
> > This is true for some workloads, for others it can be significantly faster.
Felix Miata wrote:
> Andrei POPESCU composed on 2021-03-03 17:50 (UTC+0200):
>
> > Felix Miata wrote:
>
> >> To start with, RAID1 is marginally slower than ordinary filesystems on
> >> partitions.
>
> > This is true for some workloads, for others it can be significantly
> > faster.
>
> > https://arstechnica.com/information-technology/2020/04/understanding-raid-how-performance-scales-from-one-disk-to-eight/
Andrei POPESCU composed on 2021-03-03 17:50 (UTC+0200):
> Felix Miata wrote:
>> To start with, RAID1 is marginally slower than ordinary filesystems on
>> partitions.
> This is true for some workloads, for others it can be significantly
> faster.
> https://arstechnica.com/information-technology/2020/04/understanding-raid-how-performance-scales-from-one-disk-to-eight/
On Tue, 02 Mar 21, 14:01:52, Felix Miata wrote:
>
> To start with, RAID1 is marginally slower than ordinary filesystems on
> partitions.
This is true for some workloads, for others it can be significantly
faster.
https://arstechnica.com/information-technology/2020/04/understanding-raid-how-performance-scales-from-one-disk-to-eight/
On 3/3/2021 6:30 AM, Dave Sherohman wrote:
> Based on this, I'm guessing that the original problem was that the
> installer forgot to include mdadm support in its grub options, even
> though it was configured with an mdadm boot device. And then I missed a
> couple steps after adding mdadm support, so
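A quick way to check that from a booted or chrooted system is to grep the generated config for the raid modules (standard Debian paths assumed):

    grep -E 'insmod (mdraid|raid|lvm)' /boot/grub/grub.cfg
    # expect e.g. "insmod mdraid1x" when /boot lives on an mdadm mirror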
So, after my last message, I ran across a general mdadm setup guide at
https://raid.wiki.kernel.org/index.php/Setting_up_a_(new)_system
and, in the "booting" section of that document, it said to add
GRUB_CMDLINE_LINUX="domdadm"
to /etc/default/grub. Didn't have that, so I added it, did the checks
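For anyone following along, the implied sequence is roughly this (a sketch; on an EFI system grub-install targets the mounted ESP rather than a disk device):

    # /etc/default/grub
    GRUB_CMDLINE_LINUX="domdadm"

    update-grub    # regenerate /boot/grub/grub.cfg with the new cmdline
    grub-install   # reinstall the boot loader; add --recheck if unsure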
General update on my situation:
Just before knocking off for the day, I tried telling the installer to
finish up without installing a boot loader, then used the netinst USB
stick as a rescue disk to boot my installed system. From there, I was
able to use the regular grub-install program (instead
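The rescue-disk route is roughly the following (a sketch assuming the installed system is mounted at /target, as d-i rescue mode does; adjust mounts and device names as needed):

    mount --bind /dev  /target/dev
    mount --bind /proc /target/proc
    mount --bind /sys  /target/sys
    chroot /target
    mount /boot/efi    # if it is not already mounted
    grub-install --target=x86_64-efi --efi-directory=/boot/efi
    update-grub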
On Tue, Mar 02, 2021 at 05:57:37AM -0600, Dave Sherohman wrote:
> I've got a new server and am currently fighting with the Debian 10
> installer (build 20190702) in my attempts to get it up and running.
> After much wailing and gnashing of teeth, I managed to get it to stop
> complaining about being unable to mount /boot/efi and complete t
Dan Ritter composed on 2021-03-02 07:54 (UTC-0500):
> Dave Sherohman wrote:
>> I've got a new server...
> You'll want to create the following partitions on each,
> identically:
> 1 efi - type efi
Not RAID I presume?
On Tue, 2 Mar 2021 09:50:22 -0500
Dan Ritter wrote:
> > I'm not positive that that's the correct approach, however,
> > especially given that I suspect that /boot will need to be visible
> > to GRUB and may thus need to be outside the LVM.
>
> grub can find a root in LVM, but needs /boot to be
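(What that means in practice: grub loads volume-handling modules from its core image before it can read anything, so when /boot sits on md or LVM the generated config begins with lines along these lines; an illustrative fragment, with the UUID left as a placeholder:

    insmod part_gpt
    insmod mdraid1x    # mdadm arrays, metadata 1.x
    insmod lvm         # LVM logical volumes
    insmod ext2
    search --no-floppy --fs-uuid --set=root <uuid-of-/boot>

grub-install has to embed those modules into core.img, which is why the location of /boot matters more than the location of /.)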
On 2021-03-02 at 09:50, Dan Ritter wrote:
> The Wanderer wrote:
>
>> On 2021-03-02 at 07:54, Dan Ritter wrote:
>>
>>> You'll want to create the following partitions on each,
>>> identically:
>>>
>>> 1 efi - type efi
>>> 2 boot (or boot/root) - type MDADM volume
>>> 3 root, if using separate boot - type MDADM volume
The Wanderer wrote:
> On 2021-03-02 at 07:54, Dan Ritter wrote:
>
> > You'll want to create the following partitions on each,
> > identically:
> >
> > 1 efi - type efi
> > 2 boot (or boot/root) - type MDADM volume
> > 3 root, if using separate boot - type MDADM volume
> > 4 swap - type MDADM volume
On Tue, Mar 02, 2021 at 09:26:21AM -0500, The Wanderer wrote:
> I didn't parse what he wrote that way. He said:
>
> You'll want to create the following partitions on each,
> identically:
>
> 1 efi - type efi
> 2 boot (or boot/root) - type MDADM volume
> 3 root, if using separate boot - type MDADM volume
On 2021-03-02 at 09:20, Dave Sherohman wrote:
> On Tue, Mar 02, 2021 at 09:09:52AM -0500, The Wanderer wrote:
>
>> On 2021-03-02 at 09:01, Dave Sherohman wrote:
>>> RAID Device #1 is 1.9 TB, ext4fs, and set to mount on /
>>> RAID Device #2 is 2.0 GB, ESP, and bootable
>>
>> So the EFI partition
On Tue, Mar 02, 2021 at 09:09:52AM -0500, The Wanderer wrote:
> On 2021-03-02 at 09:01, Dave Sherohman wrote:
>
> > On Tue, Mar 02, 2021 at 07:54:01AM -0500, Dan Ritter wrote:
>
> >> Then you go to the mdadm setup and create MDADM RAID1 devices
> >> out of each pair of boot, root and swap.
> >
>
On 2021-03-02 at 09:01, Dave Sherohman wrote:
> On Tue, Mar 02, 2021 at 07:54:01AM -0500, Dan Ritter wrote:
>> Then you go to the mdadm setup and create MDADM RAID1 devices
>> out of each pair of boot, root and swap.
>
> RAID Device #1 is 1.9 TB, ext4fs, and set to mount on /
> RAID Device #2 is 2.0 GB, ESP, and bootable
On Tue, Mar 02, 2021 at 07:54:01AM -0500, Dan Ritter wrote:
> In the installer, you want expert mode. Does the disk
> partitioner recognize both nvme0n1 and nvme1n1 ?
Yes. The list of disks shown in the partitioner is currently:
RAID device #1
RAID device #2
/dev/nvme0n1
/dev/nvme1n1
SCSI2 (0,0
On 2021-03-02 at 07:54, Dan Ritter wrote:
> You'll want to create the following partitions on each,
> identically:
>
> 1 efi - type efi
> 2 boot (or boot/root) - type MDADM volume
> 3 root, if using separate boot - type MDADM volume
> 4 swap - type MDADM volume
>
> Then you go to the mdadm setup and create MDADM RAID1 devices
> out of each pair of boot, root and swap.
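Outside the installer, the same thing by hand looks roughly like this (a sketch; the partition numbers follow Dan's layout above, and the nvme names are taken from elsewhere in the thread):

    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          /dev/nvme0n1p2 /dev/nvme1n1p2    # boot (or boot/root)
    mdadm --create /dev/md1 --level=1 --raid-devices=2 \
          /dev/nvme0n1p3 /dev/nvme1n1p3    # root
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u    # so the initramfs can assemble the arrays at boot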
Dave Sherohman wrote:
> I've got a new server and am currently fighting with the Debian 10
> installer (build 20190702) in my attempts to get it up and running.
> After much wailing and gnashing of teeth, I managed to get it to stop
> complaining about being unable to mount /boot/efi and complete t