On 7/2/23 13:11, Mick Ab wrote:
On Sun, 2 Jul 2023 at 19:58, David Christensen wrote:
On 7/2/23 10:23, Mick Ab wrote:
I have a software RAID 1 array of two hard drives. Each of the two disks
contains the Debian operating system and user data.
I am thinking of changing the motherboard because of problems that might be
connected to the current motherboard.
On 7/2/23 10:23, Mick Ab wrote:
I have a software RAID 1 array of two hard drives. Each of the two disks
contains the Debian operating system and user data.
I am thinking of changing the motherboard because of problems that might be
connected to the current motherboard. The new motherboard would be the same
make and model as the current motherboard.
On Sun, 2 Jul 2023 18:23:31 +0100
Mick Ab wrote:
> I am thinking of changing the motherboard because of problems that
> might be connected to the current motherboard. The new motherboard
> would be the same make and model as the current motherboard.
>
> Would I need to recreate the RAID 1 array f
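For what it's worth, mdadm stores its RAID metadata in superblocks on the
member disks themselves, not on the motherboard, so after a swap to an
identical board the array should normally be found again without recreating
anything. A minimal post-swap sanity check might look like this (assuming
the array comes up as /dev/md0):

  $ cat /proc/mdstat              # kernel's view: is md0 listed and active?
  $ sudo mdadm --assemble --scan  # only needed if it was not auto-assembled
  $ sudo mdadm --detail /dev/md0  # both members present, state "clean"

Recreating the array with mdadm --create is exactly what you do not want to
do on disks that already carry data.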
On Sunday, July 18, 2021 09:37:53 AM David wrote:
> On Sun, 18 Jul 2021 at 21:08, wrote:
> > Interesting -- not surprising, makes sense, but something (for me, at
> > least) to keep in mind -- probably not a good idea to run on an old
> > drive that hasn't been backed up.
>
> Sorry if my language
David Christensen writes:
> You should consider upgrading to Debian 10 -- more people run that and
> you will get better support.
It's on my TODO list. As well as upgrading the very old hardware.
Currently, it's a Gigabyte P35-DS3L with an Intel Core2Duo E8400 CPU
and 8 GB RAM. It's only my pr
On 7/18/21 2:16 AM, Reco wrote:
Hi.
On Sat, Jul 17, 2021 at 02:03:15PM -0700, David Christensen wrote:
But much more noticeable is the difference of data reads of the two
disks, i.e. 55 GB and 27 GB, i.e. roughly twice as much data is read
from /dev/sdb compared to /dev/sda. Trying to f
On 2021-07-18 14:37, David wrote:
On Sun, 18 Jul 2021 at 21:08, wrote:
On Saturday, July 17, 2021 09:30:56 PM David wrote:
> The 'smartctl' manpage explains how to run and abort self-tests.
> It also says that a running test can degrade the performance of the drive.
Interesting -- not surp
On Saturday, July 17, 2021 09:30:56 PM David wrote:
> The 'smartctl' manpage explains how to run and abort self-tests.
> It also says that a running test can degrade the performance of the drive.
Interesting -- not surprising, makes sense, but something (for me, at least)
to keep in mind -- probably not a good idea to run on an old drive that
hasn't been backed up.
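For reference, the self-test commands being discussed are along these lines
(the device name is an assumption):

  $ sudo smartctl -t long /dev/sda      # start an extended self-test in the background
  $ sudo smartctl -l selftest /dev/sda  # check progress and results
  $ sudo smartctl -X /dev/sda           # abort a running self-test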
Hi.
On Sat, Jul 17, 2021 at 02:03:15PM -0700, David Christensen wrote:
> > But much more noticeable is the difference of data reads of the two
> > disks, i.e. 55 GB and 27 GB, i.e. roughly twice as much data is read
> > from /dev/sdb compared to /dev/sda. Trying to figure out the reason
>
On Sun, 18 Jul 2021 at 07:03, David Christensen
wrote:
> On 7/17/21 5:34 AM, Urs Thuermann wrote:
> > On my server running Debian stretch,
> > the storage setup is as follows:
> > Two identical SATA disks with 1 partition on each drive spanning the
> > whole drive, i.e. /dev/sda1 and /dev/sdb1.
On 7/17/21 5:34 AM, Urs Thuermann wrote:
On my server running Debian stretch,
You should consider upgrading to Debian 10 -- more people run that and
you will get better support.
I migrated to FreeBSD.
the storage setup is as follows:
Two identical SATA disks with 1 partition on each dri
Hi Urs,
Your plan to change the SATA cable seems wise - your various error
rates are higher than I have normally seen.
Also worth bearing in mind that Linux MD RAID 1 will satisfy all
read IO for a given operation from one device in the mirror. If
you have processes that do occasional big reads t
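One way to watch that read imbalance live, assuming the sysstat package is
installed and the mirror halves are sda and sdb:

  $ iostat -dx sda sdb 5   # compare the read columns (rkB/s) of the two devices every 5 s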
On 7/17/21 08:34, Urs Thuermann wrote:
Here, the noticeable lines are IMHO
Raw_Read_Error_Rate (208245592 vs. 117642848)
Command_Timeout (8 14 17 vs. 0 0 0)
UDMA_CRC_Error_Count(11058 vs. 29)
Do these numbers indicate a serious problem with my /dev/sda drive?
And i
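Those attributes come from the SMART attribute table, i.e. something like:

  $ sudo smartctl -A /dev/sda   # attribute table; run the same against /dev/sdb to compare
  $ sudo smartctl -H /dev/sda   # overall health verdict

A climbing UDMA_CRC_Error_Count is classically a cable or connector problem
rather than failing media, which fits the advice elsewhere in the thread to
swap the SATA cables first.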
I'm going to echo your final thought there: Replace the SATA cables with 2
NEW ones of the same model. Then see how it goes, meaning rerun the tests
you just ran. If possible, try to make the geometries of the cables as
similar as you can: roughly same (short?) lengths, roughly as straight and
cong
On 2021-01-24 21:23, mick crane wrote:
On 2021-01-24 20:10, David Christensen wrote:
Please tell us why you must put the OS and the backup images on the
same RAID mirror of two HDD's, and why you cannot add one (or two?)
more devices for the OS.
I think I'll go with the first and last sugges
Thanks Andy and Linux-Fan, for the detailed reply.
mick crane wrote:
> I think I'll go with the first and last suggestion to just have 2 disks
> in raid1.
> It seems that properly you'd want 2 disks in raid for the OS, 2 at least
> for the pool and maybe 1 for the cache.
> Don't have anything big enough I could put 5 disks in.
> I could probably g
Hi Pankaj,
Not wishing to put words in Linux-Fan's mouth, but my own views
are…
On Mon, Jan 25, 2021 at 11:04:09AM +0530, Pankaj Jangid wrote:
> Linux-Fan writes:
>
> > * OS data bitrot is not covered, but OS single HDD failure is.
> > I achieve this by having OS and Swap on MDADM RAID 1
> >
On Sun, 24 Jan 21, 23:21:38, Linux-Fan wrote:
> mick crane writes:
>
> > On 2021-01-24 17:37, Andrei POPESCU wrote:
>
> [...]
>
> > > If you want to combine Linux RAID and ZFS on just two drives you could
> > > partition the drives (e.g. two partitions on each drive), use the first
> > > partitio
Linux-Fan writes:
> * OS data bitrot is not covered, but OS single HDD failure is.
> I achieve this by having OS and Swap on MDADM RAID 1
> i.e. mirrored but without ZFS.
I am still learning.
1. By "by having OS and Swap on MDADM", did you mean the /boot partition
and swap?
2. Why did y
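To make the layout Linux-Fan describes concrete, a minimal sketch: two
mirrored md arrays, one for the root filesystem and one for swap. The
partition names are assumptions, and in practice the Debian installer sets
this up for you:

  $ sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1  # OS
  $ sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2  # swap
  $ sudo mkfs.ext4 /dev/md0
  $ sudo mkswap /dev/md1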
mick crane writes:
On 2021-01-24 17:37, Andrei POPESCU wrote:
[...]
If you want to combine Linux RAID and ZFS on just two drives you could
partition the drives (e.g. two partitions on each drive), use the first
partition on each drive for Linux RAID, install Debian (others will have
to confi
On Sun, 24 Jan 21, 17:50:06, Andy Smith wrote:
>
> Once it's up and running you can then go and create a second
> partition that spans the rest of each disk, and then when you are
> ready to create your zfs pool:
>
> > "zpool create tank mirror disk1 disk2"
>
> # zpool create tank mirror /dev/dis
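The point of the correction: give zpool stable /dev/disk/by-id names rather
than sdX names, which can change between boots. A sketch with placeholder
IDs (substitute the real ata-... entries from ls -l /dev/disk/by-id):

  # zpool create tank mirror \
      /dev/disk/by-id/ata-MODELNAME_SERIAL1 \
      /dev/disk/by-id/ata-MODELNAME_SERIAL2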
On 2021-01-24 03:36, mick crane wrote:
Let's say I have one PC and 2 unpartitioned disks.
Please tell us why you must put the OS and the backup images on the same
RAID mirror of two HDD's, and why you cannot add one (or two?) more
devices for the OS.
David
Andy Smith writes:
>...
>So personally I would just do the install of Debian with both disks
>inside the machine, manual partitioning, create a single partition
>big enough for your OS on the first disk and then another one the
>same on the second disk. Mark them as RAID members, set them to
>RAID
On 2021-01-24 17:37, Andrei POPESCU wrote:
On Sun, 24 Jan 21, 11:36:09, mick crane wrote:
I know I'm a bit thick about these things, what I'm blocked about is
where is the OS.
Let's say I have one PC and 2 unpartitioned disks.
Put one disk in PC and install Debian on it.
Ok
Install headers
Hi Mick,
On Sun, Jan 24, 2021 at 11:36:09AM +0000, mick crane wrote:
> I know I'm a bit thick about these things, what I'm blocked about is where
> is the OS.
Wherever you installed it.
> Let's say I have one PC and 2 unpartitioned disks.
> Put one disk in PC and install Debian on it.
I think y
On Sun, 24 Jan 21, 11:36:09, mick crane wrote:
>
> I know I'm a bit thick about these things, what I'm blocked about is where
> is the OS.
> Let's say I have one PC and 2 unpartitioned disks.
> Put one disk in PC and install Debian on it.
Ok
> Install headers and ZFS-utils.
> I put other disk in
On 2021-01-23 07:01, mick crane wrote:
On 2021-01-23 12:20, Andrei POPESCU wrote:
On Fri, 22 Jan 21, 22:26:46, mick crane wrote:
hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so what's
scattered about is on the running disks and th
mick crane writes:
On 2021-01-23 17:11, Linux-Fan wrote:
mick crane writes:
[...]
Please note that "root on ZFS" is possible but quite complicated:
https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html
For my current system I actually use
On Fri, 22 Jan 21, 22:26:46, mick crane wrote:
> hello,
> I want to tidy things up as suggested.
> Have one old PC that I'll put 2 disks in and tidy everything up so what's
> scattered about is on the running disks and this new/old one is just backup
> for them.
> Can I assume that Debian installer
On 2021-01-22 15:10, David Christensen wrote:
A key issue with storage is bit rot.
I should have said "bit rot protection".
David
On 2021-01-22 14:26, mick crane wrote:
hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so
what's scattered about is on the running disks and this new/old one is
just backup for them.
Can I assume that Debian installer in some expert
mick crane writes:
hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so what's
scattered about is on the running disks and this new/old one is just backup
for them.
Can I assume that Debian installer in some expert mode will sort ou
On Sat, 14 Nov 2020 12:15:47 -0700
Charles Curley wrote:
> Or (afterthought here) did I give it the wrong UUID?
A week later, I came back to this. It appears I did use the wrong UUID
in /etc/crypttab.
root@hawk:~# ll /dev/disk/by-uuid/
total 0
drwxr-xr-x 2 root root 180 Nov 20 10:25 ./
drwxr-xr
I have more or less the same configuration. I am a no-systemd user
(yet?) so I cannot show you the full example.
You could verify:
- Is there an mdraid1x module in your grub menu entry?
- If I'm not wrong, you made your RAID with mdadm metadata version 1.2. I
think in this version metadata is located at
On Sat, 14 Nov 2020 08:12:41 +0100
john doe wrote:
> >
> > What do I do to automate that?
> >
>
>
>
> Is your '/etc/crypttab' file properly populated?
Well, I thought it was
At first I got the UUID for the RAID device, /dev/md0:
root@hawk:~# mdadm --detail /dev/md0
/dev/md0:
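As the later follow-up in this thread shows, the mistake was using the
wrong UUID: mdadm --detail reports the md array's own UUID, which is not
what /etc/crypttab wants. crypttab needs the UUID of the LUKS container
sitting on the array, as reported by blkid (the UUID below is a placeholder):

  root@hawk:~# blkid /dev/md0
  /dev/md0: UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="crypto_LUKS"

  # corresponding /etc/crypttab line
  md0_crypt UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx none luks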
On 11/14/2020 4:23 AM, Charles Curley wrote:
I've added RAID and two new hard drives to my desktop. The RAID appears
to work, once it is up and running. Alas, on boot it is not being
properly set up. Everything else comes up correctly.
I have two new four terabyte drives set aside for RAID. They
On 10/26/2020 7:55 AM, Bill wrote:
Hi folks,
So we're setting up a small server with a pair of 1 TB hard disks
sectioned into 5x100GB Raid 1 partition pairs for data, with 400GB+
reserved for future uses on each disk.
Oh, also, why are you leaving so much unused space on the drives? On
This might be better handled on linux-r...@vger.kernel.org
On 10/26/20 4:55 AM, Bill wrote:
> lsblk reveals sda and sdb with sda[1-5] and sdb[1-5] but no md[0-5].
> blkid reveals that sda[1-5] and sdb[1-5] are still listed as
> TYPE="linux_raid_member".
>
> So first of all I'd like to be able to diagnose what's going on. What
> commands should I use for
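A reasonable starting set for that diagnosis (partition names taken from
the post):

  $ cat /proc/mdstat                  # does the kernel know about any md arrays?
  $ sudo mdadm --examine /dev/sda1    # inspect the RAID superblock on one member
  $ sudo mdadm --assemble --scan -v   # try to assemble everything, verbosely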
Hi folks,
So we're setting up a small server with a pair of 1 TB hard
disks sectioned into 5x100GB Raid 1 partition pairs for data, with
400GB+ reserved for future uses on each disk. I'm not sure what
happened, we had the five pairs of disk partitions set up properly
through the installer without
Bill wrote:
> So we're setting up a small server with a pair of 1 TB hard disks sectioned
> into 5x100GB Raid 1 partition pairs for data, with 400GB+ reserved for
> future uses on each disk.
That's weird, but I expect you have a reason for it.
> I'm not sure what happened, we had the five pairs
Eduardo M KALINOWSKI:
> On Tue, 06 Nov 2018, Finariu Florin wrote:
> > Hi,
> > Somebody can help me with some information about why I cannot see the
> > Raid0 created in BIOS?
> > I have a motherboard EPC602D8A with 2 chipsets: Intel C602 (Sata 2 x 4,
> > Sata 3 x 2) and Marvell SE9172 (Sata 3 x
On Tue, 06 Nov 2018, Finariu Florin wrote:
Hi,
Somebody can help me with some information about why I cannot see
the Raid0 created in BIOS?
I have a motherboard EPC602D8A with 2 chipsets: Intel C602 (Sata 2 x
4, Sata 3 x 2) and Marvell SE9172 (Sata 3 x 2). I create in BIOS a
Raid0 on Marv
On 2018-11-06 15:49, Finariu Florin wrote:
Hi,
Somebody can help me with some information about why I cannot see the
Raid0 created in BIOS?
I have a motherboard EPC602D8A with 2 chipsets: Intel C602 (Sata 2 x
4, Sata 3 x 2) and Marvell SE9172 (Sata 3 x 2). I create in BIOS a
Raid0 on Marvel and
Hi.
On Mon, Nov 05, 2018 at 02:53:34PM +, Finariu Florin wrote:
> I have a motherboard EPC602D8A with 2 chipsets: Intel C602 (Sata2 x 4, Sata3
> x 2)
A fakeraid, aka Intel Matrix RAID. There should be some mdadm support
for this, but you might as well use mdadm to create your RAID fro
Finariu Florin:
> Hi, somebody can help me with some information about why I cannot see the
> Raid0 created in BIOS?
> I have a motherboard EPC602D8A with 2 chipsets: Intel C602 (Sata2 x 4, Sata3
> x 2) and Marvell SE9172 (Sata3 x 2). I create in BIOS a Raid0 on Marvel and
> another Raid0 on Int
On 8. Nov 2017, at 21:58, deloptes wrote:
>
> Tobx wrote:
>
>> VERBOSE=false
>
> perhaps set to true and see what it says.
The comment to this option states:
# if this variable is set to true, mdadm will be a little more verbose e.g.
# when creating the initramfs.
I tried that, but I did
Tobx wrote:
> RAID assembling at boot only works when no journal device is involved.
>
I can't help much here, nothing to compare. I forgot to mention that the md
driver is compiled into the kernel in my case.
> VERBOSE=false
perhaps set to true and see what it says.
>
> Options in /etc/mdadm/mdad
I was on 4.9.0-4 (Stretch), now tried with 4.13.0-0 but had no luck.
I also tried it again on a clean Ubuntu-Server 17.10 with Kernel 4.13.0-16 and
had exactly the same issue:
RAID assembling at boot only works when no journal device is involved.
> On 7. Nov 2017, at 20:04, deloptes wrote:
>
Tobx wrote:
> What am I missing?
I don't know if it is related, and I don't use raid5 but rather raid1, but in
the past year or so I experienced something similar with our server. Now I run
4.12.10 and noticed in the changelog/release notes that there are a lot of
fixes in the md stack. The issues are
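Not specific to the journal-device problem, but the usual Debian checklist
for "assembles by hand, not at boot" is to make sure the initramfs knows
about the array, roughly:

  $ sudo mdadm --detail --scan   # prints the ARRAY line(s)
  # add any missing ARRAY line to /etc/mdadm/mdadm.conf, then:
  $ sudo update-initramfs -u     # rebuild so early boot can assemble it

Whether that helps when a journal device is attached is exactly what this
thread leaves open.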
Thank you! I will try this procedure this week.
Tim
On 9/18/2015 5:04 PM, linuxthefish wrote:
Tim,
From what I remember it's best to set it up when you're installing the
system, then you can install the bootloader to /boot in RAID 1.
https://blog.sleeplessbeastie.eu/2013/10/04/how-to-configure-software-raid1-during-installation-process/
Tim,
From what I remember it's best to set it up when you're installing the
system, then you can install the bootloader to /boot in RAID 1.
https://blog.sleeplessbeastie.eu/2013/10/04/how-to-configure-software-raid1-during-installation-process/
is what I followed.
Thanks,
Edmund
On 18 September 2
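On a system that is already installed, the equivalent of what the installer
does is to put GRUB on both mirror members so either disk can boot alone
(disk names assumed):

  $ sudo grub-install /dev/sda
  $ sudo grub-install /dev/sdb
  $ sudo update-grub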
On 01/07/15 10:12 PM, Don Armstrong wrote:
On Wed, 01 Jul 2015, Gary Dale wrote:
You missed the point that this would require different partition
tables on the two drives.
I saw and rejected the point because you do not need different partition
tables, as is illustrated below from my original m
On 01/07/15 07:01 PM, Arno Schuring wrote:
Date: Wed, 1 Jul 2015 18:41:35 -0400
From: garyd...@torfree.net
On 01/07/15 03:24 PM, Don Armstrong wrote:
On Wed, 01 Jul 2015, Gary Dale wrote:
The size of the RAID array is set by the smallest partition so if you
want to be able to boot from eithe
On Wed, 01 Jul 2015, Gary Dale wrote:
> You missed the point that this would require different partition
> tables on the two drives.
I saw and rejected the point because you do not need different partition
tables, as is illustrated below from my original message.
% diff -u <(sudo fdisk -l /dev/sd
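The truncated command was presumably comparing the partition tables of the
two drives, along the lines of (a reconstruction, not the original text):

  % diff -u <(sudo fdisk -l /dev/sda) <(sudo fdisk -l /dev/sdb)

Empty output would mean the two tables are identical, which is the point
being made.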
> Date: Wed, 1 Jul 2015 18:41:35 -0400
> From: garyd...@torfree.net
>
> On 01/07/15 03:24 PM, Don Armstrong wrote:
>> On Wed, 01 Jul 2015, Gary Dale wrote:
>>> The size of the RAID array is set by the smallest partition so if you
>>> want to be able to boot from either drive, then putting the ef0
On Wed, 01 Jul 2015, Gary Dale wrote:
> The size of the RAID array is set by the smallest partition so if you
> want to be able to boot from either drive, then putting the ef02
> partition in the free space on the new drive means that you will
> either not be able to boot from the old drive should
Gary Dale wrote:
> On 01/07/15 10:39 AM, Sven Hartge wrote:
>> Gary Dale wrote:
>>> On 30/06/15 07:04 PM, Lisi Reisz wrote:
On Tuesday 30 June 2015 23:30:46 Sven Hartge wrote:
> Wow. 100MB for a bios_grub partition wastes about 99.8MB.
Which used to matter. But out of 2T???
>>> Ag
On 01/07/15 05:38 AM, Pascal Hambourg wrote:
Gary Dale wrote:
On 30/06/15 02:17 PM, Pascal Hambourg wrote:
What I would do is shrink partition 5 by 100M then create a new ef02
partition in the freed space.
Why on earth would you want to do such a dangerous and useless thing ?
As I wrote in
Pascal Hambourg wrote:
> Sven Hartge wrote:
>> I had an interesting problem once, where I upgraded a server from
>> Squeeze to Wheezy. This server had LVM on MD-RAID and so the core.img
>> has to include the LVM- and MD-RAID drivers in addition to the ext3
>> code.
>>
>> With Squeeze this co
Gary Dale wrote:
> On 30/06/15 02:56 PM, Pascal Hambourg wrote:
>> You don't need to update the BIOS. All a sensible BIOS has to do is load
>> the boot code in the MBR, regardless of the partition table style.
>>
> That's not true. I have a Dell laptop that needed a BIOS update after I
> switch
Thanks Gary and Pascal and all for your very informative inputs and
support. Now it works and my drive is bootable.
However, one thing I have noticed, which is also off topic, is that I
cannot boot my new GPT hard drive with the supergrub CD. When it reaches
the point where the kernel loads, it restarts immediately
Gary Dale wrote:
> On 30/06/15 02:17 PM, Pascal Hambourg wrote:
>>
>>> What I would do is shrink partition 5 by 100M then create a new ef02
>>> partition in the freed space.
>>
>> Why on earth would you want to do such a dangerous and useless thing ?
>> As I wrote in a previous message, there is
Sven Hartge wrote:
>
> I had an interesting problem once, where I upgraded a server from
> Squeeze to Wheezy. This server had LVM on MD-RAID and so the core.img
> has to include the LVM- and MD-RAID drivers in addition to the ext3
> code.
>
> With Squeeze this core.img just so fitted into the
Gary Dale wrote:
> On 30/06/15 02:36 PM, Muhammad Yousuf Khan wrote:
>>
>> Should I create partition 2 with a size of 1 GB, make it a boot
>> partition, and install grub on that partition? [...]
>
> I've stopped using separate boot partitions since wheezy, which allowed
> systems to boot d
Hi.
On Tue, Jun 30, 2015 at 11:17:11PM -0400, Gary Dale wrote:
> On 30/06/15 07:04 PM, Lisi Reisz wrote:
> >On Tuesday 30 June 2015 23:30:46 Sven Hartge wrote:
> >>Wow. 100MB for a bios_grub partition wastes about 99.8MB.
> >Which used to matter. But out of 2T???
> >
> >Lisi
> >
> >
> Agreed. I
Partition is created and grub is installed as instructed in the thread.
Also Grub is installed now.
Should I make any config changes (or not) in the /boot/grub/ directory?
Number  Start (sector)  End (sector)  Size     Code  Name
   1    2048            7813119      3.7 GiB  FD00  Linux
On 30/06/15 07:04 PM, Lisi Reisz wrote:
On Tuesday 30 June 2015 23:30:46 Sven Hartge wrote:
Wow. 100MB for a bios_grub partition wastes about 99.8MB.
Which used to matter. But out of 2T???
Lisi
Agreed. I remember having a 100M /boot partition which was always
running out of space if I didn
On Tuesday 30 June 2015 23:30:46 Sven Hartge wrote:
> Wow. 100MB for a bios_grub partition wastes about 99.8MB.
Which used to matter. But out of 2T???
Lisi
Gary Dale wrote:
> There is, but 2048 sectors is only 1M. Shrinking the swap partition and
> creating a 100M ef02 partition in the free space leaves a lot more
> headroom. Just because something fits today doesn't mean it will always fit.
Wow. 100MB for a bios_grub partition wastes about 99.8M
Gary Dale wrote:
> If you are referring to the ef02 partition, you don't install grub on
> it. In fact, installing grub on the mbr is preferred.
The first stage is put into the 512 Bytes of the MBR. The rest (the
core.img) is installed into the bios_grub partition.
With a MSDOS partition tabl
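In other words, on a BIOS-booted GPT disk there is no post-MBR gap, so
core.img needs a dedicated bios_grub partition. Flagging one and
reinstalling is roughly (the partition number is an assumption):

  $ sudo parted /dev/sda set 2 bios_grub on   # mark the small spare partition
  $ sudo grub-install /dev/sda                # embeds core.img into it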
On Tuesday 30 June 2015 14:56:46 Pascal Hambourg wrote:
> Gene Heskett wrote:
> > On Tuesday 30 June 2015 11:55:20 Pascal Hambourg wrote:
> >> Actually GPT does not use *all* that space and GRUB could still
> >> find out the available unused space. But the GPT partition table
> >> could grow a
On 30/06/15 03:05 PM, Muhammad Yousuf Khan wrote:
Ramadan Kareem Yousuf.
What I would do is shrink partition 5 by 100M then create a new
ef02 partition in the freed space. This should be completely safe
since it is just a swap partition and contains no permanent data.
D
On 30/06/15 02:36 PM, Muhammad Yousuf Khan wrote:
Sorry Arno, my last message was mistakenly sent to you only and not to the
list.
Thanks all for your comments.
Pascal, thanks for the tip about extending one of the partitions and
using the free space. I will do so, but for now my primary problem is t
On 30/06/15 02:56 PM, Pascal Hambourg wrote:
Gene Heskett wrote:
On Tuesday 30 June 2015 11:55:20 Pascal Hambourg wrote:
Actually GPT does not use *all* that space and GRUB could still find
out the available unused space. But the GPT partition table could grow
and overwrite the beginning of
On 30/06/15 02:17 PM, Pascal Hambourg wrote:
Gary Dale wrote:
Number  Start (sector)  End (sector)  Size       Code  Name
   1    2048            7813119      3.7 GiB    FD00  Linux RAID
   3    27344896        1980469247   931.3 GiB  FD00  Linux RAID
   4    1980469248      2930
Muhammad Yousuf Khan wrote:
>
> Pascal, thanks for the tip about extending one of the partitions and using
> the free space. I will do so, but for now my primary problem is to create a
> boot partition so that I can boot, and replace the old 1.5 TB with the 2TB
> drive. As you said, the boot partition in GPT
On Tue, 30 Jun 2015, Don Armstrong wrote:
> Just create a new partition before the 2048 sector to use as a grub
> bios_grub partition.
>
> For example:
>
> sudo gdisk /dev/sda
> p = print
> n = new partition
> number = 2
> start sector = 34 (or as close to zero as you can get)
> end sector = 2047
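The quoted recipe breaks off before the final steps; in the same notation
they would presumably continue (a reconstruction, check before writing):

  t = change a partition's type code
  number = 2
  hex code = ef02 (BIOS boot partition)
  w = write the table and exit
  then: sudo grub-install /dev/sda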
Gene Heskett wrote:
>
> On Tuesday 30 June 2015 11:55:20 Pascal Hambourg wrote:
>>
>> Actually GPT does not use *all* that space and GRUB could still find
>> out the available unused space. But the GPT partition table could grow
>> and overwrite the beginning of the bootloader, so I guess it wa
>
>
>> Ramadan Kareem Yousuf.
>
> What I would do is shrink partition 5 by 100M then create a new ef02
> partition in the freed space. This should be completely safe since it is
> just a swap partition and contains no permanent data.
>
> Do this on both drives after stopping swap (swapoff) then
On Tue, 30 Jun 2015, Muhammad Yousuf Khan wrote:
> Should I create partition 2 with a size of 1 GB, make it a boot
> partition, and install grub on that partition? Do you think performing
> these steps will do the job, or do I have to do more in order to boot my new
> 2TB GPT drive.
Just create a ne
Arno Schuring wrote:
>
> As Pascal has said, the easiest is to create a new partition in the
> free space before partition 1 (sectors 34-2047). Make sure it has the
> correct type for a Bios Boot Partition (gdisk type ef02, with parted
> you need to set the Bootable flag).
Actually no. Parted