Hi,
hw wrote:
> with CDs/DVDs, writing is not so easy.
Thus it is not as easy to overwrite them by mistake.
The complicated part of optical burning can be put into scripts.
But i agree that modern HDD sizes cannot be easily covered by optical
media.
I wrote:
> > [...] LTO tapes [...]
hw wrote:
On Thu, 2022-11-10 at 15:32 +0100, Thomas Schmitt wrote:
> Hi,
>
> i wrote:
> > > the time window in which the backed-up data
> > > can become inconsistent on the application level.
>
> hw wrote:
> > Or are you referring to the data being altered while a backup is in
> > progress?
>
> Yes.
Ah I
On Mon, 2022-11-14 at 15:08 -0500, Michael Stone wrote:
> On Mon, Nov 14, 2022 at 08:40:47PM +0100, hw wrote:
> > Not really, it was just an SSD. Two of them were used as cache and that
> > they failed was not surprising. It's really unfortunate that SSDs fail
> > particularly fast when used for pu
On Mon, 2022-11-14 at 20:37 +0100, Linux-Fan wrote:
> hw writes:
>
> > On Fri, 2022-11-11 at 22:11 +0100, Linux-Fan wrote:
> > > hw writes:
> > > > On Thu, 2022-11-10 at 22:37 +0100, Linux-Fan wrote:
> [...]
> > How do you intend to copy files at any other level than at file level? At
> > that
On 11/14/22 13:48, hw wrote:
On Fri, 2022-11-11 at 21:55 -0800, David Christensen wrote:
Lots of snapshots slows down commands that involve snapshots (e.g. 'zfs
list -r -t snapshot ...'). This means sysadmin tasks take longer when
the pool has more snapshots.
Hm, how long does it take? It
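One way to put a number on it (a rough sketch; 'tank' stands in for the real pool name):
# time the snapshot listing and count how many snapshots it returns
time zfs list -r -t snapshot tank | wc -l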
hw writes:
On Fri, 2022-11-11 at 21:26 +0100, Linux-Fan wrote:
> hw writes:
>
> > On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
> > > On Thu, Nov 10, 2022 at 06:55:27PM +0100, hw wrote:
> > > > On Thu, 2022-11-10 at 11:57 -0500, Michael Stone wrote:
> > > > > On Thu, Nov 10, 2022 at 05
On Fri, 2022-11-11 at 21:55 -0800, David Christensen wrote:
> [...]
> As with most filesystems, performance of ZFS drops dramatically as you
> approach 100% usage. So, you need a data destruction policy that keeps
> storage usage and performance at acceptable levels.
>
> Lots of snapshots slows
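A minimal sketch for keeping an eye on both, assuming a pool named 'tank' and placeholder dataset/snapshot names:
# show how full the pool is (the CAP column)
zpool list -o name,size,alloc,free,cap tank
# retire a snapshot that has aged out of the retention policy
zfs destroy tank/data@2022-05-01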
On Fri, 2022-11-11 at 21:26 +0100, Linux-Fan wrote:
> hw writes:
>
> > On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
> > > On Thu, Nov 10, 2022 at 06:55:27PM +0100, hw wrote:
> > > > On Thu, 2022-11-10 at 11:57 -0500, Michael Stone wrote:
> > > > > On Thu, Nov 10, 2022 at 05:34:32PM +010
On Mon, Nov 14, 2022 at 08:40:47PM +0100, hw wrote:
Not really, it was just an SSD. Two of them were used as cache and that they
failed was not surprising. It's really unfortunate that SSDs fail particularly
fast when used for purposes they can be particularly useful for.
If you buy hard drives and
On Fri, 2022-11-11 at 14:48 -0500, Michael Stone wrote:
> On Fri, Nov 11, 2022 at 07:15:07AM +0100, hw wrote:
> > There was no misdiagnosis. Have you ever had a failed SSD? They usually
> > just
> > disappear.
>
> Actually, they don't; that's a somewhat unusual failure mode.
What else happens?
hw writes:
On Fri, 2022-11-11 at 22:11 +0100, Linux-Fan wrote:
> hw writes:
> > On Thu, 2022-11-10 at 22:37 +0100, Linux-Fan wrote:
>
> [...]
>
> > > If you do not value the uptime making actual (even
> > > scheduled) copies of the data may be recommendable over
> > > using a RAID bec
On Sat, 2022-11-12 at 07:27 +0100, to...@tuxteam.de wrote:
> On Fri, Nov 11, 2022 at 07:22:19PM +0100, to...@tuxteam.de wrote:
>
> [...]
>
> > I think what hede was hinting at was that early SSDs had a (pretty)
> > limited number of write cycles [...]
>
> As was pointed out to me, the OP wasn't
On Fri, 2022-11-11 at 17:05 +, Curt wrote:
> On 2022-11-11, wrote:
> >
> > I just contested that their failure rate is higher than that of HDDs.
> > This is something which was true in early days, but nowadays it seems
> > to be just a prejudice.
>
> If he prefers extrapolating his anecdota
On 11/13/22 13:02, hw wrote:
On Fri, 2022-11-11 at 07:55 -0500, Dan Ritter wrote:
hw wrote:
On Thu, 2022-11-10 at 20:32 -0500, Dan Ritter wrote:
Linux-Fan wrote:
[...]
* RAID 5 and 6 restoration incurs additional stress on the other
disks in the RAID which makes it more likely that one of
On Fri, 2022-11-11 at 22:11 +0100, Linux-Fan wrote:
> hw writes:
>
> > On Thu, 2022-11-10 at 22:37 +0100, Linux-Fan wrote:
>
> [...]
>
> > > If you do not value the uptime making actual (even
> > > scheduled) copies of the data may be recommendable over
> > > using a RAID because such
On Fri, 2022-11-11 at 07:55 -0500, Dan Ritter wrote:
> hw wrote:
> > On Thu, 2022-11-10 at 20:32 -0500, Dan Ritter wrote:
> > > Linux-Fan wrote:
> > >
> > >
> > > [...]
> > > * RAID 5 and 6 restoration incurs additional stress on the other
> > > disks in the RAID which makes it more likely th
David Christensen wrote:
> The Intel Optane Memory Series products are designed to be cache devices --
> when using compatible hardware, Windows, and Intel software. My hardware
> should be compatible (Dell PowerEdge T30), but I am unsure if FreeBSD 12.3-R
> will see the motherboard NVMe slot or
On Fri, Nov 11, 2022 at 07:22:19PM +0100, to...@tuxteam.de wrote:
[...]
> I think what hede was hinting at was that early SSDs had a (pretty)
> limited number of write cycles [...]
As was pointed out to me, the OP wasn't hede. It was hw. Sorry for the
mis-attribution.
Cheers
--
t
On 11/11/22 00:43, hw wrote:
On Thu, 2022-11-10 at 21:14 -0800, David Christensen wrote:
On 11/10/22 07:44, hw wrote:
On Wed, 2022-11-09 at 21:36 -0800, David Christensen wrote:
On 11/9/22 00:24, hw wrote:
> On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
Taking snapshots is
hw writes:
On Thu, 2022-11-10 at 22:37 +0100, Linux-Fan wrote:
[...]
> If you do not value the uptime making actual (even
> scheduled) copies of the data may be recommendable over
> using a RAID because such schemes may (among other advantages)
> protect you from accidental f
hw writes:
On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
> On Thu, Nov 10, 2022 at 06:55:27PM +0100, hw wrote:
> > On Thu, 2022-11-10 at 11:57 -0500, Michael Stone wrote:
> > > On Thu, Nov 10, 2022 at 05:34:32PM +0100, hw wrote:
> > > > And mind you, SSDs are *designed to fail* the soo
On Fri, Nov 11, 2022 at 02:05:33PM -0500, Dan Ritter wrote:
300TB/year. That's a little bizarre: it's 9.51 MB/s. Modern
high end spinners also claim 200MB/s or more when feeding them
continuous writes. Apparently WD thinks that can't be sustained
more than 5% of the time.
Which makes sense for
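For reference, the quoted rate can be reproduced with a one-liner:
# 300 TB per year expressed as an average transfer rate in MB/s
echo 'scale=2; 300 * 10^12 / (365 * 24 * 3600) / 10^6' | bc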
On Fri, Nov 11, 2022 at 09:03:45AM +0100, hw wrote:
On Thu, 2022-11-10 at 23:12 -0500, Michael Stone wrote:
The advantage to RAID 6 is that it can tolerate a double disk failure.
With RAID 1 you need 3x your effective capacity to achieve that and even
though storage has gotten cheaper, it hasn't
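The capacity trade-off is easy to put in numbers; a throwaway calculation for six hypothetical 4 TB disks:
# usable space while still surviving any two disk failures
echo "RAID 6 over 6 disks:       $(( (6 - 2) * 4 )) TB"
echo "3-way RAID 1 over 6 disks: $(( 6 / 3 * 4 )) TB"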
On Fri, Nov 11, 2022 at 07:15:07AM +0100, hw wrote:
There was no misdiagnosis. Have you ever had a failed SSD? They usually just
disappear.
Actually, they don't; that's a somewhat unusual failure mode. I have had
a couple of ssd failures, out of hundreds. (And I think mostly from a
specific
to...@tuxteam.de wrote:
>
> I think what hede was hinting at was that early SSDs had a (pretty)
> limited number of write cycles per "block" [1] before failure; they had
> (and have) extra blocks to substitute broken ones and do a fair amount
> of "wear leveling behind the scenes. So it made more
Jeffrey Walton wrote:
> On Fri, Nov 11, 2022 at 2:01 AM wrote:
> >
> > On Fri, Nov 11, 2022 at 07:15:07AM +0100, hw wrote:
> > > On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
> >... Here's a report
> > by folks who do lots of HDDs and SDDs:
> >
> > https://www.backblaze.com/blog/backb
On Fri, Nov 11, 2022 at 12:53:21PM -0500, Jeffrey Walton wrote:
> On Fri, Nov 11, 2022 at 2:01 AM wrote:
> >
> > On Fri, Nov 11, 2022 at 07:15:07AM +0100, hw wrote:
> > > On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
> >... Here's a report
> > by folks who do lots of HDDs and SDDs:
> >
>
On Fri, Nov 11, 2022 at 2:01 AM wrote:
>
> On Fri, Nov 11, 2022 at 07:15:07AM +0100, hw wrote:
> > On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
>... Here's a report
> by folks who do lots of HDDs and SDDs:
>
> https://www.backblaze.com/blog/backblaze-hard-drive-stats-q1-2021/
>
> The
On Fri, Nov 11, 2022 at 05:05:51PM -, Curt wrote:
> On 2022-11-11, wrote:
> >
> > I just contested that their failure rate is higher than that of HDDs.
[...]
> If he prefers extrapolating his anecdotal personal experience to a
> general rule rather than applying a verifiable general rule to
On 2022-11-11, wrote:
>
> I just contested that their failure rate is higher than that of HDDs.
> This is something which was true in early days, but nowadays it seems
> to be just a prejudice.
If he prefers extrapolating his anecdotal personal experience to a
general rule rather than applying a
hw wrote:
> On Thu, 2022-11-10 at 20:32 -0500, Dan Ritter wrote:
> > Linux-Fan wrote:
> >
> >
> > [...]
> > * RAID 5 and 6 restoration incurs additional stress on the other
> > disks in the RAID which makes it more likely that one of them
> > will fail. The advantage of RAID 6 is that it ca
On 10.11.2022 14:40, Curt wrote:
(or maybe a RAID array is
conceivable over a network and a distance?).
Not only conceivable, but indeed practicable: Linbit DRBD
On Thursday, November 10, 2022 09:06:39 AM Dan Ritter wrote:
> If you need a filesystem that is larger than a single disk (that you can
> afford, or that exists), RAID is the name for the general approach to
> solving that.
Picking a nit, I would say: "RAID is the name for *a* general approach to
On Fri, Nov 11, 2022 at 09:12:36AM +0100, hw wrote:
> Backblaze does all kinds of things.
whatever.
> > The gist, for disks playing similar roles (they don't yet use SSDs for bulk
> > storage, because of the costs): 2/1518 failures for SSDs, 44/1669 for HDDs.
> >
> > I'll leave the maths as an e
On 11.11.2022 at 07:36, hw wrote:
> That's on https://docs.freebsd.org/en/books/handbook/zfs/
>
> I don't remember where I read about 8, could have been some documentation
> about
> FreeNAS.
Well, OTOH there do exist some considerations, which may have led to
that number sticking somewhere, bu
On Thu, 2022-11-10 at 13:40 +, Curt wrote:
> On 2022-11-08, The Wanderer wrote:
> >
> > That more general sense of "backup" as in "something that you can fall
> > back on" is no less legitimate than the technical sense given above, and
> > it always rubs me the wrong way to see the unconditio
On Thu, 2022-11-10 at 21:14 -0800, David Christensen wrote:
> On 11/10/22 07:44, hw wrote:
> > On Wed, 2022-11-09 at 21:36 -0800, David Christensen wrote:
> > > On 11/9/22 00:24, hw wrote:
> > > > On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
>
> [...]
>
> >
> Taking snapshots is
On Fri, 2022-11-11 at 08:01 +0100, to...@tuxteam.de wrote:
> On Fri, Nov 11, 2022 at 07:15:07AM +0100, hw wrote:
> > On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
>
> [...]
>
> > Why would anyone use SSDs for backups? They're way too expensive for that.
>
> Possibly.
>
> > So far, th
On Thu, 2022-11-10 at 23:12 -0500, Michael Stone wrote:
> On Thu, Nov 10, 2022 at 08:32:36PM -0500, Dan Ritter wrote:
> > * RAID 5 and 6 restoration incurs additional stress on the other
> > disks in the RAID which makes it more likely that one of them
> > will fail.
>
> I believe that's mostly
On Thu, 2022-11-10 at 20:32 -0500, Dan Ritter wrote:
> Linux-Fan wrote:
>
>
> [...]
> * RAID 5 and 6 restoration incurs additional stress on the other
> disks in the RAID which makes it more likely that one of them
> will fail. The advantage of RAID 6 is that it can then recover
> from tha
On Thu, 2022-11-10 at 22:37 +0100, Linux-Fan wrote:
> hw writes:
>
> > On Wed, 2022-11-09 at 19:17 +0100, Linux-Fan wrote:
> > > hw writes:
> > > > On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> > > > > On 09/11/2022 at 12:41, hw wrote:
>
> [...]
>
> > > > I'd
> > > > have to use md
On Fri, Nov 11, 2022 at 07:15:07AM +0100, hw wrote:
> On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
[...]
> Why would anyone use SSDs for backups? They're way too expensive for that.
Possibly.
> So far, the failure rate with SSDs has not been any better than the failure
> rate
> of
On Thu, 2022-11-10 at 14:28 +0100, DdB wrote:
> On 10.11.2022 at 13:03, Greg Wooledge wrote:
> > If it turns out that '?' really is the filename, then it becomes a ZFS
> > issue with which I can't help.
>
> just tested: i could create, rename, delete a file with that name on a
> zfs filesystem ju
On Thu, 2022-11-10 at 08:48 -0500, Dan Ritter wrote:
> hw wrote:
> > And I've been reading that when using ZFS, you shouldn't make volumes with
> > more
> > than 8 disks. That's very inconvenient.
>
>
> Where do you read these things?
I read things like this:
"Sun™ recommends that the number
On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
> On Thu, Nov 10, 2022 at 06:55:27PM +0100, hw wrote:
> > On Thu, 2022-11-10 at 11:57 -0500, Michael Stone wrote:
> > > On Thu, Nov 10, 2022 at 05:34:32PM +0100, hw wrote:
> > > > And mind you, SSDs are *designed to fail* the sooner the more d
On Thu, Nov 10, 2022 at 05:54:00AM +0100, hw wrote:
ls -la
total 5
drwxr-xr-x 3 namefoo namefoo 3 16. Aug 22:36 .
drwxr-xr-x 24 root root 4096 1. Nov 2017 ..
drwxr-xr-x 2 namefoo namefoo 2 21. Jan 2020 ?
namefoo@host /srv/datadir $ ls -la '?'
ls: cannot access ?:
On 11/10/22 07:44, hw wrote:
On Wed, 2022-11-09 at 21:36 -0800, David Christensen wrote:
On 11/9/22 00:24, hw wrote:
> On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
Be careful that you do not confuse a ~33 GiB full backup set, and 78
snapshots over six months of that same full
On Thu, Nov 10, 2022 at 08:32:36PM -0500, Dan Ritter wrote:
* RAID 5 and 6 restoration incurs additional stress on the other
disks in the RAID which makes it more likely that one of them
will fail.
I believe that's mostly apocryphal; I haven't seen science backing that
up, and it hasn't been
On Thu, Nov 10, 2022 at 06:55:27PM +0100, hw wrote:
On Thu, 2022-11-10 at 11:57 -0500, Michael Stone wrote:
On Thu, Nov 10, 2022 at 05:34:32PM +0100, hw wrote:
> And mind you, SSDs are *designed to fail* the sooner the more data you write
> to
> them. They have their uses, maybe even for storag
Linux-Fan wrote:
> I think the arguments of the RAID5/6 critics summarized were as follows:
>
> * Running in a RAID level that is 5 or 6 degrades performance while
> a disk is offline significantly. RAID 10 keeps most of its speed and
> RAID 1 only degrades slightly for most use cases.
>
> *
On 10.11.2022 at 22:37, Linux-Fan wrote:
> Ext4 still does not offer snapshots. The traditional way to do
> snapshots outside of fancy BTRFS and ZFS file systems is to add LVM
> to the equation although I do not have any useful experience with
> tha
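A minimal sketch of that LVM route, with placeholder volume group and LV names:
# snapshot an ext4 logical volume, back it up, then drop the snapshot
lvcreate --snapshot --name data-snap --size 5G /dev/vg0/data
mount -o ro /dev/vg0/data-snap /mnt/snap    # run the backup from here
umount /mnt/snap && lvremove -y /dev/vg0/data-snap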
hw writes:
On Wed, 2022-11-09 at 19:17 +0100, Linux-Fan wrote:
> hw writes:
> > On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> > > On 09/11/2022 at 12:41, hw wrote:
[...]
> > I'd
> > have to use mdadm to create a RAID5 (or use the hardware RAID but that
> > isn't
>
> AFAIK BT
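For what it's worth, the mdadm route is short; a sketch with placeholder device names:
# assemble four disks into a software RAID5 array and put a filesystem on it
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md0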
On Thu, Nov 10, 2022 at 06:54:31PM +0100, hw wrote:
> Ah, yes. I tricked myself because I don't have hd installed,
It's just a symlink to hexdump.
lrwxrwxrwx 1 root root 7 Jan 20 2022 /usr/bin/hd -> hexdump
unicorn:~$ dpkg -S usr/bin/hd
bsdextrautils: /usr/bin/hd
unicorn:~$ dpkg -S usr/bin/hex
On Thu, 2022-11-10 at 11:57 -0500, Michael Stone wrote:
> On Thu, Nov 10, 2022 at 05:34:32PM +0100, hw wrote:
> > And mind you, SSDs are *designed to fail* the sooner the more data you write
> > to
> > them. They have their uses, maybe even for storage if you're so desperate,
> > but
> > not for b
On Thu, 2022-11-10 at 09:30 -0500, Greg Wooledge wrote:
> On Thu, Nov 10, 2022 at 02:48:28PM +0100, hw wrote:
> > On Thu, 2022-11-10 at 07:03 -0500, Greg Wooledge wrote:
>
> [...]
> > printf '%s\0' * | hexdump
> > 000 00c2 6177 7468
> > 007
>
> I dislike this outp
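If the default word-oriented dump is hard to read, hexdump's -C switch prints bytes and ASCII side by side:
# canonical hex + ASCII dump of the file names in the current directory
printf '%s\0' * | hexdump -C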
On Wed, 2022-11-09 at 14:22 +0100, Nicolas George wrote:
> hw (12022-11-08):
> > When I want to have 2 (or more) generations of backups, do I actually want
> > deduplication? It leaves me with only one actual copy of the data which
> > seems
> > to defeat the idea of having multiple generations of
On Thu, Nov 10, 2022 at 05:34:32PM +0100, hw wrote:
And mind you, SSDs are *designed to fail* the sooner the more data you write to
them. They have their uses, maybe even for storage if you're so desperate, but
not for backup storage.
It's unlikely you'll "wear out" your SSDs faster than you w
On Thu, 2022-11-10 at 10:47 +0100, DdB wrote:
> On 10.11.2022 at 06:38, David Christensen wrote:
> > What is your technique for defragmenting ZFS?
> well, that was meant more or less a joke: there is none apart from
> offloading all the data, destroying and rebuilding the pool, and filling
> it ag
On Thu, 2022-11-10 at 02:19 -0500, gene heskett wrote:
> On 11/10/22 00:37, David Christensen wrote:
> > On 11/9/22 00:24, hw wrote:
> > > On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
>
> [...]
> Which brings up another suggestion in two parts:
>
> 1: use amanda, with tar and comp
On Wed, 2022-11-09 at 21:36 -0800, David Christensen wrote:
> On 11/9/22 00:24, hw wrote:
> > On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
>
> > Hmm, when you can backup like 3.5TB with that, maybe I should put
> FreeBSD on my
> > server and give ZFS a try. Worst thing that can
On Wed, 09 Nov 2022 13:28:46 +0100
hw wrote:
> On Tue, 2022-11-08 at 09:52 +0100, DdB wrote:
> > On 08.11.2022 at 05:31, hw wrote:
> > > > That's only one point.
> > > What are the others?
> > >
> > > > And it's not really some valid one, I think, as
> > > > you do typically not run int
Brad Rogers wrote:
> On Thu, 10 Nov 2022 08:48:43 -0500
> Dan Ritter wrote:
>
> Hello Dan,
>
> >8 is not a magic number.
>
> Clearly, you don't read Terry Pratchett. :-)
In the context of ZFS, 8 is not a magic number.
May you be ridiculed by Pictsies.
-dsr-
On 2022-11-10, Nicolas George wrote:
> Curt (12022-11-10):
>> Why restate it then needlessly?
>
> To NOT state that you were wrong when you were not.
>
> This branch of the discussion bores me. Goodbye.
>
This isn't solid enough for a branch. It couldn't support a hummingbird.
And me too! That o
Curt (12022-11-10):
> Why restate it then needlessly?
To NOT state that you were wrong when you were not.
This branch of the discussion bores me. Goodbye.
--
Nicolas George
On 2022-11-10, Nicolas George wrote:
> Curt (12022-11-10):
>> > one drive fails → you can replace it immediately, no downtime
>> That's precisely what I said,
>
> I was not stating that THIS PART of what you said was wrong.
Why restate it then needlessly?
>> so I'm
Curt (12022-11-10):
> > one drive fails → you can replace it immediately, no downtime
> That's precisely what I said,
I was not stating that THIS PART of what you said was wrong.
> so I'm baffled by the redundancy of your
> words.
Hint: my mail did not stop at the l
On 2022-11-10, Nicolas George wrote:
> Curt (12022-11-10):
>> Maybe it's a question of intent more than anything else. I thought RAID
>> was intended for a server scenario where if a disk fails, your downtime
>> is virtually null, whereas a backup is intended to prevent data
>> loss.
>
> May
On 2022-11-10 at 09:06, Dan Ritter wrote:
> Now, RAID is not a backup because it is a single store of data: if
> you delete something from it, it is deleted. If you suffer a
> lightning strike to the server, there's no recovery from molten
> metal.
Here's where I find disagreement.
Say you didn'
On Thu, 10 Nov 2022 08:48:43 -0500
Dan Ritter wrote:
Hello Dan,
>8 is not a magic number.
Clearly, you don't read Terry Pratchett. :-)
--
Regards _ "Valid sig separator is {dash}{dash}{space}"
/ ) "The blindingly obvious is never immediately apparent"
/ _)rad
Hi,
i wrote:
> > the time window in which the backed-up data
> > can become inconsistent on the application level.
hw wrote:
> Or are you referring to the data being altered while a backup is in
> progress?
Yes. Data of different files or at different places in the same file
may have relations wh
On Thu, Nov 10, 2022 at 02:48:28PM +0100, hw wrote:
> On Thu, 2022-11-10 at 07:03 -0500, Greg Wooledge wrote:
> good idea:
>
> printf %s * | hexdump
> 000 77c2 6861 0074
> 005
Looks like there might be more than one file here.
> > If you misrepresented the situat
Curt wrote:
> On 2022-11-08, The Wanderer wrote:
> >
> > That more general sense of "backup" as in "something that you can fall
> > back on" is no less legitimate than the technical sense given above, and
> > it always rubs me the wrong way to see the unconditional "RAID is not a
> > backup" trot
hw wrote:
> And I've been reading that when using ZFS, you shouldn't make volumes with
> more
> than 8 disks. That's very inconvenient.
Where do you read these things?
The number of disks in a zvol can be optimized, depending on
your desired redundancy method, total number of drives, and
tole
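As a concrete, purely hypothetical example of one such layout, eight disks in a single raidz2 vdev (device names are placeholders):
# create a pool from one 8-disk raidz2 vdev
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7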
Curt (12022-11-10):
> Maybe it's a question of intent more than anything else. I thought RAID
> was intended for a server scenario where if a disk fails, your downtime
> is virtually null, whereas a backup is intended to prevent data
> loss.
Maybe just use common sense. RAID means your data
On 2022-11-10 at 08:40, Curt wrote:
> On 2022-11-08, The Wanderer wrote:
>
>> That more general sense of "backup" as in "something that you can
>> fall back on" is no less legitimate than the technical sense given
>> above, and it always rubs me the wrong way to see the unconditional
>> "RAID is
On Thu, 2022-11-10 at 07:03 -0500, Greg Wooledge wrote:
> On Thu, Nov 10, 2022 at 05:54:00AM +0100, hw wrote:
> > ls -la
> > total 5
> > drwxr-xr-x 3 namefoo namefoo 3 16. Aug 22:36 .
> > drwxr-xr-x 24 root root 4096 1. Nov 2017 ..
> > drwxr-xr-x 2 namefoo namefoo 2 21. Jan 2020
On 2022-11-08, The Wanderer wrote:
>
> That more general sense of "backup" as in "something that you can fall
> back on" is no less legitimate than the technical sense given above, and
> it always rubs me the wrong way to see the unconditional "RAID is not a
> backup" trotted out blindly as if tha
On 10.11.2022 at 13:03, Greg Wooledge wrote:
> If it turns out that '?' really is the filename, then it becomes a ZFS
> issue with which I can't help.
just tested: i could create, rename, delete a file with that name on a
zfs filesystem just as with any other filesystem.
But: i recall having seen
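The test itself is only a few commands; the '--' guards against the name being taken for an option:
# create, rename, and delete a file literally named '?'
touch -- '?'
mv -- '?' question-mark
rm -- question-mark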
On Thu, 2022-11-10 at 10:59 +0100, DdB wrote:
> On 10.11.2022 at 04:46, hw wrote:
> > On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
> > > On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
> > > [...]
> [...]
> > >
> > Why would partitions be better than the block device itself?
On Thu, 2022-11-10 at 10:34 +0100, Christoph Brinkhaus wrote:
> On Thu, Nov 10, 2022 at 04:46:12AM +0100, hw wrote:
> > On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
> > > On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
> > > [...]
> [...]
> > >
> >
> > Why would partitions
On Wed, 2022-11-09 at 12:08 +0100, Thomas Schmitt wrote:
> Hi,
>
> i wrote:
> > > https://github.com/dm-vdo/kvdo/issues/18
>
> hw wrote:
> > So the VDO ppl say 4kB is a good block size
>
> They actually say that it's the only size which they support.
On Thu, Nov 10, 2022 at 05:54:00AM +0100, hw wrote:
> ls -la
> total 5
> drwxr-xr-x 3 namefoo namefoo 3 16. Aug 22:36 .
> drwxr-xr-x 24 root root 4096 1. Nov 2017 ..
> drwxr-xr-x 2 namefoo namefoo 2 21. Jan 2020 ?
> namefoo@host /srv/datadir $ ls -la '?'
> ls: cannot access ?:
On Wed, 09 Nov 2022 13:52:26 +0100 hw wrote:
Does that work? Does bees run as long as there's something to
deduplicate and
only stops when there isn't?
Bees is a service (daemon) which runs 24/7 watching btrfs transaction
state (the checkpoints). If there are new transactions then it kicks
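Where the packaged service unit is available, it is keyed on the filesystem UUID (the UUID below is a placeholder):
# start bees for one btrfs filesystem and keep it running across reboots
systemctl enable --now beesd@01234567-89ab-cdef-0123-456789abcdef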
On 10.11.2022 at 04:46, hw wrote:
> On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
>> On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
>> [...]
>>> FreeBSD has ZFS but can't even configure the disk controllers, so that won't
>>> work.
>>
>> If I understand you right you mean R
On 10.11.2022 at 06:38, David Christensen wrote:
> What is your technique for defragmenting ZFS?
well, that was meant more or less a joke: there is none apart from
offloading all the data, destroying and rebuilding the pool, and filling
it again from the backup. But i do it from time to time if fr
On Thu, Nov 10, 2022 at 04:46:12AM +0100, hw wrote:
> On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
> > On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
> > [...]
> > > FreeBSD has ZFS but can't even configure the disk controllers, so that
> > > won't
> > > work.
> >
> > If
On 11/10/22 00:37, David Christensen wrote:
On 11/9/22 00:24, hw wrote:
> On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
> Hmm, when you can backup like 3.5TB with that, maybe I should put
FreeBSD on my
> server and give ZFS a try. Worst thing that can happen is that it
crashe
On 11/9/22 01:35, DdB wrote:
> But
i am satisfied with zfs performance from spinning rust, if i don't fill
up the pool too much, and defrag after a while ...
What is your technique for defragmenting ZFS?
David
On 11/9/22 03:08, Thomas Schmitt wrote:
So i would use at least four independent storage facilities interchangeably.
I would make snapshots, if the filesystem supports them, and backup those
instead of the changeable filesystem.
I would try to reduce the activity of applications on the filesyste
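With ZFS that pattern is only a couple of commands (a sketch; dataset and file names are placeholders):
# freeze a point-in-time view and back that up, leaving the live dataset untouched
zfs snapshot tank/data@backup-2022-11-10
zfs send tank/data@backup-2022-11-10 | gzip > /backup/tank-data-2022-11-10.zfs.gz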
On 11/9/22 05:29, didier gaumet wrote:
- *BSDs nowadays have departed from old ZFS code and use the same source
code stack as Linux (OpenZFS)
AIUI FreeBSD 12 and prior use ZFS-on-Linux code, while FreeBSD 13 and
later use OpenZFS code.
On 11/9/22 05:44, didier gaumet wrote:
> I was usin
On 11/9/22 00:24, hw wrote:
> On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
> Hmm, when you can backup like 3.5TB with that, maybe I should put
FreeBSD on my
> server and give ZFS a try. Worst thing that can happen is that it
crashes and
> I'd have made an experiment that wasn't
On Wed, 2022-11-09 at 19:17 +0100, Linux-Fan wrote:
> hw writes:
>
> > On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> > > On 09/11/2022 at 12:41, hw wrote:
>
> [...]
>
> > > I am really not so well aware of ZFS state but my impression was that:
> > > - FUSE implementation of ZoL (ZF
On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
> Am Wed, Nov 09, 2022 at 06:11:34PM +0100 schrieb hw:
> [...]
> > FreeBSD has ZFS but can't even configure the disk controllers, so that won't
> > work.
>
> If I understand you right you mean RAID controllers?
yes
> According to my
hw writes:
On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> On 09/11/2022 at 12:41, hw wrote:
[...]
> I am really not so well aware of ZFS state but my impression was that:
> - FUSE implementation of ZoL (ZFS on Linux) is deprecated and that,
> Ubuntu excepted (classic module?), Z
On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
Hi hw,
> On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> > On 09/11/2022 at 12:41, hw wrote:
> > [...]
> > > In any case, I'm currently tending to think that putting FreeBSD with ZFS
> > > on
> > > my
> > > server might be the best
On Wed, 2022-11-09 at 17:29 +0100, DdB wrote:
> On 09.11.2022 at 12:41, hw wrote:
> > In any case, I'm currently tending to think that putting FreeBSD with ZFS on
> > my
> > server might be the best option. But then, apparently I won't be able to
> > configure the controller cards, so that won't
On Wed, 2022-11-09 at 14:29 +0100, didier gaumet wrote:
> On 09/11/2022 at 12:41, hw wrote:
> [...]
> > In any case, I'm currently tending to think that putting FreeBSD with ZFS on
> > my
> > server might be the best option. But then, apparently I won't be able to
> > configure the controller ca
designed for arrays with 40+ disks and, besides data integrity, with ease of
> > use
> > in mind. Performance doesn't seem paramount. Also see
> > https://wiki.gentoo.org/wiki/ZFS
>
> > Well, the question is what you mean by performance. Maybe ZFS can
> >
On 09.11.2022 at 12:41, hw wrote:
> In any case, I'm currently tending to think that putting FreeBSD with ZFS on
> my
> server might be the best option. But then, apparently I won't be able to
> configure the controller cards, so that won't really work. And ZFS with Linux
> isn't so great becau
hw wrote on 11/9/22 04:41:
configure the controller cards, so that won't really work. And ZFS with Linux
isn't so great because it keeps fuse in between.
That isn't true. I've been using ZFS with Debian for years without FUSE,
through the ZFSonLinux project.
The only slightly discomfortin