On 10/12/2023 23:38, Stefan Monnier wrote:
Max Nikulin [2023-12-10 21:49:46] wrote:
udisksctl dump
udevadm info --query=all --name=sda
for various hints related to udisks. Perhaps a better variant of udevadm
options exists.
Thanks. Now I have some thread on which to pull 🙂
I have
Hi,
to...@tuxteam.de wrote:
> Remember
> Apple's "fat binaries", which contained a binary for 68K and another
> for PowerPC? Those were made with "forks", which was Apple's variant
> of "several streams in one file". And so on.
The most extreme example i know is Solaris:
https://docs.oracle.co
olution of
OS/2's HPFS, etc., etc.
You also see, back then, increasing use of B and B+ trees in different
roles in file systems.
After all, designers moved from company to company and carried with them
ideas and teams. Companies were aggressively hiring people off other
companies.
It's ac
to...@tuxteam.de [2023-12-10 17:47:41] wrote:
> You ssh in as root (or serial port)?
I do over the serial port, but over SSH, I always login as myself first
and then `su -` to root.
> Perhaps it's a "user session" thingy playing games on you?
Could be,
Stefan
On Sun, Dec 10, 2023 at 11:42:42AM -0500, Stefan Monnier wrote:
> Stanislav Vlasov [2023-12-10 21:16:54] wrote:
> > In /media/, disks are mounted by the GUI. Stefan uses root in a GUI login.
>
> Except:
> - I never do a "GUI login" as root.
> - "This is on a headless ARM board running Debian stable".
> I acc
Stanislav Vlasov [2023-12-10 21:16:54] wrote:
> In /media/, disks are mounted by the GUI. Stefan uses root in a GUI login.
Except:
- I never do a "GUI login" as root.
- "This is on a headless ARM board running Debian stable".
I access it via SSH (and occasionally serial port).
Stefan
Max Nikulin [2023-12-10 21:49:46] wrote:
> On 10/12/2023 02:49, Stefan Monnier wrote:
>> "magically" mounted as
>> `/media/root/`.
> [...]
>> Any idea who/what does that, and how/where I can control it?
>
> This path is used by udisks, however I am unsure what may cause
> automounting for root.
>
>
2023-12-10 19:49 GMT+05:00, Max Nikulin :
> On 10/12/2023 02:49, Stefan Monnier wrote:
>> "magically" mounted as
>> `/media/root/`.
> [...]
>> Any idea who/what does that, and how/where I can control it?
>
> This path is used by udisks, however I am unsure what may cause
> automounting for root.
>
On 10/12/2023 02:49, Stefan Monnier wrote:
"magically" mounted as
`/media/root/`.
[...]
Any idea who/what does that, and how/where I can control it?
This path is used by udisks, however I am unsure what may cause
automounting for root.
I would check
udisksctl dump
udevadm info --q
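(Not part of the thread, just a sketch of the usual knob, assuming udisks2 really is what mounts these filesystems: a udev rule can tell udisks to leave a device alone. The rule file name and UUID below are invented for illustration.

  # /etc/udev/rules.d/99-no-automount.rules  (hypothetical)
  ENV{ID_FS_UUID}=="1234-abcd-example", ENV{UDISKS_IGNORE}="1"

Then reload the rules with `udevadm control --reload` and re-trigger or re-attach the device.)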
On Sat, Dec 9, 2023, 1:50 PM Stefan Monnier
wrote:
> Recently I noticed some unused ext4 filesystems (i.e. filesystems that
> aren't in /etc/fstab, that I normally don't mount, typically because
> they're snapshots or backups) "magically" mounted as
> `/media/root/`.
>
> This is on a headless ARM
Recently I noticed some unused ext4 filesystems (i.e. filesystems that
aren't in /etc/fstab, that I normally don't mount, typically because
they're snapshots or backups) "magically" mounted as
`/media/root/`.
This is on a headless ARM board running Debian stable.
Not sure when this happened, but I
Hi,
hw wrote:
> with CDs/DVDs, writing is not so easy.
Thus it is not as easy to overwrite them by mistake.
The complicated part of optical burning can be put into scripts.
But i agree that modern HDD sizes cannot be easily covered by optical
media.
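(A minimal sketch of such a script step, assuming xorriso is the burn tool; the drive address and paths are made up:

  xorriso -outdev /dev/sr0 -map /srv/backup /backup -commit

This appends the tree under /srv/backup as /backup to the medium in one session.)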
I wrote:
> > [...] LTO tapes [...]
hw wrote
On Thu, 2022-11-10 at 15:32 +0100, Thomas Schmitt wrote:
> Hi,
>
> i wrote:
> > > the time window in which the backed-up data
> > > can become inconsistent on the application level.
>
> hw wrote:
> > Or are you referring to the data being altered while a backup is in
> > progress?
>
> Yes.
Ah I
On Mon, 2022-11-14 at 15:08 -0500, Michael Stone wrote:
> On Mon, Nov 14, 2022 at 08:40:47PM +0100, hw wrote:
> > Not really, it was just an SSD. Two of them were used as cache, and that they
> > failed was not surprising. It's really unfortunate that SSDs fail particularly fast
> > when used for pu
>
> If you try this in practice, it is quite limited compared to file copies.
What's the difference between the target storage being offline and the target
storage server being switched off? You can't copy the files either way because
there's nothing available to copy them to.
On 11/14/22 13:48, hw wrote:
On Fri, 2022-11-11 at 21:55 -0800, David Christensen wrote:
Lots of snapshots slows down commands that involve snapshots (e.g. 'zfs
list -r -t snapshot ...'). This means sysadmin tasks take longer when
the pool has more snapshots.
Hm, how long does it take? It
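(Not from the thread, but an easy way to measure it on one's own pool, pool name invented:

  time zfs list -r -t snapshot tank > /dev/null

The wall-clock time grows with the number of snapshots the command has to enumerate.)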
hw writes:
On Fri, 2022-11-11 at 21:26 +0100, Linux-Fan wrote:
> hw writes:
>
> > On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
> > > On Thu, Nov 10, 2022 at 06:55:27PM +0100, hw wrote:
> > > > On Thu, 2022-11-10 at 11:57 -0500, Michael Stone wrote:
> > > > > On Thu, Nov 10, 2022 at 05
hierarchies. Child dataset
> properties can be inherited from the parent dataset. Commands can be
> applied to an entire hierarchy by specifying the top dataset and using a
> "recursive" option. Etc..
Ah, ok, that's what you mean.
> When a host is decommissioned an
On Fri, 2022-11-11 at 21:26 +0100, Linux-Fan wrote:
> hw writes:
>
> > On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
> > > On Thu, Nov 10, 2022 at 06:55:27PM +0100, hw wrote:
> > > > On Thu, 2022-11-10 at 11:57 -0500, Michael Stone wrote:
> > > > > On Thu, Nov 10, 2022 at 05:34:32PM +010
On Mon, Nov 14, 2022 at 08:40:47PM +0100, hw wrote:
Not really, it was just an SSD. Two of them were used as cache, and that they failed
was not surprising. It's really unfortunate that SSDs fail particularly fast
when used for purposes they can be particularly useful for.
If you buy hard drives and
On Fri, 2022-11-11 at 14:48 -0500, Michael Stone wrote:
> On Fri, Nov 11, 2022 at 07:15:07AM +0100, hw wrote:
> > There was no misdiagnosis. Have you ever had a failed SSD? They usually
> > just
> > disappear.
>
> Actually, they don't; that's a somewhat unusual failure mode.
What else happens?
n off single disks.
When I'm done making backups, I shut down the server and not much can happen
to
the backups.
If you try this in practice, it is quite limited compared to file copies.
> Additionally, when using files, only the _used_ space matters. Beyond
> that, the size of the
On Sat, 2022-11-12 at 07:27 +0100, to...@tuxteam.de wrote:
> On Fri, Nov 11, 2022 at 07:22:19PM +0100, to...@tuxteam.de wrote:
>
> [...]
>
> > I think what hede was hinting at was that early SSDs had a (pretty)
> > limited number of write cycles [...]
>
> As was pointed out to me, the OP wasn't
On Fri, 2022-11-11 at 17:05 +, Curt wrote:
> On 2022-11-11, wrote:
> >
> > I just contested that their failure rate is higher than that of HDDs.
> > This is something which was true in early days, but nowadays it seems
> > to be just a prejudice.
>
> If he prefers extrapolating his anecdota
On 11/13/22 13:02, hw wrote:
On Fri, 2022-11-11 at 07:55 -0500, Dan Ritter wrote:
hw wrote:
On Thu, 2022-11-10 at 20:32 -0500, Dan Ritter wrote:
Linux-Fan wrote:
[...]
* RAID 5 and 6 restoration incurs additional stress on the other
disks in the RAID which makes it more likely that one of
n is integral to the solution.
What has file tree copying to do with RAID scenarios?
> ### where to
>
> File trees are much easier copied to network locations compared to adding a
> “network mirror” to any RAID (although that _is_ indeed an option, DRBD was
> mentioned in a
On Fri, 2022-11-11 at 07:55 -0500, Dan Ritter wrote:
> hw wrote:
> > On Thu, 2022-11-10 at 20:32 -0500, Dan Ritter wrote:
> > > Linux-Fan wrote:
> > >
> > >
> > > [...]
> > > * RAID 5 and 6 restoration incurs additional stress on the other
> > > disks in the RAID which makes it more likely th
David Christensen wrote:
> The Intel Optane Memory Series products are designed to be cache devices --
> when using compatible hardware, Windows, and Intel software. My hardware
> should be compatible (Dell PowerEdge T30), but I am unsure if FreeBSD 12.3-R
> will see the motherboard NVMe slot or
On Fri, Nov 11, 2022 at 07:22:19PM +0100, to...@tuxteam.de wrote:
[...]
> I think what hede was hinting at was that early SSDs had a (pretty)
> limited number of write cycles [...]
As was pointed out to me, the OP wasn't hede. It was hw. Sorry for the
mis-attribution.
Cheers
On 11/11/22 00:43, hw wrote:
On Thu, 2022-11-10 at 21:14 -0800, David Christensen wrote:
On 11/10/22 07:44, hw wrote:
On Wed, 2022-11-09 at 21:36 -0800, David Christensen wrote:
On 11/9/22 00:24, hw wrote:
> On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
Taking snapshots is
pace matters. Beyond that,
the size of the source and target file systems are decoupled. On the other
hand, RAID mandates that the sizes of disks adhere to certain properties
(like all being equal or wasting some of the storage).
> > Is anyone still using ext4? I'm not saying i
hw writes:
On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
> On Thu, Nov 10, 2022 at 06:55:27PM +0100, hw wrote:
> > On Thu, 2022-11-10 at 11:57 -0500, Michael Stone wrote:
> > > On Thu, Nov 10, 2022 at 05:34:32PM +0100, hw wrote:
> > > > And mind you, SSDs are *designed to fail* the soo
On Fri, Nov 11, 2022 at 02:05:33PM -0500, Dan Ritter wrote:
300TB/year. That's a little bizarre: it's 9.51 MB/s. Modern
high end spinners also claim 200MB/s or more when feeding them
continuous writes. Apparently WD thinks that can't be sustained
more than 5% of the time.
Which makes sense for
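(Sanity check of that figure: 300 TB spread evenly over a year is 300e12 bytes / 31,536,000 s ≈ 9.5e6 bytes/s, e.g.

  echo $((300 * 10**12 / (365 * 24 * 3600)))   # prints 9512937, bytes per second

which matches the 9.51 MB/s quoted above.)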
On Fri, Nov 11, 2022 at 09:03:45AM +0100, hw wrote:
On Thu, 2022-11-10 at 23:12 -0500, Michael Stone wrote:
The advantage to RAID 6 is that it can tolerate a double disk failure.
With RAID 1 you need 3x your effective capacity to achieve that and even
though storage has gotten cheaper, it hasn't
On Fri, Nov 11, 2022 at 07:15:07AM +0100, hw wrote:
There was no misdiagnosis. Have you ever had a failed SSD? They usually just
disappear.
Actually, they don't; that's a somewhat unusual failure mode. I have had
a couple of ssd failures, out of hundreds. (And I think mostly from a
specific
to...@tuxteam.de wrote:
>
> I think what hede was hinting at was that early SSDs had a (pretty)
> limited number of write cycles per "block" [1] before failure; they had
> (and have) extra blocks to substitute broken ones and do a fair amount
> of "wear leveling" behind the scenes. So it made more
Jeffrey Walton wrote:
> On Fri, Nov 11, 2022 at 2:01 AM wrote:
> >
> > On Fri, Nov 11, 2022 at 07:15:07AM +0100, hw wrote:
> > > On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
> >... Here's a report
> > by folks who do lots of HDDs and SDDs:
> >
> > https://www.backblaze.com/blog/backb
On Fri, Nov 11, 2022 at 12:53:21PM -0500, Jeffrey Walton wrote:
> On Fri, Nov 11, 2022 at 2:01 AM wrote:
> >
> > On Fri, Nov 11, 2022 at 07:15:07AM +0100, hw wrote:
> > > On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
> >... Here's a report
> > by folks who do lots of HDDs and SDDs:
> >
>
On Fri, Nov 11, 2022 at 2:01 AM wrote:
>
> On Fri, Nov 11, 2022 at 07:15:07AM +0100, hw wrote:
> > On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
>... Here's a report
> by folks who do lots of HDDs and SDDs:
>
> https://www.backblaze.com/blog/backblaze-hard-drive-stats-q1-2021/
>
> The
On Fri, Nov 11, 2022 at 05:05:51PM -, Curt wrote:
> On 2022-11-11, wrote:
> >
> > I just contested that their failure rate is higher than that of HDDs.
[...]
> If he prefers extrapolating his anecdotal personal experience to a
> general rule rather than applying a verifiable general rule to
On 2022-11-11, wrote:
>
> I just contested that their failure rate is higher than that of HDDs.
> This is something which was true in early days, but nowadays it seems
> to be just a prejudice.
If he prefers extrapolating his anecdotal personal experience to a
general rule rather than applying a
hw wrote:
> On Thu, 2022-11-10 at 20:32 -0500, Dan Ritter wrote:
> > Linux-Fan wrote:
> >
> >
> > [...]
> > * RAID 5 and 6 restoration incurs additional stress on the other
> > disks in the RAID which makes it more likely that one of them
> > will fail. The advantage of RAID 6 is that it ca
Am 10.11.2022 14:40, schrieb Curt:
(or maybe a RAID array is
conceivable over a network and a distance?).
Not only conceivable, but indeed practicable: Linbit DRBD
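(For illustration only, a rough sketch of what a minimal two-node DRBD resource definition can look like; host names, addresses, and devices are invented:

  resource r0 {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      meta-disk internal;
      on alpha { address 192.168.1.10:7789; }
      on beta  { address 192.168.1.11:7789; }
  }

After `drbdadm create-md r0` and `drbdadm up r0` on both nodes, /dev/drbd0 behaves like a network-mirrored block device.)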
On Thursday, November 10, 2022 09:06:39 AM Dan Ritter wrote:
> If you need a filesystem that is larger than a single disk (that you can
> afford, or that exists), RAID is the name for the general approach to
> solving that.
Picking a nit, I would say: "RAID is the name for *a* general approach to
On Fri, Nov 11, 2022 at 09:12:36AM +0100, hw wrote:
> Backblaze does all kinds of things.
whatever.
> > The gist, for disks playing similar roles (they don't yet use SSDs for bulk
> > storage, because of the costs): 2/1518 failures for SSDs, 44/1669 for HDDs.
> >
> > I'll leave the maths as an e
On 11.11.2022 at 07:36, hw wrote:
> That's on https://docs.freebsd.org/en/books/handbook/zfs/
>
> I don't remember where I read about 8, could have been some documentation
> about
> FreeNAS.
Well, OTOH there do exist some considerations, which may have led to
that number sticking somewhere, bu
On Thu, 2022-11-10 at 13:40 +, Curt wrote:
> On 2022-11-08, The Wanderer wrote:
> >
> > That more general sense of "backup" as in "something that you can fall
> > back on" is no less legitimate than the technical sense given above, and
> > it always rubs me the wrong way to see the unconditio
On Thu, 2022-11-10 at 21:14 -0800, David Christensen wrote:
> On 11/10/22 07:44, hw wrote:
> > On Wed, 2022-11-09 at 21:36 -0800, David Christensen wrote:
> > > On 11/9/22 00:24, hw wrote:
> > > > On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
>
> [...]
>
> >
> Taking snapshots is
On Fri, 2022-11-11 at 08:01 +0100, to...@tuxteam.de wrote:
> On Fri, Nov 11, 2022 at 07:15:07AM +0100, hw wrote:
> > On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
>
> [...]
>
> > Why would anyone use SSDs for backups? They're way too expensive for that.
>
> Possibly.
>
> > So far, th
On Thu, 2022-11-10 at 23:12 -0500, Michael Stone wrote:
> On Thu, Nov 10, 2022 at 08:32:36PM -0500, Dan Ritter wrote:
> > * RAID 5 and 6 restoration incurs additional stress on the other
> > disks in the RAID which makes it more likely that one of them
> > will fail.
>
> I believe that's mostly
On Thu, 2022-11-10 at 20:32 -0500, Dan Ritter wrote:
> Linux-Fan wrote:
>
>
> [...]
> * RAID 5 and 6 restoration incurs additional stress on the other
> disks in the RAID which makes it more likely that one of them
> will fail. The advantage of RAID 6 is that it can then recover
> from tha
ong time. There's probably no reason to
change that. If you want something else, you can always go for it.
> Its my file system of choice unless I have
> very specific reasons against it. I have never seen it fail outside of
> hardware issues. Performance of ext4 is quite acc
On Fri, Nov 11, 2022 at 07:15:07AM +0100, hw wrote:
> On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
[...]
> Why would anyone use SSDs for backups? They're way too expensive for that.
Possibly.
> So far, the failure rate with SSDs has not been any better than the failure
> rate
> of
On Thu, 2022-11-10 at 14:28 +0100, DdB wrote:
> On 10.11.2022 at 13:03, Greg Wooledge wrote:
> > If it turns out that '?' really is the filename, then it becomes a ZFS
> > issue with which I can't help.
>
> just tested: i could create, rename, delete a file with that name on a
> zfs filesystem ju
On Thu, 2022-11-10 at 08:48 -0500, Dan Ritter wrote:
> hw wrote:
> > And I've been reading that when using ZFS, you shouldn't make volumes with
> > more
> > than 8 disks. That's very inconvenient.
>
>
> Where do you read these things?
I read things like this:
"Sun™ recommends that the number
On Thu, 2022-11-10 at 23:05 -0500, Michael Stone wrote:
> On Thu, Nov 10, 2022 at 06:55:27PM +0100, hw wrote:
> > On Thu, 2022-11-10 at 11:57 -0500, Michael Stone wrote:
> > > On Thu, Nov 10, 2022 at 05:34:32PM +0100, hw wrote:
> > > > And mind you, SSDs are *designed to fail* the sooner the more d
On Thu, Nov 10, 2022 at 05:54:00AM +0100, hw wrote:
ls -la
total 5
drwxr-xr-x 3 namefoo namefoo 3 16. Aug 22:36 .
drwxr-xr-x 24 root root 4096 1. Nov 2017 ..
drwxr-xr-x 2 namefoo namefoo 2 21. Jan 2020 ?
namefoo@host /srv/datadir $ ls -la '?'
ls: cannot access ?:
On 11/10/22 07:44, hw wrote:
On Wed, 2022-11-09 at 21:36 -0800, David Christensen wrote:
On 11/9/22 00:24, hw wrote:
> On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
Be careful that you do not confuse a ~33 GiB full backup set, and 78
snapshots over six months of that same full
On Thu, Nov 10, 2022 at 08:32:36PM -0500, Dan Ritter wrote:
* RAID 5 and 6 restoration incurs additional stress on the other
disks in the RAID which makes it more likely that one of them
will fail.
I believe that's mostly apocryphal; I haven't seen science backing that
up, and it hasn't been
On Thu, Nov 10, 2022 at 06:55:27PM +0100, hw wrote:
On Thu, 2022-11-10 at 11:57 -0500, Michael Stone wrote:
On Thu, Nov 10, 2022 at 05:34:32PM +0100, hw wrote:
> And mind you, SSDs are *designed to fail* the sooner the more data you write
> to
> them. They have their uses, maybe even for storag
sues. Performance of ext4 is quite acceptable out of the box.
> E.g. it seems to be slightly faster than ZFS for my use cases. Almost every
> Linux live system can read it. There are no problematic licensing or
> stability issues whatsoever. By its popularity its probably one of the most
>
On 10.11.2022 at 22:37, Linux-Fan wrote:
> Ext4 still does not offer snapshots. The traditional way to do
> snapshots outside of fancy BTRFS and ZFS file systems is to add LVM
> to the equation although I do not have any useful experi
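(A minimal sketch of that LVM route, with an invented volume group, size, and paths:

  lvcreate --snapshot --size 5G --name root-snap /dev/vg0/root
  mount -o ro /dev/vg0/root-snap /mnt/snap
  rsync -a /mnt/snap/ /backup/root/        # back up the frozen view
  umount /mnt/snap
  lvremove -y /dev/vg0/root-snap

The snapshot gives the backup tool a consistent, read-only view of the ext4 filesystem while the live volume keeps changing.)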
rrespective of RAID level), you might also
> consider running ext4 and trade the complexity and features of the
> advanced file systems for a good combination of stability and support.
Is anyone still using ext4? I'm not saying it's bad or anything, it only
seems that it has gone
On Thu, Nov 10, 2022 at 06:54:31PM +0100, hw wrote:
> Ah, yes. I tricked myself because I don't have hd installed,
It's just a symlink to hexdump.
lrwxrwxrwx 1 root root 7 Jan 20 2022 /usr/bin/hd -> hexdump
unicorn:~$ dpkg -S usr/bin/hd
bsdextrautils: /usr/bin/hd
unicorn:~$ dpkg -S usr/bin/hex
On Thu, 2022-11-10 at 11:57 -0500, Michael Stone wrote:
> On Thu, Nov 10, 2022 at 05:34:32PM +0100, hw wrote:
> > And mind you, SSDs are *designed to fail* the sooner the more data you write
> > to
> > them. They have their uses, maybe even for storage if you're so desperate,
> > but
> > not for b
On Thu, 2022-11-10 at 09:30 -0500, Greg Wooledge wrote:
> On Thu, Nov 10, 2022 at 02:48:28PM +0100, hw wrote:
> > On Thu, 2022-11-10 at 07:03 -0500, Greg Wooledge wrote:
>
> [...]
> > printf '%s\0' * | hexdump
> > 000 00c2 6177 7468
> > 007
>
> I dislike this outp
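(Two common alternatives, not from the thread, that give a more readable view of odd filenames:

  printf '%s\0' * | hexdump -C     # canonical hex + ASCII, byte by byte
  ls -b                            # GNU ls, shows non-printable characters as octal escapes

Either makes it easier to see which byte sequence is hiding behind the '?'.)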
On Wed, 2022-11-09 at 14:22 +0100, Nicolas George wrote:
> hw (12022-11-08):
> > When I want to have 2 (or more) generations of backups, do I actually want
> > deduplication? It leaves me with only one actual copy of the data which
> > seems
> > to defeat the idea of having multiple generations of
On Thu, Nov 10, 2022 at 05:34:32PM +0100, hw wrote:
And mind you, SSDs are *designed to fail* the sooner the more data you write to
them. They have their uses, maybe even for storage if you're so desperate, but
not for backup storage.
It's unlikely you'll "wear out" your SSDs faster than you w
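(For what it's worth, the remaining write endurance can be watched rather than guessed at; a sketch, device names invented:

  smartctl -A /dev/sda      # SATA SSD: look at the wear/endurance attributes
  smartctl -a /dev/nvme0    # NVMe: the health log reports "Percentage Used"

so one can see how much of the rated write volume has actually been consumed.)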
On Thu, 2022-11-10 at 10:47 +0100, DdB wrote:
> On 10.11.2022 at 06:38, David Christensen wrote:
> > What is your technique for defragmenting ZFS?
> well, that was meant more or less as a joke: there is none apart from
> offloading all the data, destroying and rebuilding the pool, and filling
> it ag
On Thu, 2022-11-10 at 02:19 -0500, gene heskett wrote:
> On 11/10/22 00:37, David Christensen wrote:
> > On 11/9/22 00:24, hw wrote:
> > > On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
>
> [...]
> Which brings up another suggestion in two parts:
>
> 1: use amanda, with tar and comp
t-zfs-pool-layout/
Thanks! If I make a zpool for backups (or anything else), I need to do some
reading beforehand anyway.
> MySQL appears to have the ability to use raw disks. Tuned correctly,
> this should give the best results:
>
> https://dev.mysql.com/doc/refman/8.0/en/innodb-s
On Wed, 09 Nov 2022 13:28:46 +0100
hw wrote:
> On Tue, 2022-11-08 at 09:52 +0100, DdB wrote:
> > On 08.11.2022 at 05:31, hw wrote:
> > > > That's only one point.
> > > What are the others?
> > >
> > > > And it's not really some valid one, I think, as
> > > > you do typically not run int
Brad Rogers wrote:
> On Thu, 10 Nov 2022 08:48:43 -0500
> Dan Ritter wrote:
>
> Hello Dan,
>
> >8 is not a magic number.
>
> Clearly, you don't read Terry Pratchett. :-)
In the context of ZFS, 8 is not a magic number.
May you be ridiculed by Pictsies.
-dsr-
On 2022-11-10, Nicolas George wrote:
> Curt (12022-11-10):
>> Why restate it then needlessly?
>
> To NOT state that you were wrong when you were not.
>
> This branch of the discussion bores me. Goodbye.
>
This isn't solid enough for a branch. It couldn't support a hummingbird.
And me too! That o
Curt (12022-11-10):
> Why restate it then needlessly?
To NOT state that you were wrong when you were not.
This branch of the discussion bores me. Goodbye.
--
Nicolas George
On 2022-11-10, Nicolas George wrote:
> Curt (12022-11-10):
>> > one drive fails → you can replace it immediately, no downtime
>> That's precisely what I said,
>
> I was not stating that THIS PART of what you said was wrong.
Why restate it then needlessly?
>> so I'm
Curt (12022-11-10):
> > one drive fails → you can replace it immediately, no downtime
> That's precisely what I said,
I was not stating that THIS PART of what you said was wrong.
> so I'm baffled by the redundancy of your
> words.
Hint: my mail did not stop at the l
On 2022-11-10, Nicolas George wrote:
> Curt (12022-11-10):
>> Maybe it's a question of intent more than anything else. I thought RAID
>> was intended for a server scenario where if a disk fails, your downtime
>> is virtually null, whereas a backup is intended to prevent data
>> loss.
>
> May
On 2022-11-10 at 09:06, Dan Ritter wrote:
> Now, RAID is not a backup because it is a single store of data: if
> you delete something from it, it is deleted. If you suffer a
> lightning strike to the server, there's no recovery from molten
> metal.
Here's where I find disagreement.
Say you didn'
On Thu, 10 Nov 2022 08:48:43 -0500
Dan Ritter wrote:
Hello Dan,
>8 is not a magic number.
Clearly, you don't read Terry Pratchett. :-)
Hi,
i wrote:
> > the time window in which the backed-up data
> > can become inconsistent on the application level.
hw wrote:
> Or are you referring to the data being altered while a backup is in
> progress?
Yes. Data of different files or at different places in the same file
may have relations wh
On Thu, Nov 10, 2022 at 02:48:28PM +0100, hw wrote:
> On Thu, 2022-11-10 at 07:03 -0500, Greg Wooledge wrote:
> good idea:
>
> printf %s * | hexdump
> 000 77c2 6861 0074
> 005
Looks like there might be more than one file here.
> > If you misrepresented the situat
Curt wrote:
> On 2022-11-08, The Wanderer wrote:
> >
> > That more general sense of "backup" as in "something that you can fall
> > back on" is no less legitimate than the technical sense given above, and
> > it always rubs me the wrong way to see the unconditional "RAID is not a
> > backup" trot
hw wrote:
> And I've been reading that when using ZFS, you shouldn't make volumes with
> more
> than 8 disks. That's very inconvenient.
Where do you read these things?
The number of disks in a zvol can be optimized, depending on
your desired redundancy method, total number of drives, and
tole
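(A hypothetical example of the kind of layout being weighed here, with invented device names: eight disks as a single raidz2 vdev,

  zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh

and a pool can later be grown by attaching another vdev of the same shape with `zpool add`.)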
Curt (12022-11-10):
> Maybe it's a question of intent more than anything else. I thought RAID
> was intended for a server scenario where if a disk fails, your downtime
> is virtually null, whereas a backup is intended to prevent data
> loss.
Maybe just use common sense. RAID means your data
On 2022-11-10 at 08:40, Curt wrote:
> On 2022-11-08, The Wanderer wrote:
>
>> That more general sense of "backup" as in "something that you can
>> fall back on" is no less legitimate than the technical sense given
>> above, and it always rubs me the wrong way to see the unconditional
>> "RAID is
On Thu, 2022-11-10 at 07:03 -0500, Greg Wooledge wrote:
> On Thu, Nov 10, 2022 at 05:54:00AM +0100, hw wrote:
> > ls -la
> > insgesamt 5
> > drwxr-xr-x 3 namefoo namefoo 3 16. Aug 22:36 .
> > drwxr-xr-x 24 root root 4096 1. Nov 2017 ..
> > drwxr-xr-x 2 namefoo namefoo 2 21. Jan 2020
On 2022-11-08, The Wanderer wrote:
>
> That more general sense of "backup" as in "something that you can fall
> back on" is no less legitimate than the technical sense given above, and
> it always rubs me the wrong way to see the unconditional "RAID is not a
> backup" trotted out blindly as if tha
On 10.11.2022 at 13:03, Greg Wooledge wrote:
> If it turns out that '?' really is the filename, then it becomes a ZFS
> issue with which I can't help.
just tested: i could create, rename, delete a file with that name on a
zfs filesystem just as with any other filesystem.
But: i recall having seen
On Thu, 2022-11-10 at 10:59 +0100, DdB wrote:
> On 10.11.2022 at 04:46, hw wrote:
> > On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
> > > On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
> > > [...]
> [...]
> > >
> > Why would partitions be better than the block device itself?
On Thu, 2022-11-10 at 10:34 +0100, Christoph Brinkhaus wrote:
> On Thu, Nov 10, 2022 at 04:46:12AM +0100, hw wrote:
> > On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
> > > On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
> > > [...]
> [...]
> > >
> >
> > Why would partitions
On Wed, 2022-11-09 at 12:08 +0100, Thomas Schmitt wrote:
> Hi,
>
> i wrote:
> > > https://github.com/dm-vdo/kvdo/issues/18
>
> hw wrote:
> > So the VDO ppl say 4kB is a good block size
>
> They actually say that it's the only size which they support.
>
>
> > Deduplication doesn't work when f
On Thu, Nov 10, 2022 at 05:54:00AM +0100, hw wrote:
> ls -la
> total 5
> drwxr-xr-x  3 namefoo namefoo    3 16. Aug 22:36 .
> drwxr-xr-x 24 root    root    4096  1. Nov 2017 ..
> drwxr-xr-x  2 namefoo namefoo    2 21. Jan 2020 ?
> namefoo@host /srv/datadir $ ls -la '?'
> ls: cannot access ?
On Wed, 09 Nov 2022 13:52:26 +0100 hw wrote:
Does that work? Does bees run as long as there's something to
deduplicate and
only stops when there isn't?
Bees is a service (daemon) which runs 24/7 watching btrfs transaction
state (the checkpoints). If there are new transactions then it kicks
On 10.11.2022 at 04:46, hw wrote:
> On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
>> On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
>> [...]
>>> FreeBSD has ZFS but can't even configure the disk controllers, so that won't
>>> work.
>>
>> If I understand you right you mean R
On 10.11.2022 at 06:38, David Christensen wrote:
> What is your technique for defragmenting ZFS?
well, that was meant more or less as a joke: there is none apart from
offloading all the data, destroying and rebuilding the pool, and filling
it again from the backup. But i do it from time to time if fr
On Thu, Nov 10, 2022 at 04:46:12AM +0100, hw wrote:
> On Wed, 2022-11-09 at 18:26 +0100, Christoph Brinkhaus wrote:
> > On Wed, Nov 09, 2022 at 06:11:34PM +0100, hw wrote:
> > [...]
> > > FreeBSD has ZFS but can't even configure the disk controllers, so that
> > > won't
> > > work.
> >
> > If
On 11/10/22 00:37, David Christensen wrote:
On 11/9/22 00:24, hw wrote:
> On Tue, 2022-11-08 at 17:30 -0800, David Christensen wrote:
> Hmm, when you can backup like 3.5TB with that, maybe I should put
FreeBSD on my
> server and give ZFS a try. Worst thing that can happen is that it
crashe
On 11/9/22 01:35, DdB wrote:
> But
i am satisfied with zfs performance from spinning rust, if i don't fill
up the pool too much, and defrag after a while ...
What is your technique for defragmenting ZFS?
David
On 11/9/22 03:08, Thomas Schmitt wrote:
So i would use at least four independent storage facilities interchangeably.
I would make snapshots, if the filesystem supports them, and backup those
instead of the changeable filesystem.
I would try to reduce the activity of applications on the filesyste
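(A minimal sketch of that approach with ZFS, using invented dataset and host names:

  zfs snapshot tank/data@backup-2022-11-10
  zfs send tank/data@backup-2022-11-10 | ssh backuphost zfs receive backuppool/data
  # or back up the frozen view with ordinary file tools:
  rsync -a /tank/data/.zfs/snapshot/backup-2022-11-10/ /mnt/backup/data/

The snapshot is taken in an instant, so the backup sees one consistent state instead of a filesystem that keeps changing while it is read.)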