Keith Bainbridge wrote:
> As promised:
> I said sometime in this thread that timeshift (and Back in Time) use hard
> links to create progressive copies of the system. The more I think about how
> hard links reportedly work, I reckon it can't be simply hard links.
>
> So I
On 19/2/24 11:15, Kushal Kumaran wrote:
Have you read their FAQ page about hard links?
https://github.com/bit-team/backintime/blob/dev/FAQ.md#how-do-snapshots-with-hard-links-work
Very interesting. Thank you
I have totally missed the concept of copying all files as a starting point.
I
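The FAQ's scheme can be sketched with standard tools: the first snapshot is a
full copy, and every later snapshot hard-links unchanged files against the
previous one. A minimal illustration, assuming rsync is available and the
/backups paths are hypothetical:

  # first snapshot: a plain full copy of the source tree
  rsync -a /home/ /backups/snap.0/

  # later snapshots: files unchanged since snap.0 are created as
  # hard links into snap.0, so only changed files take new space
  rsync -a --link-dest=/backups/snap.0 /home/ /backups/snap.1/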
On Sun, 18 Feb 2024 16:15:01 -0800
Kushal Kumaran wrote:
> Have you read their FAQ page about hard links?
> https://github.com/bit-team/backintime/blob/dev/FAQ.md#how-do-snapshots-with-hard-links-work
An excellent writeup. The only thing I would add is that creating a
hard link does requ
On Mon, Feb 19 2024 at 10:52:16 AM, Keith Bainbridge
wrote:
> As promised:
> I said sometime in this thread that timeshift (and Back in Time) use
> hard links to create progressive copies of the system. The more I
> think about how hard links reportedly work, I reckon it can't
Hi,
If the hard link field of a file is more than one, how can I find out all
the other hard links to this file?
thanks
--
Tong (remove underscore(s) to reply)
http://xpt.sourceforge.net/techdocs/
http://xpt.sourceforge.net/tools/
On Tue, 17 Oct 2006 15:46:46 -0700, Bob McGowan wrote:
>> Is it possible to find the hard links of the same file? Ie, group the
>> above finding into same file groups?
>
> find . -type f -links +1 -ls | sort -n -k 1
>
> This command line will [...]
Bingo! thanks
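For the grouping part, one hedged refinement of that command, assuming GNU
find's -printf is available:

  # print the inode number in front of each path, then sort
  # numerically so names sharing an inode land next to each other
  find . -type f -links +1 -printf '%i %p\n' | sort -n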
On Tue, Oct 17, 2006 at 04:45:05PM -0400, T wrote:
> Hi
>
> How can I find hard linked files?
>
> Is it possible to find the hard links of the same file? Ie, group the
> above finding into same file groups?
>
Use stat or 'ls -i' to find the file's inode num
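A short example of both, assuming GNU coreutils; 'file' here is just a
placeholder name:

  ls -i file             # inode number printed before the name
  stat -c '%i %h' file   # inode number and hard-link count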
T wrote:
Hi
How can I find hard linked files?
Is it possible to find the hard links of the same file? Ie, group the
above finding into same file groups?
thanks
find . -type f -links +1 -ls | sort -n -k 1
This command line will find all regular files (-type f) that have 2 or
more hard
On Tue, Oct 17, 2006 at 04:45:05PM -0400, T wrote:
> Hi
>
> How can I find hard linked files?
All "regular" files are hard links. See
<http://en.wikipedia.org/wiki/Hard_link>
the stat(1) command tells you how many files point at a
given inode (so, if "Links:"
On Tuesday 17 October 2006 22:45, T wrote:
> Hi
>
> How can I find hard linked files?
Hi,
using for example:
[ "`stat -c %h filename`" -gt 1 ] && echo hard linked
> Is it possible to find the hard links of the same file? Ie, group the
> above finding into
On Tue, Oct 17, 2006 at 04:45:05PM -0400, T wrote:
> Hi
>
> How can I find hard linked files?
>
Anything that is a file is a hard link.
Regards,
-Roberto
--
Roberto C. Sanchez
http://people.connexer.com/~roberto
http://www.connexer.com
Hi
How can I find hard linked files?
Is it possible to find the hard links of the same file? Ie, group the
above finding into same file groups?
thanks
--
Tong (remove underscore(s) to reply)
http://xpt.sourceforge.net/
e that if I delete a file, it isn't actually deleted
until some backup hard links automatically expire 24 hours later, time
enough to recover from disasters. Some of these mechanisms use
scripts and soft links, others use hard links. I'm loth to lose the
hard link capability or to h
Hubert Chan wrote:
On Sun, 4 Sep 2005 15:40:58 +0100 (BST), Max <[EMAIL PROTECTED]> said:
Dear All, Is there a way of switching on permission to have hard links
to directories?
Is there a rationale for disallowing this?
The Linux kernel VFS does not allow directory hard
On Sun, 4 Sep 2005 15:40:58 +0100 (BST), Max <[EMAIL PROTECTED]> said:
> Dear All, Is there a way of switching on permission to have hard links
> to directories?
> Is there a rationale for disallowing this?
The Linux kernel VFS does not allow directory hard links for technical
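Trying it shows the refusal directly; a small demonstration (the exact error
wording varies between coreutils versions):

  $ mkdir dir1
  $ ln dir1 dir2
  ln: dir1: hard link not allowed for directory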
Dear All,
Is there a way of switching on permission to have hard links to
directories?
Is there a rationale for disallowing this? What I'm trying to do is to
set up my file system so that, rather as with apt-get, files and entire
directories survive only as long as they are wanted. W
I want to move a directory containing my backups (created by
faubackup) to another directory on a different partition. The
problem is that the backup directory contains a lot of hard
links (to files inside the same backup directory).
Simply copying the directory with 'cp -R' replace
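Two common ways to copy such a tree while keeping the links intact, assuming
GNU cp and rsync (the paths are placeholders):

  # rsync -H notices files hard-linked together inside the source
  # tree and recreates the links at the destination
  rsync -aH /old/backups/ /mnt/new/backups/

  # GNU cp -a preserves hard links among files copied in one run
  cp -a /old/backups /mnt/new/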
Adam Funk wrote:
> For a given, specific path/file X, I know that X is one of the pointers to
> inode Y, and I want to find the other path/filenames that point to the same
> inode Y. Is there any efficient way to make a command like the following?
> $ list_all_hard_links /path/to/x
Oh, I see. W
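For what it's worth, GNU find can do this directly nowadays; assuming a
reasonably recent findutils:

  # print every name on this filesystem sharing x's inode; -xdev
  # keeps find from crossing into other filesystems, where the
  # same inode number would belong to a different file
  find / -xdev -samefile /path/to/x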
Andrey Andreev wrote:
> Andrey Andreev wrote:
>> Adam Funk wrote:
>>>When I see in a listing that a file has one or more other hard links, for
>>>example:
>>>is there any direct way to find the other /path/to/files that point to
>>>the same inodes?
Andrey Andreev wrote:
> Adam Funk wrote:
>>When I see in a listing that a file has one or more other hard links, for
>>example:
>>is there any direct way to find the other /path/to/files that point to the
>>same inodes?
> Something like in bash:
>
> for (( i=
Adam Funk wrote:
> When I see in a listing that a file has one or more other hard links, for
> example:
[snip]
> is there any direct way to find the other /path/to/files that point to the
> same inodes?
Something like in bash:
for (( i=2 ; $i < 10 ; i = $i +1 )) ; do find . -link
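The archive cuts the command off; presumably the loop invoked find once per
link count, roughly like this guess at the original:

  for (( i=2 ; i < 10 ; i = i + 1 )) ; do
      # list regular files having exactly $i hard links
      find . -type f -links $i
  done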
try/this/path -inum 1234
>
> but the find would be fairly time-consuming unless you already had a good
> idea about where to look.
Hmm I can't think of another way off the top of my head, but be
aware that hard links can only be to the same filesystem, so you can
reduce the time by li
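Since a hard link cannot cross filesystems, the usual shortcut is to search
only the filesystem holding the file, for example (the mount point here is
assumed):

  # -xdev stops find from descending into other mounted filesystems
  find /home -xdev -inum 1234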
When I see in a listing that a file has one or more other hard links, for
example:
$ ls -l
...
-rw------- 3 jfunk jfunk 0 2005-06-22 11:45 bar
-rw------- 2 jfunk jfunk 0 2005-06-22 11:46 foo
...
is there any direct way to find the other /path/to/files that point to the
same inodes
Elizabeth Barham <[EMAIL PROTECTED]> wrote:
>
> shelby:~# touch x
> shelby:~# ln x y
>
> Well it works now. Thank you everyone!!
There is a known bug in the 2.4 VFS where if you attempt to open a
bad inode on your file system, then all further attempts to do
operations like ln on anything will
On Wed, 25 Sep 2002 22:01:52 +0200 martin f krafft <[EMAIL PROTECTED]>
wrote:
> also sprach Mark L. Kahnt <[EMAIL PROTECTED]> [2002.09.25.1953
> +0200]:
> > > > > > > | :~$ touch k
> > > > > > > | :~$ ln k y
> >
> > I'm going to toss in a *wild* question, but given that the actual link
> > attem
also sprach Mark L. Kahnt <[EMAIL PROTECTED]> [2002.09.25.1953 +0200]:
> > > > > > | :~$ touch k
> > > > > > | :~$ ln k y
>
> I'm going to toss in a *wild* question, but given that the actual link
> attempt is failing, this wandered through the chasm I use as a mind:
in that case, please don't t
* Mark L. Kahnt ([EMAIL PROTECTED]) [020925 10:55]:
> On Wed, 2002-09-25 at 13:34, Vineet Kumar wrote:
> > * Elizabeth Barham ([EMAIL PROTECTED]) [020925 09:24]:
> > > link("k", "y") = -1 ENOENT (No such file or directory)
> I'm going to toss in a *wild* question, but give
Vineet Kumar <[EMAIL PROTECTED]> writes:
> > I did have some sporadic memory errors with this machine but corrected
> > them although I have not run memtest in a while (the mmap).
>
> This could be it; it does do some 'mmap'ing, so memory errors could be
> affecting it. They can affect everythin
On Wed, 2002-09-25 at 13:34, Vineet Kumar wrote:
> * Elizabeth Barham ([EMAIL PROTECTED]) [020925 09:24]:
> > "Noah L. Meyerhans" <[EMAIL PROTECTED]> writes:
> >
* Elizabeth Barham ([EMAIL PROTECTED]) [020925 09:24]:
> "Noah L. Meyerhans" <[EMAIL PROTECTED]> writes:
>
> > On Wed, Sep 25, 2002 at 10:33:1
"Noah L. Meyerhans" <[EMAIL PROTECTED]> writes:
> On Wed, Sep 25, 2002 at 10:33:15AM -0500, Elizabeth Barham wrote:
> > > | :~$ touch k
> > > | :~$ ln k
Elizabeth Barham <[EMAIL PROTECTED]> [2002-09-25 10:33:15 -0500]:
> > | :~$ ln --version
> > | ln (fileutils) 4.1
> Any idea of what might be causing ln not to work correctly on my
> system?
In addition to 'strace ln k y' use the -v option to have it verbosely
say what it is doing.
rm -rf /tmp
On Wed, Sep 25, 2002 at 10:33:15AM -0500, Elizabeth Barham wrote:
> > | :~$ touch k
> > | :~$ ln k y
>
> Any idea of what might be causing ln not to work correctly on my
> system?
Try running strace on it:
strace ln k y
Look for indications of obvious brokenness. And, as has been said
already,
Elizabeth Barham <[EMAIL PROTECTED]> writes:
> Any idea of what might be causing ln not to work correctly on my
> system?
Check for kernel oopses and run fsck on the filesystem.
--
Alan Shutko <[EMAIL PROTECTED]> - In a variety of flavors!
Jamin W.Collins <[EMAIL PROTECTED]> writes:
> | :~$ touch k
> | :~$ ln k y
> | :~$ ln --version
> | ln (fileutils) 4.1
> | Written by Mike Parker and David MacKenzie.
> |
> | Copyright (C) 2001 Free Software Foundation, Inc.
> | This is free software; see the source for copying conditions. Ther
On Thu, 25 May 2000, Sven Burgener wrote:
>
> >> 108545 drwxr-xr-x 21 root root 1024 Feb 19 17:34 usr
>
> ... I assumed that the hard links theory of files applies to directories
> in the very same way. That would mean that - if it were possible - there
>
The thing is that from this listing:
>> 108545 drwxr-xr-x 21 root root 1024 Feb 19 17:34 usr
... I assumed that the hard links theory of files applies to directories
in the very same way. That would mean that - if it were possible - there
are 21 [hard] links to /usr somewh
On Wed, 24 May 2000, Sven Burgener wrote:
> 108545 drwxr-xr-x 21 root root 1024 Feb 19 17:34 usr
>
> and now I issue:
>
> hp90:/root # find / -inum 108545
> /usr
>
> All I got is /usr! How can that be explained? I must be missing
Well, the inode for /usr is 108545, so when you se
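The arithmetic behind the 21: a directory's link count is 2 (its name in the
parent plus its own '.' entry) plus one '..' entry per immediate
subdirectory, so this /usr presumably held 19 subdirectories. A quick check,
assuming GNU find:

  # count immediate subdirectories; for a link count of 21
  # this should print 19 (21 = 19 + '.' + entry in parent)
  find /usr -mindepth 1 -maxdepth 1 -type d | wc -l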
On Wed, May 24, 2000 at 11:16:42PM +0200, Sven Burgener wrote:
> >"ls -il" gives you the inode-numbers, so you'll see, which files are
> >hard-linked.
> >if you need to search the whole disk, then you want to use
> "find -inum".
>
> Yes, that works. But only for some files I created for this test
>"ls -il" gives you the inode-numbers, so you'll see, which files are
>hard-linked.
>if you need to search the whole disk, then you want to use
"find -inum".
Yes, that works. But only for some files I created for this test just
now. Assume I try to do it for /usr:
108545 drwxr-xr-x 21 root
> In an ls -l you get the # of hard links on the right side of the
> permissions. How do I find where all of those hard links are located on
> the harddisk?
>
"ls -il" gives you the inode-numbers, so you'll see, which files are
hard-linked.
if you need to search the w
Hi debians
In an ls -l you get the # of hard links on the right side of the
permissions. How do I find where all of those hard links are located on
the harddisk?
TIA
Sven
dan writes:
> Shouldn't files like egrep and fgrep be symbolic links to grep, and same
> for any other program like this?
I wrote:
> What's wrong with hard links for this?
> If you have hard links, and replace one of them, you still need to
> replace the other one, sin
>
> dan writes:
> > Shouldn't files like egrep and fgrep be symbolic links to grep, and same
> > for any other program like this?
>
> What's wrong with hard links for this?
If you have hard links, and replace one of them, you still need to
replace the other
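Either way, the shell can check whether two names already share an inode;
test's -ef operator (in bash and GNU test) compares device and inode
numbers, though on modern systems egrep may be a wrapper script rather than
a link:

  # true when both names resolve to the same device and inode
  [ /bin/grep -ef /bin/egrep ] && echo 'same file'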
1. Entries . and .. are removed.
2. The directory inode is unlinked from the parent's directory
entry.
Having a hard link to a directory gives a first problem:
Say you do:
mkdir /usr/dir1
link /usr/dir1 /usr/local/dir1 (hard links /usr/dir1 to
/usr/local/dir1)
Then if you do c
[..]
> Hard linked directories are bad, it would take longer than that to explain.
That's a pity, 'cause I've been wanting to know why they are
bad for a long time. Do you have any reference where I can
search for an answer on that one?
> For maximum security in chrooted environments:
> o don't
e directory to a common directory that they can all read/write to.
>
> I'm using wu-ftpd, ext2fs file system and debian 1.3. This should work
> according to the documentation that I've read. But apparently ext2fs does
> not allow hard links to directories. Is this true? Is there an
and debian 1.3. This should work
according to the documentation that I've read. But apparently ext2fs does
not allow hard links to directories. Is this true? Is there any other way
to implement this?
Thanks in advance,
Al Youngwerth
[EMAIL PROTECTED]