While upgrading my ports from 12.0-RELEASE to 12-STABLE, I encountered a
lot of "Text file busy" errors and could update almost no ports successfully.
Then I attempted to fix or work around the issue, and finally I found that
r348991, Switch to use shared vnode locks for text files during image
activa
Hi,
I got the panic below in unionfs. Head as of r325569, 2017-11-09 12:41:00 +1100
(Thu, 09 Nov 2017).
This is a ufs filesystem union mounted on top of a read-only ufs /etc. I know
unionfs has "architecture issues”. Is this resolvable?
panic: unionfs: it is not unionfs-vnode
cpuid = 5
Hi,
I'm experimenting with unionfs for my next FreeBSD book, and found a
reproducible panic. I haven't needed to report a panic for... uh... I
don't think I've done it this century, so I might be a wee bit wrong
in what you need... please correct me if you need other info.
I forgot the command that triggers the panic! My apologies.
# cd /jails/jail1
# rmdir proc
panic!
The jail is not running, no procfs is mounted.
On Wed, Sep 02, 2015 at 10:41:57AM -0400, Michael W. Lucas wrote:
> Hi,
>
> I'm experimenting with unionfs for my next FreeBSD boo
For information: it's possible to reproduce this problem using the
steps given in kern/121385 (a five-year-old PR).
Regards,
Olivier
I've just had another crash related to mount_unionfs, but on the sparc64
arch this time (10.0-ALPHA4 #2 r255947).
Still no core dump:
panic: vm_fault: fault on nofault entry, addr: 1c05f2000
cpuid = 0
KDB: stack backtrace:
panic() at panic+0x1d4
vm_fault_hold() at vm_fault_hold+0x174
vm_fault() at
Hi all,
I've got a panic on my FreeBSD 10.0 system.
The system was building ports with poudriere and generating special
NanoBSD images (which use unionfs) when it panicked.
I didn't have enough swap space for a full dump, so I only have a text dump:
root@orange:/var/crash # cat info.last
D
On Tue, 10 Apr 2012, Daichi GOTO wrote:
Thanks, kwhite.
I found another lock issue. Please try the included patch.
...
Success.
Your latest patch fixes the problem. Thanks!
...keith
ssue. If that works well,
> > I'm going to refine and commit it to head.
> >
>
> Thanks!
>
> I tried your patch but get the same panic:
>
> FreeBSD 10.0-CURRENT #6 r233856M: Mon Apr 9 09:47:42 EDT 2012
> ...
> exclusive lock of (lockmgr) uf
r233856M: Mon Apr 9 09:47:42 EDT 2012
...
exclusive lock of (lockmgr) ufs @
/usr/src/sys/modules/unionfs/../../fs/unionfs/union_vnops.c:1897
while share locked from /usr/src/sys/modules/unionfs/../../fs/unionfs/
union_vnops.c:1897
panic: excl->share
cpuid = 0
KDB: enter:
ollowing panic when trying to run
> an executable on a unionfs filesystem:
>
> exclusive lock of (lockmgr) ufs @
> /usr/src/sys/modules/unionfs/../../fs/unionfs/union_vnops.c:1843
> while share locked from
> /usr/src/sys/modules/unionfs/../../fs/unionfs/union_vn
Starting with r230341, I get the following panic when trying to run
an executable on a unionfs filesystem:
exclusive lock of (lockmgr) ufs @
/usr/src/sys/modules/unionfs/../../fs/unionfs/union_vnops.c:1843
while share locked from
/usr/src/sys/modules/unionfs/../../fs/unionfs
Hi unionfs users ;)
We have developed a new unionfs feature, "intermediate umount".
You can do something like this:
# mount_unionfs /test2 /test1
# mount_unionfs /test3 /test1
# df
<above>:/test2 x x x xx% /test1
<above>:/test3 x x x xx% /test1
# umount '<above>:/test2
e exhaustion.
This recursion in unionfs_statfs is used to gather statistics (some of
which are faked, according to comments in the procedure).
Why not replace the recursion with a loop? (I'm not skilled enough to do
that :)
And a feature request:
it would be great if it were possible to umount one of
> why not replace the recursion with a loop? (I'm not skilled enough to do
> that :)
Exactly. It is one of the possible plans to reduce kernel stack consumption,
but rewriting all of the related recursive code into loops is
slow work to complete. Rewriting it into loops is the next s
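For illustration, here is a rough sketch of the change being discussed. This is not the real unionfs_statfs(); the "layer" structure and function names below are made up. The point is the general pattern: a recursive walk down a stack of layers costs one kernel stack frame per layer, while the loop version gives the same result with constant stack use no matter how deep the stack of mounts is.

/*
 * Rough sketch only: not the real unionfs_statfs(), and the "layer"
 * structure below is made up.
 */
struct layer {
    struct layer *lower;    /* next layer down, NULL at the bottom */
    long blocks;            /* per-layer statistic to aggregate */
};

/* Recursive version: stack use grows with the number of layers. */
static long
count_blocks_recursive(struct layer *lp)
{
    if (lp == NULL)
        return 0;
    return lp->blocks + count_blocks_recursive(lp->lower);
}

/* Loop version: same result, constant stack use. */
static long
count_blocks_loop(struct layer *lp)
{
    long total = 0;

    for (; lp != NULL; lp = lp->lower)
        total += lp->blocks;
    return total;
}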
Hi unionfs lovers,
It is possible to mount unionfs at the same mount point more than
once. However, too many stacked mounts can exhaust the kernel stack
and easily lead to a system panic. Some users have reported getting a
system panic from multiple unionfs mounts.
So I make
> Apart from this issue with unionfs, I am also experiencing another
> issue, where for some reason I cannot perform a second mount of the CD
> right after booting the system. Basically, my WIP FreeBSD boot CD does
> the following (but written in C):
>
> mount -t cd9660 /dev/is
Hi all,
Even though the proposed fix for unionfs would still be nice to have in
SVN, I just wrote a patch for tmpfs to add support for whiteouts:
http://80386.nl/pub/tmpfs-whiteout.txt
Basically I've implemented it by allowing directory entries to refer to
NULL inodes, to indicat
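For readers unfamiliar with whiteouts, here is a sketch of the idea as described above. It is not the actual tmpfs patch (that is at the URL), and the structure and function names are invented: a directory entry that carries a name but no inode acts as a whiteout, existing only to mask an entry of the same name on a lower layer, and lookups report it as absent.

/*
 * Sketch of the idea only; the real patch is at the URL above and these
 * names are invented.  A directory entry whose inode pointer is NULL is
 * a whiteout: the name is kept so it can mask a lower-layer entry, but
 * lookups treat it as nonexistent.
 */
#include <stddef.h>

struct inode_sketch;                    /* stands in for the real inode type */

struct dirent_sketch {
    const char *name;
    struct inode_sketch *inode;         /* NULL means "whiteout" */
};

/* Returns 1 if the entry should be reported as absent to callers. */
static int
entry_is_whiteout(const struct dirent_sketch *de)
{
    return de->inode == NULL;
}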
Hi Ed and unionfs fan guys.
Ed pointed out a contradiction between the current
unionfs implementation and its manual, and sent me a
patch.
Thanks Ed ;)
Index: sys/fs/unionfs/union_vfsops.c
===
--- sys/fs/unionfs
* Pawel Jakub Dawidek wrote:
> What you are trying to do here is to mount /dev/iso9660/freebsd for the
> second time? This is not supported. The check is there to prevent doing
> this, as it will panic on you when you try to unmount first mount (not
> really a problem in your case, as the first mo
Hi Daichi,
I think Keith Packard of Xorg once wrote a commit message along the
lines of "5000 lines of code removed, feature added" This seems to be
similar, albeit on a smaller scale. ;-)
Apart from this issue with unionfs, I am also experiencing another
issue, where for some reaso
On 17.07.10 17:45, Edward Tomasz Napierala wrote:
Author: trasz
Date: Sat Jul 17 15:45:20 2010
New Revision: 210194
URL: http://svn.freebsd.org/changeset/base/210194
Log:
Remove updating process count by unionfs. It serves no purpose, unionfs just
needs root credentials for a moment
On Sun, Jul 20, 2003, Divacky Roman wrote:
> Hi,
>
> I might be wrong but this:
>
> free(mp->mnt_data, M_UNIONFSMNT); /* XXX */
> mp->mnt_data = 0;
>
> seems to me wrong and might cause crashes etc.
> am I correct or wrong?
>
> it's from union_vfsops.c:384
What's w
wrong?
Could you describe a scenario in which this could be dangerous? Or why do you
think it is?
This memory is allocated when mounting a unionfs file system,
so it is quite natural to free it when unmounting the file system.
--
Pawel Jakub Dawidek [EMAIL PROTECTED]
UNIX Syste
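To illustrate the pattern under discussion, here is a plain userland sketch, not the actual union_vfsops.c code: the per-mount private data is allocated at mount time, freed at unmount time, and the pointer is cleared afterwards so that any stale access fails loudly instead of silently reusing freed memory.

/*
 * Plain userland sketch of the pattern, not the actual union_vfsops.c
 * code.
 */
#include <stdlib.h>

struct mount_sketch {
    void *mnt_data;             /* per-mount private data */
};

static void
unmount_sketch(struct mount_sketch *mp)
{
    free(mp->mnt_data);         /* allocated when the filesystem was mounted */
    mp->mnt_data = NULL;        /* nothing may touch it after this point */
}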
Hi,
I might be wrong but this:
free(mp->mnt_data, M_UNIONFSMNT); /* XXX */
mp->mnt_data = 0;
seems to me wrong and might cause crashes etc.
am I correct or wrong?
it's from union_vfsops.c:384
thnx
Roman Divacky
Recently I've been doing some twisted things with
nullfs/unionfs. I've managed to contain myself to using
read-only nullfs, but the whole point of the unionfs usage is
to isolate changes (i.e., I need read-write). Anyway, I've hit a
VFS bug that, upon reading current@, appears as tho
e had no serious problems with -stable, though
a few apparent bugs seem to be obvious; however, it seems that
the unionfs mount in -current is a great way for me to readily
and reliably panic the system, with a couple mutex panics that I
didn't make notes of.
Apart from the failure of getcwd() on
Timo,
Please send your crash dump to me.
While I have yet to cause a panic using unionfs, I've noticed a few
interesting things - like the inability to umount (no kidding - am I
doing something wrong?). Also, a whiteout file was affected. Yes, I'll
recheck that - that didn't ma
ght I might as well try to track down this
> > > problem. But before I start, is anyone else working on this one? I'd
> > > hate to waste my not exactly ample time trying to duplicate somebody
> > > else's work.
> >
> > Frankly, unless you have the time
> I'm starting VFS hacking again :) Throw up a PR, I'll assign it to myself
> and I'll look at it for you.
Mind if I tag along for the ride? I just finished a "code walkthrough"
class and vfs_ is somewhat fresh. Painfully fresh, I might add. I'll
gladly sanity-check patches. I was looking for a e
one else working on this one? I'd
> > hate to waste my not exactly ample time trying to duplicate somebody
> > else's work.
>
> Frankly, unless you have the time to fix unionfs completely I wouldn't
> bother. We know it's broken, what's needed is someone with
ample time trying to duplicate somebody
> else's work.
Frankly, unless you have the time to fix unionfs completely I wouldn't
bother. We know it's broken, what's needed is someone with a good handle on
the VFS code to sit down and spend a week or so to fix it. On the other
ha
Here be dragons, I know :-). Got myself a nice juicy & reproducible crash here,
so I thought I might as well try to track down this problem. But before I start, is
anyone else working on this one? I'd hate to waste my not exactly ample time
trying to duplicate somebody else's work.
Regards,
Timo
The device and inode numbers returned by
stat() are supposed to be unique.
> sure which one it is. Well, this breaks unionfs mounts because the device
> inode is the same, but the inode numbers are different since it's a mount
> point. What a subtle problem!
st_dev for a mount on non-device is norm
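As a reference point, here is a small standalone sketch (not from the thread) that prints and compares the (st_dev, st_ino) pairs of two paths. The conventional assumption is that this pair uniquely identifies a file, and mount points and union mounts are exactly where that assumption gets interesting.

#include <sys/stat.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Standalone sketch, not from the thread: print and compare the
 * (st_dev, st_ino) pairs of two paths.
 */
int
main(int argc, char *argv[])
{
    struct stat a, b;

    if (argc != 3) {
        fprintf(stderr, "usage: %s path1 path2\n", argv[0]);
        return 1;
    }
    if (stat(argv[1], &a) == -1 || stat(argv[2], &b) == -1) {
        perror("stat");
        return 1;
    }
    printf("%s: dev=%ju ino=%ju\n", argv[1],
        (uintmax_t)a.st_dev, (uintmax_t)a.st_ino);
    printf("%s: dev=%ju ino=%ju\n", argv[2],
        (uintmax_t)b.st_dev, (uintmax_t)b.st_ino);
    printf("same (dev, ino) pair: %s\n",
        (a.st_dev == b.st_dev && a.st_ino == b.st_ino) ? "yes" : "no");
    return 0;
}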
s is the -current list
and I figured that if anywhere, this might be the place. It seems like half the
filesystems' man pages have that disclaimer at the bottom =)
I guess my next question is: is there a better list to take the discussion
of fixing up unionfs to (e.g., hackers? fs?), and is there any decent amoun
which one it is. Well, this breaks unionfs mounts because the device
inode is the same, but the inode numbers are different since it's a mount
point. What a subtle problem!
But it showed up. When I commented out the optimized parts and tried, it
worked perfectly. I'm updating my source tree
A new unionfs has been committed. It fixes a whole lot of things,
but due to the complexity of the commit people should consider
unionfs to be unstable and under test. I will say, though, that
unionfs was terribly unstable before the commit and there is very
little that I