When I went to build packages, the virtualbox-ose build failed due to ar segfaulting.
>
> Can you give the patch in https://reviews.freebsd.org/D11687 a try?
I still see ar segfault. I'm also really curious about how the
rpc.lockd change can trigger the bug.
___
On 11 July 2017 at 12:44, Don Lewis wrote:
> This is a really strange problem ...
>
> Last week I upgraded my 12.0-CURRENT package build box from r318774 to
> r320570. I also upgraded the poudriere jail to match. When I went to
build packages, the virtualbox-ose build failed due to ar segfaulting. I
rolled back to a poudriere jail with rev r318774
and the build passed. I then bisected, which took most of the last
week, and found that this commit is what is causing the breakage:
r320183 | delphij | 2017-06-20 23:34:06 -0700 (Tue, 20 Jun 2017) | 12 lines
Reduce code duplication in rpc.lockd.
Reuse
On Thu, Apr 19, 2012 at 08:44:37PM -0400, Rick Macklem wrote:
> Andrey Simonenko wrote:
> > On Mon, May 30, 2011 at 04:56:02PM -0400, Rick Macklem wrote:
> > > Hi,
> > >
> > > I have patches for the mountd, rpc.statd and rpc.lockd daemons
> > > that are meant to keep them from failing when a dynamically
> > > selected port# is not available for some combination of
> > > udp,tcp X ipv4,ipv6
all specified addresses), but these attempts do not
guarantee that mountd will not fail.
Several systems do not have an -h-like option for nfsd, mountd, etc.
It looks like when this option was proposed for mountd, rpc.statd and
rpc.lockd, it was not considered that using a non-wildcard address for RP
Andrey Simonenko wrote:
> On Mon, May 30, 2011 at 04:56:02PM -0400, Rick Macklem wrote:
> > Hi,
> >
> > I have patches for the mountd, rpc.statd and rpc.lockd daemons
> > that are meant to keep them from failing when a dynamically
> > selected port# is not available for some combination of
> > udp,tcp X ipv4,ipv6
On Mon, May 30, 2011 at 04:56:02PM -0400, Rick Macklem wrote:
> Hi,
>
> I have patches for the mountd, rpc.statd and rpc.lockd daemons
> that are meant to keep them from failing when a dynamically
> selected port# is not available for some combination of
> udp,tcp X ipv4,ipv6
Hi,
I have patches for the mountd, rpc.statd and rpc.lockd daemons
that are meant to keep them from failing when a dynamically
selected port# is not available for some combination of
udp,tcp X ipv4,ipv6
If anyone would like to test these patches, they can be found
at:
http
I recompiled my system with CURRENT from "Wed Nov 19 12:53:04".
rpc.lockd core dumps still appear, but I noticed something: each time, just before
it crashes, I can hear a very low noise coming from my hard drive. It is hard to
describe, but it sounds as if my hard drive was powering on or coming back
> Check to make sure that rpc.statd is running. There was an old bug that
> rpc.lockd would dump core if it couldn't find a statd.
Oh, it is running :)
Actually, all my home dirs are mounted with NFS, so rpc.lockd gets used a lot.
That is why it is an important concern to me.
I couldn
I will rebuild rpc.lockd with -ggdb as you said and see if it is repeatable.
Unfortunately, I'm not home right now, so I'll do this in 3 or 4 days.
Regards.
Antoine
___
[EMAIL PROTECTED] mailing list
http://lists.freebsd.org/mailman/listinfo/freeb
On Sat, 8 Nov 2003, Antoine Jacoutot wrote:
> Any idea ?
> I would be pleased to send more information but I didn't see where to find
> more debugging options for rpc.lockd.
Check to make sure that rpc.statd is running. There was an old bug that
rpc.lockd would dump core if it couldn't find a statd.
On Sat, Nov 08, 2003 at 12:02:03PM +0100, Antoine Jacoutot wrote:
> Hi :)
>
> Are there any known issues with rpc.lockd under -CURRENT?
> I had a look at the gnats database and did not find anything related.
> I'm asking this because I have a lot of:
> kernel: pid 70065 (rpc.lockd), uid 0: exited on signal 11 (core dumped)
On Saturday 08 November 2003 12:26, Harti Brandt wrote:
> I can only say that I had a core dump under current on sparc, but the core
> file was unusable. Can you compile rpc.lockd with -g in CFLAGS and LDFLAGS
> and try to find out with gdb where it aborts?
Alright, as soon as I get home in 4/5 days.
On Sat, 8 Nov 2003, Antoine Jacoutot wrote:
AJ>Hi :)
AJ>
AJ>Are there any known issues with rpc.lockd under -CURRENT?
AJ>I had a look at the gnats database and did not find anything related.
AJ>I'm asking this because I have a lot of:
AJ>kernel: pid 70065 (rpc.lockd), uid 0: exited on signal 11 (core dumped)
Hi :)
Are there any known issues with rpc.lockd under -CURRENT?
I had a look at the gnats database and did not find anything related.
I'm asking this because I have a lot of:
kernel: pid 70065 (rpc.lockd), uid 0: exited on signal 11 (core dumped)
Any idea ?
I would be pleased to send more information but I didn't see where to find
more debugging options for rpc.lockd.
On Sat, 7 Jun 2003 22:27:14 -0700 (PDT)
David Yeske <[EMAIL PROTECTED]> wrote:
> Jun 8 00:52:33 photon sendmail[293]: h584pRfm000293: SYSERR(root): cannot
> flock(./tfh584pRfm000293, fd=5, type=6, omode=40001, euid=25^C.
> NFS access cache time=2
> Starting statd.
> Starting lockd.
>
> I should clarify that /etc/rc.d/virecover is calling sendmail.
> Generally, sendmail uses flock() on the aliases file and related databases
> to ensure consistency. As far as I know, it's unrelated to redirection.
And for locking queue files.
> > Here is what Control-T does
> > load: 0.20 cmd: sendmail 292 [pause] 0.02u 0.04s 0% 2016k
>
> pause, eh? That
ar as I know, it's unrelated to redirection.
> I think a solution could be to have virecover called later on. Why are
> rpc.lockd and rpc.statd not started directly after rpcbind?
No idea. Moving virecover later is a possibility; probably the missing
piece is that sendmail should depend
This happens when sendmail is run from virecover.
Is this because sendmail is taking redirection,
and it needs to flock() for that?
I think a solution could be to have virecover called later on.
Why are rpc.lockd and rpc.statd not started directly after
rpcbind?
Here is some more output.
Recovering
On Sat, 7 Jun 2003, David Yeske wrote:
> Jun 8 00:52:33 photon sendmail[293]: h584pRfm000293: SYSERR(root): cannot
> flock(./tfh584pRfm000293, fd=5, type=6, omode=40001, euid=25^C.
> NFS access cache time=2
> Starting statd.
> Starting lockd.
>
> It looks like sendmail starts before rpc.lockd and rpc.statd?
Jun 8 00:52:33 photon sendmail[293]: h584pRfm000293: SYSERR(root): cannot
flock(./tfh584pRfm000293, fd=5, type=6, omode=40001, euid=25^C.
NFS access cache time=2
Starting statd.
Starting lockd.
I should clarify that /etc/rc.d/virecover is calling sendmail.
Does virecover need to be called this early?
Jun 8 00:52:33 photon sendmail[293]: h584pRfm000293: SYSERR(root): cannot
flock(./tfh584pRfm000293, fd=5, type=6, omode=40001, euid=25^C.
NFS access cache time=2
Starting statd.
Starting lockd.
It looks like sendmail starts before rpc.lockd and rpc.statd? This will cause
diskless clients to
On Sun, Nov 17, 2002 at 09:31:40PM +0100, Martijn Pronk wrote:
> I hope this is enough info for you, if you need a real dump to look
> at yourself, just let me know, I'll put it online then.
Thanks, but the binary dump would be more useful so I can read it into
ethereal. ethereal does a really g
Andrew P. Lentvorski wrote:
Can you produce a packet trace for Kris? This would give him a known good
trace so that he can point out the differences from his particular
configuration, or, alternatively, he could file a bug report with the
Linux folks.
OK, here is a bit of output from tcpdump,
On Fri, 15 Nov 2002, Martijn Pronk wrote:
> However, I had some starting problems with rpc.lockd.
> Apparently it requires that rpc.statd also be running.
Hmmm, it shouldn't fail if rpc.statd isn't enabled, but it should probably
complain loudly. Make sure you file a FreeBSD bug report.
Thanks for the info, I will try that tonight after I get back home
from the first
day of the EuroBSDcon.
OK, I tested it with a Solaris 8 NFS server with -current as a client,
and locking does indeed work. (At least, vi didn't show the word "UNLOCKED"
in the bottom line after I enabled rpc.lockd.)
On Thu, Nov 14, 2002 at 01:47:43AM -0800, Andrew P. Lentvorski wrote:
> On Wed, 13 Nov 2002, Kris Kennaway wrote:
>
> > Yes, and I have no problems interoperating NFS under 4.x between these
> > machines (or under 5.0 as long as I don't try and lock any files) -
> &
On Wed, 13 Nov 2002, Kris Kennaway wrote:
> Yes, and I have no problems interoperating NFS under 4.x between these
> machines (or under 5.0 as long as I don't try and lock any files) -
> it's just 5.0's rpc.lockd.
Can you help isolate the problem by trying this same operation
On Wed, Nov 13, 2002 at 06:13:21PM -0800, Terry Lambert wrote:
> Kris Kennaway wrote:
> > A few months ago I posted about rpc.lockd interop problems I am having
> > between my 5.0 NFS client and a Redhat 7.1 server. Both are running
> > rpc.lockd, but when I send a lock request to the server it hangs
> > forever blocked on the /var/run/lock socket.
Kris Kennaway wrote:
> A few months ago I posted about rpc.lockd interop problems I am having
> between my 5.0 NFS client and a Redhat 7.1 server. Both are running
> rpc.lockd, but when I send a lock request to the server it hangs
> forever blocked on the /var/run/lock socket.
>
* Kris Kennaway <[EMAIL PROTECTED]> [021113 16:41] wrote:
> A few months ago I posted about rpc.lockd interop problems I am having
> between my 5.0 NFS client and a Redhat 7.1 server. Both are running
> rpc.lockd, but when I send a lock request to the server it hangs
> forever blocked on the /var/run/lock socket.
A few months ago I posted about rpc.lockd interop problems I am having
between my 5.0 NFS client and a Redhat 7.1 server. Both are running
rpc.lockd, but when I send a lock request to the server it hangs
forever blocked on the /var/run/lock socket.
tcpdump shows that the lock RPC request is
All,
I'm running -current on my main box, NFS mounting home directories from my
-stable box. The -current box is configured as an NFS client only as it does
not serve anything. The -stable box is configured as an NFS server only.
In this configuration, running rpc.statd and rpc.lockd on
After the two 'kernel trap 12 with interrupts
disabled' messages, the hot key does not work anymore.
Using gdb on rpc.lockd and some ddb single-stepping, I was able to
see that the freeze occurs somewhere during the first call to
callrpc().
I'll try a remote GDB session and see if it helps.
On 2001-05-29, Andrew Gallatin wrote:
> Did you also rebuild your kernel?
Yep, I did buildworld buildkernel installkernel installworld,
then mergemaster and reboot.
> In order for a bug report like this to be useful, you need to supply a
> backtrace from ddb or gdb. See the Kernel Debugging
Thomas Quinot [[EMAIL PROTECTED]] wrote:
> In the hope to check for any recent improvements with lockd,
> I cvsupped this morning and remade world. I now have a very
Did you also rebuild your kernel?
In order for a bug report like this to be useful, you need to supply a
backtrace from ddb or gdb
In the hope to check for any recent improvements with lockd,
I cvsupped this morning and remade world. I now have a very
strange behaviour of lockd:
* rc.conf has nfs_server_enable, rpc_lockd_enable and rpc_statd_enable
set to YES.
* the system seems to boot correctly; rpc.lockd and
On Sat, Dec 16, 2000 at 04:26:58PM -0600, Dan Nelson wrote:
> That's why dotlocking is recommended for locking mail spools. Both
> procmail and mutt will dotlock your mail file while it's being
> accessed.
Or Maildirs.
--
Jos Backus _/ _/_/_/"Modularity is not a hack."
In the last episode (Dec 16), Axel Thimm said:
> Wouldn't that mean, that you might cause data corruption if, say, I
> was to read my mail from a FreeBSD box over an NFS mounted spool
> directory (running under OSF1 in our case), and I decided to write
> back the mbox to the spool dir the same moment
tions to the kernel. However a FreeBSD kernel is still unable to
> acquire an NFS lock. This latter case is quite likely what your users are
> seeing the effects of.
Just to understand it right: The current rpc.lockd is neither requesting
locks, if FreeBSD is an NFS client to whatever NFS server
Going with the lockd code on builder is great with me. The last I had
looked it had some of the same issues as the lockd developed here (no
handling of grace periods, etc.), so feature-wise we are even. The RPI
lockd has the advantage of being known by some of us to a much greater extent
th
:I'm not going to take such an action w/o the blessing of -core. :)
:
:--
:David Cross | email: [EMAIL PROTECTED]
:Lab Director | Rm: 308 Lally Hall
In regards to Jordan's message just a moment ago... you know, I *totally*
forgot t
On Fri, Dec 15, 2000 at 12:09:32AM +0100, Thierry Herbelot wrote:
> Hello,
>
> I've recently seen in the NetBSD 1.5 release Notes that *they* claim to
> have a fully functional rpc.lockd manager : "Server part of NFS locking
> (implemented by rpc.lockd(8)) now works."
I'm not going to take such an action w/o the blessing of -core. :)
--
David Cross | email: [EMAIL PROTECTED]
Lab Director | Rm: 308 Lally Hall
Rensselaer Polytechnic Institute, | Ph: 518.276.2860
Department of Computer Science
Hello,
I've recently seen in the NetBSD 1.5 release Notes that *they* claim to
have a fully functional rpc.lockd manager : "Server part of NFS locking
(implemented by rpc.lockd(8)) now works."
could someone have a look at what our cousins have done and perhaps
import
knock on the door from someone who's completed the client, after
all, what use is the client code without the server code?
As an interim solution we could put the lockd into the system as
rpc.lockd-experimental.
I think had we done this over six months ago when you made the
initial announcement
I pruned the Cc: list a bit...
One of the email messages that you quoted has the URL for the latest
development of the lockd code. As far as tests go it appears to be mostly
complete (there appears to be an issue with RPC64 on little endian machines,
but I have not yet had a chance to crawl through
Dear all,
rpc.lockd in FreeBSD suffers from a public server's laziness --- it says it's
done the job, but never did anything besides talking...
Searching through the lists gives different stories. Some say that NFS locking
isn't really necessary, but what about locking critical
From: "Roman Shterenzon" <[EMAIL PROTECTED]>
> On Tue, 19 Sep 2000 [EMAIL PROTECTED] wrote:
> > Yeah probably should...perhaps suggest it to -docs. Someone (from
> > something.edu, perhaps rpi.edu) posted a URL to one of the lists of a
> > working but untested
Now that the code freeze is over, can the 64Bit XDR changes be made? This
is the only thing preventing the next release of the rpc.lockd code at this
point.
--
David Cross | email: [EMAIL PROTECTED]
Acting Lab Director | NYSLP: FREEBSD
On Mon, 6 Mar 2000, David E. Cross wrote:
:Version 2 of the lock manager is ready to be released. Amitha
:says that it passes all of the tests in the suite posted by Drew (thanks
:Drew). A notable exception to this is on SGI where some lock requests
:are never even received from the remote host.
Version 2 of the lock manager is ready to be released. Amitha
says that it passes all of the tests in the suite posted by Drew (thanks
Drew). A notable exception to this is on SGI where some lock requests
are never even received from the remote host. Also DOS sharing is not
yet complete.
On a
test here. Here is my scenario, tell me
if I can help. I have many freebsd machines acting as nfs clients to
access data on a combination of sun and netapp nfs servers. If I set up
this version of rpc.lockd on the freebsd clients will it lock the files
on the servers so that only one client at a time
can understand part of the reason for this... 4.0-RELEASE is right
> around the corner, and people are focusing on delivering a stable product,
> not introducing alpha code into the system at the last second. the current
rpc.lockd is a known value, placing a "maybe" version into the stream
> I can understand part of the reason for this... 4.0-RELEASE is right
> around the corner, and people are focusing on delivering a stable product,
> not introducing alpha code into the system at the last second. the current
> rpc.lockd is a known value, placing a "maybe"
tand part of the reason for this... 4.0-RELEASE is right
> around the corner, and people are focusing on delivering a stable product,
> not introducing alpha code into the system at the last second. the current
> rpc.lockd is a known value, placing a "maybe" version into the stream at
At 5:41 AM +0100 2/12/00, Ferdinand Goldmann wrote:
>On Fri, 11 Feb 2000, Jordan K. Hubbard wrote:
>
> > Well, I'd first be very interested to know if anyone has even seen
> > this work. :)
>
>Well, will the 4.0 lockd also work with a 3.3 system? I could use
>a working lockd, but I do not want to upgrade this system to 4.0 yet.
on delivering a stable product,
not introducing alpha code into the system at the last second. the current
rpc.lockd is a known value, placing a "maybe" version into the stream at
this point would do no one any good.
> > I'm surprised no one has gotten back to you on this, since
On Fri, 11 Feb 2000, Jordan K. Hubbard wrote:
> Well, I'd first be very interested to know if anyone has even seen
> this work. :)
Well, will the 4.0 lockd also work with a 3.3 system? I could use a working
lockd, but I do not want to upgrade this system to 4.0 yet.
/ferdinand
:I realize that we are all very busy and the coming 4.0-RELEASE has also
:compounded things, but I have heard nothing back on the rpc.lockd that
:was released just a short time ago. I take it no news is good news and
:we can start the process of bringing it into the source tree? :)
:
:--
:David
4.0-RELEASE has also
> > compounded things, but I have heard nothing back on the rpc.lockd that
> > was released just a short time ago. I take it no news is good news and
> > we can start the process of bringing it into the source tree? :)
>
> I'm surprised no one has gotten
* David E. Cross <[EMAIL PROTECTED]> [000211 16:50] wrote:
> I realize that we are all very busy and the coming 4.0-RELEASE has also
> compounded things, but I have heard nothing back on the rpc.lockd that
> was released just a short time ago. I take it no news is good news and
> we can start the process of bringing it into the source tree? :)
I realize that we are all very busy and the coming 4.0-RELEASE has also
compounded things, but I have heard nothing back on the rpc.lockd that
was released just a short time ago. I take it no news is good news and
we can start the process of bringing it into the source tree? :)
--
David Cross
Amitha (the person who has been working on the lockd code) has finished
most of his work. There are still some issues with handling async locks
and cancel messages. Also we were not able to implement the full NLM
protocol as the FreeBSD kernel does not currently request NFS locks (we
should f
It is almost done. A working and very lightly tested version of the code will
be made available on Monday (Jan 24). It should be considered alpha quality,
I would not recommend running important NFS servers with this code.
--
David Cross | email: [EMAIL PROTECTED]
Hi
I'm in trouble with the rpc.lockd daemon
we have ~120 HPUX workstations running HPUX 10.20
that use NFS client mounted /var/mail directory to
our FreeBSD 3.1 mailhub.
The problem is some /var/mail/login.lock files stay
in the directory after the user has sent his email
and are not removed.