[Yet another oddity.]
On 2020-Jun-11, at 21:05, Mark Millard wrote:
>
> There is another oddity in the code structure, in
> that if pt was ever NULL the code would misuse the
> NULL before the test for non-NULL is made:
>
> pt = moea_pvo_to_pte(pvo, -1);
> . . .
>
There is another oddity in the code structure, in
that if pt was ever NULL the code would misuse the
NULL before the test for non-NULL is made:
pt = moea_pvo_to_pte(pvo, -1);
. . .
old_pte = *pt;
/*
* If the PVO is in the page table
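[To make the ordering issue concrete, a small self-contained sketch of the pattern described
above; only the names moea_pvo_to_pte, pt, and old_pte come from the quoted code, everything
else is an illustrative stand-in, not the actual pmap source:]
/* Stand-in declarations so the sketch compiles on its own. */
struct pte { unsigned pte_hi, pte_lo; };
struct pvo_entry;
struct pte *moea_pvo_to_pte(const struct pvo_entry *pvo, int ptegidx); /* may return NULL */
void
pattern_as_described(struct pvo_entry *pvo)
{
	struct pte *pt, old_pte;
	pt = moea_pvo_to_pte(pvo, -1);
	/* . . . */
	old_pte = *pt;		/* a NULL pt is already dereferenced here . . . */
	/*
	 * If the PVO is in the page table . . .
	 */
	if (pt != NULL) {	/* . . . before this non-NULL test is ever reached */
		/* work on the page table entry */
	}
	(void)old_pte;		/* quiet -Wunused; the real code uses the copy further down */
}
void
pattern_reordered(struct pvo_entry *pvo)
{
	struct pte *pt, old_pte;
	/* One way to avoid misusing a NULL pt: take the copy only after the test. */
	pt = moea_pvo_to_pte(pvo, -1);
	if (pt != NULL) {
		old_pte = *pt;
		/* work on the page table entry */
		(void)old_pte;
	}
}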
[Just a better panic backtrace text copy.]
On 2020-Jun-11, at 20:29, Mark Millard wrote:
> On 2020-Jun-11, at 19:25, Justin Hibbits wrote:
>
>> On Thu, 11 Jun 2020 17:30:24 -0700
>> Mark Millard wrote:
>>
>>> On 2020-Jun-11, at 16:49, Mark Millard wrote:
>>>
On 2020-Jun-11, at 14:42,
>>> Jun 11 16:03:27 FBSDG4S2 kernel: pid 871 (mountd), jid 0, uid 0:
>>> exited on signal 6 (core dumped) Jun 11 16:03:40 FBSDG4S2 kernel:
>>> pid 1065 (su), jid 0, uid 0: exited on signal 6 Jun 11 16:04:13
>>> FBSDG4S2 kernel: pid 1088 (su), jid 0, uid 0: exited on signal 6
>>
d 0: exited on
> signal 6
>
> Jun 11 16:05:46 FBSDG4S2 kernel: pid 873 (nfsd), jid 0, uid 0: exited on
> signal 6 (core dumped)
>
>
> Rebooting and rerunning and showing the stress output and such
> (I did not capture copies during the first test, but the first
> test had
Rebooting and rerunning and showing the stress output and such
(I did not capture copies during the first test, but the first
test had similar messages at the same sort of points):
Second test . . .
# stress -m 2 --vm-bytes 1700M
stress: info: [1166] dispatching hogs: 0 cpu, 0 io, 2 vm, 0 hdd
:
/usr/src/contrib/jemalloc/include
> > Jun 11 16:04:28 FBSDG4S2 kernel: pid 968 (sshd), jid 0, uid 0:
> > exited on signal 6
> >
> > Jun 11 16:05:42 FBSDG4S2 kernel: pid 1028 (login), jid 0, uid 0:
> > exited on signal 6
> >
> > Jun 11 16:05:46 FBSDG4S2 kernel: pid 873 (nfsd), jid 0, uid 0:
On 2020-Jun-11, at 14:41, Brandon Bergren wrote:
> An update from my end: I now have the ability to test dual processor G4 as
> well, now that mine is up and running.
Cool.
FYI:
Dual processors are not required for the
problem to happen: the stress-based testing
showed the problem just as
On 2020-Jun-11, at 13:55, Justin Hibbits wrote:
> On Wed, 10 Jun 2020 18:56:57 -0700
> Mark Millard wrote:
>
>> On 2020-May-13, at 08:56, Justin Hibbits wrote:
>>
>>> Hi Mark,
>>
>> Hello Justin.
>
> Hi Mark,
Hello again, Justin.
>>
>>> On Wed, 13 May 2020 01:43:23 -0700
>>> Mark Millard wrote:
On Thu, 11 Jun 2020 14:36:37 -0700
Mark Millard wrote:
> On 2020-Jun-11, at 13:55, Justin Hibbits
> wrote:
>
> > On Wed, 10 Jun 2020 18:56:57 -0700
> > Mark Millard wrote:
> >
> >> On 2020-May-13, at 08:56, Justin Hibbits
> >> wrote:
> >>> Hi Mark,
> >>
> >> Hello Justin.
> >
> >
An update from my end: I now have the ability to test dual processor G4 as
well, now that mine is up and running.
On Thu, Jun 11, 2020, at 4:36 PM, Mark Millard wrote:
>
> How did you test?
>
> In my context it was far easier to see the problem
> with builds that did not use MALLOC_PRODUCTION.
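[Side note on reproducing that: a sketch of how one might keep jemalloc's assertions enabled
when building world; the exact src.conf knob spelling here is an assumption, so check
src.conf(5) for the branch in use:]
# /etc/src.conf (sketch only)
# Leave MALLOC_PRODUCTION off so jemalloc's assert()s stay compiled into libc:
WITHOUT_MALLOC_PRODUCTION=yes
[With MALLOC_PRODUCTION enabled instead, the same corruption apparently tends to surface only
as the SIGSEGVs reported earlier in these threads rather than as explicit failed assertions.]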
On Wed, 10 Jun 2020 18:56:57 -0700
Mark Millard wrote:
> On 2020-May-13, at 08:56, Justin Hibbits wrote:
>
> > Hi Mark,
>
> Hello Justin.
Hi Mark,
>
> > On Wed, 13 May 2020 01:43:23 -0700
> > Mark Millard wrote:
> >
> >> [I'm adding a reference to an old arm64/aarch64 bug that had
>
On 2020-May-13, at 08:56, Justin Hibbits wrote:
> Hi Mark,
Hello Justin.
> On Wed, 13 May 2020 01:43:23 -0700
> Mark Millard wrote:
>
>> [I'm adding a reference to an old arm64/aarch64 bug that had
>> pages turning to zero, in case this 32-bit powerpc issue is
>> somewhat analogous.]
>>
>
> > real memory = 1577857024 (1504 MB)
> > avail memory = 1527508992 (1456 MB)
> >
> > # stress -m 1 --vm-bytes 1792M
> > stress: info: [1024] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
> > :
> > /usr/src/contrib/jemalloc/include/jemalloc/internal/arena_inlines_b.h:258:
> > Failed assertion:
> PowerPC 7400 revision 2.9, 466.42 MHz
> cpu0: Features 9c00
> cpu0: HID0 8094c0a4
> real memory = 1577857024 (1504 MB)
> avail memory = 1527508992 (1456 MB)
>
> # stress -m 1 --vm-bytes 1792M
> stress: info: [1024] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
> :
> /usr/src/contrib/jema
PowerPC 7400 revision 2.9, 466.42 MHz
cpu0: Features 9c00
cpu0: HID0 8094c0a4
real memory = 1577857024 (1504 MB)
avail memory = 1527508992 (1456 MB)
# stress -m 1 --vm-bytes 1792M
stress: info: [1024] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
:
/usr/src/contrib/jemalloc/include/jemalloc
/usr/src/contrib/jemalloc/include/jemalloc/internal/arena_inlines_b.h:258:
Failed assertion: "slab == extent_slab_get(extent)"
:
/usr/src/contrib/jemalloc/include/jemalloc/internal/arena_inlines_b.h:258:
Failed assertion: "slab == extent_slab_get(extent)"
and eventually:
[1] S
[A new kind of experiment and partial results.]
Given the zero'ed memory page(s) that for some of
the example contexts include a page that should not
be changing after initialization in my context
(jemalloc global variables), I have attempted the
following for such examples:
A) Run gdb
B) Attach
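[A sketch only of what the gdb side of such an experiment could look like; the nfsd binary,
the process id, and the __je_sz_size2index_tab symbol are illustrative assumptions, not a
transcript from an actual failing session:]
# gdb /usr/sbin/nfsd
(gdb) attach 873
(gdb) print &__je_sz_size2index_tab
(gdb) x/16xw &__je_sz_size2index_tab
(gdb) detach
[The point being that such a table is written during jemalloc initialization and then only
read, so finding the surrounding page all-zero in a still-running process would point at the
page itself having been replaced/zeroed rather than at a stray store through a bad pointer.]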
>> #1 0x502b2170 in __raise (s=6) at /usr/src/lib/libc/gen/raise.c:52
>> #2 0x50211cc0 in abort () at /usr/src/lib/libc/stdlib/abort.c:67
>> #3 0x50206104 in sz_index2size_lookup (index=) at
>> /usr/src/contrib/jemalloc/include/jemalloc/internal/sz.h:200
>> #4 sz_index2size
bout the
>>> observed asserts for those below.
>>>
>>>
>>> sshd hit an assert, failing slab == extent_slab_get(extent) :
>>>
>>> (gdb) bt
>>> #0 thr_kill () at thr_kill.S:4
>>> #1 0x50927170 in __raise (s=6) at /usr/src/lib/li
>> #2 0x50886cc0 in abort () at /usr/src/lib/libc/stdlib/abort.c:67
>> #3 0x508834b0 in arena_dalloc (tsdn=, ptr=,
>> tcache=, alloc_ctx=, slow_path=)
>> at
>> /usr/src/contrib/jemalloc/include/jemalloc/internal/arena_inlines_b.h:315
>> #4 idalloctm (tsdn=0x500dd040, ptr=0x5008a
b/libc/gen/raise.c:52
> #2 0x50886cc0 in abort () at /usr/src/lib/libc/stdlib/abort.c:67
> #3 0x508834b0 in arena_dalloc (tsdn=, ptr=,
> tcache=, alloc_ctx=, slow_path=)
> at
> /usr/src/contrib/jemalloc/include/jemalloc/internal/arena_inlines_b.h:315
> #4 idalloctm (tsdn=0x
c/gen/raise.c:52
#2 0x50886cc0 in abort () at /usr/src/lib/libc/stdlib/abort.c:67
#3 0x508834b0 in arena_dalloc (tsdn=, ptr=,
tcache=, alloc_ctx=, slow_path=)
at /usr/src/contrib/jemalloc/include/jemalloc/internal/arena_inlines_b.h:315
#4 idalloctm (tsdn=0x500dd040, ptr=0x5008a180, tcache=0x500
[This report just shows an interesting rpcbind crash:
a pointer was filled with part of a string instead,
leading to a failed memory access attempt from the junk
address produced.]
Core was generated by `/usr/sbin/rpcbind'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x5024405c
[This report just shows some material for the
sendmail SIGSEGV's, based on truss output.]
I've returned to using the modern jemalloc because
it seems to show problems more, after having
caught the earlier reported dhclient example under
the older jemalloc context. (Again: jemalloc may
be exposing
Core was generated by `/usr/sbin/mountd -r'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x50235df0 in cache_bin_dalloc_easy (bin=,
bin_info=, ptr=0x50049160) at
/usr/src/contrib/jemalloc/include/jemalloc/internal/cache_bin.h:121
warning: Source file is more recent than executable.
121 if (unlike
_unset
> were apparently inlined).
>
> The chain for the example seems to be:
> fork_privchld -> dispatch_imsg -> jemalloc
>
> For reference . . .
>
> # gdb dhclient /dhclient.core
> GNU gdb (GDB) 9.1 [GDB v9.1 for FreeBSD]
> Copyright (C) 2020 Free Software Foun
Copyright (C) 2020 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
. . .
Reading symbols from dhclient...
Reading symbols from /usr/lib/debug//sbin/dhclient.debug...
[New LWP 100089]
Core was generated by `dhclient: gem0 [priv]'.
Program terminated with
On 2020-May-3, at 01:26, nonameless at ukr.net wrote:
> --- Original message ---
> From: "Mark Millard"
> Date: 3 May 2020, 04:47:14
>
>
>
>> [I'm only claiming the new jemalloc is involved and that
>> reverting avoids the problem.]
>>
>> I've been reporting to some lists problems with:
--- Original message ---
From: "Mark Millard"
Date: 3 May 2020, 17:38:14
>
>
> On 2020-May-3, at 01:26, nonameless at ukr.net wrote:
>
>
>
>
> > --- Original message ---
> > From: "Mark Millard"
> > Date: 3 May 2020, 04:47:14
> >
> >
> >
> >> [I'm only claiming the new jem
--- Original message ---
From: "Mark Millard"
Date: 3 May 2020, 04:47:14
> [I'm only claiming the new jemalloc is involved and that
> reverting avoids the problem.]
>
> I've been reporting to some lists problems with:
>
> dhclient
> sendmail
> rpcbind
> mountd
> nfsd
>
> getting SIG
[I'm only claiming the new jemalloc is involved and that
reverting avoids the problem.]
I've been reporting to some lists problems with:
dhclient
sendmail
rpcbind
mountd
nfsd
getting SIGSEGV (signal 11) crashes and some core
dumps on the old 2-socket (1 core per socket) 32-bit
PowerMac G4 runnin
On 04/21/12 02:23, Doug Barton wrote:
...
In libedit we have incomplete merges from upstream (that was
CVS's fault), we have some changes that are obsolete wrt how
upstream solved the same issues, and we have a couple of
files that have diverged completely from upstream.
I agree that sounds lik
On 04/20/2012 06:06 PM, Pedro Giffuni wrote:
> On 04/20/12 19:32, David O'Brien wrote:
>> On Fri, Apr 20, 2012 at 02:13:32PM -0700, Pedro Giffuni wrote:
>>> Easier said than done. Feel free to give libedit a try.
>> That has nothing to do with our process and everything to do with us
>> blindly hac
On 04/20/12 19:32, David O'Brien wrote:
On Fri, Apr 20, 2012 at 02:13:32PM -0700, Pedro Giffuni wrote:
Easier said than done. Feel free to give libedit a try.
That has nothing to do with our process and everything to do with us
blindly hacking away pissing all over to be our own thing -- BUT st
On Fri, Apr 20, 2012 at 02:13:32PM -0700, Pedro Giffuni wrote:
> Easier said than done. Feel free to give libedit a try.
That has nothing to do with our process and everything to do with us
blindly hacking away pissing all over to be our own thing -- BUT still
wanting to take work from the origina
On 04/20/2012 02:13 PM, Pedro Giffuni wrote:
>
>
> --- Ven 20/4/12, Doug Barton ha scritto:
> ...
>>
>> With due respect, if doing it the right way is too
>> difficult, the answer
>> is to ask for help rather than giving up. There are plenty
>> of us who are
>> experienced with doing this, and w
--- Ven 20/4/12, Doug Barton ha scritto:
...
>
> With due respect, if doing it the right way is too
> difficult, the answer
> is to ask for help rather than giving up. There are plenty
> of us who are
> experienced with doing this, and would be glad to assist.
>
> In the CVS era I agree that v
On 04/20/2012 11:18 AM, Pedro Giffuni wrote:
> FWIW,
>
> While the vendor branch is usually the cleanest way to merge
> updates, it is not always the best. I personally gave up on
> updating two packages from the vendor tree because it's just
> too much trouble.
With due respect, if doing it the
Hi;
--- Ven 20/4/12, Doug Barton ha scritto:
...
> >
> > The workflow I'm using is documented in the patch
> (contrib/jemalloc/FREEBSD-upgrade). Can you tell me
> how to achieve a similarly streamlined import flow with a
> vendor branch in the mix? Also, what histo
ies that
>> consume FreeBSD into their products (I speak for Juniper Networks in
>> this).
>>
>> Why do you feel they are [measurably] extra work with no benefit?
>
> The workflow I'm using is documented in the patch
> (contrib/jemalloc/FREEBSD-upgrade). Ca
grade rediff' step?
stdlib.h+malloc_np.h and jemalloc.h are different enough that they require
separate maintenance. Alas, not all programming can be automated; if
interfaces change, manual intervention is required.
> contrib/jemalloc/FREEBSD-upgrade doesn't describe the "commit
On Thu, Apr 12, 2012 at 01:19:56PM -0700, Jason Evans wrote:
> On Apr 12, 2012, at 11:41 AM, David O'Brien wrote:
> > On Wed, Apr 04, 2012 at 09:56:45PM -0700, Jason Evans wrote:
> >> I have the current version of jemalloc integrated into libc as
> >>
On Apr 12, 2012, at 11:41 AM, David O'Brien wrote:
> On Wed, Apr 04, 2012 at 09:56:45PM -0700, Jason Evans wrote:
>> I have the current version of jemalloc integrated into libc as
>> contrib/jemalloc:
>> http://people.freebsd.org/~jasone/patches/jemalloc_20120404b
On Wed, Apr 04, 2012 at 09:56:45PM -0700, Jason Evans wrote:
> I have the current version of jemalloc integrated into libc as
> contrib/jemalloc:
> http://people.freebsd.org/~jasone/patches/jemalloc_20120404b.patch
Looking at the latest patch
http://people.freebsd.org/~jason
On Apr 5, 2012, at 6:33 AM, John Baldwin wrote:
> On Thursday, April 05, 2012 12:56:45 am Jason Evans wrote:
>>
>> * Will the utrace feature be missed? I removed it some time ago, mainly
>> because traces are impossibly large for most real-world use cases.
>
> I will only speak to this one. I
On Apr 5, 2012, at 10:52 AM, Konstantin Belousov wrote:
> On Wed, Apr 04, 2012 at 09:56:45PM -0700, Jason Evans wrote:
>> I have the current version of jemalloc integrated into libc as
>> contrib/jemalloc:
>>
>> http://people.freebsd.org/~jasone/patches/jemalloc_
On Thu, Apr 05, 2012 at 11:55:48AM -0700, Jason Evans wrote:
> On Apr 5, 2012, at 10:52 AM, Konstantin Belousov wrote:
> > On Wed, Apr 04, 2012 at 09:56:45PM -0700, Jason Evans wrote:
> >> I have the current version of jemalloc integrated into libc as
> >> contrib/j
On Wed, Apr 04, 2012 at 09:56:45PM -0700, Jason Evans wrote:
> I have the current version of jemalloc integrated into libc as
> contrib/jemalloc:
>
> http://people.freebsd.org/~jasone/patches/jemalloc_20120404b.patch
> * Are the symbol versioning specifications rig
On Thursday, April 05, 2012 12:56:45 am Jason Evans wrote:
> I have the current version of jemalloc integrated into libc as
> contrib/jemalloc:
>
> http://people.freebsd.org/~jasone/patches/jemalloc_20120404b.patch
>
> This is the first update to FreeBSD's jemalloc
I have the current version of jemalloc integrated into libc as contrib/jemalloc:
http://people.freebsd.org/~jasone/patches/jemalloc_20120404b.patch
This is the first update to FreeBSD's jemalloc in over two years, and the
differences are huge (faster, better introspection, hope