On Fri, 11 Dec 2020 15:36:53 +0100 Eric Dumazet wrote:
> On Fri, Dec 11, 2020 at 12:24 PM SeongJae Park wrote:
> >
> > From: SeongJae Park
> >
> > On a few of our systems, I found that frequent 'unshare(CLONE_NEWNET)'
> > calls make the number of active slab objects, including those of the
> > 'sock_inode_cache' type, increase rapidly and continuously.
From: SeongJae Park
On a few of our systems, I found that frequent 'unshare(CLONE_NEWNET)'
calls make the number of active slab objects, including those of the
'sock_inode_cache' type, increase rapidly and continuously. As a result,
memory pressure occurs. In more detail, I made an artificial reproducer
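The reproducer itself is cut off in this archive view, but a minimal stand-in along the following lines (hypothetical, not the program from the original mail) exercises the same path: it churns network namespaces faster than the kernel's asynchronous cleanup can retire them, so caches such as 'sock_inode_cache' keep growing in /proc/slabinfo.

/* Hypothetical reproducer sketch, not the one from the original report.
 * Repeatedly create and drop network namespaces so that asynchronous
 * per-netns cleanup falls behind.  Needs CAP_SYS_ADMIN (run as root).
 * Watch the effect with e.g.: watch 'grep sock_inode_cache /proc/slabinfo'
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	for (long i = 0; i < 100000; i++) {
		/* Each call moves us into a fresh netns; the previous,
		 * now-unreferenced namespace is torn down asynchronously. */
		if (unshare(CLONE_NEWNET) != 0) {
			perror("unshare(CLONE_NEWNET)");
			exit(EXIT_FAILURE);
		}
	}
	return 0;
}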
On Fri, 11 Dec 2020 11:41:36 +0100 Eric Dumazet wrote:
> On Fri, Dec 11, 2020 at 11:33 AM SeongJae Park wrote:
> >
> > On Fri, 11 Dec 2020 09:43:41 +0100 Eric Dumazet wrote:
> >
> > > On Fri, Dec 11, 2020 at 9:21 AM SeongJae Park wrote:
> > > >
On Fri, 11 Dec 2020 09:43:41 +0100 Eric Dumazet wrote:
> On Fri, Dec 11, 2020 at 9:21 AM SeongJae Park wrote:
> >
> > From: SeongJae Park
> >
> > For each 'fqdir_exit()' call, a work for destruction of the 'fqdir' is
> > enqueued. The work function, 'fqdir_work_fn()', internally calls
> > 'rcu_barrier()'.
From: SeongJae Park
On a few of our systems, I found that frequent 'unshare(CLONE_NEWNET)'
calls make the number of active slab objects, including those of the
'sock_inode_cache' type, increase rapidly and continuously. As a result,
memory pressure occurs. In more detail, I made an artificial reproducer
From: SeongJae Park
For each 'fqdir_exit()' call, a work for destruction of the 'fqdir' is
enqueued. The work function, 'fqdir_work_fn()', internally calls
'rcu_barrier()'. In case of intensive 'fqdir_exit()' (e.g., frequent
'unshare()' system calls)
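The snippet is cut off here, but the pattern it describes can be sketched as follows. This is a simplified illustration modeled on net/ipv4/inet_fragment.c, not the verbatim upstream code; the '_sketch' names are placeholders of mine.

/* Simplified sketch of the pattern described above: every dying fqdir
 * queues its own destroy work, and every such work pays for a full
 * rcu_barrier() before it may free the object.
 */
#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct fqdir_sketch {
	struct work_struct destroy_work;
	/* per-netns fragment state, hash table of in-flight queues, ... */
};

static void fqdir_work_fn_sketch(struct work_struct *work)
{
	struct fqdir_sketch *fqdir =
		container_of(work, struct fqdir_sketch, destroy_work);

	/* Wait for all pending call_rcu() callbacks in the system, since
	 * some of them may still dereference this fqdir.  This is the
	 * expensive part: each queued destroy work waits for a full
	 * barrier of its own. */
	rcu_barrier();

	kfree(fqdir);
}

/* Called for every dying network namespace (e.g. after unshare()). */
void fqdir_exit_sketch(struct fqdir_sketch *fqdir)
{
	INIT_WORK(&fqdir->destroy_work, fqdir_work_fn_sketch);
	queue_work(system_wq, &fqdir->destroy_work);
}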
On Thu, 10 Dec 2020 15:09:10 +0100 Eric Dumazet wrote:
> On Thu, Dec 10, 2020 at 9:09 AM SeongJae Park wrote:
> >
> > From: SeongJae Park
> >
> > On a few of our systems, I found that frequent 'unshare(CLONE_NEWNET)'
> > calls make the number of active slab objects, including those of the
> > 'sock_inode_cache' type, increase rapidly and continuously.
From: SeongJae Park
In 'fqdir_exit()', a work for destruction of the 'fqdir' is enqueued. The
work function, 'fqdir_work_fn()', calls 'rcu_barrier()'. In case of
intensive 'fqdir_exit()' (e.g., frequent 'unshare()' system calls), this in
From: SeongJae Park
On a few of our systems, I found that frequent 'unshare(CLONE_NEWNET)'
calls make the number of active slab objects, including those of the
'sock_inode_cache' type, increase rapidly and continuously. As a result,
memory pressure occurs. In more detail, I made an artificial reproducer
On Thu, 10 Dec 2020 01:17:58 +0100 Eric Dumazet wrote:
>
>
> On 12/8/20 10:45 AM, SeongJae Park wrote:
> > From: SeongJae Park
> >
> > In 'fqdir_exit()', a work for destruction of the 'fqdir' is enqueued.
> > The work function, 'fqdir_work_fn()', calls 'rcu_barrier()'.
On Wed, 9 Dec 2020 15:16:59 -0800 Jakub Kicinski wrote:
> On Tue, 8 Dec 2020 10:45:29 +0100 SeongJae Park wrote:
> > From: SeongJae Park
> >
> > In 'fqdir_exit()', a work for destruction of the 'fqdir' is enqueued.
> > The work function, 'fqdir_work_fn()', calls 'rcu_barrier()'.
From: SeongJae Park
In 'fqdir_exit()', a work for destruction of the 'fqdir' is enqueued.
The work function, 'fqdir_work_fn()', calls 'rcu_barrier()'. In case of
intensive 'fqdir_exit()' (e.g., frequent 'unshare(CLONE_NEWNET)'
system calls)
From: SeongJae Park
On a few of our systems, I found that frequent 'unshare(CLONE_NEWNET)'
calls make the number of active slab objects, including those of the
'sock_inode_cache' type, increase rapidly and continuously. As a result,
memory pressure occurs.
'cleanup_net()' and '
On Wed, 6 May 2020 07:41:51 -0700 "Paul E. McKenney" wrote:
> On Wed, May 06, 2020 at 02:59:26PM +0200, SeongJae Park wrote:
> > TL;DR: It was not the kernel's fault, but the benchmark program's.
> >
> > So, the problem is reproducible using lebench[1] only.
[2]
https://github.com/LinuxPerfStudy/LEBench/blob/master/TEST_DIR/OS_Eval.c#L820
[3]
https://github.com/LinuxPerfStudy/LEBench/blob/master/TEST_DIR/OS_Eval.c#L822
Thanks,
SeongJae Park
> s magnitude and scope you must do an
> allmodconfig build.
Definitely my fault. I will fix this in the next spin.
Thanks,
SeongJae Park
>
> Thank you.
On Tue, 5 May 2020 11:27:20 -0700 "Paul E. McKenney" wrote:
> On Tue, May 05, 2020 at 07:49:43PM +0200, SeongJae Park wrote:
> > On Tue, 5 May 2020 10:23:58 -0700 "Paul E. McKenney"
> > wrote:
> >
> > > On Tue, May 05, 2020 at 09:25:06AM -0700, Eric Dumazet wrote:
On Tue, 5 May 2020 11:17:07 -0700 "Paul E. McKenney" wrote:
> On Tue, May 05, 2020 at 07:56:05PM +0200, SeongJae Park wrote:
> > On Tue, 5 May 2020 10:30:36 -0700 "Paul E. McKenney"
> > wrote:
> >
> > > On Tue, May 05, 2020 at 07:05:53PM +0200, SeongJae Park wrote:
On Tue, 5 May 2020 10:28:50 -0700 "Paul E. McKenney" wrote:
> On Tue, May 05, 2020 at 09:37:42AM -0700, Eric Dumazet wrote:
> >
> >
> > On 5/5/20 9:31 AM, Eric Dumazet wrote:
> > >
> > >
> > > On 5/5/20 9:25 AM, Eric Dumazet wrote:
&
On Tue, 5 May 2020 10:30:36 -0700 "Paul E. McKenney" wrote:
> On Tue, May 05, 2020 at 07:05:53PM +0200, SeongJae Park wrote:
> > On Tue, 5 May 2020 09:37:42 -0700 Eric Dumazet
> > wrote:
> >
> > >
> > >
> > > On 5/5/20 9:31 AM, Eric Dumazet wrote:
On Tue, 5 May 2020 10:23:58 -0700 "Paul E. McKenney" wrote:
> On Tue, May 05, 2020 at 09:25:06AM -0700, Eric Dumazet wrote:
> >
> >
> > On 5/5/20 9:13 AM, SeongJae Park wrote:
> > > On Tue, 5 May 2020 09:00:44 -0700 Eric Dumazet
> > > wrote:
On Tue, 5 May 2020 09:37:42 -0700 Eric Dumazet wrote:
>
>
> On 5/5/20 9:31 AM, Eric Dumazet wrote:
> >
> >
> > On 5/5/20 9:25 AM, Eric Dumazet wrote:
> >>
> >>
> >> On 5/5/20 9:13 AM, SeongJae Park wrote:
> >>> On Tue, 5 May 2020 09:00:44 -0700 Eric Dumazet wrote:
On Tue, 5 May 2020 09:00:44 -0700 Eric Dumazet wrote:
> On Tue, May 5, 2020 at 8:47 AM SeongJae Park wrote:
> >
> > On Tue, 5 May 2020 08:20:50 -0700 Eric Dumazet
> > wrote:
> >
> > >
> > >
> > > On 5/5/20 8:07 AM, SeongJae Park wrote:
On Tue, 5 May 2020 08:20:50 -0700 Eric Dumazet wrote:
>
>
> On 5/5/20 8:07 AM, SeongJae Park wrote:
> > On Tue, 5 May 2020 07:53:39 -0700 Eric Dumazet wrote:
> >
>
> >> Why do we have 10,000,000 objects around ? Could this be because of
> >> some
On Tue, 5 May 2020 07:53:39 -0700 Eric Dumazet wrote:
> On Tue, May 5, 2020 at 4:54 AM SeongJae Park wrote:
> >
> > CC-ing stable@vger.kernel.org and adding some more explanations.
> >
> > On Tue, 5 May 2020 10:10:33 +0200 SeongJae Park wrote:
> >
> >
On Tue, 5 May 2020 13:44:42 +0100 Al Viro wrote:
> On Tue, May 05, 2020 at 09:28:39AM +0200, SeongJae Park wrote:
CC-ing stable@vger.kernel.org and adding some more explanations.
On Tue, 5 May 2020 10:10:33 +0200 SeongJae Park wrote:
> From: SeongJae Park
>
> The commit 6d7855c54e1e ("sockfs: switch to ->free_inode()") made the
> deallocation of 'socket_alloc' be done asynchronously using RCU, as is
> done for 'sock.wq'.
From: SeongJae Park
This reverts commit 6d7855c54e1e269275d7c504f8f62a0b7a5b3f18.
The commit 6d7855c54e1e ("sockfs: switch to ->free_inode()") made the
deallocation of 'socket_alloc' be done asynchronously using RCU, as is
done for 'sock.wq'.
The change made
From: SeongJae Park
This reverts commit 333f7909a8573145811c4ab7d8c9092301707721.
The commit 6d7855c54e1e ("sockfs: switch to ->free_inode()") made the
deallocation of 'socket_alloc' be done asynchronously using RCU, as is
done for 'sock.wq'. And the following
From: SeongJae Park
The commit 6d7855c54e1e ("sockfs: switch to ->free_inode()") made the
deallocation of 'socket_alloc' be done asynchronously using RCU, as is
done for 'sock.wq'. And the following commit 333f7909a857 ("coallocate
socket_wq with socket
On Tue, 5 May 2020 09:45:35 +0200 Greg KH wrote:
> On Tue, May 05, 2020 at 09:28:41AM +0200, SeongJae Park wrote:
> > From: SeongJae Park
> >
> > This reverts commit 6d7855c54e1e269275d7c504f8f62a0b7a5b3f18.
> >
> > The commit 6d7855c54e1e ("sockfs: switch to ->free_inode()") made the
On Tue, 5 May 2020 09:45:11 +0200 Greg KH wrote:
> On Tue, May 05, 2020 at 09:28:40AM +0200, SeongJae Park wrote:
> > From: SeongJae Park
> >
> > This reverts commit 333f7909a8573145811c4ab7d8c9092301707721.
> >
> > The commit 6d7855c54e1e ("sockfs: switch to ->free_inode()") made the
From: SeongJae Park
This reverts commit 333f7909a8573145811c4ab7d8c9092301707721.
The commit 6d7855c54e1e ("sockfs: switch to ->free_inode()") made the
deallocation of 'socket_alloc' be done asynchronously using RCU, as is
done for 'sock.wq'. And the following
From: SeongJae Park
This reverts commit 6d7855c54e1e269275d7c504f8f62a0b7a5b3f18.
The commit 6d7855c54e1e ("sockfs: switch to ->free_inode()") made the
deallocation of 'socket_alloc' be done asynchronously using RCU, as is
done for 'sock.wq'.
The change made
From: SeongJae Park
The commit 6d7855c54e1e ("sockfs: switch to ->free_inode()") made the
deallocation of 'socket_alloc' be done asynchronously using RCU, as is
done for 'sock.wq'. And the following commit 333f7909a857 ("coallocate
socket_wq with socket
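For readers following the revert discussion: with '->free_inode()', the VFS invokes the callback from an RCU callback rather than synchronously, which is why the deallocation became asynchronous. Below is a rough before/after sketch; the '_sketch' names are placeholders and the logic is simplified from fs/inode.c and net/socket.c rather than copied from them.

/* Rough illustration of the difference being discussed.  Note that the
 * freeing code itself is the same in both cases; what changes is *when*
 * the VFS calls it.
 */
#include <linux/fs.h>
#include <linux/slab.h>
#include <net/sock.h>

static struct kmem_cache *sock_inode_cachep_sketch;

/* Old scheme: ->destroy_inode() runs synchronously when the inode dies,
 * so the 'socket_alloc' object returns to its slab cache immediately. */
static void sock_destroy_inode_sketch(struct inode *inode)
{
	kmem_cache_free(sock_inode_cachep_sketch,
			container_of(inode, struct socket_alloc, vfs_inode));
}

/* New scheme: ->free_inode() is invoked by the VFS from an RCU callback,
 * so the same kmem_cache_free() only happens after a grace period.  Under
 * a flood of short-lived sockets, 'sock_inode_cache' objects can pile up
 * until RCU catches up. */
static void sock_free_inode_sketch(struct inode *inode)
{
	kmem_cache_free(sock_inode_cachep_sketch,
			container_of(inode, struct socket_alloc, vfs_inode));
}

static const struct super_operations sockfs_ops_old_sketch = {
	.destroy_inode	= sock_destroy_inode_sketch,	/* synchronous free */
};

static const struct super_operations sockfs_ops_new_sketch = {
	.free_inode	= sock_free_inode_sketch,	/* freed after RCU GP */
};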
and using
LDLIBS instead of LDFLAGS.
Signed-off-by: SeongJae Park
---
tools/testing/selftests/memfd/Makefile | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/memfd/Makefile
b/tools/testing/selftests/memfd/Makefile
index 79891d033de1..ad8a0897e47f 100644
--- a/tools/testing/selftests/memfd/Makefile
because -lm was passed via LDFLAGS and make's implicit link rule places
LDFLAGS before the source file. This commit fixes the problem by using
LDLIBS instead of LDFLAGS.
Signed-off-by: SeongJae Park
---
tools/testing/selftests/intel_pstate/Makefile | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/testing/selftests/intel_pstate/Makefile b/tools/testing/selftests/intel_pstate/Makefile
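For readers unfamiliar with the distinction: GNU make's built-in link rule expands to roughly '$(CC) $(CFLAGS) $(CPPFLAGS) $(LDFLAGS) prog.c $(LDLIBS) -o prog', so a library passed through LDFLAGS lands before the source file and can be discarded by the linker, while LDLIBS lands after it. A hypothetical Makefile fragment (not the upstream selftest Makefile; only the 'aperf' program name comes from the patch title) illustrating the fix:

# Hypothetical fragment, not the actual intel_pstate selftest Makefile.
# GNU make links single-file programs with roughly:
#   $(CC) $(CFLAGS) $(CPPFLAGS) $(LDFLAGS) prog.c $(LDLIBS) -o prog
# so -lm must come from LDLIBS (after the source), not LDFLAGS (before it).

CFLAGS += -Wall

# Broken: the linker sees -lm before any object that needs it.
#LDFLAGS += -lm

# Fixed: -lm is appended after the source/objects.
LDLIBS += -lm

all: aperf

clean:
	rm -f aperf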
After commit a8ba798bc8ec ("selftests: enable O and KBUILD_OUTPUT"), the
net selftest build fails because it refers to an output file without the
$(OUTPUT) prefix. This commit fixes the error.
Signed-off-by: SeongJae Park
Fixes: a8ba798bc8ec ("selftests: enable O and KBUILD_OUTPUT")
---
tools/testing/selftests/net/Makefile
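For context on the $(OUTPUT) change: once O=/KBUILD_OUTPUT is honored, kselftest's lib.mk builds binaries under $(OUTPUT), so any explicit rule has to name its products with that prefix. A hypothetical fragment follows; the file and library names are illustrative, not taken from the actual net Makefile.

# Hypothetical fragment, not the actual tools/testing/selftests/net/Makefile.
CFLAGS += -Wall

# Broken: drops the binary in the source tree instead of $(OUTPUT).
#reuseport_bpf_numa: reuseport_bpf_numa.c
#	$(CC) $(CFLAGS) -o reuseport_bpf_numa $^ -lnuma

# Fixed: both the target and the -o path live under $(OUTPUT).
$(OUTPUT)/reuseport_bpf_numa: reuseport_bpf_numa.c
	$(CC) $(CFLAGS) -o $@ $^ -lnuma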
This patchset fixes build errors in the selftests.
SeongJae Park (3):
selftest/memfd/Makefile: Fix build error
selftest/intel_pstate/aperf: Use LDLIBS instead of LDFLAGS
selftest/net/Makefile: Specify output with $(OUTPUT)
tools/testing/selftests/intel_pstate/Makefile | 2 +-
tools/testing