reset handling")
Signed-off-by: Dany Madden
Reviewed-by: Rick Lindsley
On Fri, 2019-05-17 at 14:01 -0700, Rick Edgecombe wrote:
> Meelis Roos reported issues with the new VM_FLUSH_RESET_PERMS flag on
> the
> sparc architecture.
>
Argh, this patch is not correct in the flush range for non-x86. I'll
send a revision.
en me if this
sounds reasonable at all I would really appreciate it.
Thanks,
Rick
[1] https://patchwork.ozlabs.org/patch/376523/
[2] https://patchwork.ozlabs.org/patch/687780/
like sparc
that already have normal kernel memory as executable. This patch fixes the usage
of this flag on sparc, in case the root cause is also an issue on other
architectures. Separately, we can disable usage of VM_FLUSH_RESET_PERMS for
these architectures if desired.
Rick
rmsissions")
Reported-by: Meelis Roos
Cc: Meelis Roos
Cc: Peter Zijlstra
Cc: "David S. Miller"
Cc: Dave Hansen
Cc: Borislav Petkov
Cc: Andy Lutomirski
Cc: Ingo Molnar
Cc: Nadav Amit
Signed-off-by: Rick Edgecombe
---
mm/vmalloc.c | 23 +--
1 file chan
On Mon, 2019-05-13 at 10:01 -0700, Rick Edgecombe wrote:
> On Mon, 2019-05-13 at 17:01 +0300, Meelis Roos wrote:
> > I tested yesterdays 5.2 devel git and it failed to boot on my Sun Fire V445
> > (4x UltraSparc III). Init is started and it hangs there:
> >
> > [ 3
commit:
>
> d53d2f78ceadba081fc7785570798c3c8d50a718 is the first bad commit
> commit d53d2f78ceadba081fc7785570798c3c8d50a718
> Author: Rick Edgecombe
> Date: Thu Apr 25 17:11:38 2019 -0700
>
> bpf: Use vmalloc special flag
>
> Use new flag VM_FLUSH_RESET_PERMS
On Wed, 2019-02-06 at 00:35 +0000, Alexei Starovoitov wrote:
> On 2/5/19 2:50 PM, Rick Edgecombe wrote:
> > This introduces a new capability for BPF program JITs to be located in
> > vmalloc
> > space on x86_64. This can serve as a backup area for
> > CONFIG_
g some text in vmalloc so that calls can be in relative jump
range. For example, a BPF library program could maybe be re-mapped multiple
times so that a copy is always near the caller and so we could use the faster
calls.
Rick Edgecombe (4):
bpf, x64: Implement BPF call retpoline
bpf,
insertion would fail, or
BPF would fallback to the interpreter.
In the case of using vmalloc, it is not charged against bpf_jit_limit.
Cc: Daniel Borkmann
Cc: Alexei Starovoitov
Signed-off-by: Rick Edgecombe
---
arch/x86/net/bpf_jit_comp.c | 32
1 file changed, 32
, however the allocation
may be larger at the end when using retpoline due to the thunk emitted at
the end.
Cc: Daniel Borkmann
Cc: Alexei Starovoitov
Signed-off-by: Rick Edgecombe
---
arch/x86/net/bpf_jit_comp.c | 117 +---
1 file changed, 94 insertions(+), 23
: Rick Edgecombe
---
include/linux/filter.h | 3 +++
kernel/bpf/core.c | 20 +++-
2 files changed, 14 insertions(+), 9 deletions(-)
diff --git a/include/linux/filter.h b/include/linux/filter.h
index ad106d845b22..33c0ae5990e1 100644
--- a/include/linux/filter.h
+++ b/include
Add x86 call retpoline sequence from the "Intel Retpoline: A Branch Target
Injection Mitigation White Paper" for BPF JIT compiler. Unlike the paper
it uses RBX instead of RAX since RAX is part of the BPF calling
conventions.
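For reference, the retpoline indirect-call sequence from the whitepaper looks roughly like this when adapted to take the target in RBX (a sketch of the pattern, not the exact bytes the JIT emits):

```
	call	1f		/* push the address of the landing pad	   */
2:	pause			/* speculation trap: a mispredicted	   */
	lfence			/* return spins harmlessly here		   */
	jmp	2b
1:	mov	%rbx, (%rsp)	/* overwrite return address with target	   */
	ret			/* "return" jumps to the target in %rbx	   */
```

The predictor sees the `ret` and speculates to the pause/lfence loop, never to an attacker-trained target; the architectural path always lands on the address written from %rbx.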
Cc: Daniel Borkmann
Cc: Alexei Starovoitov
Signed-of
On Mon, 2018-12-17 at 05:41 +0100, Jessica Yu wrote:
> +++ Edgecombe, Rick P [12/12/18 23:05 +0000]:
> > On Wed, 2018-11-28 at 01:40 +0000, Edgecombe, Rick P wrote:
> > > On Tue, 2018-11-27 at 11:21 +0100, Daniel Borkmann wrote:
> > > > On 11/27/2018 01:19 AM, Edgecom
On Sat, 2018-12-15 at 10:52 -0800, Andy Lutomirski wrote:
> On Wed, Dec 12, 2018 at 2:01 PM Edgecombe, Rick P
> wrote:
> >
> > On Wed, 2018-12-12 at 11:57 -0800, Andy Lutomirski wrote:
> > > On Wed, Dec 12, 2018 at 11:50 AM Edgecombe, Rick P
> > > wrote:
>
On Thu, 2018-12-13 at 19:27 +0000, Nadav Amit wrote:
> > On Dec 13, 2018, at 11:02 AM, Edgecombe, Rick P
> > wrote:
> >
> > On Wed, 2018-12-12 at 23:40 +0000, Nadav Amit wrote:
> > > > On Dec 11, 2018, at 4:03 PM, Rick Edgecombe
> > > > wrote:
>
On Wed, 2018-12-12 at 23:40 +0000, Nadav Amit wrote:
> > On Dec 11, 2018, at 4:03 PM, Rick Edgecombe
> > wrote:
> >
> > Add new flags for handling freeing of special permissioned memory in
> > vmalloc,
> > and remove places where the handling was done in modu
On Wed, 2018-11-28 at 01:40 +0000, Edgecombe, Rick P wrote:
> On Tue, 2018-11-27 at 11:21 +0100, Daniel Borkmann wrote:
> > On 11/27/2018 01:19 AM, Edgecombe, Rick P wrote:
> > > On Mon, 2018-11-26 at 16:36 +0100, Jessica Yu wrote:
> > > > +++ Rick E
On Wed, 2018-12-12 at 11:57 -0800, Andy Lutomirski wrote:
> On Wed, Dec 12, 2018 at 11:50 AM Edgecombe, Rick P
> wrote:
> >
> > On Tue, 2018-12-11 at 18:20 -0800, Andy Lutomirski wrote:
> > > On Tue, Dec 11, 2018 at 4:12 PM Rick Edgecombe
> > > wrote:
>
On Wed, 2018-12-12 at 06:30 +0000, Nadav Amit wrote:
> > On Dec 11, 2018, at 4:03 PM, Rick Edgecombe
> > wrote:
> >
> > This adds a more efficient x86 architecture specific implementation of
> > arch_vunmap, that can free any type of special permission memory
On Tue, 2018-12-11 at 18:24 -0800, Andy Lutomirski wrote:
> On Tue, Dec 11, 2018 at 4:12 PM Rick Edgecombe
> wrote:
> >
> > This adds a more efficient x86 architecture specific implementation of
> > arch_vunmap, that can free any type of special permission memory with
On Tue, 2018-12-11 at 18:20 -0800, Andy Lutomirski wrote:
> On Tue, Dec 11, 2018 at 4:12 PM Rick Edgecombe
> wrote:
> >
> > This adds two new flags VM_IMMEDIATE_UNMAP and VM_HAS_SPECIAL_PERMS, for
> > enabling vfree operations to immediately clear executable TLB entrie
better communicate
their different (non-flushing) behavior from the rest of the set_pages_*
functions.
The method for doing this with only 1 TLB flush was suggested by Andy
Lutomirski.
Suggested-by: Andy Lutomirski
Signed-off-by: Rick Edgecombe
---
arch/x86/include/asm/set_memory.h | 2 +
arch
's next version of his
patchset
Changes since v1:
- New efficient algorithm on x86 for tearing down executable RO memory and
flag for this (Andy Lutomirski)
- Have no W^X violating window on tear down (Nadav Amit)
Rick Edgecombe (4):
vmalloc: New flags for safe vfree on special perm
This switches to use the new vmalloc flags to control freeing memory with
special permissions.
Signed-off-by: Rick Edgecombe
---
include/linux/filter.h | 26 --
kernel/bpf/core.c | 1 -
2 files changed, 12 insertions(+), 15 deletions(-)
diff --git a/include/linux
Add new flags for handling freeing of special permissioned memory in vmalloc,
and remove places where the handling was done in module.c.
This will enable this flag for all architectures.
Signed-off-by: Rick Edgecombe
---
kernel/module.c | 43 ---
1 file
Suggested-by: Andy Lutomirski
Suggested-by: Will Deacon
Signed-off-by: Rick Edgecombe
---
include/linux/vmalloc.h | 2 ++
mm/vmalloc.c | 73 +
2 files changed, 69 insertions(+), 6 deletions(-)
diff --git a/include/linux/vmalloc.h b/include/linux
oc() allocates memory the normal way, then, later on, we
> > > > call some function that, all at once, removes the memory from the
> > > > direct map and applies the right permissions to the vmalloc alias (or
> > > > just makes the vmalloc alias not-present so we can
not-present so we can add permissions
> later without flushing), and flushes the TLB. And we arrange for
> vunmap to zap the vmalloc range, then put the memory back into the
> direct map, then free the pages back to the page allocator, with the
> flush in the appropriate place.
>
>
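In kernel-style C pseudocode, the lifecycle Andy describes would look something like this (a sketch only; the helper names such as set_direct_map_invalid() are illustrative stand-ins, not necessarily the exact functions that ended up upstream):

```
/* allocation side (sketch) */
p = vmalloc(size);                    /* allocate the normal way           */
write_code(p);
set_memory_ro(p, npages);             /* perms on the vmalloc alias...     */
set_memory_x(p, npages);
set_direct_map_invalid(pages);        /* ...remove from the direct map...  */
flush_tlb_kernel_range(p, p + size);  /* ...all with one TLB flush         */

/* free side (sketch) */
vunmap_range(p, p + size);            /* zap the vmalloc range             */
set_direct_map_default(pages);        /* put memory back in the direct map */
flush_tlb_kernel_range(p, p + size);  /* flush in the appropriate place    */
free_pages(pages);                    /* then back to the page allocator   */
```

The point of the ordering is that no window exists where the memory is simultaneously writable and executable, and stale executable TLB entries are gone before the pages can be reused.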
On Tue, 2018-12-04 at 16:53 -0800, Nadav Amit wrote:
> > On Dec 4, 2018, at 4:29 PM, Edgecombe, Rick P
> > wrote:
> >
> > On Tue, 2018-12-04 at 16:01 -0800, Nadav Amit wrote:
> > > > On Dec 4, 2018, at 3:51 PM, Edgecombe, Rick P <
> >
> On Mon, Dec 3, 2018 at 5:43 PM Nadav Amit wrote:
> > > > > > On Nov 27, 2018, at 4:07 PM, Rick Edgecombe <
> > > > > > rick.p.edgeco...@intel.com> wrote:
> > > > > >
> > > > > > Since vfree will lazily flush the TL
On Tue, 2018-12-04 at 16:01 -0800, Nadav Amit wrote:
> > On Dec 4, 2018, at 3:51 PM, Edgecombe, Rick P
> > wrote:
> >
> > On Tue, 2018-12-04 at 12:36 -0800, Nadav Amit wrote:
> > > > On Dec 4, 2018, at 12:02 PM, Edgecombe, Rick P <
> >
On Tue, 2018-12-04 at 12:09 -0800, Andy Lutomirski wrote:
> On Tue, Dec 4, 2018 at 12:02 PM Edgecombe, Rick P
> wrote:
> >
> > On Tue, 2018-12-04 at 16:03 +0000, Will Deacon wrote:
> > > On Mon, Dec 03, 2018 at 05:43:11PM -0800, Nadav Amit wrote:
> > > &
On Tue, 2018-12-04 at 16:03 +0000, Will Deacon wrote:
> On Mon, Dec 03, 2018 at 05:43:11PM -0800, Nadav Amit wrote:
> > > On Nov 27, 2018, at 4:07 PM, Rick Edgecombe
> > > wrote:
> > >
> > > Since vfree will lazily flush the TLB, but not lazily free the
It looks like this new flag is in linux-next now. As I am reading it, these
architectures have a module_alloc that uses some sort of executable flag and
do not use the default module_alloc (which is already covered), so they may
need it plugged in:
arm
arm64
parisc
s390
unicore32
Thanks,
Rick
On Thu, 2018-11-29 at 23:06 +0900, Masami Hiramatsu wrote:
> On Tue, 27 Nov 2018 16:07:52 -0800
> Rick Edgecombe wrote:
>
> > Sometimes when memory is freed via the module subsystem, an executable
> > permissioned TLB entry can remain to a freed page. If the page is r
On Wed, 2018-11-28 at 17:40 -0800, Andy Lutomirski wrote:
> > On Nov 27, 2018, at 4:07 PM, Rick Edgecombe <
> > rick.p.edgeco...@intel.com> wrote:
> >
> > Change the module allocations to flush before freeing the pages.
> >
> > Signed-off-by: Rick Edgec
On Wed, 2018-11-28 at 15:11 -0800, Andrew Morton wrote:
> On Tue, 27 Nov 2018 16:07:54 -0800 Rick Edgecombe
> wrote:
>
> > Change the module allocations to flush before freeing the pages.
> >
> > ...
> >
> > --- a/arch/x86/kernel/module.c
> > +++
On Tue, 2018-11-27 at 11:21 +0100, Daniel Borkmann wrote:
> On 11/27/2018 01:19 AM, Edgecombe, Rick P wrote:
> > On Mon, 2018-11-26 at 16:36 +0100, Jessica Yu wrote:
> > > +++ Rick Edgecombe [20/11/18 15:23 -0800]:
> >
> > [snip]
> > > Hi Rick!
> > >
freeing the pages.
If this solution seems good I can plug the flag in for other architectures that
define PAGE_KERNEL_EXEC.
Rick Edgecombe (2):
vmalloc: New flag for flush before releasing pages
x86/modules: Make x86 allocs to flush when free
arch/x86/kernel/module.c | 4 ++--
include/linux
.
Suggested-by: Dave Hansen
Suggested-by: Andy Lutomirski
Suggested-by: Will Deacon
Signed-off-by: Rick Edgecombe
---
include/linux/vmalloc.h | 1 +
mm/vmalloc.c | 13 +++--
2 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/include/linux/vmalloc.h b/include
Change the module allocations to flush before freeing the pages.
Signed-off-by: Rick Edgecombe
---
arch/x86/kernel/module.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index b052e883dd8c..1694daf256b3 100644
--- a
lls into __weak
> functions to allow them to be overridden in arch code.
>
> Signed-off-by: Ard Biesheuvel
> ---
It looks like some of the architectures call module_alloc directly in their
bpf_jit_compile implementations as well.
Rick
d by Linus Walleij
Rick Chen (3):
clocksource/drivers/atcpit100: Add andestech atcpit100 timer
clocksource/drivers/atcpit100: VDSO support
dt-bindings: timer: Add andestech atcpit100 timer binding doc
.../bindings/timer/andestech,atcpit100-timer.txt | 33 +++
drivers/clocksour
The VDSO needs a real-time cycle count to ensure time accuracy.
Unlike other architectures, nds32 does not define a clock source,
so the VDSO relies on the atcpit100 to provide the real-time cycle
count needed to derive the correct time.
Signed-off-by: Vincent Chen
Signed-off-by: Rick Chen
Signed-off-by: Greentime Hu
Add a document to describe Andestech atcpit100 timer and
binding information.
Signed-off-by: Rick Chen
Signed-off-by: Greentime Hu
Acked-by: Rob Herring
---
.../bindings/timer/andestech,atcpit100-timer.txt | 33 ++
1 file changed, 33 insertions(+)
create mode 100644
restart again.
It also sets channel 0's 32-bit timer0 as the clock event device and
counts downwards until the compare condition matches, generating an
interrupt for periodic tick handling.
Signed-off-by: Rick Chen
Signed-off-by: Greentime Hu
Reviewed-by: Linus Walleij
---
drivers/clocksource/Kconfig | 7
> Forbidden
>
> You don't have permission to access /lists/kernel/ on this server.
> Apache/2.4.6 (CentOS) Server at www.spinics.net Port 443
I'm moving soon so I was working on DNS and replaced an IP address
that I shouldn't have. Should be working now.
. As one approaches the wire limit for
bitrate, the likes of a netperf service demand can be used to
demonstrate the performance change - though there isn't an easy way to
do that for parallel flows.
happy benchmarking,
rick jones
performance improved?
happy benchmarking,
rick jones
sane defaults. For example, the issues
we've seen with VMs sending traffic getting reordered when the driver
took it upon itself to enable xps.
rick jones
On 02/03/2017 10:22 AM, Benjamin Serebrin wrote:
Thanks, Michael, I'll put this text in the commit log:
XPS settings aren't write-able from userspace, so the only way I know
to fix XPS is in the driver.
??
root@np-cp1-c0-m1-mgmt:/home/stack# cat
/sys/devices/pci0000:00/0000:00:02.0/0000:04:0
On 01/17/2017 11:13 AM, Eric Dumazet wrote:
On Tue, Jan 17, 2017 at 11:04 AM, Rick Jones wrote:
Drifting a bit, and it doesn't change the value of dealing with it, but out
of curiosity, when you say mostly in CLOSE_WAIT, why aren't the server-side
applications reacting to the read
AIT, why aren't the
server-side applications reacting to the read return of zero triggered
by the arrival of the FIN?
happy benchmarking,
rick jones
rrors.
Straight-up defaults with netperf, or do you use specific -s/S or -m/M
options?
happy benchmarking,
rick jones
tionally, even under no stress at
all, you really should complain then.
Isn't that behaviour based (in part?) on the observation/belief that it
is fewer cycles to copy the small packet into a small buffer than to
send the larger buffer up the stack and have to allocate and map a
replacement?
rick jones
- (2 * VLAN_HLEN) which this patch is
doing. It will be useful in the next patch which allows
XDP program to extend the packet by adding new header(s).
Is mlx4 the only driver doing page-per-packet?
rick jones
gives a feel for by how much this alternative mechanism would
have to reduce path-length to maintain the CPU overhead, were the
mechanism to preclude GRO.
rick
On 12/01/2016 12:18 PM, Tom Herbert wrote:
On Thu, Dec 1, 2016 at 11:48 AM, Rick Jones wrote:
Just how much per-packet path-length are you thinking will go away under the
likes of TXDP? It is admittedly "just" netperf but losing TSO/GSO does some
non-trivial things to effectiv
even if one does have the CPU cycles to burn so to speak, the effect
on power consumption needs to be included in the calculus.
happy benchmarking,
rick jones
1024
So I'm not sure what might be going-on there.
You can get netperf to use write() instead of send() by adding a
test-specific -I option.
happy benchmarking,
rick
My udp_flood tool[1] cycle through the different syscalls:
taskset -c 2 ~/git/network-testing/src/udp_flood 198.18.50.1
On 11/28/2016 10:33 AM, Rick Jones wrote:
On 11/17/2016 12:16 AM, Jesper Dangaard Brouer wrote:
time to try IP_MTU_DISCOVER ;)
To Rick, maybe you can find a good solution or option with Eric's hint,
to send appropriate sized UDP packets with Don't Fragment (DF).
Jesper -
Top of t
On 11/17/2016 12:16 AM, Jesper Dangaard Brouer wrote:
time to try IP_MTU_DISCOVER ;)
To Rick, maybe you can find a good solution or option with Eric's hint,
to send appropriate sized UDP packets with Don't Fragment (DF).
Jesper -
Top of trunk has a change adding an omni, test-s
On 11/17/2016 04:37 PM, Julian Anastasov wrote:
On Thu, 17 Nov 2016, Rick Jones wrote:
raj@tardy:~/netperf2_trunk$ strace -v -o /tmp/netperf.strace src/netperf -F
src/nettest_omni.c -t UDP_STREAM -l 1 -- -m 1472
...
socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP) = 4
getsockopt(4, SOL_SOCKET
tf(where,\n\t\ttput_fmt_1_l"..., 1472, 0,
{sa_family=AF_INET, sin_port=htons(58088),
sin_addr=inet_addr("127.0.0.1")}, 16) = 1472
Of course, it will continue to send the same messages from the send_ring
over and over instead of putting different data into the buffers each
time, but if one has a sufficiently large -W option specified...
happy benchmarking,
rick jones
On 11/17/2016 12:16 AM, Jesper Dangaard Brouer wrote:
time to try IP_MTU_DISCOVER ;)
To Rick, maybe you can find a good solution or option with Eric's hint,
to send appropriate sized UDP packets with Don't Fragment (DF).
Well, I suppose adding another setsockopt() to the data socke
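Such a setsockopt() on the data socket might look like the following Python sketch; the numeric constants are the Linux &lt;linux/in.h&gt; values, defined explicitly here since not every Python build exports them:

```python
import socket

# Linux values from <linux/in.h>; defined here because the Python
# socket module does not expose them on all versions.
IP_MTU_DISCOVER = 10
IP_PMTUDISC_DO = 2   # always set DF, never fragment

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
# Datagrams sent on s now carry the Don't Fragment bit; sends larger
# than the path MTU fail with EMSGSIZE instead of being fragmented.
print(s.getsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER))  # 2
s.close()
```

This is Linux-specific; the option names and semantics differ on other platforms.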
On 11/16/2016 02:40 PM, Jesper Dangaard Brouer wrote:
On Wed, 16 Nov 2016 09:46:37 -0800
Rick Jones wrote:
It is a wild guess, but does setting SO_DONTROUTE affect whether or not
a connect() would have the desired effect? That is there to protect
people from themselves (long story about
tperf users on
Windows and there wasn't (at the time) support for git under Windows.
But I am not against the idea in principle.
happy benchmarking,
rick jones
PS - rick.jo...@hp.com no longer works. rick.jon...@hpe.com should be
used instead.
ms
with a large PAGE_SIZE?
/* avoid msg truncation on > 4096 byte PAGE_SIZE platforms */
or something like that.
rick jones
the
can, while "back in the day" (when some of the first ethtool changes to
report speeds other than the "normal" ones went in) the speed of a
flexnic was fixed, today, it can actually operate in a range. From a
minimum guarantee to an "if there is bandwidth available" cap.
rick jones
On 10/25/2016 08:31 AM, Paul Menzel wrote:
To my knowledge, the firmware files haven’t changed since years [1].
Indeed - it looks like I read "bnx2" and thought "bnx2x". Must remember
to hold-off on replying until after the morning orange juice is consumed :)
rick
version of
the firmware. Usually, finding a package "out there" with the newer
version of the firmware, and installing it onto the system is sufficient.
happy benchmarking,
rick jones
On 10/10/2016 09:08 AM, Rick Jones wrote:
On 10/09/2016 03:33 PM, Eric Dumazet wrote:
OK, I am adding/CC Rick Jones, netperf author, since it seems a netperf
bug, not a kernel one.
I believe I already mentioned fact that "UDP_STREAM -- -N" was not doing
a connect() on the receiver
On 10/09/2016 03:33 PM, Eric Dumazet wrote:
OK, I am adding/CC Rick Jones, netperf author, since it seems a netperf
bug, not a kernel one.
I believe I already mentioned fact that "UDP_STREAM -- -N" was not doing
a connect() on the receiver side.
I can confirm that the receive s
currently
selecting different TXQ.
Just for completeness, in my testing, the VMs were single-vCPU.
rick jones
ConnectX-3 Pro, E5-2670v3  12421  12612
BE3, E5-2640                8178   8484
82599, E5-2640              8499   8549
BCM57840, E5-2640           8544   8560
Skyhawk, E5-2640            8537   8701
happy benchmarking,
Drew Balliet
Jeurg Haefliger
rick jones
true long-term bw
estimate
variable?
We could do that.
We used to have variables (aka module params) while BBR was cooking in
our kernels ;)
Are there better than epsilon odds of someone perhaps wanting to poke
those values as it gets exposure beyond Google?
happy benchmarking,
rick jones
conn-tracking work.
What is that first sentence trying to say? It appears to be incomplete,
and is that supposed to be "L3-symmetric?"
happy benchmarking,
rick jones
with one doorbell.
With small packets and the "default" ring size for this NIC/driver
combination, is the BQL large enough that the ring fills before one hits
the BQL?
rick jones
On 08/31/2016 04:11 PM, Eric Dumazet wrote:
On Wed, 2016-08-31 at 15:47 -0700, Rick Jones wrote:
With regard to drops, are both of you sure you're using the same socket
buffer sizes?
Does it really matter ?
At least at points in the past I have seen different drop counts at the
SO_R
With regard to drops, are both of you sure you're using the same socket
buffer sizes?
In the meantime, is anything interesting happening with TCP_RR or
TCP_STREAM?
happy benchmarking,
rick jones
kinda feel the same way about this situation.
I'm working on XFS (as the transmit analogue to RFS). We'll track
flows enough so that we should know when it's safe to move them.
Is the XFS you are working on going to subsume XPS or will the two
continue to exist in parallel a la RPS and RFS?
rick jones
From: Rick Jones
Since XPS was first introduced two things have happened. Some drivers
have started enabling XPS on their own initiative, and it has been
found that when a VM is sending data through a host interface with XPS
enabled, that traffic can end-up seriously out of order.
Signed-off
From: Rick Jones
Since XPS was first introduced two things have happened. Some drivers
have started enabling XPS on their own initiative, and it has been
found that when a VM is sending data through a host interface with XPS
enabled, that traffic can end-up seriously out of order.
Signed-off
ecluding others going down that path.
happy benchmarking,
rick
steps to pin VMs can enable XPS in that case. It isn't clear that
one should always pin VMs - for example if a (public) cloud needed to
oversubscribe the cores.
happy benchmarking,
rick jones
lt. For others the functionality remains
+disabled until explicitly configured. To enable XPS, the bitmap of
+CPUs that may use a transmit queue is configured using the sysfs file
+entry:
/sys/class/net/<dev>/queues/tx-<n>/xps_cpus
The original wording leaves the impression that XPS is not enabled by
default.
rick
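For example, building and applying the bitmap for CPUs 0-3 might look like this (device name eth0 and queue number 0 are assumptions for illustration):

```shell
# Compute the xps_cpus hex bitmask allowing CPUs 0-3 to use a tx queue.
mask=0
for cpu in 0 1 2 3; do
  mask=$((mask | (1 << cpu)))
done
printf '%x\n' "$mask"    # f

# On a real system (as root), apply it to queue 0 of eth0:
# echo f > /sys/class/net/eth0/queues/tx-0/xps_cpus
```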
when the NIC at the sending end
is a BCM57840. It does not appear that the bnx2x driver in the 4.4
kernel is enabling XPS.
So, it would seem that there are three cases of enabling XPS resulting
in out-of-order traffic, two of which result in a non-trivial loss of
performance.
happy benc
On 08/24/2016 10:23 AM, Eric Dumazet wrote:
From: Eric Dumazet
per_cpu_inc() is faster (at least on x86) than per_cpu_ptr(xxx)++;
Is it possible it is non-trivially slower on other architectures?
rick jones
Signed-off-by: Eric Dumazet
---
include/net/sch_generic.h |2 +-
1 file
8695
Average 4108 8940 8859 8885 8671
happy benchmarking,
rick jones
The sample counts below may not fully support the additional statistics
but for the curious:
raj@tardy:/tmp$ ~/netperf2_trunk/doc/examples/parse_single_stream.py -r
6 waxon_performance.log
trigger an interrupt. Presumably setting
rx_max_coalesced_frames to 1 to disable interrupt coalescing.
happy benchmarking,
rick jones
resently? I believe Phil
posted something several messages back in the thread.
happy benchmarking,
rick jones
On 07/07/2016 09:34 AM, Eric W. Biederman wrote:
Rick Jones writes:
300 routers is far from the upper limit/goal. Back in HP Public
Cloud, we were running as many as 700 routers per network node (*),
and more than four network nodes. (back then it was just the one
namespace per router and
espace per
router and network). Mileage will of course vary based on the "oomph" of
one's network node(s).
happy benchmarking,
rick jones
* Didn't want to go much higher than that because each router had a port
on a common linux bridge and getting to > 1024 would be an unpleasant day.
problematic
since it takes up server resources for sockets sitting in TCP_CLOSE_WAIT.
Isn't the server application expected to act on the read return of zero
(which is supposed to be) triggered by the receipt of the FIN segment?
rick jones
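The read-return-of-zero behavior is easy to demonstrate; here is a small Python sketch using an AF_UNIX socketpair as a stand-in for a TCP connection, where closing one end plays the role of the peer's FIN:

```python
import socket

server, client = socket.socketpair()
client.sendall(b"last request")
client.close()                      # plays the role of the peer's FIN

chunks = []
while True:
    data = server.recv(4096)
    if data == b"":                 # the read return of zero: peer is done
        break
    chunks.append(data)
server.close()                      # the server should react by closing too
print(b"".join(chunks))
```

A server that ignores the empty read and never closes its end is exactly what leaves sockets parked in CLOSE_WAIT.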
We are also in the process of contacting Appl
onnection
which has been reset? Is it limited to those errno values listed in the
read() manpage, or does it end-up getting an errno value from those
listed in the recv() manpage? Or, perhaps even one not (presently)
listed in either?
rick jones
and
so could indeed productively use TCP FastOpen.
"Overall, very good success-rate"
though tempered by
"But... middleboxes were a big issue in some ISPs..."
Though it doesn't get into how big (some connections, many, most, all?)
and how many ISPs.
rick jones
Just an anecdote.
On 06/24/2016 02:46 PM, Tom Herbert wrote:
On Fri, Jun 24, 2016 at 2:36 PM, Rick Jones wrote:
How would you define "severely?" Has it actually been more severe than for
say ECN? Or it was for say SACK or PAWS?
ECN is probably even a bigger disappointment in terms of seeing
YN packets with data have together
severely hindered what otherwise should have been straightforward and
useful feature to deploy.
How would you define "severely?" Has it actually been more severe than
for say ECN? Or it was for say SACK or PAWS?
rick jones
On 06/22/2016 04:10 PM, Rick Jones wrote:
My systems are presently in the midst of an install but I should be able
to demonstrate it in the morning (US Pacific time, modulo the shuttle
service of a car repair place)
The installs finished sooner than I thought. So, receiver:
root@np-cp1