On Mon, Mar 21, 2016 at 5:02 PM, Alexander Bluhm wrote:
> there are several improvements for the sendsyslog(2) man page
> floating around. I have put them into a single diff. Do we want
> all of them?
Looks good to me; refinement can happen in tree.
ok guenther@
On 21/03/2016 19:04, Peter Hessler wrote:
On 2016 Mar 21 (Mon) at 16:22:53 +0100 (+0100), Claudio Jeker wrote:
:On Mon, Mar 21, 2016 at 03:54:38PM +0100, Peter Hessler wrote:
:> We ran into a situation where we accidentally blackholed traffic going to
:> a new Internet Exchange. When we added th
On Mon, Mar 21, 2016 at 11:15:48AM +0100, Patrick Wildt wrote:
> Hi,
>
> I would like to get rid of even more unused CPUs, so we end up with only
> armish, zaurus (armv5) and armv7. This diff removes ARM9E, but I also
> have diffs prepared to get rid of ARM10 and ARM11.
Tested here, and OK bme
Hi,
there are several improvements for the sendsyslog(2) man page
floating around. I have put them into a single diff. Do we want
all of them?
bluhm
Index: lib/libc/sys/sendsyslog.2
===================================================================
RCS file: /data/mirror/openbsd/cvs/src/lib/l
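A minimal usage sketch (not from the posted diff), assuming the three-argument
prototype declared in <sys/syslog.h>; in normal programs syslog(3) formats the
line and calls this for you:

#include <sys/syslog.h>
#include <string.h>

int
log_line(const char *line)
{
        /* LOG_CONS asks for console fallback if syslogd(8) is unreachable. */
        return (sendsyslog(line, strlen(line), LOG_CONS));
}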
Hi,
compile tested.
-Artturi
Index: sys/arch/arm/include/pmap.h
===================================================================
RCS file: /cvs/src/sys/arch/arm/include/pmap.h,v
retrieving revision 1.38
diff -u -p -u -r1.38 pmap.h
--- sys/arch/arm/include/pmap.h 19 Mar 2016 09:36:57 -
> Date: Mon, 21 Mar 2016 20:02:28 +0100
> From: Stefan Kempf
>
> Recently we found that amaps consume a good deal of kernel address space.
> See this thread: https://marc.info/?l=openbsd-tech&m=145752756005014&w=2.
> And we found a way to reduce kernel mem pressure for some architectures
> at lea
Recently we found that amaps consume a good deal of kernel address space.
See this thread: https://marc.info/?l=openbsd-tech&m=145752756005014&w=2.
And we found a way to reduce kernel mem pressure for some architectures
at least. See the diffs in that thread.
Besides that, it's possible to shrink
Hi all,
We are contemplating this in FreeBSD land and thought I'd take your
opinion.
https://lists.freebsd.org/pipermail/freebsd-transport/2016-March/000100.html
Is there a reason for snd_wnd decrease here?
Cheers,
Hiren
ps: keep me cc'd as I am not subscribed.
On 2016 Mar 21 (Mon) at 16:22:53 +0100 (+0100), Claudio Jeker wrote:
:On Mon, Mar 21, 2016 at 03:54:38PM +0100, Peter Hessler wrote:
:> We ran into a situation where we accidentally blackholed traffic going to
:> a new Internet Exchange. When we added the new vlans and new peers, the
:> nexthop ad
On Mon, Mar 21, 2016 at 05:11:04PM +0100, Peter Hessler wrote:
> On 2016 Mar 21 (Mon) at 16:22:53 +0100 (+0100), Claudio Jeker wrote:
> :On Mon, Mar 21, 2016 at 03:54:38PM +0100, Peter Hessler wrote:
> :> We ran into a situation where we accidentally blackholed traffic going to
> :> a new Internet
> Date: Sat, 19 Mar 2016 13:53:07 +0100
> From: Martin Pieuchot
>
> Applications using multiple threads often call sched_yield(2) to
> indicate that one of the threads cannot make any progress because
> it is waiting for a resource held by another one.
>
> One example of this scenario is the _sp
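One common shape of that pattern, as an illustrative sketch only (this is not
the libc _spinlock code): a userland spinlock that calls sched_yield(2)
whenever the lock is held by another thread.

#include <sched.h>
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

static void
spin_lock_yield(void)
{
        /* Give the CPU back to the lock holder instead of burning it. */
        while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
                sched_yield();
}

static void
spin_unlock_yield(void)
{
        atomic_flag_clear_explicit(&lock, memory_order_release);
}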
On 21/03/16(Mon) 15:54, Peter Hessler wrote:
> We ran into a situation where we accidentally blackholed traffic going to
> a new Internet Exchange. When we added the new vlans and new peers, the
> nexthop address on that vlan was *not* our neighbor's address, but
> instead used our own IP address
On Mon, Mar 21, 2016 at 03:54:38PM +0100, Peter Hessler wrote:
> We ran into a situation where we accidentally blackholed traffic going to
> a new Internet Exchange. When we added the new vlans and new peers, the
> nexthop address on that vlan was *not* our neighbor's address, but
> instead used o
We ran into a situation where we accidentally blackholed traffic going to
a new Internet Exchange. When we added the new vlans and new peers, the
nexthop address on that vlan was *not* our neighbor's address, but
instead used our own IP address on that new interface. "bgpctl show rib"
showed the
When entries are displayed the SIN_PROXY bit is never set, so remove
this dead code.
Proxy entries correspond to "published" ones, as explained in the manual.
Index: arp.8
===================================================================
RCS file: /cvs/src/usr.sbin/arp/arp.8,v
retrieving revisio
> Date: Mon, 21 Mar 2016 15:31:42 +0300
> From: Alexei Malinin
>
> Hello.
>
> I'm not sure but it seems to me that there are several missed things:
> - checking path against NULL,
POSIX says that we should return EINVAL in that case.
> - setting errno to ENOMEM in case of malloc() failure,
ma
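A sketch of the first point only (not the posted diff): POSIX has realpath(3)
fail with EINVAL when the path argument is a null pointer; for the second
point, malloc(3) already sets errno to ENOMEM on failure, so nothing extra is
needed there. The wrapper name below is made up for illustration.

#include <errno.h>
#include <stddef.h>
#include <stdlib.h>

char *
realpath_checked(const char *path, char *resolved)
{
        if (path == NULL) {
                errno = EINVAL;         /* POSIX-specified error for NULL path */
                return (NULL);
        }
        return (realpath(path, resolved));
}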
On Mon, Mar 21, 2016 at 03:31:42PM +0300, Alexei Malinin wrote:
> Hello.
>
> I'm not sure but it seems to me that there are several missed things:
> - checking path against NULL,
> - setting errno to ENOMEM in case of malloc() failure,
> - clarification in comments.
>
>
> --
> Alexei Malinin
S
With the diff I just sent all the ARP regression tests suddenly pass.
But I did not fix anything! This is because the actual proxy ARP test
is incomplete.
Diff below fixes it by adding *two* different entries for a given IP.
One is "published" and should be returned when an echo request is
receiv
When the caller of arplookup() asked for a proxy'd ARP entry, make
sure the entry returned by rtalloc(9) is indeed "published".
This is currently always true for ARP entries added with arp(8) but
it is not the case if you add your own entry with the 33rd bit set
but without setting RTF_ANNOUNCE.
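In kernel style, the check described amounts to something like the sketch
below (illustrative only, not the committed arplookup() diff; the helper name
is made up):

#include <sys/param.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/route.h>
#include <netinet/in.h>
#include <netinet/if_ether.h>           /* RTF_ANNOUNCE */

struct rtentry *
proxy_check(struct rtentry *rt, int proxy)
{
        if (rt == NULL)
                return (NULL);
        if (proxy && (rt->rt_flags & RTF_ANNOUNCE) == 0) {
                rtfree(rt);             /* found a route, but not "published" */
                return (NULL);
        }
        return (rt);
}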
Hello.
I'm not sure but it seems to me that there are several missed things:
- checking path against NULL,
- setting errno to ENOMEM in case of malloc() failure,
- clarification in comments.
--
Alexei Malinin
--- src/lib/libc/stdlib/realpath.c.orig Tue Oct 13 23:55:37 2015
+++ src/lib/libc/st
Sat, 19 Mar 2016 20:33:23 -0600 (MDT) Philip Guenther
> CVSROOT: /cvs
> Module name: www
> Changes by: guent...@cvs.openbsd.org  2016/03/19 20:33:23
>
> Modified files:
> faq: current.html
>
> Log message:
> Need to build ld.so first
Closing a href tag spotted i
On Sun, Mar 20, 2016 at 07:28:45PM +0100, Alexander Bluhm wrote:
> On Sat, Mar 19, 2016 at 10:41:06PM +0100, Alexander Bluhm wrote:
> > Perhaps the tcps_sc_seedrandom counter with a netstat -s line should
> > be commited anyway to show the problem.
>
> ok?
OK claudio@
> bluhm
>
> Index: sys/ne
On Mon, Mar 21, 2016 at 08:25:59PM +1000, David Gwynne wrote:
> how can i judge if this is better than just using a single hash with a strong
> function?
The attack I see is that you can measure the bucket distribution
by timing the SYN+ACK response. You can collect samples that end
in the same
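The defence being discussed is a keyed, periodically reseeded hash: roughly,
the bucket index looks like the sketch below (loosely modelled on the kernel's
crypto/siphash.h interface; names are illustrative, this is not the actual syn
cache code). Reseeding the key, which is what the tcps_sc_seedrandom counter
tracks, invalidates whatever bucket distribution an attacker has measured, and
keeping two caches, as mentioned up-thread, is what lets an ACK for a
handshake hashed under the old key still find its entry.

#include <sys/types.h>
#include <crypto/siphash.h>

struct syn_tuple {
        uint32_t src, dst;              /* IPv4 addresses, network order */
        uint16_t sport, dport;          /* ports, network order */
};

uint32_t
syn_bucket(const SIPHASH_KEY *key, const struct syn_tuple *t,
    uint32_t nbuckets)
{
        /* Without the key, the bucket of a given tuple is unpredictable. */
        return (SipHash24(key, t, sizeof(*t)) % nbuckets);
}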
could someone test this on a strict arch?
ok?
Index: if_bridge.c
===================================================================
RCS file: /cvs/src/sys/net/if_bridge.c,v
retrieving revision 1.276
diff -u -p -r1.276 if_bridge.c
--- if_bridge.c 8 Mar 2016 09:09:43 - 1.276
+++ if_bridg
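For anyone wondering what a strict-alignment failure looks like here: the
usual culprit is loading a multi-byte header field through a pointer that is
not suitably aligned. A general illustration of the safe pattern (not taken
from the diff above):

#include <stdint.h>
#include <string.h>

static uint32_t
load32(const unsigned char *p)
{
        uint32_t v;

        /*
         * memcpy is safe for any alignment of p; a direct
         * *(uint32_t *)p load can fault on strict-alignment CPUs.
         */
        memcpy(&v, p, sizeof(v));
        return (v);                     /* byte order unchanged; ntohl() if needed */
}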
> On 21 Mar 2016, at 4:28 AM, Alexander Bluhm wrote:
>
> On Sat, Mar 19, 2016 at 10:41:06PM +0100, Alexander Bluhm wrote:
>> Perhaps the tcps_sc_seedrandom counter with a netstat -s line should
>> be commited anyway to show the problem.
>
> ok?
how can i judge if this is better than just using
Hi,
I would like to get rid of even more unused CPUs, so we end up with only
armish, zaurus (armv5) and armv7. This diff removes ARM9E, but I also
have diffs prepared to get rid of ARM10 and ARM11.
ok?
Patrick
diff --git sys/arch/arm/arm/cpu.c sys/arch/arm/arm/cpu.c
index a3fe271..6fa6bc3 1006
On 20/03/16(Sun) 19:19, Alexander Bluhm wrote:
> On Sat, Mar 19, 2016 at 10:41:06PM +0100, Alexander Bluhm wrote:
> > The drawback is that the cache lookup has to be done in two syn
> > caches when an ACK arrives.
>
> This can be prevented most of the time. Switch the cache only after
> 1
The heart of our ongoing network stack work to take the IP forwarding
path out of the kernel lock relies on an MP-safe routing table. Our
plan is to use a lock-free lookup based on ART and SRPs.
But before turning ART MP-safe I'd like to squash the remaining bugs.
The only way to be sure to find
On 03/21/16 at 01:29, Mark Kettenis wrote:
>
> No. It's a hack. It points out a problem that should be investigated
> deeper.
>
Maybe that's not only thread-related. This diff only makes a difference
with multi-threaded processes. It may be worth considering looking at
the process level as wel