On Mon, Jun 27, 2016 at 05:10:14PM +0300, Kapetanakis Giannis wrote:
> new version with all changes
I have polished the diff a bit and would like to commit it.
ok?
bluhm
Index: usr.sbin/syslogd/syslogd.8
===
RCS file: /data/mirror/
On Mon, Jul 11, 2016 at 04:39:05PM -0400, Ted Unangst wrote:
> Tim Newsham wrote:
> > The tmpfs filesystem allows the mounting user to specify a
> > username, a groupname or a device name for the root node of
> > the filesystem. A user that specifies a value of VNOVAL for
> > any of these fields w
On Mon, Jul 11, 2016 at 05:06:33PM -0400, Ted Unangst wrote:
> Todd C. Miller wrote:
> > On Mon, 11 Jul 2016 16:39:05 -0400, "Ted Unangst" wrote:
> >
> > > sigh. i don't know what else can trigger that kassert, so just fix the
> > > caller
> > > to do the same check and return an error.
> >
> >
On Mon, 11 Jul 2016 17:06:33 -0400, "Ted Unangst" wrote:
> those checks are equally useless. UID_MAX is UINT_MAX so the tests don't fire.
>
> the question is what other tmpfs code blows up when nodes owned by -1 start
> showing up.
Fair enough. But this bit can never be true:
(args.ta_r
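The point about the range checks can be illustrated with a small stand-alone sketch (hypothetical names with trailing underscores; on OpenBSD uid_t is a 32-bit unsigned type and UID_MAX is UINT_MAX, so a test like `uid > UID_MAX` is vacuous, while VNOVAL (-1) converts to UINT_MAX and passes):

```c
#include <limits.h>
#include <stdint.h>

/* Hypothetical stand-ins for the kernel definitions under discussion. */
#define VNOVAL_		(-1)		/* the VFS "no value" marker */
#define UID_MAX_	UINT_MAX	/* uid_t is 32-bit unsigned on OpenBSD */

typedef uint32_t my_uid_t;

/*
 * A check like (uid > UID_MAX) can never fire: every value a 32-bit
 * unsigned type can hold is <= UINT_MAX, including (uid_t)-1.
 */
static int
uid_check_fires(my_uid_t uid)
{
	return uid > UID_MAX_;	/* always 0 */
}
```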
Todd C. Miller wrote:
> On Mon, 11 Jul 2016 16:39:05 -0400, "Ted Unangst" wrote:
>
> > sigh. i don't know what else can trigger that kassert, so just fix the
> > caller
> > to do the same check and return an error.
>
> Checking for VNOVAL is kind of bogus. How about we try something
> more sens
On Mon, 11 Jul 2016 16:39:05 -0400, "Ted Unangst" wrote:
> sigh. i don't know what else can trigger that kassert, so just fix the caller
> to do the same check and return an error.
Checking for VNOVAL is kind of bogus. How about we try something
more sensible?
- todd
Index: tmpfs_subr.c
=
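A caller-side sanity check of the sort being discussed might look like the following sketch (reduced stand-in types, not the actual diff): reject the "no value" marker in the mount arguments up front, instead of letting it reach the KASSERT in tmpfs_alloc_node().

```c
#include <errno.h>
#include <stdint.h>

#define VNOVAL_	(-1)

/* Reduced stand-in for the real tmpfs mount arguments. */
struct tmpfs_args_sketch {
	uint32_t ta_root_uid;
	uint32_t ta_root_gid;
};

/*
 * Validate the mount arguments before any nodes are allocated,
 * returning EINVAL rather than asserting deep in the filesystem.
 */
static int
tmpfs_check_args(const struct tmpfs_args_sketch *args)
{
	if (args->ta_root_uid == (uint32_t)VNOVAL_ ||
	    args->ta_root_gid == (uint32_t)VNOVAL_)
		return (EINVAL);
	return (0);
}
```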
Tim Newsham wrote:
> The tmpfs filesystem allows the mounting user to specify a
> username, a groupname or a device name for the root node of
> the filesystem. A user that specifies a value of VNOVAL for
> any of these fields will trigger an assert in tmpfs_alloc_node():
>
> /* XXX pedro: we
Here's a bug related to tmpfs mounts.
Forwarded Message
Subject: [Bug49] Tmpfs mount with bad args can lead to a panic
Date: Mon, 11 Jul 2016 10:07:33 -1000
From: Tim Newsham
To: dera...@openbsd.org, Jesse Hertz
Hi Theo, here's a low-severity DoS issue.. root
Theo Buehler wrote:
> Last fall there was a thread containing some examples of composite
> numbers that were not factored properly by factor(6):
>
> https://marc.info/?t=14415584232&r=1&w=2
>
> Some suggestions were made and some (incomplete) diffs were posted, but
> no action was taken. Belo
> Except that the flipper isn't enabled yet and that the backpressure
> mechanism is busted somehow. At least that is what the recent
> experiment with cranking up the buffer cache limit showed us.
> People screamed and we backed the change out again. And there were
> problems on amd64 and spa
Last fall there was a thread containing some examples of composite
numbers that were not factored properly by factor(6):
https://marc.info/?t=14415584232&r=1&w=2
Some suggestions were made and some (incomplete) diffs were posted, but
no action was taken. Below is a diff that uses Newton's met
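For reference, Newton's method for the integer square root (the technique mentioned above) can be sketched as follows; this is an illustrative routine, not the posted diff:

```c
#include <stdint.h>

/*
 * Integer square root by Newton's method: iterate x' = (x + n/x) / 2
 * until the estimate stops decreasing. The result is the largest
 * integer r with r * r <= n.
 */
static uint64_t
isqrt(uint64_t n)
{
	uint64_t x, y;

	if (n < 2)
		return n;
	x = n;
	y = (x + 1) / 2;
	while (y < x) {
		x = y;
		y = (x + n / x) / 2;
	}
	return x;
}
```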
> From: "Theo de Raadt"
> Date: Mon, 11 Jul 2016 09:29:16 -0600
>
> > > And bufs don't need it either. Have you actually cranked your buffer
> > > cache that high? I have tested this, on sparc64 which has unlimited DMA
> > > reach due to the iommu. The system comes to a crawl when there are
> >
> On Mon, 11 Jul 2016, Theo de Raadt wrote:
> > > No, I didn't know that. I assumed that having a few more GBs of bufcache
> > > would help the performance. Until that is the case, 64bit dma does not
> > > make much sense.
> >
> > BTW, my tests were on a 128GB sun4v machine. Sun T5140. They ar
On Mon, 11 Jul 2016, Theo de Raadt wrote:
> > No, I didn't know that. I assumed that having a few more GBs of bufcache
> > would help the performance. Until that is the case, 64bit dma does not
> > make much sense.
>
> BTW, my tests were on a 128GB sun4v machine. Sun T5140. They are
> actually
> > And bufs don't need it either. Have you actually cranked your buffer
> > cache that high? I have tested this, on sparc64 which has unlimited DMA
> > reach due to the iommu. The system comes to a crawl when there are
> > too many mbufs or bufs, probably due to management structures unable
> > t
On Mon, 11 Jul 2016, Theo de Raadt wrote:
> > OpenBSD on amd64 assumes that DMA is only possible to the lower 4GB.
>
> Not exactly. On an architecture-by-architecture basis, OpenBSD is
> capable of insisting DMA reachable memory only lands in a smaller zone
> of memory -- because it makes the ot
> BTW, for usb devices, it probably depends on the host controller if 64bit
> dma is possible or not. I guess most xhci controllers will be able to do
> it.
The 4GB limitation is a simple solution to a wide variety of problems.
Please describe a situation where 4GB of dma memory is a limitation
> OpenBSD on amd64 assumes that DMA is only possible to the lower 4GB.
Not exactly. On an architecture-by-architecture basis, OpenBSD is
capable of insisting DMA reachable memory only lands in a smaller zone
of memory -- because it makes the other layers of code easier.
> More interesting would
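The constraint being described here can be stated concretely: with a 32-bit DMA window, a buffer is only reachable if it lies entirely below 4GB. A hypothetical helper (not kernel code) showing the check:

```c
#include <stdint.h>

#define DMA_LIMIT_4G	((uint64_t)1 << 32)

/*
 * Return nonzero if a buffer of `len` bytes at physical address
 * `paddr` fits entirely within the 32-bit DMA window, guarding
 * against address wrap-around.
 */
static int
dma_reachable_32(uint64_t paddr, uint64_t len)
{
	if (len == 0)
		return 1;
	return paddr + len <= DMA_LIMIT_4G && paddr + len > paddr;
}
```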
On Mon, 11 Jul 2016, Ted Unangst wrote:
> Stefan Fritsch wrote:
> > On Mon, 11 Jul 2016, Reyk Floeter wrote:
> > > The intentional 4GB limit is for forwarding: what if you forward mbufs
> > > from a 64bit-capable interface to another one that doesn't support 64bit
> > > DMA? And even if you woul
> From: "Ted Unangst"
> Date: Mon, 11 Jul 2016 10:45:19 -0400
>
> Stefan Fritsch wrote:
> > On Mon, 11 Jul 2016, Reyk Floeter wrote:
> > > The intentional 4GB limit is for forwarding: what if you forward mbufs
> > > from a 64bit-capable interface to another one that doesn't support 64bit
> > >
Stefan Fritsch wrote:
> On Mon, 11 Jul 2016, Reyk Floeter wrote:
> > The intentional 4GB limit is for forwarding: what if you forward mbufs
> > from a 64bit-capable interface to another one that doesn't support 64bit
> > DMA? And even if you would only enable it if all interfaces are
> > 64bit-c
> Date: Mon, 11 Jul 2016 16:10:04 +0200 (CEST)
> From: Stefan Fritsch
>
> On Mon, 11 Jul 2016, Reyk Floeter wrote:
> > The intentional 4GB limit is for forwarding: what if you forward mbufs
> > from a 64bit-capable interface to another one that doesn't support 64bit
> > DMA? And even if you wou
On Sun, Jul 10, 2016 at 09:12:03PM +0200, Mark Kettenis wrote:
> Currently the armv7 port has several bits ehci(4) glue code.
> Basically there is one of these for every SoC platform that we
> support. I converted the glue for the i.MX6 platform to use the FDT
> and that works fine. However, this
On Mon, 11 Jul 2016, Reyk Floeter wrote:
> The intentional 4GB limit is for forwarding: what if you forward mbufs
> from a 64bit-capable interface to another one that doesn't support 64bit
> DMA? And even if you would only enable it if all interfaces are
> 64bit-capable, what if you plug in a 32
Hi,
The intentional 4GB limit is for forwarding: what if you forward mbufs from a
64bit-capable interface to another one that doesn't support 64bit DMA? And even
if you would only enable it if all interfaces are 64bit-capable, what if you
plug in a 32bit USB/hotplug interface? We did not want t
Brent Cook wrote:
> Noted by VS2013, const values should be initialized (though I think
> the 'static' should also implicitly zero them).
this sounds like the compiler doesn't know C?
> This also removes some unused code that also contained uninitialized
> static consts.
that part looks fine.
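The "static should also implicitly zero" point is indeed what the C standard guarantees: objects with static storage duration are zero-initialized whether or not they are const (C99 6.7.8p10), which is why the warning is spurious. A stand-alone demonstration, not the libcrypto code:

```c
/*
 * Per C99 6.7.8p10, objects with static storage duration are
 * implicitly zero-initialized, const or not.
 */
static const int implicit_zero;		/* no initializer needed */
static const int explicit_zero = 0;	/* what the warning asks for */

/* Both objects hold the same value, 0. */
int
values_match(void)
{
	return implicit_zero == explicit_zero;
}
```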
Hi,
following the discussion about mbufs, I have some questions about 64bit
DMA in general.
OpenBSD on amd64 assumes that DMA is only possible to the lower 4GB. But
there are many devices (PCIe, virtio, ...) that can do DMA to the whole
memory. Is it feasible to have known good devices opt-in
Noted by VS2013, const values should be initialized (though I think
the 'static' should also implicitly zero).
This also removes some unused code that also contained uninitialized
static consts.
ok?
Index: evp/e_chacha20poly1305.c
=
On Fri, Jul 08, 2016 at 07:20:32PM -0600, Bob Beck wrote:
> One thing I am considering here (and for y'all to know, this is a
> major API addition and won't
> go in until after the soon upcoming OpenBSD release cycle happens) is
> that the way
> we have done this in the past with libtls is to just
On 07/08/16 23:35, Alexander Bluhm wrote:
> On Tue, Jun 28, 2016 at 08:30:16AM +0200, Martin Pieuchot wrote:
>> With this diff if your next hop becomes invalid after being cached you'll
>> also need two ICMP6_PACKET_TOO_BIG to restore the MTU, is this wanted?
>
> No, a single ICMP6_PACKET_TOO_BIG
On 07/06/16 18:07, Martin Pieuchot wrote:
> KAME people started putting multicast addresses in the routing table.
> But since it hasn't been designed for that they used workarounds.
>
> I believe it's time to embrace and consolidate this choice.
>
> . The first reason is that multicast address