On Fri, Jan 10, 2014 at 6:31 AM, Ted Unangst wrote:
> On Fri, Jan 10, 2014 at 05:14, Miod Vallat wrote:
>>> The only caller of kcopy is uiomove. There is no way a function like
>>> this can ever work. If you need to rely on your copy function to save
>>> you from pointers outside the address space
P_BIGLOCK is only used to figure out if the process holds the biglock.
The problem with this is that the first entry point from a sleepable context
to the kernel needs to call KERNEL_PROC_LOCK while recursive (or non-process)
entry points need to call KERNEL_LOCK. Pedro showed at least one entry
po
ok
//art
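A minimal userland sketch of the locking rule described above, with hypothetical `_sketch` names standing in for the real KERNEL_PROC_LOCK/KERNEL_LOCK primitives (this is an illustration of the nesting, not the kernel code):

```c
/* Userland sketch of the rule above: the first entry from a sleepable
 * process context takes the big lock outright; recursive or non-process
 * entries only nest on a lock that may already be held. */
static int biglock_depth;	/* 0 = big lock not held */

static void
kernel_proc_lock_sketch(void)
{
	/* first entry point from process context: lock must not be held */
	biglock_depth = 1;
}

static void
kernel_lock_sketch(void)
{
	/* recursive (or non-process) entry: lock may already be held */
	biglock_depth++;
}

static void
kernel_unlock_sketch(void)
{
	biglock_depth--;
}
```

Calling the wrong one at the wrong entry point either self-deadlocks or unbalances the depth count, which is exactly why the entry points have to be classified.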
On Tue, Jul 5, 2011 at 6:46 PM, Ted Unangst wrote:
> Remove a broken optimization found by the new pool_chk code. It
> leaves an empty page in curpage, and this inconsistency slowly spreads
> until finally one of the other pool checks freaks out.
>
> This is only likely to occur if you
this diff doesn't apply
> diff --git a/arch/amd64/isa/clock.c b/arch/amd64/isa/clock.c
> index 23fc0f0..1b4ff7c 100644
> --- a/arch/amd64/isa/clock.c
> +++ b/arch/amd64/isa/clock.c
On Tue, May 10, 2011 at 5:33 AM, Jeff Licquia wrote:
> My question to you is: do you consider the FHS to be relevant to current
> and
> future development of OpenBSD? If not, is this simply due to lack of
> maintenance; would your interest in the FHS be greater with more consistent
> updates?
Mor
On Tue, Apr 26, 2011 at 10:09 PM, Amit Kulkarni wrote:
> What do you guys think if the page size is dynamically adjusted to the
> datasize of FFS1 i.e when I fire up disklabel it is by default 16Kb
> on FFS1 on amd64. And higher on FFS2 only systems?
Implement it. Let us know the numbers.
//art
Free the correct memory when we failed to allocate va.
//art
Index: uvm/uvm_km.c
===================================================================
RCS file: /cvs/src/sys/uvm/uvm_km.c,v
retrieving revision 1.97
diff -u -r1.97 uvm_km.c
--- uvm/uvm_km.c	18 Apr 2011 19:23:46 -0000	1.97
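A userland sketch of the bug class this diff fixes, with a hypothetical structure standing in for the uvm_km objects: when a later allocation fails, the error path must free exactly what was allocated so far, not some other pointer.

```c
#include <stdlib.h>

/* Hypothetical two-step allocation: backing store first, then va. */
struct obj {
	void *backing;	/* allocated first */
	void *va;	/* allocated second; may fail */
};

static struct obj *
obj_alloc_sketch(int fail_va)
{
	struct obj *o = malloc(sizeof(*o));

	if (o == NULL)
		return NULL;
	o->backing = malloc(64);
	if (o->backing == NULL) {
		free(o);
		return NULL;
	}
	o->va = fail_va ? NULL : malloc(64);
	if (o->va == NULL) {
		/* error path: free the memory we did allocate */
		free(o->backing);
		free(o);
		return NULL;
	}
	return o;
}
```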
A repeat of an earlier diff.
Change stack and exec arguments allocation from old allocators to km_alloc(9).
//art
Index: kern/kern_exec.c
===================================================================
RCS file: /cvs/src/sys/kern/kern_exec.c,v
retrieving revision 1.117
diff -u -r1.117 kern_
On Tue, Apr 5, 2011 at 11:16 PM, Mark Kettenis wrote:
>> + uaddr = km_alloc(USPACE, &kv_fork, &kp_dma_zero, &kd_waitok);
>> if (uaddr == 0) {
>
> ...you should use NULL in the comparison here and drop the (struct
> user *) cast a bit further down.
>
Yup. I'll fix that after commit.
//
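A sketch of the style point in the review, with `fake_km_alloc` as a hypothetical stand-in for km_alloc(9): the allocator returns a pointer, so compare it against NULL rather than 0, and since the return type is `void *`, no cast is needed on assignment in C.

```c
#include <stddef.h>
#include <stdlib.h>

struct user { int u_dummy; };	/* stand-in for the kernel's struct user */

static void *
fake_km_alloc(size_t sz)
{
	return malloc(sz);
}

static struct user *
uarea_alloc_sketch(void)
{
	void *uaddr = fake_km_alloc(sizeof(struct user));

	if (uaddr == NULL)	/* not: if (uaddr == 0) */
		return NULL;
	return uaddr;		/* implicit void * conversion, no cast */
}
```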
A few more conversions to km_alloc: exec arguments, kernel stacks and
pipe buffers.
Tested on amd64, i386 and sparc. Please give it a spin on other architectures,
I would be especially interested in mips64 since it's the only one that needs
kernel stack alignment.
//art
Index: kern/kern_exec.c
===================================================================
On Tue, Apr 5, 2011 at 8:05 AM, Anton Maksimenkov wrote:
> That is why kmthread exists?
No, but it's a nice side effect that we can use it to resolve the static kentry
problem.
The reason for kmthread is that we want to reduce the use of kmem_map
since it has problems with locking and recursion.
First proper use of the new km_alloc.
- Change pool constraints to use kmem_pa_mode instead of uvm_constraint_range
- Use km_alloc for all backend allocations in pools.
- Use km_alloc for the emergency kentry allocations in uvm_mapent_alloc
- Garbage collect uvm_km_getpage, uvm_km_getpage_pla
There were two problems with vslock_device functions that are
used for magic page flipping for physio and bigmem.
- Fix error handling so that we free stuff on error.
- We use the mappings to keep track of which pages need to be
freed so don't unmap before freeing (this is theoretically
in
Ariane van der Steldt writes:
> Why are the pventries allocated from the kmem_map anyway? I think they
> should be allocated using the uvm_km_getpage instead. Or even better,
> from a pvpool like amd64.
Recursion.
caller holds lock on kernel_map. getpage pool is empty, caller wakes
up the getpa
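The recursion can be sketched in userland with plain flags standing in for the real locks (hypothetical names, not the uvm code): the caller already holds the kernel_map lock, so a pool refill that needs the same lock can never proceed, and the thread ends up waiting on itself.

```c
static int kernel_map_locked;	/* 1 = current thread holds kernel_map */

static int
pool_refill_sketch(void)
{
	if (kernel_map_locked)
		return -1;	/* taking the lock again would deadlock */
	kernel_map_locked = 1;	/* allocate fresh pages from kernel_map */
	kernel_map_locked = 0;
	return 0;
}
```

This is why allocating pventries straight out of kmem_map from a path that already holds map locks is a problem, independent of which pool or hash fronts it.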
Vladimir Kirillov writes:
> Index: sched.h
> ===================================================================
> RCS file: /cvs/src/sys/sys/sched.h,v
> retrieving revision 1.22
> diff -N -u -p sched.h
> --- sched.h	14 Apr 2009 09:13:25 -0000	1.22
> +++ sched.h	29 Jul 2009 10:30:52 -0000
Otto Moerbeek writes:
>> AFAIK the whole work was done to make the cache more sane. The current
>> version is just insane enough that Bob was crying, shouting and playing
>> with red wine bottles during c2k9.
>
> That's not enough reason to change the data structure.
Yes, it is. Code is primaril
Otto Moerbeek writes:
> What's the reason to move to RB trees? In general they are slower,
> have larger memory overhead
slower - not in practice. Especially in this case where we have one tree
per parent vnode instead of one global hash. This also allows better
locking granularity (if that will
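The structural point can be sketched with hypothetical types (a linked list standing in for the RB tree): each parent vnode owns its own name cache, so lookups under different directories walk different structures and, in the real code, could take different locks.

```c
#include <stddef.h>
#include <string.h>

struct ncentry {
	struct ncentry *next;	/* RB_ENTRY in the real tree */
	const char *name;
	int target;		/* stand-in for the target vnode */
};

struct vnode {
	struct ncentry *cache;	/* per-parent structure, not a global hash */
};

static int
cache_lookup_sketch(struct vnode *dvp, const char *name)
{
	struct ncentry *e;

	/* only dvp's own cache (and, eventually, lock) is involved */
	for (e = dvp->cache; e != NULL; e = e->next)
		if (strcmp(e->name, name) == 0)
			return e->target;
	return -1;
}
```

With one global hash, every lookup contends on the same structure; per-parent trees bound both the search to one directory's entries and the locking to one vnode.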