On 26-May-01 Dima Dorfman wrote:
Is there a reason vm_pager_allocate acquires vm_mtx itself if
necessary but vm_pager_deallocate does not? At the moment, detaching
an md(4) disk will panic the system with a failed mtx_assert in
vm_pager_deallocate. This can be fixed one of two ways:
vm_pager_deallocate could be made to deal
On Tue, 24 Apr 2001, Julian Elischer wrote:
Poul-Henning Kamp wrote:
>
> I'm sure you are fully aware of the implications of the strategically
> placed "supposed" in your own sentence. I have never heard anybody
> get Mach code multithreaded yet.
Mach has successfully run on multiprocessor, multithreaded
systems since 1991.
> The Mach code we originally inherited was supposed to already be
> multiprocessor safe. Did we manage to eliminate that capability?
Yes and no. The vm_map layer still has the necessary locking calls,
but the vm_object and pmap layers don't. The pmap is still similar
enough that the original
< said:
> You can find the work I've done so far to make a giant vm mutex
> here:
The Mach code we originally inherited was supposed to already be
multiprocessor safe. Did we manage to eliminate that capability?
-GAWollman
* Alfred Perlstein <[EMAIL PROTECTED]> [010423 21:51] wrote:
> You can find the work I've done so far to make a giant vm mutex
> here:
>
> http://people.freebsd.org/~alfred/vm.diff
I've refreshed the diff, it now makes it to:
vfs_default.c 545
>
> There is potential for nasty lock ordering conflicts here.
>
> Page faults will go vm_mtx -> vm_page_queues_mtx
> The pageout code goes vm_page_queues_mtx -> vm_mtx
Actually vm_page_queues_mtx == vm_mtx. At a later date I may look
at fine-graining the vm_mtx
On Mon, 23 Apr 2001, Alfred Perlstein wrote:
> requires vm_page_queues_mtx:
> manipulation of vm_page_queues
[snip]
> pmaps spotted:
> pmap_copy_page
> pmap_page_protect
There is potential for nasty lock ordering conflicts here.
Page faults will go vm_mtx -> vm_p
vm_add_new_page() asserts
vm_page_io_start/finish
vm_page_wakeup
vm_page_busy
vm_page_flash
vm_page_flag_clear
vm_page_flag_set
vm_page_cache
vm_page_free
vm_page_free_zero
vm_object_page_clean
mtx_assert(&vm_mtx, MA_OWNED);
vm_page_rename
vm_page_insert
vm_object_shad
16 matches