OK, I've committed it, but I've left the default at "don't pin."
That way the existing behaviour hasn't changed, and it's easy to flip
it on to play with.
Thanks for the feedback, John / Peter!
-a
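For anyone wanting to try it, the shape of the change is roughly the
following; this is a minimal sketch, and the tunable names
kern.pin_default_swi / kern.pin_pcpu_swi are assumptions for
illustration, not a claim about the committed diff:

/*
 * Sketch only, not the committed diff.  Two loader tunables gate swi
 * pinning and both default to 0, so default behaviour is unchanged.
 */
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>

static int pin_default_swi = 0;	/* 0 = existing behaviour: don't pin */
static int pin_pcpu_swi = 0;

TUNABLE_INT("kern.pin_default_swi", &pin_default_swi);
TUNABLE_INT("kern.pin_pcpu_swi", &pin_pcpu_swi);

SYSCTL_INT(_kern, OID_AUTO, pin_default_swi, CTLFLAG_RDTUN,
    &pin_default_swi, 0, "Pin the default (catch-all) swi to CPU 0");
SYSCTL_INT(_kern, OID_AUTO, pin_pcpu_swi, CTLFLAG_RDTUN,
    &pin_pcpu_swi, 0, "Pin each per-CPU swi to its own CPU");

With that shape, flipping it on to play with is one line in
/boot/loader.conf (e.g. kern.pin_default_swi=1) and a reboot.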
On 9 May 2014 16:49, Peter Grehan wrote:
>> Yup. I've just done that.
>>
>> http://people.freebsd.org/~adrian/norse/20140509-swi-pin-1.diff
>
>
> Thanks, that'll work.
>
>
>> Which workloads are you thinking about? Maybe we could introduce some
>> higher level description of which CPU(s) at boot time to do "freebsd
>> stuff" on, and then don't start things li
> Yup. I've just done that.
>
> http://people.freebsd.org/~adrian/norse/20140509-swi-pin-1.diff

Thanks, that'll work.

> Which workloads are you thinking about? Maybe we could introduce some
> higher level description of which CPU(s) at boot time to do "freebsd
> stuff" on, and then don't start things li
On Friday, May 09, 2014 3:50:28 pm Peter Grehan wrote:
> > How about I instead do the compromise:
> >
> > * I'll pin all other swis
> > * the default swi isn't pinned by default, but one can flip on a sysctl at
> > boot time to pin it
> >
> > How's that sound?
>
> And also please a sysctl that disables any swi pinning.
On 9 May 2014 12:50, Peter Grehan wrote:
>> How about I instead do the compromise:
>>
>> * I'll pin all other swis
>> * the default swi isn't pinned by default, but one can flip on a sysctl at
>> boot time to pin it
>>
>> How's that sound?
>
>
> And also please a sysctl that disables any swi pinning.
> How about I instead do the compromise:
>
> * I'll pin all other swis
> * the default swi isn't pinned by default, but one can flip on a sysctl at
> boot time to pin it
>
> How's that sound?

And also please a sysctl that disables any swi pinning.

It is sometimes useful to change the default cpuset, for instance
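To make the cpuset point concrete: the default set (set 1, which most
processes inherit) can be reshaped from userland. A minimal userland C
sketch, doing roughly what "cpuset -l 0-2 -s 1" does; a swi hard-bound
to, say, CPU 3 would keep running there regardless, which is exactly
why an opt-out knob is wanted:

/* Shrink cpuset 1 (the default set) to CPUs 0-2. */
#include <sys/param.h>
#include <sys/cpuset.h>

#include <err.h>

int
main(void)
{
	cpuset_t mask;

	CPU_ZERO(&mask);
	CPU_SET(0, &mask);
	CPU_SET(1, &mask);
	CPU_SET(2, &mask);

	/* Set id 1 is the default set inherited by most processes. */
	if (cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_CPUSET,
	    1, sizeof(mask), &mask) == -1)
		err(1, "cpuset_setaffinity");
	return (0);
}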
On 9 May 2014 10:49, John Baldwin wrote:
> On Thursday, May 08, 2014 11:43:39 pm Adrian Chadd wrote:
>> Hi,
>>
>> I'd like to revisit this now.
>>
>> I'd like to commit this stuff as-is and then take some time to revisit
>> the catch-all softclock from cpu0 swi. It's more complicated than it
>> needs to be as it just assumes timeout_cpu == cpuid of cpu 0. So
>> there's no easy way to slide in a new catch-all softclock.
On Thursday, May 08, 2014 11:43:39 pm Adrian Chadd wrote:
> Hi,
>
> I'd like to revisit this now.
>
> I'd like to commit this stuff as-is and then take some time to revisit
> the catch-all softclock from cpu0 swi. It's more complicated than it
> needs to be as it just assumes timeout_cpu == cpuid of cpu 0. So
> there's no easy way to slide in a new catch-all softclock.
Hi,
I'd like to revisit this now.
I'd like to commit this stuff as-is and then take some time to revisit
the catch-all softclock from cpu0 swi. It's more complicated than it
needs to be as it just assumes timeout_cpu == cpuid of cpu 0. So
there's no easy way to slide in a new catch-all softclock.
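For readers who haven't been in kern_timeout.c, the coupling being
described looks roughly like this. This is a paraphrase for
illustration, not the actual code; the "-1 means no CPU preference"
convention is invented:

/*
 * Paraphrase, not the real kern_timeout.c.  CC_CPU() is the per-CPU
 * callout-wheel accessor.
 */
struct callout_cpu;			/* per-CPU callout wheel + its swi */
extern struct callout_cpu cc_cpu[];
#define	CC_CPU(cpu)	(&cc_cpu[(cpu)])

int timeout_cpu;	/* set at boot to PCPU_GET(cpuid), i.e. CPU 0 */

/* A callout armed without an explicit target CPU lands on the catch-all: */
static struct callout_cpu *
callout_home(int cpu)
{
	return (CC_CPU(cpu == -1 ? timeout_cpu : cpu));
}

Because timeout_cpu is just CPU 0's id, the catch-all wheel and CPU 0's
per-CPU wheel are the same object; a separate catch-all would need its
own callout_cpu and swi.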
On 20 February 2014 11:17, John Baldwin wrote:
> (A further variant of this would be to divorce cpu0's swi from the
> catch-all softclock and let the catch-all softclock float, but bind
> all the per-cpu swis)
I like this idea. If something (e.g. per-CPU TCP timers, if it's turned
on) makes a very
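Continuing the paraphrase above, John's variant might look something
like this. intr_event_bind() is the real interface for binding an
interrupt event's thread; the cc_event field holding each swi's
struct intr_event * is invented for illustration:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/interrupt.h>
#include <sys/smp.h>

/* Bind each per-CPU swi to its CPU; leave the catch-all floating. */
static void
softclock_bind_pcpu_swis(void)
{
	struct callout_cpu *cc;
	int cpu;

	CPU_FOREACH(cpu) {
		if (cpu == timeout_cpu)
			continue;	/* the catch-all may float */
		cc = CC_CPU(cpu);
		if (intr_event_bind(cc->cc_event, cpu) != 0)
			printf("softclock: cannot bind swi to CPU %d\n",
			    cpu);
	}
}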
On Wednesday, February 19, 2014 4:02:54 pm John Baldwin wrote:
> On Wednesday, February 19, 2014 3:04:51 pm Adrian Chadd wrote:
> > On 19 February 2014 11:59, Alexander Motin wrote:
> >
> > >> So if we're moving towards supporting (among others) a pcbgroup / RSS
> > >> hash style work load distribution across CPUs to minimise
> > >> per-connection lock contention, we really don't want the scheduler to
> > >> decide it can schedule things on other CPUs.
On Thu, Feb 20, 2014 at 12:09:04AM +0200, Alexander Motin wrote:
> On 19.02.2014 23:44, Slawa Olhovchenkov wrote:
> > On Wed, Feb 19, 2014 at 11:04:49PM +0200, Alexander Motin wrote:
> >
> >> On 19.02.2014 22:04, Adrian Chadd wrote:
> >>> On 19 February 2014 11:59, Alexander Motin wrote:
On 19 February 2014 14:09, Alexander Motin wrote:
> On 19.02.2014 23:44, Slawa Olhovchenkov wrote:
>>
>> On Wed, Feb 19, 2014 at 11:04:49PM +0200, Alexander Motin wrote:
>>
>>> On 19.02.2014 22:04, Adrian Chadd wrote:
>>>> On 19 February 2014 11:59, Alexander Motin wrote:
>>>>>> So if we're moving towards supporting (among others) a pcbgroup / RSS
>>>>>> hash style work load distribution across CPUs to minimise
>>>>>> per-connection lock contention, we really don't want the scheduler to
>>>>>> decide it can schedule things on other CPUs.
On 19.02.2014 23:44, Slawa Olhovchenkov wrote:
> On Wed, Feb 19, 2014 at 11:04:49PM +0200, Alexander Motin wrote:
>> On 19.02.2014 22:04, Adrian Chadd wrote:
>>> On 19 February 2014 11:59, Alexander Motin wrote:
>>>>> So if we're moving towards supporting (among others) a pcbgroup / RSS
>>>>> hash style work load distribution across CPUs to minimise
>>>>> per-connection lock contention, we really don't want the scheduler to
>>>>> decide it can schedule things on other CPUs.
On Wed, Feb 19, 2014 at 11:04:49PM +0200, Alexander Motin wrote:
> On 19.02.2014 22:04, Adrian Chadd wrote:
> > On 19 February 2014 11:59, Alexander Motin wrote:
> >
> >>> So if we're moving towards supporting (among others) a pcbgroup / RSS
> >>> hash style work load distribution across CPUs to minimise
> >>> per-connection lock contention, we really don't want the scheduler to
> >>> decide it can schedule things on other CPUs.
On Wednesday, February 19, 2014 3:04:51 pm Adrian Chadd wrote:
> On 19 February 2014 11:59, Alexander Motin wrote:
>
> >> So if we're moving towards supporting (among others) a pcbgroup / RSS
> >> hash style work load distribution across CPUs to minimise
> >> per-connection lock contention, we really don't want the scheduler to
> >> decide it can schedule things on other CPUs.
On 19.02.2014 22:04, Adrian Chadd wrote:
> On 19 February 2014 11:59, Alexander Motin wrote:
>>> So if we're moving towards supporting (among others) a pcbgroup / RSS
>>> hash style work load distribution across CPUs to minimise
>>> per-connection lock contention, we really don't want the scheduler to
>>> decide it can schedule things on other CPUs.
On 2/19/14, 12:04 PM, Adrian Chadd wrote:
> On 19 February 2014 11:59, Alexander Motin wrote:
>>> So if we're moving towards supporting (among others) a pcbgroup / RSS
>>> hash style work load distribution across CPUs to minimise
>>> per-connection lock contention, we really don't want the scheduler to
>>> decide it can schedule things on other CPUs.
On 19 February 2014 11:59, Alexander Motin wrote:
>> So if we're moving towards supporting (among others) a pcbgroup / RSS
>> hash style work load distribution across CPUs to minimise
>> per-connection lock contention, we really don't want the scheduler to
>> decide it can schedule things on other CPUs.
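Concretely, the RSS-side code can pick a CPU from the flow hash and arm
a connection's timer on that CPU's callout wheel. In the sketch below,
rss_hash2cpuid() and callout_reset_on() are real interfaces (the former
from the RSS work), while struct conn and its members are invented for
illustration:

#include <sys/param.h>
#include <sys/callout.h>
#include <sys/mbuf.h>
#include <netinet/in_rss.h>	/* rss_hash2cpuid() */

struct conn {				/* invented per-connection state */
	struct callout conn_timer;
};

static void conn_timeout(void *arg);	/* invented timer handler */

/*
 * Arm the timer on the CPU the RSS hash steers this flow to.  This
 * only kills the cross-CPU lock traffic if the swi servicing that
 * CPU's callout wheel is pinned there; if the scheduler can migrate
 * it, the contention comes back.
 */
static void
conn_timer_arm(struct conn *cp, struct mbuf *m, int timo)
{
	u_int cpu;

	cpu = rss_hash2cpuid(m->m_pkthdr.flowid, M_HASHTYPE_GET(m));
	callout_reset_on(&cp->conn_timer, timo, conn_timeout, cp, cpu);
}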
On 19.02.2014 21:51, Adrian Chadd wrote:
> On 19 February 2014 11:40, Alexander Motin wrote:
>> Clock interrupt threads, like other interrupt threads, are only softly
>> bound to specific CPUs: the scheduler prefers to run them on the CPUs
>> where they were scheduled. So far that has been enough to balance load
>> while still letting threads migrate if needed. Is that too flexible for
>> some use case?
On 19 February 2014 11:40, Alexander Motin wrote:
> Hi.
>
> Clock interrupt threads, like other interrupt threads, are only softly
> bound to specific CPUs: the scheduler prefers to run them on the CPUs
> where they were scheduled. So far that has been enough to balance load
> while still letting threads migrate if needed. Is that too flexible for
> some use case?
Hi.
Clock interrupt threads, like other interrupt threads, are only softly
bound to specific CPUs: the scheduler prefers to run them on the CPUs
where they were scheduled. So far that has been enough to balance load
while still letting threads migrate if needed. Is that too flexible for
some use case?
--
Alexander Motin
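The soft binding described here is scheduler affinity: ULE prefers a
thread's last CPU but may move it. Hard binding is sched_bind(9), which
forbids migration until sched_unbind(). A minimal sketch of the hard
variant; note that both calls act on curthread and need its thread
lock:

#include <sys/param.h>
#include <sys/proc.h>
#include <sys/sched.h>

/*
 * Hard binding: unlike the scheduler's soft affinity, the thread
 * cannot migrate between sched_bind() and sched_unbind().
 */
static void
run_pinned_on(int cpu)
{
	thread_lock(curthread);
	sched_bind(curthread, cpu);	/* may switch CPUs immediately */
	thread_unlock(curthread);

	/* ... work that must stay on this CPU ... */

	thread_lock(curthread);
	sched_unbind(curthread);
	thread_unlock(curthread);
}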