On Mon, Nov 23, 2020 at 10:39:34PM +, Alex Belits wrote:
>
> On Mon, 2020-11-23 at 23:29 +0100, Frederic Weisbecker wrote:
> > On Mon, Nov 23, 2020 at
On Mon, Nov 23, 2020 at 05:58:42PM +, Alex Belits wrote:
> From: Yuri Norov
>
> Make sure that kick_all_cpus_sync() does not call CPUs that are running
> isolated tasks.
>
> Signed-off-by: Yuri Norov
> [abel...@marvell.com: use safe task_isolation_cpumask() implementation]
> Signed-off-by:
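A minimal sketch of the idea described above, not the posted patch: build the IPI mask from the online CPUs minus an assumed cpumask of CPUs currently running isolated tasks (called task_isolation_mask here purely for illustration), then kick only those.

/* Hypothetical mask of CPUs in isolated mode, maintained by the isolation code. */
static cpumask_t task_isolation_mask;

/* kernel/smp.c already has an equivalent empty callback for the sync IPI. */
static void do_nothing(void *unused)
{
}

void kick_all_cpus_sync(void)
{
	struct cpumask mask;

	smp_mb();			/* make prior stores visible before the IPI */
	preempt_disable();
	/* Sketch: everyone online, except CPUs running isolated tasks. */
	cpumask_andnot(&mask, cpu_online_mask, &task_isolation_mask);
	smp_call_function_many(&mask, do_nothing, NULL, 1);
	preempt_enable();
}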
Hi Alex,
On Mon, Nov 23, 2020 at 05:58:22PM +, Alex Belits wrote:
> From: Yuri Norov
>
> For nohz_full CPUs the desirable behavior is to receive interrupts
> generated by tick_nohz_full_kick_cpu(). But for hard isolation it's
> obviously not desirable because it breaks isolation.
>
> This p
> > requires isolation for maintaining lower latency for the listed CPUs.
> >
> > Suggested-by: Frederic Weisbecker
Ah and yes there is this tag :-p
So that's my bad, I really thought this thing was about managed IRQ.
The problem is that I can't find a single documenta
On Mon, Oct 19, 2020 at 01:11:37PM +0200, Peter Zijlstra wrote:
> > > And what are the (desired) semantics vs hotplug? Using a cpumask without
> > > excluding hotplug is racy.
> >
> > The housekeeping_mask should still remain constant, shouldn't it?
> > In any case, I can double check this.
>
> The goal
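For reference, the usual way to keep a cpumask walk consistent against CPU hotplug is to hold the hotplug read lock around it; a minimal sketch (the helper name is made up, the locking calls are the stock kernel API):

/* Count online housekeeping CPUs without racing against hotplug. */
static unsigned int count_online_housekeeping(const struct cpumask *hk_mask)
{
	unsigned int cpu, n = 0;

	cpus_read_lock();		/* CPUs can't come or go while this is held */
	for_each_online_cpu(cpu)
		if (cpumask_test_cpu(cpu, hk_mask))
			n++;
	cpus_read_unlock();

	return n;
}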
On Sun, Oct 04, 2020 at 03:22:09PM +, Alex Belits wrote:
>
> On Thu, 2020-10-01 at 16:44 +0200, Frederic Weisbecker wrote:
> > > @@ -268,7 +269,8 @@ static void tick_nohz_full_kick(void)
> > > */
> > > void tick_nohz_full_kick_cpu(int cpu)
> >
On Mon, Oct 05, 2020 at 02:52:49PM -0400, Nitesh Narayan Lal wrote:
>
> On 10/4/20 7:14 PM, Frederic Weisbecker wrote:
> > On Sun, Oct 04, 2020 at 02:44:39PM +, Alex Belits wrote:
> >> On Thu, 2020-10-01 at 15:56 +0200, Frederic Weisbecker wrote:
On Sun, Oct 04, 2020 at 02:44:39PM +, Alex Belits wrote:
> On Thu, 2020-10-01 at 15:56 +0200, Frederic Weisbecker wrote:
> > On Wed, Jul 22, 2020 at 02
| 2 +-
> 4 files changed, 30 insertions(+), 2 deletions(-)
Acked-by: Frederic Weisbecker
Peter, if you're ok with the set, I guess this should go through
the scheduler tree?
Thanks.
On Wed, Jul 22, 2020 at 02:58:24PM +, Alex Belits wrote:
> From: Yuri Norov
>
> If a CPU runs an isolated task, there's no backlog on it, and
> so we don't need to flush it.
What guarantees that we have no backlog on it?
> Currently flush_all_backlogs()
> enqueues corresponding work on all C
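One way to make that assumption explicit rather than implied would be to test the per-CPU softnet queues before skipping a CPU; a rough sketch (not the posted patch), with the caveat that peeking at a remote CPU's queues without its lock is only a heuristic:

/* Sketch: true if the CPU's softnet backlog queues look empty. */
static bool backlog_empty(int cpu)
{
	struct softnet_data *sd = &per_cpu(softnet_data, cpu);

	return skb_queue_empty(&sd->input_pkt_queue) &&
	       skb_queue_empty(&sd->process_queue);
}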
On Wed, Jul 22, 2020 at 02:57:33PM +, Alex Belits wrote:
> From: Yuri Norov
>
> For nohz_full CPUs the desirable behavior is to receive interrupts
> generated by tick_nohz_full_kick_cpu(). But for hard isolation it's
> obviously not desirable because it breaks isolation.
>
> This patch adds
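The shape of the change is presumably an early return before the IPI is queued; a hedged sketch, with task_isolation_on_cpu() standing in for whatever predicate the series actually provides:

void tick_nohz_full_kick_cpu(int cpu)
{
	if (!tick_nohz_full_cpu(cpu))
		return;
	/* Sketch: don't queue the kick on a CPU running an isolated task. */
	if (task_isolation_on_cpu(cpu))
		return;
	irq_work_queue_on(&per_cpu(nohz_full_kick_work, cpu), cpu);
}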
On Wed, Jul 22, 2020 at 02:49:49PM +, Alex Belits wrote:
> +/**
> + * task_isolation_kernel_enter() - clear low-level task isolation flag
> + *
> + * This should be called immediately after entering kernel.
> + */
> +static inline void task_isolation_kernel_enter(void)
> +{
> + unsigned lon
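The snippet is cut off; as a rough illustration only (not Alex's code), a kernel-entry hook of this kind typically clears a per-CPU "isolated" flag so that subsequent kicks and IPIs treat the CPU as ordinary:

static DEFINE_PER_CPU(unsigned long, ll_isol_flags);	/* hypothetical flag word */

static inline void task_isolation_kernel_enter(void)
{
	unsigned long flags;

	local_irq_save(flags);			/* keep the update local and atomic */
	this_cpu_write(ll_isol_flags, 0);	/* no longer in isolated userspace */
	local_irq_restore(flags);
}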
On Wed, Jul 22, 2020 at 02:49:49PM +, Alex Belits wrote:
> +/*
> + * Description of the last two tasks that ran isolated on a given CPU.
> + * This is intended only for messages about isolation breaking. We
> + * don't want any references to the actual task while accessing this from
> + * CPU that
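A sketch of how such a record avoids holding task_struct references: copy the pid and comm by value when the task enters isolation. All names below are made up for illustration:

struct isol_task_desc {
	pid_t	pid;
	char	comm[TASK_COMM_LEN];
};

struct isol_task_log {
	struct isol_task_desc task[2];		/* [0] = latest, [1] = previous */
};

static DEFINE_PER_CPU(struct isol_task_log, isol_task_log);

/* Called on the isolated CPU itself, so plain per-CPU access is enough. */
static void record_curr_isolated(void)
{
	struct isol_task_log *log = this_cpu_ptr(&isol_task_log);

	log->task[1] = log->task[0];
	log->task[0].pid = task_pid_nr(current);
	strscpy(log->task[0].comm, current->comm, sizeof(log->task[0].comm));
}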
On Wed, Sep 23, 2020 at 02:11:23PM -0400, Nitesh Narayan Lal wrote:
> Introduce a new API hk_num_online_cpus(), that can be used to
> retrieve the number of online housekeeping CPUs that are meant to handle
> managed IRQ jobs.
>
> This API is introduced for the drivers that were previously relying
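From that description, the helper presumably intersects the managed-IRQ housekeeping mask with the online mask; a hedged approximation (not necessarily the posted implementation):

/* Sketch: number of online CPUs allowed to handle managed IRQs. */
static unsigned int hk_num_online_cpus_sketch(void)
{
	cpumask_var_t tmp;
	unsigned int n;

	if (!zalloc_cpumask_var(&tmp, GFP_KERNEL))
		return num_online_cpus();	/* fall back to all online CPUs */

	cpumask_and(tmp, housekeeping_cpumask(HK_FLAG_MANAGED_IRQ), cpu_online_mask);
	n = cpumask_weight(tmp);
	free_cpumask_var(tmp);

	return n;
}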
On Thu, Sep 24, 2020 at 10:40:29AM +0200, pet...@infradead.org wrote:
> On Wed, Sep 23, 2020 at 02:11:23PM -0400, Nitesh Narayan Lal wrote:
> > Introduce a new API hk_num_online_cpus(), that can be used to
> > retrieve the number of online housekeeping CPUs that are meant to handle
> > managed IRQ
On Tue, Sep 22, 2020 at 09:54:58AM -0400, Nitesh Narayan Lal wrote:
> >> If min_vecs > num_housekeeping, for example:
> >>
> >> /* PCI MSI/MSIx support */
> >> #define XGBE_MSI_BASE_COUNT 4
> >> #define XGBE_MSI_MIN_COUNT (XGBE_MSI_BASE_COUNT + 1)
> >>
> >> Then the protection fails.
> > R
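The failure mode under discussion: the driver asks pci_alloc_irq_vectors() for at least XGBE_MSI_MIN_COUNT vectors, and if the housekeeping-based limit is applied blindly as a maximum below that minimum, the allocation fails. A hedged one-liner for the clamp the thread converges on (not a quote of the patch):

/* Cap max_vecs by the housekeeping CPU count, but never below min_vecs. */
static unsigned int clamp_nvecs(unsigned int min_vecs, unsigned int max_vecs,
				unsigned int num_housekeeping)
{
	return max(min_vecs, min(max_vecs, num_housekeeping));
}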
On Tue, Sep 22, 2020 at 09:50:55AM -0400, Nitesh Narayan Lal wrote:
> On 9/22/20 6:08 AM, Frederic Weisbecker wrote:
> TBH I don't have a very strong case here at the moment.
> But still, IMHO, this will force the user to have both managed irqs and
> nohz_full in their environmen
On Tue, Sep 22, 2020 at 09:34:02AM -0400, Nitesh Narayan Lal wrote:
> On 9/22/20 5:54 AM, Frederic Weisbecker wrote:
> > But I don't also want to push toward a complicated solution to handle CPU
> > hotplug
> > if there is no actual problem to solve there.
>
>
On Mon, Sep 21, 2020 at 11:16:51PM -0400, Nitesh Narayan Lal wrote:
>
> On 9/21/20 7:40 PM, Frederic Weisbecker wrote:
> > On Wed, Sep 09, 2020 at 11:08:16AM -0400, Nitesh Narayan Lal wrote:
> >> +/*
> >> + * num_housekeeping_cpus() - Read the number of housekeepin
On Mon, Sep 21, 2020 at 11:08:20PM -0400, Nitesh Narayan Lal wrote:
>
> On 9/21/20 6:58 PM, Frederic Weisbecker wrote:
> > On Thu, Sep 17, 2020 at 11:23:59AM -0700, Jesse Brandeburg wrote:
> >> Nitesh Narayan Lal wrote:
> >>
> >>> In a realtime enviro
On Wed, Sep 09, 2020 at 11:08:16AM -0400, Nitesh Narayan Lal wrote:
> +/*
> + * num_housekeeping_cpus() - Read the number of housekeeping CPUs.
> + *
> + * This function returns the number of available housekeeping CPUs
> + * based on __num_housekeeping_cpus which is of type atomic_t
> + * and is i
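The quoted comment says the value lives in an atomic counter, so the reader itself is presumably trivial; a sketch consistent with that comment (the counter would be maintained elsewhere, e.g. from hotplug callbacks):

static atomic_t __num_housekeeping_cpus;

static inline unsigned int num_housekeeping_cpus(void)
{
	return atomic_read(&__num_housekeeping_cpus);
}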
On Thu, Sep 17, 2020 at 11:23:59AM -0700, Jesse Brandeburg wrote:
> Nitesh Narayan Lal wrote:
>
> > In a realtime environment, it is essential to isolate unwanted IRQs from
> > isolated CPUs to prevent latency overheads. Creating MSIX vectors only
> > based on the online CPUs could lead to a poten
On Fri, Apr 21, 2017 at 10:52:29AM -0700, Linus Torvalds wrote:
> On Thu, Apr 20, 2017 at 7:30 AM, Mel Gorman
> wrote:
> >> The end result was a revert, and this is waiting in AKPMs quilt queue:
> >>
> >> http://ozlabs.org/~akpm/mmots/broken-out/revert-mm-page_alloc-only-use-per-cpu-allocator-f
On Thu, Apr 20, 2017 at 11:00:42AM +0200, Jesper Dangaard Brouer wrote:
> Hi Linus,
>
> Just wanted to give a heads-up on two regressions in 4.11-rc series.
>
> (1) page allocator optimization revert
>
> Mel Gorman and I have been playing with optimizing the page allocator,
> but Tariq spotted t
On Wed, Mar 29, 2017 at 11:30:30AM +0200, Jesper Dangaard Brouer wrote:
> On Tue, 28 Mar 2017 23:11:22 +0200
> Frederic Weisbecker wrote:
>
> > On Tue, Mar 28, 2017 at 05:23:03PM +0200, Jesper Dangaard Brouer wrote:
> > > On Tue, 28 Mar 2017 16:34:36 +0200
> >
On Tue, Mar 28, 2017 at 05:23:03PM +0200, Jesper Dangaard Brouer wrote:
> On Tue, 28 Mar 2017 16:34:36 +0200
> Frederic Weisbecker wrote:
>
> > On Tue, Mar 28, 2017 at 10:14:03AM +0200, Jesper Dangaard Brouer wrote:
> > >
> > > (While evaluating some changes to
nt kcpustat directly on irqtime
> account")
>
> a499a5a14dbd1d0315a96fc62a8798059325e9e6 is the first bad commit
> commit a499a5a14dbd1d0315a96fc62a8798059325e9e6
> Author: Frederic Weisbecker
> Date: Tue Jan 31 04:09:32 2017 +0100
>
> sched/cpu
On Tue, Mar 28, 2017 at 02:26:42PM +0200, Peter Zijlstra wrote:
> On Tue, Mar 28, 2017 at 06:34:52PM +0800, Wanpeng Li wrote:
> >
> > sched_clock_cpu(cpu) should be converted from cputime to ns.
>
> Uhm, no. sched_clock_cpu() returns u64 in ns.
Yes, and most of the cputime_t have been converted
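In other words, the value can be consumed directly as nanoseconds, with no cputime_t conversion; for illustration:

static u64 local_clock_ns_example(void)
{
	u64 ns;

	preempt_disable();
	ns = sched_clock_cpu(smp_processor_id());	/* already u64 nanoseconds */
	preempt_enable();

	return ns;
}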
On Mon, Jul 20, 2015 at 05:22:12PM -0400, Chris Metcalf wrote:
> On 07/11/2015 10:30 AM, Frederic Weisbecker wrote:
> >On Fri, Jul 10, 2015 at 03:05:02PM -0400, Chris Metcalf wrote:
> >>The tilegx chips typically don't do cpu offlining anyway, since
> >>we
On Fri, Jul 10, 2015 at 03:37:25PM -0400, Chris Metcalf wrote:
> Normally the tilegx networking shim sends irqs to all the cores
> to distribute the load of processing incoming-packet interrupts,
> so that you can get to multiple Gb's of traffic inbound.
>
> However, in nohz_full mode we don't wan
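A hedged sketch of the pattern being described: spread per-queue IRQ affinities over housekeeping CPUs only. housekeeping_cpumask() is today's API (the 2015 series predates it), and the driver-side names here are made up:

/* Bind one IRQ per RX queue, round-robin over housekeeping CPUs. */
static void bind_queue_irqs(unsigned int nr_queues, const int *irqs)
{
	const struct cpumask *hk = housekeeping_cpumask(HK_FLAG_MISC);
	unsigned int q, cpu;

	if (cpumask_empty(hk))
		return;

	cpu = cpumask_first(hk);
	for (q = 0; q < nr_queues; q++) {
		irq_set_affinity_hint(irqs[q], cpumask_of(cpu));
		cpu = cpumask_next(cpu, hk);
		if (cpu >= nr_cpu_ids)
			cpu = cpumask_first(hk);
	}
}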
On Fri, Jul 10, 2015 at 03:05:02PM -0400, Chris Metcalf wrote:
> On 07/10/2015 02:24 PM, Frederic Weisbecker wrote:
> >Indeed we are doing more and more references on housekeeping_mask, so
> >we should probably think about an off-case.
> >
> >Now the nohz-ful
On Fri, Jul 10, 2015 at 01:33:44PM -0400, Chris Metcalf wrote:
> In nohz_full mode, by default distribute networking shim
> interrupts across the housekeeping cores, not all the cores.
I can't really tell, I have no idea what this driver does. It seems
to be about networking CPUs but I have no ide