On Wed, 23 Oct 2024 19:27:03 +0300
Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)"
>
> Hi,
>
> This is an updated version of execmem ROX caches.
>
FYI, I booted a kernel before and after applying these patches with my
change:
https://lore.kernel.org/20241017113105.1edfa...@gandal
On Thu, 17 Oct 2024 14:25:05 +0300
Mike Rapoport wrote:
> With this series the module text is allocated as ROX at the first place, so
> the modifications ftrace does to module text have to either use text poking
> even before complete_formation() or deal with a writable copy like I did
> for relo
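For reference, a sketch of the two options described above, with hypothetical names (use_text_poking and module_writable_alias() are illustrations, not the series' actual API); the real code picks one path or the other depending on where in the module load sequence the patching happens:

static void patch_module_insn(unsigned long ip, const void *new_insn,
			      size_t len, bool use_text_poking)
{
	if (use_text_poking)
		/* option 1: go through the text poking machinery */
		text_poke((void *)ip, new_insn, len);
	else
		/* option 2: patch a writable alias of the ROX mapping */
		memcpy(module_writable_alias(ip), new_insn, len);
}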
On Wed, 16 Oct 2024 17:01:28 -0400
Steven Rostedt wrote:
> If this is only needed for module load, can we at least still use the
> text_poke_early() at boot up?
>
> if (ftrace_poke_late) {
> 	text_poke_queue((void *)ip, new_code, MCOUNT_INSN_SIZE, NULL);
>
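For reference, the hunk Steve is quoting presumably continues along these lines (a sketch from memory of the existing ftrace_modify_code_direct() logic, not necessarily the exact upstream code):

	/* replace the text with the new text */
	if (ftrace_poke_late) {
		text_poke_queue((void *)ip, new_code, MCOUNT_INSN_SIZE, NULL);
	} else {
		text_poke_early((void *)ip, new_code, MCOUNT_INSN_SIZE);
	}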
On Wed, 16 Oct 2024 15:24:22 +0300
Mike Rapoport wrote:
> diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
> index 8da0e66ca22d..b498897b213c 100644
> --- a/arch/x86/kernel/ftrace.c
> +++ b/arch/x86/kernel/ftrace.c
> @@ -118,10 +118,13 @@ ftrace_modify_code_direct(unsigned long ip
On Mon, 9 Sep 2024 17:34:48 +0300
Mike Rapoport wrote:
> > This is insane, just force BUILDTIME_MCOUNT_SORT
>
> The comment in ftrace.c says "... while mcount loc in modules can not be
> sorted at build time"
>
> I don't know enough about objtool, but I'd presume it's because the sorting
> s
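For context, the check in ftrace_process_locs() that the quoted comment sits above looks roughly like this (a sketch from memory, not the exact upstream code): vmlinux's mcount_loc can be sorted at build time when CONFIG_BUILDTIME_MCOUNT_SORT is set, but a module's mcount_loc cannot, so modules always take the runtime sort.

	if (!IS_ENABLED(CONFIG_BUILDTIME_MCOUNT_SORT) || mod)
		sort(start, count, sizeof(*start), ftrace_cmp_ips, NULL);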
On Mon, 26 Aug 2024 09:55:29 +0300
Mike Rapoport wrote:
> From: Song Liu
>
> ftrace_process_locs sorts module mcount, which is inside RO memory. Add a
> ftrace_swap_func so that archs can use RO-memory-poke function to do the
> sorting.
Can you add the above as a comment above the ftrace_swap_
good starting point.
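For reference, a sketch of the kind of default swap callback being discussed, which an arch whose module text is read-only could override with a text-poking variant (this mirrors the idea in the quoted description, not necessarily the exact patch):

void __weak ftrace_swap_func(void *a, void *b, int size)
{
	unsigned long t;

	WARN_ON_ONCE(size != sizeof(t));

	t = *(unsigned long *)a;
	*(unsigned long *)a = *(unsigned long *)b;
	*(unsigned long *)b = t;
}

ftrace_process_locs() would then hand it to sort() alongside the existing compare callback:

	sort(start, count, sizeof(*start), ftrace_cmp_ips, ftrace_swap_func);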
For the tracing portions:
Reviewed-by: Steven Rostedt (Google)
-- Steve
On Tue, 11 Oct 2022 17:40:26 +0100
Valentin Schneider wrote:
> > You could keep the tracepoint as a mask, and then make it pretty, like
> > cpus=3-5,8
> > in user-space. For example with a trace-cmd/perf loadable plugin,
> > libtracefs helper.
> >
>
> That's a nice idea, the one downside I s
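For what it's worth, the pretty-printing itself is straightforward; a standalone sketch (plain C, not an actual trace-cmd plugin or libtracefs helper; assumes the mask is an array of 64-bit words with bit 0 = CPU 0):

#include <stdio.h>

static void print_cpu_ranges(const unsigned long long *mask, int nr_cpus)
{
	const char *sep = "";
	int cpu = 0;

	printf("cpus=");
	while (cpu < nr_cpus) {
		int start;

		/* skip over cleared bits */
		while (cpu < nr_cpus && !(mask[cpu / 64] & (1ULL << (cpu % 64))))
			cpu++;
		if (cpu >= nr_cpus)
			break;
		start = cpu;
		/* extend over the run of set bits */
		while (cpu < nr_cpus && (mask[cpu / 64] & (1ULL << (cpu % 64))))
			cpu++;
		if (cpu - 1 == start)
			printf("%s%d", sep, start);
		else
			printf("%s%d-%d", sep, start, cpu - 1);
		sep = ",";
	}
	printf("\n");
}

int main(void)
{
	unsigned long long mask[1] = { 0x138ULL };	/* CPUs 3, 4, 5 and 8 set */

	print_cpu_ranges(mask, 64);			/* prints "cpus=3-5,8" */
	return 0;
}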
On Tue, 11 Oct 2022 17:17:04 +0100
Valentin Schneider wrote:
> tep_get_field_val() just yields an unsigned long long of value 0x200018,
> which AFAICT is just the [length, offset] thing associated with dynamic
> arrays. Not really usable, and I don't see anything exported in the lib to
> extract
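For reference, decoding that value is just bit twiddling once you assume the usual dynamic-array encoding of (length << 16) | offset relative to the record data; a minimal sketch:

#include <stdio.h>

int main(void)
{
	/* value as returned by tep_get_field_val() in the example above */
	unsigned long long field_val = 0x200018ULL;
	unsigned int offset = field_val & 0xffff;	/* 0x18: start of the data within the record */
	unsigned int len = field_val >> 16;		/* 0x20: its length in bytes */

	printf("offset=%#x len=%#x\n", offset, len);
	return 0;
}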
On Fri, 7 Oct 2022 17:01:33 -0300
Marcelo Tosatti wrote:
> > As for the targeted CPUs, the existing tracepoint does export them, albeit
> > in cpumask form, which is quite inconvenient from a tooling perspective. For
> > instance, as far as I'm aware, it's not possible to do event filtering
On Fri, 7 Oct 2022 16:45:32 +0100
Valentin Schneider wrote:
> }
>
> +static inline void irq_work_raise(void)
> +{
> +	if (arch_irq_work_has_interrupt())
> +		trace_ipi_send_cpu(_RET_IP_, smp_processor_id());
To save on the branch, let's make the above:
if (trace_ipi_se
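Reading past the truncation, the suggestion is presumably the usual pattern of gating on the tracepoint's static key first; a sketch (trace_ipi_send_cpu_enabled() is the helper the tracepoint machinery generates, the rest mirrors the quoted hunk):

static inline void irq_work_raise(void)
{
	/*
	 * Test the tracepoint's static key first so that
	 * arch_irq_work_has_interrupt() is only evaluated when the event
	 * is actually enabled.
	 */
	if (trace_ipi_send_cpu_enabled() && arch_irq_work_has_interrupt())
		trace_ipi_send_cpu(_RET_IP_, smp_processor_id());

	/* remainder of the function as in the patch (not shown in the excerpt) */
}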
Nit, the subject should have "tracing:" and not "ftrace:", as the former
encompasses the tracing infrastructure and the latter is for the function
hook part of that.
On Mon, 19 Sep 2022 12:00:12 +0200
Peter Zijlstra wrote:
> CONFIG_GENERIC_ENTRY disallows any and all tracing when RCU isn't
> ena
On Thu, 9 Jun 2022 15:02:20 +0200
Petr Mladek wrote:
> > I'm somewhat curious whether we can actually remove that trace event.
>
> Good question.
>
> Well, I think that it might be useful. It allows one to see trace and
> printk messages together.
Yes, people still use it. I was just asked about
On Thu, 24 Sep 2020 19:55:10 +0200
Thomas Gleixner wrote:
> On Thu, Sep 24 2020 at 08:32, Steven Rostedt wrote:
> > On Thu, 24 Sep 2020 08:57:52 +0200
> > Thomas Gleixner wrote:
> >
> >> > Now as for migration disabled nesting, at least now we would have
>
On Thu, 24 Sep 2020 14:42:41 +0200
Peter Zijlstra wrote:
> On Thu, Sep 24, 2020 at 08:32:41AM -0400, Steven Rostedt wrote:
> > Anyway, instead of blocking. What about having a counter of number of
> > migrate disabled tasks per cpu, and when taking a migrate_disable(), a
On Thu, 24 Sep 2020 08:57:52 +0200
Thomas Gleixner wrote:
> > Now as for migration disabled nesting, at least now we would have
> > groupings of this, and perhaps the theorists can handle that. I mean,
> > how is this much different than having a bunch of tasks blocked on a
> > mutex with the own
On Wed, 23 Sep 2020 22:55:54 +0200
Thomas Gleixner wrote:
> > Perhaps make migrate_disable() an anonymous local_lock()?
> >
> > This should lower the SHC in theory, if you can't have stacked migrate
> > disables on the same CPU.
>
> I'm pretty sure this ends up in locking hell pretty fast and
On Wed, 23 Sep 2020 10:40:32 +0200
pet...@infradead.org wrote:
> However, with migrate_disable() we can have each task preempted in a
> migrate_disable() region, worse we can stack them all on the _same_ CPU
> (super ridiculous odds, sure). And then we end up only able to run one
> task, with the
On Thu, 2 Apr 2020 01:17:01 +
Vineet Gupta wrote:
> +CC Claudiu
>
> On 3/27/20 10:10 AM, Steven Rostedt wrote:
> > On Fri, 27 Mar 2020 18:53:55 +0300
> > Eugeniy Paltsev wrote:
>
> Maybe add a comment that gcc does the heavy lifting: I have the following in
On Fri, 27 Mar 2020 18:53:55 +0300
Eugeniy Paltsev wrote:
> +
> +noinline void _mcount(unsigned long parent_ip)
> +{
> +	unsigned long ip = (unsigned long)__builtin_return_address(0);
> +
> +	if (unlikely(ftrace_trace_function != ftrace_stub))
> +		ftrace_trace_function(ip - MC
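A sketch of how a minimal C _mcount of this shape typically completes, with the "gcc does the heavy lifting" comment Vineet asks for above folded in (assumes the four-argument ftrace_func_t of that era; not necessarily the exact code that was merged):

/*
 * The compiler does the heavy lifting here: it emits a call to _mcount()
 * in every instrumented function's prologue and passes in the caller's
 * return address, so this stub only has to adjust for its own call site
 * and forward to the registered tracer.
 */
noinline void _mcount(unsigned long parent_ip)
{
	unsigned long ip = (unsigned long)__builtin_return_address(0);

	if (unlikely(ftrace_trace_function != ftrace_stub))
		ftrace_trace_function(ip - MCOUNT_INSN_SIZE, parent_ip,
				      NULL, NULL);
}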
On Wed, 13 Nov 2019 09:47:22 +0100
Petr Mladek wrote:
> At the moment, I am in favor of this patchset. It is huge and
> needed a lot of manual work. But the result is straightforward and
> easy to understand.
I'm in favor of this patchset too. If there are other areas that need to
adjust the curre
On Wed, 6 Nov 2019 23:25:13 +
Russell King - ARM Linux admin wrote:
> On Wed, Nov 06, 2019 at 09:34:40PM +0100, Peter Zijlstra wrote:
> > I suppose I'm surprised there are backtraces that are not important.
> > Either badness happened and it needs printing, or the user asked for it
> > and it
On Wed, 6 Nov 2019 21:34:40 +0100
Peter Zijlstra wrote:
> I suppose I'm surprised there are backtraces that are not important.
> Either badness happened and it needs printing, or the user asked for it
> and it needs printing.
Unfortunately that is the case. As my tests will fail if a backtrace i
On Tue, 12 Nov 2019 11:17:47 +0900
Sergey Senozhatsky wrote:
> void show_stack(struct task_struct *task, unsigned long *sp, int log_level)
> {
> 	printk_emergency_enter(log_level);
> 	__show_stack(task, sp);
> 	printk_emergency_exit();
> }
> // - - - - - - - - - - - - - - - - - - -
On Tue, 12 Nov 2019 13:44:47 +0900
Sergey Senozhatsky wrote:
> > > I do recall that we talked about per-CPU printk state bit which would
> > > start/end "just print it" section. We probably can extend it to "just
> > > log_store" type of functionality. Doesn't look like a very bad idea.
> >
>
On Thu, 4 Apr 2019 21:17:58 +0300
"Dmitry V. Levin" wrote:
> There are several places listed below where I'd prefer to see more readable
> equivalents, but feel free to leave it to respective arch maintainers.
I was going to do similar changes, but figured I'd do just that (let
the arch maintain
From: "Steven Rostedt (VMware)"
After removing the start and count arguments of syscall_get_arguments() it
seems reasonable to remove them from syscall_set_arguments(). Note, as of
today, there are no users of syscall_set_arguments(). But we are told that
there will be soon. But f
From: "Steven Rostedt (Red Hat)"
At Linux Plumbers, Andy Lutomirski approached me and pointed out that the
function call syscall_get_arguments() implemented in x86 was horribly
written and not optimized for the standard case of passing in 0 and 6 for
the starting index and the number
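A sketch of the interface change both descriptions refer to (prototypes from memory; the 0 and 6 are the "standard case" arguments mentioned above):

/* Old form: callers almost always passed i = 0 and n = 6. */
void syscall_get_arguments(struct task_struct *task, struct pt_regs *regs,
			   unsigned int i, unsigned int n,
			   unsigned long *args);

/* Reworked form: always fetches all six system call arguments. */
void syscall_get_arguments(struct task_struct *task, struct pt_regs *regs,
			   unsigned long *args);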
From: "Steven Rostedt (Red Hat)"
At Linux Plumbers, Andy Lutomirski approached me and pointed out that the
function call syscall_get_arguments() implemented in x86 was horribly
written and not optimized for the standard case of passing in 0 and 6 for
the starting index and the number
From: "Steven Rostedt (VMware)"
After removing the start and count arguments of syscall_get_arguments() it
seems reasonable to remove them from syscall_set_arguments(). Note, as of
today, there are no users of syscall_set_arguments(). But we are told that
there will be soon. But f
On Fri, 25 Aug 2017 15:20:12 +
Eugeniy Paltsev wrote:
> On Fri, 2017-08-25 at 11:12 -0400, Steven Rostedt wrote:
> > On Fri, 25 Aug 2017 18:00:26 +0300
> > Eugeniy Paltsev wrote:
> >
> > > Move extern declarations of "of_root", "of_chosen"
On Fri, 25 Aug 2017 18:00:26 +0300
Eugeniy Paltsev wrote:
> Move extern declarations of "of_root", "of_chosen", "of_aliases",
> "of_stdout" pointers inside "CONFIG_OF" ifdef to be able to get rid
> of "CONFIG_OF" ifdef in thei
On Fri, 25 Aug 2017 16:14:51 +0300
Eugeniy Paltsev wrote:
> In the current implementation, we take the first console that
> registers if we didn't select one.
>
> But if we specify the console via the "stdout-path" property in the
> device tree, we don't want the first console that registers here to be
> selected.
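A sketch of the selection rule being described, with hypothetical helper names rather than the actual printk/OF code: only fall back to "first console to register" when nothing was chosen explicitly, neither via console= on the command line nor via the device tree's stdout-path.

static bool may_autoselect_console(bool console_on_cmdline,
				   bool dt_stdout_path_present)
{
	/* auto-select only when nothing named a preferred console */
	return !console_on_cmdline && !dt_stdout_path_present;
}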