From: Vincent Whitchurch <[email protected]>

This comment describes the behaviour before commit 2a820bf74918
("tracing: Use percpu stack trace buffer more intelligently").  Since
that commit, interrupts and NMIs do use the per-cpu stacks so the
comment is no longer correct.  Remove it.

(Note that the FTRACE_STACK_SIZE mentioned in the comment has never
existed, it probably should have said FTRACE_STACK_ENTRIES.)

Link: https://lkml.kernel.org/r/[email protected]

Signed-off-by: Vincent Whitchurch <[email protected]>
Signed-off-by: Steven Rostedt (VMware) <[email protected]>
---
 kernel/trace/trace.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 4aab712f9567..dbcacdd56b02 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2930,12 +2930,6 @@ static void __ftrace_trace_stack(struct trace_buffer *buffer,
                skip++;
 #endif
 
-       /*
-        * Since events can happen in NMIs there's no safe way to
-        * use the per cpu ftrace_stacks. We reserve it and if an interrupt
-        * or NMI comes in, it will just have to use the default
-        * FTRACE_STACK_SIZE.
-        */
        preempt_disable_notrace();
 
        stackidx = __this_cpu_inc_return(ftrace_stack_reserve) - 1;
-- 
2.26.2