> On Jun 23, 2020, at 8:22 AM, Daniel Borkmann <dan...@iogearbox.net> wrote:
> 
> On 6/23/20 9:08 AM, Song Liu wrote:
>> This helper can be used with bpf_iter__task to dump all /proc/*/stack to
>> a seq_file.
>> Signed-off-by: Song Liu <songliubrav...@fb.com>
>> ---
>>  include/uapi/linux/bpf.h       | 10 +++++++++-
>>  kernel/trace/bpf_trace.c       | 21 +++++++++++++++++++++
>>  scripts/bpf_helpers_doc.py     |  2 ++
>>  tools/include/uapi/linux/bpf.h | 10 +++++++++-
>>  4 files changed, 41 insertions(+), 2 deletions(-)
>> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
>> index 19684813faaed..a30416b822fe3 100644
>> --- a/include/uapi/linux/bpf.h
>> +++ b/include/uapi/linux/bpf.h
>> @@ -3252,6 +3252,13 @@ union bpf_attr {
>>   *          case of **BPF_CSUM_LEVEL_QUERY**, the current skb->csum_level
>>   *          is returned or the error code -EACCES in case the skb is not
>>   *          subject to CHECKSUM_UNNECESSARY.
>> + *
>> + * int bpf_get_task_stack_trace(struct task_struct *task, void *entries, u32 size)
>> + *  Description
>> + *          Save a task stack trace into array *entries*. This is a wrapper
>> + *          over stack_trace_save_tsk().
>> + *  Return
>> + *          Number of trace entries stored.
>>   */
>>  #define __BPF_FUNC_MAPPER(FN)               \
>>      FN(unspec),                     \
>> @@ -3389,7 +3396,8 @@ union bpf_attr {
>>      FN(ringbuf_submit),             \
>>      FN(ringbuf_discard),            \
>>      FN(ringbuf_query),              \
>> -    FN(csum_level),
>> +    FN(csum_level),                 \
>> +    FN(get_task_stack_trace),
>>  /* integer value in 'imm' field of BPF_CALL instruction selects which helper
>>   * function eBPF program intends to call
>> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
>> index e729c9e587a07..2c13bcb5c2bce 100644
>> --- a/kernel/trace/bpf_trace.c
>> +++ b/kernel/trace/bpf_trace.c
>> @@ -1488,6 +1488,23 @@ static const struct bpf_func_proto bpf_get_stack_proto_raw_tp = {
>>      .arg4_type      = ARG_ANYTHING,
>>  };
>> +BPF_CALL_3(bpf_get_task_stack_trace, struct task_struct *, task,
>> +       void *, entries, u32, size)
>> +{
>> +    return stack_trace_save_tsk(task, (unsigned long *)entries, size, 0);
> 
> nit: cast not needed.

Will fix. 

> 
>> +}
>> +
>> +static int bpf_get_task_stack_trace_btf_ids[5];
>> +static const struct bpf_func_proto bpf_get_task_stack_trace_proto = {
>> +    .func           = bpf_get_task_stack_trace,
>> +    .gpl_only       = true,
>> +    .ret_type       = RET_INTEGER,
>> +    .arg1_type      = ARG_PTR_TO_BTF_ID,
>> +    .arg2_type      = ARG_PTR_TO_MEM,
>> +    .arg3_type      = ARG_CONST_SIZE_OR_ZERO,
> 
> Is there a use-case to pass in entries == NULL + size == 0?

Not really. Will fix. 
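Presumably the fix is switching arg3 from ARG_CONST_SIZE_OR_ZERO to ARG_CONST_SIZE, which makes the verifier reject a zero size (and therefore a NULL *entries* with size == 0). A sketch of how the revised proto might look (assumption only, not the actual follow-up patch):

```c
static const struct bpf_func_proto bpf_get_task_stack_trace_proto = {
	.func		= bpf_get_task_stack_trace,
	.gpl_only	= true,
	.ret_type	= RET_INTEGER,
	.arg1_type	= ARG_PTR_TO_BTF_ID,
	.arg2_type	= ARG_PTR_TO_MEM,
	.arg3_type	= ARG_CONST_SIZE,	/* size must be > 0 */
	.btf_id		= bpf_get_task_stack_trace_btf_ids,
};
```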


> 
>> +    .btf_id         = bpf_get_task_stack_trace_btf_ids,
>> +};
>> +
