On Tue, Feb 26, 2019 at 04:29:45PM +0100, Daniel Borkmann wrote:
> On 02/26/2019 05:27 AM, Alexei Starovoitov wrote:
> > On 2/25/19 2:36 AM, Daniel Borkmann wrote:
> >>
> >> Not through the stack, but was more thinking something like low-overhead
> >> kprobes-style extension for a BPF prog where such sequence would be added
> >> 'inline' at beginning / exit of BPF prog invocation with normal ctx access
> >> and helpers as the nati [...]
On 02/23/2019 03:38 AM, Alexei Starovoitov wrote:
> On Sat, Feb 23, 2019 at 02:06:56AM +0100, Daniel Borkmann wrote:
>>
>> In general, having some stats and timing info would be useful, but I
>> guess people might want to customize it in future even more specifically
>> beyond number of runs + time it takes. One thing that would be super
>> useful is to [...]
On Sat, Feb 23, 2019 at 12:34:38AM +0000, Roman Gushchin wrote:
> On Fri, Feb 22, 2019 at 03:36:41PM -0800, Alexei Starovoitov wrote:
> > JITed BPF programs are indistinguishable from kernel functions, but unlike
> > kernel code BPF code can be changed often.
> > Typical approach of "perf record" + "perf report" profiling [...]
On Fri, Feb 22, 2019 at 04:05:55PM -0800, Eric Dumazet wrote:
>
> On 02/22/2019 03:36 PM, Alexei Starovoitov wrote:
>
> > +static void bpf_prog_get_stats(const struct bpf_prog *prog,
> > +			       struct bpf_prog_stats *stats)
> > +{
> > +	u64 nsecs = 0, cnt = 0;
> > + [...]
On 02/23/2019 01:34 AM, Roman Gushchin wrote:
> On Fri, Feb 22, 2019 at 03:36:41PM -0800, Alexei Starovoitov wrote:
>> JITed BPF programs are indistinguishable from kernel functions, but unlike
>> kernel code BPF code can be changed often.
>> Typical approach of "perf record" + "perf report" profiling [...]
On Fri, Feb 22, 2019 at 03:36:41PM -0800, Alexei Starovoitov wrote:
> JITed BPF programs are indistinguishable from kernel functions, but unlike
> kernel code BPF code can be changed often.
> Typical approach of "perf record" + "perf report" profiling and tuning of
> kernel code works just as well [...]
On 02/22/2019 03:36 PM, Alexei Starovoitov wrote:
>
> +static void bpf_prog_get_stats(const struct bpf_prog *prog,
> +			       struct bpf_prog_stats *stats)
> +{
> +	u64 nsecs = 0, cnt = 0;
> +	int cpu;
> +
> +	for_each_possible_cpu(cpu) {
> +		nsecs [...]