On Thu, Jun 11, 2020 at 3:23 PM Alexei Starovoitov
<[email protected]> wrote:
>
> /* dummy _ops. The verifier will operate on target program's ops. */
> const struct bpf_verifier_ops bpf_extension_verifier_ops = {
> @@ -205,14 +206,12 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr)
> tprogs[BPF_TRAMP_MODIFY_RETURN].nr_progs)
> flags = BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_SKIP_FRAME;
>
> - /* Though the second half of trampoline page is unused a task could be
> - * preempted in the middle of the first half of trampoline and two
> - * updates to trampoline would change the code from underneath the
> - * preempted task. Hence wait for tasks to voluntarily schedule or go
> - * to userspace.
> + /* The same trampoline can hold both sleepable and non-sleepable progs.
> + * synchronize_rcu_tasks_trace() is needed to make sure all sleepable
> + * programs finish executing. It also ensures that the rest of the
> + * generated trampoline assembly finishes before updating the trampoline.
> + */
> -
> - synchronize_rcu_tasks();
> + synchronize_rcu_tasks_trace();
Hi Paul,
I've been looking at the rcu_trace implementation and I think the above
change is correct.
Could you please double check my understanding?
Also see benchmarking numbers in the cover letter :)
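
To make the dependency concrete, here is the reader-side pairing that
synchronize_rcu_tasks_trace() is expected to wait on, a minimal sketch
based on the __bpf_prog_enter/exit_sleepable() helpers further down in
this patch (sleepable_prog and ctx are placeholder names):

	rcu_read_lock_trace();		/* __bpf_prog_enter_sleepable() */
	ret = sleepable_prog(ctx);	/* may sleep, e.g. in copy_from_user() */
	rcu_read_unlock_trace();	/* __bpf_prog_exit_sleepable() */

If my reading is right, synchronize_rcu_tasks_trace() cannot return while
any task is still between those two calls, which is exactly what the new
comment above relies on.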
> 	err = arch_prepare_bpf_trampoline(new_image, new_image + PAGE_SIZE / 2,
> &tr->func.model, flags, tprogs,
> @@ -344,7 +343,14 @@ void bpf_trampoline_put(struct bpf_trampoline *tr)
> if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[BPF_TRAMP_FEXIT])))
> goto out;
> bpf_image_ksym_del(&tr->ksym);
> - /* wait for tasks to get out of trampoline before freeing it */
> + /* This code will be executed when all bpf progs (both sleepable and
> + * non-sleepable) have gone through
> + * bpf_prog_put()->call_rcu[_tasks_trace]()->bpf_prog_free_deferred().
> + * Hence there is no need for another synchronize_rcu_tasks_trace() here,
> + * but synchronize_rcu_tasks() is still needed, since the trampoline
> + * may not have had any sleepable programs and we need to wait
> + * for tasks to get out of the trampoline code before freeing it.
> + */
> synchronize_rcu_tasks();
> bpf_jit_free_exec(tr->image);
> hlist_del(&tr->hlist);
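
For context, the free path that the new comment refers to looks roughly
like this (a sketch of my understanding of the bpf_prog_put() side, not
code from this hunk; the exact callback names may differ):

	if (prog->aux->sleepable)
		/* wait for a tasks-trace grace period before freeing */
		call_rcu_tasks_trace(&prog->aux->rcu, __bpf_prog_put_rcu);
	else
		call_rcu(&prog->aux->rcu, __bpf_prog_put_rcu);

So by the time bpf_trampoline_put() runs, every sleepable prog that was
attached here has already passed through a tasks-trace grace period, and
only the synchronize_rcu_tasks() above is still required.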
> @@ -394,6 +400,21 @@ void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start)
> rcu_read_unlock();
> }
>
> +/* When rcu_read_lock_trace is held it means that some sleepable bpf program
> + * is running. Those programs can use bpf arrays and preallocated hash maps.
> + * These map types wait for programs to complete via
> + * synchronize_rcu_tasks_trace().
> + */
> +void notrace __bpf_prog_enter_sleepable(void)
> +{
> + rcu_read_lock_trace();
> +}
> +
> +void notrace __bpf_prog_exit_sleepable(void)
> +{
> + rcu_read_unlock_trace();
> +}
> +
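
On the map side that the comment above mentions, my understanding is that
any map type usable from sleepable progs has to chain both grace periods
in its free path before elements can be reused. A hypothetical sketch,
with sleepable_map_free() as a made-up name just to illustrate the
ordering:

	static void sleepable_map_free(struct bpf_map *map)
	{
		/* wait for sleepable progs still under rcu_read_lock_trace() */
		synchronize_rcu_tasks_trace();
		/* ... and for non-sleepable progs under rcu_read_lock() */
		synchronize_rcu();
		bpf_map_area_free(map);	/* no program can touch the elements now */
	}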