On 8/22/17 6:40 PM, Alexei Starovoitov wrote:
>> diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
>> index df2e0f14a95d..7480cebab073 100644
>> --- a/kernel/cgroup/cgroup.c
>> +++ b/kernel/cgroup/cgroup.c
>> @@ -5186,4 +5186,22 @@ int cgroup_bpf_update(struct cgroup *cgrp, struct bpf_prog *prog,
>>      mutex_unlock(&cgroup_mutex);
>>      return ret;
>>  }
>> +
>> +int cgroup_bpf_run_filter_sk(struct sock *sk,
>> +                         enum bpf_attach_type type)
>> +{
>> +    struct cgroup *cgrp = sock_cgroup_ptr(&sk->sk_cgrp_data);
>> +    int ret = 0;
>> +
>> +    while (cgrp) {
>> +            ret = __cgroup_bpf_run_filter_sk(cgrp, sk, type);
>> +            if (ret < 0)
>> +                    break;
>> +
>> +            cgrp = cgroup_parent(cgrp);
>> +    }
> 
> I think this walk changes semantics for existing setups, so we cannot do it
> by default and have to add a new attach flag.

I can add a new attach flag for this, similar to the existing override flag.
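
For context, a rough userspace sketch of how an attach flag is passed to
BPF_PROG_ATTACH; the recursive-walk flag name below is purely hypothetical
and not part of this patch:

#include <linux/bpf.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Hypothetical flag name, for illustration only. */
#define BPF_F_WALK_ANCESTORS	(1U << 1)

static int attach_sock_prog(int cgroup_fd, int prog_fd)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.target_fd     = cgroup_fd;   /* cgroup to attach to */
	attr.attach_bpf_fd = prog_fd;     /* loaded BPF program */
	attr.attach_type   = BPF_CGROUP_INET_SOCK_CREATE;
	attr.attach_flags  = BPF_F_WALK_ANCESTORS;

	return syscall(__NR_bpf, BPF_PROG_ATTACH, &attr, sizeof(attr));
}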

> Also why break on (ret < 0) ?

Because __cgroup_bpf_run_filter_sk returns either 0 or -EPERM.

> The caller of this does:
>   err = BPF_CGROUP_RUN_PROG_INET_SOCK(sk);
>   if (err) {
>           sk_common_release(sk);
> so we should probably break out of the loop on if (ret) too.
> 

I'll do that in v2.
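
For reference, a minimal sketch of how the loop could look with that change
(bailing out on any nonzero return instead of only negative ones); this is
just an illustration, not the actual v2 patch:

int cgroup_bpf_run_filter_sk(struct sock *sk,
			     enum bpf_attach_type type)
{
	struct cgroup *cgrp = sock_cgroup_ptr(&sk->sk_cgrp_data);
	int ret = 0;

	/* Walk from the socket's cgroup up through its ancestors,
	 * stopping at the first program that returns nonzero.
	 */
	while (cgrp) {
		ret = __cgroup_bpf_run_filter_sk(cgrp, sk, type);
		if (ret)
			break;

		cgrp = cgroup_parent(cgrp);
	}

	return ret;
}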
