>
> > > I suggest the following logic in pseudo-code:
> > >
> > > static void
> > > emit_ld_mbuf(struct a64_jit_ctx *ctx, uint32_t op, uint8_t tmp1,
> > >              uint8_t tmp2, uint8_t src, int32_t imm)
> > > {
> > > // ...
> > >
> > > 	/* r1 = off: for ABS use imm, for IND use src + imm */
> > > 	if (mode == BPF_ABS) {
> > > 		assert imm >= 0, "verified by verifier"
> > > 		emit MOV W1, #<imm>
> > > 	} else {
> > > 		/* add signed imm to the source register */
> > > 		emit(s) X1 = src + #<imm>
> > > 		/* verify dynamically that offset is within the
> > > 		   domain of __rte_pktmbuf_read */
> > > 		emit(s) jump_to_epilogue if X1 <s 0 || X1 > INT32_MAX
Shouldn't it be UINT32_MAX?
> > > }
> > >
> > > // ...
> > > }
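To make the suggested check concrete, the offset logic above could be expressed in plain C along these lines (the function name and shape are hypothetical, purely for illustration, not the actual JIT emission):

```c
#include <stdint.h>

/* Hypothetical helper mirroring the pseudo-code above: compute the
 * effective offset for a BPF_ABS or BPF_IND load and reject values
 * outside the domain of __rte_pktmbuf_read's 32-bit offset argument.
 * Returns the offset, or -1 if it is out of range. */
static int64_t
ld_mbuf_offset(int is_ind, uint64_t src, int32_t imm)
{
	if (!is_ind)
		return imm; /* BPF_ABS: validator guarantees imm >= 0 */

	/* BPF_IND: signed 64-bit sum of source register and immediate */
	int64_t off = (int64_t)src + imm;

	/* reject negative offsets and offsets a 32-bit argument
	 * cannot represent */
	if (off < 0 || off > INT32_MAX)
		return -1;
	return off;
}
```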
I wonder, do the x86 jit and the VM perform that check?
If not, then it is probably not fair to demand that this patch fix these things
for arm; it probably has to be another patch that fixes the issue for all
targets (vm, x86, arm).
After all, it wouldn't cause any memory corruption, right?
In the worst case (wraparound) bpf will read some valid data from an
unexpected location in the packet instead of simply returning 0.
BTW, I think we still need to add a check for overflow in __rte_pktmbuf_read() -
the bpf jit is not the only user of it.
Unless, of course, we want rte_pktmbuf_read() to work with wrapped-around
values.
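The overflow in question: with 32-bit off and len, the sum off + len can wrap around and then pass a naive comparison against the packet length. A minimal sketch of the kind of guard being suggested (names are mine, not the actual __rte_pktmbuf_read code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of an overflow-safe argument check for a read at [off, off+len)
 * from a packet of pkt_len bytes. Computing the sum in 64 bits means it
 * cannot wrap, so an out-of-range request is reliably rejected. */
static bool
read_args_valid(uint32_t off, uint32_t len, uint32_t pkt_len)
{
	return (uint64_t)off + len <= pkt_len;
}
```

With a plain 32-bit `off + len <= pkt_len`, a request like off = UINT32_MAX, len = 2 would wrap to 1 and be accepted.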
> >
> > I don't know if it's the sum of src+imm that determines special meaning, or
> > it's the imm itself.
> > If it's the imm itself, a simple fix would be to update the validator's
> > .imm.max values for BPF_IND from UINT32_MAX to INT32_MAX.
> >
>
> Even if we do it (I have no particular opinion), it won't prevent the sum of a
> 64-bit register and the immediate from being negative or greater than the
> range of values supported by the 32-bit argument of __rte_pktmbuf_read.