> > +static void
> > +emit_ld_mbuf(struct a64_jit_ctx *ctx, uint32_t op, uint8_t tmp1, uint8_t tmp2,
> > +   uint8_t src, uint32_t imm)
> 
> Handling the immediate as unsigned is questionable; in the BPF_IND case
> especially, it may produce incorrect results.

In classic BPF (cBPF), when the immediate "k" is negative (when cast to a signed
integer), it selects packet metadata (e.g. SKF_AD_VLAN_TAG fetches the VLAN tag);
otherwise it is treated as an unsigned packet offset.
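To illustrate the convention described above, here is a minimal sketch (not the DPDK implementation) of how a cBPF loader typically classifies the immediate "k" of a LD|ABS instruction; `SKF_AD_OFF` and `SKF_AD_VLAN_TAG` mirror the Linux values, and `classify_cbpf_k` is a hypothetical helper name:

```c
#include <stdint.h>

/* Linux convention: negative k values address ancillary data. */
#define SKF_AD_OFF      (-0x1000)
#define SKF_AD_VLAN_TAG 20

enum cbpf_load_kind { CBPF_LOAD_PACKET, CBPF_LOAD_ANCILLARY };

/* Hypothetical helper: decide what a LD|ABS immediate refers to. */
static enum cbpf_load_kind
classify_cbpf_k(uint32_t k, uint32_t *ancillary)
{
    int32_t sk = (int32_t)k;

    if (sk < 0) {
        /* Negative k selects packet metadata, e.g.
         * SKF_AD_OFF + SKF_AD_VLAN_TAG reads the VLAN tag. */
        *ancillary = (uint32_t)(sk - SKF_AD_OFF);
        return CBPF_LOAD_ANCILLARY;
    }
    /* Non-negative k is an unsigned byte offset into the packet. */
    return CBPF_LOAD_PACKET;
}
```

This is why treating the immediate as plain unsigned can silently turn a metadata access into a huge packet offset.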

> 
> To make things worse, `__rte_pktmbuf_read` is also buggy when passed very
> large lengths (again, technically not the ARM eBPF JIT's fault).

Are you referring to the potential integer wraparound in the
`off + len > rte_pktmbuf_pkt_len(m)` comparison?
[BZ1724]
Or some other bug in `__rte_pktmbuf_read()`?

[BZ1724]: https://bugs.dpdk.org/show_bug.cgi?id=1724
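For concreteness, here is a minimal sketch of the wraparound hazard being asked about (illustrative only, not the `__rte_pktmbuf_read` source): with 32-bit unsigned arithmetic, `off + len` can wrap to a small value and wrongly pass the bounds check, whereas the rearranged comparison cannot overflow.

```c
#include <stdint.h>
#include <stdbool.h>

/* Buggy form: off + len may wrap around and compare as small. */
static bool
read_in_bounds_buggy(uint32_t off, uint32_t len, uint32_t pkt_len)
{
    return off + len <= pkt_len;
}

/* Overflow-safe form: subtract instead of add, after checking off. */
static bool
read_in_bounds_safe(uint32_t off, uint32_t len, uint32_t pkt_len)
{
    return off <= pkt_len && len <= pkt_len - off;
}
```

With `off = UINT32_MAX` and `len = 2`, the buggy form computes `off + len == 1` and accepts an out-of-bounds read; the safe form rejects it.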

> 
> > +{
> > +   uint8_t r0 = ebpf_to_a64_reg(ctx, EBPF_REG_0);
> > +   uint8_t r6 = ebpf_to_a64_reg(ctx, EBPF_REG_6);
> > +   uint32_t mode = BPF_MODE(op);
> > +   uint32_t opsz = BPF_SIZE(op);
> > +   uint32_t sz = bpf_size(opsz);
> > +   int16_t jump_to_epilogue;
> > +
> > +   /* r0 = mbuf (R6) */
> > +   emit_mov_64(ctx, A64_R(0), r6);
> > +
> > +   /* r1 = off: for ABS use imm, for IND use src + imm */
> > +   if (mode == BPF_ABS) {
> > +           emit_mov_imm(ctx, 1, A64_R(1), imm);
> > +   } else {
> > +           emit_mov_imm(ctx, 1, tmp2, imm);
> > +           emit_add(ctx, 1, tmp2, src);
> > +           emit_mov_64(ctx, A64_R(1), tmp2);
> > +   }
> > +
> > +   /* r2 = len */
> > +   emit_mov_imm(ctx, 1, A64_R(2), sz);
> > +
> > +   /* r3 = buf (SP) */
> > +   emit_mov_64(ctx, A64_R(3), A64_SP);
> > +
> > +   /* call __rte_pktmbuf_read */
> > +   emit_call(ctx, tmp1, __rte_pktmbuf_read);
> > +   /* check return value of __rte_pktmbuf_read */
> > +   emit_cbnz(ctx, 1, A64_R(0), 3);
> > +   emit_mov_imm(ctx, 1, r0, 0);
> > +   jump_to_epilogue = (ctx->program_start + ctx->program_sz) - ctx->idx;
> > +   emit_b(ctx, jump_to_epilogue);
> 
> Could we call emit_return_zero_if_src_zero here instead?
> 
> > +
> > +   /* r0 points to the data, load 1/2/4 bytes */
> > +   emit_ldr(ctx, opsz, A64_R(0), A64_R(0), A64_ZR);
> > +   if (sz != sizeof(uint8_t))
> > +           emit_be(ctx, A64_R(0), sz * CHAR_BIT);
> > +   emit_mov_64(ctx, r0, A64_R(0));
> > +}
> > +
> 
> I would also pass the final verdict on the ARM code to ARM folks. To my
> untrained eye it looks correct apart from the signed-immediate issue.
> Optimizations are possible, but since we're only implementing the slow
> path for now, they're maybe not worth the effort.
