Martin Sebor <mse...@gmail.com> writes:
> I haven't done nearly a thorough review but the dtor followed by
> the placement new in the POLY_SET_COEFF() macro caught my eye so
> I thought I'd ask sooner rather than later.  Given the macro
> definition:
>
> +   The dummy comparison against a null C * is just a way of checking
> +   that C gives the right type.  */
> +#define POLY_SET_COEFF(C, RES, I, VALUE) \
> +  ((void) (&(RES).coeffs[0] == (C *) 0), \
> +   wi::int_traits<C>::precision_type == wi::FLEXIBLE_PRECISION \
> +   ? (void) ((RES).coeffs[I] = VALUE) \
> +   : (void) ((RES).coeffs[I].~C (), new (&(RES).coeffs[I]) C (VALUE)))
>
> is the following use well-defined?
>
> +template<unsigned int N, typename C>
> +inline poly_int_pod<N, C>&
> +poly_int_pod<N, C>::operator <<= (unsigned int a)
> +{
> +  POLY_SET_COEFF (C, *this, 0, this->coeffs[0] << a);
>
> It looks to me as though the VALUE argument in the ctor invoked
> by the placement new expression is evaluated after the dtor has
> destroyed the very array element the VALUE argument expands to.

Good catch!  It should simply have been doing <<= on each coefficient --
I must have got carried away when converting to POLY_SET_COEFF.

I double-checked the other uses and think that's the only one.

> Whether or not it is, in fact, a problem, it seems to me that using
> a function template rather than a macro would be a clearer and
> safer way to do the same thing.  (Safer in that the macro also
> evaluates its arguments multiple times, which is often a source
> of subtle bugs.)

That would slow down -O0 builds though, by introducing an extra
function call and set of temporaries even when the coefficients
are primitive integers.

> Other than that, I would suggest changing 't' to something a bit
> less terse, like perhaps 'type' in traits like the following:
>
> +struct if_lossless;
> +template<typename T1, typename T2, typename T3>
> +struct if_lossless<T1, T2, T3, true>
> +{
> +  typedef T3 t;
> +};

OK, done in v2.

Thanks,
Richard