On 08.09.2023 10:48, Nicola Vetrini wrote:
> On 07/09/2023 03:33, Stefano Stabellini wrote:
>> On Wed, 6 Sep 2023, Jan Beulich wrote:
>>> On 06.09.2023 17:57, Nicola Vetrini wrote:
>>>> On 05/09/2023 10:33, Jan Beulich wrote:
>>>>> On 05.09.2023 10:20, Nicola Vetrini wrote:
>>>>>> On 05/09/2023 09:46, Jan Beulich wrote:
>>>>>>> On 05.09.2023 09:31, Nicola Vetrini wrote:
>>>>>>>> Given its use in the declaration
>>>>>>>> 'DECLARE_BITMAP(features, IOMMU_FEAT_count)' the argument
>>>>>>>> 'bits' has essential type 'enum iommu_feature', which is not
>>>>>>>> allowed by the Rule as an operand to the addition operator.
>>>>>>>> Given that its value can be represented by a signed integer,
>>>>>>>> the explicit cast resolves the violation.
>>>>>>>
>>>>>>> Wait - why would this lead to a change to BITS_TO_LONGS()? And if that
>>>>>>> was to be changed, why plain int? I don't think negative input makes
>>>>>>> sense there, and in principle I'd expect values beyond 4 billion to
>>>>>>> also be permissible (even if likely no such use will ever appear in a
>>>>>>> DECLARE_BITMAP(), but elsewhere it may make sense). Even going to
>>>>>>> "unsigned long" may be too limiting ...
>>>>>>>
>>>>>>
>>>>>> You have a point. I can think of doing it like this:
>>>>>> DECLARE_BITMAP(features, (int)IOMMU_FEAT_count)
>>
>> I think this is a good solution for this case (even more so if we can't
>> find a better implementation of BITS_TO_LONGS)
>>
>>
>>>>>> on the grounds that the enum constant is representable in an int, and
>>>>>> it does not seem likely to get much bigger.
>>>>>> Having an unsigned cast requires making the whole expression
>>>>>> essentially unsigned, otherwise Rule 10.4 is violated because
>>>>>> BITS_PER_LONG is essentially signed. This can be done, but it depends
>>>>>> on how BITS_TO_LONGS will be/is used.
>>>>>
>>>>> It'll need looking closely, yes, but I expect that actually wants to
>>>>> be an unsigned constant. I wouldn't be surprised if some use of
>>>>> DECLARE_BITMAP() appeared (or already existed) where the 2nd argument
>>>>> involves sizeof() in some way.
>>>>>
>>>>
>>>> I think there's one with ARRAY_SIZE. In my opinion this can be resolved
>>>> as follows:
>>>>
>>>> // the essential type goes from signed to unsigned:
>>>> #define BYTES_PER_LONG (1U << LONG_BYTEORDER)
>>>>
>>>> // same here:
>>>> #define BITS_TO_LONGS(bits) \
>>>>          (((unsigned long long)(bits)+BITS_PER_LONG-1U)/BITS_PER_LONG)
>>>
>>> Except, as said before, I consider any kind of cast on "bits" latently
>>> problematic.
>>
>> Can't we just do this (same but without the cast):
>>
>> #define BYTES_PER_LONG (1U << LONG_BYTEORDER)
>> #define BITS_TO_LONGS(bits) \
>>          (((bits)+BITS_PER_LONG-1U)/BITS_PER_LONG)
>>
>> Then we just need to make sure to pass an unsigned to BITS_TO_LONGS. In
>> the case above we would do:
>>
>> DECLARE_BITMAP(features, (unsigned int)IOMMU_FEAT_count)
> 
> There is a build error due to -Werror because of a pointer comparison at 
> line 469 of common/numa.c:
> i = min(PADDR_BITS, BITS_PER_LONG - 1);
> where
> #define PADDR_BITS              52
> 
> I guess PADDR_BITS can become unsigned or gain a cast

While generally converting constants to unsigned comes with a certain
risk, I think for this (and its siblings) this ought to be okay. As to
the alternative of a cast - before considering that, please consider
e.g. adding 0u (as we do elsewhere in the code base to deal with such
cases).

Jan
