Daniel P. Berrangé <berra...@redhat.com> writes:

> On Mon, Nov 25, 2024 at 07:56:10PM +0000, Jean-Philippe Brucker wrote:

[...]

>> diff --git a/qapi/qom.json b/qapi/qom.json
>> index f982850bca..901ba67634 100644
>> --- a/qapi/qom.json
>> +++ b/qapi/qom.json
>> @@ -1068,6 +1068,20 @@
>>    'data': { '*cpu-affinity': ['uint16'],
>>              '*node-affinity': ['uint16'] } }
>>  
>> +##
>> +# @RmeGuestMeasurementAlgorithm:
>> +#
>> +# @sha256: Use the SHA256 algorithm
>> +#
>> +# @sha512: Use the SHA512 algorithm
>> +#
>> +# Algorithm to use for realm measurements
>> +#
>> +# Since: 9.3
>> +##
>> +{ 'enum': 'RmeGuestMeasurementAlgorithm',
>> +  'data': ['sha256', 'sha512'] }
>
>
> A design question for Markus....
>
>
> We have a "QCryptoHashAlg" enum that covers both of these strings
> and more besides.
>
> We have a choice of using a dedicated enum strictly limited to
> just what RME needs, vs using the common enum type, but rejecting
> unsupported algorithms at runtime.  Do we prefer duplication for
> improved specificity, or removal of duplication?
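
For concreteness, the two options would look roughly like this at the
schema level (a sketch; the struct and member names below are made up
for illustration, and the common enum's exact spelling should be
checked against the tree):

    # Option 1: dedicated enum; the QAPI parser rejects anything else
    { 'struct': 'RmeGuestProperties',
      'data': { 'measurement-algorithm': 'RmeGuestMeasurementAlgorithm' } }

    # Option 2: common enum; sha256/sha512 accepted, everything else
    # rejected at runtime with an error
    { 'struct': 'RmeGuestProperties',
      'data': { 'measurement-algorithm': 'QCryptoHashAlg' } }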

With a dedicated enum, introspection shows precisely the accepted
values.

If we reuse a wider enum, introspection shows additional, invalid
values.
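
For illustration, with the dedicated enum, query-qmp-schema reports
something like this (a sketch; real output masks type names, and the
exact SchemaInfoEnum layout should be checked against
qapi/introspect.json):

    { "name": "RmeGuestMeasurementAlgorithm", "meta-type": "enum",
      "members": [ { "name": "sha256" }, { "name": "sha512" } ] }

Reusing the wider enum would instead list every algorithm the common
type knows (md5, sha1, and so on), regardless of what RME actually
accepts.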

Say we later make one of these valid here.  If we reuse the wider enum
now, nothing changes in introspection then, i.e. introspection cannot
tell us whether this particular QEMU supports this additional algorithm.
With a dedicated enum, it can.  Whether that's needed in practice I find
hard to predict.
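
Concretely, if we later accepted, say, sha384 (hypothetical), the
dedicated enum would simply grow a member, and introspection would show
it:

    { 'enum': 'RmeGuestMeasurementAlgorithm',
      'data': ['sha256', 'sha384', 'sha512'] }

With the wider enum, sha384 would have been advertised all along, so a
management application could not tell whether this particular QEMU
actually supports it.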

I lean towards a dedicated enum.

Questions?

[...]

