I think somebody was being too clever. The real purpose of specifying the
format is to set the byte size of the item being written:

  OptionValueUInt64 &byte_size_value = m_format_options.GetByteSizeValue();
  size_t item_byte_size = byte_size_value.GetCurrentValue();

and to do some special magic for c-strings and the like. Somebody (not me)
must have thought overloading the "hex" format to mean "you can drop the 0x"
was a tasty little side benefit.
But it's been that way forever, and I don't know who's relying on that
behavior, so I'd rather not change it (intentionally) if we don't have to.
Jim
> On Wed, Apr 5, 2017 at 3:37 PM Jim Ingham via lldb-dev
> <[email protected]> wrote:
> memory write's argument ingestion was changed as part of the StringRefifying
> of Args so that we get:
>
> (lldb) memory write &buffer 0x62
> error: '0x62' is not a valid hex string value.
>
> That seems unexpected and not desirable. What's going on is that the default
> format is hex, and if the format is hex, the command also supports:
>
> (lldb) memory write -f x &buffer 62
> (lldb) fr v/x buffer[0]
> (char) buffer[0] = 0x62
>
> The StringRef version of the args parsing is:
>
> case eFormatDefault:
> case eFormatBytes:
> case eFormatHex:
> case eFormatHexUppercase:
> case eFormatPointer:
>   // Decode hex bytes
>   if (entry.ref.getAsInteger(16, uval64)) {
>
> The problem is that getAsInteger with a radix of 16 rejects an explicit
> "0x" prefix, so "0x62" fails to parse.
>
> We do want to hint the radix. But it seems weird to reject an explicit
> indicator. Is there some clever way to use the StringRef functions to get
> the desired effect, or do I have to hack around this by manually stripping
> the 0x if I see it?
>
> Jim
_______________________________________________
lldb-dev mailing list
[email protected]
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev