On 28/10/2019 11:57, Omair Javaid wrote:
On Fri, 25 Oct 2019 at 17:53, Pavel Labath via Phabricator
<revi...@reviews.llvm.org> wrote:
labath added a comment.
In D69371#1721077 <https://reviews.llvm.org/D69371#1721077>, @omjavaid wrote:
We'll be dealing with Linux user mode and mostly aarch64 data registers, except
for cpsr, fpsr and fpcr. I think we should be fine, but let me confirm this
again from the documentation.
Right, but you're creating a general interface for all architectures, not just
several aarch64 registers. Even if they don't make use of that facility now, it
would be good to make sure they can do that in the future.
For instance, on x86, the kernel may decide to reject some values of some
registers (https://github.com/torvalds/linux/blob/master/arch/x86/kernel/ptrace.c#L187),
and silently ignore some bits in others
(https://github.com/torvalds/linux/blob/master/arch/x86/kernel/ptrace.c#L349). That's why I
think it would be better to commit changes to memory automatically/immediately, and
minimize the chances that subsequent "read" operations will return data which
does not reflect the actual values held by the OS.
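To make that concrete, a minimal write-through sketch (Status, PokeUserRegister
and RegisterCache below are simplified stand-ins invented for illustration, not
LLDB's or the kernel's actual interfaces):

#include <cstdint>
#include <map>
#include <string>

// Simplified stand-in for lldb_private::Status.
struct Status {
  std::string error; // empty string means success
  bool Fail() const { return !error.empty(); }
};

// Hypothetical wrapper around the real ptrace(PTRACE_POKEUSER, ...) call;
// real code would issue the syscall and convert errno into a Status.
Status PokeUserRegister(uint32_t /*regno*/, uint64_t /*value*/) { return Status{}; }

struct RegisterCache {
  std::map<uint32_t, uint64_t> cached;

  // Write-through: hand the value to the kernel immediately, so a rejected
  // write is reported against this command rather than a later "continue",
  // and the cache never holds a value the kernel refused or masked.
  Status WriteRegister(uint32_t regno, uint64_t value) {
    Status status = PokeUserRegister(regno, value);
    if (status.Fail())
      return status;     // user sees the error on the write itself
    cached.erase(regno); // force a re-read; the kernel may have masked bits
    return status;
  }
};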
So I gave fixed or undefined bits some thought and also considered the
implications of reading/writing certain status or control registers.
User-visible registers don't really have big implications, and we can
afford to keep user-corrupted values until resume, since in theory all
state changes happen on resume and the target/thread state is halted.
I don't agree with that assessment. While doing the register write
"late" will not corrupt the state of the application, it will make it
harder for the user to figure out what's going on. If we return the
ptrace error immediately, the user will get an error message relating to
the write command. If we don't do that, the command will appear to
succeed, and even subsequent "read" commands will return the new "bogus"
value. If we postpone the write, we will get an error while processing
the "continue" command. At that point we only have two options:
return an error (which will be interpreted as a failure to resume the
process) or drop it (and leave the user wondering why the register
values suddenly changed back).
But even if we don't want the user to be writing fixed-value bit
fields, we can easily choose to invalidate the register cache when
certain registers are written.
For example:

if (regno == cpsr)
  InvalidateAllRegisters();
In the case of arm64, NativeRegisterContextLinux_arm64::WriteRegister may
call NativeRegisterContextLinux_arm64::InvalidateAllRegisters() if a
register like cpsr, fpsr or fpcr is being written.
Other architectures can use a similar implementation or ignore register
caching altogether.
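A rough sketch of what that could look like (the register numbers, types and
helpers below are illustrative stand-ins, not LLDB's actual AArch64
definitions):

#include <cstdint>
#include <map>
#include <string>

struct Status { // simplified stand-in for lldb_private::Status
  std::string error; // empty means success
  bool Fail() const { return !error.empty(); }
};

// Illustrative numbering only; the real values come from LLDB's generated
// AArch64 register info tables.
enum : uint32_t { kRegCpsr = 0, kRegFpsr = 1, kRegFpcr = 2 };

class CachingRegisterContext {
public:
  Status WriteRegister(uint32_t regno, uint64_t value) {
    Status status = WriteRegisterToKernel(regno, value);
    if (status.Fail())
      return status;

    // Writes to status/control registers can have knock-on effects on other
    // cached values, so drop everything and re-read from the kernel on the
    // next access. Plain data registers only need their own entry dropped.
    if (regno == kRegCpsr || regno == kRegFpsr || regno == kRegFpcr)
      InvalidateAllRegisters();
    else
      m_cache.erase(regno);
    return status;
  }

  void InvalidateAllRegisters() { m_cache.clear(); }

private:
  // Hypothetical helper that would issue the actual ptrace write.
  Status WriteRegisterToKernel(uint32_t /*regno*/, uint64_t /*value*/) {
    return Status{};
  }

  std::map<uint32_t, uint64_t> m_cache;
};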
Yes, that is certainly possible, but it increases the code complexity
(== likelihood of something going wrong), which is why I was asking what
use case you are optimizing for. Is it reading of registers? Is
it writing? Unwinding? Expression evaluation? Have you got any numbers? etc.
Register reads are a lot more common than writes, so I expect we can go
a long way by just making sure the read operations don't cause ptrace
traffic. As caching reads is a much safer option, I'd like to better
understand the tradeoff here.
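To make the read-side caching concrete, a minimal sketch (assuming made-up
helper names rather than the real NativeRegisterContextLinux interfaces):
reads are served from the cache, only a miss touches ptrace, and writes or
resumes simply invalidate it.

#include <cstdint>
#include <map>
#include <string>

struct Status { // simplified stand-in for lldb_private::Status
  std::string error; // empty means success
  bool Fail() const { return !error.empty(); }
};

class ReadCache {
public:
  // Serve repeated reads (unwinding, expression evaluation, "register read")
  // from the cache; only a miss generates ptrace traffic.
  Status ReadRegister(uint32_t regno, uint64_t &value) {
    auto it = m_cache.find(regno);
    if (it != m_cache.end()) {
      value = it->second;
      return Status{};
    }
    Status status = PeekUserRegister(regno, value);
    if (!status.Fail())
      m_cache[regno] = value;
    return status;
  }

  // Drop everything whenever the cached values may no longer match the
  // kernel's, e.g. after a register write or when the thread resumes.
  void Invalidate() { m_cache.clear(); }

private:
  // Hypothetical wrapper around ptrace(PTRACE_PEEKUSER, ...).
  Status PeekUserRegister(uint32_t /*regno*/, uint64_t &value) {
    value = 0;
    return Status{};
  }

  std::map<uint32_t, uint64_t> m_cache;
};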
BTW, another thing which you might be interested in is
<https://reviews.llvm.org/D62931>. Once that patch lands, register reads
should be done via the $g packet. That will increase the read speed much
more than this patch, as it will cut out the link latency too (though I
am still not opposed to caching stuff at the lldb-server layer too).
pl
_______________________________________________
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits