[Lldb-commits] [PATCH] D119548: [lldb] Fix race condition between lldb-vscode and stop hooks executor

2022-02-11 Thread Ilya Nozhkin via Phabricator via lldb-commits
ilya-nozhkin created this revision.
ilya-nozhkin added reviewers: jingham, clayborg.
ilya-nozhkin requested review of this revision.
Herald added a project: LLDB.
Herald added a subscriber: lldb-commits.

The race is between these two pieces of code, which are executed on two
separate lldb-vscode threads (the first on the main thread, the second on the
event-handling thread):

  // lldb-vscode.cpp
  g_vsc.debugger.SetAsync(false);
  g_vsc.target.Launch(launch_info, error);
  g_vsc.debugger.SetAsync(true);



  // Target.cpp
  bool old_async = debugger.GetAsyncExecution();
  debugger.SetAsyncExecution(true);
  debugger.GetCommandInterpreter().HandleCommands(GetCommands(), exc_ctx,
  options, result);
  debugger.SetAsyncExecution(old_async);

The sequence that leads to the bug is the following:

1. The main thread enables synchronous mode and launches the process.
2. When the process is launched, it generates the first stop event.
3. This stop event is caught by the event-handling thread, and DoOnRemoval is
invoked.
4. Inside DoOnRemoval, this thread runs stop hooks. Before running them, the
current synchronization mode is stored into old_async (at this point it is
equal to "false").
5. The main thread finishes the launch and returns to lldb-vscode, which
restores the synchronization mode to asynchronous.
6. The event-handling thread finishes stop hook processing and restores the
synchronization mode according to old_async (i.e. makes the mode synchronous).
7. Now the mode is synchronous while lldb-vscode expects it to be
asynchronous. Synchronous mode prevents the process from broadcasting public
stop events, so VS Code just hangs because lldb-vscode doesn't notify it
about stops.

So, this diff introduces a lock on the synchronization mode for cases when it
has to be saved before some action and restored afterwards.
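
For reference, the locking helper the patch adds (used as
debugger.DoWithLockedAsynchronicity in the Target.cpp hunk later in this
thread) boils down to the following standalone sketch; the class here is a
hypothetical model for illustration, not the actual lldb_private::Debugger
code:

  #include <functional>
  #include <mutex>

  // Hypothetical model of the locking scheme; the real patch adds an
  // equivalent member function to lldb_private::Debugger.
  class AsyncModeOwner {
  public:
    // Every save/set/restore of the async flag runs under one mutex, so a
    // concurrent save/restore pair can no longer capture a stale value and
    // write it back afterwards.
    void DoWithLockedAsynchronicity(bool async,
                                    const std::function<void()> &action) {
      std::lock_guard<std::mutex> guard(m_mutex);
      const bool old_async = m_async;
      m_async = async;
      action();
      m_async = old_async;
    }

  private:
    std::mutex m_mutex;
    bool m_async = true;
  };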

The bug only manifests on Windows, but that seems to be a coincidence. On
Linux, as I understand it, the first stop event is intercepted by the
hijacking listener and never popped out of the queue, so stop hooks are not
executed and the race doesn't happen.

This diff also fixes some problems with lldb-vscode tests on Windows to make
it possible to run the related test. Other tests still can't be enabled
because the debugged program prints to stdout and LLDB can't intercept this
output and redirect it to lldb-vscode properly.


Repository:
  rG LLVM Github Monorepo

https://reviews.llvm.org/D119548

Files:
  lldb/include/lldb/Core/Debugger.h
  lldb/include/lldb/Interpreter/CommandInterpreter.h
  lldb/packages/Python/lldbsuite/test/lldbtest.py
  lldb/packages/Python/lldbsuite/test/tools/lldb-vscode/lldbvscode_testcase.py
  lldb/source/Core/Debugger.cpp
  lldb/source/Interpreter/CommandInterpreter.cpp
  lldb/source/Target/Target.cpp
  lldb/test/API/tools/lldb-vscode/stop-hooks/Makefile
  lldb/test/API/tools/lldb-vscode/stop-hooks/TestVSCode_stop_hooks.py
  lldb/test/API/tools/lldb-vscode/stop-hooks/main.c

Index: lldb/test/API/tools/lldb-vscode/stop-hooks/main.c
===
--- /dev/null
+++ lldb/test/API/tools/lldb-vscode/stop-hooks/main.c
@@ -0,0 +1 @@
+int main() { return 0; }
Index: lldb/test/API/tools/lldb-vscode/stop-hooks/TestVSCode_stop_hooks.py
===
--- /dev/null
+++ lldb/test/API/tools/lldb-vscode/stop-hooks/TestVSCode_stop_hooks.py
@@ -0,0 +1,35 @@
+"""
+Test stop hooks
+"""
+
+
+from lldbsuite.test.decorators import *
+from lldbsuite.test.lldbtest import *
+import lldbvscode_testcase
+
+
+class TestVSCode_stop_hooks(lldbvscode_testcase.VSCodeTestCaseBase):
+
+mydir = TestBase.compute_mydir(__file__)
+
+@skipIfRemote
+def test_stop_hooks_before_run(self):
+'''
+Test that there is no race condition between lldb-vscode and
+stop hooks executor
+'''
+program = self.getBuildArtifact("a.out")
+preRunCommands = ['target stop-hook add -o help']
+self.build_and_launch(program, stopOnEntry=True, preRunCommands=preRunCommands)
+
+# The first stop is on entry.
+self.continue_to_next_stop()
+
+breakpoint_ids = self.set_function_breakpoints(['main'])
+# This request hangs if the race happens, because, in that case, the
+# command interpreter is in synchronous mode while lldb-vscode expects
+# it to be in asynchronous mode, so, the process doesn't send the stop
+# event to "lldb.Debugger" listener (which is monitored by lldb-vscode).
+self.continue_to_breakpoints(breakpoint_ids)
+
+self.continue_to_exit()
Index: lldb/test/API/tools/lldb-vscode/stop-hooks/Makefile
===
--- /dev/null
+++ lldb/test/API/tools/lldb-vscode/stop-hooks/Makefile
@@ -0,0 +1,3 @@
+C_SOURCES := main.c
+
+include Makefile.rules

[Lldb-commits] [PATCH] D119548: [lldb] Fix race condition between lldb-vscode and stop hooks executor

2022-02-14 Thread Ilya Nozhkin via Phabricator via lldb-commits
ilya-nozhkin added a comment.

In D119548#3318806 , @labath wrote:

> it still seems like there is a zillion of other things that the two processes 
> (event handler thread and stop hook executor) race on.

Stop hooks are executed in the event handler thread, so these two particular
things don't conflict with each other. The event handler thread (and the stop
hook execution inside it) races with the main lldb-vscode thread (the one
that handles VS Code requests).

In D119548#3318806 , @labath wrote:

> Isn't the right fix to make sure that the event gets broadcast only after the 
> stop hooks finish their work?

It would be nice for other reasons, but it won't solve this particular race:
we need some synchronization between stop hook execution and the main
lldb-vscode thread. Even if the event is not broadcast until the stop hooks
have run (which means they are executed by some other thread, and it should
not be the process' private state thread), the main lldb-vscode thread will
still race with the thread that executes the stop hooks.

In D119548#3318806 , @labath wrote:

> How does that work on other platforms?

I've only tested it on Linux and Windows, so I don't know whether this bug is
present on other platforms. On Linux, the first stop event is pushed to the
event queue of the `lldb.PlatformLinux.DebugProcess.hijack` listener, but
then this listener gets unsubscribed from the process events and deleted, so
the first stop event is not handled at all, the `DoOnRemoval` method is not
invoked, and thus stop hooks are not executed.
In command-line mode, as I understand it, the synchronicity flag is not
switched as often as lldb-vscode switches it, so it seems that the race is
avoided by chance.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D119548/new/

https://reviews.llvm.org/D119548

___
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits


[Lldb-commits] [PATCH] D119548: [lldb] Fix race condition between lldb-vscode and stop hooks executor

2022-02-14 Thread Ilya Nozhkin via Phabricator via lldb-commits
ilya-nozhkin updated this revision to Diff 408401.
ilya-nozhkin added a comment.

Moved the body of this giant lambda to a separate method.
Also renamed the locking method, because "do with locked synchronization"
sounds like the synchronized mode itself will be locked. "Do with locked
synchronicity" reads better IMO.


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D119548/new/

https://reviews.llvm.org/D119548

Files:
  lldb/include/lldb/Core/Debugger.h
  lldb/include/lldb/Interpreter/CommandInterpreter.h
  lldb/packages/Python/lldbsuite/test/lldbtest.py
  lldb/packages/Python/lldbsuite/test/tools/lldb-vscode/lldbvscode_testcase.py
  lldb/source/Core/Debugger.cpp
  lldb/source/Interpreter/CommandInterpreter.cpp
  lldb/source/Target/Target.cpp
  lldb/test/API/tools/lldb-vscode/stop-hooks/Makefile
  lldb/test/API/tools/lldb-vscode/stop-hooks/TestVSCode_stop_hooks.py
  lldb/test/API/tools/lldb-vscode/stop-hooks/main.c

Index: lldb/test/API/tools/lldb-vscode/stop-hooks/main.c
===
--- /dev/null
+++ lldb/test/API/tools/lldb-vscode/stop-hooks/main.c
@@ -0,0 +1 @@
+int main() { return 0; }
Index: lldb/test/API/tools/lldb-vscode/stop-hooks/TestVSCode_stop_hooks.py
===
--- /dev/null
+++ lldb/test/API/tools/lldb-vscode/stop-hooks/TestVSCode_stop_hooks.py
@@ -0,0 +1,35 @@
+"""
+Test stop hooks
+"""
+
+
+from lldbsuite.test.decorators import *
+from lldbsuite.test.lldbtest import *
+import lldbvscode_testcase
+
+
+class TestVSCode_stop_hooks(lldbvscode_testcase.VSCodeTestCaseBase):
+
+mydir = TestBase.compute_mydir(__file__)
+
+@skipIfRemote
+def test_stop_hooks_before_run(self):
+'''
+Test that there is no race condition between lldb-vscode and
+stop hooks executor
+'''
+program = self.getBuildArtifact("a.out")
+preRunCommands = ['target stop-hook add -o help']
+self.build_and_launch(program, stopOnEntry=True, preRunCommands=preRunCommands)
+
+# The first stop is on entry.
+self.continue_to_next_stop()
+
+breakpoint_ids = self.set_function_breakpoints(['main'])
+# This request hangs if the race happens, because, in that case, the
+# command interpreter is in synchronous mode while lldb-vscode expects
+# it to be in asynchronous mode, so, the process doesn't send the stop
+# event to "lldb.Debugger" listener (which is monitored by lldb-vscode).
+self.continue_to_breakpoints(breakpoint_ids)
+
+self.continue_to_exit()
Index: lldb/test/API/tools/lldb-vscode/stop-hooks/Makefile
===
--- /dev/null
+++ lldb/test/API/tools/lldb-vscode/stop-hooks/Makefile
@@ -0,0 +1,3 @@
+C_SOURCES := main.c
+
+include Makefile.rules
Index: lldb/source/Target/Target.cpp
===
--- lldb/source/Target/Target.cpp
+++ lldb/source/Target/Target.cpp
@@ -3457,11 +3457,12 @@
   options.SetAddToHistory(false);
 
   // Force Async:
-  bool old_async = debugger.GetAsyncExecution();
-  debugger.SetAsyncExecution(true);
-  debugger.GetCommandInterpreter().HandleCommands(GetCommands(), exc_ctx,
-                                                  options, result);
-  debugger.SetAsyncExecution(old_async);
+  debugger.DoWithLockedAsynchronicity(
+      true,
+      [&debugger, &commands = GetCommands(), &exc_ctx, &options, &result]() {
+        debugger.GetCommandInterpreter().HandleCommands(commands, exc_ctx,
+                                                        options, result);
+      });
   lldb::ReturnStatus status = result.GetStatus();
   if (status == eReturnStatusSuccessContinuingNoResult ||
       status == eReturnStatusSuccessContinuingResult)
Index: lldb/source/Interpreter/CommandInterpreter.cpp
===
--- lldb/source/Interpreter/CommandInterpreter.cpp
+++ lldb/source/Interpreter/CommandInterpreter.cpp
@@ -2447,124 +2447,21 @@
   RestoreExecutionContext();
 }
 
-void CommandInterpreter::HandleCommands(const StringList &commands,
-                                        const CommandInterpreterRunOptions &options,
-                                        CommandReturnObject &result) {
-  size_t num_lines = commands.GetSize();
-
-  // If we are going to continue past a "continue" then we need to run the
-  // commands synchronously. Make sure you reset this value anywhere you return
-  // from the function.
-
-  bool old_async_execution = m_debugger.GetAsyncExecution();
-
-  if (!options.GetStopOnContinue()) {
-    m_debugger.SetAsyncExecution(false);
-  }
+void CommandInterpreter::HandleCommands(
+    const StringList &commands, const CommandInterpreterRunOptions &options,
+    CommandReturnObject &result) {
 
-  for (size_t idx = 0; idx < num_lines && !WasInterrupted(); idx++) {
-    const ch

[Lldb-commits] [PATCH] D119548: [lldb] Fix race condition between lldb-vscode and stop hooks executor

2022-02-14 Thread Ilya Nozhkin via Phabricator via lldb-commits
ilya-nozhkin added a comment.

In D119548#3318995 , @labath wrote:

> I think that could be achieved by "hijacking" the process events in the case 
> of a synchronous launch, similar to how we do it for synchronous resumes 
> (compare Process::Resume and Process:ResumeSynchronous) -- except that this 
> may be slightly trickier to implement because some of our (platform-specific) 
> launch code already hijacks these events.

Oh, I didn't notice your answer while I was updating the diff. Yeah, I think
it will work perfectly even if platform-specific code hijacks events (that is
done in the same thread, so no race should happen; we only need to catch any
events that weren't caught by the platform-specific code). I'll try to
implement this approach.
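
For context, the synchronous-resume hijack referred to here has roughly this
shape (a paraphrase of Process::ResumeSynchronous with simplified signatures,
not the verbatim Process.cpp code):

  Status Process::ResumeSynchronous(Stream *stream) {
    // Divert process events to a private listener before resuming, so that
    // nothing reaches the public listener in the middle of the operation.
    ListenerSP listener_sp =
        Listener::MakeListener("lldb.Process.ResumeSynchronous.hijack");
    HijackProcessEvents(listener_sp);

    Status error = PrivateResume();
    if (error.Success()) {
      // Block on the hijacked listener until the stop arrives; the event
      // never lands in any other queue, so nobody can react to it early.
      StateType state = WaitForProcessToStop(/*timeout=*/llvm::None,
                                             /*event_sp_ptr=*/nullptr,
                                             /*wait_always=*/true,
                                             listener_sp, stream);
      if (!StateIsStoppedState(state, /*must_exist=*/true))
        error.SetErrorString("process did not stop");
    }

    // Put the original listeners back once the operation is complete.
    RestoreProcessEvents();
    return error;
  }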


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D119548/new/

https://reviews.llvm.org/D119548

___
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits


[Lldb-commits] [PATCH] D119548: [lldb] Fix race condition between lldb-vscode and stop hooks executor

2022-02-14 Thread Ilya Nozhkin via Phabricator via lldb-commits
ilya-nozhkin added a comment.

In D119548#3321357 , @jingham wrote:

> SBTarget.Launch calls Target::Launch.  That sets up a hijacker for the "stop 
> at the first instruction" event regardless of the Sync mode.

The problem is that it sets up this hijacker too late. See this part of the
log (lines starting with // are my comments):

  // The process has been launched successfully, its internal hijacker is 
removed.
   0x02875FCB1AE0 Broadcaster("lldb.process")::RestoreBroadcaster (about to 
pop listener("LaunchEventHijack")=0x02875FFCF890)
   Process::SetPublicState (state = stopped, restarted = 0)
   Process::SetPublicState (stopped) -- unlocking run lock
   Process::lldb_private::Process::StartPrivateStateThread() starting private 
state thread
  
  // Some internal magic happens inside the private state thread.
   Process::lldb_private::Process::ControlPrivateStateThread (signal = 4)
   Sending control event of type: 4.
   028760372F90 
Broadcaster("lldb.process.internal_state_control_broadcaster")::BroadcastEvent 
(event_sp = {028760571460 Event: broadcaster = 02876047C4C0 
(lldb.process.internal_state_control_broadcaster), type = 0x0004 
(control-resume), data = {Generic Event Data}}, unique =0) hijack = 

   02875FFD00D0 Listener('lldb.process.internal_state_listener')::AddEvent 
(event_sp = {028760571460})
   Process::lldb_private::Process::RunPrivateStateThread (arg = 
02876047C350, pid = 9944) thread starting...
   timeout = , event_sp)...
   this = 0x02875FFD00D0, timeout =  for 
lldb.process.internal_state_listener
   02875FFD00D0 'lldb.process.internal_state_listener' 
Listener::FindNextEventInternal(broadcaster=02876047C4C0, 
broadcaster_names=[0], event_type_mask=0x, remove=1) 
event 028760571460
   Process::lldb_private::Process::RunPrivateStateThread (arg = 
02876047C350, pid = 9944) got a control event: 4
   02875FFCF5D0 Listener::StartListeningForEvents (broadcaster = 
02876047C9C8, mask = 0x0020) acquired_mask = 0x0020 for 
Communication::SyncronizeWithReadThread
   timeout = , event_sp)...
   this = 0x02875FFD00D0, timeout =  for 
lldb.process.internal_state_listener
   Process::ShouldBroadcastEvent (0287604D4CB0) => new state: stopped, last 
broadcast state: stopped - YES
   Process::lldb_private::Process::HandlePrivateEvent (pid = 9944) broadcasting 
new state stopped (old state stopped) to public
  
  // Process broadcasts the stop event.
   02875FCB1AE0 Broadcaster("lldb.process")::BroadcastEvent (event_sp = 
{0287604D4CB0 Event: broadcaster = 02876047C378 (lldb.process), type = 
0x0001 (state-changed), data = { process = 02876047C350 (pid = 9944), 
state = stopped}}, unique =0) hijack = 
  
  // It is catched by "lldb.Debugger"!
   02875DB0BD40 Listener('lldb.Debugger')::AddEvent (event_sp = 
{0287604D4CB0})
  
  // And only now the target adds its hijacking listener!
   0x02875FCB1AE0 Broadcaster("lldb.process")::HijackBroadcaster 
(listener("lldb.Target.Launch.hijack")=0x02875FFCFE10)


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D119548/new/

https://reviews.llvm.org/D119548

___
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits


[Lldb-commits] [PATCH] D119548: [lldb] Fix race condition between lldb-vscode and stop hooks executor

2022-02-14 Thread Ilya Nozhkin via Phabricator via lldb-commits
ilya-nozhkin added a comment.

In D119548#3321376 , @clayborg wrote:

> FYI: I have a patch coming up that will add the ability to not have to play 
> with the async mode when launching or attaching. I will try and post that 
> diff here, it should be up today.
>
> We were running into a race condition when we used "launchCommands" or 
> "attachCommands" as they are not executed in sync mode, so we ended up 
> getting events before the IDE was ready. I am adding code to wait for the 
> process to stop (we always stop at entry when launching via lldb-vscode) 
> after the "launchCommands" or "attachCommands", and we might be able to get 
> rid of the setting of async mode before and after launch.

Oh, that's great to hear! It will probably solve this race condition on stop 
hooks as well.
Then, I think, I'd rather wait for your changes and run my test case against 
them.


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D119548/new/

https://reviews.llvm.org/D119548

___
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits


[Lldb-commits] [PATCH] D119548: [lldb] Fix race condition between lldb-vscode and stop hooks executor

2022-02-14 Thread Ilya Nozhkin via Phabricator via lldb-commits
ilya-nozhkin added a comment.

In D119548#3321452 , @clayborg wrote:

> Can anyone try out the following patch and see if this stops the race 
> condition
>
> https://reviews.llvm.org/D119797
>
> It might solve the problem.

Yeah, now the test passes and VS Code doesn't fall into an infinite waiting
loop.
I'm still concerned about the weird event dispatching happening during the
launch, but lldb-vscode works now, so I can close this review if you want to
investigate this bug sometime later.

In D119548#3321459 , @jingham wrote:

> Target::Launch consumes the "stop at entry" event when it calls 
> WaitForProcessToStop.

Actually, there is a gap between returning from `Process::Launch` and setting
up the target's hijacking listener. I think the stop event sneaks past the
target right in this time interval. So `Target::Launch` can't consume it
because there is nothing to consume (i.e. the event has already been added to
another listener's queue). But I agree that it's odd that this event is
broadcast at all.


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D119548/new/

https://reviews.llvm.org/D119548

___
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits


[Lldb-commits] [PATCH] D119797: Fix race condition when launching and attaching.

2022-02-14 Thread Ilya Nozhkin via Phabricator via lldb-commits
ilya-nozhkin added inline comments.



Comment at: lldb/tools/lldb-vscode/VSCode.cpp:9
 
+#include 
 #include 

Incompatible with MSVC. Maybe use std::chrono instead?
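
A minimal sketch of the suggested replacement, assuming the non-portable
header was only needed for a short sleep (the actual use in D119797 may
differ):

  #include <chrono>
  #include <thread>

  // std::this_thread::sleep_for compiles with MSVC as well as with POSIX
  // toolchains, unlike a usleep()-style call from a POSIX-only header.
  static void SleepMillis(unsigned ms) {
    std::this_thread::sleep_for(std::chrono::milliseconds(ms));
  }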


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D119797/new/

https://reviews.llvm.org/D119797

___
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits


[Lldb-commits] [PATCH] D119548: [lldb] Fix race condition between lldb-vscode and stop hooks executor

2022-02-15 Thread Ilya Nozhkin via Phabricator via lldb-commits
ilya-nozhkin updated this revision to Diff 409091.
ilya-nozhkin edited the summary of this revision.
ilya-nozhkin added a comment.
Herald added a subscriber: mgrang.

Implemented the approach suggested by @labath, i.e. the target's hijacking
listener is now activated before `Process::Launch`.

There are actually two ways of implementing this. The first (used in this
diff) is to make the target activate the hijacking listener right after the
process creation. The second is to specify the hijacking listener in the
launch info and then pass it to the platform, but that requires modifying all
platforms to make them activate the hijacking listener passed via the launch
info (currently, no platform actually does that). I've chosen the first way
because it seems reliable and safe: the target won't depend on the platforms,
so it'll be harder to break event dispatching again with some unrelated
modifications of platform code.

I've also decided not to remove the platform-specific hijacking listeners
yet, because some platforms define a hijacking listener even in asynchronous
mode and then use it to wait for the process to stop (for example,
PlatformQemuUser does that). I'll try to test tomorrow whether it is possible
to remove the hijacking listeners from at least some subset of platforms.


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D119548/new/

https://reviews.llvm.org/D119548

Files:
  lldb/include/lldb/Target/Target.h
  lldb/packages/Python/lldbsuite/test/lldbtest.py
  lldb/packages/Python/lldbsuite/test/tools/lldb-vscode/lldbvscode_testcase.py
  lldb/source/Target/Process.cpp
  lldb/source/Target/Target.cpp
  lldb/test/API/tools/lldb-vscode/stop-hooks/Makefile
  lldb/test/API/tools/lldb-vscode/stop-hooks/TestVSCode_stop_hooks.py
  lldb/test/API/tools/lldb-vscode/stop-hooks/main.c
  vscode-race-fix-3.patch

___
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits


[Lldb-commits] [PATCH] D119548: [lldb] Fix race condition between lldb-vscode and stop hooks executor

2022-02-16 Thread Ilya Nozhkin via Phabricator via lldb-commits
ilya-nozhkin updated this revision to Diff 409404.
ilya-nozhkin added a comment.
Herald added a subscriber: emaste.

Applied requested changes.


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D119548/new/

https://reviews.llvm.org/D119548

Files:
  lldb/include/lldb/Target/Process.h
  lldb/include/lldb/Target/Target.h
  lldb/packages/Python/lldbsuite/test/lldbtest.py
  lldb/packages/Python/lldbsuite/test/tools/lldb-vscode/lldbvscode_testcase.py
  lldb/source/Plugins/Platform/POSIX/PlatformPOSIX.cpp
  lldb/source/Plugins/Platform/QemuUser/PlatformQemuUser.cpp
  lldb/source/Plugins/Platform/Windows/PlatformWindows.cpp
  lldb/source/Plugins/Platform/gdb-server/PlatformRemoteGDBServer.cpp
  lldb/source/Target/Process.cpp
  lldb/source/Target/Target.cpp
  lldb/test/API/tools/lldb-vscode/stop-hooks/Makefile
  lldb/test/API/tools/lldb-vscode/stop-hooks/TestVSCode_stop_hooks.py
  lldb/test/API/tools/lldb-vscode/stop-hooks/main.c

Index: lldb/test/API/tools/lldb-vscode/stop-hooks/main.c
===
--- /dev/null
+++ lldb/test/API/tools/lldb-vscode/stop-hooks/main.c
@@ -0,0 +1 @@
+int main() { return 0; }
Index: lldb/test/API/tools/lldb-vscode/stop-hooks/TestVSCode_stop_hooks.py
===
--- /dev/null
+++ lldb/test/API/tools/lldb-vscode/stop-hooks/TestVSCode_stop_hooks.py
@@ -0,0 +1,35 @@
+"""
+Test stop hooks
+"""
+
+
+from lldbsuite.test.decorators import *
+from lldbsuite.test.lldbtest import *
+import lldbvscode_testcase
+
+
+class TestVSCode_stop_hooks(lldbvscode_testcase.VSCodeTestCaseBase):
+
+mydir = TestBase.compute_mydir(__file__)
+
+@skipIfRemote
+def test_stop_hooks_before_run(self):
+'''
+Test that there is no race condition between lldb-vscode and
+stop hooks executor
+'''
+program = self.getBuildArtifact("a.out")
+preRunCommands = ['target stop-hook add -o help']
+self.build_and_launch(program, stopOnEntry=True, preRunCommands=preRunCommands)
+
+# The first stop is on entry.
+self.continue_to_next_stop()
+
+breakpoint_ids = self.set_function_breakpoints(['main'])
+# This request hangs if the race happens, because, in that case, the
+# command interpreter is in synchronous mode while lldb-vscode expects
+# it to be in asynchronous mode, so, the process doesn't send the stop
+# event to "lldb.Debugger" listener (which is monitored by lldb-vscode).
+self.continue_to_breakpoints(breakpoint_ids)
+
+self.continue_to_exit()
Index: lldb/test/API/tools/lldb-vscode/stop-hooks/Makefile
===
--- /dev/null
+++ lldb/test/API/tools/lldb-vscode/stop-hooks/Makefile
@@ -0,0 +1,3 @@
+C_SOURCES := main.c
+
+include Makefile.rules
Index: lldb/source/Target/Target.cpp
===
--- lldb/source/Target/Target.cpp
+++ lldb/source/Target/Target.cpp
@@ -200,12 +200,17 @@
 const lldb::ProcessSP &Target::CreateProcess(ListenerSP listener_sp,
                                              llvm::StringRef plugin_name,
                                              const FileSpec *crash_file,
-                                             bool can_connect) {
+                                             bool can_connect,
+                                             ListenerSP hijack_listener_sp) {
   if (!listener_sp)
     listener_sp = GetDebugger().GetListener();
   DeleteCurrentProcess();
   m_process_sp = Process::FindPlugin(shared_from_this(), plugin_name,
                                      listener_sp, crash_file, can_connect);
+
+  if (m_process_sp && hijack_listener_sp)
+    m_process_sp->HijackProcessEvents(hijack_listener_sp);
+
   return m_process_sp;
 }
 
@@ -3026,6 +3031,14 @@
   if (!launch_info.GetArchitecture().IsValid())
     launch_info.GetArchitecture() = GetArchitecture();
 
+  // Hijacking events of the process to be created to be sure that all events
+  // until the first stop are intercepted (in case if platform doesn't define
+  // its own hijacking listener or if the process is created by the target
+  // manually, without the platform).
+  if (synchronous_execution && !launch_info.GetHijackListener())
+    launch_info.SetHijackListener(
+        Listener::MakeListener("lldb.Target.Launch.hijack"));
+
   // If we're not already connected to the process, and if we have a platform
   // that can launch a process for debugging, go ahead and do that here.
   if (state != eStateConnected && platform_sp &&
@@ -3053,7 +3066,8 @@
     } else {
       // Use a Process plugin to construct the process.
       const char *plugin_name = launch_info.GetProcessPluginName();
-      CreateProcess(launch_info.GetListener(), plugin_name, nullptr, false);
+      CreateProcess(launch_info.GetListener(), plugin_name, nullptr, false,
+   

[Lldb-commits] [PATCH] D119548: [lldb] Fix race condition between lldb-vscode and stop hooks executor

2022-02-17 Thread Ilya Nozhkin via Phabricator via lldb-commits
ilya-nozhkin updated this revision to Diff 409655.
ilya-nozhkin added a comment.

Made requested changes.

I agree with @jingham that if the listener is passed via launch info then we
don't need to activate it in `CreateProcess`. Activating it in
`CreateProcess` was a hack that allowed me to hijack events regardless of the
platform implementation. But if the listener is passed via launch info, then
platforms should forward it to `CreateProcess` anyway. So why not activate
the listener in the platform's code? On the one hand, it creates unnecessary
code duplication; on the other hand, it makes the responsibilities of
`CreateProcess` clearer. A possible compromise is to create a new method
`Target::PrepareProcessForLaunch` and set up the hijacker there.
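
For illustration, a hypothetical shape of that compromise;
PrepareProcessForLaunch does not exist in the tree, and the body just
stitches together calls that appear elsewhere in this review:

  // Hypothetical helper; the name and exact signature are illustrative only.
  lldb::ProcessSP
  Target::PrepareProcessForLaunch(ProcessLaunchInfo &launch_info) {
    lldb::ProcessSP process_sp =
        CreateProcess(launch_info.GetListener(),
                      launch_info.GetProcessPluginName(),
                      /*crash_file=*/nullptr, /*can_connect=*/false);
    // The hijack is set up in one target-owned place instead of being
    // duplicated across every platform's launch implementation.
    if (process_sp && launch_info.GetHijackListener())
      process_sp->HijackProcessEvents(launch_info.GetHijackListener());
    return process_sp;
  }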


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D119548/new/

https://reviews.llvm.org/D119548

Files:
  lldb/include/lldb/Target/Process.h
  lldb/packages/Python/lldbsuite/test/lldbtest.py
  lldb/packages/Python/lldbsuite/test/tools/lldb-vscode/lldbvscode_testcase.py
  lldb/source/Plugins/Platform/POSIX/PlatformPOSIX.cpp
  lldb/source/Plugins/Platform/QemuUser/PlatformQemuUser.cpp
  lldb/source/Plugins/Platform/Windows/PlatformWindows.cpp
  lldb/source/Plugins/Platform/gdb-server/PlatformRemoteGDBServer.cpp
  lldb/source/Target/Process.cpp
  lldb/source/Target/Target.cpp
  lldb/test/API/tools/lldb-vscode/stop-hooks/Makefile
  lldb/test/API/tools/lldb-vscode/stop-hooks/TestVSCode_stop_hooks.py
  lldb/test/API/tools/lldb-vscode/stop-hooks/main.c

Index: lldb/test/API/tools/lldb-vscode/stop-hooks/main.c
===
--- /dev/null
+++ lldb/test/API/tools/lldb-vscode/stop-hooks/main.c
@@ -0,0 +1 @@
+int main() { return 0; }
Index: lldb/test/API/tools/lldb-vscode/stop-hooks/TestVSCode_stop_hooks.py
===
--- /dev/null
+++ lldb/test/API/tools/lldb-vscode/stop-hooks/TestVSCode_stop_hooks.py
@@ -0,0 +1,35 @@
+"""
+Test stop hooks
+"""
+
+
+from lldbsuite.test.decorators import *
+from lldbsuite.test.lldbtest import *
+import lldbvscode_testcase
+
+
+class TestVSCode_stop_hooks(lldbvscode_testcase.VSCodeTestCaseBase):
+
+mydir = TestBase.compute_mydir(__file__)
+
+@skipIfRemote
+def test_stop_hooks_before_run(self):
+'''
+Test that there is no race condition between lldb-vscode and
+stop hooks executor
+'''
+program = self.getBuildArtifact("a.out")
+preRunCommands = ['target stop-hook add -o help']
+self.build_and_launch(program, stopOnEntry=True, preRunCommands=preRunCommands)
+
+# The first stop is on entry.
+self.continue_to_next_stop()
+
+breakpoint_ids = self.set_function_breakpoints(['main'])
+# This request hangs if the race happens, because, in that case, the
+# command interpreter is in synchronous mode while lldb-vscode expects
+# it to be in asynchronous mode, so, the process doesn't send the stop
+# event to "lldb.Debugger" listener (which is monitored by lldb-vscode).
+self.continue_to_breakpoints(breakpoint_ids)
+
+self.continue_to_exit()
Index: lldb/test/API/tools/lldb-vscode/stop-hooks/Makefile
===
--- /dev/null
+++ lldb/test/API/tools/lldb-vscode/stop-hooks/Makefile
@@ -0,0 +1,3 @@
+C_SOURCES := main.c
+
+include Makefile.rules
Index: lldb/source/Target/Target.cpp
===
--- lldb/source/Target/Target.cpp
+++ lldb/source/Target/Target.cpp
@@ -3026,6 +3026,14 @@
   if (!launch_info.GetArchitecture().IsValid())
     launch_info.GetArchitecture() = GetArchitecture();
 
+  // Hijacking events of the process to be created to be sure that all events
+  // until the first stop are intercepted (in case if platform doesn't define
+  // its own hijacking listener or if the process is created by the target
+  // manually, without the platform).
+  if (!launch_info.GetHijackListener())
+    launch_info.SetHijackListener(
+        Listener::MakeListener("lldb.Target.Launch.hijack"));
+
   // If we're not already connected to the process, and if we have a platform
   // that can launch a process for debugging, go ahead and do that here.
   if (state != eStateConnected && platform_sp &&
@@ -3057,8 +3065,10 @@
     }
 
     // Since we didn't have a platform launch the process, launch it here.
-    if (m_process_sp)
+    if (m_process_sp) {
+      m_process_sp->HijackProcessEvents(launch_info.GetHijackListener());
       error = m_process_sp->Launch(launch_info);
+    }
   }
 
   if (!m_process_sp && error.Success())
@@ -3067,35 +3077,35 @@
   if (!error.Success())
     return error;
 
-  auto at_exit =
-      llvm::make_scope_exit([&]() { m_process_sp->RestoreProcessEvents(); });
+  bool rebroadcast_first_stop =
+      !synchronous_execution &&
+      launch_info.Ge

[Lldb-commits] [PATCH] D119548: [lldb] Fix race condition between lldb-vscode and stop hooks executor

2022-02-17 Thread Ilya Nozhkin via Phabricator via lldb-commits
ilya-nozhkin added a comment.

In D119548#3330226 , @clayborg wrote:

> Try out the https://reviews.llvm.org/D119797 patch and see if this fixes this 
> issue? I have just updated it after responding to comments.

Yes, it fixes this particular race. But I think the issue is not limited to
stop hooks and lldb-vscode: other tools can do something similar to
lldb-vscode and hit the same issue. So I think it's still worth fixing the
event dispatching.




Comment at: lldb/source/Target/Process.cpp:2458
+  if (launch_info.GetFlags().Test(eLaunchFlagStopAtEntry))
+    HandlePrivateEvent(first_stop_event_sp);
+

Btw, this seems to be the line that sends the public stop event, so adding a
synchronicity check to the condition should fix the issue as well. But I'd
keep the target's hijacking listener too, because we still want to intercept
all events during the launch, not remove them.
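
Sketched against the hunk quoted above, the suggested guard would look
roughly like this (exactly how to query the synchronicity at this point is my
assumption):

  // Only re-broadcast the first stop to the public listener in asynchronous
  // mode; in synchronous mode the launching thread consumes the stop itself.
  if (launch_info.GetFlags().Test(eLaunchFlagStopAtEntry) &&
      GetTarget().GetDebugger().GetAsyncExecution())
    HandlePrivateEvent(first_stop_event_sp);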


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D119548/new/

https://reviews.llvm.org/D119548

___
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits


[Lldb-commits] [PATCH] D119548: [lldb] Fix race condition between lldb-vscode and stop hooks executor

2022-02-21 Thread Ilya Nozhkin via Phabricator via lldb-commits
ilya-nozhkin added a comment.

In D119548#3334776 , @labath wrote:

> Do you need someone to commit this for you? (I can probably do that tomorrow)

Yes, that would be nice. I don't have commit access.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D119548/new/

https://reviews.llvm.org/D119548

___
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits


[Lldb-commits] [PATCH] D120319: Set error message if ValueObjectRegister fails to write back to register

2022-02-22 Thread Ilya Nozhkin via Phabricator via lldb-commits
ilya-nozhkin created this revision.
ilya-nozhkin added reviewers: labath, clayborg.
Herald added a subscriber: pengfei.
ilya-nozhkin requested review of this revision.
Herald added a project: LLDB.
Herald added a subscriber: lldb-commits.

The `SetValueFromCString` and `SetData` methods return false if a register
can't be written, but they don't set an error message. This sometimes
confuses callers of these methods: they try to get the error message in case
of failure, but `Status::AsCString` returns `nullptr`.

For example, `lldb-vscode` crashes due to this bug if some register can't be
written. It invokes `SBError::GetCString` in case of error and doesn't check
whether the result is `nullptr` (see the `request_setVariable` implementation
in `lldb-vscode.cpp` for more info).

I couldn't write a test for this issue on the x86 architecture because all
x86 registers are writable. It might be possible to add such a test for some
other testable architecture, but I don't know much about them and I don't
have a machine with any architecture other than x86 to actually run the test.
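
For illustration, the crash pattern described above boils down to something
like this (a condensed sketch, not the verbatim request_setVariable code):

  lldb::SBError error;
  bool success = reg_value.SetValueFromCString("10", error);
  if (!success) {
    // Before this patch, error.GetCString() could return nullptr here, and
    // constructing a std::string from a null pointer is undefined behavior.
    std::string message(error.GetCString());
  }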


Repository:
  rG LLVM Github Monorepo

https://reviews.llvm.org/D120319

Files:
  lldb/source/Core/ValueObjectRegister.cpp


Index: lldb/source/Core/ValueObjectRegister.cpp
===
--- lldb/source/Core/ValueObjectRegister.cpp
+++ lldb/source/Core/ValueObjectRegister.cpp
@@ -269,26 +269,30 @@
   // The new value will be in the m_data.  Copy that into our register value.
   error =
       m_reg_value.SetValueFromString(&m_reg_info, llvm::StringRef(value_str));
-  if (error.Success()) {
-    if (m_reg_ctx_sp->WriteRegister(&m_reg_info, m_reg_value)) {
-      SetNeedsUpdate();
-      return true;
-    } else
-      return false;
-  } else
+  if (!error.Success())
     return false;
+
+  if (!m_reg_ctx_sp->WriteRegister(&m_reg_info, m_reg_value)) {
+    error.SetErrorString("unable to write back to register");
+    return false;
+  }
+
+  SetNeedsUpdate();
+  return true;
 }
 
 bool ValueObjectRegister::SetData(DataExtractor &data, Status &error) {
   error = m_reg_value.SetValueFromData(&m_reg_info, data, 0, false);
-  if (error.Success()) {
-    if (m_reg_ctx_sp->WriteRegister(&m_reg_info, m_reg_value)) {
-      SetNeedsUpdate();
-      return true;
-    } else
-      return false;
-  } else
+  if (!error.Success())
     return false;
+
+  if (!m_reg_ctx_sp->WriteRegister(&m_reg_info, m_reg_value)) {
+    error.SetErrorString("unable to write back to register");
+    return false;
+  }
+
+  SetNeedsUpdate();
+  return true;
 }
 
 bool ValueObjectRegister::ResolveValue(Scalar &scalar) {


___
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits


[Lldb-commits] [PATCH] D120319: Set error message if ValueObjectRegister fails to write back to register

2022-02-23 Thread Ilya Nozhkin via Phabricator via lldb-commits
ilya-nozhkin updated this revision to Diff 411015.
ilya-nozhkin edited the summary of this revision.
ilya-nozhkin added a comment.

Added a test based on core files debugging. `RegisterContextCorePOSIX_x86_64` 
indeed forbids writing registers.


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D120319/new/

https://reviews.llvm.org/D120319

Files:
  lldb/source/Core/ValueObjectRegister.cpp
  lldb/test/API/functionalities/postmortem/elf-core/TestLinuxCore.py


Index: lldb/test/API/functionalities/postmortem/elf-core/TestLinuxCore.py
===
--- lldb/test/API/functionalities/postmortem/elf-core/TestLinuxCore.py
+++ lldb/test/API/functionalities/postmortem/elf-core/TestLinuxCore.py
@@ -133,6 +133,30 @@
         self.assertEqual(len(bytesread), 16)
         self.dbg.DeleteTarget(target)
 
+    @skipIfLLVMTargetMissing("X86")
+    def test_write_register(self):
+        """Test that writing to register results in an error and that error
+           message is set."""
+        target: lldb.SBTarget = self.dbg.CreateTarget("linux-x86_64.out")
+        process: lldb.SBProcess = target.LoadCore("linux-x86_64.core")
+        self.assertTrue(process, PROCESS_IS_VALID)
+
+        thread: lldb.SBThread = process.GetSelectedThread()
+        self.assertTrue(thread)
+
+        frame: lldb.SBFrame = thread.GetSelectedFrame()
+        self.assertTrue(frame)
+
+        reg_value: lldb.SBValue = frame.FindRegister('eax')
+        if not reg_value.IsValid():
+            return
+
+        error = lldb.SBError()
+        success = reg_value.SetValueFromCString('10', error)
+        self.assertFalse(success)
+        self.assertTrue(error.Fail())
+        self.assertIsNotNone(error.GetCString())
+
     @skipIfLLVMTargetMissing("X86")
     def test_FPR_SSE(self):
         # check x86_64 core file
Index: lldb/source/Core/ValueObjectRegister.cpp
===
--- lldb/source/Core/ValueObjectRegister.cpp
+++ lldb/source/Core/ValueObjectRegister.cpp
@@ -269,26 +269,30 @@
   // The new value will be in the m_data.  Copy that into our register value.
   error =
       m_reg_value.SetValueFromString(&m_reg_info, llvm::StringRef(value_str));
-  if (error.Success()) {
-    if (m_reg_ctx_sp->WriteRegister(&m_reg_info, m_reg_value)) {
-      SetNeedsUpdate();
-      return true;
-    } else
-      return false;
-  } else
+  if (!error.Success())
     return false;
+
+  if (!m_reg_ctx_sp->WriteRegister(&m_reg_info, m_reg_value)) {
+    error.SetErrorString("unable to write back to register");
+    return false;
+  }
+
+  SetNeedsUpdate();
+  return true;
 }
 
 bool ValueObjectRegister::SetData(DataExtractor &data, Status &error) {
   error = m_reg_value.SetValueFromData(&m_reg_info, data, 0, false);
-  if (error.Success()) {
-    if (m_reg_ctx_sp->WriteRegister(&m_reg_info, m_reg_value)) {
-      SetNeedsUpdate();
-      return true;
-    } else
-      return false;
-  } else
+  if (!error.Success())
     return false;
+
+  if (!m_reg_ctx_sp->WriteRegister(&m_reg_info, m_reg_value)) {
+    error.SetErrorString("unable to write back to register");
+    return false;
+  }
+
+  SetNeedsUpdate();
+  return true;
 }
 
 bool ValueObjectRegister::ResolveValue(Scalar &scalar) {



[Lldb-commits] [PATCH] D120319: Set error message if ValueObjectRegister fails to write back to register

2022-02-24 Thread Ilya Nozhkin via Phabricator via lldb-commits
ilya-nozhkin updated this revision to Diff 411065.
ilya-nozhkin added a comment.

Replaced the redundant check in the test with an assert. Also removed the
type annotations to be consistent with the surrounding code.


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D120319/new/

https://reviews.llvm.org/D120319

Files:
  lldb/source/Core/ValueObjectRegister.cpp
  lldb/test/API/functionalities/postmortem/elf-core/TestLinuxCore.py


Index: lldb/test/API/functionalities/postmortem/elf-core/TestLinuxCore.py
===
--- lldb/test/API/functionalities/postmortem/elf-core/TestLinuxCore.py
+++ lldb/test/API/functionalities/postmortem/elf-core/TestLinuxCore.py
@@ -133,6 +133,29 @@
         self.assertEqual(len(bytesread), 16)
         self.dbg.DeleteTarget(target)
 
+    @skipIfLLVMTargetMissing("X86")
+    def test_write_register(self):
+        """Test that writing to register results in an error and that error
+           message is set."""
+        target = self.dbg.CreateTarget("linux-x86_64.out")
+        process = target.LoadCore("linux-x86_64.core")
+        self.assertTrue(process, PROCESS_IS_VALID)
+
+        thread = process.GetSelectedThread()
+        self.assertTrue(thread)
+
+        frame = thread.GetSelectedFrame()
+        self.assertTrue(frame)
+
+        reg_value = frame.FindRegister('eax')
+        self.assertTrue(reg_value)
+
+        error = lldb.SBError()
+        success = reg_value.SetValueFromCString('10', error)
+        self.assertFalse(success)
+        self.assertTrue(error.Fail())
+        self.assertIsNotNone(error.GetCString())
+
     @skipIfLLVMTargetMissing("X86")
     def test_FPR_SSE(self):
         # check x86_64 core file
Index: lldb/source/Core/ValueObjectRegister.cpp
===
--- lldb/source/Core/ValueObjectRegister.cpp
+++ lldb/source/Core/ValueObjectRegister.cpp
@@ -269,26 +269,30 @@
   // The new value will be in the m_data.  Copy that into our register value.
   error =
       m_reg_value.SetValueFromString(&m_reg_info, llvm::StringRef(value_str));
-  if (error.Success()) {
-    if (m_reg_ctx_sp->WriteRegister(&m_reg_info, m_reg_value)) {
-      SetNeedsUpdate();
-      return true;
-    } else
-      return false;
-  } else
+  if (!error.Success())
     return false;
+
+  if (!m_reg_ctx_sp->WriteRegister(&m_reg_info, m_reg_value)) {
+    error.SetErrorString("unable to write back to register");
+    return false;
+  }
+
+  SetNeedsUpdate();
+  return true;
 }
 
 bool ValueObjectRegister::SetData(DataExtractor &data, Status &error) {
   error = m_reg_value.SetValueFromData(&m_reg_info, data, 0, false);
-  if (error.Success()) {
-    if (m_reg_ctx_sp->WriteRegister(&m_reg_info, m_reg_value)) {
-      SetNeedsUpdate();
-      return true;
-    } else
-      return false;
-  } else
+  if (!error.Success())
     return false;
+
+  if (!m_reg_ctx_sp->WriteRegister(&m_reg_info, m_reg_value)) {
+    error.SetErrorString("unable to write back to register");
+    return false;
+  }
+
+  SetNeedsUpdate();
+  return true;
 }
 
 bool ValueObjectRegister::ResolveValue(Scalar &scalar) {



[Lldb-commits] [PATCH] D120319: Set error message if ValueObjectRegister fails to write back to register

2022-02-24 Thread Ilya Nozhkin via Phabricator via lldb-commits
ilya-nozhkin added a comment.

In D120319#3342657 , @labath wrote:

> Another possibility is to use the gdb-client test infrastructure to simulate 
> a server which refuses to write to a register

It would be the most proper way, I agree, but I think it's overkill for such
a simple case.


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D120319/new/

https://reviews.llvm.org/D120319

___
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits


[Lldb-commits] [PATCH] D120319: Set error message if ValueObjectRegister fails to write back to register

2022-02-27 Thread Ilya Nozhkin via Phabricator via lldb-commits
ilya-nozhkin added a comment.

Could someone please commit these changes to the main repository? I still
don't have commit access. I'll apply for it right after these changes are
merged (the LLVM Developer Policy states that it's better to have several
patches already submitted before applying for commit access).


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D120319/new/

https://reviews.llvm.org/D120319

___
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits


[Lldb-commits] [PATCH] D120319: Set error message if ValueObjectRegister fails to write back to register

2022-02-27 Thread Ilya Nozhkin via Phabricator via lldb-commits
ilya-nozhkin added a comment.

In D120319#3348267 , @ChuanqiXu wrote:

> What's your name and email address for this patch?

Ilya Nozhkin
nozhkin...@gmail.com


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D120319/new/

https://reviews.llvm.org/D120319

___
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits


[Lldb-commits] [PATCH] D86652: [lldb] Fix ProcessEventData::DoOnRemoval to support multiple listeners

2020-08-26 Thread Ilya Nozhkin via Phabricator via lldb-commits
ilya-nozhkin created this revision.
ilya-nozhkin added reviewers: jingham, clayborg, labath.
ilya-nozhkin added a project: LLDB.
Herald added subscribers: lldb-commits, JDevlieghere.
ilya-nozhkin requested review of this revision.

The process does something with its state and threads when an event is
removed from some public listener's event queue. The comment at the
beginning of DoOnRemoval and the condition checking m_update_state say
that all these actions on the process state and threads must be
executed only once, and only after the private listener has already
handled the event. However, m_update_state is never modified except by
the invocation of SetUpdateStateOnRemoval in HandlePrivateEvent. This
means that m_update_state becomes 1 after an event is handled by the
private listener and is never changed again. So, if more than one
listener listens for process events (for example, if someone adds a
listener following the example [1]), the code in DoOnRemoval may be
executed more than once. Moreover, since events are handled
asynchronously, DoOnRemoval may be executed by two threads
simultaneously, which leads to a data race.

This commit replaces the m_update_state-based mechanism with a check of
the event's broadcaster at the beginning of DoOnRemoval plus a boolean
flag telling whether DoOnRemoval's logic has already been executed. A
new mutex is also locked during the whole execution of DoOnRemoval.
This makes DoOnRemoval thread-safe and ensures that the public state of
a process is correct for all threads invoking DoOnRemoval.

[1] https://lldb.llvm.org/python_reference/lldb.SBEvent-class.html
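
For illustration, the kind of extra listener from the example in [1] can be
set up via the C++ SB API roughly like this (a condensed sketch; the listener
name is made up and error checking is omitted):

  lldb::SBListener extra_listener("my.extra.listener");
  lldb::SBBroadcaster broadcaster = process.GetBroadcaster();
  broadcaster.AddListener(extra_listener,
                          lldb::SBProcess::eBroadcastBitStateChanged);

  lldb::SBEvent event;
  // Both the debugger's default listener and this one now pull the same stop
  // event off their queues; before this patch, each pull re-ran DoOnRemoval.
  if (extra_listener.WaitForEvent(/*num_seconds=*/1, event)) {
    lldb::StateType state = lldb::SBProcess::GetStateFromEvent(event);
    (void)state; // react to the stop here
  }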


Repository:
  rG LLVM Github Monorepo

https://reviews.llvm.org/D86652

Files:
  lldb/include/lldb/Target/Process.h
  lldb/source/Target/Process.cpp
  lldb/unittests/Process/ProcessEventDataTest.cpp

Index: lldb/unittests/Process/ProcessEventDataTest.cpp
===
--- lldb/unittests/Process/ProcessEventDataTest.cpp
+++ lldb/unittests/Process/ProcessEventDataTest.cpp
@@ -169,7 +169,7 @@
   ProcessEventDataSP event_data_sp =
   std::make_shared(process_sp, eStateStopped);
   EventSP event_sp = std::make_shared(0, event_data_sp);
-  event_data_sp->SetUpdateStateOnRemoval(event_sp.get());
+  process_sp->BroadcastEvent(event_sp);
   event_data_sp->DoOnRemoval(event_sp.get());
   bool result = static_cast(event_data_sp.get())
 ->m_should_stop_hit_count == 1;
@@ -181,7 +181,7 @@
   event_data_sp =
   std::make_shared(process_sp, eStateStepping);
   event_sp = std::make_shared(0, event_data_sp);
-  event_data_sp->SetUpdateStateOnRemoval(event_sp.get());
+  process_sp->BroadcastEvent(event_sp);
   event_data_sp->DoOnRemoval(event_sp.get());
   result = static_cast(event_data_sp.get())
->m_should_stop_hit_count == 0;
Index: lldb/source/Target/Process.cpp
===
--- lldb/source/Target/Process.cpp
+++ lldb/source/Target/Process.cpp
@@ -3774,7 +3774,6 @@
 StateAsCString(GetState()),
 is_hijacked ? "hijacked" : "public");
 }
-Process::ProcessEventData::SetUpdateStateOnRemoval(event_sp.get());
 if (StateIsRunningState(new_state)) {
   // Only push the input handler if we aren't fowarding events, as this
   // means the curses GUI is in use... Or don't push it if we are launching
@@ -3987,12 +3986,12 @@
 
 Process::ProcessEventData::ProcessEventData()
 : EventData(), m_process_wp(), m_state(eStateInvalid), m_restarted(false),
-  m_update_state(0), m_interrupted(false) {}
+  m_do_on_removal_executed(false), m_interrupted(false) {}
 
 Process::ProcessEventData::ProcessEventData(const ProcessSP &process_sp,
 StateType state)
 : EventData(), m_process_wp(), m_state(state), m_restarted(false),
-  m_update_state(0), m_interrupted(false) {
+  m_do_on_removal_executed(false), m_interrupted(false) {
   if (process_sp)
 m_process_wp = process_sp;
 }
@@ -4117,20 +4116,34 @@
 }
 
 void Process::ProcessEventData::DoOnRemoval(Event *event_ptr) {
-  ProcessSP process_sp(m_process_wp.lock());
+  // This function gets called each time an event gets pulled off of some
+  // listener's event queue. However, all the code below must be executed only
+  // once and after an event is already handled by the private state listener.
+  // So, here are two guards: the first of them checks that this event has been
+  // broadcasted by the public broadcaster and the second checks that this code
+  // hasn't been executed yet.
+
+  Broadcaster *broadcaster = event_ptr->GetBroadcaster();
+  if (!broadcaster ||
+  broadcaster->GetBroadcasterName() != Process::GetStaticBroadcasterClass())
+return;
 
-  if (!process_sp)
+  // Since different listeners are usually monitored by different threads, two
+  // threads may enter this function simultaneously. This l

[Lldb-commits] [PATCH] D86652: [lldb] Fix ProcessEventData::DoOnRemoval to support multiple listeners

2020-08-27 Thread Ilya Nozhkin via Phabricator via lldb-commits
ilya-nozhkin added a comment.

Yeah, I understand the problem of two listeners trying to change the process
state simultaneously, and I agree that informing the other listeners once the
primary one has finished its job would be a great solution. But the problem
is that the primary listener just provides an event to some user code (via
GetEvent), and there is no way to find out when this code has finished
without requiring it to invoke a handshake explicitly.

One possible solution is to send a specially marked event to the primary
listener first and make this event invoke a handshake in its destructor
(which would mean that the primary listener doesn't need the event anymore).
The handshake could be implemented as simply rebroadcasting a new instance of
the event to the other listeners (or via unlocking some additional mutex /
notifying some condition variable). However, the destructor may not be
invoked immediately after event processing has finished: for example, a
primary listener's thread may define an EventSP before the listening cycle
and not reset it until GetEvent returns a new one (which may not happen until
the next stop).

Anyway, the concept of primary and secondary listeners requires forbidding
secondary listeners to do "step" or "continue" at all. So why can't we do the
same without any additional code? I think the most common case of multiple
listeners is a Python extension adding a listener to read/write some
memory/registers on each stop while taking the RunLock. Such an extension
does not interfere with a primary listener except for the simultaneous
DoOnRemoval invocation (which is fixed by this patch). Moreover, as I
understand it, there is no other way to implement such an extension (because
if one tried to use the Debugger's default listener, then all its other
users, such as lldb-vscode, couldn't handle process events anymore).


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D86652/new/

https://reviews.llvm.org/D86652

___
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits


[Lldb-commits] [PATCH] D86652: [lldb] Fix ProcessEventData::DoOnRemoval to support multiple listeners

2020-08-30 Thread Ilya Nozhkin via Phabricator via lldb-commits
ilya-nozhkin added a comment.
Herald added a subscriber: danielkiss.

I've realized that I just didn't know about stop hooks... So, sorry and
thanks; they solve almost all my problems except for notifications about
"resume" events, which are important for me too. But I can use the
synchronous `Process::Notifications` to receive those (I couldn't use
`Notifications` to receive stop events because they are sent before the
public state is changed by `DoOnRemoval`). But then I thought: why don't we
just move the `SynchronouslyNotifyStateChanged` invocation from
`ShouldBroadcastEvent` to `DoOnRemoval`? It would provide the most convenient
way of receiving process events without interfering with the primary
listener. I can make these changes in another patch instead of this one.
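
For reference, the synchronous route mentioned above is the
lldb_private::Process::Notifications callback struct; hooking it up looks
roughly like this (a simplified sketch; InstallCallbacks and OnStateChanged
are illustrative names):

  static void OnStateChanged(void *baton, lldb_private::Process *process,
                             lldb::StateType state) {
    // Invoked synchronously by the process, without going through any
    // listener queue, so it cannot race with a listener's DoOnRemoval.
  }

  static void InstallCallbacks(lldb_private::Process &process) {
    lldb_private::Process::Notifications callbacks = {};
    callbacks.baton = nullptr;
    callbacks.initialize = nullptr;
    callbacks.process_state_changed = OnStateChanged;
    process.RegisterNotificationCallbacks(callbacks);
  }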

In D86652#2244043 , @labath wrote:

> This primary listener technique could be implemented purely in your own code, 
> could it not?

I am trying to write an extension that can work with tools such as
lldb-vscode and lldb-mi without disturbing them. They use the default
`Debugger` listener, so I can't change the code of that listener on my side,
and thus I can't force them to notify my listener.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D86652/new/

https://reviews.llvm.org/D86652

___
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits