Hi Yifan,

On 3/23/26 2:18 AM, wuyifan wrote:
> Hi Reinette,
>
> On 3/13/2026 4:53 AM, Reinette Chatre wrote:
>> Hi Yifan,
>>
>> On 3/3/26 8:03 PM, Yifan Wu wrote:
>>> 3. Since only the child process is monitored, the CPU affinity should
>>> also be set only on the child process to ensure that the PMU (Performance
>>> Monitoring Unit) can count memory bandwidth from the benchmark process.
>>
>> A child process inherits its parent's CPU affinity mask, no?
>>
>>> 4. When the parent and child processes are scheduled on the same CPU,
>>> the parent process's activity may interfere with the monitoring of
>>> the child process. This is particularly problematic in some ARM MPAM
>>
>> Which tests are encountering issues? For MBM and MBA I do not think this
>> matters since the tests just compare perf and resctrl numbers and perf
>> will already contain all bandwidth associated with the PMU so isolation here
>> may require to potentially take all traffic off a socket.
>> CMT tests may see some impact since it compares cache occupancy to the
>> size of the buffer being read. I do have a pending series that aims to
>> address some issues with the CMT test:
>> https://lore.kernel.org/linux-patches/[email protected]/
>>
>>> implementations, where memory bandwidth monitoring provides real-time
>>> values. When the child process is preempted off the CPU, this results
>>> in inaccurate monitoring.
>>
>> This motivation is not clear to me. Could you please elaborate how inaccurate
>> this monitoring gets? The child process may be preempted but it should not be
>> moved from the CPU in its affinity mask.
>
> The bandwidth monitoring feature on certain systems only ensures
> that tasks currently scheduled to the CPU are correctly counted by the
> hardware. The counters for tasks that have been scheduled out of the CPU
> will be out of date. Therefore, separating parent and child processes
> to run on different CPUs can ensure that bandwidth monitoring correctly
> counts the real-time bandwidth.
> This approach also works on other systems.
Does this imply that when a task is scheduled out of the CPU then its
counter is dynamically re-assigned to another task? This sounds like
behavior that needs to be addressed by the driver and not accommodated
by the test. Since this failed on the first system I tried I cannot
trust the argument that this works on other systems.

>>> This commit moves the CPU affinity and resctrl FS setup to the child
>>> process after fork(), ensuring these settings only affect the benchmark
>>> process, thereby maintaining measurement accuracy and making the
>>> implementation more portable across platforms.
>>>
>>> Signed-off-by: Yifan Wu <[email protected]>
>>> ---
>>>  tools/testing/selftests/resctrl/resctrl_val.c | 68 +++++++++++--------
>>>  1 file changed, 39 insertions(+), 29 deletions(-)
>>>
>>> diff --git a/tools/testing/selftests/resctrl/resctrl_val.c b/tools/testing/selftests/resctrl/resctrl_val.c
>>> index 7c08e936572d..85ac96c7cb8f 100644
>>> --- a/tools/testing/selftests/resctrl/resctrl_val.c
>>> +++ b/tools/testing/selftests/resctrl/resctrl_val.c
>>> @@ -545,7 +545,6 @@ int resctrl_val(const struct resctrl_test *test,
>>>  	cpu_set_t old_affinity;
>>>  	int domain_id;
>>>  	int ret = 0;
>>> -	pid_t ppid;
>>>
>>>  	if (strcmp(param->filename, "") == 0)
>>>  		sprintf(param->filename, "stdio");
>>> @@ -556,22 +555,10 @@ int resctrl_val(const struct resctrl_test *test,
>>>  		return ret;
>>>  	}
>>>
>>> -	ppid = getpid();
>>> -
>>> -	/* Taskset test to specified CPU. */
>>> -	ret = taskset_benchmark(ppid, uparams->cpu, &old_affinity);
>>> -	if (ret)
>>> -		return ret;
>>> -
>>> -	/* Write test to specified control & monitoring group in resctrl FS. */
>>> -	ret = write_bm_pid_to_resctrl(ppid, param->ctrlgrp, param->mongrp);
>>> -	if (ret)
>>> -		goto reset_affinity;
>>> -
>>>  	if (param->init) {
>>>  		ret = param->init(param, domain_id);
>>
>> write_bm_pid_to_resctrl() does more than just write the task ID to the
>> tasks file, it also creates the resource groups in its parameters.
>> By moving it after the param->init() callback it prevents any test
>> specific initialization from being done in the test's resource groups.
>>
> I see. Since mbm_total_path, which is used in the parent process during
> param->measure(), is initialized in initialize_mem_bw_resctrl(), we
> cannot move param->init() to the child process. However, we can perform
> the test-specific resource group configuration in param->setup(), as we
> did in mba_setup().

Setup that only applies to the test itself can reasonably be done within
the actual test workload but test initialization via the init() callback
can reasonably expect that the resource groups exist.

Consider the CMT series I mentioned to you earlier, I have since submitted v3:
https://lore.kernel.org/linux-patches/[email protected]/

It seems better to split resource group creation from the task ID assignment.

>
>>>  		if (ret)
>>> -			goto reset_affinity;
>>> +			return ret;
>>>  	}
>>>
>>>  	/*
>>> @@ -586,10 +573,8 @@ int resctrl_val(const struct resctrl_test *test,
>>>  	if (param->fill_buf) {
>>>  		buf = alloc_buffer(param->fill_buf->buf_size,
>>>  				   param->fill_buf->memflush);
>>
>> This is the buffer on which the workload will operate and by having
>> different CPU affinity between parent and child the buffer may be
>> created in a separate domain.
>>
>> Consider for example if I apply just this patch on a test system and
>> try out the MBM test it now fails:
>> 	1..1
>> 	# Starting MBM test ...
>> 	# Mounting resctrl to "/sys/fs/resctrl"
>> 	# Benchmark PID: 6127
>> 	# Writing benchmark parameters to resctrl FS
>> 	# Write schema "MB:0=100" to resctrl FS
>> 	# Checking for pass/fail
>> 	# Fail: Check MBM diff within 15%
>> 	# avg_diff_per: 100%
>> 	# Span (MB): 1280
>> 	# avg_bw_imc: 50
>> 	# avg_bw_resc: 0
>> 	not ok 1 MBM: test
>>
>> What happened here is that the buffer was created by the parent in one
>> domain but the child's affinity was set to another domain.
>> The test assumes that it can count local memory and all of those
>> counts are zero for resctrl since the memory is not local. Similarly
>> low numbers from perf for the same reason.
>
> Yeah, if the CPU affinity setting is moved to the child process, it will
> also be necessary to move the memory allocation to the child process. In
> addition, what about setting the mempolicy to the local domain when
> setting the CPU affinity? Like this:

Is local allocation not the default?

Reinette

