On 06/12/2018 10:51, Jan Beulich wrote:
>
>> + unsigned int socket = c->phys_proc_id, core = c->cpu_core_id;
>> + struct ssbd_ls_cfg *cfg;
>> + uint64_t val;
>> +
>> + ASSERT(cpu_has_legacy_ssbd);
>> +
>> + /*
>> + * Update hardware lazily, as these MSRs are expensive. However, on
>> + * the boot paths which pass NULL, force a write to set a consistent
>> + * initial state.
>> + */
>> + if (*this_ssbd == disable && next)
>> + return;
>> +
>> + if (cpu_has_virt_sc_ssbd) {
>> + wrmsrl(MSR_VIRT_SPEC_CTRL,
>> + disable ? SPEC_CTRL_SSBD : 0);
>> + goto done;
>> + }
>> +
>> + val = ls_cfg_base | (disable ? ls_cfg_ssbd_mask : 0);
>> +
>> + if (c->x86 < 0x17 || c->x86_num_siblings == 1) {
>> + /* No threads to be concerned with. */
>> + wrmsrl(MSR_AMD64_LS_CFG, val);
>> + goto done;
>> + }
>> +
>> + /* Check that we won't overflow the worst-case allocation. */
>> + BUG_ON(socket >= ARRAY_SIZE(ssbd_ls_cfg));
>> + BUG_ON(core >= ssbd_max_cores);
> Wouldn't it be better to fail onlining of such CPUs?
How? We don't currently have the ability to fail in the middle of
start_secondary(), which is why the previous patch really does go and
allocate the worst case.
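For concreteness, roughly the shape I mean (an illustrative sketch only;
MAX_SOCKETS, nr_sockets and ssbd_ls_cfg_alloc() are placeholder names, not
necessarily what the previous patch actually uses):

    /* One slot per possible core on each possible socket, sized at boot. */
    static struct ssbd_ls_cfg *ssbd_ls_cfg[MAX_SOCKETS];
    static unsigned int ssbd_max_cores;

    static int __init ssbd_ls_cfg_alloc(unsigned int nr_sockets,
                                        unsigned int max_cores)
    {
        unsigned int s, c;

        ssbd_max_cores = max_cores;

        for (s = 0; s < nr_sockets; s++) {
            ssbd_ls_cfg[s] = xzalloc_array(struct ssbd_ls_cfg, max_cores);
            if (!ssbd_ls_cfg[s])
                return -ENOMEM;

            for (c = 0; c < max_cores; c++)
                spin_lock_init(&ssbd_ls_cfg[s][c].lock);
        }

        return 0;
    }

i.e. everything start_secondary() will ever index is populated before any
AP comes up, so the online path itself has no failure case, and the
BUG_ON()s are only guarding against the topology numbering being wrong.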
These are here because I don't really trust the topology logic (which
turned out to be very wise, in retrospect), not because I actually expect
them to trigger from now on.
>
>> + cfg = &ssbd_ls_cfg[socket][core];
>> +
>> + if (disable) {
>> + spin_lock(&cfg->lock);
>> +
>> + /* First sibling to disable updates hardware. */
>> + if (!cfg->disable_count)
>> + wrmsrl(MSR_AMD64_LS_CFG, val);
>> + cfg->disable_count++;
>> +
>> + spin_unlock(&cfg->lock);
>> + } else {
>> + spin_lock(&cfg->lock);
>> +
>> + /* Last sibling to enable updates hardware. */
>> + cfg->disable_count--;
>> + if (!cfg->disable_count)
>> + wrmsrl(MSR_AMD64_LS_CFG, val);
>> +
>> + spin_unlock(&cfg->lock);
>> + }
> Any reason for duplicating the spin_{,un}lock() calls?
To avoid having a context-dependent jump in the critical region. Then
again, I suppose that is completely dwarfed by the WRMSR.
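
For comparison, folding it down to a single lock/unlock pair would look
something like this (a sketch of the alternative, not what the patch
currently does):

    cfg = &ssbd_ls_cfg[socket][core];

    spin_lock(&cfg->lock);

    if (disable) {
        /* First sibling to disable updates hardware. */
        if (!cfg->disable_count)
            wrmsrl(MSR_AMD64_LS_CFG, val);
        cfg->disable_count++;
    } else {
        /* Last sibling to enable updates hardware. */
        cfg->disable_count--;
        if (!cfg->disable_count)
            wrmsrl(MSR_AMD64_LS_CFG, val);
    }

    spin_unlock(&cfg->lock);

which trades the duplicated spin_{,un}lock() calls for a branch inside the
critical region; either way the cost is dominated by the WRMSR.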
~Andrew