On 01.09.20 23:36, Richard Henderson wrote:
> On 9/1/20 11:34 AM, Helge Deller wrote:
>> diff --git a/hw/hppa/machine.c b/hw/hppa/machine.c
>> index 90aeefe2a4..e9d84d0f03 100644
>> --- a/hw/hppa/machine.c
>> +++ b/hw/hppa/machine.c
>> @@ -72,6 +72,14 @@ static FWCfgState *create_fw_cfg(MachineState *ms)
>>      fw_cfg_add_file(fw_cfg, "/etc/firmware-min-version",
>>                      g_memdup(&val, sizeof(val)), sizeof(val));
>>
>> +    val = cpu_to_le64(HPPA_TLB_ENTRIES);
>
> I guess you don't have a cpu structure here against which you could
> apply ARRAY_SIZE?
You mean calculating the number of TLB entries from the static size of
the array, e.g. ARRAY_SIZE() applied to the tlb[] field of struct
CPUHPPAState?

I intentionally replaced it with a constant, because a follow-up patch
which improves the TLB-lookup speed replaces that fixed array with a
GTree.  You can see a working version of the patch here:
https://github.com/hdeller/qemu-hppa/commit/644790132b91cdb835c8dd38198e6f6ed0b533a1

In this patch:
-    hppa_tlb_entry tlb[HPPA_TLB_ENTRIES];
+    GTree *tlb;
+    GList *tlb_list;
+    int tlb_entries;

(A rough sketch of how such a GTree-based lookup could work is appended
at the end of this mail.)

>>      /* ??? The number of entries isn't specified by the architecture.  */
>> +    #define HPPA_TLB_ENTRIES        256
>> +    #define HPPA_BTLB_ENTRIES       0
>
> The indented defines are weird.  How/Why?
> What's a btlb entry?

Block-TLB entries.  Most PA1.1 machines have 16 such BTLBs, each of
which can map a whole range of contiguous 4k pages with a single entry
(see the second sketch appended below).  In a follow-up qemu patch it
makes sense to add those TLBs.

Helge
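
Appended: the GTree-based lookup sketch referred to above.  This is
only a minimal, self-contained illustration, not the code from the
commit; the field names va_b/va_e and the helper names are invented
for the example, and it assumes the stored entries never overlap
(otherwise the comparison function would not define a consistent
ordering):

#include <glib.h>
#include <stdint.h>

typedef uint64_t vaddr;

typedef struct hppa_tlb_entry {
    vaddr va_b;   /* first virtual address covered by this entry */
    vaddr va_e;   /* last virtual address covered by this entry  */
    /* ... physical page number, access rights, etc. ... */
} hppa_tlb_entry;

/* Two keys compare equal when their address ranges overlap, so a
 * single-address probe finds the entry that contains it. */
static gint tlb_entry_cmp(gconstpointer a, gconstpointer b, gpointer d)
{
    const hppa_tlb_entry *x = a, *y = b;

    if (x->va_e < y->va_b) {
        return -1;
    }
    if (x->va_b > y->va_e) {
        return 1;
    }
    return 0;
}

static GTree *hppa_tlb_new(void)
{
    /* Each entry serves as both key and value; free it on removal. */
    return g_tree_new_full(tlb_entry_cmp, NULL, g_free, NULL);
}

static void hppa_tlb_insert(GTree *tlb, hppa_tlb_entry *ent)
{
    g_tree_insert(tlb, ent, ent);
}

static hppa_tlb_entry *hppa_find_tlb(GTree *tlb, vaddr addr)
{
    hppa_tlb_entry probe = { .va_b = addr, .va_e = addr };

    /* O(log n) descent instead of scanning all 256 array slots. */
    return g_tree_lookup(tlb, &probe);
}

The extra GList/tlb_entries fields in the diff above presumably keep an
insertion-ordered view of the entries for picking a victim to evict,
but that is beyond this sketch.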
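
And the block-TLB illustration referred to above.  The field names are
purely hypothetical (reusing the vaddr typedef from the previous
sketch); the point is just that one entry translates a whole block of
pages at once:

typedef struct hppa_btlb_entry {
    vaddr    va_start;      /* virtual start of the block           */
    uint64_t pa_start;      /* physical start of the block          */
    uint32_t num_pages;     /* number of contiguous 4k pages mapped */
    uint32_t access_rights; /* protection for the whole block       */
} hppa_btlb_entry;

/* For example, one entry covering a 4 MB region:
 *     { .va_start = 0xf0000000, .pa_start = 0xf0000000,
 *       .num_pages = 1024 }
 * replaces the 1024 page-sized TLB entries which would otherwise be
 * needed -- far more than the 256 HPPA_TLB_ENTRIES available. */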