Marc-Andre,
It looks to me that the 4th and 5th patches have somehow not been sent.
Could you send them, too?
I'd like to use them to actually build and run the kernel for testing.
> -Original Message-
> From: linux-kernel-ow...@vger.kernel.org
> [mailto:linux-kernel-ow...@vger.kernel.org] On Behalf Of Michael S. Tsirkin
> Sent: Tuesday, October 31, 2017 1:19 AM
> To: Hatayama, Daisuke
> Cc: 'marcandre.lur...@redhat.com' ;
> - .attr = { .name = "raw", .mode = S_IRUSR },
> + .attr = { .name = "raw", .mode = S_IRUSR | S_IWUSR },
> .read = fw_cfg_sysfs_read_raw,
> + .write = fw_cfg_sysfs_write_raw,
> };
>
> /*
> --
> 2.14.1.146.gd35faa819
Thanks.
HATAYAMA, Daisuke
--- a/drivers/firmware/qemu_fw_cfg.c
+++ b/drivers/firmware/qemu_fw_cfg.c
@@ -35,6 +35,7 @@
#include
#include
#include
+#include <linux/crash_dump.h>
MODULE_AUTHOR("Gabriel L. Somlo ");
MODULE_DESCRIPTION("QEMU fw_cfg sysfs support");
@@ -653,6 +654,8 @@ static int fw_cfg_register_file(const struct fw_cfg_file *f)
struct fw_cfg_sysfs_entry *entry;
if (strcmp(f->name, "etc/vmcoreinfo") == 0) {
+ if (is_kdump_kernel())
+ return 0;
if (write_vmcoreinfo(f) < 0)
pr_warn("fw_cfg: failed to write vmcoreinfo");
}
> /* allocate new entry */
> entry = kzalloc(sizeof(*entry), GFP_KERNEL);
> if (!entry)
> --
> 2.14.1.146.gd35faa819
--
Thanks.
HATAYAMA, Daisuke
+ offset_eraseinfo
> [... truncated layout diagram ...]
The layout itself is important information. Rather than describing it only
here, it would be better to put it somewhere in the source code, or to point
at the IMPLEMENTATION file in makedumpfile, which describes the original spec.
--
Thanks.
HATAYAMA, Daisuke
From: Luiz Capitulino
Subject: Re: [Qemu-devel] qmp: dump-guest-memory: -p option has issues, fix it
or drop it?
Date: Wed, 19 Sep 2012 10:23:26 -0300
> On Wed, 19 Sep 2012 11:26:51 +0900 (JST)
> HATAYAMA Daisuke wrote:
>
>> From: Wen Congyang
>> Subject: Re: [Qemu-d
mation.
> We allocate memory to store the memory mapping. Each memory mapping needs
> less than 40 bytes of memory. The number of memory mappings is less than
> (2^48) / (2^12) = 2^36. And 2^36 * 40 = 64G * 40, which is too much memory
>
> What about this:
> 1. If the number of memory mappings > 10, we only store 10 memory
>    mappings.
>
> 2. The memory mapping which has the smaller virtual address will be dropped.
>
> In this case, the memory we need is less than 10MB. So we will not allocate
> too much memory.
>
How about not making a whole list of memory maps at the same time, and
instead rewriting the code so that it always holds at most one memory
mapping, merging virtually consecutive chunks? If possible, only 40 bytes
would be needed.
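A minimal sketch of the merging idea suggested above (the struct and function names here are hypothetical stand-ins, not qemu's actual API): keep one current mapping and grow it in place whenever the next chunk is contiguous in both the physical and the virtual address space.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for qemu's MemoryMapping; field names are assumptions. */
typedef struct {
    uint64_t phys_addr;
    uint64_t virt_addr;
    uint64_t length;
} MemoryMapping;

/* If the new chunk [phys, phys+len) / [virt, virt+len) extends `cur`
 * contiguously in both address spaces, grow `cur` in place and return true;
 * otherwise return false so the caller knows a new mapping is needed. */
static bool try_merge_mapping(MemoryMapping *cur, uint64_t phys,
                              uint64_t virt, uint64_t len)
{
    if (cur->phys_addr + cur->length == phys &&
        cur->virt_addr + cur->length == virt) {
        cur->length += len;
        return true;
    }
    return false;
}
```

With this shape, a walk over the guest's page table only ever holds the single mapping currently being extended, rather than a full list.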
Thanks.
HATAYAMA, Daisuke
act that the external crash dump mechanism doesn't touch
guest memory, so it can run safely even if kdump on the guest failed,
for example due to loss of its logical integrity caused by broken
memory.
Thanks.
HATAYAMA, Daisuke
return env->host_tid;
> -#else
> -return env->cpu_index + 1;
> -#endif
> -}
> -
I meant in gdbstub.c:
static inline int gdb_id(CPUArchState *env)
{
return cpu_index(env);
}
Thanks.
HATAYAMA, Daisuke
From: Wen Congyang
Subject: Re: [PATCH 05/11 v10] Add API to get memory mapping
Date: Mon, 26 Mar 2012 10:44:40 +0800
> At 03/26/2012 10:31 AM, HATAYAMA Daisuke Wrote:
>> From: Wen Congyang
>> Subject: Re: [PATCH 05/11 v10] Add API to get memory mapping
>> Date: Mon, 26 Ma
From: Wen Congyang
Subject: Re: [PATCH 11/11 v10] introduce a new monitor command
'dump-guest-memory' to dump guest's memory
Date: Mon, 26 Mar 2012 09:39:24 +0800
> At 03/23/2012 05:40 PM, HATAYAMA Daisuke Wrote:
>> From: Wen Congyang
>> Subject: [PATCH 11/1
From: Wen Congyang
Subject: Re: [PATCH 05/11 v10] Add API to get memory mapping
Date: Mon, 26 Mar 2012 09:10:52 +0800
> At 03/23/2012 08:02 PM, HATAYAMA Daisuke Wrote:
>> From: Wen Congyang
>> Subject: [PATCH 05/11 v10] Add API to get memory mapping
>> Date: Tue, 20 Ma
here might be another that
can go in real-mode. If execution enters this path in such situation,
linear addresses are meaningless. But this is really rare case.
> +QLIST_FOREACH(block, &ram_list.blocks, next) {
> +offset = block->offset;
> +length = block->length;
> +create_new_memory_mapping(list, offset, offset, length);
> +}
> +
> +return 0;
> +}
Thanks.
HATAYAMA, Daisuke
> +    return env->cpu_index + 1;
> +#endif
> +}
> +
It seems more reasonable to me to newly introduce a helper function
cpu_index(), and then use it both in gdb_id() and in the qemu dump code.
Thanks.
HATAYAMA, Daisuke
d in dumpfile. But please give -1 a name. I want
the name to tell me the data is filtered.
Also, I couldn't see that get_offset() does filtering processing. This
is important from the perspective of the dump, because there's data not
saved in the dumpfile. Could you clarify this in some way? By moving
the filtering processing outside, or by splitting it into another
function.
Thanks.
HATAYAMA, Daisuke
(Elf32_Ehdr) + sizeof(Elf32_Phdr) *
> s->sh_info;
> +
> +elf_header.e_shoff = cpu_convert_to_target32(shoff, endian);
> +elf_header.e_shentsize = cpu_convert_to_target16(sizeof(Elf32_Shdr),
> + endian);
> + elf
e;
> +} else {
> +s->have_section = true;
> +s->phdr_num = PN_XNUM;
> +s->sh_info = 1; /* PT_NOTE */
It's confusing to use the member names phdr_num and sh_info from
different views. At first I thought phdr_num was used for the actual number
of program heade
From: Wen Congyang
Subject: Re: [RFC][PATCH 08/14 v9] target-i386: Add API to write cpu status to
core file
Date: Fri, 16 Mar 2012 14:50:06 +0800
> At 03/16/2012 09:48 AM, HATAYAMA Daisuke Wrote:
>> From: Wen Congyang
>> Subject: [RFC][PATCH 08/14 v9] target-i386: Add API to w
option is not default
explicitly: guest machine in a catastrophic state can have corrupted
memory, which we cannot trust.
Thanks.
HATAYAMA, Daisuke
APPING
static inline int qemu_get_guest_memory_mapping(MemoryMappingList *list)
{
    return -2;
}
#endif
Thanks.
HATAYAMA, Daisuke
processing in the positive and negative cases for e_phnum
and sh_info is ordered differently. It's better to sort them in the same
order.
if (phdr_num not overflow?) {
    not overflow case;
} else {
    overflow case;
    if (sh_info not overflow?) {
        not overflow case;
    } else {
        overflow case;
    }
}
is better.
Thanks.
HATAYAMA, Daisuke
= 0;
> +}
Why not give this new type of note information an explicit name,
like NT_QEMUCPUSTATE? There might be other types in the future. This
way there's also the merit that we can see all the existing note types
relevant to qemu dump by looking at the names in a header file.
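For illustration only, the suggestion amounts to a header excerpt like the following; the macro value and the helper around it are assumptions, not qemu's actual definitions.

```c
#include <assert.h>

/* Hypothetical header excerpt: give the QEMUCPUState note type a name
 * instead of writing a bare numeric constant at each call site.
 * The value 1 is an assumption for illustration. */
#define NT_QEMUCPUSTATE 1

/* A note writer would then refer to the named constant: */
static int note_type_for_cpu_state(void)
{
    return NT_QEMUCPUSTATE;
}
```

Collecting such `NT_*` definitions in one header makes every note type relevant to qemu dump visible in a single place, which is the merit described above.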
Thanks.
HATAYAMA, Daisuke
i386_t pr_reg; /* GP registers */
__u32 pr_fpvalid; /* True if math co-processor being
used. */
};
> +descsz = 144; /* sizeof(prstatus_t) is 144 on x86 box */
Also.
> +descsz = 144; /* sizeof(prstatus_t) is 144 on x86 box */
Also.
Thanks.
HATAYAMA, Daisuke
From: HATAYAMA Daisuke
Subject: Re: [Qemu-devel] [RFC][PATCH 05/16 v8] Add API to get memory mapping
Date: Mon, 12 Mar 2012 15:16:55 +0900
>
> The assumption behind my idea is that the host is running in a good
> condition but the guest is in a bad condition. So we can use qemu dump,
From: Jan Kiszka
Subject: Re: [RFC][PATCH 05/16 v8] Add API to get memory mapping
Date: Fri, 09 Mar 2012 14:24:41 +0100
> On 2012-03-09 13:53, HATAYAMA Daisuke wrote:
>> From: Jan Kiszka
>> Subject: Re: [RFC][PATCH 05/16 v8] Add API to get memory mapping
>> Date: Fri, 09 Ma
t always in the state where paging mode is
enabled. Also, CR3 doesn't always refer to a page table.
- If the guest machine is in a catastrophic state, its memory data could
  be corrupted. Then, we cannot trust such a corrupted page table.
  # On this point, managing PT_LOAD program headers based on such
  # potentially corrupted data carries risk.
Your idea of performing paging on the debugger side is better
than doing it in qemu.
HATAYAMA, Daisuke
From: Wen Congyang
Subject: Re: [RFC][PATCH 05/16 v8] Add API to get memory mapping
Date: Fri, 09 Mar 2012 10:26:56 +0800
> At 03/09/2012 10:05 AM, HATAYAMA Daisuke Wrote:
>> From: Wen Congyang
>> Subject: Re: [RFC][PATCH 05/16 v8] Add API to get memory mapping
>> Date: F
From: Wen Congyang
Subject: Re: [RFC][PATCH 05/16 v8] Add API to get memory mapping
Date: Fri, 09 Mar 2012 09:46:31 +0800
> At 03/09/2012 08:40 AM, HATAYAMA Daisuke Wrote:
>> From: Wen Congyang
>> Subject: Re: [RFC][PATCH 05/16 v8] Add API to get memory mapping
>> Date: T
From: Wen Congyang
Subject: Re: [RFC][PATCH 05/16 v8] Add API to get memory mapping
Date: Thu, 08 Mar 2012 16:52:29 +0800
> At 03/07/2012 11:27 PM, HATAYAMA Daisuke Wrote:
>> From: Wen Congyang
>> Subject: [RFC][PATCH 05/16 v8] Add API to get memory mapping
>> Date: Fri,
kernel is
> not
> + *in paging table
> + * add them into memory mapping's list
> + */
> +QLIST_FOREACH(block, &ram_list.blocks, next) {
How does the memory portion referenced by PT_LOAD program headers with
p_vaddr == 0 look through gdb? If we cannot access such portions, the
part not referenced by the page table CR3 points to is unnecessary, isn't
it?
Thanks.
HATAYAMA, Daisuke
From: Wen Congyang
Subject: Re: [RFC][PATCH 03/14 v7] target-i386: implement
cpu_get_memory_mapping()
Date: Thu, 01 Mar 2012 14:21:37 +0800
> At 03/01/2012 02:13 PM, HATAYAMA Daisuke Wrote:
>> From: Wen Congyang
>> Subject: [RFC][PATCH 03/14 v7] target-
From: Wen Congyang
Subject: Re: [RFC][PATCH 06/14 v7] target-i386: Add API to write cpu status to
core file
Date: Thu, 01 Mar 2012 13:05:31 +0800
> At 03/01/2012 01:01 PM, HATAYAMA Daisuke Wrote:
>> From: Wen Congyang
>> Subject: [RFC][PATCH 06/14 v7] target-i386: Add API to w
opaque)
> +{
> +QEMUCPUState state;
> +Elf64_Nhdr *note;
> +note_size = ((sizeof(Elf32_Nhdr) + 3) / 4 + (name_size + 3) / 4 +
Elf64_Nhdr?
Thanks.
HATAYAMA, Daisuke
From: Wen Congyang
Subject: Re: [RFC][PATCH 00/14 v7] introducing a new, dedicated memory dump
mechanism
Date: Thu, 01 Mar 2012 13:16:47 +0800
> At 03/01/2012 12:42 PM, HATAYAMA Daisuke Wrote:
>> From: Wen Congyang
>> Subject: [RFC][PATCH 00/14 v7] introducing a new, dedica
target_phys_addr_t phys_addr,
target_virt_addr_t virt_addr);
bool mapping_physically_contains(MemoryMapping *map,
target_phys_addr_t phys_addr);
bool mapping_physically_virtually_contiguous(MemoryMapping *map,
target_phys_addr_t phys_addr,
target_virt_addr_t virt_addr);
void mapping_merge(MemoryMapping *map, target_phys_addr_t phys_addr,
target_virt_addr_t virt_addr);
I'm not confident of the naming; these are examples, and I assume all
of them would be defined as static inline functions.
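A sketch of how these helpers might look as static inline functions. Field names, the `MemoryMapping` layout, and the extra `length` parameter on `mapping_merge` are assumptions for the sake of a self-contained example; the target address types are plain 64-bit integers here.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Self-contained stand-ins for qemu's target address types. */
typedef uint64_t target_phys_addr_t;
typedef uint64_t target_virt_addr_t;

/* Hypothetical mapping layout; field names are assumptions. */
typedef struct {
    target_phys_addr_t phys_addr;
    target_virt_addr_t virt_addr;
    uint64_t length;
} MemoryMapping;

/* True if [phys_addr, ...) lies inside the mapping's physical range. */
static inline bool mapping_physically_contains(MemoryMapping *map,
                                               target_phys_addr_t phys_addr)
{
    return map->phys_addr <= phys_addr &&
           phys_addr < map->phys_addr + map->length;
}

/* True if the given addresses extend the mapping contiguously in both
 * the physical and the virtual address space. */
static inline bool mapping_physically_virtually_contiguous(
        MemoryMapping *map,
        target_phys_addr_t phys_addr,
        target_virt_addr_t virt_addr)
{
    return map->phys_addr + map->length == phys_addr &&
           map->virt_addr + map->length == virt_addr;
}

/* Grow the mapping by `length`; assumes the contiguity check above holds. */
static inline void mapping_merge(MemoryMapping *map,
                                 target_phys_addr_t phys_addr,
                                 target_virt_addr_t virt_addr,
                                 uint64_t length)
{
    (void)phys_addr;
    (void)virt_addr;
    map->length += length;
}
```

Keeping these as static inline functions in a header keeps each conditional in the merge loop short and self-describing.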
Thanks.
HATAYAMA, Daisuke
From: Wen Congyang
Subject: Re: [RFC][PATCH 04/14 v7] Add API to get memory mapping
Date: Thu, 01 Mar 2012 14:17:53 +0800
> At 03/01/2012 02:01 PM, HATAYAMA Daisuke Wrote:
>> From: Wen Congyang
>> Subject: [RFC][PATCH 04/14 v7] Add API to get memory mapping
>> Date: Thu,
ddress width
is 48. Can the number exceed this limit in theory? Also how many
program headers are created typically?
Thanks.
HATAYAMA, Daisuke
I don't know qemu very well, but the qemu
dump command runs externally to the guest machine, so I think the machine
could be in a state with paging disabled, where CR3 doesn't refer to a
page table as expected.
Thanks.
HATAYAMA, Daisuke
> +/*
> + * memory_mapping's list does not contain the region
> + * [offset, memory_mapping->phys_addr)
> + */
> +create_new_memory_mapping(list, offset, 0, length);
> +}
> +}
> +
> +return 0;
> +}
I think it would be more readable to shorten memory_mapping->phys_addr and
memory_mapping->length at the beginning of the innermost foreach loop:
    m_phys_addr = memory_mapping->phys_addr;
    m_length = memory_mapping->length;
Then each conditional becomes compact.
Thanks.
HATAYAMA, Daisuke
lf64_note() does the same
processing for note data. Would it be better to do this in common helper
functions?
Thanks.
HATAYAMA, Daisuke
n indicating: please increment
QEMUCPUSTATE_VERSION if you have changed the definition of QEMUCPUState,
and modify the tools using this information accordingly.
Thanks.
HATAYAMA, Daisuke
st kernel.
> 4. The cpu's state is stored in a QEMU note. You need to modify crash to use
>it to calculate phys_base.
Again, you still need to fix the crash utility to recover the 1st kernel's
first 640kB of physical memory, which was reserved during the switch from the
1st kernel to the 2nd kernel.
Thanks.
HATAYAMA, Daisuke
ansparent extensions of the file format if needed.
>>>>
>>>>>
>>>>> If the vmcore is generated by 'virsh dump'(use migration to implement
>>>>> dumping),
>>>>> crash calculates the phys_base according to idt.base. The function
>>>>> get_phys_base_addr()
>>>>> uses the same way to calculates the phys_base.
>>>>
>>>> Hmm, where are those special registers (idt, gdt, tr etc.) stored in the
>>>> vmcore file, BTW?
>>>
>>> 'virsh dump' uses migration to implement dumping now. So the vmcore has all
>>> registers.
>>
>> This is about the new format. And there we are lacking those special
>
> Yes, this file can be processed with crash. gdb cannot process such file.
>
>> registers. At some point, gdb will understand and need them to do proper
>> system-level debugging. I don't know the format structure here: can we
>> add sections to the core file in a way that consumers that don't know
>> them simply ignore them?
>
> I do not find such a section now. If there is such a section, I think it is
> better to store all cpus' registers in the core file.
>
> I am trying to let the core file be processed with both crash and gdb. But
> crash still does not work well sometimes.
>
> I think we can add some option to let the user choose whether to store the
> memory mapping in the core file, because crash does not need such a mapping.
> If the p_vaddr in all PT_LOAD segments is 0, crash knows the file was
> generated by qemu dump, and uses another way to calculate phys_base.
>
If you store cpu registers in the core file, it is better to check whether
the information is contained in the core file.
Thanks.
HATAYAMA, Daisuke
> If you agree with it, please ignore this patch.
>
> Thanks
> Wen Congyang
>
>>
>> Jan
>>
>
>
N" command. So far, QEMU's gdbstub does this for gdb
>> when it requests some memory over the remote connection. I bet gdb
>> requires some extension to exploit such information offline from a core
>> file, but I'm also sure that this will come as the importance of gdb for
>> system level debugging will rise.
>>
>> Therefore my question: is there room to encode the mapping relation to a
>> CPU/thread context?
>
> I do not know. But I think the answer is no, because there is no field
> in the struct Elf32_Phdr/Elf64_Phdr to store the CPU/thread id.
>
See the NT_PRSTATUS note, from which gdb knows which CPU is related to
which thread.
For a vmcore generated by kdump, the NT_PRSTATUS notes are contained in the
order corresponding to the online cpus.
If crash reads a vmcore generated by this command just as one generated by
kdump, without considering this, crash might interpret each CPU's
information wrongly, because qemu dump generates notes for all possible CPUs.
Thanks.
HATAYAMA, Daisuke
> Thanks
> Wen Congyang
>
>>
>> Jan
>>
>
>
From: Wen Congyang
Subject: Re: [Qemu-devel] [RFC][PATCT 0/5 v2] dump memory when host pci device
is used by guest
Date: Tue, 13 Dec 2011 17:20:24 +0800
> At 12/13/2011 02:01 PM, HATAYAMA Daisuke Write:
>> From: Wen Congyang
>> Subject: Re: [Qemu-devel] [RFC][PATCT 0/5 v2] d
From: Wen Congyang
Subject: Re: [Qemu-devel] [RFC][PATCT 0/5 v2] dump memory when host pci device
is used by guest
Date: Tue, 13 Dec 2011 11:35:53 +0800
> Hi, hatayama-san
>
> At 12/13/2011 11:12 AM, HATAYAMA Daisuke Write:
>> Hello Wen,
>>
>> From: Wen Congyang
ackup region and get data of the 1st kernel's page tables.
But it needs debugging information of the guest kernel, and I don't think
it a good idea for qemu to use such guest-specific information.
On the other hand, I have a basic question: can this command be used for
creating a live dump, or a crash dump only?
Thanks.
HATAYAMA, Daisuke