> it is no longer necessary to check the VPM0 bit in the LPCR.
>
> Signed-off-by: Suraj Jitindar Singh
> Reviewed-by: David Gibson
> ---
Acked-by: Balbir Singh
> #define LPCR_LPES0 (1ull << (63 - 60))
> #define LPCR_LPES1 (1ull << (63 - 61))
> #define LPCR_RMI (1ull << (63 - 62))
> +#define LPCR_HVICE (1ull << (63 - 62)) /* HV Virtualisation Int Enable */
> #define LPCR_HDICE (1ull << (63 - 63))
This patch is missing
#define LPCR_HR (1ull << (63 - 43)) /* HV uses Radix Tree Translation */
See arch/powerpc/include/asm/reg.h in the Linux kernel.
Balbir Singh.
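For reference, these masks use IBM (MSB-0) bit numbering over the 64-bit
LPCR, so a runtime check of the proposed bit would follow the same
pattern (a minimal sketch; env is the usual CPUPPCState pointer and
SPR_LPCR is the register index from QEMU's cpu.h):

    /* true when the HV is using Radix Tree Translation */
    bool radix = (env->spr[SPR_LPCR] & LPCR_HR) != 0;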
On Mon, Feb 20, 2017 at 03:04:30PM +1100, Suraj Jitindar Singh wrote:
> The DPFD field in the LPCR is 3 bits wide. This has always been defined
> as 0x3 << shift, i.e. a 2-bit mask, which is incorrect.
> Correct this.
>
> Signed-off-by: Suraj Jitindar Singh
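The shape of the fix is just widening the mask from two bits to three;
a sketch in diff form, with LPCR_DPFD_SHIFT assumed from the
surrounding defines:

    -#define LPCR_DPFD (0x3ull << LPCR_DPFD_SHIFT)
    +#define LPCR_DPFD (0x7ull << LPCR_DPFD_SHIFT)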
>
> +/* Architecture 3.00 variant */
> +POWERPC_MMU_3_00 = POWERPC_MMU_64 | POWERPC_MMU_1TSEG
> + | POWERPC_MMU_64K
> + | POWERPC_MMU_AMR | 0x0005,
I wonder if we need a POWERPC_MMU_RADIX flag that we can then attach
to future versions.
Balbir Singh.
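One possible shape for that, following the style of the existing
POWERPC_MMU_* bits (the flag value here is hypothetical, picked from
the unused bits):

    /* hypothetical: set on MMU models that support radix tree translation */
    #define POWERPC_MMU_RADIX 0x00100000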
As per the ISA we need a cause, and executing a tabort r9 in libc,
for example, causes an EXCP_FU exception; we don't wire up the
IC (cause) when we post the exception. The cause is required
for the kernel to do the right thing. The fix applies only to 64-bit
ppc targets.
Signed-off-by: Balbir Singh
On 10/11/16 13:46, David Gibson wrote:
> On Thu, Nov 10, 2016 at 01:06:17PM +1100, David Gibson wrote:
>> On Thu, Nov 10, 2016 at 12:42:37PM +1100, Balbir Singh wrote:
>>>
>>>
>>> As per the ISA we need a cause for FU exceptions. Executing a tabort r9
kernel against Cedric's latest pnv ipmi branch.
Signed-off-by: Balbir Singh
---
target-ppc/excp_helper.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/target-ppc/excp_helper.c b/target-ppc/excp_helper.c
index 808760b..cccea8d 100644
--- a/target-ppc/excp_helper.c
+++ b/target-ppc/excp_helper.c
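The hunk itself is truncated above; a sketch of the kind of one-line
change described, assuming the FSCR keeps its interruption cause (IC)
in the top byte (an FSCR_IC_POS of 56), would be:

    case POWERPC_EXCP_FU:   /* Facility unavailable exception */
    #ifdef TARGET_PPC64
        /* wire up the cause so the kernel can tell what triggered it */
        env->spr[SPR_FSCR] |= ((target_ulong)env->error_code << FSCR_IC_POS);
    #endif
        break;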
> guest's device tree.
>
> Signed-off-by: Sam Bobroff
> ---
>
Makes sense
Acked-by: Balbir Singh
> in the
> guest's device tree.
>
> Signed-off-by: Sam Bobroff
> ---
Makes sense
Acked-by: Balbir Singh
>
* Christoph Lameter [2010-11-03 09:35:33]:
> On Fri, 29 Oct 2010, Balbir Singh wrote:
>
> > A lot of the code is borrowed from zone_reclaim_mode logic for
> > __zone_reclaim(). One might argue that with ballooning and
> > KSM this feature is not very useful,
Balloon unmapped page cache pages first
From: Balbir Singh
This patch builds on the ballooning infrastructure by ballooning unmapped
page cache pages first. It looks for low-hanging fruit and tries
to reclaim clean unmapped pages first.
This patch brings zone_reclaim() and other
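A sketch of the victim-selection idea (the helper name is hypothetical;
the predicates are standard page flags from kernels of that era):

    /* prefer clean, unmapped page cache pages as balloon victims */
    static bool balloon_easy_victim(struct page *page)
    {
        if (page_mapped(page))   /* still mapped into some process */
            return false;
        if (PageDirty(page))     /* would need writeback first */
            return false;
        if (PageAnon(page))      /* anonymous memory, not page cache */
            return false;
        return true;
    }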
Selectively control Unmapped Page Cache (nospam version)
From: Balbir Singh
This patch implements unmapped page cache control via preferred
page cache reclaim. The current patch hooks into kswapd and reclaims
page cache if the user has requested unmapped page control.
This is useful in the
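The kswapd hook described would gate an extra reclaim pass on the new
knob, roughly as follows (a sketch: the knob and shrink_unmapped_cache()
are hypothetical names, the zone accounting is borrowed from the
zone_reclaim path):

    /* in kswapd: only touch page cache when the user asked for it */
    if (unmapped_page_control &&
        zone_unmapped_file_pages(zone) > zone->min_unmapped_pages)
            shrink_unmapped_cache(zone);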
Provide memory hint during ballooning
From: Balbir Singh
This patch adds an optional hint to the qemu monitor balloon
command. The hint tells the guest operating system to consider
a class of memory during reclaim. Currently the supported
hint is cached memory. The design is generic and can be
This is version 3 of the page cache control patches
From: Balbir Singh
This series has three patches, the first controls
the amount of unmapped page cache usage via a boot
parameter and sysctl. The second patch controls page
and slab cache via the balloon driver. Both the patches
make heavy use
> Signed-off-by: Gautham R Shenoy
> Signed-off-by: Sripathi Kodi
> Signed-off-by: Arun R Bharadwaj
Acked-by: Balbir Singh
--
Three Cheers,
Balbir
* Venkateswararao Jujjuri (JV) [2010-10-19 20:46:35]:
> >> I think this is a lot more fragile. You're relying on the fact that
> >> signal will not cause the signalled thread to actually awaken until
> >> we release the lock and doing work after signalling that the
> >> signalled thread needs to
* Venkateswararao Jujjuri (JV) [2010-10-19 14:00:24]:
> >
> > In the case that we just spawned the threadlet, the cond_signal is
> > spurious. If we need predictable scheduling behaviour,
> > qemu_cond_signal needs to happen with queue->lock held.
> >
> > I'd rewrite the function as
> >
> > /
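The quoted rewrite is cut off above; the core of the suggestion, with
the queue field names assumed from the threadlet series, is to publish
the work and signal under the same lock:

    qemu_mutex_lock(&queue->lock);
    QTAILQ_INSERT_TAIL(&queue->request_list, work, node);
    /* the signalled thread cannot run until we drop the lock,
     * so the wakeup ordering is predictable */
    qemu_cond_signal(&queue->cond);
    qemu_mutex_unlock(&queue->lock);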
> Signed-off-by: Gautham R Shenoy
> Signed-off-by: Sripathi Kodi
This change seems reasonable to me
Acked-by: Balbir Singh
--
Three Cheers,
Balbir
* Anthony Liguori [2010-10-19 16:36:31]:
> On 10/19/2010 01:36 PM, Balbir Singh wrote:
> >>+    qemu_mutex_lock(&(queue->lock));
> >>+    while (1) {
> >>+        ThreadletWork *work;
> >>+        int ret = 0;
> >>+
* Paolo Bonzini [2010-10-19 21:01:03]:
> On 10/19/2010 08:36 PM, Balbir Singh wrote:
> >Ideally you need
> >
> > s = pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, NULL);
> >
> >But qemu will need to wrap this around as well.
>
> Why? QEMU is nev
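For reference, the pattern Balbir is describing is the standard
disable/restore pair around a non-cancellation-safe region (a generic
sketch, not the actual threadlet code):

    #include <pthread.h>

    static void run_nocancel(void (*fn)(void *), void *arg)
    {
        int oldstate;

        /* block cancellation while fn holds locks or owns resources */
        pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &oldstate);
        fn(arg);
        pthread_setcancelstate(oldstate, NULL);
    }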
* Arun R B [2010-10-19 23:12:45]:
> From: Aneesh Kumar K.V
>
> This patch creates a generic asynchronous-task-offloading infrastructure named
> threadlets. The core idea has been borrowed from the threading framework that
> is being used by paio.
>
> The reason for creating this generic infras