> + .size = epfn_align - spfn,
> + .align = PAGES_PER_SECTION,
> + .min_chunk = PAGES_PER_SECTION,
> + .max_threads = max_threads,
> + };
> +
> + padata_do_multithreaded(&job);
> + deferred_init_mem_pfn_range_in_zone(&i, zone, &spfn, &epfn,
> + epfn_align);
> }
> zone_empty:
> /* Sanity check that the next zone really is unpopulated */
So I am not a huge fan of using deferred_init_mem_pfn_range_in_zone here,
simply because we end up essentially discarding the i value and will have
to walk the list repeatedly. However, I don't think the overhead will be
that great, as I suspect there aren't going to be systems with that many
ranges. So this is probably fine.
Reviewed-by: Alexander Duyck
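[Editorial context: a minimal C sketch of the job setup the quoted hunk
belongs to. The .thread_fn/.fn_arg/.start initializers and the thread
function name are assumptions inferred from the fields visible in the
diff, not verbatim patch code:]

    struct padata_mt_job job = {
            .thread_fn   = deferred_init_memmap_chunk, /* assumed name */
            .fn_arg      = zone,
            .start       = spfn,              /* first pfn to initialize */
            .size        = epfn_align - spfn, /* pfns covered by the job */
            .align       = PAGES_PER_SECTION, /* never split a section   */
            .min_chunk   = PAGES_PER_SECTION, /* smallest unit of work   */
            .max_threads = max_threads,
    };

    padata_do_multithreaded(&job);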
statistic probably has limited value,
> especially since a zone grows on demand so that the page count can vary,
> just remove it.
>
> The boot message now looks like
>
> node 0 deferred pages initialised in 97ms
>
> Signed-off-by: Daniel Jordan
> Suggested-by: Alexander Duyck
On Thu, May 21, 2020 at 8:37 AM Daniel Jordan
wrote:
>
> On Wed, May 20, 2020 at 06:29:32PM -0700, Alexander Duyck wrote:
> > On Wed, May 20, 2020 at 11:27 AM Daniel Jordan
> > > @@ -1814,16 +1815,44 @@ deferred_init_maxorder(u64 *i, struct zone *zone,
> >
On Wed, May 20, 2020 at 6:29 PM Alexander Duyck
wrote:
>
> On Wed, May 20, 2020 at 11:27 AM Daniel Jordan
> wrote:
> >
> > Deferred struct page init is a significant bottleneck in kernel boot.
> > Optimizing it maximizes availability for large-memory systems and allows
On Wed, May 20, 2020 at 11:27 AM Daniel Jordan
wrote:
>
> Deferred struct page init is a significant bottleneck in kernel boot.
> Optimizing it maximizes availability for large-memory systems and allows
> spinning up short-lived VMs as needed without having to leave them
> running. It also benefi
On Thu, May 7, 2020 at 1:20 PM Daniel Jordan wrote:
>
> On Thu, May 07, 2020 at 08:26:26AM -0700, Alexander Duyck wrote:
> > On Wed, May 6, 2020 at 3:39 PM Daniel Jordan
> > wrote:
> > > On Tue, May 05, 2020 at 08:27:52AM -0700, Alexander Duyck wrote:
> >
On Wed, May 6, 2020 at 3:39 PM Daniel Jordan wrote:
>
> On Tue, May 05, 2020 at 08:27:52AM -0700, Alexander Duyck wrote:
> > As it turns out, deferred_free_range will be setting the
> > migratetype for the page. In a sparse config the migratetype bits are
> > stored in
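[Editorial context: a condensed sketch of the logic being referred to,
abbreviated from the kernel's deferred_free_range(); the exact code in
any given tree may differ:]

    static void __init deferred_free_range(unsigned long pfn,
                                           unsigned long nr_pages)
    {
            struct page *page = pfn_to_page(pfn);
            unsigned long i;

            /* free a whole pageblock at once when naturally aligned */
            if (nr_pages == pageblock_nr_pages &&
                (pfn & (pageblock_nr_pages - 1)) == 0) {
                    set_pageblock_migratetype(page, MIGRATE_MOVABLE);
                    __free_pages_core(page, pageblock_order);
                    return;
            }

            /* otherwise free page by page, marking pageblock heads */
            for (i = 0; i < nr_pages; i++, page++, pfn++) {
                    if ((pfn & (pageblock_nr_pages - 1)) == 0)
                            set_pageblock_migratetype(page, MIGRATE_MOVABLE);
                    __free_pages_core(page, 0);
            }
    }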
On Wed, May 6, 2020 at 3:21 PM Daniel Jordan wrote:
>
> On Tue, May 05, 2020 at 07:55:43AM -0700, Alexander Duyck wrote:
> > One question about this data. What is the power management
> > configuration on the systems when you are running these tests? I'm
> > just curious
On Mon, May 4, 2020 at 5:54 PM Daniel Jordan wrote:
>
> On Mon, May 04, 2020 at 03:10:46PM -0700, Alexander Duyck wrote:
> > So we cannot stop in the middle of a max order block. That shouldn't
> > be possible, as part of the issue is that the buddy allocator will
> >
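[Editorial context: a hypothetical illustration of the constraint under
discussion, using standard kernel macros rather than code from this
series -- a thread's chunk boundary is rounded down so that a
MAX_ORDER-aligned block is never split between threads:]

    /* a per-thread chunk may not end inside a MAX_ORDER block */
    chunk_end = ALIGN_DOWN(end_pfn, MAX_ORDER_NR_PAGES);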
On Mon, May 4, 2020 at 7:11 PM Daniel Jordan wrote:
>
> On Mon, May 04, 2020 at 09:48:44PM -0400, Daniel Jordan wrote:
> > On Mon, May 04, 2020 at 05:40:19PM -0700, Alexander Duyck wrote:
> > > On Mon, May 4, 2020 at 4:44 PM Josh Triplett
> > > wrote:
> > &
On Mon, May 4, 2020 at 4:44 PM Josh Triplett wrote:
>
> On May 4, 2020 3:33:58 PM PDT, Alexander Duyck
> wrote:
> >On Thu, Apr 30, 2020 at 1:12 PM Daniel Jordan
> > wrote:
> >> /*
> >> -* Initialize and free pages in MAX_ORDER sized increments
On Thu, Apr 30, 2020 at 1:12 PM Daniel Jordan
wrote:
>
> Deferred struct page init uses one thread per node, which is a
> significant bottleneck at boot for big machines--often the largest.
> Parallelize to reduce system downtime.
>
> The maximum number of threads is capped at the number of CPUs on the node.
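[Editorial context: a minimal sketch of how such a per-node cap could be
computed; the helper name is hypothetical and the series may structure
this differently:]

    /* hypothetical: allow up to one thread per CPU on the node */
    static int __init deferred_init_max_threads(int nid)
    {
            return max_t(int, cpumask_weight(cpumask_of_node(nid)), 1);
    }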
On Thu, Apr 30, 2020 at 7:45 PM Daniel Jordan
wrote:
>
> Hi Alex,
>
> On Thu, Apr 30, 2020 at 02:43:28PM -0700, Alexander Duyck wrote:
> > On 4/30/2020 1:11 PM, Daniel Jordan wrote:
> > > padata will soon divide up pfn ranges between threads when parallelizing
On 4/30/2020 1:11 PM, Daniel Jordan wrote:
padata will soon divide up pfn ranges between threads when parallelizing
deferred init, and deferred_init_maxorder() complicates that by using an
opaque index in addition to start and end pfns. Move the index outside
the function to make splitting the job easier.
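[Editorial context: the resulting shape of the function, reconstructed
from the hunk header quoted earlier in the thread; the parameter list
after *zone is an assumption, since the hunk header truncates there:]

    /* caller owns the opaque iterator *i, so a pfn range can be
     * split across threads and each piece resumed independently */
    static unsigned long __init
    deferred_init_maxorder(u64 *i, struct zone *zone,
                           unsigned long *start_pfn,
                           unsigned long *end_pfn);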
t MTU changes immediately.
Signed-off-by: Steffen Klassert
Signed-off-by: Alexander Duyck
---
So this version is slightly modified to cover the IPv4 case in addition to
the IPv6 case. With this patch I was able to run netperf over either an
IPv4 or IPv6 address routed over the ip6_vti tunnel.
On 05/28/2015 12:15 PM, Alexander Duyck wrote:
On 05/28/2015 01:40 AM, Steffen Klassert wrote:
On Thu, May 28, 2015 at 12:18:51AM -0700, Alexander Duyck wrote:
On 05/27/2015 10:36 PM, Steffen Klassert wrote:
On Wed, May 27, 2015 at 10:40:32AM -0700, Alexander Duyck wrote:
This change makes it
On 05/28/2015 01:40 AM, Steffen Klassert wrote:
On Thu, May 28, 2015 at 12:18:51AM -0700, Alexander Duyck wrote:
On 05/27/2015 10:36 PM, Steffen Klassert wrote:
On Wed, May 27, 2015 at 10:40:32AM -0700, Alexander Duyck wrote:
This change makes it so that we use icmpv6_send to report PMTU
On 05/27/2015 10:36 PM, Steffen Klassert wrote:
On Wed, May 27, 2015 at 10:40:32AM -0700, Alexander Duyck wrote:
This change makes it so that we use icmpv6_send to report PMTU issues back
into tunnels when the resulting packet is larger than the MTU of the
outgoing interface
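[Editorial context: a sketch of the kind of check being described, close
to but not verbatim from the patch; the dual IPv4/IPv6 reporting follows
the cover text above:]

    mtu = dst_mtu(skb_dst(skb));
    if (skb->len > mtu) {
            if (skb->protocol == htons(ETH_P_IPV6))
                    icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
            else
                    icmp_send(skb, ICMP_DEST_UNREACH,
                              ICMP_FRAG_NEEDED, htonl(mtu));
            return -EMSGSIZE;
    }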
came through a tunnel.
Signed-off-by: Alexander Duyck
---
net/ipv6/xfrm6_output.c | 18 ++++++++++++------
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/net/ipv6/xfrm6_output.c b/net/ipv6/xfrm6_output.c
index 09c76a7b474d..6f9b514d0e38 100644
--- a/net/ipv6/xfrm6_output.c
+++ b
t restore the original mark after xfrm_policy_check has been completed.
Signed-off-by: Alexander Duyck
---
net/ipv4/ip_vti.c  | 9 +++++++--
net/ipv6/ip6_vti.c | 9 +++++++--
2 files changed, 14 insertions(+), 4 deletions(-)
diff --git a/net/ipv4/ip_vti.c b/net/ipv4/ip_vti.c
index 4c3
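[Editorial context: a sketch of the save/restore pattern the truncated
snippet describes; treat the tunnel field names as assumptions:]

    u32 orig_mark = skb->mark;
    int ret;

    /* let the policy check see the tunnel's key as the mark */
    skb->mark = be32_to_cpu(tunnel->parms.i_key);
    ret = xfrm_policy_check(NULL, XFRM_POLICY_IN, skb, family);
    skb->mark = orig_mark; /* preserve the original mark */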
Instead of modifying skb->mark we can simply modify the flowi_mark that is
generated as a result of xfrm_decode_session. By doing this we don't
need to actually touch the skb->mark and it can be preserved as it passes
out through the tunnel.
Signed-off-by: Alexander Duyck
--
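[Editorial context: a hypothetical sketch of that approach against the
2015-era xfrm API; not verbatim series code:]

    struct flowi fl;

    if (xfrm_decode_session(skb, &fl, family) < 0)
            goto drop;

    /* override only the lookup mark; skb->mark stays untouched */
    fl.flowi_mark = be32_to_cpu(tunnel->parms.o_key);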
This change makes it so that if a tunnel is defined we just use the mark
from the tunnel instead of the mark from the skb header. By doing this we
can avoid the need to set skb->mark inside of the tunnel receive functions.
Signed-off-by: Alexander Duyck
---
net/xfrm/xfrm_input.c |
ess is the fact that currently if I use
a v6 over v6 VTI tunnel I cannot receive any traffic on the interface as
the skb->mark is bleeding through and causing the traffic to be dropped.
---
Alexander Duyck (3):
ip_vti/ip6_vti: Do not touch skb->mark on xmit
xfrm: Override skb->mark