/tst_fuzzy_sync.h:320: TINFO: spins: {
avg = 38993, avg_dev = 7743, dev_ratio = 0.20 }
[ 1303.074490] wlcore: down
[ 1303.081180] thp04: page allocation failure: order:0,
mode:0x400dc0(GFP_KERNEL_ACCOUNT|__GFP_ZERO), nodemask=(null)
[ 1303.081189] Unable to handle kernel paging request
From: Alexei Starovoitov
In order to be able to generate a loader program in later patches,
change the order of data and text relocations.
Also improve the test to include data relocations.
Signed-off-by: Alexei Starovoitov
---
tools/lib/bpf/libbpf.c | 82
()
nvmem_get_mac_address()
On i.MX6x/7D/8MQ/8MM platforms the ethernet MAC address is read from
the nvmem ocotp eFuses, but the six bytes need to be swapped.
This patch adds an optional property "nvmem_macaddr_swap" to swap the
MAC address byte order.
Signed-off-by: Fugang Duan
Signed-off-by: Joakim Zhang
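For illustration, a minimal, self-contained sketch of the kind of six-byte
swap such a property would enable; this is not the actual fec driver code,
and the helper name is hypothetical.

#include <stdio.h>

#define ETH_ALEN 6

/* Reverse the six MAC address bytes in place (illustrative only). */
static void macaddr_swap(unsigned char mac[ETH_ALEN])
{
	for (int i = 0; i < ETH_ALEN / 2; i++) {
		unsigned char tmp = mac[i];

		mac[i] = mac[ETH_ALEN - 1 - i];
		mac[ETH_ALEN - 1 - i] = tmp;
	}
}

int main(void)
{
	unsigned char mac[ETH_ALEN] = { 0x11, 0x22, 0x33, 0x44, 0x55, 0x66 };

	macaddr_swap(mac);
	printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
	       mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
	return 0;
}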
> ("batman-adv: Use inline kernel-doc for enum/struct"), some comments were
> placed at the wrong struct members. Fixing this by reordering the comments.
>
> Signed-off-by: Linus Lüssing
> Signed-off-by: Sven Eckelmann
> Signed-off-by: Simon Wunderlich
>
> [...]
Here is the summary with links:
- [1/3] batman-adv: Fix o
From: Linus Lüssing
During the inlining process of kerneldoc in commit 8b84cc4fb556
("batman-adv: Use inline kernel-doc for enum/struct"), some comments were
placed at the wrong struct members. Fixing this by reordering the comments.
Signed-off-by: Linus Lüssing
Signed-off-by: Sven Eckelmann
Signed-off-by: Simon Wunderlich
> > > > > This series introduces a bulk order-0 page allocator with sunrpc and
> > > > > the network page pool being the first users. The implementation is not
> > > > > efficient as semantics needed to be ironed out first. If no other
> > > > >
On Thu, Mar 25, 2021 at 03:06:57PM +0100, Uladzislau Rezki wrote:
> > On Thu, Mar 25, 2021 at 12:50:01PM +, Matthew Wilcox wrote:
> > > On Thu, Mar 25, 2021 at 11:42:19AM +, Mel Gorman wrote:
> > > > This series introduces a bulk order-0 page allocator with sun
On Thu, Mar 25, 2021 at 02:09:27PM +, Matthew Wilcox wrote:
> On Thu, Mar 25, 2021 at 03:06:57PM +0100, Uladzislau Rezki wrote:
> > For the vmalloc we should be able to allocate on a specific NUMA node,
> > at least the current interface takes it into account. As far as I see
> > the current
On Thu, Mar 25, 2021 at 03:06:57PM +0100, Uladzislau Rezki wrote:
> For the vmalloc we should be able to allocate on a specific NUMA node,
> at least the current interface takes it into account. As far as I see
> the current interface allocates on the current node:
>
> static inline unsigned long
>
> On Thu, Mar 25, 2021 at 12:50:01PM +, Matthew Wilcox wrote:
> > On Thu, Mar 25, 2021 at 11:42:19AM +, Mel Gorman wrote:
> > > This series introduces a bulk order-0 page allocator with sunrpc and
> > > the network page pool being the first users. The implementa
On Thu, Mar 25, 2021 at 12:50:01PM +, Matthew Wilcox wrote:
> On Thu, Mar 25, 2021 at 11:42:19AM +, Mel Gorman wrote:
> > This series introduces a bulk order-0 page allocator with sunrpc and
> > the network page pool being the first users. The implementation is not
On Thu, Mar 25, 2021 at 11:42:19AM +, Mel Gorman wrote:
> This series introduces a bulk order-0 page allocator with sunrpc and
> the network page pool being the first users. The implementation is not
> efficient as semantics needed to be ironed out first. If no other semantic
>
o Add reviewed-bys
o Rebase to 5.12-rc2
This series introduces a bulk order-0 page allocator with sunrpc and
the network page pool being the first users. The implementation is not
efficient as semantics needed to be ironed out first. If no other semantic
changes are needed, it can be made more efficient. Despite
> On Mar 23, 2021, at 10:45 AM, Mel Gorman wrote:
>
> On Tue, Mar 23, 2021 at 12:08:51PM +0100, Jesper Dangaard Brouer wrote:
My results show that, because svc_alloc_arg() ends up calling
__alloc_pages_bulk() twice in this case, it ends up being
twice as expensive as the l
On Tue, 23 Mar 2021 16:08:14 +0100
Jesper Dangaard Brouer wrote:
> On Tue, 23 Mar 2021 10:44:21 +
> Mel Gorman wrote:
>
> > On Mon, Mar 22, 2021 at 09:18:42AM +, Mel Gorman wrote:
> > > This series is based on top of Matthew Wilcox's series "Rationalise
> > > __alloc_pages wrapper" an
On Tue, Mar 23, 2021 at 04:08:14PM +0100, Jesper Dangaard Brouer wrote:
> On Tue, 23 Mar 2021 10:44:21 +
> Mel Gorman wrote:
>
> > On Mon, Mar 22, 2021 at 09:18:42AM +, Mel Gorman wrote:
> > > This series is based on top of Matthew Wilcox's series "Rationalise
> > > __alloc_pages wrapper"
On Tue, 23 Mar 2021 10:44:21 +
Mel Gorman wrote:
> On Mon, Mar 22, 2021 at 09:18:42AM +, Mel Gorman wrote:
> > This series is based on top of Matthew Wilcox's series "Rationalise
> > __alloc_pages wrapper" and does not apply to 5.12-rc2. If you want to
> > test and are not using Andrew's
On Tue, Mar 23, 2021 at 12:08:51PM +0100, Jesper Dangaard Brouer wrote:
> > >
> > > My results show that, because svc_alloc_arg() ends up calling
> > > __alloc_pages_bulk() twice in this case, it ends up being
> > > twice as expensive as the list case, on average, for the same
> > > workload.
> >
On Mon, Mar 22, 2021 at 08:32:54PM +, Chuck Lever III wrote:
> > It's not expected that the array implementation would be worse *unless*
> > you are passing in arrays with holes in the middle. Otherwise, the success
> > rate should be similar.
>
> Essentially, sunrpc will always pass an array
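To make the "array with holes" semantics being debated here concrete, below
is a small userspace sketch (plain malloc standing in for page allocation,
not the kernel API): only NULL slots are filled, and the function reports
how many slots are populated afterwards, which mirrors the question about
what the return value should mean.

#include <stdio.h>
#include <stdlib.h>

/* Fill only the NULL slots of the caller's array; return how many slots
 * are populated on exit. Illustrative stand-in for a bulk allocator. */
static size_t bulk_fill(void **array, size_t nr)
{
	size_t populated = 0;

	for (size_t i = 0; i < nr; i++) {
		if (!array[i])
			array[i] = malloc(4096);	/* stand-in for one page */
		if (array[i])
			populated++;
	}
	return populated;
}

int main(void)
{
	void *pages[8] = { NULL };

	pages[3] = malloc(4096);	/* one slot already populated, NULL holes around it */
	printf("populated %zu of 8 slots\n", bulk_fill(pages, 8));

	for (int i = 0; i < 8; i++)
		free(pages[i]);
	return 0;
}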
On Mon, 22 Mar 2021 20:58:27 +
Mel Gorman wrote:
> On Mon, Mar 22, 2021 at 08:32:54PM +, Chuck Lever III wrote:
> > >> It is returning some confusing (to me) results. I'd like
> > >> to get these resolved before posting any benchmark
> > >> results.
> > >>
> > >> 1. When it has visited e
On Mon, Mar 22, 2021 at 09:18:42AM +, Mel Gorman wrote:
> This series is based on top of Matthew Wilcox's series "Rationalise
> __alloc_pages wrapper" and does not apply to 5.12-rc2. If you want to
> test and are not using Andrew's tree as a baseline, I suggest using the
> following git tree
>
On Mon, Mar 22, 2021 at 08:32:54PM +, Chuck Lever III wrote:
> >> It is returning some confusing (to me) results. I'd like
> >> to get these resolved before posting any benchmark
> >> results.
> >>
> >> 1. When it has visited every array element, it returns the
> >> same value as was passed in
> On Mar 22, 2021, at 3:49 PM, Mel Gorman wrote:
>
> On Mon, Mar 22, 2021 at 06:25:03PM +, Chuck Lever III wrote:
>>
>>
>>> On Mar 22, 2021, at 5:18 AM, Mel Gorman wrote:
>>>
>>> This series is based on top of Matthew Wilcox's series "Rationalise
>>> __alloc_pages wrapper" and does not
On Mon, Mar 22, 2021 at 06:25:03PM +, Chuck Lever III wrote:
>
>
> > On Mar 22, 2021, at 5:18 AM, Mel Gorman wrote:
> >
> > This series is based on top of Matthew Wilcox's series "Rationalise
> > __alloc_pages wrapper" and does not apply to 5.12-rc2. If you want to
> > test and are not usin
system
> series, the mm patches have to be merged before the sunrpc and net
> users.
>
> Changelog since v2
> o Prep new pages with IRQs enabled
> o Minor documentation update
>
> Changelog since v1
> o Parenthesise binary and boolean comparisons
> o Add reviewed-b
On Mon, Mar 22, 2021 at 01:04:46PM +0100, Jesper Dangaard Brouer wrote:
> On Mon, 22 Mar 2021 09:18:42 +
> Mel Gorman wrote:
>
> > This series is based on top of Matthew Wilcox's series "Rationalise
> > __alloc_pages wrapper" and does not apply to 5.12-rc2. If you want to
> > test and are not
On Mon, 22 Mar 2021 09:18:42 +
Mel Gorman wrote:
> This series is based on top of Matthew Wilcox's series "Rationalise
> __alloc_pages wrapper" and does not apply to 5.12-rc2. If you want to
> test and are not using Andrew's tree as a baseline, I suggest using the
> following git tree
>
> gi
2
o Prep new pages with IRQs enabled
o Minor documentation update
Changelog since v1
o Parenthesise binary and boolean comparisons
o Add reviewed-bys
o Rebase to 5.12-rc2
This series introduces a bulk order-0 page allocator with the
intent that sunrpc and the network page pool become the first user
> > > > >
> > > > > git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git
> > > > > mm-bulk-rebase-v4r2
> > > >
> > > > I gave this series a go on my setup, it showed a bump of 10 Mbps on
> > > > UDP forwarding, but drop
is series a go on my setup, it showed a bump of 10 Mbps on
> > > UDP forwarding, but dropped TCP forwarding by almost 50 Mbps.
> > >
> > > (4 core 1.2GHz MIPS32 R2, page size of 16 Kb, Page Pool order-0
> > > allocations with MTU of 1508 bytes, linear frames via build
> > > git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git
> > > mm-bulk-rebase-v4r2
> >
> > I gave this series a go on my setup, it showed a bump of 10 Mbps on
> > UDP forwarding, but dropped TCP forwarding by almost 50 Mbps.
> >
> > (4 core 1.2GHz MIPS32
setup, it showed a bump of 10 Mbps on
> UDP forwarding, but dropped TCP forwarding by almost 50 Mbps.
>
> (4 core 1.2GHz MIPS32 R2, page size of 16 Kb, Page Pool order-0
> allocations with MTU of 1508 bytes, linear frames via build_skb(),
> GRO + TSO/USO)
What NIC driver is this?
suggest using the
> following git tree
>
> git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux.git
> mm-bulk-rebase-v4r2
I gave this series a go on my setup, it showed a bump of 10 Mbps on
UDP forwarding, but dropped TCP forwarding by almost 50 Mbps.
(4 core 1.2GHz MIPS32
host byte order, then
converting the result when assigning it into the header field.
Signed-off-by: Alex Elder
---
v5: - Use skb_network_header_len() to decide the checksum offset.
.../net/ethernet/qualcomm/rmnet/rmnet_map_data.c | 16 ++--
1 file changed, 6 insertions(+), 10
forced type
conversion that makes it hard to understand.
Simplify this by computing the offset in host byte order, then
converting the result when assigning it into the header field.
Signed-off-by: Alex Elder
Reviewed-by: Bjorn Andersson
---
.../ethernet/qualcomm/rmnet/rmnet_map_data.c | 22
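As a rough, self-contained illustration of that pattern (a hypothetical
header struct, not the actual rmnet code): do the arithmetic in host byte
order and convert exactly once, when the result is stored into the
big-endian header field.

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

struct example_csum_header {
	uint16_t csum_start_offset;	/* stored big-endian on the wire */
};

int main(void)
{
	struct example_csum_header hdr;
	uint16_t network_header_len = 20;	/* e.g. an IPv4 header */

	/* Compute in host byte order ... */
	uint16_t offset = network_header_len;

	/* ... and convert only when assigning into the header field. */
	hdr.csum_start_offset = htons(offset);

	printf("stored value: 0x%04x\n", hdr.csum_start_offset);
	return 0;
}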
n that makes it hard to understand.
>
> Simplify this by computing the offset in host byte order, then
> converting the result when assigning it into the header field.
>
> Signed-off-by: Alex Elder
> Reviewed-by: Bjorn Andersson
> ---
> .../ethernet/qualcomm/rmnet/rmnet_ma
host byte order, then
converting the result when assigning it into the header field.
Signed-off-by: Alex Elder
Reviewed-by: Bjorn Andersson
---
.../ethernet/qualcomm/rmnet/rmnet_map_data.c | 22 ++-
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/drivers/net
w pages with IRQs enabled
o Minor documentation update
Changelog since v1
o Parenthesise binary and boolean comparisons
o Add reviewed-bys
o Rebase to 5.12-rc2
This series introduces a bulk order-0 page allocator with sunrpc and
the network page pool being the first users. The implementation is not
particular
> but whether it is actually noticable when page zeroing has to happen
> is a completely different story. It's a similar story for SLUB, we know
> lower order allocations hurt some microbenchmarks like hackbench-sockets
> but have not quantified what happens if SLUB batch allocates page
From: Hoang Le
(struct tipc_link_info)->dest is in network order (__be32), so we must
convert the value to network order before assigning. The problem detected
by sparse:
net/tipc/netlink_compat.c:699:24: warning: incorrect type in assignment
(different base types)
net/tipc/netlink_compa
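A minimal sketch of the fix being described, using a simplified struct
rather than the real tipc one: the destination field holds a big-endian
value, so the host-order node address is converted with htonl() at the
point of assignment.

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

struct example_link_info {
	uint32_t dest;	/* network byte order, like a __be32 field */
};

int main(void)
{
	struct example_link_info info;
	uint32_t node = 0x01001001;	/* host-order node address */

	info.dest = htonl(node);	/* convert before assigning */
	printf("dest field: 0x%08x (host view: 0x%08x)\n",
	       info.dest, ntohl(info.dest));
	return 0;
}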
Hello:
This series was applied to netdev/net-next.git (refs/heads/master):
On Thu, 11 Mar 2021 10:33:22 +0700 you wrote:
> From: Hoang Le
>
> (struct tipc_link_info)->dest is in network order (__be32), so we must
> convert the value to network order before assigning. The proble
Changelog since v3
o Prep new pages with IRQs enabled
o Minor documentation update
Changelog since v1
o Parenthesise binary and boolean comparisons
o Add reviewed-bys
o Rebase to 5.12-rc2
This series introduces a bulk order-0 page allocator with sunrpc and
the network page pool being the first
On Wed, Mar 10, 2021 at 03:47:04PM -0800, Andrew Morton wrote:
> On Wed, 10 Mar 2021 10:46:13 + Mel Gorman
> wrote:
>
> > This series introduces a bulk order-0 page allocator with sunrpc and
> > the network page pool being the first users.
>
>
>
> Right
From: Tariq Toukan
Currently we are allocating a high-order page for EQs. In case of a
fragmented system, VF hot remove/add in VMs for example, there isn't
enough contiguous memory for EQ allocation, which results in the VM
crashing.
Therefore, use order-0 fragments for the EQ allocations
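The general idea, sketched below as a hypothetical kernel-style helper (not
the actual mlx5 code): instead of a single high-order allocation that can
fail when memory is fragmented, back the queue with an array of
independently allocated order-0 pages.

#include <linux/gfp.h>
#include <linux/slab.h>

/* Allocate nr_pages order-0 pages instead of one order-N block (sketch). */
static struct page **example_alloc_eq_frags(int nr_pages)
{
	struct page **pages;
	int i;

	pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return NULL;

	for (i = 0; i < nr_pages; i++) {
		pages[i] = alloc_page(GFP_KERNEL);
		if (!pages[i])
			goto err_free;
	}
	return pages;

err_free:
	while (--i >= 0)
		__free_page(pages[i]);
	kfree(pages);
	return NULL;
}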
From: Hoang Le
(struct tipc_link_info)->dest is in network order (__be32), so we must
convert the value to network order before assigning. The problem detected
by sparse:
net/tipc/netlink_compat.c:699:24: warning: incorrect type in assignment
(different base types)
net/tipc/netlink_compa
On Wed, 10 Mar 2021 10:46:13 + Mel Gorman
wrote:
> This series introduces a bulk order-0 page allocator with sunrpc and
> the network page pool being the first users.
Right now, the [0/n] doesn't even tell us that it's a performance
patchset!
The whole point of this pat
Changelog since v1
o Parenthesise binary and boolean comparisons
o Add reviewed-bys
o Rebase to 5.12-rc2
This series introduces a bulk order-0 page allocator with sunrpc and
the network page pool being the first users. The implementation is not
particularly efficient and the intention is to iron
host byte order, then
converting the result when assigning it into the header field.
Signed-off-by: Alex Elder
Reviewed-by: Bjorn Andersson
---
.../ethernet/qualcomm/rmnet/rmnet_map_data.c | 22 ++-
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/drivers/net
nt to fix this issue.
>
> [0] https://github.com/openwrt/openwrt/pull/3959
>
> Fixes: 7110d80d53f4 ("libbpf: Makefile set specified permission mode")
> Reported-by: Georgi Valkov
> Signed-off-by: Andrii Nakryiko
>
> [...]
Here is the summary with links:
acOS.
> Move -m to the front to fix this issue.
>
> [0] https://github.com/openwrt/openwrt/pull/3959
>
> [...]
Here is the summary with links:
- [v2,bpf] libbpf: fix INSTALL flag order
https://git.kernel.org/bpf/bpf/c/e7fb6465d4c8
You are awesome, thank you!
--
Deet-
From: Georgi Valkov
It was reported ([0]) that having the optional -m flag between the source and
destination arguments of the install command breaks the bpftools cross-build on MacOS.
Move -m to the front to fix this issue.
[0] https://github.com/openwrt/openwrt/pull/3959
Fixes: 7110d80d53f4 ("libbpf: Makef
host byte order, then
converting the result when assigning it into the header field.
Signed-off-by: Alex Elder
Reviewed-by: Bjorn Andersson
---
.../ethernet/qualcomm/rmnet/rmnet_map_data.c | 22 ++-
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/drivers/net
It was reported ([0]) that having the optional -m flag between the source and
destination arguments of the install command breaks the bpftools cross-build on MacOS.
Move -m to the front to fix this issue.
[0] https://github.com/openwrt/openwrt/pull/3959
Fixes: 7110d80d53f4 ("libbpf: Makefile set specified perm
10 DS: ES: CR0: 80050033
> CR2: 0198 CR3: 4460a001 CR4: 001706f0
> Call Trace:
> dev_hard_start_xmit+0xc7/0x1e0
> sch_direct_xmit+0x10f/0x310
>
> [...]
Here is the summary with links:
- [net] ethernet: alx: fix order of cal
On Fri, 5 Mar 2021 14:17:29 -0800 Jakub Kicinski wrote:
> netif_device_attach() will unpause the queues so we can't call
> it before __alx_open(). This went undetected until
> commit b0999223f224 ("alx: add ability to allocate and free
> alx_napi structures") but now if stack tries to xmit immedia
netif_device_attach() will unpause the queues so we can't call
it before __alx_open(). This went undetected until
commit b0999223f224 ("alx: add ability to allocate and free
alx_napi structures") but now if stack tries to xmit immediately
on resume before __alx_open() we'll crash on the NAPI being
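In other words, the device has to be fully re-opened before
netif_device_attach() makes the queues visible to the stack again. A hedged
sketch of that ordering (example_reopen() is a hypothetical stand-in for
__alx_open(), not the real driver code):

#include <linux/netdevice.h>

static int example_reopen(struct net_device *dev);	/* hypothetical */

/* Sketch: re-initialise the device first, attach (and unpause queues) last. */
static int example_resume(struct net_device *dev)
{
	int err;

	err = example_reopen(dev);
	if (err)
		return err;

	/* Only now may the stack start transmitting again. */
	netif_device_attach(dev);
	return 0;
}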
forced type
conversion that makes it hard to understand.
Simplify this by computing the offset in host byte order, then
converting the result when assigning it into the header field.
Signed-off-by: Alex Elder
---
.../ethernet/qualcomm/rmnet/rmnet_map_data.c | 22 ++-
1 file changed
akes it hard to understand.
>
> Simplify this by computing the offset in host byte order, then
> converting the result when assigning it into the header field.
>
> Signed-off-by: Alex Elder
> ---
> .../ethernet/qualcomm/rmnet/rmnet_map_data.c | 22 ++-
>
host byte order, then
converting the result when assigning it into the header field.
Signed-off-by: Alex Elder
---
.../ethernet/qualcomm/rmnet/rmnet_map_data.c | 22 ++-
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/drivers/net/ethernet/qualcomm/rmnet
This series introduces a bulk order-0 page allocator with sunrpc and
the network page pool being the first users. The implementation is not
particularly efficient and the intention is to iron out what the semantics
of the API should have for users. Once the semantics are ironed out, it can
be made
This is a followup to Mel Gorman's patchset:
- Message-Id: <20210224102603.19524-1-mgor...@techsingularity.net>
-
https://lore.kernel.org/netdev/20210224102603.19524-1-mgor...@techsingularity.net/
Showing page_pool usage of the API for alloc_pages_bulk().
Maybe Mel Gorman will/can carry these
> > > > zone->free_area[order].nr_free
> > > >
> > > > This was really tricky to find. I was puzzled why perf reported that
> > > > rmqueue_bulk was using 44% of the time in an imul operation:
> > > >
> > > > │
On Thu, Feb 25, 2021 at 04:16:33PM +0100, Jesper Dangaard Brouer wrote:
> > On Wed, Feb 24, 2021 at 07:56:51PM +0100, Jesper Dangaard Brouer wrote:
> > > Avoid multiplication (imul) operations when accessing:
> > > zone->free_area[order].nr_free
> > >
>
> On Wed, Feb 24, 2021 at 07:56:51PM +0100, Jesper Dangaard Brouer wrote:
> > Avoid multiplication (imul) operations when accessing:
> > zone->free_area[order].nr_free
> >
> > This was really tricky to find. I was puzzled why perf reported that
> > r
e:
> Avoid multiplication (imul) operations when accessing:
> zone->free_area[order].nr_free
>
> This was really tricky to find. I was puzzled why perf reported that
> rmqueue_bulk was using 44% of the time in an imul operation:
>
> │ del_page_from_free_list():
Avoid multiplication (imul) operations when accessing:
zone->free_area[order].nr_free
This was really tricky to find. I was puzzled why perf reported that
rmqueue_bulk was using 44% of the time in an imul operation:
│ del_page_from_free_list():
44,54 │ e2: imul $0x58,%rax,%
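For reference, a small standalone sketch of the underlying issue
(illustrative structs, not the actual mm/ patch): with an 88-byte (0x58)
element, indexing zone->free_area[order] needs an imul, whereas padding the
element to a power-of-two size lets the compiler use a shift instead.

#include <stdio.h>

/* Roughly the pre-patch situation: an 88-byte (0x58) array element. */
struct free_area_unpadded {
	void *free_list[6];
	unsigned long nr_free;
	char pad[88 - 6 * sizeof(void *) - sizeof(unsigned long)];
};

/* Padded so the element size is a power of two (128 bytes here). */
struct free_area_padded {
	void *free_list[6];
	unsigned long nr_free;
} __attribute__((aligned(128)));

int main(void)
{
	printf("unpadded element: %zu bytes\n", sizeof(struct free_area_unpadded));
	printf("padded element:   %zu bytes\n", sizeof(struct free_area_padded));
	return 0;
}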
Jesper Dangaard Brouer (3):
net: page_pool: refactor dma_map into own function page_pool_dma_map
net: page_pool: use alloc_pages_bulk in refill code path
mm: make zone->free_area[order] access faster
include/linux/mmzone.h | 6 ++-
net/core/page_p
> On Feb 24, 2021, at 5:26 AM, Mel Gorman wrote:
>
> This is a prototype series that introduces a bulk order-0 page allocator
> with sunrpc being the first user. The implementation is not particularly
> efficient and the intention is to iron out what the semantics of the API
On Wed, Feb 24, 2021 at 12:27:23PM +0100, Jesper Dangaard Brouer wrote:
> On Wed, 24 Feb 2021 10:26:00 +
> Mel Gorman wrote:
>
> > This is a prototype series that introduces a bulk order-0 page allocator
> > with sunrpc being the first user. The implementatio
On Wed, 24 Feb 2021 10:26:00 +
Mel Gorman wrote:
> This is a prototype series that introduces a bulk order-0 page allocator
> with sunrpc being the first user. The implementation is not particularly
> efficient and the intention is to iron out what the semantics of the API
> sho
This is a prototype series that introduces a bulk order-0 page allocator
with sunrpc being the first user. The implementation is not particularly
efficient and the intention is to iron out what the semantics of the API
should be. That said, sunrpc was reported to have reduced allocation
latency
From: Edwin Peer
A TX queue can potentially time out immediately after it is stopped if
the last TX timestamp on that queue was more than 5 seconds ago and the
carrier is still up. Prevent these intermittent false TX timeouts by
bringing the carrier down first, before calling netif_tx_disable().
Fixes: c0
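A minimal kernel-style sketch of that ordering (an assumed helper, not the
actual bnxt_en code): take the carrier down before disabling the TX queues
so the stopped queues cannot trigger the watchdog.

#include <linux/netdevice.h>

/* Sketch: signal link-down first, then stop the TX queues. */
static void example_stop_tx(struct net_device *dev)
{
	/* With the carrier down, the TX watchdog no longer treats the
	 * stopped queues' stale trans_start as a timeout. */
	netif_carrier_off(dev);
	netif_tx_disable(dev);
}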
akub Kicinski ; Network
> > Development ; Andrew Lunn ;
> > Florian Fainelli ; Willem de Bruijn
> >
> > Subject: Re: [PATCH net-next 2/2] net: stmmac: slightly adjust the order of
> > the
> > codes in stmmac_resume()
> >
> > On Thu, Feb 4, 2021 at 5:18
jn
>
> Subject: Re: [PATCH net-next 2/2] net: stmmac: slightly adjust the order of
> the
> codes in stmmac_resume()
>
> On Thu, Feb 4, 2021 at 5:18 AM Joakim Zhang
> wrote:
> >
> > Slightly adjust the order of the codes in stmmac_resume(), remove the
> &g
On Thu, Feb 4, 2021 at 5:18 AM Joakim Zhang wrote:
>
> Slightly adjust the order of the codes in stmmac_resume(), remove the
> check "if (!device_may_wakeup(priv->device) || !priv->plat->pmt)".
>
> Signed-off-by: Joakim Zhang
This commit message says what the
Slightly adjust the order of the code in stmmac_resume() and remove the
check "if (!device_may_wakeup(priv->device) || !priv->plat->pmt)".
Signed-off-by: Joakim Zhang
---
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c | 14 ++
1 file changed, 6 insertions(+), 8
From: Vladimir Oltean
In general it is desirable that cleanup is the reverse process of setup.
In this case I am not seeing any particular issue, but with the
introduction of devlink-sb for felix, a non-obvious decision had to be
made as to where to put its cleanup method. When there's a conventi
We check separately for REMOVING and PROBING in ibmvnic_reset().
Switch the order of checks to facilitate better locking when
checking for REMOVING/REMOVED state.
Fixes: 6a2fb0e99f9c ("ibmvnic: driver initialization for kdump/kexec")
Signed-off-by: Sukadev Bhattiprolu
---
drivers/ne
From: Vladimir Oltean
In general it is desirable that cleanup is the reverse process of setup.
In this case I am not seeing any particular issue, but with the
introduction of devlink-sb for felix, a non-obvious decision had to be
made as to where to put its cleanup method. When there's a conventi
On 1/8/2021 9:59 AM, Vladimir Oltean wrote:
> From: Vladimir Oltean
>
> In general it is desirable that cleanup is the reverse process of setup.
> In this case I am not seeing any particular issue, but with the
> introduction of devlink-sb for felix, a non-obvious decision had to be
> made as
From: Vladimir Oltean
In general it is desirable that cleanup is the reverse process of setup.
In this case I am not seeing any particular issue, but with the
introduction of devlink-sb for felix, a non-obvious decision had to be
made as to where to put its cleanup method. When there's a conventi
We check separately for REMOVING and PROBING in ibmvnic_reset().
Switch the order of checks to facilitate better locking when
checking for REMOVING/REMOVED state.
Fixes: 6a2fb0e99f9c ("ibmvnic: driver initialization for kdump/kexec")
Signed-off-by: Sukadev Bhattiprolu
---
drivers/ne
From: Vladimir Oltean
In general it is desirable that cleanup is the reverse process of setup.
In this case I am not seeing any particular issue, but with the
introduction of devlink-sb for felix, a non-obvious decision had to be
made as to where to put its cleanup method. When there's a conventi
>>>> On 22.12.2020 21:14, Hongwei Zhang wrote:
>>>>> Dear Reviewer,
>>>>>
>>>>> Use native MAC address is preferred over other choices, thus change the
>>>>> order
>>>>> of reading MAC address, try to read it from MAC chip first, if
> > > > Dear Reviewer,
> > > >
> > > > Use native MAC address is preferred over other choices, thus change the
> > > > order
> > > > of reading MAC address, try to read it from MAC chip first, if it's not
> > > > available, then try to read it from
On Tue, 22 Dec 2020 22:00:34 +0100 Andrew Lunn wrote:
> On Tue, Dec 22, 2020 at 09:46:52PM +0100, Heiner Kallweit wrote:
> > On 22.12.2020 21:14, Hongwei Zhang wrote:
> > > Dear Reviewer,
> > >
> > > Use native MAC address is preferred over other choices, t
On Tue, Dec 22, 2020 at 09:46:52PM +0100, Heiner Kallweit wrote:
> On 22.12.2020 21:14, Hongwei Zhang wrote:
> > Dear Reviewer,
> >
> > Use native MAC address is preferred over other choices, thus change the
> > order
> > of reading MAC address, try to read it
On 22.12.2020 21:14, Hongwei Zhang wrote:
> Dear Reviewer,
>
> Use native MAC address is preferred over other choices, thus change the order
> of reading MAC address, try to read it from MAC chip first, if it's not
> available, then try to read it from device tree.
>
>
Dear Reviewer,
Using the native MAC address is preferred over other choices, so change the order
of reading the MAC address: try to read it from the MAC chip first; if it's not
available, then try to read it from the device tree.
Hi Heiner,
> From: Heiner Kallweit
> Sent: Monday, December 21, 2
Change the order of reading the MAC address: try to read it from the MAC chip
first; if it's not available, then try to read it from the device tree.
Fixes: 35c54922dc97 ("ARM: dts: tacoma: Add reserved memory for ramoops")
Signed-off-by: Hongwei Zhang
---
drivers/net/ethernet/faraday/f
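A minimal sketch of that fallback order (hypothetical helpers and simplified
flow, not the actual ftgmac100 code): prefer the address read from the MAC
chip and use the device-tree address only if the chip one is invalid.

#include <linux/etherdevice.h>

/* Choose the MAC chip address when it is valid, else the DT one (sketch). */
static void example_choose_mac(const u8 *chip_addr, const u8 *dt_addr,
			       u8 chosen[ETH_ALEN])
{
	if (is_valid_ether_addr(chip_addr))
		ether_addr_copy(chosen, chip_addr);
	else
		ether_addr_copy(chosen, dt_addr);
}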
Dear Reviewer,
Using the native MAC address is preferred over other choices, so change the order
of reading the MAC address: try to read it from the MAC chip first; if it's not
available, then try to read it from the device tree.
Thanks,
--Hongwei
Changelog:
v2:
- Corrected comments in the patch
v1:
Am 21.12.2020 um 21:51 schrieb Hongwei Zhang:
> Change the order of reading MAC address, try to read it from MAC chip
> first, if it's not available, then try to read it from device tree.
>
This commit message leaves a number of questions. It seems the change
isn't related a
Change the order of reading the MAC address: try to read it from the MAC chip
first; if it's not available, then try to read it from the device tree.
Fixes: 35c54922dc97 ("ARM: dts: tacoma: Add reserved memory for ramoops")
Signed-off-by: Hongwei Zhang
---
drivers/net/ethernet/faraday/f
Dear Reviewer,
Using the native MAC address is preferred over other choices, so change the order
of reading the MAC address: try to read it from the MAC chip first; if it's not
available, then try to read it from the device tree.
Hongwei Zhang (1):
net: ftgmac100: Change the order of getting MAC ad
Daniel Borkmann writes:
> On 12/9/20 2:57 PM, Toke Høiland-Jørgensen wrote:
>> This series restores the test_offload.py selftest to working order. It seems
>> a
>> number of subtle behavioural changes have crept into various subsystems which
>> broke test_offload.py i
On 12/9/20 2:57 PM, Toke Høiland-Jørgensen wrote:
This series restores the test_offload.py selftest to working order. It seems a
number of subtle behavioural changes have crept into various subsystems which
broke test_offload.py in a number of ways. Most of these are fairly benign
changes where
Hello:
This series was applied to bpf/bpf.git (refs/heads/master):
On Wed, 09 Dec 2020 14:57:36 +0100 you wrote:
> This series restores the test_offload.py selftest to working order. It seems a
> number of subtle behavioural changes have crept into various subsystems which