This adds the vhost backend callouts for the worker ioctls added in the
6.4 linux kernel commit:
c1ecd8e95007 ("vhost: allow userspace to create workers")
Signed-off-by: Mike Christie
Reviewed-by: Stefano Garzarella
Reviewed-by: Stefan Hajnoczi
---
hw/virtio/vhost-backend.c
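For context, here is a minimal sketch of how a userspace backend could drive the worker ioctls from that commit. The ioctl and struct names (VHOST_NEW_WORKER, VHOST_ATTACH_VRING_WORKER, struct vhost_worker_state, struct vhost_vring_worker) come from the 6.4 uapi headers; the helper itself is illustrative only and is not the actual QEMU callout code:

/*
 * Illustrative only: create one kernel worker and attach a virtqueue to it.
 * QEMU's real vhost-backend callouts wrap these ioctls behind its vhost_ops
 * interface; error handling here is minimal.
 */
#include <sys/ioctl.h>
#include <linux/vhost.h>

static int attach_new_worker(int vhost_fd, unsigned int vq_index)
{
	struct vhost_worker_state w = {};
	struct vhost_vring_worker vw = {};

	/* The kernel fills in w.worker_id for the new worker thread. */
	if (ioctl(vhost_fd, VHOST_NEW_WORKER, &w) < 0)
		return -1;

	vw.index = vq_index;
	vw.worker_id = w.worker_id;
	return ioctl(vhost_fd, VHOST_ATTACH_VRING_WORKER, &vw);
}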
The following patches allow users to configure the vhost worker threads
for vhost-scsi. With vhost-net we get a worker thread per rx/tx virtqueue
pair, but for vhost-scsi we get one worker for all virtqueues. This
becomes a bottleneck after 2 queues are used.
In the upstream linux kernel commit:
h
When trying to send IO to more than 2 virtqueues, the single
thread becomes a bottleneck.
This patch adds a new setting, worker_per_virtqueue, which can be set
to:
false: Existing behavior where we get the single worker thread.
true: Create a worker per IO virtqueue.
Signed-off-by: Mike Christie
Rev
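For readers less familiar with the QEMU side, a boolean device property like the one described above is normally declared with the qdev property macros. This is only a sketch: the struct and field placement (VHostSCSICommon, conf.worker_per_virtqueue) are assumptions for illustration, not the patch's actual code:

static Property vhost_scsi_properties[] = {
    /* Sketch only: the field location is assumed for illustration. */
    DEFINE_PROP_BOOL("worker_per_virtqueue", VHostSCSICommon,
                     conf.worker_per_virtqueue, false),
    DEFINE_PROP_END_OF_LIST(),
};

Assuming the property is wired through as above, it would then be toggled on the command line with something like -device vhost-scsi-pci,...,worker_per_virtqueue=true.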
On 11/29/23 3:30 AM, Stefano Garzarella wrote:
> On Sun, Nov 26, 2023 at 06:28:34PM -0600, Mike Christie wrote:
>> This adds support for vhost-scsi to be able to create a worker thread
>> per virtqueue. Right now for vhost-net we get a worker thread per
>> tx/rx virtqueue pai
This adds the vhost backend callouts for the worker ioctls added in the
6.4 linux kernel commit:
c1ecd8e95007 ("vhost: allow userspace to create workers")
Signed-off-by: Mike Christie
---
hw/virtio/vhost-backend.c | 28
include/hw/virtio/vhost
The following patches allow users to configure the vhost worker threads
for vhost-scsi. With vhost-net we get a worker thread per rx/tx virtqueue
pair, but for vhost-scsi we get one worker for all virtqueues. This
becomes a bottleneck after 2 queues are used.
In the upstream linux kernel commit:
h
When trying to send IO to more than 2 virtqueues, the single
thread becomes a bottleneck.
This patch adds a new setting, workers_per_virtqueue, which can be set
to:
false: Existing behavior where we get the single worker thread.
true: Create a worker per IO virtqueue.
Signed-off-by: Mike Christie
--
On 11/15/23 6:57 AM, Stefan Hajnoczi wrote:
> On Wed, Nov 15, 2023 at 12:43:02PM +0100, Stefano Garzarella wrote:
>> On Mon, Nov 13, 2023 at 06:36:44PM -0600, Mike Christie wrote:
>>> This adds support for vhost-scsi to be able to create a worker thread
>>> per virtque
On 11/15/23 5:43 AM, Stefano Garzarella wrote:
> On Mon, Nov 13, 2023 at 06:36:44PM -0600, Mike Christie wrote:
>> This adds support for vhost-scsi to be able to create a worker thread
>> per virtqueue. Right now for vhost-net we get a worker thread per
>> tx/rx virtqueue pai
This adds the vhost backend callouts for the worker ioctls added in the
6.4 linux kernel commit:
c1ecd8e95007 ("vhost: allow userspace to create workers")
Signed-off-by: Mike Christie
---
hw/virtio/vhost-backend.c | 28
include/hw/virtio/vhost
The following patches allow users to configure the vhost worker threads
for vhost-scsi. With vhost-net we get a worker thread per rx/tx virtqueue
pair, but for vhost-scsi we get one worker for all virtqueues. This
becomes a bottleneck after 2 queues are used.
In the upstream linux kernel commit:
h
When trying to send IO to more than 2 virtqueues, the single
thread becomes a bottleneck.
This patch adds a new setting, virtqueue_workers, which can be set to:
1: Existing behavior where we get the single thread.
-1: Create a worker per IO virtqueue.
Signed-off-by: Mike Christie
---
hw/scsi/vhost-s
What was the issue you are seeing?
Was it something like: you get the UA, we retry, then on one of the
retries the sense is not set up correctly, so the SCSI error handler
runs? That fails and the device goes offline?
If you turn on scsi debugging you would see:
[ 335.445922] sd 0:0:0:0: [sda] ta
I just realized I forgot to cc the virt list, so I am adding it now.
Christian, see the very bottom for a different fork patch.
On 7/12/21 7:05 AM, Stefan Hajnoczi wrote:
> On Fri, Jul 09, 2021 at 11:25:37AM -0500, Mike Christie wrote:
>> Hi,
>>
>> The goal of this email is to try a
Hi,
The goal of this email is to try and figure out how we want to track/limit the
number of kernel threads created by vhost devices.
Background:
---
For vhost-scsi, we've hit an issue where the single vhost worker thread can't
handle all the IO being sent from multiple queues. IOPs is stuck a
On 11/19/20 10:24 AM, Stefan Hajnoczi wrote:
On Thu, Nov 19, 2020 at 4:13 PM Mike Christie
wrote:
On 11/19/20 8:46 AM, Michael S. Tsirkin wrote:
On Wed, Nov 18, 2020 at 11:31:17AM +, Stefan Hajnoczi wrote:
My preference has been:
1. If we were to ditch cgroups, then add a new interface
On 11/19/20 8:46 AM, Michael S. Tsirkin wrote:
On Wed, Nov 18, 2020 at 11:31:17AM +, Stefan Hajnoczi wrote:
My preference has been:
1. If we were to ditch cgroups, then add a new interface that would allow
us to bind threads to a specific CPU, so that it lines up with the guest's
mq to CPU
On 11/18/20 10:35 PM, Jason Wang wrote:
it's just extra code. This patch:
https://www.spinics.net/lists/linux-scsi/msg150151.html
would work without the ENABLE ioctl I mean.
That
On 11/18/20 1:54 AM, Jason Wang wrote:
On 2020/11/18 2:57 PM, Mike Christie wrote:
On 11/17/20 11:17 PM, Jason Wang wrote:
On 2020/11/18 12:40 AM, Stefan Hajnoczi wrote:
On Thu, Nov 12, 2020 at 05:18:59PM -0600, Mike Christie wrote:
The following kernel patches were made over Michael's
On 11/18/20 12:57 AM, Mike Christie wrote:
> On 11/17/20 11:17 PM, Jason Wang wrote:
>>
>> On 2020/11/18 12:40 AM, Stefan Hajnoczi wrote:
>>> On Thu, Nov 12, 2020 at 05:18:59PM -0600, Mike Christie wrote:
>>>> The following kernel patches were made over Mic
On 11/17/20 11:17 PM, Jason Wang wrote:
>
> On 2020/11/18 12:40 AM, Stefan Hajnoczi wrote:
>> On Thu, Nov 12, 2020 at 05:18:59PM -0600, Mike Christie wrote:
>>> The following kernel patches were made over Michael's vhost branch:
>>>
>>> https://urldef
On 11/17/20 10:40 AM, Stefan Hajnoczi wrote:
> On Thu, Nov 12, 2020 at 05:18:59PM -0600, Mike Christie wrote:
>> The following kernel patches were made over Michael's vhost branch:
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git/log/?h=vhost
>>
This adds a new ioctl VHOST_SET_VRING_ENABLE that the vhost drivers can
implement a callout for and execute an operation when the vq is
enabled/disabled.
Signed-off-by: Mike Christie
---
drivers/vhost/vhost.c | 25 +
drivers/vhost/vhost.h | 1 +
include/uapi
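To make the shape of the change concrete, here is a hedged sketch of what such a callout could look like. The vhost_dev_ops struct is mentioned elsewhere in this series, but the member name and signature below are assumptions, not the patch's actual interface:

/* Sketch only (names and signature assumed): when the core ioctl handler
 * sees VHOST_SET_VRING_ENABLE, it invokes a driver-provided callout so a
 * driver like vhost-scsi can set up or tear down per-vq resources. */
struct vhost_dev_ops {
	int (*vring_enable)(struct vhost_virtqueue *vq, bool enable);
};

static int vhost_vring_set_enable(struct vhost_dev *d,
				  struct vhost_virtqueue *vq, bool enable)
{
	if (d->ops && d->ops->vring_enable)
		return d->ops->vring_enable(vq, enable);
	return 0;
}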
We use like 3 coding styles in this struct. Switch to just tabs.
Signed-off-by: Mike Christie
Reviewed-by: Chaitanya Kulkarni
---
drivers/vhost/vhost.h | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index 1ba8e81
In the last patches we are going to have a worker thread per IO vq.
This patch separates the scsi cmd completion code paths so we can
complete cmds based on their vq instead of having all cmds complete
on the same worker thread.
Signed-off-by: Mike Christie
---
drivers/vhost/scsi.c | 48
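As a rough illustration of the data-structure side of that change (the field names here are assumptions for the sketch, not the patch's exact layout):

/* Sketch only: each vq keeps its own completion list and work item, so
 * responses can be completed from the worker that owns that vq instead of
 * one device-wide completion path. */
struct vhost_scsi_virtqueue {
	struct vhost_virtqueue vq;
	struct vhost_work completion_work;
	struct llist_head completion_list;
};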
The following kernel patches were made over Michael's vhost branch:
https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git/log/?h=vhost
and the vhost-scsi bug fix patchset:
https://lore.kernel.org/linux-scsi/20201112170008.GB1555653@stefanha-x1.localdomain/T/#t
And the qemu patch was mad
This patch has vhost-scsi create a worker thread per IO vq.
It also adds a modparam to enable the feature, because I was thinking
existing setups might not be expecting the extra threads, so the
default is to use the old single-thread, multiple-vq behavior.
Signed-off-by: Mike Christie
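For illustration, a modparam of that kind would normally look roughly like the following; the parameter name, permissions, and description are assumptions, not necessarily what the patch used:

/* Sketch only: an opt-in knob so existing setups keep the single-worker
 * default behavior. */
#include <linux/module.h>

static bool worker_per_io_vq;
module_param(worker_per_io_vq, bool, 0444);
MODULE_PARM_DESC(worker_per_io_vq,
		 "Create a vhost worker thread per IO virtqueue (default: off)");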
multiple worker support
or for the case where the user just does not want to allocate the
resources, we maintain support for the single worker case.
Note: This adds a new function vhost_vq_work_queue. It's used by
this patch and also the next one, so I exported it here.
Signed-off-by:
H can be anywhere from 32 to 128.
With the patches in this set and the patches to remove the sess_cmd_lock
and execution_lock from lio's IO path in the SCSI tree for 5.11, we are
able to get IOPs from a single LUN up to 700K.
Signed-off-by: Mike Christie
---
drivers/vhost/
their responses when the tmf's work is run.
So this patch has vhost-scsi flush the IO vqs on other worker threads
before we send the tmf response.
Signed-off-by: Mike Christie
---
drivers/vhost/scsi.c | 16 ++--
drivers/vhost/vhost.c | 6 ++
drivers/vhost/vhost.h | 1 +
3
This patch, made over the master branch, allows the vhost-scsi
driver to call into the kernel and tell it to enable/disable
a virtqueue.
The kernel patches included with this set will create
a worker per IO vq when multiple IO queues have been set up.
Signed-off-by: Mike Christie
---
hw/scsi
vhost_work_dev_flush call flushed.
Signed-off-by: Mike Christie
---
drivers/vhost/scsi.c | 8
1 file changed, 8 deletions(-)
diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index 8795fd3..4725a08 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -1443,11 +1443,6 @@ static void
The next patch adds a callout so drivers can perform some action when we
get a VHOST_SET_VRING_ENABLE, so this patch moves the msg_handler callout
to a new vhost_dev_ops struct just to keep all the callouts better
organized.
Signed-off-by: Mike Christie
---
drivers/vhost/vdpa.c | 7
vhost_work_flush doesn't do anything with the work arg. This patch drops
it and then renames vhost_work_flush to vhost_work_dev_flush to reflect
that the function flushes all the works in the dev and not just a
specific queue or work item.
Signed-off-by: Mike Christie
Acked-by: Jason
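In prototype form, the rename described above amounts to roughly the following; the exact prototypes are reconstructed from the description, so treat them as approximations:

/* Before: the work argument was accepted but never used. */
void vhost_work_flush(struct vhost_dev *dev, struct vhost_work *work);

/* After: the name reflects that all works queued on the dev are flushed. */
void vhost_work_dev_flush(struct vhost_dev *dev);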
On 03/06/2012 07:51 PM, ronnie sahlberg wrote:
> Hi Mike,
>
> Thanks!
>
> That would be great if you rename it to something less generic and
> specific to libiscsi-utils.
> That means I can continue using libiscsi as the name for my
> multiplatform library.
>
> By the way, if the only user toda
On 03/06/2012 01:58 PM, Mike Christie wrote:
> On 03/06/2012 06:19 AM, Hannes Reinecke wrote:
>> On 03/06/2012 12:06 PM, ronnie sahlberg wrote:
>>> Sorry about this.
>>>
>>> First, libiscsi is a really good name for a general purpose
>>> multiplatform l
On 03/06/2012 06:19 AM, Hannes Reinecke wrote:
> On 03/06/2012 12:06 PM, ronnie sahlberg wrote:
>> Sorry about this.
>>
>> First, libiscsi is a really good name for a general purpose
>> multiplatform library, like libiscsi.
>> Second, a generic name like this is a horribly poor idea for a single
>