On 20/11/2013 15:18, Stefan Hajnoczi wrote:
>> +if (buffer_is_zero(inbuf, s->qdev.blocksize)) {
> Where is inbuf's size checked? It must be s->qdev.blocksize for this
> code to be correct.
>
See scsi_req_length:
    case WRITE_SAME_10:
    case WRITE_SAME_16:
        cmd->xfer = dev->blocksize;
        break;
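So the data-out buffer handed to the emulation is exactly one logical block, and checking s->qdev.blocksize bytes never reads past it. For reference, a simplified stand-in for buffer_is_zero() (the real helper is optimized; this loop is only illustrative):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for buffer_is_zero(): true iff all 'len' bytes are
 * zero.  For WRITE SAME, 'len' is s->qdev.blocksize, matching cmd->xfer. */
static bool buffer_is_zero_sketch(const uint8_t *buf, size_t len)
{
    size_t i;

    for (i = 0; i < len; i++) {
        if (buf[i] != 0) {
            return false;
        }
    }
    return true;
}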
On Tue, Nov 19, 2013 at 06:07:43PM +0100, Paolo Bonzini wrote:
> +static void scsi_disk_emulate_write_same(SCSIDiskReq *r, uint8_t *inbuf)
> +{
> +    SCSIRequest *req = &r->req;
> +    SCSIDiskState *s = DO_UPCAST(SCSIDiskState, qdev, req->dev);
> +    uint32_t nb_sectors = scsi_data_cdb_length(r->req.cmd.buf);
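For reference, the transfer length scsi_data_cdb_length() extracts here is the block count encoded in the WRITE SAME CDB itself. A stand-alone decoder for just these two opcodes (a hypothetical helper for illustration, not the QEMU function) might look like:

#include <stdint.h>

/* Hypothetical helper: number of blocks requested by a WRITE SAME CDB.
 * Per SBC-3, WRITE SAME(10) carries the transfer length in bytes 7-8 and
 * WRITE SAME(16) in bytes 10-13, both big-endian. */
static uint32_t write_same_num_blocks(const uint8_t *cdb)
{
    switch (cdb[0]) {
    case 0x41: /* WRITE SAME(10) */
        return ((uint32_t)cdb[7] << 8) | cdb[8];
    case 0x93: /* WRITE SAME(16) */
        return ((uint32_t)cdb[10] << 24) | ((uint32_t)cdb[11] << 16) |
               ((uint32_t)cdb[12] << 8) | cdb[13];
    default:
        return 0;
    }
}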
On 19/11/2013 18:23, ronnie sahlberg wrote:
> +#define SCSI_WRITE_SAME_MAX 524288
> ...
> +data->iov.iov_len = MIN(data->nb_sectors * 512, SCSI_WRITE_SAME_MAX);
>
> I don't think you should just clamp the data to 512k, instead I think
> you should report the 512k max write same size through
> BlockLimitsVPD/MaximumWriteSameLength to the initiator.
That means the initiator will do the "split into smaller manageable
chunks" work for you, and you get a 1-to-1 mapping between the WS10/16
commands that the initiator issues to qemu and the write-same calls that
qemu generates.
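Concretely, that suggestion means advertising the limit in the MAXIMUM WRITE SAME LENGTH field of the Block Limits VPD page (page code 0xB0). A minimal sketch, using a hypothetical helper rather than qemu's actual INQUIRY emulation; the field offsets follow SBC-3:

#include <stdint.h>

#define SCSI_WRITE_SAME_MAX 524288

/* Hypothetical helper: report SCSI_WRITE_SAME_MAX, converted to logical
 * blocks, in the MAXIMUM WRITE SAME LENGTH field (an 8-byte big-endian
 * value at bytes 36-43 of VPD page 0xB0). */
static void set_max_write_same_length(uint8_t *vpd_b0, uint32_t blocksize)
{
    uint64_t max_blocks = SCSI_WRITE_SAME_MAX / blocksize;
    int i;

    for (i = 0; i < 8; i++) {
        vpd_b0[36 + i] = (uint8_t)(max_blocks >> (56 - 8 * i));
    }
}

With a 512-byte block size that reports 1024 blocks, so the initiator splits its WRITE SAME requests at the same 512k boundary qemu uses internally.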
Fetch the data to be written from the input buffer. If it is all zeroes,
we can use the write_zeroes call (possibly with the new MAY_UNMAP flag).
Otherwise, do as many write cycles as needed, writing 512k at a time.
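A minimal sketch of that flow, with hypothetical backend_write_zeroes()/backend_write() stand-ins for the real block-layer calls and a zero check like the one sketched earlier:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define SCSI_WRITE_SAME_MAX 524288

/* Assumed stand-ins for the block layer; not qemu's real API. */
bool backend_all_zero(const uint8_t *buf, size_t len);
int backend_write_zeroes(uint64_t lba, uint64_t nb_blocks, bool may_unmap);
int backend_write(uint64_t lba, const uint8_t *buf, uint64_t nb_blocks);

/* A zero pattern becomes one write_zeroes call (unmapping if the CDB's
 * UNMAP bit was set).  Any other pattern is replicated into a bounce
 * buffer of at most SCSI_WRITE_SAME_MAX bytes and written out in chunks. */
static int emulate_write_same(uint64_t lba, uint64_t nb_blocks,
                              const uint8_t *pattern, uint32_t blocksize,
                              bool unmap_bit)
{
    uint64_t chunk_blocks, n, i;
    uint8_t *bounce;
    int ret = 0;

    if (backend_all_zero(pattern, blocksize)) {
        return backend_write_zeroes(lba, nb_blocks, unmap_bit);
    }

    chunk_blocks = SCSI_WRITE_SAME_MAX / blocksize;
    bounce = malloc(chunk_blocks * blocksize);
    if (!bounce) {
        return -1;
    }
    for (i = 0; i < chunk_blocks; i++) {
        memcpy(bounce + i * blocksize, pattern, blocksize);
    }

    while (nb_blocks > 0 && ret == 0) {
        n = nb_blocks < chunk_blocks ? nb_blocks : chunk_blocks;
        ret = backend_write(lba, bounce, n);
        lba += n;
        nb_blocks -= n;
    }

    free(bounce);
    return ret;
}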
Strictly speaking, this is still incorrect because a zero cluster should
only be