On 12/09/2012 11:22, Bharata B Rao wrote:
> FYI, bdrv_find_protocol() fails for protocols like this. It detects the
> protocol
> as "gluster+tcp" and compares it with drv->protocol_name (which is only
> "gluster").
>
> I guess I will have to fix bdrv_find_protocol() to handle the '+' within
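As an illustration of the matching being discussed, here is a small standalone sketch (a hypothetical helper, not QEMU's actual bdrv_find_protocol()) that compares a driver's protocol name against only the part of the scheme that precedes an optional '+':

#include <stdio.h>
#include <string.h>

/* Hypothetical illustration: match only the part of the URI scheme that
 * precedes an optional '+', so that "gluster+tcp://..." still resolves to
 * the "gluster" driver. */
static int protocol_matches(const char *filename, const char *protocol_name)
{
    const char *p = strchr(filename, ':');
    size_t len;

    if (!p) {
        return 0;                       /* no scheme at all */
    }
    len = p - filename;

    /* Only compare up to a '+' if one appears inside the scheme. */
    const char *plus = memchr(filename, '+', len);
    if (plus) {
        len = plus - filename;
    }

    return strlen(protocol_name) == len &&
           strncmp(filename, protocol_name, len) == 0;
}

int main(void)
{
    printf("%d\n", protocol_matches("gluster+tcp://1.2.3.4:0/testvol/dir/a.img",
                                    "gluster"));   /* prints 1 */
    printf("%d\n", protocol_matches("gluster://server/testvol/a.img",
                                    "gluster"));   /* prints 1 */
    printf("%d\n", protocol_matches("nbd://server/export", "gluster")); /* 0 */
    return 0;
}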
On Fri, Sep 07, 2012 at 11:57:58AM +0200, Kevin Wolf wrote:
> On 07.09.2012 11:36, Paolo Bonzini wrote:
> > Hmm, why don't we do the exact same thing as libvirt
> > (http://libvirt.org/remote.html):
> >
> > ipv4 - gluster+tcp://1.2.3.4:0/testvol/dir/a.img
> > ipv6 - gluster+tcp://[1:2:3:4:5:6:7
On Fri, Sep 07, 2012 at 05:11:33PM +0200, Paolo Bonzini wrote:
> This is a bug that has to be fixed anyway. There are provisions in
> aio.c, but they are broken apparently. Can you try this:
>
> diff --git a/aio.c b/aio.c
> index 0a9eb10..99b8b72 100644
> --- a/aio.c
> +++ b/aio.c
> @@ -119,7 +1
On 07/09/2012 17:06, Bharata B Rao wrote:
> qemu_gluster_aio_event_reader() is the node->io_read in qemu_aio_wait().
>
> qemu_aio_wait() calls node->io_read() which calls qemu_gluster_complete_aio().
> Before we return to qemu_aio_wait(), many other things happen:
>
> bdrv_close() gets
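For context, a self-contained sketch of the completion path described above, with invented names rather than the real block/gluster.c code: the I/O side queues a finished request by writing its pointer into a pipe, and the read side (the role node->io_read / qemu_gluster_aio_event_reader() plays in the quoted mail) drains it and runs the completion callback.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

typedef struct Request Request;
struct Request {
    ssize_t ret;            /* result of the I/O operation */
    void (*cb)(Request *);  /* completion callback */
};

static int fds[2];          /* fds[0] read end, fds[1] write end */

/* Called when an I/O request finishes: hand it to the event loop. */
static void finish_request(Request *req, ssize_t ret)
{
    req->ret = ret;
    if (write(fds[1], &req, sizeof(req)) != sizeof(req)) {
        perror("write");
    }
}

/* The "event reader": drain one completion from the pipe and call back. */
static void completion_reader(void)
{
    Request *req;
    if (read(fds[0], &req, sizeof(req)) == sizeof(req)) {
        req->cb(req);       /* may free/release the request */
    }
}

static void my_cb(Request *req)
{
    printf("request completed, ret=%zd\n", req->ret);
    free(req);
}

int main(void)
{
    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }
    Request *req = calloc(1, sizeof(*req));
    req->cb = my_cb;
    finish_request(req, 4096);   /* pretend 4096 bytes completed */
    completion_reader();
    return 0;
}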
On Thu, Sep 06, 2012 at 12:29:30PM +0200, Kevin Wolf wrote:
> On 06.09.2012 12:18, Paolo Bonzini wrote:
> > On 06/09/2012 12:07, Kevin Wolf wrote:
> >>> The AIOCB is already invalid at the time the callback is entered, so we
> >>> could release it before the call. However, not all implement
On 07/09/2012 12:03, Daniel P. Berrange wrote:
>> > I think doing it the other way round would be more logical:
>> >
>> > gluster+unix:///path/to/unix/sock?image=volname/image
>> >
>> > This way you have the socket first, which you also must open first.
>> > Having it as a parameter withou
On Fri, Sep 07, 2012 at 12:00:50PM +0200, Kevin Wolf wrote:
> On 06.09.2012 17:47, Daniel P. Berrange wrote:
> > On Thu, Sep 06, 2012 at 09:10:04PM +0530, Bharata B Rao wrote:
> >> On Thu, Sep 06, 2012 at 11:29:36AM +0300, Avi Kivity wrote:
> >>> On 08/14/2012 12:58 PM, Kevin Wolf wrote:
>
>
On 06.09.2012 17:47, Daniel P. Berrange wrote:
> On Thu, Sep 06, 2012 at 09:10:04PM +0530, Bharata B Rao wrote:
>> On Thu, Sep 06, 2012 at 11:29:36AM +0300, Avi Kivity wrote:
>>> On 08/14/2012 12:58 PM, Kevin Wolf wrote:
> While we are at this, let me bring out another issue. Gluster sup
On 07.09.2012 11:36, Paolo Bonzini wrote:
> On 07/09/2012 05:24, Bharata B Rao wrote:
>>> gluster:///volname/image?transport=unix&sockpath=/path/to/unix/sock
>> ^why 3 /// here? volname is not a path, but image is.
>
> Because the host is the local computer, i.e. empty.
>
>>
On 07/09/2012 05:24, Bharata B Rao wrote:
>> gluster:///volname/image?transport=unix&sockpath=/path/to/unix/sock
> ^why 3 /// here? volname is not a path, but image is.
Because the host is the local computer, i.e. empty.
> gluster://server[:port]/volname/path/to/image[?transpo
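A tiny sketch (hypothetical code, not QEMU's URI parser) of why gluster:///volname/image carries three slashes: "://" ends the scheme, the authority (host[:port]) is whatever sits between "//" and the next "/", and with a unix transport the host is the local machine, so that component is simply empty.

#include <stdio.h>
#include <string.h>

static void print_authority(const char *uri)
{
    const char *p = strstr(uri, "://");
    if (!p) {
        printf("%s: no scheme\n", uri);
        return;
    }
    p += 3;                               /* skip "://" */
    const char *end = strchr(p, '/');     /* authority ends at the next '/' */
    int len = end ? (int)(end - p) : (int)strlen(p);
    printf("%s -> authority: \"%.*s\"\n", uri, len, p);
}

int main(void)
{
    print_authority("gluster://server:24007/volname/image");    /* "server:24007" */
    print_authority("gluster:///volname/image?transport=unix"); /* "" (empty)     */
    return 0;
}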
On Fri, Sep 07, 2012 at 08:54:02AM +0530, Bharata B Rao wrote:
> On Thu, Sep 06, 2012 at 04:47:17PM +0100, Daniel P. Berrange wrote:
> > IMHO this is all gross. URIs already have a well defined way to provide
> > multiple parameters, dealing with escaping of special characters. ie query
> > paramet
On Thu, Sep 06, 2012 at 09:35:04AM +0200, Paolo Bonzini wrote:
> > +static int qemu_gluster_open(BlockDriverState *bs, const char *filename,
> > +int bdrv_flags)
> > +{
> > +    BDRVGlusterState *s = bs->opaque;
> > +    int open_flags = 0;
> > +    int ret = 0;
> > +    GlusterURI *uri = g_mal
On Thu, Sep 06, 2012 at 04:47:17PM +0100, Daniel P. Berrange wrote:
> IMHO this is all gross. URIs already have a well defined way to provide
> multiple parameters, dealing with escaping of special characters. ie query
> parameters. The whole benefit of using URI syntax is to let apps process
> the
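To make the point concrete, a throwaway sketch (not libvirt's or QEMU's parser) of splitting such a query string into key/value pairs; percent-decoding of the values is omitted.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void print_query_params(const char *query)
{
    char *copy = strdup(query);           /* strtok modifies its input */
    for (char *pair = strtok(copy, "&"); pair; pair = strtok(NULL, "&")) {
        char *eq = strchr(pair, '=');
        if (eq) {
            *eq = '\0';
            printf("  %s = %s\n", pair, eq + 1);
        } else {
            printf("  %s = (no value)\n", pair);
        }
    }
    free(copy);
}

int main(void)
{
    print_query_params("transport=unix&sockpath=/path/to/unix/sock");
    return 0;
}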
On 09/06/2012 06:47 PM, Daniel P. Berrange wrote:
>
> gluster:///volname/image?transport=unix&sockpath=/path/to/unix/sock
Like.
--
error compiling committee.c: too many arguments to function
On Fri, Sep 7, 2012 at 1:47 AM, Daniel P. Berrange wrote:
> On Thu, Sep 06, 2012 at 09:10:04PM +0530, Bharata B Rao wrote:
> > On Thu, Sep 06, 2012 at 11:29:36AM +0300, Avi Kivity wrote:
> > > On 08/14/2012 12:58 PM, Kevin Wolf wrote:
> > > >
> > > >> While we are at this, let me bring out another
On Thu, Sep 06, 2012 at 09:10:04PM +0530, Bharata B Rao wrote:
> On Thu, Sep 06, 2012 at 11:29:36AM +0300, Avi Kivity wrote:
> > On 08/14/2012 12:58 PM, Kevin Wolf wrote:
> > >
> > >> While we are at this, let me bring out another issue. Gluster supports 3
> > >> transport types:
> > >>
> > >> -
On 06/09/2012 17:40, Bharata B Rao wrote:
> > > > I don't think we can fit 'unix' within the standard URI scheme (RFC
> > > > 3986)
> > > > easily, but I am planning to specify the 'unix' transport as below:
> > > >
> > > > gluster://[/path/to/unix/domain/socket]/volname/image?transport=unix
On Thu, Sep 06, 2012 at 11:29:36AM +0300, Avi Kivity wrote:
> On 08/14/2012 12:58 PM, Kevin Wolf wrote:
> >
> >> While we are at this, let me bring out another issue. Gluster supports 3
> >> transport types:
> >>
> >> - socket in which case the server will be hostname, ipv4 or ipv6 address.
> >>
On 06/09/2012 12:29, Kevin Wolf wrote:
>> That's quite difficult. Completion of an I/O operation can trigger
>> another I/O operation on another block device, and so on until we go
>> back to the first device (think of a hypothetical RAID-5 device).
>
> You always have a tree of BDSes, and c
On 06.09.2012 12:18, Paolo Bonzini wrote:
> On 06/09/2012 12:07, Kevin Wolf wrote:
>>> The AIOCB is already invalid at the time the callback is entered, so we
>>> could release it before the call. However, not all implementations of
>>> AIO are ready for that and I'm not really in the mood f
On 06/09/2012 12:07, Kevin Wolf wrote:
>> The AIOCB is already invalid at the time the callback is entered, so we
>> could release it before the call. However, not all implementations of
>> AIO are ready for that and I'm not really in the mood for large scale
>> refactoring...
>
> But the way
On 06.09.2012 11:38, Paolo Bonzini wrote:
> On 06/09/2012 11:06, Kevin Wolf wrote:
>> If it works, I think this change would be preferable to using a "magic"
>> BH in every driver.
>> The way it works in posix-aio-compat is that the request is first
>> removed from the list and then the
On 06/09/2012 11:06, Kevin Wolf wrote:
>> > If it works, I think this change would be preferable to using a "magic"
>> > BH in every driver.
> The way it works in posix-aio-compat is that the request is first
> removed from the list and then the callback is called. This way
> posix_aio_flush(
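A minimal standalone sketch of the ordering described here, with invented names rather than posix-aio-compat itself: the request is unlinked from the in-flight list before its callback runs, so a flush-style check that walks the list no longer counts a request whose callback is currently executing.

#include <stdio.h>
#include <stdlib.h>

typedef struct Req Req;
struct Req {
    Req *next;
    void (*cb)(Req *);
};

static Req *in_flight;                /* head of the in-flight list */

static int requests_pending(void)     /* the posix_aio_flush()-like check */
{
    return in_flight != NULL;
}

static void complete_one(void)
{
    Req *req = in_flight;
    if (!req) {
        return;
    }
    in_flight = req->next;            /* 1. remove from the list ...       */
    req->cb(req);                     /* 2. ... only then run the callback */
    free(req);
}

static void my_cb(Req *req)
{
    (void)req;
    printf("callback sees pending=%d\n", requests_pending());  /* prints 0 */
}

int main(void)
{
    Req *req = calloc(1, sizeof(*req));
    req->cb = my_cb;
    in_flight = req;
    complete_one();
    return 0;
}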
On 06.09.2012 09:23, Paolo Bonzini wrote:
> On 05/09/2012 11:57, Bharata B Rao wrote:
>> What could be the issue here? In general, how do I ensure that my
>> aio calls get completed correctly in such scenarios where bdrv_read etc
>> are called from coroutine context rather than from m
On 08/14/2012 12:58 PM, Kevin Wolf wrote:
>
>> While we are at this, let me bring out another issue. Gluster supports 3
>> transport types:
>>
>> - socket in which case the server will be hostname, ipv4 or ipv6 address.
>> - rdma in which case server will be interpreted similar to socket.
>> - un
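For the three transports listed just above, a hypothetical sketch of how they might be represented once parsed out of the URI or the '+' suffix of the scheme (names invented for illustration; socket/tcp as the default when nothing is specified):

#include <stdio.h>
#include <string.h>

typedef enum {
    GLUSTER_TRANSPORT_SOCKET,   /* hostname, IPv4 or IPv6 address */
    GLUSTER_TRANSPORT_RDMA,     /* server interpreted like socket */
    GLUSTER_TRANSPORT_UNIX,     /* path to a unix domain socket   */
    GLUSTER_TRANSPORT_INVALID
} GlusterTransport;

static GlusterTransport parse_transport(const char *name)
{
    if (!name || !strcmp(name, "tcp") || !strcmp(name, "socket")) {
        return GLUSTER_TRANSPORT_SOCKET;
    } else if (!strcmp(name, "rdma")) {
        return GLUSTER_TRANSPORT_RDMA;
    } else if (!strcmp(name, "unix")) {
        return GLUSTER_TRANSPORT_UNIX;
    }
    return GLUSTER_TRANSPORT_INVALID;
}

int main(void)
{
    printf("%d %d %d %d\n",
           parse_transport(NULL),      /* 0: default */
           parse_transport("rdma"),    /* 1 */
           parse_transport("unix"),    /* 2 */
           parse_transport("bogus"));  /* 3 */
    return 0;
}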
On 09/08/2012 15:02, Bharata B Rao wrote:
> block: Support GlusterFS as a QEMU block backend.
>
> From: Bharata B Rao
>
> This patch adds gluster as the new block backend in QEMU. This gives
> QEMU the ability to boot VM images from gluster volumes. It's already
> possible to boot from VM im
On 05/09/2012 11:57, Bharata B Rao wrote:
>> > What could be the issue here? In general, how do I ensure that my
>> > aio calls get completed correctly in such scenarios where bdrv_read etc
>> > are called from coroutine context rather than from main thread context ?
> One way to handle this
On Wed, Sep 05, 2012 at 12:01:58PM +0200, Kevin Wolf wrote:
> On 05.09.2012 09:41, Bharata B Rao wrote:
> > On Thu, Aug 09, 2012 at 06:32:16PM +0530, Bharata B Rao wrote:
> >> +static void qemu_gluster_complete_aio(GlusterAIOCB *acb)
> >> +{
> >> +    int ret;
> >> +
> >> +    if (acb->canceled)
On 05.09.2012 09:41, Bharata B Rao wrote:
> On Thu, Aug 09, 2012 at 06:32:16PM +0530, Bharata B Rao wrote:
>> +static void qemu_gluster_complete_aio(GlusterAIOCB *acb)
>> +{
>> +    int ret;
>> +
>> +    if (acb->canceled) {
>> +        qemu_aio_release(acb);
>> +        return;
>> +    }
>> +
>>
On Wed, Sep 05, 2012 at 01:11:06PM +0530, Bharata B Rao wrote:
> On Thu, Aug 09, 2012 at 06:32:16PM +0530, Bharata B Rao wrote:
> > +static void qemu_gluster_complete_aio(GlusterAIOCB *acb)
> > +{
> > +    int ret;
> > +
> > +    if (acb->canceled) {
> > +        qemu_aio_release(acb);
> > +
On Thu, Aug 09, 2012 at 06:32:16PM +0530, Bharata B Rao wrote:
> +static void qemu_gluster_complete_aio(GlusterAIOCB *acb)
> +{
> +    int ret;
> +
> +    if (acb->canceled) {
> +        qemu_aio_release(acb);
> +        return;
> +    }
> +
> +    if (acb->ret == acb->size) {
> +        ret = 0; /
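The fragment above cuts off at "ret = 0;". A hedged, simplified reconstruction of the usual pattern, mapping the completed byte count to a return value, using stand-in types rather than the actual block/gluster.c definitions:

#include <errno.h>
#include <stdio.h>
#include <sys/types.h>

struct aio_result {
    ssize_t ret;    /* bytes transferred, or negative errno */
    size_t  size;   /* bytes that were requested            */
};

static int aio_status(const struct aio_result *r)
{
    if (r->ret == (ssize_t)r->size) {
        return 0;               /* everything transferred: success         */
    } else if (r->ret < 0) {
        return (int)r->ret;     /* the operation itself failed             */
    }
    return -EIO;                /* short read/write: treat as an I/O error */
}

int main(void)
{
    struct aio_result ok      = { .ret = 4096, .size = 4096 };
    struct aio_result failed  = { .ret = -EINVAL, .size = 4096 };
    struct aio_result partial = { .ret = 512, .size = 4096 };
    printf("%d %d %d\n", aio_status(&ok), aio_status(&failed), aio_status(&partial));
    return 0;
}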
On Wed, Aug 15, 2012 at 10:00:27AM +0200, Kevin Wolf wrote:
> On 15.08.2012 07:21, Bharata B Rao wrote:
> > On Tue, Aug 14, 2012 at 10:29:26AM +0200, Kevin Wolf wrote:
> > +static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
> > +{
> > +    GlusterAIO
On Tue, Aug 14, 2012 at 10:29:26AM +0200, Kevin Wolf wrote:
> On 14.08.2012 06:38, Bharata B Rao wrote:
> > Kevin, Thanks for your review. I will address all of your comments
> > in the next iteration, but have a few questions/comments on the others...
> >
> > On Mon, Aug 13, 2012 at 02:50:29PM
On 15.08.2012 07:21, Bharata B Rao wrote:
> On Tue, Aug 14, 2012 at 10:29:26AM +0200, Kevin Wolf wrote:
> +static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
> +{
> +    GlusterAIOCB *acb = (GlusterAIOCB *)arg;
> +    BDRVGlusterState *s = acb->com
On Tue, Aug 14, 2012 at 10:29:26AM +0200, Kevin Wolf wrote:
> >>> +static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
> >>> +{
> >>> +    GlusterAIOCB *acb = (GlusterAIOCB *)arg;
> >>> +    BDRVGlusterState *s = acb->common.bs->opaque;
> >>> +
> >>> +    acb->ret =
On 14.08.2012 11:34, Bharata B Rao wrote:
> On Tue, Aug 14, 2012 at 10:29:26AM +0200, Kevin Wolf wrote:
>>>
>>> Yes, and that will result in port=0, which is the default. So this is to
>>> cater for cases like gluster://[1:2:3:4:5]:/volname/image
>>
>> So you consider this a valid URL? I would have e
On Tue, Aug 14, 2012 at 10:29:26AM +0200, Kevin Wolf wrote:
> >
> > Yes, and that will result in port=0, which is the default. So this is to
> > cater for cases like gluster://[1:2:3:4:5]:/volname/image
>
> So you consider this a valid URL? I would have expected it to be invalid.
> But let me see, there
On 14.08.2012 06:38, Bharata B Rao wrote:
> Kevin, Thanks for your review. I will address all of your comments
> in the next iteration, but have a few questions/comments on the others...
>
> On Mon, Aug 13, 2012 at 02:50:29PM +0200, Kevin Wolf wrote:
>>> +static int parse_server(GlusterURI *uri,
Kevin, Thanks for your review. I will address all of your comments
in the next iteration, but have a few questions/comments on the others...
On Mon, Aug 13, 2012 at 02:50:29PM +0200, Kevin Wolf wrote:
> > +static int parse_server(GlusterURI *uri, char *server)
> > +{
> > +    int ret = -EINVAL;
>
On 09.08.2012 15:02, Bharata B Rao wrote:
> block: Support GlusterFS as a QEMU block backend.
>
> From: Bharata B Rao
>
> This patch adds gluster as the new block backend in QEMU. This gives
> QEMU the ability to boot VM images from gluster volumes. It's already
> possible to boot from VM image
block: Support GlusterFS as a QEMU block backend.
From: Bharata B Rao
This patch adds gluster as the new block backend in QEMU. This gives
QEMU the ability to boot VM images from gluster volumes. It's already
possible to boot from VM images on gluster volumes using FUSE mount, but
this patchset p