From: Walter Underwood <[EMAIL PROTECTED]>
To: solr-user@lucene.apache.org
Sent: Thursday, October 30, 2008 11:52:47 AM
Subject: Re: replication handler - compression
About a factor of 2 on a small, optimized index. Gzipping took 20 seconds,
so it isn't free.
: Yeah. I'm just not sure how much benefit in terms of data transfer this
: will save. Has anyone tested this to see if this is even worth it?
one man's trash is another man's treasure ... if you're replicating
snapshots very frequently within a single datacenter speed is critical
and bandwidth
From: Erik Hatcher <[EMAIL PROTECTED]>
To: solr-user@lucene.apache.org
Sent: Thursday, October 30, 2008 9:54:28 AM
Subject: Re: replication handler - compression
+1 - the GzipServletFilter is the way to go.
Regarding request handlers reading HTTP headers, yeah,... this will
improve, for sure.
Erik
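For anyone who hasn't seen one, a stripped-down sketch of what such a filter does
(the class name is made up and this is illustrative only; a production filter also
has to deal with getWriter(), Content-Length and buffering):

import java.io.IOException;
import java.util.zip.GZIPOutputStream;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletOutputStream;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

// Sketch of a gzip servlet filter: if the client advertised gzip support,
// the response body is written through a GZIPOutputStream; otherwise the
// request passes through untouched.
public class SketchGzipFilter implements Filter {
  public void init(FilterConfig cfg) {}
  public void destroy() {}

  public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
      throws IOException, ServletException {
    HttpServletRequest hreq = (HttpServletRequest) req;
    HttpServletResponse hres = (HttpServletResponse) res;
    String accept = hreq.getHeader("Accept-Encoding");
    if (accept == null || accept.indexOf("gzip") == -1) {
      chain.doFilter(req, res);              // client did not ask for gzip
      return;
    }
    hres.setHeader("Content-Encoding", "gzip");
    final GZIPOutputStream gz = new GZIPOutputStream(hres.getOutputStream());
    final ServletOutputStream sos = new ServletOutputStream() {
      public void write(int b) throws IOException { gz.write(b); }
      public void write(byte[] b, int off, int len) throws IOException {
        gz.write(b, off, len);
      }
    };
    chain.doFilter(req, new HttpServletResponseWrapper(hres) {
      public ServletOutputStream getOutputStream() { return sos; }
    });
    gz.finish();                             // write the gzip trailer
  }
}

Mapped in web.xml onto the replication URL, this lets any client opt in simply by
sending an Accept-Encoding: gzip request header, which is the point being made here.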
: You are partially right. Instead of the HTTP header, we use a request
: parameter. (RequestHandlers cannot read HTTP headers). If the param is
hmmm, i'm with walter: we shouldn't invent new mechanisms for
clients to request compression over HTTP from servers.
replication is both special enough
Hoss,
You are partially right. Instead of the HTTP header, we use a request
parameter. (RequestHandlers cannot read HTTP headers). If the param is
present it wraps the response in a gzip output stream. It is configured
in the slave because every slave may not want compression. Slaves
which are near
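In java.util.zip terms the idea is roughly the following (a sketch with made-up
names, not the actual patch; the parameter name and helper are hypothetical):

import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.GZIPOutputStream;

// Rough sketch of the request-parameter approach described above: if the
// slave asked for compression, the handler wraps the raw response stream
// in a GZIPOutputStream before streaming the index files into it.
public class CompressedReplicationResponse {
  public static OutputStream maybeCompress(OutputStream raw, String compressParam)
      throws IOException {
    if ("true".equals(compressParam)) {
      return new GZIPOutputStream(raw);   // caller must finish()/close() this
    }
    return raw;                           // no compression requested
  }
}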
My understanding of Noble's comment (and i could be wrong, i'm reading
between the lines) is that if you specify the new setting he's suggesting
when initializing the replication handler on the slave, then the slave
should start using an "Accept-Encoding: gzip" header when querying the
master,
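If that reading is right, the slave side is plain HTTP content negotiation; a
minimal sketch (hypothetical helper, no error handling):

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.zip.GZIPInputStream;

// The slave advertises gzip support via Accept-Encoding and transparently
// unwraps the body only if the master actually compressed the response.
public class GzipAwareFetch {
  public static InputStream open(String fileUrl) throws Exception {
    HttpURLConnection conn =
        (HttpURLConnection) new URL(fileUrl).openConnection();
    conn.setRequestProperty("Accept-Encoding", "gzip");
    InputStream body = conn.getInputStream();
    if ("gzip".equalsIgnoreCase(conn.getContentEncoding())) {
      body = new GZIPInputStream(body);   // master compressed the response
    }
    return body;
  }
}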
You propose to do compressed transfers over HTTP ignoring the standard
support for compressed transfers in HTTP. Programming that with a
library doesn't make it "standard".
In Ultraseek, we implemented index synchronization over HTTP with
compression. It wasn't that hard.
I doubt that compression
we are not doing anything non-standard.
GZipInputStream/GZipOutputStream are standards. But asking users to
set up an extra Apache is not fair if we can manage it with say 5 lines
of code.
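For what it's worth, the handful of lines being alluded to would look something
like this (sketch only, names made up):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// java.util.zip covers both ends of the pipe: wrap the stream for
// compression on the master and for decompression on the slave.
public class GzipStreams {
  public static OutputStream compress(OutputStream out) throws IOException {
    return new GZIPOutputStream(out);   // master: wrap before writing files
  }
  public static InputStream decompress(InputStream in) throws IOException {
    return new GZIPInputStream(in);     // slave: unwrap before reading files
  }
}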
Why invent something when compression is standard in HTTP? --wunder
On 10/29/08 4:35 AM, "Noble Paul നോബിള് नोब्ळ्" <[EMAIL PROTECTED]>
wrote:
> open a JIRA issue. we will use a gzip on both ends of the pipe. On the slave
> side you can say true as an extra option to compress and
> send data fr
>> This is a fine topic for a "best practices for Windows" wiki page.
>>
>> The 'scp' program is what you want. It has an option to compress on the fly
>> without saving anything to disk. 'Rcopy' in particular has features to only
>> copy what
From: Noble Paul നോബിള് नोब्ळ् <[EMAIL PROTECTED]>
Sent: 29 October 2008 03:29
To: solr-user@lucene.apache.org
Subject: Re: replication handler - compression
The new replication feature does not use any unix commands, it is
pure java. On-the-fly compression is hard but possible.
I wish to repeat the question. Did you optimize the index? Because a
10:1 compression is not usual
' program also has the compression feature.
Lance
> It is useful only if your bandwidth is very low.
> Otherwise the cost of copying/compressing/decompressing can take up
> more time than we save.
I mean compressing and transferring. If the optimized index itself has
a very high compression ratio then it is worth exploring the option
of compressing
Are you sure you optimized the index?
It is useful only if your bandwidth is very low.
Otherwise the cost of copying/compressing/decompressing can take up
more time than we save.
On Tue, Oct 28, 2008 at 2:49 AM, Simon Collins
<[EMAIL PROTECTED]> wrote:
> Is there an option on the replication handler