Thank you Mark!
Let me see whether I have understood your idea correctly.
I would have to write a plugin like LuceneQParserPlugin which uses not the
SolrQueryParser but a MySolrQueryParser, which is based on SolrQueryParser
and uses AnalyzingQueryParser methods.
I think this is too difficult for me because I am
Hi!
When I ask solr for facets, with the parameter "facet.sort=index", it
gives me the facets sorted alphabetically, but case and accent sensitive.
I found no way to have the facets returned with the original case and
accents, but sorted alphabetically, insensitive to case and accents.
On Fri, Jun 26, 2009 at 4:06 PM, Sébastien Lamy wrote:
> Hi!
>
> When I ask solr for facets, with the parameter "facet.sort=index", it gives
> me the facets sorted alphabetically, but case and accent sensitive.
>
> I found no way to have the facets returned with the original case and
> accents, a
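Shalin's reply is cut off above. For context, the schema-side approach usually suggested in this situation is a copyField into a case- and accent-folded field used for sorting/faceting. A minimal sketch, assuming a hypothetical source field named `label` (ASCIIFoldingFilterFactory is the Solr 1.4 name; on Solr 1.3 the rough equivalent is ISOLatin1AccentFilterFactory):

```xml
<!-- schema.xml sketch: field names here are hypothetical -->
<fieldType name="string_folded" class="solr.TextField" sortMissingLast="true">
  <analyzer>
    <!-- keep the whole value as one token so facet values stay intact -->
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- strips accents: é -> e, ü -> u, ... -->
    <filter class="solr.ASCIIFoldingFilterFactory"/>
  </analyzer>
</fieldType>

<field name="label_folded" type="string_folded" indexed="true" stored="false"/>
<copyField source="label" dest="label_folded"/>
```

As the rest of the thread points out, faceting on `label_folded` then returns the folded values, so this alone does not preserve the original case and accents in the response.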
We're looking to build a search solution that can contain as many as 10 million
different items and I was wondering if Solr could handle that kind of data
amount or not?
Has anybody done any testing or published any kind of results for a
Solr-installation
working on huge amounts of data like this?
On Fri, Jun 26, 2009 at 1:27 PM, Daniel
Löfquist wrote:
> We're looking to build a search solution that can contain as many as 10
> million
> different items and I was wondering if Solr could handle that kind of data
> amount or not?
10M documents is quite a common load. We're currently running
Shalin Shekhar Mangar wrote:
On Fri, Jun 26, 2009 at 4:06 PM, Sébastien Lamy wrote:
Hi!
When I ask solr for facets, with the parameter "facet.sort=index", it gives
me the facets sorted alphabetically, but case and accent sensitive.
I found no way to have the facets returned with the or
On Fri, Jun 26, 2009 at 6:02 PM, Sébastien Lamy wrote:
>
>>
> If I use a copyField to store into a string type, and facet on that, my
> problem remains:
> The facets are sorted case and accent sensitive. And I want an
> *insensitive* sort.
> If I use a copyField to store into a type with no accen
Shalin Shekhar Mangar wrote:
On Fri, Jun 26, 2009 at 6:02 PM, Sébastien Lamy wrote:
If I use a copyField to store into a string type, and facet on that, my
problem remains:
The facets are sorted case and accent sensitive. And I want an
*insensitive* sort.
If I use a copyField to store in
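Since folding at index time replaces the stored facet values, the other option is to keep faceting on the original field (preserving case and accents in the response) and do the case- and accent-insensitive sort on the client. A sketch in plain Java using the JDK's Collator at PRIMARY strength, which compares base letters only; the facet labels below are made up:

```java
import java.text.Collator;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

public class FacetLabelSort {
    public static void main(String[] args) {
        // Hypothetical facet labels, as Solr would return them
        // from the original field: case and accents intact.
        List<String> labels =
                new ArrayList<>(Arrays.asList("Zèbre", "apple", "Éclair", "banana"));

        // PRIMARY strength ignores both case and accent differences,
        // so "Éclair" sorts as "eclair".
        Collator collator = Collator.getInstance(Locale.FRENCH);
        collator.setStrength(Collator.PRIMARY);

        labels.sort(collator);
        System.out.println(labels); // [apple, banana, Éclair, Zèbre]
    }
}
```

The displayed values stay untouched; only the ordering changes. Pick the Locale to match the language of your data, since collation rules differ between locales.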
Hi,
I need to upgrade from solr 1.3 to solr 1.4. I was wondering if there
is a particular revision of 1.4 that I should use that is considered
very stable for a production environment?
David Baker wrote:
> Hi,
>
> I need to upgrade from solr 1.3 to solr 1.4. I was wondering if there
> is a particular revision of 1.4 that I should use that is considered
> very stable for a production environment?
Well, if it's not pronounced stable and offered on the download page, I don't
think you can
Solr in general is fairly stable in trunk. That isn't to say that a
critical error can't get through, because that does happen, but the
test suite is pretty comprehensive. With Solr 1.4 getting closer and
closer, I think you'll see the pace of change dropping off.
I think it's one of tho
I would like to submit a JIRA issue for this. Can anyone help me on where to
go?
-Yao
Otis Gospodnetic wrote:
>
>
> Brian,
>
> Opening a JIRA issue if it doesn't already exist is the best way. If you
> can provide a patch, even better!
>
> Otis
> --
> Sematext -- http://sematext.com/ -- Luc
Hello Yao,
A contribution would be great. Here is information about how to contribute:
http://wiki.apache.org/solr/HowToContribute
Thanks,
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: Yao Ge
> To: solr-user@lucene.apache.org
> Sent:
On Fri, Jun 26, 2009 at 8:50 PM, Yao Ge wrote:
>
> I would like to submit a JIRA issue for this. Can anyone help me on where to
> go?
>
An issue has been opened already. You may want to add a vote to the
following issue.
https://issues.apache.org/jira/browse/SOLR-1223
--
Regards,
Shalin Shekha
Hi Daniel,
How much Solr can handle really depends on the hardware you run it on, the type
of document you index in it, and the query rate and type.
10M doesn't sound like a large number even for an average server today (e.g. 4
GB of RAM, 1-2 cores), web-page sized documents, and a query rate
Netflix is running a nightly build from May in production. We did our
normal QA on it, then ran it on one of our five servers for two weeks.
No problems. It is handling about 10% more traffic with 10% less CPU.
We deployed 1.4 to all our servers yesterday.
wunder
On 6/26/09 7:58 AM, "Julian Davc
On Fri, Jun 26, 2009 at 9:11 PM, Walter Underwood wrote:
> Netflix is running a nightly build from May in production. We did our
> normal QA on it, then ran it on one of our five servers for two weeks.
> No problems. It is handling about 10% more traffic with 10% less CPU.
>
Wow, that is good news
We are using the script replication. I have no interest in spending time
configuring and QA'ing a different method when the scripts work fine.
We are running the nightly from 2009-05-11.
wunder
On 6/26/09 8:51 AM, "Shalin Shekhar Mangar" wrote:
> On Fri, Jun 26, 2009 at 9:11 PM, Walter Underwo
We are using a trunk build from approximately the same time with little to
no issues including the new replication.
--
Jeff Newburn
Software Engineer, Zappos.com
jnewb...@zappos.com - 702-943-7562
> From: Shalin Shekhar Mangar
> Reply-To:
> Date: Fri, 26 Jun 2009 21:21:44 +0530
> To:
> Subjec
I am trying to index a solr server from a nightly build. I get the
following error in my catalina.out:
26-Jun-2009 5:52:06 PM
org.apache.solr.update.processor.LogUpdateProcessor
finish
Hi.
I currently have an index which is 16GB per machine (8 machines = 128GB)
(data is stored externally, not in index) and is growing like crazy (we are
indexing blogs which is crazy by nature) and have only allocated 2GB per
machine to the SOLR app since we are running some other stuff there in
p
Total # of bytes for the input data is a more useful number than # of
documents.
400 million documents was our peak at my last job. They were maybe 300-500
bytes of text, for 1k of disk space per document. The index was thus 400
gigabytes. The problems were:
1) system administration: the logist
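The arithmetic behind the 400-gigabyte figure above, as a quick sanity check (the class name is mine; the numbers are the ones quoted in the post):

```java
// Back-of-envelope index sizing: 400 million documents at roughly
// 1 KB of index disk space each, as described above.
public class IndexSizing {
    public static void main(String[] args) {
        long docs = 400_000_000L;   // document count from the post
        long bytesPerDoc = 1_000L;  // ~1 KB of index space per document
        long totalBytes = docs * bytesPerDoc;
        System.out.println(totalBytes / 1_000_000_000L + " GB"); // 400 GB
    }
}
```

Total input bytes, not document count, is the number that drives this estimate, which is the poster's point: 10M small documents can be a far lighter load than 10M large ones.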