Xms is 1.5GB, Xmx is 1.5GB and Xns is 128MB. Physical memory is 4GB.
We are running JRockit (Java 1.5.0_15) on WebLogic 10.
./java -version
java version "1.5.0_15"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_15-b04)
BEA JRockit(R) (build R27.6.0-50_o-100423-1.5.0_15-2008062
Jason,
Thanks for the reply.
In general, I would like to use Katta to handle the management overhead, such
as single point of failure, as well as the distributed index deployment. At
the same time, I still want to use the nice search features provided by Solr.
Basically, I would like to try both on th
Just wondering, how much memory are you giving your JVM ?
On Thu, Sep 10, 2009 at 7:46 AM, Francis Yakin wrote:
>
> I am having an OutOfMemory error on our slave servers; I would like to know if
> someone has had the same issue and has a solution for it.
>
> SEVERE: Error during auto-warming of
>
the dataDir is a Solr1.4 feature
On Thu, Sep 10, 2009 at 1:57 AM, Paul Rosen wrote:
> Hi All,
>
> I'm trying to set up solr 1.3 to use multicore but I'm getting some puzzling
> results. My solr.xml file is:
>
>
>
>
> dataDir="solr/resources/data/" />
> />
> dataDir="solr/reindex_resourc
I just noticed this and it reminded me of an issue I've had with
collapsed faceting with an older version of the patch in Solr 1.3.
Would it be possible, if we can get the terms for all the collapsed
documents on a field, to then facet each collapsed document on the
unique terms it has col
thanks avlesh,
solrQuery.set("f.myField.facet.limit",10) ...
this is how I ended up doing it, and it works perfectly for me. It just
didn't look as neat as the other Solr API calls :), as my complete query
construction logic i
regards,
aakash
2009/9/9 Avlesh Singh
> >
> > When constructing quer
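For reference, the per-field override used in the `solrQuery.set` call above follows Solr's `f.<fieldName>.<param>` naming convention. A minimal sketch (plain Java; the field name is hypothetical) of building such a parameter key, which you would then pass to SolrJ's `SolrQuery.set()`:

```java
// Solr lets a global facet parameter (e.g. facet.limit) be overridden
// per field by prefixing it with f.<fieldName>. This helper only builds
// the parameter name; with SolrJ you would pass it to SolrQuery.set().
public class PerFieldParam {
    static String perField(String field, String param) {
        return "f." + field + "." + param;
    }

    public static void main(String[] args) {
        // e.g. solrQuery.set(perField("myField", "facet.limit"), 10);
        System.out.println(perField("myField", "facet.limit")); // f.myField.facet.limit
    }
}
```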
try this
add two xpaths in your forEach
forEach="/document/category/item | /document/category/name"
and add a field as follows
Please try it out and let me know.
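A rough sketch of how such an entity could look in a DataImportHandler data-config.xml (the entity name, URL, and field columns here are illustrative assumptions, not the poster's actual config):

```xml
<entity name="items"
        processor="XPathEntityProcessor"
        url="http://example.com/feed.xml"
        forEach="/document/category/item | /document/category/name">
  <!-- commonField="true" carries this value onto the sibling item rows -->
  <field column="category" xpath="/document/category/name" commonField="true"/>
  <field column="title" xpath="/document/category/item/title"/>
</entity>
```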
On Thu, Sep 10, 2009 at 7:30 AM, venn hardy wrote:
>
> Hello,
>
>
>
> I am using SOLR 1.4 (from nightly build) and its URLDataSour
>
> The patch which will be committed soon will add this functionality.
Where can I follow the progress of this patch?
On Mon, Sep 7, 2009 at 3:38 PM, Uri Boness wrote:
>
>> Great. Nice site and very similar to my requirements.
>>
> thanks.
>
> So, right now, you get all field values by defa
Hello,
I am using SOLR 1.4 (from nightly build) and its URLDataSource in conjunction
with the XPathEntityProcessor. I have successfully imported XML content, but I
think I may have found a limitation when it comes to the commonField attribute
in the DataImportHandler.
Before writing my
Hi ,
Currently we are using Solr 1.3 and we have the following requirement.
As we need to process very high volumes of documents (of the order of 400 GB
per day), we are planning to separate indexer(s) and searcher(s), so that there
won't be a performance hit.
Our idea is to have a set of s
Hi ,
I am building Solr from source. During building it from source I am getting
following error.
generate-maven-artifacts:
[mkdir] Created dir: c:\Downloads\solr_trunk\build\maven
[mkdir] Created dir: c:\Downloads\solr_trunk\dist\maven
[copy] Copying 1 file to
c:\Downloads\solr_trunk
Hi Joe,
I think you come across the issue of:
https://issues.apache.org/jira/browse/SOLR-1377
Is your nightly latest? If not, try the latest one.
Koji
Joe Calderon wrote:
Hello *, I'm not sure what I'm doing wrong. I have this field defined in
schema.xml; using admin/analysis.jsp it's working as
We're using the StandardAnalyzer but I'm fairly certain that's not the
issue.
In fact, there doesn't appear to be any issue with Lucene or Solr. There
are many instances of data in which users have removed the whitespace so
they have a high frequency which means they bubble to the top of the so
Hi Zhong,
It's a very new patch. I'll update the issue as we start the
wiki page.
I've been working on indexing in Hadoop in conjunction with
Katta, which is different (it sounds) from your use case, where
you have prebuilt indexes you simply want to distribute using
Katta?
-J
On Wed, Sep 9, 20
I have done a search on the word "blue" in our index. The debugQuery shows
some extremely strange methods of scoring. Somehow product 1 gets a higher
score with only 1 match on the word blue when product 2 gets a lower score
with the same field match AND an additional field match. Can someone pl
Yep, same thing in rsolr; just use the :shards param. It'll return
whatever Solr returns.
Matt
On Wed, Sep 9, 2009 at 4:17 PM, Paul Rosen wrote:
> Hi Erik,
>
> Yes, I've been doing that in my tests, but I also have the case of wanting
> to do a search over all the cores using the shards syntax
The Connection is not for parameters, merely the base URL to the Solr
server (or core, which is effectively a Solr "server").
As of solr-ruby 0.0.6, the shards parameter is supported for the
Solr::Request::Standard and Dismax request objects, so you'd simply
specify :shards=>"" for thos
Hi All,
I'm trying to set up solr 1.3 to use multicore but I'm getting some
puzzling results. My solr.xml file is:
dataDir="solr/resources/data/" />
dataDir="solr/exhibits/data/" />
dataDir="solr/reindex_resources/data/" />
When I start up solr, everything looks normal until I g
Hi Erik,
Yes, I've been doing that in my tests, but I also have the case of
wanting to do a search over all the cores using the shards syntax. I was
thinking that the following wouldn't work:
solr =
Solr::Connection.new('http://localhost:8983/solr/core0/select?shards=localhost:8983/solr/cor
Hello *, I'm not sure what I'm doing wrong. I have this field defined in
schema.xml; using admin/analysis.jsp it's working as expected,
but when I try to update via the CSV handler I get:
Error 500 org.apache.solr.analysis.PatternTokeni
With solr-ruby, simply put the core name in the URL of the
Solr::Connection...
solr = Solr::Connection.new('http://localhost:8983/solr/core_name')
Erik
On Sep 9, 2009, at 6:38 PM, Paul Rosen wrote:
Hi all,
I'd like to start experimenting with multicore in a ruby on rails app.
Hi,
It is really exciting to see this integration coming out.
May I ask how I need to make changes to be able to deploy Solr index on
katta servers?
Are there any tutorials?
thanks
zhong
Hi,
What is the best way to do pagination?
I searched around and only found some YUI utilities can do this. But
their examples don't have very close match to the pattern I have in
mind. I would like to have pretty plain display, something like the
search results from google.
Thanks.
Elaine
Our slave servers are having an issue with the following error after we upgraded
to Solr 1.3.
Any suggestions?
Thanks
Francis
INFO: [] webapp=/solr path=/select/
params={q=(type:artist+AND+alphaArtistSort:"forever+in+terror")} hits=1
status=0 QTime=1
SEVERE: java.lang.OutOfMemoryError: allocLar
> It works perfectly well as a query:
>
> http://localhost:8080/solrChunk/nutch/select/?q=url:http\:\/\/xcski\.com\/pharma\/&fq=category:pharma
>
> retrieved all the documents I wanted to delete.
>
I mean it is not a valid string that QueryParser can parse and return a Lucene
Query. Filter que
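Since deleteByQuery takes a single query string rather than separate parameters, one workaround is to fold the filter into the query itself. A minimal sketch (plain Java; the SolrJ call is left as a comment because it needs a live server):

```java
// deleteByQuery accepts one query string, so an "fq"-style filter has
// to be AND-ed into the main query instead of passed as a parameter.
public class DeleteQueryBuilder {
    static String withFilter(String q, String fq) {
        return "(" + q + ") AND (" + fq + ")";
    }

    public static void main(String[] args) {
        String q = "url:http\\:\\/\\/xcski\\.com\\/pharma\\/";
        String fq = "category:pharma";
        System.out.println(withFilter(q, fq));
        // solrServer.deleteByQuery(withFilter(q, fq));  // SolrJ, needs a server
    }
}
```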
Hi Matt,
What kinds of things were you hoping to find when looking for multicore
support in either solr-ruby or rsolr?
I have a couple of uses for it:
1) Search and merge the results from multiple indexes:
http://localhost:8983/solr/core0/select?shards=localhost:8983/solr/core0,localhost:898
On Wed, Sep 9, 2009 at 2:07 PM, AHMET ARSLAN wrote:
> --- On Wed, 9/9/09, Paul Tomblin wrote:
>> SEVERE: org.apache.lucene.queryParser.ParseException:
>> Cannot parse
>> 'url:http\:\/\/xcski\.com\/pharma\/&fq=category:pharma':
>
>> Should I rewrite that query to be "url:http:... AND
>> category
And what Analyzer are you using? I'm guessing that your words are
being split up during analysis, which is why you aren't seeing
whitespace. If you want to keep the whitespace, you will need to use
the String field type or possibly the Keyword Analyzer.
-Grant
On Sep 9, 2009, at 11:06 AM
--- On Wed, 9/9/09, Paul Tomblin wrote:
> From: Paul Tomblin
> Subject: Can't delete with a fq?
> To: solr-user@lucene.apache.org
> Date: Wednesday, September 9, 2009, 8:51 PM
> I'm trying to delete using SolrJ's
> "deleteByQuery", but it doesn't like
> it that I've added an "fq" parameter. He
I'm trying to delete using SolrJ's "deleteByQuery", but it doesn't like
it that I've added an "fq" parameter. Here's what I see in the logs:
Sep 9, 2009 1:46:13 PM org.apache.solr.common.SolrException log
SEVERE: org.apache.lucene.queryParser.ParseException: Cannot parse
'url:http\:\/\/xcski\.com\
--- On Wed, 9/9/09, John Eberly wrote:
> From: John Eberly
> Subject: Re: Highlighting... is highlighting too many fields
> To: solr-user@lucene.apache.org
> Date: Wednesday, September 9, 2009, 7:12 PM
> Thanks Ahmet,
>
> Your second suggestion about using the filter query
> works. Ideally I
Hey Paul,
In rsolr, you could use the #request method to set a request handler path:
solr.request('/core1/select', :q=>'*:*')
Alternatively, (rsolr and solr-ruby) you could probably handle this by
creating a new instance of a connection object per-core, and then have some
kind of factory to retur
Paul
I've been working with rsolr in a Rails app. In terms of querying from
multiple indices/cores within a multicore setup of Solr, I'm managing it all on
the Rails side, aggregating results from multiple cores. In terms of core
administration, I've been doing that all by hand as well.
Greg
Hi all,
I'd like to start experimenting with multicore in a ruby on rails app.
Right now, the app is using the solr-ruby-rails-0.0.5 to communicate
with solr and it doesn't appear to have direct support for multicore and
I didn't have any luck googling around for it.
We aren't necessarily we
Unfortunately you can't sort on a multi-valued field. In order to sort on a
field it must be indexed but not multi-valued.
Have a look at the FieldOptions wiki page for a good description of what
values to set for different use cases:
http://wiki.apache.org/solr/FieldOptionsByUseCase
-Jay
www.luc
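One common workaround is to keep the multi-valued field for searching and maintain a separate single-valued field for sorting. A schema.xml sketch under assumptions (field names hypothetical); note that a copyField into a single-valued target only works if the source holds at most one value per document, otherwise the sort value must be chosen at index time:

```xml
<!-- multi-valued field for search, single-valued companion for sorting -->
<field name="category" type="string" indexed="true" stored="true"
       multiValued="true"/>
<field name="category_sort" type="string" indexed="true" stored="false"
       multiValued="false"/>
<copyField source="category" dest="category_sort"/>
```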
Hi,
Thank you for the answer. Very helpful.
Regards,
Thibault.
On Wed, 09 Sep 2009 13:36:02 +0200
Uri Boness wrote:
> Hi,
>
> This is a bit tricky but I think you can achieve it as follows:
>
> 1. have a field called "location_facet" which holds the logical path of
> the location for each a
Thanks Ahmet,
Your second suggestion about using the filter query works. Ideally I would
like to be able to use the first solution with hl.requireFieldMatch=true,
but I cannot seem to get it to work no matter what I do.
I changed the query to just 'smith~' and hl.requireFieldMatch=true and I get
Hi all,
I'm about to develop a travel website and am wondering if Solr might be a fit
as the search solution.
Being quite the opposite of a db guru and new to Solr, it's hard for me to
judge if for my use-case a relational db should be used in favor of Solr(or
similar indexing server). Maybe
gwk, thanks a lot.
Elaine
On Wed, Sep 9, 2009 at 11:14 AM, gwk wrote:
> Hi Elaine,
>
> You can page your resultset with the rows and start parameters
> (http://wiki.apache.org/solr/CommonQueryParameters). So for example to get
> the first 100 results one would use the parameters rows=100&start=0
Hi Elaine,
You can page your resultset with the rows and start parameters
(http://wiki.apache.org/solr/CommonQueryParameters). So for example to
get the first 100 results one would use the parameters rows=100&start=0
and the second 100 results with rows=100&start=100 etc. etc.
Regards,
gwk
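The offset arithmetic above generalizes directly; a small sketch mapping 1-based page numbers to the start parameter:

```java
// Paging with Solr's rows/start parameters: page N (1-based) with a
// fixed page size maps to start = (N - 1) * rows.
public class Paging {
    static int startFor(int page, int rows) {
        return (page - 1) * rows;
    }

    public static void main(String[] args) {
        System.out.println(startFor(1, 100)); // 0   -> rows=100&start=0
        System.out.println(startFor(2, 100)); // 100 -> rows=100&start=100
    }
}
```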
gwk,
Sorry for the confusion. I am doing a simple phrase search among the
sentences, which could be in English or another language. Each doc has
only several id numbers and the sentence itself.
I did not know about paging. It sounds like it is what I need. How do I
achieve paging with Solr?
I also need to sto
It's set as Field.Store.YES, Field.Index.ANALYZED.
On Wed, Sep 9, 2009 at 8:15 AM, Grant Ingersoll wrote:
> How are you tokenizing/analyzing the field you are accessing?
>
>
> On Sep 9, 2009, at 8:49 AM, Todd Benge wrote:
>
> Hi Rekha,
>>
>> Here's the link to the TermsComponent info:
>>
>> h
> But apart from that everything works fine now (10,000 OR clauses takes 10
> seconds).
Not fast.
I would recommend denormalizing your data, putting everything into the Solr
index, and using Solr faceting
http://wiki.apache.org/solr/SolrFacetingOverview to get the relevant
persons (see my previous message).
Hi Elaine,
I think you need to provide us with some more information on what
exactly you are trying to achieve. From your question I also assumed you
wanted paging (getting the first 10 results, than the next 10 etc.) But
reading it again, "slice my docs into pieces" I now think you might've
>
> When constructing query, I create a lucene query and use query.toString to
> create SolrQuery.
>
Go this thread -
http://www.lucidimagination.com/search/document/f4d91628ced293bf/lucene_query_to_solr_query
> I am facing difficulty while creating a facet query for an individual field, as I
> could not
Please, take a look at
http://issues.apache.org/jira/browse/SOLR-1379
Alex.
On Wed, Sep 9, 2009 at 5:28 PM, Constantijn Visinescu wrote:
> Just wondering, is there an easy way to load the whole index into ram?
>
> On Wed, Sep 9, 2009 at 4:22 PM, Alex Baranov wrote:
>
> > There is a good artic
I want to get the 10K results, not just the top 10.
The fields are regular language sentences, they are not large.
Is clustering the technique for what I am doing?
On Wed, Sep 9, 2009 at 10:16 AM, Grant Ingersoll wrote:
> Do you need 10K results at a time or are you just getting the top 10 or so
Just wondering, is there an easy way to load the whole index into ram?
On Wed, Sep 9, 2009 at 4:22 PM, Alex Baranov wrote:
> There is a good article on how to scale the Lucene/Solr solution:
>
>
> http://www.lucidimagination.com/Community/Hear-from-the-Experts/Articles/Scaling-Lucene-and-Solr
>
>
There is a good article on how to scale the Lucene/Solr solution:
http://www.lucidimagination.com/Community/Hear-from-the-Experts/Articles/Scaling-Lucene-and-Solr
Also, if you have heavy load on the server (large amount of concurrent
requests) then I'd suggest to consider loading the index into R
Yes, that runtime error occurred due to an incorrect configuration. So, will such
runtime errors in one core affect all the cores? Is there any way to
avoid affecting the other cores, which are fine?
Shalin Shekhar Mangar wrote:
>
> On Mon, Sep 7, 2009 at 8:58 PM, djain101 wrote:
>
>>
>>
>> Plea
Do you need 10K results at a time or are you just getting the top 10
or so in a set of 10K? Also, are you retrieving really large stored
fields? If you add &debugQuery=true to your request, Solr will return
timing information for the various components.
On Sep 9, 2009, at 10:10 AM, Elain
How are you tokenizing/analyzing the field you are accessing?
On Sep 9, 2009, at 8:49 AM, Todd Benge wrote:
Hi Rekha,
Here's the link to the TermsComponent info:
http://wiki.apache.org/solr/TermsComponent
and another link Matt Weber did on autocompletion:
http://www.mattweber.org/2009/05/02
I had some trouble with maxBooleanClauses -- I have to set it twice the size
I would expect.
But apart from that everything works fine now (10,000 OR clauses takes 10
seconds).
Thank you Alexey.
On Wed, Sep 9, 2009 at 1:19 PM, Alexey Serba wrote:
> >> Is there a way to configure Solr to accept
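For reference, the clause limit lives in solrconfig.xml; a sketch (the value here is illustrative). Since the limit is a global Lucene setting shared across cores, the effective value can differ from the one in a single core's config, which may be related to needing to set it to twice the expected size:

```xml
<!-- solrconfig.xml: raise the cap on boolean clauses per query -->
<maxBooleanClauses>20000</maxBooleanClauses>
```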
Hi,
I have 20 million docs in Solr. If my query would return more than
10,000 results, the response time will be very, very long. How can I
resolve this problem? Can I slice my docs into pieces and let the
query operate within one piece at a time so the response time and
response data will be more man
Hi Rekha,
Here's the link to the TermsComponent info:
http://wiki.apache.org/solr/TermsComponent
and another link Matt Weber did on autocompletion:
http://www.mattweber.org/2009/05/02/solr-autosuggest-with-termscomponent-and-jquery/
We had to upgrade to the latest nightly to get the TermsCompo
Hi,
I tried setting the terms.raw param to true but didn't see any difference.
I did a little more digging and it appears the text in the TermEnum is
missing the whitespace inside Lucene so I'm not sure if it's because of the
way we're indexing the value or not.
One thing I noticed is we're index
Hi,
This is a bit tricky but I think you can achieve it as follows:
1. have a field called "location_facet" which holds the logical path of
the location for each address (e.g. /Eurpoe/England/London)
2. have another multi valued filed "location_search" that holds all the
locations - your "catc
On Wed, Sep 9, 2009 at 2:15 PM, dharhsana wrote:
>
> Hi Shalin Shekhar Mangar,
>
> I got some code from this site
>
> http://www.mattweber.org/2009/05/02/solr-autosuggest-with-termscomponent-and-jquery/
>
> When I used that code in my project, I found that there is
> no Termscompo
>> Is there a way to configure Solr to accept POST queries (instead of GET
>> only?).
>> Or: is there some other way to make Solr accept queries longer than 2,000
>> characters? (Up to 10,000 would be nice)
> Solr accepts POST queries by default. I switched to POST for exactly
> the same reason. I
Hi,
I have a requirement for autocompletion search; I am using Solr 1.4.
Could you please tell me how you worked with the terms component using Solr
1.4?
I couldn't find the terms component in the Solr 1.4 which I downloaded; is there
any other configuration that should be done?
Do you have code for autoco
> Is there a way to configure Solr to accept POST queries (instead of GET
> only?).
> Or: is there some other way to make Solr accept queries longer than 2,000
> characters? (Up to 10,000 would be nice)
Solr accepts POST queries by default. I switched to POST for exactly
the same reason. I use Solr
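For illustration, the body of such a POST is just the URL-encoded parameter string that would otherwise go after the `?` in the GET URL, sent with Content-Type application/x-www-form-urlencoded. A sketch of building that body with the JDK only (the actual HTTP send is omitted):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Build an application/x-www-form-urlencoded body for POSTing to
// /select; this sidesteps the servlet container's URL length limit.
public class PostBody {
    static String pair(String k, String v) throws UnsupportedEncodingException {
        return URLEncoder.encode(k, "UTF-8") + "=" + URLEncoder.encode(v, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        String body = pair("q", "id:(1 OR 2 OR 3)") + "&" + pair("rows", "10");
        System.out.println(body); // q=id%3A%281+OR+2+OR+3%29&rows=10
    }
}
```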
Hi all,
I am pretty fresh to solr and I have encountered a problem.
In short:
Is there a way to configure Solr to accept POST queries (instead of GET
only?).
Or: is there some other way to make Solr accept queries longer than 2,000
characters? (Up to 10,000 would be nice)
Longer version:
I ha
Hi Solr users,
This is my first post on this list, so nice to meet you.
I need to do something with solr, but I have no idea how to achieve this. Let
me describe my problem.
I'm building an address search engine. In my Solr schema, I've got many fields
like «country», «state», «town», «street»
Hello friends,
I have a problem...
My search engine server has been running for many weeks.
Now I get new XML, and one of the fields is multivalued.
OK, I changed the schema.xml, set it to multiValued, and it works :-) no error
during indexing. Now I go to the GUI and try to sort on this field, and BAM,
Sorry for being a bit dim, I don't understand this:
Looking at my default configuration for SOLR, I have a request handler named
'dismax' and request handler named 'standard' with the default="true". I
understand that I can configure the usage of this in the query using the
qt=dismax or qt=stand
hello,
I am using SolrJ to access Solr indexes. When constructing a query, I create
a Lucene query and use query.toString() to create the SolrQuery.
I am facing difficulty while creating facet query for individual field, as
I could not find an easy and clean way of constructing facet query with
param
Hi Joe,
Thanks for the link, I'll check it out, I'm not sure it'll help in my
situation though since the clustering should happen at runtime due to
faceted browsing (unless I'm mistaken at what the preprocessing does).
More on my progress though, I thought some more about using Hilbert
curve
Hi,
Just check out trunk from svn. After that, the war file is at
./trunk/dist/apache-solr-1.4-dev.war
On Wed, Sep 9, 2009 at 8:56 AM, Venkatesan A. wrote:
> Hi
>
> Where can I find solr1.4.war
>
> Thanks
> Arun
>
> -Original Message-
> From: kaoul@gmail.com [mailto:kaoul@gmail.com] On B
Hi
Where can I find solr1.4.war
Thanks
Arun
-Original Message-
From: kaoul@gmail.com [mailto:kaoul@gmail.com] On Behalf Of Erwin
Sent: Wednesday, September 09, 2009 2:25 PM
To: solr-user@lucene.apache.org
Subject: Re: Why dismax isn't the default with 1.4 and why it doesn't
suppo
Hi Gert,
&qt=dismax in URL works with Solr 1.3 and 1.4 without further
configuration. You are right, you should find a "dismax" query parser
in solrconfig.xml by default.
Erwin
On Wed, Sep 9, 2009 at 7:49 AM, Villemos, Gert wrote:
> On question to this;
>
> Do you need to explicitly configure a
Hi Shalin Shekhar Mangar,
I got some code from this site:
http://www.mattweber.org/2009/05/02/solr-autosuggest-with-termscomponent-and-jquery/
When I used that code in my project, I found that there is
no TermsComponent jar or plugin.
Is there any other way of doing autocom
On question to this;
Do you need to explicitly configure a 'dismax' queryparser in the
solrconfig.xml to enable this, or is a queryparser named 'dismax'
available per default?
Cheers,
Gert.
-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: Wednesday, S
On Wed, Sep 9, 2009 at 11:53 AM, dharhsana wrote:
>
> I am new to Solr.
> My requirement is that I need to have an autocompletion text box in my blog
> application; I need to know how to implement it with Solr 1.4.
>
> I have gone through TermsComponent, but TermsComponent is not available in
> solr 1.4
On Mon, Sep 7, 2009 at 8:58 PM, djain101 wrote:
>
>
> Please suggest what is the right way to configure so that if one core fails
> due to configuration errors, all other cores remain unaffected?
>
> *
> Check your log files for more
On Wed, Sep 9, 2009 at 8:58 AM, Mohamed Parvez wrote:
> I have a multi core Solr setup.
>
> Is it possible to return results from the second core if the search on the
> first core does not return any results?
>
No, but you can make two queries.
>
> Or if it's possible to return the results fro
Hi Staszek,
Thank you very much for your advice. My problem has been solved. It was
caused by the regexp in the stoplabels.en. I didn't realize that a regular
expression is required in order to filter out the words. I have added the
regexp in my stoplabels.en and it works like a charm.
-GC
On Wed