Thanks Gora, I tried that but it didn't help.
Regards.
--
View this message in context:
http://lucene.472066.n3.nabble.com/DIH-incorrect-datasource-being-picked-up-by-XPathEntityProcessor-tp3994802p3995211.html
Sent from the Solr - User mailing list archive at Nabble.com.
I'm about to implement an autocomplete mechanism for my search box. I've read
about some of the common approaches, but I have a question about wildcard
query vs facet.prefix.
Say I want autocomplete for a title: 'Shadows of the Damned'. I want this to
appear as a suggestion if I type 'sha' or 'dam'.
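A minimal sketch of the facet.prefix approach (the field name, lowercasing, and limits are assumptions, not from this thread): because facet.prefix matches the *indexed* terms, a tokenized title like 'Shadows of the Damned' can be suggested from either 'sha' or 'dam'.

```python
from urllib.parse import urlencode

def suggest_params(prefix, field="title"):
    """Build query params for a facet.prefix-based suggester (sketch)."""
    return urlencode({
        "q": "*:*",
        "rows": 0,                       # we only want the facet counts back
        "facet": "true",
        "facet.field": field,
        "facet.prefix": prefix.lower(),  # match the lowercased indexed terms
        "facet.limit": 10,
        "wt": "json",
    })

print(suggest_params("sha"))
```

A wildcard query (`title:sha*`) would also work, but facet.prefix runs against the term dictionary and returns counts for free, which is why it is a common autocomplete choice.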
Sam,
These are big numbers you are throwing around, especially the query volume.
How big are these records that you have 4 billion of -- or put another way,
how much space would it take up in a pure form like in CSV? And should I
assume the searches you are doing are more than geospatial? In an
Hi all,
are stopwords from the stopwords.txt config file
supposed to be indexed?
I would say no, but this is the situation I am
observing on my Solr instance:
* I have a bunch of stopwords in stopwords.txt
* my fields are of fieldType "text" from the example schema.xml,
i.e. I have
-- >8 --
I am using AbstractSolrTestCase (which in turn uses
solr.util.TestHarness) as a basis for unittests, but the solr
installation is outside of my source tree and I don't want to
duplicate it just to change a few lines (and with the new solr4.0 I
hope I can get the test-framework in a jar file, previo
"unable to create new native thread"
That suggests you're running out of threads, not RAM. Possibly you're
using a multithreaded collector, and it's pushing you over the top of
how many threads your OS lets a single process allocate? Or somehow
the thread stack size is set too high?
More here:
h
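As a rough illustration of the stack-size trade-off (Python here, purely as a sketch; on the JVM the knob is -Xss): each thread reserves its stack address space up front, so smaller stacks let the OS fit more threads within the same per-process limits.

```python
import threading

# Each thread reserves stack space up front, so smaller stacks let the OS
# fit more threads before failing with "unable to create new native thread".
threading.stack_size(256 * 1024)  # request 256 KiB stacks for new threads

def worker():
    pass

threads = [threading.Thread(target=worker) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"started and joined {len(threads)} threads")
```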
Hello, Salman,
It would probably be helpful if you included the text/stack trace of
the error you're encountering, plus any other pertinent system
information you can think of.
One thing to remember is the memory usage you tune with Xmx is only
the maximum size of the heap, and there are other ty
Hi Erick,
Thanks for the reply.
The query:
http://localhost:8983/solr/db/select?indent=on&version=2.2&q=CITY:MILTON&fq=&start=0&rows=10&fl=*&wt=&explainOther=&hl.fl=&group=true&group.field=ID&group.ngroups=true&group.truncate=true&debugQuery=on
yields this in the debug section:
CITY:MILTON
We used JRockit with Solr 1.4 because the default JVM had memory issues (not
only was it consuming more memory, it also didn't stay within the maximum
memory allocated to Tomcat, whereas JRockit did). However, JRockit gives an
error when used with Solr 3.4/3.5. Any ideas why?
>> Solrj multi-threaded client sends several 1,000 docs/sec
>Can you expand on that? How many threads at once are sending docs to solr?
Is each request a single doc or multiple?
I realize, after the fact, that my solrj client is much like
org.apache.solr.client.solrj.LargeVolumeTestBase. The num
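The batching question above can be sketched like this (a Python stand-in, not SolrJ; post_batch is a hypothetical placeholder for the real HTTP/SolrJ call): several worker threads, each posting batches of documents rather than one document per request.

```python
from concurrent.futures import ThreadPoolExecutor

def post_batch(batch):
    # Stand-in for the real SolrJ/HTTP call; pretend every doc is accepted.
    return len(batch)

def index_all(docs, batch_size=100, threads=4):
    """Split docs into batches and post them from a small thread pool."""
    batches = [docs[i:i + batch_size] for i in range(0, len(docs), batch_size)]
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return sum(pool.map(post_batch, batches))

docs = [{"id": str(i)} for i in range(5000)]
print(index_all(docs))  # 5000
```

Batching is usually what gets a multithreaded client into the thousands of docs/sec; one document per request spends most of its time on HTTP overhead.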
I forgot:
I do the request on the uniqueKey field, so each request gets one document
On 15/07/2012 14:11, Bruno Mannina wrote:
Dear Solr Users,
I have a Solr 3.6 + Tomcat setup and a program that sends 4 HTTP
requests at the same time.
I must do 1902 requests.
I ran several tests, but each time it loses some requests
Erick,
Thank you. I think originally my thought was that if I had my slave
configuration really close to my master config, it would be very easy to
promote a slave to a master (and vice versa) if necessary. But I think you
are correct that ripping out from the slave config anything that would
modi
The beta will have the files that were in solr/conf and solr/data in
solr/collection1/conf|data instead.
What Solr test cases are you referring to? The only ones that should care about
this would have to be looking at the file system. If that is the case, simply
update the path. The built in tests
Agreed. That's why I say "maybe". Clearly something sounds amiss here.
-- Jack Krupansky
-Original Message-
From: Yonik Seeley
Sent: Sunday, July 15, 2012 12:06 PM
To: solr-user@lucene.apache.org
Subject: Re: SOLR 4 Alpha Out Of Mem Err
On Sun, Jul 15, 2012 at 12:52 PM, Jack Krupansky
wrote:
> Maybe your rate of update is so high that the commit never gets a chance to
> run.
I don't believe that is possible. If it is, it should be fixed.
-Yonik
http://lucidimagination.com
On Sun, Jul 15, 2012 at 11:52 AM, Nick Koton wrote:
>> Do you have the following hard autoCommit in your config (as the stock
> server does)?
>> <autoCommit>
>>   <maxTime>15000</maxTime>
>>   <openSearcher>false</openSearcher>
>> </autoCommit>
>
> I have tried with and without that setting. When I described running with
> auto commit, that setting is
Maybe your rate of update is so high that the commit never gets a chance to
run. So, maybe all these uncommitted updates are buffered up and using
excess memory.
Try explicit commits from SolrJ, but less frequently. Or maybe if you just
pause your updates periodically (every 30 seconds or so)
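That advice can be sketched as a small client-side throttle (a sketch only; commit_fn stands in for the real client commit call): commit after every max_docs adds or after max_secs seconds, whichever comes first, instead of committing on every document.

```python
import time

# Sketch of "commit explicitly, but less often": count uncommitted adds
# and commit every max_docs documents or max_secs seconds, whichever
# comes first. commit_fn is a stand-in for the real client commit call.
class ThrottledCommitter:
    def __init__(self, commit_fn, max_docs=1000, max_secs=30.0):
        self.commit_fn = commit_fn
        self.max_docs = max_docs
        self.max_secs = max_secs
        self.pending = 0
        self.last_commit = time.monotonic()

    def added(self, n=1):
        self.pending += n
        due = (self.pending >= self.max_docs or
               time.monotonic() - self.last_commit >= self.max_secs)
        if due:
            self.commit_fn()
            self.pending = 0
            self.last_commit = time.monotonic()

commits = []
committer = ThrottledCommitter(lambda: commits.append(1), max_docs=10)
for _ in range(25):
    committer.added()
print(len(commits))  # two commits: at the 10th and 20th add
```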
"Anything currently in the trunk ..."
I think you mean "Anything in the 4x branch", since "trunk" is 5x by
definition.
But I'd agree that taking a nightly build or building from the 4x branch is
likely to be a better bet than the "old" Alpha.
-- Jack Krupansky
-Original Message-
F
You've got a couple of choices. There's a new patch in town
https://issues.apache.org/jira/browse/SOLR-139
that allows you to update individual fields in a doc if (and only if)
all the fields in the original document were stored (actually, all the
non-copy fields).
So if you're storing (stored="tr
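A sketch of what such an atomic update request body looks like in Solr 4's JSON update format (the id and field values here are made up; remember it only works when all non-copyField fields are stored, since Solr rebuilds the document from stored values before reindexing it):

```python
import json

def set_field_update(doc_id, field, value):
    """Build the JSON body for atomically setting one field by id (sketch)."""
    return json.dumps([{"id": doc_id, field: {"set": value}}])

body = set_field_update("doc-42", "price", 9.99)
print(body)
```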
Anything currently in the trunk will most probably be in the BETA and
in the eventual release. So I'd go with the trunk code. It'll always
be closer to the actual release than ALPHA or BETA.
I know there've been some changes recently around, exactly
the "collection" name. In fact there's a disc
q and fq queries don't necessarily run through the same query parser, see:
http://wiki.apache.org/solr/SimpleFacetParameters#facet.query_:_Arbitrary_Query_Faceting
So try adding &debugQuery=on to both queries you submitted. My guess
is that if you look at the parsed queries, you'll see something t
> Do you have the following hard autoCommit in your config (as the stock
server does)?
> <autoCommit>
>   <maxTime>15000</maxTime>
>   <openSearcher>false</openSearcher>
> </autoCommit>
I have tried with and without that setting. When I described running with
auto commit, that setting is what I mean. I have varied the time in the
range 10,000-60,000 m
Hi,
I am new to Solr spatial search and would like to understand if Solr can be
used successfully for very large data sets in the range of 4 billion records.
I need to search some filtered data based on a region, maybe a set of
lat/lons or a polygon area. Is that possible in Solr? How fast is it wit
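For the point-radius part of the question, a geofilt filter query is the usual shape; here is a minimal sketch of the request parameters (the coords field name and the sample point are assumptions, not from this thread):

```python
from urllib.parse import urlencode

def geo_filter_params(lat, lon, radius_km, field="coords"):
    """Build params restricting results to radius_km of a point (sketch)."""
    return urlencode({
        "q": "*:*",
        "fq": "{!geofilt}",      # spatial filter applied as an fq
        "sfield": field,         # the indexed location field (assumed name)
        "pt": f"{lat},{lon}",    # center point
        "d": radius_km,          # radius in km
        "wt": "json",
    })

print(geo_filter_params(40.7128, -74.0060, 5))
```

Polygon filtering needs a spatial field type that supports shapes rather than the plain lat/lon point type.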
The answer appears to be "No", but it's good to hear people express an
interest in proposed features.
-- Jack Krupansky
-Original Message-
From: Rajani Maski
Sent: Sunday, July 15, 2012 12:02 AM
To: solr-user@lucene.apache.org
Subject: Facet on all the dynamic fields with *_s feature
Dear Solr Users,
I have a Solr 3.6 + Tomcat setup and a program that sends 4 HTTP
requests at the same time.
I must do 1902 requests.
I ran several tests, but each time it loses some requests:
- sometimes I get 1856 docs, 1895 docs, or 1900 docs, but never 1902 docs.
With Jetty, I always get 1902
Hi,
I'm using SOLR 4.x from trunk. This was the version from 2012-07-10. So
this is one of the latest versions.
I searched mailing list and jira but found only this
https://issues.apache.org/jira/browse/SOLR-3436
It was committed in May to trunk so my version of SOLR has this fix. But
the proble
Do you have the following hard autoCommit in your config (as the stock
server does)?
<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
This is now fairly important since Solr now tracks information on
every uncommitted document added.
At some point we should probably hardcode some mechanism based on
number o
Hello,
I am new to Solr and am running some tests with our data in Solr. I am using
version 3.6, and the data is imported from a DB2 database using Solr's DIH.
We have defined a single entity in the db-data-config.xml, which is an
equivalent of the following query:
The ID in NAME_CONNECTIONS is n