Hello all,
We are running a Solr cluster which is now on Solr 4.2.
The index is about 35GB on disk with each register between 15k and 30k.
(This is simply the size of a full xml reply of one register. I'm not sure
how to measure it otherwise.)
Our memory requirements are running amok. We have less than a quarter of our
Yes, I did, but there is no change in the result.
I am using the code below and getting an exception while using SolrQuery:
Mar 24, 2013 3:08:07 PM org.apache.solr.core.QuerySenderListener newSearcher
INFO: QuerySenderListener sending requests to Searcher@795e0c2b
main{StandardDirectoryReader(segments_49:524 _4v(4.2):C299313
_4x(4.2):C2953/13
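For reference, a minimal SolrJ 4.x query looks like the sketch below; the URL and core name are assumptions, since the original code didn't make it into the archive.

  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.impl.HttpSolrServer;
  import org.apache.solr.client.solrj.response.QueryResponse;

  public class QueryExample {
      public static void main(String[] args) throws Exception {
          // URL and core name are illustrative, not from the original post
          HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
          SolrQuery query = new SolrQuery("*:*");
          query.setRows(10);
          QueryResponse response = server.query(query);
          System.out.println("Found " + response.getResults().getNumFound() + " documents");
          server.shutdown();
      }
  }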
Hi all,
We import about 1.5 million documents on a nightly basis using DIH. During this
time, we need to ensure that all documents make it into the index, or roll
back on any error, which DIH takes care of for us. We also disable
autoCommit in DIH but instruct it to commit at the very end of the import.
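For reference, a full-import triggered along these lines (host and core name are assumptions) rolls back on error and performs a single commit at the end:

  http://localhost:8983/solr/collection1/dataimport?command=full-import&clean=true&commit=true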
Hi ballusethuraman,
I am sure you have done this already, but just to be sure, did you reindex your
existing kilometer data after you changed the data type from string to long? If
not, then you should.
-sujit
On Mar 23, 2013, at 11:21 PM, ballusethuraman wrote:
> Hi, I am having a column
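For reference, a long-typed field in a Solr 4.x schema.xml would look roughly like this; the field and type names are placeholders based on the thread, not the poster's actual schema:

  <fieldType name="tlong" class="solr.TrieLongField" precisionStep="8" positionIncrementGap="0"/>
  <field name="kilometer" type="tlong" indexed="true" stored="true"/>

A type change like this only takes effect for documents indexed after the change, hence the need to reindex.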
Yeah, it is kind of weird, but certainly doable. The big gotcha is that if you
want to _retrieve_ that field, it could take some time. If you just want
to search it, no problems that I know of. If you do want to retrieve it,
make sure lazy field loading is enabled and that you do NOT ask for this
field.
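The relevant setting lives in the <query> section of solrconfig.xml:

  <!-- load stored fields lazily, so a huge field is only read when actually requested -->
  <enableLazyFieldLoading>true</enableLazyFieldLoading>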
Seems like a reasonable thing to do. Examine the debug output to ensure
that there's no short-circuiting being done as far as ConstantScoreQuery...
Best
Erick
On Tue, Mar 19, 2013 at 7:05 PM, adityab wrote:
> Hi All,
>
> I want to validate my approach by the experts, just to make sure I am on
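The debug output Erick mentions can be requested with the standard debugQuery parameter, e.g. (URL layout assumed):

  http://localhost:8983/solr/collection1/select?q=yourquery&debugQuery=true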
Just to get started, do you hit OOM quickly with a few expensive queries, or
is it after a number of hours and lots of queries?
Does Java heap usage seem to be growing linearly as queries come in, or are
there big spikes?
How complex/rich are your queries (e.g., how many terms, wildcards, facets)?
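One way to gather data for these questions is to run the JVM with GC logging and an on-OOM heap dump enabled; these are standard HotSpot flags, with the paths as assumptions:

  -verbose:gc -XX:+PrintGCDetails -Xloggc:/var/log/solr/gc.log
  -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/solr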
On Sun, Mar 24, 2013 at 4:19 AM, John Nielsen wrote:
> Schema with DocValues attempt at solving problem:
> http://pastebin.com/Ne23NnW4
> Config: http://pastebin.com/x1qykyXW
>
This schema isn't using docValues, due to a typo in your config:
it should be docValues="true", not DocValues="true" (the attribute name is case-sensitive).
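In other words, the field definition in schema.xml should look like this; the field and type names below are placeholders, since the actual schema is in the pastebin:

  <field name="somefield" type="string" indexed="true" stored="true" docValues="true"/>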
Hi,
Does anyone know how Solr 4/Lucene and the JVM manage memory?
We have the following case.
We have a 15GB server running only Solr 4/Lucene and the JVM (no custom code).
We had allocated 2GB of memory and the JVM was using 1.9GB. At some point
something happened and we ran out of memory.
The
Spyros Lambrinidis [spy...@peopleperhour.com]:
> Then we increased the JVM memory to 4GB and we see that gradually, JVM
> starts to use as much as it can. It is now using 3GB out of the 4GB
> allocated.
That is to be expected. When the number of garbage collections increases, the
JVM might decide
From: John Nielsen [j...@mcb.dk]:
> The index is about 35GB on disk with each register between 15k and 30k.
> (This is simply the size of a full xml reply of one register. I'm not sure
> how to measure it otherwise.)
> Our memory requirements are running amok. We have less than a quarter of
> our
Hi,
Our Solr implementation consists of several cores, sometimes interacting with
each other. Using SolrTestCaseJ4 didn't work out for us. Instead we would
like to test the resulting war from the outside using integration tests. We are
utilizing Apache Maven as our build management tool. Therefore we are c
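A minimal sketch of the failsafe side of such a setup, assuming the war is started separately (e.g. by the jetty or cargo plugin bound to pre-integration-test) and that plugin versions are managed elsewhere:

  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <executions>
      <execution>
        <goals>
          <goal>integration-test</goal>
          <goal>verify</goal>
        </goals>
      </execution>
    </executions>
  </plugin>

By default failsafe picks up classes named *IT.java, which keeps the integration tests out of the regular unit test run.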
Unrelated to your question: you said "We are utilizing Apache Maven
as build management tool." I think ant + ivy are currently the build and
dependency management tools, and the Maven POM is generated via a plugin (correct
me if I am wrong). Are there any plans to move the project to
Maven?
Toke Eskildsen [t...@statsbiblioteket.dk]:
> If your whole index has 10M documents, each of which has 100 values
> for each field, with each field having 50M unique values, then the
> memory requirement would be more than
> 10M*log2(100*10M) + 100*10M*log2(50M) bit ~= 340MB/field ~=
> 1.6GB for faceting.
A step I meant to include was that after you "warm" Solr with a
representative collection of queries that references all of the fields,
facets, sorting, etc. that your daily load will reference, check the Java
heap size at that point, and then set your Java heap limit to a moderate
level higher than that.
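For reference, such warming queries can be wired into solrconfig.xml via the QuerySenderListener (the same listener visible in the log snippet earlier in this digest); the field and facet names below are placeholders:

  <listener event="firstSearcher" class="solr.QuerySenderListener">
    <arr name="queries">
      <lst>
        <str name="q">*:*</str>
        <str name="facet">true</str>
        <str name="facet.field">category</str>
        <str name="sort">price asc</str>
      </lst>
    </arr>
  </listener>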
Thanks, Eric. In this query, "q=*:*", the Lucene score is always 1.
The wiki at http://wiki.apache.org/solr/ has come under attack by spammers more
frequently of late, so the PMC has decided to lock it down in an attempt to
reduce the work involved in tracking and removing spam.
From now on, only people who appear on
http://wiki.apache.org/solr/ContributorsGroup will be able to edit pages.
Hi,
I managed to resolve this issue and I am getting results now as well. But this
time I am getting a different exception while loading the Solr CoreContainer.
Here is the code:
String SOLR_HOME = "/data/solr1/example/solr/collection1";
CoreContainer coreContainer = new CoreContainer(SOLR_HOME);
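For comparison, a typical embedded setup in Solr 4.x looks like the sketch below. Note that the solr home handed to CoreContainer is usually the directory containing solr.xml, not the core directory itself, which may be related to the exception; the paths here are illustrative:

  import java.io.File;
  import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
  import org.apache.solr.core.CoreContainer;

  public class EmbeddedExample {
      public static void main(String[] args) throws Exception {
          // solr home is the directory that holds solr.xml (illustrative path)
          String solrHome = "/data/solr1/example/solr";
          CoreContainer coreContainer = new CoreContainer();
          coreContainer.load(solrHome, new File(solrHome, "solr.xml"));
          EmbeddedSolrServer server = new EmbeddedSolrServer(coreContainer, "collection1");
          // ... use the server ...
          coreContainer.shutdown();
      }
  }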
Hi all,
I've enabled term vectors to be stored. The result shows up fine using an
HTTP request in the browser. Since I'm planning to build a web
service using Java, I need to get those values using SolrJ.
I've been googling and found this solution
(http://stackoverflow.com/questions/8977852/how-to-pa
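The usual SolrJ route, sketched under the assumption that a TermVectorComponent handler is registered at /tvrh in solrconfig.xml, is to hit that handler and walk the raw NamedList response:

  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.impl.HttpSolrServer;
  import org.apache.solr.client.solrj.response.QueryResponse;
  import org.apache.solr.common.util.NamedList;

  public class TermVectorExample {
      public static void main(String[] args) throws Exception {
          // URL, core and handler names are assumptions
          HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
          SolrQuery query = new SolrQuery("id:1");
          query.setRequestHandler("/tvrh");
          query.set("tv", true);
          query.set("tv.tf", true);
          query.set("tv.positions", true);
          QueryResponse response = server.query(query);
          // term vectors come back as a raw NamedList, not a typed bean
          NamedList<?> termVectors = (NamedList<?>) response.getResponse().get("termVectors");
          System.out.println(termVectors);
      }
  }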
Thanks, Chris.
I have three indexes which I have set up as three separate cores, using this
solr.xml config.
This works just fine as a standalone Solr.
I duplicated this setup on the same machine under a completely separate Solr installation (solr-node
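The solr.xml itself didn't make it into the archive; a minimal three-core layout in the legacy 4.x format would look roughly like this, with core names assumed:

  <solr persistent="true">
    <cores adminPath="/admin/cores">
      <core name="core0" instanceDir="core0" />
      <core name="core1" instanceDir="core1" />
      <core name="core2" instanceDir="core2" />
    </cores>
  </solr>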
Manually delete the lock file
"/data/solr1/example/solr/collection1/./data/index/write.lock"
and restart Solr.
On Sun, Mar 24, 2013 at 9:32 PM, Sandeep Kumar Anumalla <
sanuma...@etisalat.ae> wrote:
> Hi,
>
> I managed to resolve this issue and I am getting results now as well. But
> this time I am getting a different exception while loading the Solr CoreContainer.