This sounds a lot like...
https://issues.apache.org/jira/browse/SOLR-6643
: Date: Fri, 12 Dec 2014 16:54:03 -0700 (MST)
: From: solr-user
: Reply-To: solr-user@lucene.apache.org
: To: solr-user@lucene.apache.org
: Subject: Re: Solr 4.10.2 "Found core" but I get "No cores available" in
:
Set mincount=1
Bill Bell
Sent from mobile
> On Dec 19, 2014, at 12:22 PM, Tang, Rebecca wrote:
>
> Hi there,
>
> I have an index that has a field called collection_facet.
>
> There was a value 'Ness Motley Law Firm Documents' that we wanted to update
> to 'Ness Motley Law Firm'. There were
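The "Set mincount=1" reply above refers to the facet.mincount parameter, which hides zero-count facet buckets (such as the old 'Ness Motley Law Firm Documents' value left behind after reindexing). A minimal sketch of the request parameters, assuming a standard facet query (the query string and field name are taken from the thread):

```python
from urllib.parse import urlencode

# Facet request that suppresses buckets with zero matching documents.
params = {
    "q": "*:*",
    "facet": "true",
    "facet.field": "collection_facet",
    "facet.mincount": 1,  # hide zero-count buckets left over after reindexing
}
query = urlencode(params)
```

Without facet.mincount, deleted or renamed values can still appear in the facet list with a count of 0 until the index segments are merged away.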
I have the following documents indexed (the structure was flattened in the
archive; roughly, two "person" parent documents, each with
"physicalcharacteristics" child documents, with some numeric id fields lost):

person 1
  physicalcharacteristics: Black, Green
  physicalcharacteristics: Red, Brown
person 2
  physicalcharacteristics: Pink, Purple
  physicalcharacteristics: Brown, Blue
I am able to get back all people that have child documents with brown hair
and
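A sketch of the kind of block-join parent query the fragment describes, using Solr's standard `{!parent which=...}` parser (the `type` and `haircolor` field names are assumptions, not shown in the fragment):

```
q={!parent which="type:person"}haircolor:Brown
```

This returns the parent "person" documents whose child documents match the inner clause.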
On 12/19/2014 11:22 AM, Tang, Rebecca wrote:
> I have an index that has a field called collection_facet.
>
> There was a value 'Ness Motley Law Firm Documents' that we wanted to update
> to 'Ness Motley Law Firm'. There were 36,132 records with this value. So I
> re-indexed just the 36,132 reco
50K is still very, very large. You say you have 50M docs/node, so each
filterCache entry will be on the order of 6M bytes. Times 50,000 potential
entries (if you ever stop indexing and the cache actually fills), that's
about 300G of memory for your filter cache alone.
There are OOMs out there with your name on them, just waiting to
happen at 3:00 AM after y
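The arithmetic behind that warning can be sketched directly (the doc count and cache size come from the thread; each filterCache entry is a bitset of roughly maxDoc/8 bytes):

```python
# Rough filterCache sizing, following the reasoning above.
max_doc = 50_000_000            # docs per node, from the thread
entry_bytes = max_doc // 8      # one bit per doc: ~6.25 MB per cache entry
cache_size = 50_000             # configured filterCache size
total_bytes = entry_bytes * cache_size
total_gb = total_bytes / 10**9  # ~312 GB, matching the ~300G estimate
```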
Hi there,
I have an index that has a field called collection_facet.
There was a value 'Ness Motley Law Firm Documents' that we wanted to update to
'Ness Motley Law Firm'. There were 36,132 records with this value. So I
re-indexed just the 36,132 records. After the update, I ran a facet query
: I'm trying to get anything to index. Starting with the simplest file
: possible. As it stands no extraction is working. I'm just trying to get any
: extraction working. I've followed that guide, I'll try again.
let's back up for a minute.
You have a plain text file, and you want to index it.
I'm trying to get anything to index. Starting with the simplest file
possible. As it stands no extraction is working. I'm just trying to get any
extraction working. I've followed that guide, I'll try again.
J
On 19 December 2014 at 16:21, Alexandre Rafalovitch
wrote:
>
> Then I don't understand
I looked for messages on the following error but don't see anything in Nabble.
Does anyone know what this error means and how to correct it?
SEVERE: java.lang.IllegalArgumentException:
/var/apache/my-solr-slave/solr/coreA/data/index/write.lock does not exist
I also occasionally see error message
Okay, thanks for the suggestion, will try to decrease the caches gradually.
Each node has nearly 50 000 000 docs, perhaps we need more shards...
We had smaller caches before but that was leading to bad feedback from our
users. Besides our application users we also use Solr internally for data
analyz
As Shalin points out, these cache sizes are way out of the norm.
For filterCache, each entry is roughly maxDoc/8 bytes. You haven't told
us how many docs are on the node, but you can find maxDoc on
the admin page. What I _have_ seen is a similar situation, and
if you ever stop indexing you'll get OOM err
Thanks, decreased the caches twofold, increased the heap size to 16G,
configured Huge Pages and added these options:
-XX:+UseConcMarkSweepGC
-XX:+UseLargePages
-XX:+CMSParallelRemarkEnabled
-XX:+ParallelRefProcEnabled
-XX:+UseLargePages
-XX:+AggressiveOpts
-XX:CMSInitiatingOccupancyFraction=75
Be
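Assembled into a single launch line, the flags above might look like this (the Jetty `start.jar` entry point and matching `-Xms` are assumptions; also note `-XX:+UseLargePages` appears twice in the list above, once is enough):

```shell
java -Xms16g -Xmx16g \
     -XX:+UseConcMarkSweepGC \
     -XX:+CMSParallelRemarkEnabled \
     -XX:+ParallelRefProcEnabled \
     -XX:+UseLargePages \
     -XX:+AggressiveOpts \
     -XX:CMSInitiatingOccupancyFraction=75 \
     -jar start.jar
```

With CMS, `-XX:CMSInitiatingOccupancyFraction` is usually paired with `-XX:+UseCMSInitiatingOccupancyOnly` so the threshold is actually honored.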
Then I don't understand what you are trying to do. I assume you have
gone through the tutorial and the explanation of the extract handler
(e.g. http://wiki.apache.org/solr/ExtractingRequestHandler ).
It feels like you are shooting yourself in the foot on purpose and
wondering why it hurts. What is th
I'm sending it to
/update/extract/
using the document interface in the web manager.
The text file is just an empty text document. I'm on a mac so utf-8 I
guess.
- Joel
On 19 December 2014 at 15:55, Alexandre Rafalovitch
wrote:
>
> Oh. You are saying you are sending a text file and somehow Ti
Oh. You are saying you are sending a text file and somehow Tika gets involved.
Which handler are you sending it to and what format is your text file in?
If it's Solr XML/JSON or CSV, you should be sending it to the /update
handler, not the /extract one.
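As a sketch, the two requests differ roughly like this (host, port, and core name are assumptions; `literal.id` is the usual way to give the extract handler a document id):

```python
from urllib.parse import urlencode

base = "http://localhost:8983/solr/collection1"  # assumed core URL

# Plain-text (or PDF, Word, ...) file: goes through Tika via the extract handler.
extract_url = base + "/update/extract?" + urlencode(
    {"literal.id": "doc1", "commit": "true"}
)

# Solr XML/JSON/CSV: goes straight to the update handler, no Tika involved.
update_url = base + "/update?" + urlencode({"commit": "true"})
```

Sending structured Solr input to /update/extract (or a binary file to /update) is a common source of confusing Tika errors like the one in this thread.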
Regards,
Alex.
Sign up for my Solr
Also note SOLR-5986 which will help in such cases when queries are stuck
iterating through terms. This will be released with Solr 5.0
On Fri, Dec 19, 2014 at 9:14 AM, Mikhail Khludnev <
mkhlud...@griddynamics.com> wrote:
>
> Hello,
> Note, that timeout is checked only during the search. But for ex
Those are huge cache sizes. My guess is that the searchExecutor thread is
spending too much time doing warming. Garbage collection may also be a
factor as other people pointed out.
On Fri, Dec 19, 2014 at 12:50 PM, heaven wrote:
>
> I have the next settings in my solrconfig.xml:
>
>
Hi,
based on this example:
http://www.cominvent.com/2012/01/25/super-flexible-autocomplete-with-solr/
I have earlier successfully implemented highlight of terms in
(Edge)NGram-analyzed fields.
In a new project, however, with Solr 4.10.2 it does not work.
In the Solr admin analysis page I see the
I have the next settings in my solrconfig.xml:
What is the best way to calculate the optimal cache/heap sizes? I understand
there's no common formula and all docs have different sizes, but -Xmx is
already 12G.
Thanks,
Alex
Hello,
Note, that timeout is checked only during the search. But for example, it
isn't checked during facet counting. Check debugQuery=true output, to
understand how the processing time is distributed across components.
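For example, adding the debug parameter to a request (endpoint and query here are placeholders):

```
/solr/collection1/select?q=your+query&debugQuery=true
```

The `debug` section of the response then includes a `timing` block with prepare/process times per search component, which shows where time goes outside the timeAllowed check (e.g. in faceting).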
On Fri, Dec 19, 2014 at 12:05 PM, Vishnu Mishra wrote:
>
> Hi, I am using sol
Hello Nick,
First of all, if you don't understand the results, ask Solr to explain with
debugQuery=true. The output is really verbose and puzzling, admittedly.
Then, I guess you are trying to implement something like 'refinement',
i.e. filtering aka faceted navigation. Try to supply the filtering clause via f
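That filtering-clause approach can be sketched like this (field names are taken from the question elsewhere in the thread; using `fq` for the narrowing clause is an assumption about how the advice continues):

```
q=text:(first long query) OR title:(first long query)
&fq=text:specific OR title:specific
```

Filter queries don't influence scoring and are cached separately in the filterCache, which fits the "search over previous results" use case.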
Hi, I am using Solr 4.9 for searching over 90 million+ documents. My Solr is
running on a Tomcat server and I am querying Solr from an application. I
have a problem with long-running queries against Solr. Although I have set
timeAllowed to 4ms, it seems that Solr is still running this query
u
Hello,
I suppose it's answered at
http://lucene.472066.n3.nabble.com/converting-to-parent-child-block-indexing-td4174835.html
On Fri, Dec 19, 2014 at 8:53 AM, Rajesh
wrote:
>
> Hi,
>
> I'm trying to index documents using SolrJ. I'm getting duplicate documents
> while adding child document to par
I'm trying to implement specifying queries - we have some results and need to
search over them.
But the query I constructed returns some strange results.
q=(text:(specific) OR title:(specific)) AND (text:(first long query) OR
title:(first long query))
This query returns something, which contains "
Hi,
I know that they mismatch. I'm guessing that there is something inside Tika
that is acting up. Or alternatively it's trying to guess the format and
needs to unzip. I don't really understand why the error says a constant is
not available.
Any ideas?
- Joel
On 18 December 2014 at 17:01, Ale