Hi All,
We have two fields; 'text' is our default field:

<defaultSearchField>text</defaultSearchField>

We copy the doc field to the 'text' field.
When indexing 10 documents that have a value with the same prefix in the doc
field, for example ca067-XXX, and searching on the default field, I get only
5 results. I search for ca067 on the
Jack, thanks for your reply.
We are using Solr 3.4.
We use the standard Lucene query parser.
I added debugQuery=true; this is the result when searching ca067 and
getting 5 documents:

rawquerystring: ca067
querystring: ca067
parsedquery: PhraseQuery(text:"ca 067")
parsedquery_toString: text:"ca 067"
0.1108914 = (MATCH) weight(text:"ca 067" in 75), product of:
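The parsed query shows that the single term ca067 is being split into two tokens, "ca" and "067", which are then combined into a phrase query. That is the behaviour you would get from a WordDelimiterFilter-style analyzer with splitOnNumerics enabled (an assumption here, since the field type definition isn't shown). A rough Python sketch of that split:

```python
import re

def word_delimiter_split(token):
    """Rough approximation of WordDelimiterFilter with splitOnNumerics:
    break on non-alphanumeric characters and on letter/digit boundaries,
    so a mixed token becomes several sub-tokens."""
    return re.findall(r"[A-Za-z]+|[0-9]+", token)

print(word_delimiter_split("ca067-XXX"))  # ['ca', '067', 'XXX']
```

With that analysis, a query for ca067 matches only documents where "ca" and "067" appear as adjacent positions, which would explain getting fewer hits than expected.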
Thanks Jack.
Our schema version is 1.3.
We are using the official Solr 3.4 release; actually we use Maven to
download the Solr war and artifacts:

<dependency>
  <groupId>org.apache.solr</groupId>
  <artifactId>solr</artifactId>
  <version>3.4.0</version>
  <type>war</type>
</dependency>
Hi all!
After upgrading to Solr 4.6.1 we encountered a situation where a cluster
outage was traced to a single node misbehaving; after restarting the node,
the cluster immediately returned to normal operation.
The bad node had ~420 threads locked on FastLRUCache, and most
httpShardExecutor threads w
A little more information: it seems the issue happens after we get an
OutOfMemoryError on a facet query.
On Wed, Mar 12, 2014 at 11:06 PM, Avishai Ish-Shalom
wrote:
> Hi all!
>
> After upgrading to Solr 4.6.1 we encountered a situation where a cluster
> outage was traced to a
Hi,
My Solr instances are configured with a 10GB heap (Xmx) but Linux shows a
resident size of 16-20GB. Even with thread stacks and permgen taken into
account I'm still far off from these numbers. Could it be that JVM I/O
buffers take so much space? Does Lucene use JNI/JNA memory allocations?
aha! mmap explains it. thank you.
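For context: Lucene's MMapDirectory maps index files into the process address space, and pages touched during searching count toward resident size even though they live outside the Java heap. A minimal sketch (plain Python, not Solr-specific, using a hypothetical scratch file) of how a memory-mapped file is read without heap allocations of its full size:

```python
import mmap
import os
import tempfile

# Create a scratch file standing in for a Lucene index segment.
path = os.path.join(tempfile.mkdtemp(), "segment.dat")
with open(path, "wb") as f:
    f.write(b"x" * (1 << 20))  # 1 MiB of data

# Map it read-only; the OS pages it in on demand. Mapped pages are
# counted in the process resident size (RSS), not in any language-level
# heap, which is why RSS can exceed the configured heap limit (-Xmx).
with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        first = mm[0:4]  # touching the data faults pages into RSS
        size = len(mm)

print(first, size)  # b'xxxx' 1048576
```

Since the mapped pages are backed by the index files themselves, the OS can reclaim them under memory pressure; the extra RSS is not a leak.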
On Tue, Mar 18, 2014 at 3:11 PM, Shawn Heisey wrote:
> On 3/18/2014 5:30 AM, Avishai Ish-Shalom wrote:
> > My solr instances are configured with 10GB heap (Xmx) but linux shows
> > resident size of 16-20GB. even with thread stack and per
tory-on-64bit.html
>
> Best,
> Erick
>
> On Tue, Mar 18, 2014 at 7:23 AM, Avishai Ish-Shalom
> wrote:
> > aha! mmap explains it. thank you.
> >
> >
> > On Tue, Mar 18, 2014 at 3:11 PM, Shawn Heisey wrote:
> >
> >> On 3/18/2014 5:30 AM, Avish
Hi,
We've had a strange mishap with a Solr Cloud cluster (version 4.5.1) where
we observed high search latency. The problem developed over several hours
until the point where the entire cluster stopped responding properly.
After investigation we found that the number of threads (both so
SOLR-5216?
On Fri, Mar 7, 2014 at 12:13 AM, Mark Miller wrote:
> It sounds like the distributed update deadlock issue.
>
> It's fixed in 4.6.1 and 4.7.
>
> - Mark
>
> http://about.me/markrmiller
>
> On Mar 6, 2014, at 3:10 PM, Avishai Ish-Shalom
> wrote:
>
Hi all,
I have very large documents (as big as 1GB) which I'm indexing and planning
to store in Solr in order to use highlighting snippets. I am concerned
about possible performance issues with such large fields: does storing the
fields require additional RAM over what is required to index/fetch/
r-user@lucene.apache.org
> Subject: Re: Large fields storage
>
>
> Hi Avi,
>
> I assume your documents are rich documents like PDF or Word, am I correct?
> When you extract textual content from them, their size will shrink.
>
> Ahmet
>
>
>
> On Tuesday, December 2, 20
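Large stored fields are often paired with lazy field loading, so that Solr reads stored content only when a field is actually requested. A sketch of the standard solrconfig.xml option (whether it helps with highlighting specifically is an open question, since highlighting itself has to read the stored field):

```xml
<query>
  <!-- Load stored fields on first access rather than eagerly
       when the document is fetched. -->
  <enableLazyFieldLoading>true</enableLazyFieldLoading>
</query>
```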
We are continuously getting this exception during replication from
master to slave. Our index size is 9.27G and we are trying to replicate
a slave from scratch.
It's a different file each time; sometimes we get to 60% replication
before it fails and sometimes only 10%. We never managed a successful
ation
will work?
Thank you again.
Shalom
On Wed, Oct 30, 2013 at 10:00 PM, Shawn Heisey wrote:
> On 10/30/2013 1:49 PM, Shalom Ben-Zvi Kazaz wrote:
>
>> we are continuously getting this exception during replication from
>> master to slave. our index size is 9.27 G and we
on and the
problem disappeared.
The httpcomponents jars which are dependencies of solrj were in the 4.2.x
version; I upgraded to httpclient-4.3.1, httpcore-4.3 and httpmime-4.3.1.
I ran the replication a few times now with no problem at all; it is now
working as expected.
It seems that the upgrade
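For reference, the upgrade described above would look roughly like this in a Maven pom (version numbers are taken from the message; the dependency layout itself is an assumption):

```xml
<!-- Override the httpcomponents versions pulled in transitively by solrj -->
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpclient</artifactId>
  <version>4.3.1</version>
</dependency>
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpcore</artifactId>
  <version>4.3</version>
</dependency>
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpmime</artifactId>
  <version>4.3.1</version>
</dependency>
```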
Hi,
We have a customer that needs support for both English and Japanese; a
document can be in either of the two and we have no indication of the
language of a document. So I know I can construct a schema with both
English and Japanese fields and index them with copyField. I also know
I can detect t
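Solr ships with language identification update processors that can detect and record a document's language at index time. A sketch of such a chain in solrconfig.xml (the field names used here are assumptions):

```xml
<updateRequestProcessorChain name="langid">
  <processor class="solr.LangDetectLanguageIdentifierUpdateProcessorFactory">
    <!-- Fields to run detection on -->
    <str name="langid.fl">title,body</str>
    <!-- Field that receives the detected language code -->
    <str name="langid.langField">language_s</str>
    <!-- Only consider these languages -->
    <str name="langid.whitelist">en,ja</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```

The detected language can then drive which copyField target is populated, or simply be stored for filtering at query time.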
Hello,
I have text and text_ja fields, where text uses an English analyzer and
text_ja a Japanese analyzer; I index both with copyField from other fields.
I'm trying to search both fields using edismax and the qf parameter, but I
see strange behaviour from edismax. I wonder if someone can give me a
hint as to what's
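For reference, a query over both fields with edismax would look something like this (the query value is a placeholder):

q=some+words&defType=edismax&qf=text text_ja&debugQuery=true

Adding debugQuery=true shows how edismax expands the query per field, which is usually the quickest way to see which analyzer is producing the surprising matches or scores.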
Hello list
In one of our searches where we use Result Grouping, we need to
filter results to only groups that have more than one document in the
group, or more specifically to groups that have two documents.
Is it possible in some way?
Thank you