Thanks for your reply, Chris.
Yes, I am aware of this bug; we had reported it through Lucidworks during
our 4.2.0 evaluation :)
I will try to get a thread dump and verify where the CPU is pegging.
Regarding tall documents: we have a huge multiValued list in the
document, which is what I refer to as a tall document.
On 19 May 2013 08:36, Kamal Palei wrote:
> Hi Alex
> I just saw in the *types* area that long is already defined as
>
> omitNorms="true" positionIncrementGap="0"/>
>
> Hence I hope I should be able to declare a long-type field in the *fields* area
> as shown below.
>
>
>
Yes, this should be fine.
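For reference, a typical pairing in a Solr 4.x schema.xml would look something like the following (the field name `user_age` and the exact type attributes are illustrative, not taken from Kamal's schema):

```xml
<!-- In the <types> section: the stock trie-based long type -->
<fieldType name="long" class="solr.TrieLongField" precisionStep="0"
           omitNorms="true" positionIncrementGap="0"/>

<!-- In the <fields> section: a field using that type -->
<field name="user_age" type="long" indexed="true" stored="true"/>
```

Note that after editing schema.xml, the core has to be reloaded (or Solr restarted) and documents re-indexed before the new field shows up in the index.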
Awesome news Rishi! Looking forward to your SolrCloud updates.
On Sat, May 18, 2013 at 12:59 AM, Rishi Easwaran wrote:
>
>
> Hi All,
>
> It's Friday 3:00pm, warm & sunny outside, and it was a good week. Figured
> I'd share some good news.
> I work for AOL mail team and we use SOLR for our mail sea
Hi Alex
I just saw in the *types* area that long is already defined as

omitNorms="true" positionIncrementGap="0"/>

Hence I hope I should be able to declare a long-type field in the *fields* area
as shown below.
Not sure why it is not taking effect.
Best Regards
Kamal
On Sat, May 18, 2013 at 6:23 PM, Kamal Palei wrote:
> Hi Alex,
>
: We recently decided to move from Solr version 3.5 to 4.2.1. The transition
...
: Most of the fields are multiValued (type String) and the size of array in
: those vary from 5 to 50K. So our 30% of popular documents are tall. Not all
...
: Issues that we observed is high CPU and M
I have read this about ZooKeeper:
"ZooKeeper servers have an active connections limit, which by default is
30." Do you set it higher than 30 for Solr?
2013/5/17 vsilgalis
> As an example, I have 9 SOLR nodes (3 clusters of 3) using different
> versions
> of SOLR (4.1, 4.1, and 4.2.1), utiliz
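If I recall correctly, the connection cap in question is the `maxClientCnxns` property in zoo.cfg — it limits concurrent connections per client IP, and the exact default varies by ZooKeeper version. Raising it is a one-line change (the value shown is illustrative; size it to your node count):

```
# zoo.cfg: allow up to 200 concurrent connections from a single client IP
maxClientCnxns=200
```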
Hi,
We recently decided to move from Solr version 3.5 to 4.2.1. The transition
seems to be smooth from a development point of view, but I see some
intermittent issues with our cluster.
Some information: we use the classic Master/Slave model (we have plans to move
to Cloud v4.3).
#documents 300K and have around 1
Hello,
Your comment made me think, so I decided to double-check myself.
I opened up the schema in SQuirreL and made sure that the two columns in
question were actually of type TEXT in the schema - check.
I went into the db-config.xml and removed all references to
ClobTransformer, removed the cas
Aah… I was doing a facet on a double field that has 6 decimal places…
No surprise that the Lucene cache got full…
.z/ahoor
On 17-May-2013, at 11:56 PM, J Mohamed Zahoor wrote:
> Memory increase a lot with queries which have facets…
>
>
> ./Zahoor
>
>
> On 17-May-2013, at 10:00 PM, S
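The problem Zahoor hit (faceting on a raw high-precision double, which makes every value nearly unique) is commonly handled by indexing a separate coarse-grained field just for faceting. A sketch, with the field names `price` and `price_facet` assumed for illustration:

```xml
<!-- schema.xml: keep the precise value for display/sorting, and a
     coarser copy (rounded at index time by the indexing client) for
     faceting, so facet cardinality stays low -->
<field name="price"       type="tdouble" indexed="true" stored="true"/>
<field name="price_facet" type="tdouble" indexed="true" stored="false"/>
```

Note that a plain copyField cannot round values; the rounding has to happen in the indexing client or an update processor.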
Rishi,
Fantastic! Thank you so very much for sharing the details.
Jason
On May 17, 2013, at 12:29 PM, Rishi Easwaran wrote:
>
>
> Hi All,
>
> It's Friday 3:00pm, warm & sunny outside, and it was a good week. Figured I'd
> share some good news.
> I work for AOL mail team and we use SOLR for
These numbers are really great. Would you mind sharing your h/w configuration
and JVM params?
Thanks
--
View this message in context:
http://lucene.472066.n3.nabble.com/Upgrading-from-SOLR-3-5-to-4-2-1-Results-tp4064266p4064370.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi Alex,
Where do I need to mention the types? Kindly tell me in detail.
I use the Drupal framework. It has given a schema file. In that there are
already some long type fields, and these are actually shown by Solr as part
of the index.
Whatever long field I add does not show up as part of the index.
Best R
You'll have to decide whether cached or uncached filter queries work best
for your particular application. If you can use cached filter queries, that's
better, and then separating or factoring the filter query terms is better.
But if you have so much data or so little memory or such complex quer
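For what it's worth, Solr lets you choose caching per filter query via local params; a sketch, with field names assumed for illustration:

```
# Cached, factored filters: each fq is cached and reused independently
q=*:*&fq=store:X&fq=colour:red

# Uncached filter (Solr 3.4+), optionally with a cost hint so cheaper
# filters are applied first
q=*:*&fq={!cache=false cost=100}price:[100 TO 200]
```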
Ideally, such a text search should be done using tokenized text and span
query. Maybe you could do it using the "surround" query parser, but you
should be able to do it using the LucidWorks Search query parser:
"this is" BEFORE:1 ("good" OR "excellent")
But, given that you have a keyword token
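As a rough illustration only: the stock "surround" parser expresses proximity with distance-prefixed `W` (ordered) and `N` (unordered) operators and applies no analysis, so the terms must match their indexed forms. If memory serves, the example above might be approximated as something like:

```
q={!surround}2W(W(this, is), OR(good, excellent))
```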
Thank you so very much Jack for your prompt reply. Your solution worked for
us.
I have another issue in querying fields having values of the sort "This is
good", "This is also good", "This is excellent". I want to perform
"StartsWith" as well as "Contains" searches on this field. The field
definition is as
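For a simple approach on a string-type field, wildcard queries cover both cases, though a leading wildcard is expensive on a large index (ReversedWildcardFilterFactory can help there). Field name and values are illustrative:

```
# StartsWith:
q=title:This\ is\ good*

# Contains (leading wildcard — slow without special handling):
q=title:*excellent*
```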
Hi
I am using Solr 4.2.1.
My index has products from different stores with different attributes.
If I want to get the count of all products which belong to store X, are
coloured red, and are in stock…
My question is: which way of querying is better in terms of "performance" and
"cache u
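One common way to get such a count without fetching any documents (field names assumed for illustration): filter on each attribute and read `numFound` from the response, with `rows=0` so no documents are returned.

```
q=*:*&fq=store:X&fq=colour:red&fq=inStock:true&rows=0
```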
Hi Mikhail,
yes the thing is that I need to take into account different queries and
that's why I can't use the Terms Component.
Cheers.
2013/5/17 Mikhail Khludnev
> On Fri, May 17, 2013 at 12:47 PM, Carlos Bonilla
> wrote:
>
> > We
> > only need to calculate how many different "B" values have