Thanks Erick.
Arcadius.
On 22 May 2014 22:14, Erick Erickson wrote:
> Why not just return them all and sort it out on the app layer? Seems
> easier
>
> Or consider doc transformers I suppose.
>
> Best,
> Erick
>
> On Thu, May 22, 2014 at 10:20 AM, Arcadius Ahouansou
> wrote:
> > Hello.
>
Hi All,
I have my own popularity value source class,
and I let Solr know about it via solrconfig.xml.
But then I get the following class cast exception.
I have tried to make sure there are no old Solr jar files in the classpath.
Why would this be happening?
org.apache.solr.common.SolrExcept
We’re seeing something similar to what Ryan reported, e.g. a massively clogged
overseer queue that gets so bad it brings down our solr nodes. I tried “rmr”ing
the entire /overseer/queue but it keeps returning with “Node does not exist:
/overseer/queue/qn-0##”, after which in order to con
Shawn,
I agree the data dir concept doesn't make much sense in the context of
collections.
With the config I had setup the solr home was in a different file mount, so
I had to change my solr home
filemount location to get the data dir to locate where I wanted it once the
data dir option was removed
"Age out" in this context is just implementing a LRU cache for open
cores. When the cache limit is exceeded, the oldest core is closed
automatically.
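For illustration only (this is not the actual Solr code), a minimal Java sketch of that idea using an access-ordered LinkedHashMap; CoreHandle is a made-up placeholder for whatever holds an open core:

  import java.util.LinkedHashMap;
  import java.util.Map;

  // Hypothetical handle for an open core.
  interface CoreHandle { void close(); }

  // Access-ordered LRU map: when the limit is exceeded, the eldest core "ages out".
  class OpenCoreCache extends LinkedHashMap<String, CoreHandle> {
      private final int maxOpenCores;

      OpenCoreCache(int maxOpenCores) {
          super(16, 0.75f, true); // accessOrder=true, so lookups refresh recency
          this.maxOpenCores = maxOpenCores;
      }

      @Override
      protected boolean removeEldestEntry(Map.Entry<String, CoreHandle> eldest) {
          if (size() > maxOpenCores) {
              eldest.getValue().close(); // close the least-recently-used core
              return true;
          }
          return false;
      }
  }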
Best,
Erick
On Thu, May 22, 2014 at 10:27 AM, Saumitra Srivastav
wrote:
> Eric,
>
> Can you elaborate more on what you mean by "age out"?
>
>
>
>
Why not just return them all and sort it out on the app layer? Seems easier
Or consider doc transformers I suppose.
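A rough SolrJ sketch of the app-layer approach (field names field0..field3 are taken from the related question elsewhere in this digest; the URL and the 4.x-era HttpSolrServer client are assumptions):

  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.impl.HttpSolrServer;
  import org.apache.solr.client.solrj.response.QueryResponse;
  import org.apache.solr.common.SolrDocument;

  public class FieldListOnAppSide {
      public static void main(String[] args) throws Exception {
          HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
          SolrQuery q = new SolrQuery("*:*");
          // Ask for all candidate fields and decide per document on the client.
          q.setFields("field0", "field1", "field2", "field3");
          QueryResponse rsp = server.query(q);
          for (SolrDocument doc : rsp.getResults()) {
              if (doc.containsKey("field0")) {
                  System.out.println(doc.getFieldValue("field0") + " " + doc.getFieldValue("field1"));
              } else {
                  System.out.println(doc.getFieldValue("field2") + " " + doc.getFieldValue("field3"));
              }
          }
      }
  }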
Best,
Erick
On Thu, May 22, 2014 at 10:20 AM, Arcadius Ahouansou
wrote:
> Hello.
>
> I need to have dynamically assigned field list (fl) depending on the
> existence of a fiel
Thanks for looking into this.
Auto-warming queries are the ones which get executed upon creation of the
first searcher and of a new searcher. We use the same set of queries for
the first and new searcher.
These are our queries:
*:*
Shawn, can you open an issue so we don't forget about this?
On Thu, May 22, 2014 at 9:31 PM, Shawn Heisey wrote:
> On 5/22/2014 8:31 AM, Aniket Bhoi wrote:
> > On Thu, May 22, 2014 at 7:13 PM, Shawn Heisey wrote:
> >
> >> On 5/22/2014 1:53 AM, Aniket Bhoi wrote:
> >>> Details:
> >>>
> >>> *Sol
Just a thought: If your users can send updates and you can't trust them,
how can you keep them from deleting all your data?
I would consider using a servlet filter to inspect the request. That would
probably be non-trivial if you plan to accept javabin requests as well.
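A bare-bones sketch of such a filter (URL-level checks only, so it would not catch javabin or deletes sent in the request body; the parameter choices are illustrative, not a complete policy):

  import java.io.IOException;
  import javax.servlet.*;
  import javax.servlet.http.HttpServletRequest;
  import javax.servlet.http.HttpServletResponse;

  public class BlockDestructiveUpdatesFilter implements Filter {
      public void init(FilterConfig cfg) {}
      public void destroy() {}

      public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
              throws IOException, ServletException {
          HttpServletRequest http = (HttpServletRequest) req;
          HttpServletResponse out = (HttpServletResponse) resp;
          boolean isUpdate = http.getRequestURI().contains("/update");
          // Reject explicit commits and URL-supplied bodies on the update handler.
          if (isUpdate && (http.getParameter("commit") != null
                  || http.getParameter("stream.body") != null)) {
              out.sendError(HttpServletResponse.SC_FORBIDDEN, "Not allowed for this client");
              return;
          }
          chain.doFilter(req, resp);
      }
  }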
Michael Della Bitta
Appli
Tomas, I left a few comments for particular cases at
https://issues.apache.org/jira/browse/SOLR-6096 and really want to follow
up on your issues... Y U NO TXT ME BACK??
On Tue, May 20, 2014 at 4:36 PM, Thomas Scheffler <
thomas.scheff...@uni-jena.de> wrote:
> Am 20.05.2014 14:11, schrieb Jack Kr
all,
Is it possible to add a new shard to an existing cloud?
I first call the command below to create collection1:
java -DzkRun -Dbootstrap_confdir=./solr/collection1/conf
-Dcollection.configName=myconf -jar start.jar
I did not give "numShards", which means it will use the implicit router.
After that, how can I add new
On Thu, May 22, 2014 at 10:30 AM, Erick Erickson wrote:
> If we manage to extend the "lazy core" loading from stand-alone to
> "lazy collection" loading in SolrCloud would that satisfy the
> use-case? It still doesn't allow manual unloading of the collection,
> but the large collection would "age
Eric,
Can you elaborate more on what you mean by "age out"?
Hello.
I need to have a dynamically assigned field list (fl) depending on the
existence of a field in the response.
I need to do something like
fl=if(exists(field0),field0 field1,field2 field3))
The problem is that the "if" function does not like the space.
I have tried many combinations like doub
Hi Jayesh,
Solr already works like that. Returned fields (fl) will have original data.
On Thursday, May 22, 2014 8:02 PM, Jayesh Sidhwani
wrote:
Hello folks,
I use SOLR to store e-commerce product data. I wanted to understand if
there's any way in which we can store the index data different
Hello folks,
I use SOLR to store e-commerce product data. I wanted to understand if
there's any way in which we can store the index data differently from the
data that I expect solr to send as a result?
For example, consider that a category is called T-Shirt. When I query Solr,
I want to be able
Eric,
Can you elaborate more on what you mean by "age out"?
On Thu, May 22, 2014 at 9:00 PM, Erick Erickson wrote:
> If we manage to extend the "lazy core" loading from stand-alone to
> "lazy collection" loading in SolrCloud would that satisfy the
> use-case? It still doesn't allow manual unloa
On 5/22/2014 8:31 AM, Aniket Bhoi wrote:
> On Thu, May 22, 2014 at 7:13 PM, Shawn Heisey wrote:
>
>> On 5/22/2014 1:53 AM, Aniket Bhoi wrote:
>>> Details:
>>>
>>> *Solr Version:*
>>> Solr Specification Version: 3.4.0.2012.01.23.14.08.01
>>> Solr Implementation Version: 3.4
>>> Lucene Specification
On 5/21/2014 3:15 PM, cpalm wrote:
> Yes we use that to force the data dir to be someplace other than solr.home.
> Removing the data dir field just puts the data in solr home, which isn't
> desirable.
> solr home is specified in conf/Catalina/localhost/solr.xml
>
>override="true"/>
>
>
> Is t
No, but it sure would be nice to have the Elasticsearch feature of supplying
a script for update.
-- Jack Krupansky
-Original Message-
From: Saumitra Srivastav
Sent: Thursday, May 22, 2014 11:13 AM
To: solr-user@lucene.apache.org
Subject: Atomic update by query instead of ID
Is it po
If we manage to extend the "lazy core" loading from stand-alone to
"lazy collection" loading in SolrCloud would that satisfy the
use-case? It still doesn't allow manual unloading of the collection,
but the large collection would "age out" if it was truly not used all
that much. That said, I don't k
Your boosting in these examples is almost, but not quite totally,
useless. Here's why:
&sort=price asc
The only time the score of the doc (which is what boosting influences)
will be used for ordering the output is as a tie-breaker when the
price is _exactly_ the same.
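For instance (an illustrative addition, not from the original mail): with sort=price asc, score desc the boost-derived score only orders documents whose price is exactly equal.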
FWIW,
Erick
On Wed, May 2
Gotta reiterate: Do you _know_ making your cores "do extra work" is
really a problem?
I _strongly_ urge you to demonstrate (via hard measurements) that you
have a problem here before investing any time or effort in fixing it.
The development and maintenance issues will almost undoubtedly
surprise
Is it possible to update a single field in multiple documents through atomic
updates, using a query instead of an ID?
Here's my use case:
I have a multivalued field called 'tags' and a text field called 'content'.
A user can define a pattern (regex) and name it, say my_pattern. Value
'my_pattern' shoul
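As noted elsewhere in this thread, there is no update-by-query; a rough SolrJ sketch of the usual per-ID workaround (field names are taken from the question; the URL, query string, and paging are placeholder assumptions):

  import java.util.HashMap;
  import java.util.Map;
  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.impl.HttpSolrServer;
  import org.apache.solr.common.SolrDocument;
  import org.apache.solr.common.SolrInputDocument;

  public class TagByQuery {
      public static void main(String[] args) throws Exception {
          HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
          SolrQuery q = new SolrQuery("content:/.*my_pattern_regex.*/"); // placeholder query
          q.setFields("id");
          q.setRows(1000); // page through the results in a real run
          for (SolrDocument match : server.query(q).getResults()) {
              SolrInputDocument update = new SolrInputDocument();
              update.addField("id", match.getFieldValue("id"));
              Map<String, Object> op = new HashMap<String, Object>();
              op.put("add", "my_pattern"); // atomic "add" to the multivalued tags field
              update.addField("tags", op);
              server.add(update);
          }
          server.commit();
      }
  }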
Hmmm, not quite.
AFAIK, anything you can put in a q clause can also be put in an fq
clause. So whether your search is precise or not isn't what should
determine whether you use a q or an fq clause.
What _should_ influence this is whether docs that satisfy the clause
should
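A typical illustration (not from the original mail): q=ipod&fq=inStock:true keeps the inStock restriction out of scoring, while q=ipod AND inStock:true lets it contribute to the score.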
Hmmm... that doesn't sound like what I would have expected - I would have
thought that Solr would throw an exception on the "user" field, rather than
simply treat it as a text keyword. File a Jira. Either it's a bug or the doc
is not complete.
-- Jack Krupansky
-Original Message-
Fro
Hi all,
I have a question regarding the functionality of the edismax parser and its "User
Field" feature.
We are running Solr 4.4 on our server.
For the query:
"q= id:b* user:"Anna Collins"&defType=edismax&uf=* -user&rows=0"
The parsed query (taken from query debug info) is:
+((id:b* (text:user) (te
On Thu, May 22, 2014 at 7:13 PM, Shawn Heisey wrote:
> On 5/22/2014 1:53 AM, Aniket Bhoi wrote:
> > Details:
> >
> > *Solr Version:*
> > Solr Specification Version: 3.4.0.2012.01.23.14.08.01
> > Solr Implementation Version: 3.4
> > Lucene Specification Version: 3.4
> > Lucene Implementation Versi
Hi;
I want to use Solr schemaless mode with SolrCloud. When I start it, I do not
have a definition for field type boosting. I want to modify it without
changing the configuration within ZooKeeper, like this:
text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4
title^10.0 description^5.0 ke
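One way to experiment without touching the configuration in ZooKeeper is to pass qf per request with edismax; a small SolrJ sketch (the endpoint and query text are assumptions, the boosts are the ones listed above):

  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.impl.HttpSolrServer;

  public class RequestTimeBoosts {
      public static void main(String[] args) throws Exception {
          HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
          SolrQuery q = new SolrQuery("ipod");
          q.set("defType", "edismax");
          // Same boosts as in the config snippet above, supplied at query time instead.
          q.set("qf", "text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4 "
                  + "title^10.0 description^5.0");
          System.out.println(server.query(q).getResults().getNumFound());
      }
  }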
On 5/22/2014 1:53 AM, Aniket Bhoi wrote:
> Details:
>
> *Solr Version:*
> Solr Specification Version: 3.4.0.2012.01.23.14.08.01
> Solr Implementation Version: 3.4
> Lucene Specification Version: 3.4
> Lucene Implementation Version: 3.4
>
> *Tomcat version:*
> Apache Tomcat/6.0.18
>
> *OS details
No, I was rejecting BOTH methods 1 and 2. I was suggesting a different
method. I'll leave it to somebody else to describe the method so that it is
easier to understand.
-- Jack Krupansky
-Original Message-
From: Pavel Belenkovich
Sent: Thursday, May 22, 2014 4:00 AM
To: solr-user@luc
Yeah, I recall running into infinite loop issues with PDFBox in Solr years
ago. They keep fixing these issues, but they keep popping up again. Sigh.
-- Jack Krupansky
-Original Message-
From: Siegfried Goeschl
Sent: Thursday, May 22, 2014 4:35 AM
To: solr-user@lucene.apache.org
Subjec
Launching the following query against a sharded Solr 4.7 installation in
EC2 yields a 'Specify the group.field as parameter or local parameter'
error:
http://my_solr_instance:8983/solr/core_name/select/?q=*:mozart&facet=true&facet.field=publisher_facet&facet.field=publication_year_facet&f.publicat
Yes, that's what I am doing.
IMO, in addition to search, Solr satisfies the needs of a lot of analytics
applications as well, and on-demand loading is a common use case in
analytics (to keep TCO low), so it would be nice to keep this supported.
Regards,
Saumitra
On Thu, May 22, 2014 at 5:37 PM, S
Ah, I see. So if I understand it correctly, you are sharing the cluster
with other collections which are more frequently used and you want to keep
resources available for them so you keep your collection dormant most of
the time until requested.
No, we don't have such an API. It'd be cool to have
I don't want to delete the collection/shards. I just want to unload all
shards/replicas of the collection temporarily.
Let me explain my use case.
I have a collection alias, say *collectionA*, which consists of n
collections (n<=5), each with 8 shards and 2 replicas, over a 16-machine
cluster.
*collecti
You can use the delete Collection API.
https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-api6
On Thu, May 22, 2014 at 3:56 PM, Saumitra Srivastav <
saumitra.srivast...@gmail.com> wrote:
> Guys, any suggestions for this??
>
>
>
> --
> View this message in context:
Correct link for Michael McCandless's post: Changing Bits: Accuracy and performance of
Google's Compact Language Detector
To get a sense of the accuracy and performance of Google's Compact
Language Detector, I ran some tests against two othe
Hi,
Does anyone have any ideas about CLD1 or CLD2 in Solr?
I found that the accuracy and performance of Google's Compact Language Detector are
very good (per Michael McCandless), so I want to give it a try, but I don't know how to
use it.
Thanks and Best Regards,
--
Gabriel Zhang
Hi All;
I've designed a system that allows people to use a search service from
SolrCloud. However, I think that I should disable the "commit" option for users
to avoid performance issues (many users can send commit requests, and this
may cause performance problems). I'll configure the solr config file wit
Guys, any suggestions for this??
Hi Eric,
I am running the code from Eclipse with the default heap size of 384 MB and
indexing using the Solr SimplePostTool, posting XML files through HTTP requests.
I feel it's not a heap concern; otherwise the program would have made my
process pretty slow, whereas this is observed every 2-3 hours of inde
Great, thanx Mikhail, I'll try that out.
regards,
Pavel.
-Original Message-
From: Mikhail Khludnev [mailto:mkhlud...@griddynamics.com]
Sent: Thursday, May 22, 2014 11:49
To: solr-user
Subject: Re: multiple queries in single request
Pavel,
I suppose the benchmark matters, anyway. (when
I'm using Solr 4.7.2.
A few things I've missed follow. Before reaching the "one leader-one failed
to recover" state, the situation was no leader for the shard and both nodes
in "recovery failed" mode. A bit of tinkering to clusterstate.json "forced"
the one to be the leader but that didn't change
As far as I remember, https://issues.apache.org/jira/browse/SOLR-3011 threads
didn't work in 3.4, and even had some minor issues in 3.6. Try to
run 3.6.1.
There are no threads in DIH in 4.x anymore:
https://issues.apache.org/jira/browse/SOLR-3262
On Thu, May 22, 2014 at 11:58 AM, Shalin Shekhar Mangar <
shali
Pavel,
I suppose the benchmark matters, anyway. (When benchmarking #1, don't forget to
enable HTTP connection pooling on the client side.)
Regarding #2: passing a long disjunction is a well-known road block
(it's not a really natural problem for search engines). Most of the approaches
are described a
Hi folks,
for a small customer project I'm running SOLR with embedded Tika.
* memory consumption is an issue but can be handled
* there is an issue with PDFBox hitting an infinite loop which causes
excessive CPU usage - requires a SOLR restart but happens only once
within 400.000 documents (PD
Hi Jack!
Thanx for the response!
So you say that using method 2 below (single request with ORs and sorting
results in client) is better than method 1 (separate requests)?
regards,
Pavel.
-Original Message-
From: Jack Krupansky [mailto:j...@basetechnology.com]
Sent: Thursday, May 22,
You are running an ancient version of Solr, plus the multi-threaded support
in DataImportHandler was experimental at best and was removed a few
versions later.
Why don't you upgrade to a more recent version of Solr? At the very least,
remove the threads settings from DIH.
On Thu, May 22, 2014 at
I have Apache Solr hosted on my Apache Tomcat server with a SQL Server backend.
Details:
*Solr Version:*
Solr Specification Version: 3.4.0.2012.01.23.14.08.01
Solr Implementation Version: 3.4
Lucene Specification Version: 3.4
Lucene Implementation Version: 3.4
*Tomcat version:*
Apache Tomcat/6.0.1
You can actually specify a dataDir in the Collection CREATE API by passing
the property.dataDir=/your/data/dir parameter.
It is not very well documented but it is possible.
https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-api1
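For example (collection name, shard count, and path are placeholders):
http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=2&property.dataDir=/your/data/dir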
On Thu, May 22, 2014 at 4:28 AM, cpal