Hi Olivier,
Can you look at the collections to see if there are leader initiated
recovery nodes in the ZooKeeper tree? Go into the Solr Admin UI ->
Cloud panel -> Tree view and drill into one of the collections that's
not recovering: /collections/<collection>/leader_initiated_recovery/
You could try deleting
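(e.g., with the ZooKeeper CLI; a sketch, where the collection name and
the ZK address are placeholders:)

./zkCli.sh -server localhost:2181
rmr /collections/myColl/leader_initiated_recovery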
Hi Vijay,
Verify the ResourceManager URL, and try passing the --manager param to
set it explicitly during the create step.
Cheers,
Tim
On Mon, Aug 17, 2015 at 4:37 AM, Vijay Bhoomireddy
wrote:
> Hi,
>
>
>
> Any help on this please?
>
>
>
> Thanks & Regards
>
> Vijay
>
>
>
>
Hi Vijay,
I'm not sure what's wrong here ... have you posted to the Slider
mailing list? Also, which version of Java are you using when
interacting with Slider? I know it had some issues with Java 8 at one
point. Which version of Slider are you using, so I can try to reproduce ...
Cheers,
Tim
On Thu, Aug 27, 2
You should fix your log4j.properties file to not log to the console ...
it's there for the initial getting-started experience, but you don't
need to send log messages to two places.
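As a sketch, assuming the stock log4j.properties that ships with Solr,
drop CONSOLE from the rootLogger line so everything goes to the file
appender only:

log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=${solr.log}/solr.log
log4j.appender.file.MaxFileSize=50MB
log4j.appender.file.MaxBackupIndex=9
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c; %m%n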
On Tue, Oct 20, 2015 at 10:42 AM, Shawn Heisey wrote:
> On 10/20/2015 9:19 AM, Eric Torti wrote:
>> I had a 52GB solr-8983
Would launching the Java process with javaw help here?
On Thu, Oct 29, 2015 at 4:03 AM, Zheng Lin Edwin Yeo
wrote:
> Yes, this is the expected behaviour. Once you close the command window,
> Solr will stop running. This has happened to me several times. Just to
> check, which version of Solr are
I hope 256MB for -Xss is a typo and you really meant 256k, right?
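If it helps, this is how I'd set it (a sketch for bin/solr.in.sh, or
wherever you pass JVM options in your setup):

SOLR_OPTS="$SOLR_OPTS -Xss256k"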
On Mon, Nov 16, 2015 at 4:58 AM, Behzad Qureshi
wrote:
> Hi All,
>
> I am using Tomcat server with solr 4.10.3. I want to shift to Jetty as
> replacement of Tomcat server but I am not getting any good results with
> respect to perfor
I think Mark found something similar -
https://issues.apache.org/jira/browse/SOLR-6838
On Sat, Feb 14, 2015 at 2:05 AM, Erick Erickson
wrote:
> Exactly how are you issuing the commit? I'm assuming you're
> using SolrJ. the server.commit(whatever, true) waits for the searcher
> to be opened befor
Before I open a JIRA, I wanted to put this out to solicit feedback on what
I'm seeing and what Solr should be doing. So I've indexed the following 8
docs into a 2-shard collection (Solr 4.8'ish - internal custom branch
roughly based on 4.8) ... notice that the 3 grand-children of 2-1 have
dup'd key
I think the next step here is to ship Solr with the war already extracted
so that Jetty doesn't need to extract it on first startup -
https://issues.apache.org/jira/browse/SOLR-7227
On Tue, Mar 10, 2015 at 10:15 AM, Erick Erickson
wrote:
> If I'm understanding your problem correctly, I think you
Are you using a SolrJ client from 4.x to connect to a Solr 5 cluster?
On Wed, Mar 18, 2015 at 1:32 PM, Adnan Yaqoob wrote:
> I'm getting following exception while trying to upload document on
> SolrCloud using CloudSolrServer.
>
> Exception in thread "main" org.apache.solr.common.SolrException:
Anything in the server-side Solr logs? Also, if you go to the Solr admin
console at http://localhost:8983/solr, do you see the gettingstarted
collection in the cloud panel?
On Mon, Mar 30, 2015 at 1:12 PM, Purohit, Sumit
wrote:
> I have a basic Solr 5.0.0 cloud setup after following
> http://l
ount=0 seems related to "no node" error.
>
> thanks
> sumit
>
> From: Timothy Potter [thelabd...@gmail.com]
> Sent: Monday, March 30, 2015 2:18 PM
> To: solr-user@lucene.apache.org
> Subject: Re: NoNode for /clusterstate.json in solr5.0.0 cloud
, 2015 at 2:32 PM, Purohit, Sumit
wrote:
> Thanks Tim,
>
> I had to make some changes in my local spark-solr clone to build it for
> solr5.
> If it's OK, I can commit these to GitHub.
>
> thanks
> sumit
> ________
> From: Timothy Potter
You'll need a python lib that uses a python ZooKeeper client to be
SolrCloud-aware so that you can do RDD like things, such as reading
from all shards in a collection in parallel. I'm not aware of any Solr
py libs that are cloud-aware yet, but it would be a good contribution
to upgrade https://gith
I wrote a simple backup utility for a collection that uses the
replication handler; see:
https://github.com/LucidWorks/solr-scale-tk/blob/master/src/main/java/com/lucidworks/SolrCloudTools.java#L614
feel free to borrow / steal if useful.
On Mon, Apr 6, 2015 at 12:42 PM, Davis, Daniel (NIH/NLM) [C]
14 April 2015 - The Lucene PMC is pleased to announce the release of
Apache Solr 5.1.0.
Solr 5.1.0 is available for immediate download at:
http://www.apache.org/dyn/closer.cgi/lucene/solr/5.1.0
Solr 5.1.0 includes 39 new features, 40 bug fixes, and 36 optimizations
/ other changes from over 60 un
tiles.
* facet.contains option to limit which constraints are returned.
* Streaming Aggregation for SolrCloud.
* The admin UI now visualizes Lucene segment information.
* Parameter substitution / macro expansion across entire request
On Tue, Apr 14, 2015 at 11:42 AM, Timothy Potter wrote:
> 14 Ap
Can you try defining the ZK_HOST in bin\solr.in.cmd instead of passing
it on the command-line?
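A sketch of what that looks like in bin\solr.in.cmd (the ZK addresses
are placeholders):

REM comma-separated list of ZooKeeper hosts for this cluster
set ZK_HOST=zk1:2181,zk2:2181,zk3:2181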
On Mon, Apr 27, 2015 at 12:10 PM, Erick Erickson
wrote:
> What version of Solr are you using? 4.10.3? 5.1?
>
> And can we see the full output of your attempt to start Solr? There
> might be some more in
I'm seeing that RTG requests get routed to any active replica of the
shard hosting the doc requested by /get ... I was thinking only the
leader should handle that request since there's a brief window of time
where the latest update may not be on the replica (albeit usually very
brief) and the lates
Yes, same bug. Fixed in 5.2
On Tue, May 26, 2015 at 9:15 AM, Clemens Wyss DEV wrote:
> I also noticed that (see my post this "morning")
> ...
> SOLR_OPTS="$SOLR_OPTS -Dsolr.allow.unsafe.resourceloading=true"
> ...
> Is not taken into consideration (anymore). Same "bug"?
>
>
> -Original Message
Hi Edwin,
Are there changes you recommend to bin/solr.cmd to make it easier to
work with NSSM? If so, please file a JIRA as I'd like to help make
that process easier.
Thanks.
Tim
On Mon, May 25, 2015 at 3:34 AM, Zheng Lin Edwin Yeo
wrote:
> I've managed to get the Solr started as a Windows serv
Seems like you should be able to use the ManagedStopFilterFactory with
a custom StorageIO impl that pulls from your db:
http://lucene.apache.org/solr/5_1_0/solr-core/index.html?org/apache/solr/rest/ManagedResourceStorage.StorageIO.html
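To give a rough idea, here's a sketch of a JDBC-backed impl. This is my
guess at the shape of the interface, so check the javadoc linked above
for the exact method set; the table and init-arg names are made up:

import java.io.*;
import java.sql.*;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.core.SolrResourceLoader;
import org.apache.solr.rest.ManagedResourceStorage;

public class DbStorageIO implements ManagedResourceStorage.StorageIO {

  private String jdbcUrl; // hypothetical init-arg from solrconfig.xml

  public void configure(SolrResourceLoader loader, NamedList<String> initArgs) {
    jdbcUrl = initArgs.get("jdbcUrl");
  }

  public String getInfo() {
    return "db:" + jdbcUrl;
  }

  public boolean exists(String storedResourceId) throws IOException {
    return loadBlob(storedResourceId) != null;
  }

  public InputStream openInputStream(String storedResourceId) throws IOException {
    byte[] data = loadBlob(storedResourceId);
    if (data == null) throw new FileNotFoundException(storedResourceId);
    return new ByteArrayInputStream(data);
  }

  public OutputStream openOutputStream(final String storedResourceId) {
    // buffer in memory; upsert into the DB when Solr closes the stream
    return new ByteArrayOutputStream() {
      @Override
      public void close() throws IOException {
        saveBlob(storedResourceId, toByteArray());
      }
    };
  }

  public boolean delete(String storedResourceId) throws IOException {
    try (Connection conn = DriverManager.getConnection(jdbcUrl);
         PreparedStatement ps =
             conn.prepareStatement("DELETE FROM managed_resources WHERE id = ?")) {
      ps.setString(1, storedResourceId);
      return ps.executeUpdate() > 0;
    } catch (SQLException e) {
      throw new IOException(e);
    }
  }

  private byte[] loadBlob(String id) throws IOException {
    try (Connection conn = DriverManager.getConnection(jdbcUrl);
         PreparedStatement ps =
             conn.prepareStatement("SELECT data FROM managed_resources WHERE id = ?")) {
      ps.setString(1, id);
      try (ResultSet rs = ps.executeQuery()) {
        return rs.next() ? rs.getBytes(1) : null;
      }
    } catch (SQLException e) {
      throw new IOException(e);
    }
  }

  private void saveBlob(String id, byte[] data) throws IOException {
    // H2-style upsert; adjust the SQL for your database
    try (Connection conn = DriverManager.getConnection(jdbcUrl);
         PreparedStatement ps = conn.prepareStatement(
             "MERGE INTO managed_resources (id, data) KEY (id) VALUES (?, ?)")) {
      ps.setString(1, id);
      ps.setBytes(2, data);
      ps.executeUpdate();
    } catch (SQLException e) {
      throw new IOException(e);
    }
  }
}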
On Thu, May 28, 2015 at 7:03 AM, Alessandro Benedetti
wrote:
Hi Edwin,
You'll need to use the bin\solr.cmd to start Solr as it now requires
some additional system properties to be set. Put simply, starting Solr
using java -jar start.jar is not supported. Please try bin\solr.cmd
and let us know if you run into any issues. You can set any additional
system pr
Can you try with double quotes around the ZK connect string?
bin\solr.cmd -e cloud -z "localhost:2181,localhost:2182,localhost:2183"
On Mon, Jul 6, 2015 at 2:59 AM, Adrian Liew wrote:
> Hi David,
>
> When I run the command below on a Windows machine using Powershell window:
>
> .\solr.cmd -e clo
What are your cache sizes? Max doc?
Also, what GC settings are you using? 6GB isn't all that much for a
memory-intensive app like Solr, esp. given the number of facet fields
you have. Lastly, are you using docValues for your facet fields? That
should help reduce the amount of heap needed to comput
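For example, a docValues-enabled facet field in schema.xml (the field
name is hypothetical):

<field name="category" type="string" indexed="true" stored="false" docValues="true"/>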
This gives the expected result:
SELECT title_s, COUNT(*) as cnt
FROM movielens
WHERE genre_ss='action' AND rating_i='[4 TO 5]'
GROUP BY title_s
ORDER BY cnt desc
LIMIT 5
but using >= 4 doesn't give the same results (my ratings are 1-5):
SELECT title_s, COUNT(*) as cnt
How would I do something like: find all docs using a geofilt, e.g.
SELECT title_s
FROM movielens
WHERE location_p='{!geofilt d=90 pt=37.773972,-122.431297 sfield=location_p}'
This fails with:
{"result-set":{"docs":[
{"EXCEPTION":"java.util.concurrent.ExecutionException:
java.io.IOException: --
redicates should not be too
>> difficult. Feel free to create a jira ticket for this.
>>
>>
>> Joel Bernstein
>> http://joelsolr.blogspot.com/
>>
>> On Sat, May 21, 2016 at 10:55 AM, Timothy Potter
>> wrote:
>>
>>> this gives expec
I've seen docs and diagrams that seem to indicate a streaming
expression can utilize all replicas of a shard, but I'm seeing only 1
replica per shard (I have 2) being queried.
All replicas are on the same host for my experimentation; could that
be the issue? What are the circumstances where all rep
that may or may not be host
> to any replicas of that collection.
>
> At least I think that's what's up, but then again this is
> new to me too.
>
> Which bits of the doc anyway? Sounds like some
> clarification is in order.
>
> Best,
> Erick
>
>
>>> > for sub-processing, where N is however many worker
>>> > nodes you specified that may or may not be host
>>> > to any replicas of that collection.
>>> >
>>> > At least I think that's what's up, but then again this is
>
I have code that uses the DateMathParser and this used to work in 5.x
but is no longer accepted in 6.x:
time:[NOW-2DAY TO 2016-07-19Z]
org.apache.solr.common.SolrException: Invalid Date in Date Math
String:'2016-07-19Z'
at org.apache.solr.util.DateMathParser.parseMath(DateMathParser.java:241)
Th
Got an answer from Hossman in another channel ... this syntax was not
officially supported and is no longer valid, i.e. my code must change
;-)
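For anyone who hits this later, the change on my side was to spell out
a full ISO-8601 instant instead of the bare date + Z, e.g.:

time:[NOW-2DAY TO 2016-07-19T00:00:00Z]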
On Mon, Jul 18, 2016 at 8:02 AM, Timothy Potter wrote:
> I have code that uses the DateMathParser and this used to work in 5.x
> but is no
I'm working with the 6.1.0 release and I have a single SolrCloud instance
with 1 shard / 1 replica. Somehow I'm triggering this, which from what
I can see, means workers == 0, but how? Shouldn't workers default to 1
I should mention that my streaming expression doesn't include any
workers, i.e. it is
l to the /stream handler?
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Thu, Jul 21, 2016 at 12:28 PM, Timothy Potter
> wrote:
>
>> I'm working with 6.1.0 release and I have a single SolrCloud instance
>> with 1 shard / 1 replica. Somehow I'm t
ld never see this error if the /stream handler is executing the
> expression.
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Tue, Jul 26, 2016 at 10:44 AM, Timothy Potter
> wrote:
>
>> it's from a unit test, but not sure why that matters? If I wrap
Does anyone have an example of just POST'ing a streaming expression to
the /stream handler from SolrJ client code? i.e. I don't want to parse
and execute the streaming expression on the client side; rather, I
want to post the expression to the server side.
Currently, my client code is a big copy a
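For context, here's a stripped-down sketch of what I'm after (URL,
collection, and expression are placeholders; this matches my reading of
the SolrJ streaming API in 6.x):

import java.util.HashMap;
import java.util.Map;
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.stream.SolrStream;

public class StreamPost {
  public static void main(String[] args) throws Exception {
    // the expression goes over the wire as a param; the server parses and runs it
    Map<String, String> params = new HashMap<>();
    params.put("expr", "search(myColl, q=\"*:*\", fl=\"id\", sort=\"id asc\")");
    params.put("qt", "/stream");

    SolrStream stream = new SolrStream("http://localhost:8983/solr/myColl", params);
    try {
      stream.open();
      for (Tuple t = stream.read(); !t.EOF; t = stream.read()) {
        System.out.println(t.getString("id"));
      }
    } finally {
      stream.close();
    }
  }
}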
tein
> http://joelsolr.blogspot.com/
>
> On Tue, Jul 26, 2016 at 3:58 PM, Timothy Potter
> wrote:
>
>> Does anyone have an example of just POST'ing a streaming expression to
>> the /stream handler from SolrJ client code? i.e. I don't want to parse
>> and exe
This SQL used to work pre-Calcite:
SELECT movie_id, COUNT(*) as num_ratings, avg(rating) as aggAvg FROM
ratings GROUP BY movie_id HAVING num_ratings > 100 ORDER BY aggAvg ASC
LIMIT 10
Now I get:
Caused by: java.io.IOException: -->
http://192.168.1.4:8983/solr/ratings_shard2_replica1/:Failed to
ex
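FWIW, repeating the aggregate in the HAVING clause instead of
referencing the alias should work, if I understand Calcite's alias
handling right:

SELECT movie_id, COUNT(*) as num_ratings, avg(rating) as aggAvg FROM
ratings GROUP BY movie_id HAVING COUNT(*) > 100 ORDER BY aggAvg ASC
LIMIT 10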
uld consider this a regression, but
>>> I
>>> think this will be a won't fix.
>>>
>>> Joel Bernstein
>>> http://joelsolr.blogspot.com/
>>>
>>> On Tue, May 16, 2017 at 12:51 PM, Timothy Potter
>>> wrote:
>>>
>>>
I'm executing a streaming expr and get this error:
Caused by: org.apache.solr.common.SolrException: Could not load
collection from ZK:
MovieLens_Ratings_f2e6f8b0_3199_11e7_b8ab_0242ac110002
at
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1098)
at
[SessionTracker:ZooKeeperServer@347] -
Expiring session 0x15bd8bdd3500023, timeout of 1ms exceeded
On Fri, May 19, 2017 at 9:48 AM, Joel Bernstein wrote:
> You get this every time you run the expression?
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Fri, May 19, 2017 at 10:44 AM, Timot
Hi Mark,
Sorry for the trouble! I've now made the ami-1e6b9d76 AMI public;
total oversight on my part :-(. Please try again. Thanks Hoss for
trying to help out on this one.
Cheers,
Tim
On Fri, Jun 6, 2014 at 6:46 PM, Mark Gershman wrote:
> Thanks, Hoss.
>
> I did substitute the previous AMI ID
Hi Greg,
Sorry for the slow response. The general thinking is that you
shouldn't worry about which nodes host leaders vs. replicas because A)
that can change, and B) as you say, the additional responsibilities
for leader nodes are quite minimal (mainly per-doc version management
and then distributi
Hi Modassar,
Have you tried hitting the cores for each replica directly (instead of
using the collection)? i.e. if you had col_shard1_replica1 on node1,
then send the optimize command to that core URL directly:
curl -i -v "http://host:port/solr/col_shard1_replica1/update" -H
'Content-type:applic
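(i.e., the full command looks roughly like this; the XML optimize
message is where I was headed with that:)

curl -i -v "http://host:port/solr/col_shard1_replica1/update" \
  -H 'Content-type:application/xml' -d '<optimize/>'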
Hi Zane,
re 1: as an alternative to shard splitting, you can just overshard the
collection from the start and then migrate existing shards to new
hardware as needed. The migration can happen online; see the Collections
API ADDREPLICA command. Once the new replica is online on the new hardware, you
can unload the
Hi Ian,
What's the CPU doing on the leader? Have you tried attaching a
profiler to the leader while it's running to see if any hotspots show
up? Not sure if this is related, but we recently fixed an
issue in the area of leader forwarding to replicas that used too many
CPU cycles ineffi
Hi Ameya,
Tough to say without more information about what's slow. In general,
when I've seen Solr index that slow, it's usually related to some
complex text analysis, for instance, are you doing any phonetic
analysis? The best thing to do is attach a Java profiler (e.g. JConsole or
VisualVM) using rm
You'll need to scp the JAR files to all nodes in the cluster. ZK is
not a great distribution mechanism for large binary files since it has
a 1MB znode size limit (by default).
On Thu, Jul 31, 2014 at 10:26 AM, P Williams
wrote:
> Hi,
>
> I have an existing collection that I'm trying to add to a ne
It will always be supported under the current architecture as
SolrCloud uses master/slave style replication to bring replicas back
in sync with leaders if a replica is too far out of date (currently, "too
far" means more than 100 missed updates). So if it fits your architecture better,
then use it!
On Mon, Aug 4,
Hi Bruno,
Have you looked into Solr's facet support? If I'm reading your post
correctly, this sounds like the classic case for facets. Each time the user
selects a facet, you add a filter query (fq clause) to the original query.
http://wiki.apache.org/solr/SolrFacetingOverview
Tim
On Wed, Oct 2
Sounds correct - you probably want to use an invariant parameter in
solrconfig.xml, something along the lines of:
<lst name="invariants">
  <str name="fq">docset:0</str>
</lst>
Where docset is the new field you add to the schema to determine which set
a document belongs to. You might also consider adding a newSearcher warming
query that includes t
ing me just if another solution exists.
>
> Facets seem to be a good solution.
>
> Bruno
>
>
>
> Le 23/10/2013 17:03, Timothy Potter a écrit :
>
> Hi Bruno,
>>
>> Have you looked into Solr's facet support? If I'm reading your post
>> correc
I've been thinking about how SolrCloud deals with write-availability using
in-sync replica sets, in which writes will continue to be accepted so long
as there is at least one healthy node per shard.
For a little background (and to verify my understanding of the process is
correct), SolrCloud only
want.
>
> - Mark
>
> On Nov 19, 2013, at 12:14 PM, Timothy Potter wrote:
>
> > I've been thinking about how SolrCloud deals with write-availability
> using
> > in-sync replica sets, in which writes will continue to be accepted so
> long
> > as there is a
ov 19, 2013, at 12:42 PM, Timothy Potter wrote:
>
> > You're thinking is always one-step ahead of me! I'll file the JIRA
> >
> > Thanks.
> > Tim
> >
> >
> > On Tue, Nov 19, 2013 at 10:38 AM, Mark Miller
> wrote:
> >
> >> Yeah
Good questions ... From my understanding, queries will work if ZK goes down
but writes do not work without ZooKeeper. This works because the cluster state
is cached on each node, so ZooKeeper doesn't participate directly in query
and indexing requests. Solr has to decide not to allow writes if it loses
>> were hard not to think about while building :) Just hard to get back to a
>> lot of those things, even though a lot of them are fairly low hanging
>> fruit. Hardening takes the priority :(
>>
>> - Mark
>>
>> On Nov 19, 2013, at 12:42 PM, Timothy Potter wr
Yes, I've done this ... but I had to build my own utility to update
clusterstate.json (for reasons I can't recall now). So make your
changes to clusterstate.json manually and then do something like the
following with SolrJ:
public static void updateClusterstateJsonInZk(CloudSolrServer cloudSolrServer,
    byte[] updatedJson) throws Exception { // hypothetical completion of the truncated snippet
  cloudSolrServer.getZkStateReader().getZkClient().setData("/clusterstate.json", updatedJson, true);
}
I'm curious how much compression you get with your synonym file using
something basic like gzip? If significant, would it make sense to
store the compressed syn file in ZooKeeper (or any other metadata you
need to distribute around the cluster)? This would require the code
that reads the syn file f
Hi Dave,
Have you looked at the TermsComponent?
http://wiki.apache.org/solr/TermsComponent It is easy to wire into an
existing request handler and allows you to return the top terms for a
field. Example server even includes an example request handler that
uses it:
<bool name="terms">true</bool>
<bool name="distrib">fa
I have an example in Solr In Action that uses the
PatternReplaceCharFilterFactory and now it doesn't work in 4.7.0.
Specifically, the charFilter is:
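(from memory, it's along these lines; the pattern collapses any letter
repeated two or more times down to exactly two:)

<charFilter class="solr.PatternReplaceCharFilterFactory"
            pattern="([a-zA-Z])\1+"
            replacement="$1$1"/>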
The PatternReplaceCharFilterFactory (PRCF) is used to collapse
repeated
; On 3/1/2014 12:15 PM, Timothy Potter wrote:
>> The PatternReplaceCharFilterFactory (PRCF) is used to collapse
>> repeated letters in a term down to a max of 2, such as #yummmm would
>> be #yumm
>>
>> When I run some text through this analyzer using the Analysis form,
>
Hi,
I'm using the coreAdmin mergeindexes command to merge an index into a
leader (SolrCloud mode on 4.9.0), and the replica does not do a snap
pull from the leader as I would have expected. The merge into the
leader worked like a charm except I had to send a hard commit after
that (which makes sense).
> sounds like we should write a test and make it work.
>
> --
> Mark Miller
> about.me/markrmiller
>
> On August 19, 2014 at 1:20:54 PM, Timothy Potter (thelabd...@gmail.com) wrote:
>> Hi,
>>
>> Using the coreAdmin mergeindexes command to merge an index into a
&
You can set the storageDir init-arg in the solrconfig.xml for the
RestManager for each core. However, since it is at the core config
level, you can't have a different storageDir per language. Here's an
example of how to configure the RestManager in solrconfig.xml to
customize the storageDir:
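(a sketch; the storageDir value is just an example:)

<restManager>
  <str name="storageDir">/var/data/solr/managed</str>
</restManager>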
You need to also verify the node hosting the replica is a live node
(/live_nodes). From SolrJ, you can call:
clusterState.getLiveNodes().contains(node).
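For example (a sketch; assumes you already have a CloudSolrServer and a
node name of the form "host:8983_solr"):

ZkStateReader zkStateReader = cloudSolrServer.getZkStateReader();
boolean isLive = zkStateReader.getClusterState().getLiveNodes().contains(nodeName);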
As for an API, there is CLUSTERSTATE provided by the Collections API, but
it's not consulting /live_nodes (which is a bug) - I'll open a ticket.
On
https://issues.apache.org/jira/browse/SOLR-6481
On Thu, Sep 4, 2014 at 12:32 PM, Timothy Potter wrote:
> You need to also verify the node hosting the replica is a live node
> (/live_nodes). From SolrJ, you can call:
> clusterState.getLiveNodes().contains(node).
>
> As fo
Probably need to look at it running with a profiler to see what's up.
Here are a few additional flags that might help the GC work better for
you (which is not to say there isn't a leak somewhere):
-XX:MaxTenuringThreshold=8 -XX:CMSInitiatingOccupancyFraction=40
This should lead to a nice up-and-dow
Indeed - Hoss is correct ... it's a problem with the example in the
book ... my apologies for the confusion!
On Tue, Sep 30, 2014 at 3:57 PM, Chris Hostetter
wrote:
>
> : Thanks for the response, yes the way you describe I know it works and is
> : how I get it to work but then what does mean the
Just soliciting some advice from the community ...
Let's say I have a 10-node SolrCloud cluster and have a single collection
with 2 shards with replication factor 10, so basically each shard has one
replica on each of my nodes.
Now imagine one of those nodes starts getting into a bad state and st
Correct. Solr 5.0 is not a Web application; any WAR or Web app'ish things
in Solr 5 are implementation details that may change in the future. The ref
guide will include some content about how to migrate to Solr 5 from 4.
On Tue, Feb 10, 2015 at 9:48 AM, Matt Kuiper wrote:
> I am starting to look
The bin/solr script in 4.x didn't do a good job of allowing you to control
the location of the redirected console log or gc log, so you'll probably
have to hack that script a bit. The location of the main Solr log can be
configured in the example/resources/log4j.properties file.
This has been improved in
Hi Vijay,
We're working on SOLR-6816 ... would love for you to be a test site for any
improvements we make ;-)
Curious if you've experimented with changing the mergeFactor to a higher
value, such as 25, and what happens if you set soft-auto-commits to
something lower like 15 seconds? Also, make s
Recently upgraded to 4.3.1 but this problem has persisted for a while now ...
I'm using the following configuration when starting Jetty:
-XX:OnOutOfMemoryError="/home/solr/oom_killer.sh 83 %p"
If an OOM is triggered during Solr web app initialization (such as by
me lowering -Xmx to a value that
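For reference, oom_killer.sh is just a little script along these lines
(a sketch; the port and pid come in as the 83 and %p args above):

#!/bin/bash
# log the OOM and hard-kill the stricken JVM so it doesn't linger as a zombie
SOLR_PORT=$1
SOLR_PID=$2
echo "$(date): OOM on port $SOLR_PORT, killing pid $SOLR_PID" >> /home/solr/oom_killer.log
kill -9 "$SOLR_PID"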
hread.java:722)
Caused by: java.lang.OutOfMemoryError: Java heap space
On Wed, Jun 26, 2013 at 12:27 PM, Timothy Potter wrote:
> Recently upgraded to 4.3.1 but this problem has persisted for a while now ...
>
> I'm using the following configuration when starting Jetty:
>
> -XX:
>
> Not much we can do short of not using Jetty?
>
> That's a pain, I'd just written a nice OOM handler too!
>
>
> On 26 June 2013 20:37, Timothy Potter wrote:
>
>> A little more to this ...
>>
>> Just on chance this was a weird Jetty issue or somet
This is not a problem per se, just want to verify that, as of 4.3.1, we're
not able to specify which server shard splits are created on? From
what I've seen, the new cores for the sub-shards are created on the
leader of the shard being split.
Of course it's easy enough to migrate the new sub-shards to
some extra disks working in parallel during the split
(icing on the cake of course).
Cheers,
Tim
On Wed, Jul 17, 2013 at 10:40 AM, Yonik Seeley wrote:
> On Wed, Jul 17, 2013 at 12:26 PM, Timothy Potter wrote:
>> This is not a problem per se, just want to verify that we're not able
I saw something similar and used an absolute path to my JAR file in
solrconfig.xml vs. a relative path and it resolved the issue for me.
Not elegant but worth trying, at least to rule that out.
Tim
On Mon, Jul 22, 2013 at 7:51 AM, Abeygunawardena, Niran
wrote:
> Hi,
>
> I'm trying to migrate to
A couple of things I've learned along the way ...
I had a similar architecture where we used fairly low numbers for
auto-commits with openSearcher=false. This keeps the tlog to a
reasonable size. You'll need something on the client side to send in
the hard commit request to open a new searcher eve
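(the solrconfig.xml side of that looks roughly like this; the times are
illustrative:)

<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>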
Why was it down? E.g., did it OOM? If so, the recommended approach is to
kill the process on OOM vs. leaving it in the cluster in a zombie
state. I had similar issues when my nodes OOM'd, which is why I ask. That
said, you can get the /clusterstate.json which contains ZK's status of
a node using a request lik
There is but I couldn't get it to work in my environment on Jetty, see:
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201306.mbox/%3CCAJt9Wnib+p_woYODtrSPhF==v8Vx==mDBd_qH=x_knbw-bn...@mail.gmail.com%3E
Let me know if you have any better luck. I had to resort to something
hacky but wa
Curious what the use case is for this? ZooKeeper is not an HTTP
service, so loading it in Jetty by itself doesn't really make sense. I
also think this creates more work for the Solr team especially since
setting up a production ensemble shouldn't take more than a few
minutes once you have the nodes
Hi Matt,
This feature is commonly known as deep paging, and Lucene and Solr have
issues with it ... take a look at
http://solr.pl/en/2011/07/18/deep-paging-problem/ as a potential
starting point using filters to bucketize a result set into sets of
sub result sets.
Cheers,
Tim
On Tue, Jul 23, 2013
Quick behavior check on whether Solr continues to process queries and
index documents during a collection reload?
For example, after I upload new config documents to Zookeeper, I issue
a reload command using the collections API. Of course this propagates
a core reload across all nodes in the colle
Log messages?
On Wed, Jul 24, 2013 at 1:37 AM, Neil Prosser wrote:
> Great. Thanks for your suggestions. I'll go through them and see what I can
> come up with to try and tame my GC pauses. I'll also make sure I upgrade to
> 4.4 before I start. Then at least I know I've got all the latest changes
Apologies if this is not the correct way to request mailing list admin
support but it's pretty clear that wired...@yahoo.com is spamming this
list and should be booted out.
Tim
's the leader.
> 2013-07-24 07:31:42,449 - server04 registers its state as active.
>
> I'm sorry there's so much there. I'm still getting used to what's important
> for people. Both servers were running 4.3.1. I've since upgraded to 4.4.0.
>
> If you need any
only this box in the cluster:
>
> Something like /admin/state which would return
> "down","active","leader","recovering"
>
> I'm not really sure where to begin however. Any ideas?
>
> jim
>
> On Mon, Jul 22, 2013 at 12:52 PM, T
Going over the comments in SOLR-1316, I seem to have lost the
forest for the trees. What is the benefit of using the spellcheck
based suggester over something like the terms component to get
suggestions as the user types?
Maybe it is faster because it builds the in-memory data structure on
comm
1) Depends on your document routing strategy. It sounds like you could
be using the compositeId strategy and if so, there's still a hash
range assigned to each shard, so you can split the big shards into
smaller shards.
2) Since you're replicating in 2 places, when one of your servers
crashes, there
on? In the first case I'd go for TermsComponent
> and the second spell check as an example.
>
> Best
> Erick
>
> On Tue, Jul 30, 2013 at 2:07 PM, Timothy Potter wrote:
>> Going over the comments in SOLR-1316, I seem to have lost the
>> forest for the trees.
I've been thinking about this one too and was curious about using the Solr
Entity support in the DIH to do the import from one DC to another (for the
lost docs). In my mind, one configures the DIH to use the
SolrEntityProcessor with a query to capture the docs in the DC that stayed
online, most lik
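Roughly this shape in data-config.xml (the URL and query are
placeholders):

<dataConfig>
  <document>
    <entity name="recoverDocs" processor="SolrEntityProcessor"
            url="http://surviving-dc-host:8983/solr/collection1"
            query="timestamp_dt:[2013-09-01T00:00:00Z TO *]"
            rows="1000"/>
  </document>
</dataConfig>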
Trying to add some information about core.properties and auto-discovery in
Solr in Action, and I'm at a loss as to what to tell the reader the
purpose of this feature is.
Can anyone point me to any background information about core
auto-discovery? I'm not interested in the technical implementation det
Exactly the insight I was looking for! Thanks Yonik ;-)
On Fri, Sep 20, 2013 at 10:37 AM, Yonik Seeley wrote:
> On Fri, Sep 20, 2013 at 11:56 AM, Timothy Potter
> wrote:
> > Trying to add some information about core.properties and auto-discovery
> in
> > Solr in Action
I have a custom ValueSourceParser that sets up a ZooKeeper Watcher on some
frequently changing metadata that a custom ValueSource depends on.
The basic flow of events is: the VSP watches for metadata changes, which triggers
a refresh of some expensive data that my custom ValueSource uses at query
time. T
cess to the solrcore with fp.req.getCore?
>
> If so, it's easy to get the zk stuff
>
> core.getCoreDescriptor.getCoreContainer.getZkController(.getZkClient).
>
> From memory, so perhaps with some minor misname.
>
> - Mark
>
> On Mar 25, 2013, at 6:03 PM
Hi,
I have a custom ValueSource that I'd like to use as a filter, something
like:
fq={!frange l=1 u=1}MYFUNC(some_field, addl args)
Based on the args passed in and the value in some_field, MYFUNC returns
either 1 or 0.
This works, but it doesn't seem like the results get cached, as subsequent
req
Thanks for the reply, Chris - equals and hashCode are implemented correctly
...
I ended up solving my issue by enabling the PostFilter support in the
frange parser using:
{!frange l=1 u=1 cost=200 cache=false}
This works for me because the queries that use my custom ValueSource are
pretty tightly