dataDir and tlog dir cannot be changed with a core reload.
- Mark
On Sat, Jan 9, 2016 at 1:20 PM Erick Erickson
wrote:
> Please show us exactly what you did, and exactly
> what you saw to say that "does not seem to work".
>
> Best,
> Erick
>
> On Fri, Jan 8, 2016 at 7:47 PM, KNitin wrote:
> >
Not sure I'm onboard with the first proposed solution, but yes, I'd open a
JIRA issue to discuss.
- Mark
On Mon, Jan 11, 2016 at 4:01 AM Konstantin Hollerith
wrote:
> Hi,
>
> I'm using SLF4J MDC to log additional Information in my WebApp. Some of my
> MDC-Parameters even include Line-Breaks.
>
Two of them are sub requests. They have params isShard=true and
distrib=false. The top level user query will not have distrib or isShard
because they default the other way.
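If you want to see what a single core returns on its own, a rough SolrJ sketch
(URL and core name are just placeholders here) would look like:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class SingleCoreQuery {
  public static void main(String[] args) throws Exception {
    // Point at one core/replica directly; distrib=false keeps the query on
    // that core, the same way a shard sub request does.
    HttpSolrClient client =
        new HttpSolrClient("http://localhost:8983/solr/collection1_shard1_replica1");
    SolrQuery q = new SolrQuery("*:*");
    q.set("distrib", false);
    QueryResponse rsp = client.query(q);
    System.out.println("numFound on this core: " + rsp.getResults().getNumFound());
    client.close();
  }
}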
- Mark
On Mon, Jan 11, 2016 at 6:30 AM Syed Mudasseer
wrote:
> Hi,
> I have solr configured on cloud with the following de
You get this when the Overseer is either bogged down or not processing
events generally.
The Overseer is way, way faster at processing events in 5x.
If you search your logs for .Overseer you can see what it's doing. Either
nothing at the time, or bogged down processing state updates probably.
Al
Only INFO level, so I suspect not bad...
If that Overseer closed, another node should have picked up where it left
off. See that in another log?
Generally an Overseer close means a node or cluster restart.
This can cause a lot of DOWN state publishing. If it's a cluster restart, a
lot of those D
Perhaps there is something preventing clean shutdown. Shutdown makes a best
effort attempt to publish DOWN for all the local cores.
Otherwise, yes, it's a little bit annoying, but full state is a combination
of the state entry and whether the live node for that replica exists or not.
- Mark
On W
Have you used jconsole or visualvm to see what it is actually hanging on to
there? Perhaps it is lock files that are not cleaned up or something else?
You might try: find ~/.ivy2 -name "*.lck" -type f -exec rm {} \;
- Mark
On Wed, Sep 16, 2015 at 9:50 AM Susheel Kumar wrote:
> Hi,
>
> Sending
wrote:
> On 9/16/2015 9:32 AM, Mark Miller wrote:
> > Have you used jconsole or visualvm to see what it is actually hanging on
> to
> > there? Perhaps it is lock files that are not cleaned up or something
> else?
> >
> > You might try: find ~/.ivy2 -name "*.lck" -type f -exec rm {} \;
> inside it gets stuck. Let me try to see if jconsole can show something
> meaningful.
>
> Thanks,
> Susheel
>
> On Wed, Sep 16, 2015 at 12:17 PM, Shawn Heisey
> wrote:
>
> > On 9/16/2015 9:32 AM, Mark Miller wrote:
> > > Have you used jconsole or vis
On Wed, Sep 30, 2015 at 10:36 AM Steve Davids wrote:
> Our project built a custom "admin" webapp that we use for various O&M
> activities so I went ahead and added the ability to upload a Zip
> distribution which then uses SolrJ to forward the extracted contents to ZK,
> this package is built and
If it's always when using https as in your examples, perhaps it's SOLR-5776.
- mark
On Mon, Oct 5, 2015 at 10:36 AM Markus Jelsma
wrote:
> Hmmm, i tried that just now but i sometimes get tons of Connection reset
> errors. The tests then end with "There are still nodes recoverying - waited
> for
Not sure what that means :)
SOLR-5776 would not happen all the time, but it would happen too frequently.
It also wouldn't depend on CPU power, core count, or RAM :)
What you want to check is whether you see failures without https.
- mark
On Mon, Oct 5, 2015 at 2:16 PM Markus Jelsma
wrote:
> Hi - no, i don't think so,
I'd make two guesses:
Looks like you are using JRockit? I don't think that is common or well
tested at this point.
There are a billion or so bug fixes from 4.6.1 to 5.3.2. Given the pace of
SolrCloud, you are dealing with something fairly ancient and so it will be
harder to find help with older iss
Best tool for this job really depends on your needs, but one option:
I have a dev tool for Solr log analysis:
https://github.com/markrmiller/SolrLogReader
If you use the -o option, it will spill out just the queries to a file with
qtimes.
- Mark
On Wed, Sep 23, 2015 at 8:16 PM Tarala, Magesh w
memory. For memory mapped index files the remaining 24G or what is
> available off of it should be available. Looking at the lsof output the
> memory mapped files were around 10G.
>
> Thanks.
>
>
> On 10/5/15 5:41 PM, Mark Miller wrote:
> > I'd make two guess:
> >
>
>
> On 10/6/15 1:07 PM, Mark Miller wrote:
> > That amount of RAM can easily be eaten up depending on your sorting,
> > faceting, data.
> >
> > Do you have gc logging enabled? That should describe what is happening
> with
> > the heap.
> >
> > - Mar
Your Lucene and Solr versions must match.
On Thu, Oct 8, 2015 at 4:02 PM Steve wrote:
> I've loaded the Films data into a 4 node cluster. Indexing went well, but
> when I issue a query, I get this:
>
> "error": {
> "msg": "org.apache.solr.client.solrj.SolrServerException: No live
> SolrServ
openSearcher is a valid param for a commit, whatever API you are using
to issue it.
- Mark
On Wed, Nov 11, 2015 at 12:32 PM Mikhail Khludnev <
mkhlud...@griddynamics.com> wrote:
> Does waitSearcher=false works like you need?
>
> On Wed, Nov 11, 2015 at 1:34 PM, Sathyakumar Seshachalam <
> sat
You can pass arbitrary params with SolrJ. The API usage is just a little
more arcane.
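For example, to issue a commit with openSearcher=false, something along these
lines should do it (untested sketch - the URL and collection name are just
placeholders):

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.UpdateRequest;

public class CommitWithParams {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/collection1");
    UpdateRequest commit = new UpdateRequest();
    // Mark the request as a commit, then attach any extra params you like.
    commit.setAction(AbstractUpdateRequest.ACTION.COMMIT,
                     true /* waitFlush */, false /* waitSearcher */);
    commit.setParam("openSearcher", "false");
    commit.process(client);
    client.close();
  }
}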
- Mark
On Wed, Nov 11, 2015 at 11:33 PM Sathyakumar Seshachalam <
sathyakumar_seshacha...@trimble.com> wrote:
> I intend to use SolrJ. I only saw the below overloaded commit method in
> documentation (http://lu
If you see "WARNING: too many searchers on deck" or something like that in
the logs, that could cause this behavior and would indicate you are opening
searchers faster than Solr can keep up.
- Mark
On Tue, Nov 17, 2015 at 2:05 PM Erick Erickson
wrote:
> That's what was behind my earlier comment
It looks like he has waitSearcher set to false, so all the time should be in
the commit itself. So that amount of time does sound odd.
I would certainly change those commit settings though. I would not use
maxDocs, that is an ugly way to control this. And one second is much too
aggressive as Erick says.
If you w
Perhaps try quotes around the url you are providing to curl. It's not
complaining about the HTTP method - Solr has historically always taken
simple GETs over HTTP - for good or bad, you pretty much only POST
documents / updates.
It's saying the name param is required and not being found and since
I’ll be working on this at some point:
https://issues.apache.org/jira/browse/SOLR-6237
- Mark
http://about.me/markrmiller
> On Feb 25, 2015, at 2:12 AM, longsan wrote:
>
> We used HDFS as our Solr index storage and we really have a heavy update
> load. We had met much problems with current le
If you google "replication can cause index corruption" there are two JIRA
issues that are the most likely cause of corruption in a SolrCloud env.
- Mark
> On Mar 5, 2015, at 2:20 PM, Garth Grimm
> wrote:
>
> For updates, the document will always get routed to the leader of the
> appropriate s
Interesting bug.
First there is the already closed transaction log. That by itself deserves
a look. I'm not even positive we should be replaying the log when
reconnecting after a ZK disconnect, but even if we do, this should never
happen.
Beyond that there seems to be some race. Because of the log tro
Doesn't ConcurrentUpdateSolrServer take an HttpClient in one of its
constructors?
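Something like this should let you hand it your own client (sketch only - the
URL and the queue/thread sizes are made up):

import org.apache.http.client.HttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer;

public class CusExample {
  public static void main(String[] args) {
    // Build (or reuse) your own HttpClient and pass it to the constructor.
    HttpClient httpClient = HttpClientBuilder.create().build();
    ConcurrentUpdateSolrServer server = new ConcurrentUpdateSolrServer(
        "http://localhost:8983/solr/collection1",
        httpClient,
        10,   // internal request queue size
        4);   // number of background sender threads
    // ... add documents, then:
    server.shutdown();
  }
}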
- Mark
On Sun, Mar 22, 2015 at 3:40 PM Ramkumar R. Aiyengar <
andyetitmo...@gmail.com> wrote:
> Not a direct answer, but Anshum just created this..
>
> https://issues.apache.org/jira/browse/SOLR-7275
> On 20 Mar
Hmm...can you file a JIRA issue with this info?
- Mark
On Fri, Mar 27, 2015 at 6:09 PM Joseph Obernberger
wrote:
> I just started up a two shard cluster on two machines using HDFS. When I
> started to index documents, the log shows errors like this. They repeat
> when I execute searches. All s
If copies of the index are not eventually cleaned up, I'd file a JIRA to
address the issue. Those directories should be removed over time. At times
there will have to be a couple around at the same time and others may take
a while to clean up.
- Mark
On Tue, Apr 28, 2015 at 3:27 AM Ramkumar R. Ai
A bug fix version difference probably won't matter. It's best to use the
same version everyone else uses and the one our tests use, but it's very
likely 3.4.5 will work without a hitch.
- Mark
On Tue, May 5, 2015 at 9:09 AM shacky wrote:
> Hi.
>
> I read on
> https://cwiki.apache.org/confluence
File a JIRA issue please. It looks like that OOM Exception is getting wrapped
in a RuntimeException. Bug.
- Mark
On Wed, Jun 3, 2015 at 2:20 AM Clemens Wyss DEV
wrote:
> Context: Lucene 5.1, Java 8 on debian. 24G of RAM whereof 16G available
> for Solr.
>
> I am seeing the following OOMs:
> ERROR -
We will have to find a way to deal with this long term. Browsing the code
I can see a variety of places where problematic exception handling has been
introduced since this was all fixed.
- Mark
On Wed, Jun 3, 2015 at 8:19 AM Mark Miller wrote:
> File a JIRA issue please. That OOM Exception
I didn't really follow this issue - what was the motivation for the rewrite?
Is it entirely under: "new code should be quite a bit easier to work on for
programmer
types" or are there other reasons as well?
- Mark
On Mon, Jun 15, 2015 at 10:40 AM Erick Erickson
wrote:
> Gaaah, that'll teach me
> Case in point, I've got a working patch that I'll release at some
> point soon that gives us a "collections" version of the "core admin"
> pane. I'd love to add HDFS support to the UI if there were APIs worth
> exposing (I haven't dug into HDFS support
SolrCloud does not really support any form of rollback.
On Mon, Jun 15, 2015 at 5:05 PM Aurélien MAZOYER <
aurelien.mazo...@francelabs.com> wrote:
> Hi all,
>
> Is DeletionPolicy customization still available in Solr Cloud? Is there
> a way to rollback to a previous commit point in Solr Cloud tha
I think there is some better classpath isolation options in the works for
Hadoop. As it is, there is some harmonization that has to be done depending
on versions used, and it can get tricky.
- Mark
On Wed, Jun 17, 2015 at 9:52 AM Erick Erickson
wrote:
> For sure there are a few rough edges here
Could you file a JIRA issue so that this report does not get lost?
- Mark
On Tue, Nov 15, 2016 at 10:49 AM Solr User wrote:
> For those interested, I ended up bundling the customized ACL provider with
> the solr.war. I could not stomach looking at the stack trace in the logs.
>
> On Mon, Nov 7
Look at the Overseer host and see if there are any relevant logs for
autoAddReplicas.
- Mark
On Mon, Oct 24, 2016 at 3:01 PM Chetas Joshi wrote:
> Hello,
>
> I have the following configuration for the Solr cloud and a Solr collection
> This is Solr on HDFS and Solr version I am using is 5.5.0
>
That is probably partly because of hdfs cache key unmapping. I think I
improved that in some issue at some point.
We really want to wait by default for a long time though - even 10 minutes
or more. If you have tons of SolrCores, each of them has to be torn down,
each of them might commit on close,
That already happens. The ZK client itself will reconnect when it can and
trigger everything to be setup like when the cluster first starts up,
including a live node and leader election, etc.
You may have hit a bug or something else missing from this conversation,
but reconnecting after losing the
On Mar 7, 2014, at 3:11 AM, Avishai Ish-Shalom wrote:
> SOLR-5216
Yes, that is the one.
- Mark
http://about.me/markrmiller
+1 to the idea, I love bug fix releases (which is why I volunteered to do the
last couple).
The main limiting factor is a volunteer to do it. Users requesting a specific
bug fix release is probably a good way to prompt volunteers though.
--
Mark Miller
about.me/markrmiller
On March 12, 2014
open, though it looks like stuff has been committed:
https://issues.apache.org/jira/browse/SOLR-5130
--
Mark Miller
about.me/markrmiller
On March 12, 2014 at 10:40:15 AM, Mike Hugo (m...@piragua.com) wrote:
After a collection has been created in SolrCloud, is there a way to modify
the
before we had a collections api.
The collections API is much better if you want multiple collections and it’s
the future.
--
Mark Miller
about.me/markrmiller
On March 20, 2014 at 10:24:18 AM, Ugo Matrangolo (ugo.matrang...@gmail.com)
wrote:
Hi,
I would like some advice about the best way to
Recently fixed in Lucene - should be able to find the issue if you dig a little.
--
Mark Miller
about.me/markrmiller
On March 21, 2014 at 10:25:56 AM, Greg Walters (greg.walt...@answers.com) wrote:
I've seen this on 4.6.
Thanks,
Greg
On Mar 20, 2014, at 11:58 PM, Shalin Shekhar M
On March 21, 2014 at 1:46:13 PM, Tim Potter (tim.pot...@lucidworks.com) wrote:
We've seen instances where you end up restarting the overseer node each time as
you restart the cluster, which causes all kinds of craziness.
That would be a great test to add to the suite.
--
Mark M
FWIW, you can use merge like this if you run on HDFS rather than local
filesystem.
--
Mark
On March 26, 2014 at 12:34:39 PM, Shawn Heisey (s...@elyograg.org) wrote:
On 3/26/2014 3:14 AM, rulinma wrote:
> MergingSolrIndexes:
>
> http://192.168.22.32:8080/solr/admin/cores?action=mergeinde
Nice, Congrats!
--
Mark Miller
about.me/markrmiller
On March 27, 2014 at 11:17:49 AM, Trey Grainger (solrt...@gmail.com) wrote:
I'm excited to announce the final print release of *Solr in Action*, the
newest Solr book by Manning publications covering through Solr 4.7 (the
current ve
I'm looking into a hang as well - not sure if it involves searching as well,
but it may. Can you file a JIRA issue - let's track it down.
- Mark
> On Mar 28, 2014, at 8:07 PM, Rafał Kuć wrote:
>
> Hello!
>
> I have an issue with one of the SolrCloud deployments and I wanted to
> ask maybe so
We don’t currently retry, but I don’t think it would hurt much if we did - at
least briefly.
If you want to file a JIRA issue, that would be the best way to get it in a
future release.
--
Mark Miller
about.me/markrmiller
On March 28, 2014 at 5:40:47 PM, Michael Della Bitta
(michael.della.bi
We have to fix that then.
--
Mark Miller
about.me/markrmiller
On April 15, 2014 at 12:20:03 PM, Rich Mayfield (mayfield.r...@gmail.com) wrote:
I see something similar where, given ~1000 shards, both nodes spend a LOT of
time sorting through the leader election process. Roughly 30 minutes
Inline responses below.
--
Mark Miller
about.me/markrmiller
On April 15, 2014 at 2:12:31 PM, Peter Keegan (peterlkee...@gmail.com) wrote:
I have a SolrCloud index, 1 shard, with a leader and one replica, and 3
ZKs. The Solr indexes are behind a load balancer. There is one
CloudSolrServer
bq. before any of Solr gets to do its shutdown sequence
Yeah, this is kind of an open issue. There might be a JIRA for it, but I cannot
remember. What we really need is an explicit shutdown call that can be made
before stopping jetty so that it’s done gracefully.
--
Mark Miller
about.me
What version are you testing? Thought we had addressed this.
--
Mark Miller
about.me/markrmiller
On April 16, 2014 at 6:02:09 PM, Jessica Mallet (mewmewb...@gmail.com) wrote:
Hi Furkan,
Thanks for the reply. I understand the intent. However, in the case I
described, the follower is blocked
Odd - might be helpful if you can share your solrconfig.xml being used.
--
Mark Miller
about.me/markrmiller
On April 17, 2014 at 12:18:37 PM, Brett Hoerner (br...@bretthoerner.com) wrote:
I'm doing HDFS input and output in my job, with the following:
hadoop jar /mnt/faas-solr.jar \
l the merge is complete. If
writes are allowed, corruption may occur on the merged index.”
Doesn’t sound right to me at all.
--
Mark Miller
about.me/markrmiller
On April 22, 2014 at 10:38:08 AM, Brett Hoerner (br...@bretthoerner.com) wrote:
I think I'm just misunderstanding the use of go-
System.currentTimeMillis can jump around due to NTP, etc. If you are trying to
count elapsed time, you don’t want to use a method that can jump around with
the results.
--
Mark Miller
about.me/markrmiller
On April 26, 2014 at 8:58:20 AM, YouPeng Yang (yypvsxf19870...@gmail.com) wrote:
Hi
My answer remains the same. I guess if you want more precise terminology,
nanoTime will generally be monotonic and currentTimeMillis will not be, due to
things like NTP, etc. You want monotonicity for measuring elapsed times.
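A minimal example of what I mean for elapsed time:

public class ElapsedTime {
  public static void main(String[] args) throws InterruptedException {
    // nanoTime is (generally) monotonic, so it is safe for elapsed-time
    // measurement; currentTimeMillis can jump when NTP adjusts the wall clock.
    long start = System.nanoTime();
    Thread.sleep(250); // stand-in for the real work being timed
    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
    System.out.println("elapsed: " + elapsedMs + " ms");
  }
}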
--
Mark Miller
about.me/markrmiller
On April 26, 2014 at 11:25:16 AM
Have you tried a comma-separated list or are you going by documentation? It
should work.
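For illustration of the format, the same comma-separated string is what
CloudSolrServer accepts as its zkHost (hosts and chroot below are made up):

import org.apache.solr.client.solrj.impl.CloudSolrServer;

public class ZkHostFormat {
  public static void main(String[] args) {
    // The whole ZooKeeper ensemble goes in one comma-separated zkHost string.
    CloudSolrServer server =
        new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181/solr");
    server.setDefaultCollection("collection1");
    server.shutdown();
  }
}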
--
Mark Miller
about.me/markrmiller
On April 26, 2014 at 1:03:25 PM, Scott Stults
(sstu...@opensourceconnections.com) wrote:
It looks like this only takes a single host as its value, whereas the
zkHost
to even hope for monotonicity.
--
Mark Miller
about.me/markrmiller
On April 26, 2014 at 1:11:14 PM, Walter Underwood (wun...@wunderwood.org) wrote:
NTP works very hard to keep the clock positive monotonic. But nanoTime is
intended for elapsed time measurement anyway, so it is the right choice
What version are you running? This was fixed in a recent release. It can happen
if you hit add core with the defaults on the admin page in older versions.
--
Mark Miller
about.me/markrmiller
On May 1, 2014 at 11:19:54 AM, ryan.cooke (ryan.co...@gmail.com) wrote:
I saw an overseer queue
If you are sure about this, can you file a JIRA issue?
--
Mark Miller
about.me/markrmiller
On May 12, 2014 at 8:50:42 PM, lboutros (boutr...@gmail.com) wrote:
Dear All,
we just finished the migration of a cluster from Solr 4.3.1 to Solr 4.6.1.
With solr 4.3.1 a node was not considered as
The main limit is the 1mb zk node limit. But even that can be raised.
- Mark
> On Jun 6, 2014, at 6:21 AM, Shalin Shekhar Mangar
> wrote:
>
> No, there's no theoretical limit.
>
>
>> On Fri, Jun 6, 2014 at 11:20 AM, ku3ia wrote:
>>
>> Hi all!
>> The question is how many collections I can
We have been waiting for that issue to be finished before thinking too hard
about how it can improve things. There have been a couple ideas (I’ve mostly
wanted it for improving the internal zk mode situation), but no JIRAs yet that
I know of.
--
Mark Miller
about.me/markrmiller
On June 23
other
cases, information is being pulled from zookeeper and recovering nodes are
ignored. If this is the issue I think it is, it should only be an issue when
you directly query recovery node.
The CloudSolrServer client works around this issue as well.
--
Mark Miller
about.me/markrmiller
On July
would have to dig in to really know I think. I would doubt it’s a configuration
issue, but you never know.
--
Mark Miller
about.me/markrmiller
On July 8, 2014 at 9:18:28 AM, Ian Williams (NWIS - Applications Design)
(ian.willi...@wales.nhs.uk) wrote:
Hi
I'm encountering a surprisingly
I think that’s pretty much a search time param, though it might end up being used
on the update side as well. In any case, I know it doesn’t affect commit or
optimize.
Also, to my knowledge, SolrCloud optimize support was never explicitly added or
tested.
--
Mark Miller
about.me/markrmiller
Looks like you probably have to raise the http client connection pool limits to
handle that kind of load currently.
They are specified as top level config in solr.xml:
maxUpdateConnections
maxUpdateConnectionsPerHost
--
Mark Miller
about.me/markrmiller
On July 21, 2014 at 7:14:59 PM, Darren
Some good info on unique IDs for Lucene / Solr can be found here:
http://blog.mikemccandless.com/2014/05/choosing-fast-unique-identifier-uuid.html
--
Mark Miller
about.me/markrmiller
On July 24, 2014 at 9:51:28 PM, He haobo (haob...@gmail.com) wrote:
Hi,
In our Solr collection (Solr 4.8
That is good testing :) We should track down what is up with that 30%. Might
open a JIRA with some logs.
It can help if you restart the overseer node last.
There are likely some improvements around this post 4.6.
--
Mark Miller
about.me/markrmiller
On August 13, 2014 at 12:05:27 PM, KNitin
There was a reload bug in SolrCloud that was fixed in 4.4 -
https://issues.apache.org/jira/browse/SOLR-4805
Mark
On Oct 17, 2013, at 7:18 AM, Grzegorz Sobczyk wrote:
> Sorry for previous spam (something eat my message)
>
> I have the same problem but with reload action
> ENV:
> - 3x Solr 4.2.
I would try the 4.6 builds and report back your results.
I don't know that Chris is seeing the same thing that has come up in the
past.
In my testing, I'm not having issues with the latest 4.6. The more people
that try it out, the more we will know.
- Mark
On Tue, Oct 22, 2013 at 6:31 AM, mich
I filed https://issues.apache.org/jira/browse/SOLR-5380 and just committed a
fix.
- Mark
On Oct 23, 2013, at 11:15 AM, Shawn Heisey wrote:
> On 10/23/2013 3:59 AM, Thomas Egense wrote:
>> Using cloudSolrServer.setDefaultCollection(collectionId) does not work as
>> intended for an alias spannin
October 2013, Apache Solr™ 4.5.1 available
The Lucene PMC is pleased to announce the release of Apache Solr 4.5.1
Solr is the popular, blazing fast, open source NoSQL search platform
from the Apache Lucene project. Its major features include powerful
Just to add to the “use jetty for Solr” argument - Solr 5.0 will no longer
consider itself a webapp and will treat the fact that it uses Jetty as an
implementation detail.
We won’t necessarily make it impossible to use a different container, but the
project won’t condone it or support it and
Can you file a JIRA issue?
- Mark
On Oct 25, 2013, at 12:52 PM, marotosg wrote:
> Right, but what if you have many properties being shared across multiple
> cores.
> That means you have to copy same properties in each individual
> core.properties.
>
> Is not this redundant data.
>
> My main
I’ll look into it. I ran the command to create the tag, but perhaps it did not
‘take’ :)
- Mark
On Oct 25, 2013, at 3:56 PM, André Widhani wrote:
> Hi,
>
> shouldn't there be a tag for the 4.5.1 release under
> http://svn.apache.org/repos/asf/lucene/dev/tags/ ?
>
> Or am I looking at the wr
I had created it in a ‘retired’ location. The tag should be in the correct
spot now.
Thanks!
- Mark
On Oct 25, 2013, at 4:04 PM, Mark Miller wrote:
> I’ll look into it. I ran the command to create the tag, but perhaps it did
> not ‘take’ :)
>
> - Mark
>
> On Oct 25
On Oct 24, 2013, at 6:37 AM, michael.boom wrote:
> Any idea what is happening and why the core on which i wanted the
> optimization to happen, got no optimization and instead another shard got
> optimized, on both servers?
Sounds like a bug we should fix. If you don’t specify distrib=false, it
gly) propose we take it a step further and drop Java :)! I'm getting
> tired of trying to scale GC'ing JVMs!
>
> Tim
>
> On 25/10/13 09:02 AM, Mark Miller wrote:
>> Just to add to the “use jetty for Solr” argument - Solr 5.0 will no longer
>> consider itself a weba
Has someone filed a JIRA issue with the current known info yet?
- Mark
> On Oct 29, 2013, at 12:36 AM, Sai Gadde wrote:
>
> Hi Michael,
>
> I downgraded to Solr 4.4.0 and this issue is gone. No additional settings
> or tweaks are done.
>
> This is not a fix or solution I guess but, in our cas
Which version of solr are you using? Regardless of your env, this is a fail
safe that you should not hit.
- Mark
> On Nov 5, 2013, at 8:33 AM, Henrik Ossipoff Hansen
> wrote:
>
> I previously made a post on this, but have since narrowed down the issue and
> am now giving this another try, w
Can you isolate any exceptions that happened just before that exception
started repeating?
- Mark
> On Nov 7, 2013, at 9:09 AM, Eric Bus wrote:
>
> Hi,
>
> I'm having a problem with one of my shards. Since yesterday, SOLR keeps
> repeating the same exception over and over for this shard.
>
StateReader ZooKeeper watch triggered, but Solr cannot talk
> to ZK
> 03:07:41 WARN RecoveryStrategy Stopping recovery for
> zkNodeName=solr04.cd-et.com:8080_solr_auto_suggest_shard1_replica2core=auto_suggest_shard1_replica2
>
> After this, the cluster state seems to be fine, and I
RE: the example folder
It’s something I’ve been pushing towards moving away from for a long time - see
https://issues.apache.org/jira/browse/SOLR-3619 Rename 'example' dir to
'server' and pull examples into an 'examples’ directory
Part of a push I’ve been on to own the Container level (people a
Try Solr 4.5.1.
https://issues.apache.org/jira/browse/SOLR-5306 Extra collection creation
parameters like collection.configName are not being respected.
- Mark
On Nov 13, 2013, at 2:24 PM, Christopher Gross wrote:
> Running Apache Solr 4.5 on Tomcat 7.0.29, Java 1.6_30. 3 SolrCloud nodes
>
We are moving away from pre-defining SolrCores for SolrCloud. The correct
approach would be to use the Collections API - then it is quite simple to
change the number of shards for each collection you create.
Hopefully our examples will move to doing this before long.
- Mark
On Nov 15, 2013, a
You are asking for 5000 docs right? And that’s forcing us to look up 5000
external to internal ids. I think this always had a cost, but it’s obviously
worse if you ask for a ton of results. I don’t think single node has to do
this? And if we had like Searcher leases or something (we will eventua
There appear to be plugins to do this, but since Apache hosts the wiki infra
for us, we don’t get to toss in any plugins we want unfortunately.
- Mark
On Nov 18, 2013, at 8:16 AM, Uwe Reh wrote:
> I'd like to read the guide as e-paper. Is there a way to obtain the document
> in the Format epu
We should have a list command in the collections api. I can help if someone
wants to make a JIRA issue.
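In the meantime, the ZK route mentioned below is only a few lines with the
plain ZooKeeper client (the host/port here is just an example):

import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class ListCollections {
  public static void main(String[] args) throws Exception {
    // Each collection has a child node under /collections, so listing the
    // children of that node gives you the collection names.
    ZooKeeper zk = new ZooKeeper("localhost:2181", 15000, event -> {});
    List<String> collections = zk.getChildren("/collections", false);
    for (String c : collections) {
      System.out.println(c);
    }
    zk.close();
  }
}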
- Mark
On Nov 18, 2013, at 2:11 PM, Anirudha Jadhav wrote:
> you can use the following 2 ways
>
> 1. ZK client API
>you could just do a get_children on the zk node
> /collections/ to ge
You shouldn’t be configuring the replication handler if you are using solrcloud.
- Mark
On Nov 18, 2013, at 3:51 PM, Beale, Jim (US-KOP) wrote:
> Thanks Michael,
>
> I am having a terrible time getting this non-sharded index up. Everything I
> try leads to a dead-end.
>
> http://10.0.15.44:
4.6 no longer uses XML to send requests between nodes. It’s probably worth
trying it and seeing if there is still a problem. Here is the RC we are voting
on today:
http://people.apache.org/~simonw/staging_area/lucene-solr-4.6.0-RC4-rev1543363/
Otherwise, I do plan on looking into this issue soo
Yeah, this is kind of like one of many little features that we have just not
gotten to yet. I’ve always planned for a param that lets you say how many
replicas an update must be verified on before responding success. Seems to make
sense to fail that type of request early if you notice there are
I’d recommend you start with the upcoming 4.6 release. Should be out this week
or next.
- Mark
On Nov 19, 2013, at 8:18 AM, adfel70 wrote:
> Hi, we plan to establish an ensemble of solr with zookeeper.
> We gonna have 6 solr servers with 2 instances on each server, also we'll
> have 6 shards
, Timothy Potter wrote:
> Your thinking is always one step ahead of me! I'll file the JIRA
>
> Thanks.
> Tim
>
>
> On Tue, Nov 19, 2013 at 10:38 AM, Mark Miller wrote:
>
>> Yeah, this is kind of like one of many little features that we have just
>>
On Nov 19, 2013, at 2:24 PM, Timothy Potter wrote:
> Good questions ... From my understanding, queries will work if Zk goes down
> but writes do not work w/o Zookeeper. This works because the clusterstate
> is cached on each node so Zookeeper doesn't participate directly in queries
> and indexin
There might be a JIRA issue out there about replication not cleaning up on all
fails - e.g. on startup or something - kind of rings a bell…if so, it will be
addressed eventually.
Otherwise, you might have two for a bit just due to multiple searchers being
around at once for a while or something
Feel free to file a JIRA issue with the changes you think make sense.
- Mark
On Nov 20, 2013, at 4:21 PM, Eugen Paraschiv wrote:
> Hi,
> Quick question about the HttpSolrServer implementation - I would like to
> extend some of the functionality of this class - but when I extend it - I'm
> havin
Yes, more details…
Solr version, which garbage collector, how does heap usage look, cpu, etc.
- Mark
On Nov 21, 2013, at 6:46 PM, Erick Erickson wrote:
> How real time is NRT? In particular, what are you commit settings?
>
> And can you characterize "periodic slowness"? Queries that usually
>
SolrCloud does not use commits for update acceptance promises.
The idea is, if you get a success from the update, it’s in the system, commit
or not.
Soft Commits are used for visibility only.
Standard Hard Commits are used essentially for internal purposes and should be
done via auto commit ge
If you want this promise and complete control, you pretty much need to do a doc
per request and many parallel requests for speed.
The bulk and streaming methods of adding documents do not have a good fine
grained error reporting strategy yet. It’s okay for certain use cases and
especially b