lot.
But seriously, you have a second chance here.
This mostly concerns SolrCloud. That’s why I recommend standalone mode. But
key people know what to do. I know it will happen - but their lives will be
easier if you help.
Lol.
- Mark
On Sat, Nov 30, 2019 at 9:25 PM Mark Miller wrote:
>
you
don’t need to.
Mark
On Sat, Nov 30, 2019 at 7:05 PM Dave wrote:
> I’m young here I think, not even 40 and only been using solr since like
> 2008 or so, so like 1.4 give or take. But I know a really good therapist if
> you want to talk about it.
>
> > On Nov 30, 2019, at 6:
Now I have sacrificed to give you a new chance. A little for my community.
It was my community. But it was mostly for me. The developer I started as
would kick my ass today. Circumstances and luck have brought money to our
project. And it has corrupted our process, our community, and our code.
In
It’s going to haunt me if I don’t bring up Hossman. I don’t feel I have to
because who doesn’t know him.
He is a treasure that doesn’t spend much time on SolrCloud and has checked
out of leadership for the large part for reasons I won’t argue with.
Why doesn’t he do much with SolrCloud in a real
I’m including this response to a private email because it’s not something
I’ve brought up and I also think it’s a critical note:
“Yes. That is our biggest advantage. Being Apache. Almost no one seems to
be employed to help other contributors get their work in at the right
level, and all the money
introduced at
> some time. Notwithstanding, I do think that the project needs to be more
> open with community commits. The community and open-sourceness of Solr is
> what I used to love over those of ElasticSearch's.
>
> Anyways, keep rocking! You have already left your footpr
The people I have identified that I have the most faith in to lead the
fixing of Solr are Ishan, Noble and David. I encourage you all to look at
and follow and join in their leadership.
You can do this.
Mark
--
- Mark
http://about.me/markrmiller
Now one company thinks I’m after them because they were the main source of
the jokes.
Companies is not a typo.
If you are using Solr to make or save tons of money or run your business
and you employ developers, please include yourself in this list.
You are taking and in my opinion Solr is goin
y I found nothing in solr cloud worth changing from standalone
> for, and just added more complications, more servers, and required becoming
> an expert/knowledgeable in zoo keeper, id rather spend my time developing
> than becoming a systems administrator
>
> On Wed, Nov 27, 2019 at 3
This is your cue to come and make your jokes with your name attached. I’m
sure the Solr users will appreciate them more than I do. I can’t laugh at
this situation because I take production code seriously.
--
- Mark
http://about.me/markrmiller
And if you are a developer, enjoy that Gradle build! It was the highlight
of my year.
On Wed, Nov 27, 2019 at 10:00 AM Mark Miller wrote:
> If you have a SolrCloud installation that is somehow working for you,
> personally I would never upgrade. The software is getting progressively
to work on it in any real fashion since 2012. I’m
sorry I couldn’t help improve the situation for you.
Take it for what it’s worth. To some, not much I’m sure.
Mark Miller
--
- Mark
http://about.me/markrmiller
Hook up a profiler to the overseer and see what it's doing, file a JIRA and
note the hotspots or what methods appear to be hanging out.
On Tue, Sep 3, 2019 at 1:15 PM Andrew Kettmann
wrote:
>
> > You’re going to want to start by having more than 3gb for memory in my
> opinion but the rest of you
relying on legacy cloud mode.
>
> I think I can figure out where the data is being stored for an existing
> (empty) collection, shut that down, swap in the new files, and reload.
>
> But I’m wondering if that’s really the best (or even sane) approach.
>
> Thanks,
>
> —
You create MiniSolrCloudCluster with a base directory and then each Jetty
instance created gets a SolrHome in a subfolder called node{i}. So if
legacyCloud=true you can just preconfigure a core and index under the right
node{i} subfolder. legacyCloud=true should not even exist anymore though,
so th
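As a rough sketch of the layout described above (everything other than the node{i} subfolders, solr.xml, and core.properties is illustrative, including the core name):

```
<baseDir>/
  node1/
    solr.xml
    mycore/              <- hypothetical preconfigured core
      core.properties
      data/
        index/           <- drop the prebuilt index here (legacyCloud=true only)
  node2/
    ...
```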
Yeah, basically ConcurrentUpdateSolrClient is a shortcut to getting
multi-threaded bulk API updates out of the single-threaded, single update API.
The downsides to this are: it is not cloud aware - you have to point it at
a server, you have to add special code to see if there are any errors, you
do
A soft commit does not control merging. The IndexWriter controls merging
and hard commits go through the IndexWriter. A soft commit tells Solr to
try and open a new SolrIndexSearcher with the latest view of the index. It
does this with a mix of using the on disk index and talking to the
IndexWriter
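For context, the two commit types described above are configured separately in solrconfig.xml. A typical shape (the interval values here are illustrative, not recommendations):

```xml
<!-- hard commit: goes through the IndexWriter and flushes to disk -->
<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<!-- soft commit: only opens a new SolrIndexSearcher for visibility -->
<autoSoftCommit>
  <maxTime>10000</maxTime>
</autoSoftCommit>
```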
It's been a while since I've been in this deeply, but it should be
something like:
sendUpdateOnlyToShardLeaders will select the leaders for each shard as the
load balanced targets for update. The updates may not go to the *right*
leader, but only the leaders will be chosen, followers (non leader
r
Yeah, the project should never use built in serialization. I'd file a JIRA
issue. We should remove this when we can.
- Mark
On Sun, May 6, 2018 at 9:39 PM Will Currie wrote:
> Premise: During an upgrade I should be able to run a 7.3 pull replica
> against a 7.2 tlog leader. Or vice versa.
>
> M
That already happens. The ZK client itself will reconnect when it can and
trigger everything to be set up like when the cluster first starts up,
including a live node and leader election, etc.
You may have hit a bug or something else missing from this conversation,
but reconnecting after losing the
That is probably partly because of hdfs cache key unmapping. I think I
improved that in some issue at some point.
We really want to wait by default for a long time though - even 10 minutes
or more. If you have tons of SolrCores, each of them has to be torn down,
each of them might commit on close,
Look at the Overseer host and see if there are any relevant logs for
autoAddReplicas.
- Mark
On Mon, Oct 24, 2016 at 3:01 PM Chetas Joshi wrote:
> Hello,
>
> I have the following configuration for the Solr cloud and a Solr collection
> This is Solr on HDFS and Solr version I am using is 5.5.0
>
Could you file a JIRA issue so that this report does not get lost?
- Mark
On Tue, Nov 15, 2016 at 10:49 AM Solr User wrote:
> For those interested, I ended up bundling the customized ACL provider with
> the solr.war. I could not stomach looking at the stack trace in the logs.
>
> On Mon, Nov 7
Only INFO level, so I suspect not bad...
If that Overseer closed, another node should have picked up where it left
off. See that in another log?
Generally an Overseer close means a node or cluster restart.
This can cause a lot of DOWN state publishing. If it's a cluster restart, a
lot of those D
You get this when the Overseer is either bogged down or not processing
events generally.
The Overseer is way, way faster at processing events in 5x.
If you search your logs for .Overseer you can see what it's doing. Either
nothing at the time, or bogged down processing state updates probably.
Al
Two of them are sub requests. They have params isShard=true and
distrib=false. The top level user query will not have distrib or isShard
because they default the other way.
- Mark
On Mon, Jan 11, 2016 at 6:30 AM Syed Mudasseer
wrote:
> Hi,
> I have solr configured on cloud with the following de
Not sure I'm onboard with the first proposed solution, but yes, I'd open a
JIRA issue to discuss.
- Mark
On Mon, Jan 11, 2016 at 4:01 AM Konstantin Hollerith
wrote:
> Hi,
>
> I'm using SLF4J MDC to log additional Information in my WebApp. Some of my
> MDC-Parameters even include Line-Breaks.
>
dataDir and tlog dir cannot be changed with a core reload.
- Mark
On Sat, Jan 9, 2016 at 1:20 PM Erick Erickson
wrote:
> Please show us exactly what you did. and exactly
> what you saw to say that "does not seem to work".
>
> Best,
> Erick
>
> On Fri, Jan 8, 2016 at 7:47 PM, KNitin wrote:
> >
It looks like he has waitSearcher set to false, so all the time should be in
the commit itself. That amount of time does sound odd.
I would certainly change those commit settings though. I would not use
maxDocs, that is an ugly way to control this. And one second is much too
aggressive as Erick says.
If you w
If you see "WARNING: too many searchers on deck" or something like that in
the logs, that could cause this behavior and would indicate you are opening
searchers faster than Solr can keep up.
- Mark
On Tue, Nov 17, 2015 at 2:05 PM Erick Erickson
wrote:
> That's what was behind my earlier comment
You can pass arbitrary params with Solrj. The API usage is just a little
more arcane.
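A sketch of that more arcane SolrJ route, assuming an existing SolrClient instance named client (the client variable, parameter values, and target collection name are all hypothetical):

```java
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

// Build arbitrary params by hand instead of using a convenience method.
ModifiableSolrParams params = new ModifiableSolrParams();
params.set("commit", "true");
params.set("openSearcher", "false");

// Point a generic request at the update handler and send it.
QueryRequest req = new QueryRequest(params);
req.setPath("/update");
client.request(req, "collection1");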
- Mark
On Wed, Nov 11, 2015 at 11:33 PM Sathyakumar Seshachalam <
sathyakumar_seshacha...@trimble.com> wrote:
> I intend to use SolrJ. I only saw the below overloaded commit method in
> documentation (http://lu
openSearcher is a valid param for a commit whatever the api you are using
to issue it.
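Over plain HTTP that might look like the following (host, port, and collection name are placeholders); the quotes keep the shell from splitting the URL at the '&':

```shell
curl 'http://localhost:8983/solr/collection1/update?commit=true&openSearcher=false'
```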
- Mark
On Wed, Nov 11, 2015 at 12:32 PM Mikhail Khludnev <
mkhlud...@griddynamics.com> wrote:
> Does waitSearcher=false works like you need?
>
> On Wed, Nov 11, 2015 at 1:34 PM, Sathyakumar Seshachalam <
> sat
Your Lucene and Solr versions must match.
On Thu, Oct 8, 2015 at 4:02 PM Steve wrote:
> I've loaded the Films data into a 4 node cluster. Indexing went well, but
> when I issue a query, I get this:
>
> "error": {
> "msg": "org.apache.solr.client.solrj.SolrServerException: No live
> SolrServ
>
> On 10/6/15 1:07 PM, Mark Miller wrote:
> > That amount of RAM can easily be eaten up depending on your sorting,
> > faceting, data.
> >
> > Do you have gc logging enabled? That should describe what is happening
> with
> > the heap.
> >
> > - Mar
memory. For memory mapped index files the remaining 24G or what is
> available off of it should be available. Looking at the lsof output the
> memory mapped files were around 10G.
>
> Thanks.
>
>
> On 10/5/15 5:41 PM, Mark Miller wrote:
> > I'd make two guess:
> >
>
Best tool for this job really depends on your needs, but one option:
I have a dev tool for Solr log analysis:
https://github.com/markrmiller/SolrLogReader
If you use the -o option, it will spill out just the queries to a file with
qtimes.
- Mark
On Wed, Sep 23, 2015 at 8:16 PM Tarala, Magesh w
I'd make two guesses:
Looks like you are using JRockit? I don't think that is common or well
tested at this point.
There are a billion or so bug fixes from 4.6.1 to 5.3.2. Given the pace of
SolrCloud, you are dealing with something fairly ancient and so it will be
harder to find help with older iss
Not sure what that means :)
SOLR-5776 would not happen all the time, but too frequently. It also
wouldn't matter the power of CPU, cores or RAM :)
Whether you see failures without https is what you want to check.
- mark
On Mon, Oct 5, 2015 at 2:16 PM Markus Jelsma
wrote:
> Hi - no, i don't think so,
If it's always when using https as in your examples, perhaps it's SOLR-5776.
- mark
On Mon, Oct 5, 2015 at 10:36 AM Markus Jelsma
wrote:
> Hmmm, i tried that just now but i sometimes get tons of Connection reset
> errors. The tests then end with "There are still nodes recoverying - waited
> for
On Wed, Sep 30, 2015 at 10:36 AM Steve Davids wrote:
> Our project built a custom "admin" webapp that we use for various O&M
> activities so I went ahead and added the ability to upload a Zip
> distribution which then uses SolrJ to forward the extracted contents to ZK,
> this package is built and
> inside it stucks. let me try to see if jconsole can show something
> meaningful.
>
> Thanks,
> Susheel
>
> On Wed, Sep 16, 2015 at 12:17 PM, Shawn Heisey
> wrote:
>
> > On 9/16/2015 9:32 AM, Mark Miller wrote:
> > > Have you used jconsole or vis
wrote:
> On 9/16/2015 9:32 AM, Mark Miller wrote:
> > Have you used jconsole or visualvm to see what it is actually hanging on
> to
> > there? Perhaps it is lock files that are not cleaned up or something
> else?
> >
> > You might try: find ~/.ivy2 -name "*.lck"
Have you used jconsole or visualvm to see what it is actually hanging on to
there? Perhaps it is lock files that are not cleaned up or something else?
You might try: find ~/.ivy2 -name "*.lck" -type f -exec rm {} \;
- Mark
On Wed, Sep 16, 2015 at 9:50 AM Susheel Kumar wrote:
> Hi,
>
> Sending
Perhaps there is something preventing clean shutdown. Shutdown makes a best
effort attempt to publish DOWN for all the local cores.
Otherwise, yes, it's a little bit annoying, but full state is a combination
of the state entry and whether the live node for that replica exists or not.
- Mark
On W
I think there is some better classpath isolation options in the works for
Hadoop. As it is, there is some harmonization that has to be done depending
on versions used, and it can get tricky.
- Mark
On Wed, Jun 17, 2015 at 9:52 AM Erick Erickson
wrote:
> For sure there are a few rough edges here
SolrCloud does not really support any form of rollback.
On Mon, Jun 15, 2015 at 5:05 PM Aurélien MAZOYER <
aurelien.mazo...@francelabs.com> wrote:
> Hi all,
>
> Is DeletionPolicy customization still available in Solr Cloud? Is there
> a way to rollback to a previous commit point in Solr Cloud tha
Case in point, I've got a working patch that I'll release at some
> point soon that gives us a "collections" version of the "core admin"
> pane. I'd love to add HDFS support to the UI if there were APIs worth
> exposing (I haven't dug into HDFS support
I didn't really follow this issue - what was the motivation for the rewrite?
Is it entirely under: "new code should be quite a bit easier to work on for
programmer
types" or are there other reasons as well?
- Mark
On Mon, Jun 15, 2015 at 10:40 AM Erick Erickson
wrote:
> Gaaah, that'll teach me
We will have to find a way to deal with this long term. Browsing the code
I can see a variety of places where problem exception handling has been
introduced since this all was fixed.
- Mark
On Wed, Jun 3, 2015 at 8:19 AM Mark Miller wrote:
> File a JIRA issue please. That OOM Exception
File a JIRA issue please. That OOM Exception is getting wrapped in a
RuntimeException it looks. Bug.
- Mark
On Wed, Jun 3, 2015 at 2:20 AM Clemens Wyss DEV
wrote:
> Context: Lucene 5.1, Java 8 on debian. 24G of RAM whereof 16G available
> for Solr.
>
> I am seeing the following OOMs:
> ERROR -
A bug fix version difference probably won't matter. It's best to use the
same version everyone else uses and the one our tests use, but it's very
likely 3.4.5 will work without a hitch.
- Mark
On Tue, May 5, 2015 at 9:09 AM shacky wrote:
> Hi.
>
> I read on
> https://cwiki.apache.org/confluence
If copies of the index are not eventually cleaned up, I'd file a JIRA to
address the issue. Those directories should be removed over time. At times
there will have to be a couple around at the same time and others may take
a while to clean up.
- Mark
On Tue, Apr 28, 2015 at 3:27 AM Ramkumar R. Ai
Hmm...can you file a JIRA issue with this info?
- Mark
On Fri, Mar 27, 2015 at 6:09 PM Joseph Obernberger
wrote:
> I just started up a two shard cluster on two machines using HDFS. When I
> started to index documents, the log shows errors like this. They repeat
> when I execute searches. All s
Doesn't ConcurrentUpdateSolrServer take an HttpClient in one of its
constructors?
- Mark
On Sun, Mar 22, 2015 at 3:40 PM Ramkumar R. Aiyengar <
andyetitmo...@gmail.com> wrote:
> Not a direct answer, but Anshum just created this..
>
> https://issues.apache.org/jira/browse/SOLR-7275
> On 20 Mar
Interesting bug.
First there is the already closed transaction log. That by itself deserves
a look. I'm not even positive we should be replaying the log when
reconnecting from a ZK disconnect, but even if we do, this should never
happen.
Beyond that there seems to be some race. Because of the log tro
If you google replication can cause index corruption there are two jira issues
that are the most likely cause of corruption in a solrcloud env.
- Mark
> On Mar 5, 2015, at 2:20 PM, Garth Grimm
> wrote:
>
> For updates, the document will always get routed to the leader of the
> appropriate s
I’ll be working on this at some point:
https://issues.apache.org/jira/browse/SOLR-6237
- Mark
http://about.me/markrmiller
> On Feb 25, 2015, at 2:12 AM, longsan wrote:
>
> We used HDFS as our Solr index storage and we really have a heavy update
> load. We had met much problems with current le
Perhaps try quotes around the url you are providing to curl. It's not
complaining about the http method - Solr has historically always taken
simple GET's for http - for good or bad, you pretty much only post
documents / updates.
It's saying the name param is required and not being found and since
What is your replication factor and doc size?
Replication can affect performance a fair amount more than it should currently.
For the number of nodes, that doesn’t sound like it matches what I’ve seen
unless those are huge documents or you have some slow analyzer in the chain or
something.
Wit
Yes, after 45 seconds a replica should take over as leader. The logs of the
replica that should be taking over will likely explain why this is not
happening.
- Mark
On Wed Jan 28 2015 at 2:52:32 PM Joshi, Shital wrote:
> When leader reaches 99% physical memory on the box and starts swapp
Sorry, there is no great workaround. You might try raising the max idle time
for your container - perhaps that makes it less frequent.
- Mark
On Tue Jan 20 2015 at 1:56:54 PM Nishanth S wrote:
> Thank you Mike. Sure enough, we are running into the same issue you
> mentioned. Is there a quick fix fo
bq. Is this the correct approach ?
It works, but it might not be ideal. Recent versions of ZooKeeper have an
alternate config for this max limit though, and it is preferable to use
that.
See maxSessionTimeout in
http://zookeeper.apache.org/doc/r3.3.1/zookeeperAdmin.html
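In zoo.cfg that looks roughly like the following (values illustrative; by default the max is 20 times the tickTime):

```
# zoo.cfg - server-side bounds on negotiated client session timeouts
maxSessionTimeout=90000
```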
- Mark
On Mon Jan 26 201
I'd have to do some digging. Hossman might know offhand. You might just
want to use @SupressSSL on the tests :)
- Mark
On Mon Jan 12 2015 at 8:45:11 AM Markus Jelsma
wrote:
> Hi - in a small Maven project depending on Solr 4.10.3, running unit tests
> that extend BaseDistributedSearchTestCase r
bq. ClusterState says we are the leader, but locally we don't think so
Generally this is due to some bug. One bug that can lead to it was recently
fixed in 4.10.3 I think. What version are you on?
- Mark
On Mon Jan 12 2015 at 7:35:47 AM Thomas Lamy wrote:
> Hi,
>
> I found no big/unusual GC pa
bq. But tons of people on this mailing list do not recommend AggressiveOpts
It's up to you to decide - that is why it's an option. It will enable more
aggressive options that will tend to perform better. On the other hand,
these more aggressive options and optimizations have a history of being
mor
releases. It is possible that the mirror you
are using may not have replicated the release yet. If that is the
case, please try another mirror. This also goes for Maven access.
Happy Holidays,
Mark Miller
http://www.about.me/markrmiller
bq. esp. since we've set max threads so high to avoid distributed
dead-lock.
We should fix this for 5.0 - add a second thread pool that is used for
internal requests. We can make it optional if necessary (simpler default
container support), but it's a fairly easy improvement I think.
- Mark
On
If someone wants to file a JIRA, we really should detect and help the user
on that.
- Mark
On Wed Nov 19 2014 at 10:39:56 AM Robert Kent
wrote:
> Yes, Alan's comment was correct. Using the correct Zookeeper string made
> things work correctly, e.g.:
>
> SOLR_ZK_ENSEMBLE=zookeeper1:2181,zookee
> On Oct 28, 2014, at 9:31 AM, Shawn Heisey wrote:
>
> exceed a 15 second zkClientTimeout
Which is too low even with good GC settings. Anyone with config still using 15
or 10 seconds should move it to at least 30.
- Mark
http://about.me/markrmiller
Best is to pass the Java cmd line option that kills the process on OOM and
setup a supervisor on the process to restart it. You need a somewhat recent
release for this to work properly though.
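On JVMs of that era, the option looked roughly like this (the start command is illustrative; newer JVMs also offer -XX:+ExitOnOutOfMemoryError):

```
java -XX:OnOutOfMemoryError="kill -9 %p" -jar start.jar
```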
- Mark
> On Oct 14, 2014, at 9:06 AM, Salman Akram
> wrote:
>
> I know there are some suggestions
I think it's just cruft I left in and never ended up using anywhere. You can
ignore it.
- Mark
> On Oct 13, 2014, at 8:42 PM, Martin Grotzke
> wrote:
>
> Hi,
>
> can anybody tell me the meaning of ZkStateReader.SYNC? All other state
> related constants are clear to me, I'm only not sure abo
to drop the container
thread pool too much. There are other control points though.
- Mark
http://about.me/markrmiller
On Sun, Aug 31, 2014 at 11:53 AM, Ramkumar R. Aiyengar <
andyetitmo...@gmail.com> wrote:
> On 31 Aug 2014 13:24, "Mark Miller" wrote:
> >
> >
> On Aug 31, 2014, at 4:04 AM, Christoph Schmidt
> wrote:
>
> we see at least two problems when scaling to large number of collections. I
> would like to ask the community, if they are known and maybe already
> addressed in development:
> We have a SolrCloud running with the following numbers
I am often asked to take a look at one to many Solr log files that are
hundreds of megabytes to gigabytes in size. "Peeking" at this amount of
logs is a bit time consuming. Anybody that does this often enough has to
build a log parsing tool eventually. One off greps can only get you so far.
The las
Sounds like you should file 3 JIRA issues. They all look like legit stuff we
should dig into on a glance.
--
Mark Miller
about.me/markrmiller
On August 24, 2014 at 12:35:13 PM, ralph tice (ralph.t...@gmail.com) wrote:
> Hi all,
>
> Two issues, first, when I issue an ADDREPLICA cal
The state is actually a combo of the state in clusterstate and the live nodes.
If the live node is not there, it's gone regardless of the last state it
published.
- Mark
> On Aug 23, 2014, at 6:00 PM, Nathan Neulinger wrote:
>
> In particular, a shard being 'active' vs. 'gone'.
>
> The web
On August 19, 2014 at 1:33:10 PM, Mark Miller (markrmil...@gmail.com) wrote:
> > sounds like we should write a test and make it work.
Keeping in mind that when using a shared filesystem like HDFS or especially if
using the MapReduce contrib, you probably won’t want this new behavior.
--
with SolrCloud, sounds
like we should write a test and make it work.
--
Mark Miller
about.me/markrmiller
On August 19, 2014 at 1:20:54 PM, Timothy Potter (thelabd...@gmail.com) wrote:
> Hi,
>
> Using the coreAdmin mergeindexes command to merge an index into a
> leader (SolrCloud m
On August 19, 2014 at 2:39:32 AM, Lee Chunki (lck7...@coupang.com) wrote:
> > the sooner the better? i.e. version 4.9.0.
Yes, certainly.
--
Mark Miller
about.me/markrmiller
That is good testing :) We should track down what is up with that 30%. Might
be worth opening a JIRA with some logs.
It can help if you restart the overseer node last.
There are likely some improvements around this post 4.6.
--
Mark Miller
about.me/markrmiller
On August 13, 2014 at 12:05:27 PM, KNitin
Some good info on unique id’s for Lucene / Solr can be found here:
http://blog.mikemccandless.com/2014/05/choosing-fast-unique-identifier-uuid.html
--
Mark Miller
about.me/markrmiller
On July 24, 2014 at 9:51:28 PM, He haobo (haob...@gmail.com) wrote:
Hi,
In our Solr collection (Solr 4.8
Looks like you probably have to raise the http client connection pool limits to
handle that kind of load currently.
They are specified as top level config in solr.xml:
maxUpdateConnections
maxUpdateConnectionsPerHost
--
Mark Miller
about.me/markrmiller
On July 21, 2014 at 7:14:59 PM, Darren
I think that’s pretty much a search time param, though it might end up being used
on the update side as well. In any case, I know it doesn’t affect commit or
optimize.
Also, to my knowledge, SolrCloud optimize support was never explicitly added or
tested.
--
Mark Miller
about.me/markrmiller
would have to dig in to really know I think. I would doubt it’s a configuration
issue, but you never know.
--
Mark Miller
about.me/markrmiller
On July 8, 2014 at 9:18:28 AM, Ian Williams (NWIS - Applications Design)
(ian.willi...@wales.nhs.uk) wrote:
Hi
I'm encountering a surprisingly
other
cases, information is being pulled from zookeeper and recovering nodes are
ignored. If this is the issue I think it is, it should only be an issue when
you directly query recovery node.
The CloudSolrServer client works around this issue as well.
--
Mark Miller
about.me/markrmiller
On July
We have been waiting for that issue to be finished before thinking too hard
about how it can improve things. There have been a couple ideas (I’ve mostly
wanted it for improving the internal zk mode situation), but no JIRAs yet that
I know of.
--
Mark Miller
about.me/markrmiller
On June 23
The main limit is the 1mb zk node limit. But even that can be raised.
- Mark
> On Jun 6, 2014, at 6:21 AM, Shalin Shekhar Mangar
> wrote:
>
> No, there's no theoretical limit.
>
>
>> On Fri, Jun 6, 2014 at 11:20 AM, ku3ia wrote:
>>
>> Hi all!
>> The question is how many collections I can
If you are sure about this, can you file a JIRA issue?
--
Mark Miller
about.me/markrmiller
On May 12, 2014 at 8:50:42 PM, lboutros (boutr...@gmail.com) wrote:
Dear All,
we just finished the migration of a cluster from Solr 4.3.1 to Solr 4.6.1.
With solr 4.3.1 a node was not considered as
What version are you running? This was fixed in a recent release. It can happen
if you hit add core with the defaults on the admin page in older versions.
--
Mark Miller
about.me/markrmiller
On May 1, 2014 at 11:19:54 AM, ryan.cooke (ryan.co...@gmail.com) wrote:
I saw an overseer queue
to even hope for monotonicity.
--
Mark Miller
about.me/markrmiller
On April 26, 2014 at 1:11:14 PM, Walter Underwood (wun...@wunderwood.org) wrote:
NTP works very hard to keep the clock positive monotonic. But nanoTime is
intended for elapsed time measurement anyway, so it is the right choice
Have you tried a comma-separated list or are you going by documentation? It
should work.
--
Mark Miller
about.me/markrmiller
On April 26, 2014 at 1:03:25 PM, Scott Stults
(sstu...@opensourceconnections.com) wrote:
It looks like this only takes a single host as its value, whereas the
zkHost
My answer remains the same. I guess if you want more precise terminology,
nanoTime will generally be monotonic and currentTimeMillis will not be, due to
things like NTP, etc. You want monotonicity for measuring elapsed times.
--
Mark Miller
about.me/markrmiller
On April 26, 2014 at 11:25:16 AM
System.currentTimeMillis can jump around due to NTP, etc. If you are trying to
count elapsed time, you don’t want to use a method that can jump around with
the results.
--
Mark Miller
about.me/markrmiller
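The monotonic-clock point above can be shown with a tiny self-contained example (the class and method names are mine, not from the thread):

```java
public class ElapsedTime {
    // Measure elapsed wall time of a task using the monotonic nanoTime clock.
    static long elapsedMillis(Runnable task) {
        long start = System.nanoTime();
        task.run();
        // nanoTime differences are always >= 0; currentTimeMillis differences
        // are not, because the wall clock can be stepped backwards (NTP, etc.).
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long ms = elapsedMillis(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) { }
        });
        System.out.println("elapsed ms >= 0: " + (ms >= 0));
    }
}
```

Running main prints "elapsed ms >= 0: true"; the same measurement done with two currentTimeMillis readings could, in rare cases, come out negative.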
On April 26, 2014 at 8:58:20 AM, YouPeng Yang (yypvsxf19870...@gmail.com) wrote:
Hi
l the merge is complete. If
writes are allowed, corruption may occur on the merged index.”
Doesn’t sound right to me at all.
--
Mark Miller
about.me/markrmiller
On April 22, 2014 at 10:38:08 AM, Brett Hoerner (br...@bretthoerner.com) wrote:
I think I'm just misunderstanding the use of go-
Odd - might be helpful if you can share your solrconfig.xml being used.
--
Mark Miller
about.me/markrmiller
On April 17, 2014 at 12:18:37 PM, Brett Hoerner (br...@bretthoerner.com) wrote:
I'm doing HDFS input and output in my job, with the following:
hadoop jar /mnt/faas-solr.jar \
What version are you testing? Thought we had addressed this.
--
Mark Miller
about.me/markrmiller
On April 16, 2014 at 6:02:09 PM, Jessica Mallet (mewmewb...@gmail.com) wrote:
Hi Furkan,
Thanks for the reply. I understand the intent. However, in the case I
described, the follower is blocked
bq. before any of Solr gets to do its shutdown sequence
Yeah, this is kind of an open issue. There might be a JIRA for it, but I cannot
remember. What we really need is an explicit shutdown call that can be made
before stopping jetty so that it’s done gracefully.
--
Mark Miller
about.me
Inline responses below.
--
Mark Miller
about.me/markrmiller
On April 15, 2014 at 2:12:31 PM, Peter Keegan (peterlkee...@gmail.com) wrote:
I have a SolrCloud index, 1 shard, with a leader and one replica, and 3
ZKs. The Solr indexes are behind a load balancer. There is one
CloudSolrServer
We have to fix that then.
--
Mark Miller
about.me/markrmiller
On April 15, 2014 at 12:20:03 PM, Rich Mayfield (mayfield.r...@gmail.com) wrote:
I see something similar where, given ~1000 shards, both nodes spend a LOT of
time sorting through the leader election process. Roughly 30 minutes
We don’t currently retry, but I don’t think it would hurt much if we did - at
least briefly.
If you want to file a JIRA issue, that would be the best way to get it in a
future release.
--
Mark Miller
about.me/markrmiller
On March 28, 2014 at 5:40:47 PM, Michael Della Bitta
(michael.della.bi