If you're doing periodic backups, I'm just not getting why you would care. I'm
still missing what stopping indexing would gain you.
- Mark
On Jan 8, 2013, at 1:36 AM, Otis Gospodnetic wrote:
> Hi,
>
> Right, you can continue indexing, but if you need to run
> http:/
If you are using 4.0 you can't use the CloudSolrServer with the collections API
- you have to pick a server and use the HttpSolrServer impl. In 4.1 you can use
the CloudSolrServer with the collections API.
- Mark
On Jan 6, 2013, at 8:42 PM, Jay Parashar wrote:
> The exception
uff is all distrib
capable - though I think it should be.
- Mark
On Jan 8, 2013, at 10:06 AM, Jay Parashar wrote:
> I recently migrated to Solr Cloud (4.0.0 from 3.6.0) and my auto suggest
> feature does not seem to be working. It is a typical implementation with a
> "/suggest"
If it is a problem, you should be able to just stop your cluster and nuke that
file in ZooKeeper, then start up with the new version.
- Mark
On Jan 8, 2013, at 5:09 PM, Markus Jelsma wrote:
> I am not sure this applies to alpha and final but i do think upgrading from
> 4.0 to 4.1 wil
s nodes are green and the
collection consists of 3 shards. Each shard has 1 leader and 1 replica, each
hosted by a different Solr instance.
In other words, it seemed to work for me.
- Mark
On Jan 9, 2013, at 10:58 AM, James Thomas wrote:
> Hi,
>
> Simple question, I hope.
>
>
It may be able to do that because it's forwarding requests to other nodes that
are up?
Would be good to dig into the logs to see if you can narrow in on the reason
for the recovery_failed.
- Mark
On Jan 9, 2013, at 8:52 PM, Zeng Lames wrote:
> Hi ,
>
> we meet below s
r to share sets of
config files across collections if you want to. You don't need to at all though.
I'm not sure if xinclude works with zk, but I don't think it does.
- Mark
On Jan 9, 2013, at 10:31 PM, Shawn Heisey wrote:
> I have a lot of experience with Solr, start
It may still be related. Even a non-empty index can have no versions (e.g. one
that was just replicated). It should behave better in this case in 4.1.
- Mark
On Jan 10, 2013, at 12:41 AM, Zeng Lames wrote:
> thanks Mark. will further dig into the logs. there is another problem
> related.
&
Set up hard auto commit with openSearcher=false. I would do it at least once a
minute. Don't worry about the commit being out of sync on the different nodes -
you will be using soft commits for visibility. The hard commits will just be
about relieving the pressure on the tlog.
- Mark
On J
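The hard/soft commit setup described above might look like this in solrconfig.xml - a sketch only; the 60-second hard commit and 1-second soft commit intervals are illustrative, not prescribed:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Hard commit: flushes the tlog regularly but does NOT open a new searcher -->
  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- Soft commit: controls when newly indexed documents become visible to searches -->
  <autoSoftCommit>
    <maxTime>1000</maxTime>
  </autoSoftCommit>
</updateHandler>
```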
t had.
- Mark
On Jan 10, 2013, at 5:17 AM, mizayah wrote:
> Lets say i got one collection with 3 shards. Every shard contains indexed
> data.
>
> I want to unload one shard. Is there any way for data from unloaded shard to
> be not lost?
> How to remove shard with da
mes simply
about periodically flushing the tlog and the soft commit completely controls
visibility.
- Mark
On Jan 10, 2013, at 9:41 AM, Upayavira wrote:
> And you don't need to open a searcher (openSearcher=false) because
> you've got caches built up already alongside the in-
hings.
One of the tradeoffs of using a very fast soft commit is that Solr's std caches
will not be nearly as useful.
- Mark
On Jan 10, 2013, at 11:24 AM, Upayavira wrote:
> That's great Mark. Thx. One final question... all the stuff to do with
> autowarming and static warming
k closer later - can't remember who made the change in Solr.
- Mark
use cases, you
might not worry about it.
- Mark
On Jan 10, 2013, at 2:33 PM, Upayavira wrote:
> Heh, the "it depends" answer :-)
>
> Thanks for the clarification.
>
> Upayavira
>
> On Thu, Jan 10, 2013, at 05:01 PM, Mark Miller wrote:
>> I think it real
closely though.
- Mark
On Jan 10, 2013, at 4:49 PM, Gregg Donovan wrote:
> Thanks, Mark.
>
> The relevant commit on the solrcloud branch appears to be 1231134 and is
> focused on the recovery aspect of SolrCloud:
>
> http://svn.apache.org/viewvc?diff_format=h&view=revision&
Looks like we are talking about making a release candidate next week.
Mark
Sent from my iPhone
On Jan 10, 2013, at 7:50 PM, Zeng Lames wrote:
> thanks Mark. may I know the target release date of 4.1?
>
>
> On Thu, Jan 10, 2013 at 10:13 PM, Mark Miller wrote:
>
>> It
ugh to diagnose because the root
exception is being swallowed - it's likely a connect to zk failed exception
though.
- Mark
On Jan 10, 2013, at 1:34 PM, Christopher Gross wrote:
> I'm trying to get SolrCloud working with more than one configuration going.
> I have the base schema t
On Jan 10, 2013, at 12:06 PM, Shawn Heisey wrote:
> On 1/9/2013 8:54 PM, Mark Miller wrote:
>> I'd put everything into one. You can upload different named sets of config
>> files and point collections either to the same sets or different sets.
>>
>> You can re
They point to the admin UI - or should - that seems right?
- Mark
On Jan 11, 2013, at 10:57 AM, Christopher Gross wrote:
> I've managed to get my SolrCloud set up to have 2 different indexes up and
> running. However, my URLs aren't right. They just point to
> http://se
I've fixed this - thanks Gregg.
https://issues.apache.org/jira/browse/SOLR-4303
- Mark
On Jan 10, 2013, at 5:41 PM, Mark Miller wrote:
> Hmm…I don't recall that change. We use the force, so SolrCloud certainly does
> not depend on it.
>
> It seems like it might be a m
Solr server, is there some
mechanism to queue them onto disk, or does it try to hold them all in RAM?
And *if* the backlog caused an OOM condition, wouldn't that JVM have mostly
crashed (if not completely)?
Any guesses on the most likely failure point, and where to look?
Thanks,
Mark
ion factor for the collection because the collection was created
> via the API, not via solr.xml, so I can't easily reconfigure the
> collection.
You could just use the CoreAdmin API to create new replicas on whatever nodes.
- Mark
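A hedged sketch of what that CoreAdmin call might look like - the host, port, and core/collection/shard names below are placeholders:

```shell
# Create an additional replica of shard2 on whichever node you run this against
curl 'http://newnode:8983/solr/admin/cores?action=CREATE&name=mycollection_shard2_replica2&collection=mycollection&shard=shard2'
```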
node.
Another option is to set up solr.xml like you would locally, then start with
-Dbootstrap_conf=true and it will duplicate your local config and collection
setup in ZooKeeper.
- Mark
On Jan 17, 2013, at 9:10 PM, Shawn Heisey wrote:
> I'm trying to get a 2-node SolrCloud install off
.
I'll look at adding some to the wiki.
- Mark
like also adding the component to the /select
handler.
I think it would be nice to clean this up a bit somehow. Or document it better.
- Mark
On Jan 18, 2013, at 3:39 PM, Shawn Heisey wrote:
> On 1/18/2013 1:20 PM, Mike Schultz wrote:
>> Can someone explain the logic of not sendin
ap will likely mean a few second pauses
at least at some points. A well-tuned concurrent collector will never stop the
world in most situations.
-XX:+UseConcMarkSweepGC
I wrote an article that might be useful a while back:
http://searchhub.org/2011/03/27/garbage-collection-bootcamp-1-0/
- Mark
Indexing should def not slow down substantially if you commit every minute or
something. Be sure to use openSearcher=false on the auto hard commit.
- Mark
On Jan 19, 2013, at 11:11 PM, Nikhil Chhaochharia wrote:
>
>
> Hi,
>
> We run a SolrCloud cluster using Solr 4.0 u
ta through http calls. The
data directory is a local property for each SolrCore and other nodes in the
cloud do not need to know about it.
- Mark
On Jan 22, 2013, at 11:33 AM, Otis Gospodnetic
wrote:
> Thanks Markus. Yes, I'm after the actual, physical directory on the local
> FS (
The logging shows that it's finding transaction log entries.
Are you doing anything else while bringing the nodes up and down? Indexing? Are
you positive you removed the tlog files? It can't really have any versions if it
doesn't read them from a tlog on startup...
- Mark
On Jan 22,
No idea - logs might help.
- Mark
On Jan 22, 2013, at 4:37 PM, Marcin Rzewucki wrote:
> Sorry, my mistake. I did 2 tests: in the 1st I removed just index directory
> and in 2nd test I removed both index and tlog directory. Log lines I've
> sent are related to the first case. So S
Was your full log stripped? You are right, we need more. Yes, the peer sync
failed, but then you cut out all the important stuff about the replication
attempt that happens after.
- Mark
On Jan 23, 2013, at 5:28 AM, Marcin Rzewucki wrote:
> Hi,
> Previously, I took the lines rela
Looks like it shows 3 cores start - 2 with versions that decide they are up to
date and one that replicates. The one that replicates doesn't have much logging
showing that activity.
Is this Solr 4.0?
- Mark
On Jan 23, 2013, at 9:27 AM, Upayavira wrote:
> Mark,
>
> Take
Does the admin cloud UI show all of the nodes as green? (active)
If so, something is not right.
- Mark
On Jan 23, 2013, at 10:02 AM, Roupihs wrote:
> I have a one shard collection, with one replica.
> I did a dataImport from my oracle DB.
> In the master, I have 93835 docs, in the n
It's hard to guess, but I might start by looking at what the new UpdateLog is
costing you. Take its definition out of solrconfig.xml and try your test
again. Then let's take it from there.
- Mark
On Jan 23, 2013, at 11:00 AM, Kevin Stone wrote:
> I am having some diffic
cy" effects between queries.
- Mark
Yeah, I don't know what you are seeing offhand. You might try Solr 4.1 and see
if it's something that has been resolved.
- Mark
On Jan 23, 2013, at 3:14 PM, Marcin Rzewucki wrote:
> Guys, I pasted you the full log (see pastebin url). Yes, it is Solr4.0. 2
> cores are in sync,
On Jan 24, 2013, at 7:05 AM, Shawn Heisey wrote:
> My experience has been that you put the chroot at the very end, not on every
> host entry
Yup - this came up on the mailing list not too long ago and it's currently
correctly documented on the SolrCloud wiki.
- Mark
ainer#load
{initShardHandler(null);}
};
- Mark
On Jan 24, 2013, at 9:22 AM, Ted Merchant wrote:
> We recently updated from Solr 4.0.0 to Solr 4.1.0. Because of the change we
> were forced to upgrade a custom query parser. While the code change itself
> was minimal, we found
's also pretty easy to use
http://wiki.apache.org/solr/SolrCloud#Command_Line_Util to upload a new
schema.xml - then just Collections API reload command. Two lines in a script.
- Mark
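The "two lines in a script" mentioned above might look roughly like this - the paths, ZooKeeper address, and config/collection names are assumptions:

```shell
# Upload the edited config set to ZooKeeper...
./zkcli.sh -zkhost zk1:2181 -cmd upconfig -confdir ./conf -confname myconf
# ...then reload the collection so it picks up the new schema.xml
curl 'http://solr1:8983/solr/admin/collections?action=RELOAD&name=mycollection'
```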
as minimal as
just curl posting I know.
Testing and reporting on the issue I posted, as well as discussion around
expanding it, will likely help pushing those features forward.
- Mark
I don't have any targeted advice at the moment, but just for kicks, you might
try using Solr 4.1.
- Mark
On Jan 25, 2013, at 2:47 PM, Sean Siefert wrote:
> So I have quite a few cores already where this exact (as far as replication
> is concerned) solrconfig.xml works. The othe
Yeah, I've noticed this too in some distrib search tests (it's not SolrCloud
related per se, I think, but just distrib search in general).
Want to open a JIRA issue about making this consistent?
- Mark
On Jan 25, 2013, at 2:39 PM, Mingfeng Yang wrote:
> We are migrating our So
> production.
Post any questions with your results if you could. Perhaps we can beef up the
wiki a bit so others don't hit the same issues.
- Mark
Shards param.
The CoreAdmin API works with it.
You can pass it for every call, but the first call is the critical one.
- Mark
important feature
for those upgrading from 4.0).
You can also always explicitly set the host address. I would recommend this for
production. It's the host param in solr.xml and by default it's set up so that
you can pass a sys prop to set it on startup.
- Mark
On Jan 25, 2013, at 3:37
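For example, a 4.x solr.xml typically carries something like the following, so the advertised address can be pinned with a sys prop at startup (the values here are illustrative, not a verbatim default):

```xml
<!-- host defaults to empty, letting a -Dhost=10.0.1.5 sys prop at startup override it -->
<cores adminPath="/admin/cores" host="${host:}" hostPort="${jetty.port:8983}" hostContext="${hostContext:solr}">
  ...
</cores>
```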
tances and remove the stuff left on the filesystem.
- Mark
On Jan 25, 2013, at 7:42 PM, Mingfeng Yang wrote:
> Right now I have an index with four shards on a single EC2 server, each
> running on different ports. Now I'd like to migrate three shards
> to independent servers.
>
&
Hey Shawn - got a suggestion for an addition for the wiki that would
have saved you some time here?
- Mark
On Sat, Jan 26, 2013 at 1:22 PM, Shawn Heisey wrote:
> On 1/26/2013 6:31 AM, Per Steffensen wrote:
>>
>> We have actually tested this and found that the following will do i
I think this has come up on the mailing list before. I don't remember the
details, but you want to restrict the admin UI but not the CoreAdmin url -
/admin/cores.
- Mark
On Jan 28, 2013, at 4:37 PM, Marcin Rzewucki wrote:
> Hi,
>
> If you add security constraint for /admin/*,
On Jan 29, 2013, at 3:50 PM, Gregg Donovan wrote:
> should we
> just try uncommenting that line in ReplicationHandler?
Please try. I'd file a JIRA issue in any case. I can probably take a closer
look.
- Mark
The admin user interface and admin/cores are two very different things - they
just happen to share admin in the url.
It doesn't make any sense to secure admin/cores unless you are also going to
secure all the other Solr APIs.
- Mark
On Jan 30, 2013, at 5:55 AM, AlexeyK wrote:
>
ter list is almost guaranteed to be incomplete.
I don't think it is? What is missing?
- Mark
y jetty doesn't get any more blessed than that. If you want to run another
container, fine, but I would pick Jetty myself - specifically, the one we ship
with - absent a darn good reason.
- Mark
You can use 'none' for the lock type in solrconfig.xml.
You risk corruption if two IW's try to modify the index at once though.
- Mark
On Feb 1, 2013, at 6:56 PM, dm_tim wrote:
> Well that makes sense. The problem is that I am working in both Solr and
> Lucene directly
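The lock type referred to above is set in solrconfig.xml; a minimal fragment, with the same caveat about concurrent writers:

```xml
<indexConfig>
  <!-- 'none' disables index locking; two IndexWriters on one index can corrupt it -->
  <lockType>none</lockType>
</indexConfig>
```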
Do you see anything about session expiration in the logs? That is the likely
culprit for something like this. You may need to raise the timeout:
http://wiki.apache.org/solr/SolrCloud#FAQ
If you see no session timeouts, I don't have a guess yet.
- Mark
On Feb 2, 2013, at 7:35 PM, M
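Raising that timeout is usually done in solr.xml (4.x style); the 30-second value below is just an example, not a recommendation:

```xml
<!-- zkClientTimeout can also be overridden at startup with -DzkClientTimeout=30000 -->
<cores adminPath="/admin/cores" zkClientTimeout="${zkClientTimeout:30000}">
  ...
</cores>
```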
What led you to trying that? I'm not connecting the dots in my head - the
exception and the solution.
- Mark
On Feb 3, 2013, at 2:48 PM, Marcin Rzewucki wrote:
> Hi,
>
> I think the issue was not in zk client timeout, but POST request size. When
> I increa
t from the cluster. Stop/remove the tmp node.
- Mark
On Feb 5, 2013, at 12:22 PM, Mike Schultz wrote:
> Just to clarify, I want to be able to replace the down node with a host with
> a different name. If I were repairing that particular machine and replacing
> it, there would be no pr
The request should give you access to the core - the core to the core
descriptor, the descriptor to the core container, which knows about all the
cores.
- Mark
On Feb 5, 2013, at 4:09 PM, Ryan Josal wrote:
> Hey guys,
>
> I am writing an UpdateRequestProcessorFactory plugin which
The SolrCoreAware interface?
- Mark
On Feb 5, 2013, at 5:42 PM, Ryan Josal wrote:
> By way of the deprecated SolrCore.getSolrCore method,
>
> SolrCore.getSolrCore().getCoreDescriptor().getCoreContainer().getCores()
>
> Solr starts up in an infinite recursive loop of lo
ng a force option that
guarantees a replication.
- Mark
On Feb 6, 2013, at 4:23 PM, Gregg Donovan wrote:
> In the process of upgrading from 3.6 to 4.1, we've noticed that much of the
> code we had that relied on the 3.6 behavior of SolrCore#getIndexDir() is
> not working t
re you seeing now?
- Mark
Thanks Gregg - can you file a JIRA issue?
- Mark
On Feb 6, 2013, at 5:57 PM, Gregg Donovan wrote:
> Mark-
>
> You're right that SolrCore#getIndexDir() did not directly read
> index.properties in 3.6. In 3.6, it gets it indirectly from what is passed
> to the constructor
You can unload the core for that node and it will be removed from zookeeper.
You can add it back afterwards if you leave its state on disk and recreate the core.
- Mark
On Feb 7, 2013, at 5:20 AM, yriveiro wrote:
> Hi,
>
> Exists any way to eject a node from a solr cluster?
>
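The unload-then-recreate sequence might look like this via the CoreAdmin API - the host and core names are placeholders:

```shell
# Eject the node's core from the cluster; its registration is removed from ZooKeeper
curl 'http://node1:8983/solr/admin/cores?action=UNLOAD&core=mycollection_shard1_replica2'
# Later, recreate the core; if its data was left on disk it rejoins with that state
curl 'http://node1:8983/solr/admin/cores?action=CREATE&name=mycollection_shard1_replica2&collection=mycollection&shard=shard1'
```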
d master-slave architecture as one
option.
With a small amount of dev, having some polling replication for the index side
and using solrcloud for the search side might be possible, though not
necessarily a perfect marriage.
- Mark
>
>
> Re (2): Deploying new schema/config should be as
ew view on that flushed segment.
- Mark
On Feb 7, 2013, at 11:29 PM, Alexandre Rafalovitch wrote:
> Hello,
>
> What actually happens when using soft (as opposed to hard) commit?
>
> I understand somewhat very high-level picture (documents become available
> faster, but you m
Looks odd - the supposedly missing class looks like an inner class in
MultiPhraseQuery.
- Mark
On Feb 9, 2013, at 6:19 AM, Markus Jelsma wrote:
> Any ideas so far? I've not yet found anything that remotely looks like the
> root of the problem so far :)
>
>
> -
Did you clear the data dir for all 3 zk's? If not, you will find ghosts coming
back to haunt you :)
It's often easier to clear zk programmatically - for example it's one call from
the cmd line zkcli script.
http://wiki.apache.org/solr/SolrCloud#Command_Line_Util
- Mark
On Fe
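The programmatic clear mentioned above is a single zkcli invocation, something like the following (the ZooKeeper addresses and /solr chroot are assumptions):

```shell
# Wipe the /solr subtree in ZooKeeper - do this with the cluster stopped
./zkcli.sh -zkhost zk1:2181,zk2:2181,zk3:2181 -cmd clear /solr
```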
Nothing will ever open a new searcher unless you explicitly send a commit with
openSearcher=true.
Either change openSearcher on your auto hard commit to true, or start using
soft commit for visibility.
- Mark
On Feb 9, 2013, at 12:44 PM, Alexandre Rafalovitch wrote:
> Hello,
>
Yonik looked into it and said the process was actually fine in his testing.
After the release, we did find one issue - if you don't explicitly set the
host, the host 'guess' feature has changed and may guess a different address.
- Mark
On Feb 11, 2013, at 1:16 PM, Shawn Heisey
nd data to both
dcs.
- Mark
On Feb 11, 2013, at 2:43 PM, mizayah wrote:
> This is a good solution.
>
> One thing here is really annoying: the double indexing.
> Is there a way to replicate to another DC? Seems SolrCloud can't use its
> earlier replication.
>
> Would be nice if
Eventually, I'll get around to trying some more real world testing. Up till
now, no dev seems to have a real interest in this. I have 0 need for it
currently, so it's fairly low on my itch scale, but it's on my list anyhow.
- Mark
On Feb 11, 2013, at 12:26 PM, Shawn Heisey wro
Doesn't sound right to me. I'd guess you heard wrong.
- mark
Sent from my iPhone
On Feb 11, 2013, at 7:15 PM, Shawn Heisey wrote:
> I have heard that SolrCloud may require the presence of a uniqueKey field
> specifically named 'id' for sharding.
>
> Is th
However just switching group.facet=false, the following is produced, showing
the exclude appears to have been ignored previously:
1492
1361
Anybody else tried using this combination of excludes, facet queries and
grouping and got this working?
Kind Regards,
Mark
By default, on cluster startup, we wait until we see all the replicas for a
shard come up. This is for safety. You may have introduced an old shard with
old data or a new shard with no data, and you don't want something like that
becoming the leader.
If you don't want to do this wait, it's conf
A search for "id" is much too broad. I looked at 3 of the SolrCloud classes you
mention and none of those "id"'s have anything to do with the unique field in
the schema. I have not looked at the hash based router, but if you find a real
issue then please file a JIRA i
On Feb 13, 2013, at 1:17 PM, Amit Nithian wrote:
> doesn't it do a commit to force solr to recognize the changes?
yes.
- Mark
should document well at the least and probably open a JIRA
issue to address if possible - of course if it could easily be addressed, I'm
sure Yonik would have done it when he wrote it.
- Mark
On Feb 13, 2013, at 1:25 PM, Mark Miller wrote:
> A search for "id" is much too broa
Yes, though the reasons are not so interesting.
Soon solr.xml is going away regardless - perhaps in another release or two.
- mark
On Feb 13, 2013, at 2:02 PM, Anirudha Jadhav wrote:
> is there a strong reason why we still need solr.xml on disk and it cannot
> be persisted and used f
I don't know - by chance, I'm actually doing about the same sequence of events
right now with Solr 4.1, and the cores are running fine…
What do the logs say?
- Mark
On Feb 14, 2013, at 10:18 PM, Anirudha Jadhav wrote:
> *1.empty Zookeeper*
> *2.empty index directories for
On Feb 15, 2013, at 6:04 AM, o.mares wrote:
> Hey, when running a solr cloud setup with 4 servers, managing 3 cores each
> split across 2 shards, what are the proper steps to do a full index import?
>
> Do you have to import the index on all of the solr instances? Or is it
> sufficient enough to
Sounds like you should file a JIRA issue.
- Mark
On Feb 15, 2013, at 6:07 PM, "Charton, Andre"
wrote:
> Hi,
>
> I upgraded Solr from 3.6 to 4.1. Since then the replication is a full copy of
> the index from master.
> Master is delta import via DIH every 10min. Slave poll i
For 4.2, I'll try and put in https://issues.apache.org/jira/browse/SOLR-4078
soon.
Not sure about the behavior you're seeing - you might want to file a JIRA issue.
- Mark
On Feb 15, 2013, at 8:17 PM, Gary Yngve wrote:
> Hi all,
>
> I've been unable to get the collections
We need to see more of your logs to determine why - there should be some
exceptions logged.
- Mark
On Feb 18, 2013, at 9:47 AM, Cool Techi wrote:
> I am seeing the following error in my Admin console and the core/ cloud
> status is taking forever to load.
>
> SEVEREReco
just been too busy with other stuff..
Concerning CloudSolrServer, there is a JIRA to make it hash and send updates to
the "right" leader, but currently it still doesn't - it just favors leaders in
general over non-leaders.
- Mark
On Feb 18, 2013, at 7:34 AM, Markus
Not sure - any other errors? An optimize once a day is a very heavy operation
by the way! Be sure the gains are worth the pain you pay.
- Mark
On Feb 18, 2013, at 10:04 AM, adm1n wrote:
> Hi,
>
> I'm running SolrCloud (Solr4) with 1 core, 8 shards and zookeeper
> My index
st just to tune
merge parameters and avoid optimize altogether. It's usually premature
optimization that leads to the overuse of optimize, and it's usually unnecessary
and quite costly.
- Mark
On Feb 18, 2013, at 11:12 AM, adm1n wrote:
> Thanks for your response.
>
> No, nothing el
the actual leader and just move on to the next candidate.
Still some tricky corner cases to deal with and such as well.
I think for most things you would use this to solve, there is probably
an alternate thing that should be addressed.
- Mark
On Mon, Feb 18, 2013 at 4:15 PM, Vaillancourt, Tim wrote:
't think SolrJ update requests order deletes and adds in
the same request either, so that would also need to be addressed. Pretty sure
SolrJ will do the adds then the deletes.
- Mark
On Feb 19, 2013, at 2:23 PM, Vinay Pothnis wrote:
> Hello,
>
> I have the following set up:
>
Swap is unsupported - really it should throw an exception right now.
There is a JIRA issue to add support for swap in SolrCloud mode of some kind.
- Mark
On Feb 20, 2013, at 7:59 PM, Rollin.R.Ma (lab.sh04.Newegg) 41099
wrote:
>
> Hi
>
> I am a newer to solrCloud, I
On Feb 19, 2013, at 9:16 AM, Markus Jelsma wrote:
> Ah, thanks. Got a Jira? I don't think i'm watching that one right now.
https://issues.apache.org/jira/browse/SOLR-3154
- Mark
s nicely I
think.
- Mark
On Feb 20, 2013, at 10:08 AM, Shankar Sundararaju wrote:
> Hi All,
>
> I am using Solr 4.1.
>
> I have a Solr cluster of 3 leaders and 3 replicas hosting collection1
> consisting of thousands of documents currently serving the search requests.
>
Can you give some more details? When you look at the cloud tab of the admin UI,
does the cluster visualization look right? Are all the nodes green? Perhaps the
shard is a leader and a replica in a single shard and you just think it's 2 shards?
- Mark
On Feb 20, 2013, at 8:26 PM, rulinma
al over non leaders currently. "
>
> I wonder if one commit to only one leader, not every doc to different leaders
> according the shards.
That's just an optimization. Updates are forwarded to the right node no matter
which one you originally send them to.
- Mark
It's not really any different in SolrCloud than pre-cloud - distrib search is
still the same code done the same way, by and large.
shards.qt should be just as valid an option as forcing a query component.
- Mark
On Feb 21, 2013, at 7:56 AM, AlexeyK wrote:
> In pre-cloud version of
The leader doesn't really do a lot more work than any of the replicas, so I
don't think it's likely that important. If someone starts running into
problems, that's usually when we start looking for solutions.
- Mark
On Feb 21, 2013, at 10:20 PM, "Vaillancourt, Ti
We are fixing this bug here: https://issues.apache.org/jira/browse/SOLR-4471
- Mark
On Feb 22, 2013, at 7:07 AM, Artyom wrote:
> I have the same problem. This bug appeared in 4.0 rarely, but 4.1 downloads
> the full index every time.
>
It just means at some point a replication was done that required flipping to a
new directory. It's expected. Once you flip from the index directory to an
index. directory, you never go back.
- Mark
On Feb 22, 2013, at 8:14 PM, Mingfeng Yang wrote:
> I see the items under my solorcl
You could copy each shard to a single node and then use the merge index feature
to merge them into one index and then start up a single Solr node on that. Use
the same configs.
- Mark
On Feb 22, 2013, at 8:11 PM, Erol Akarsu wrote:
> I have a solr cloud 7 nodes, each has 2 shards.
>
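The merge step could be done with the CoreAdmin mergeindexes command once the shard indexes have been copied onto one box - the paths and core name below are illustrative:

```shell
# Merge the copied shard indexes into the 'merged' core's index
curl 'http://localhost:8983/solr/admin/cores?action=mergeindexes&core=merged&indexDir=/backup/shard1/data/index&indexDir=/backup/shard2/data/index'
```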
How are you doing the backup? You have to coordinate with Solr - files may be
changing when you try to copy them, leading to an inconsistent index. If you
want to do a live backup, you have to use the backup feature of the replication
handler.
- Mark
On Feb 23, 2013, at 3:54 AM, Prakhar Birla
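The replication handler's backup command is a single HTTP call, roughly like the following (host, core name, and backup location are placeholders):

```shell
# Take a consistent snapshot of the index without stopping indexing
curl 'http://localhost:8983/solr/mycore/replication?command=backup&location=/backups/solr'
```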
's analyzers has some other token filter you forgot
about, so you'd have to bring that logic forward as well.
(Long story of why I'd want to do all this... and I know people think
adding ~2 to all tokens will give bad results anyway, trying to fix
inherited code that can't be scr
You have to put the jar in a std lib dir on each node. Same as non-SolrCloud
mode.
We will be adding support to put the jars in zookeeper in an upcoming release.
Mark
Sent from my iPhone
On Feb 24, 2013, at 7:25 AM, mitcoe4 wrote:
> hi,
> We are using solr 4.0.0 . We want to ad