As also observed at
http://carsabi.com/car-news/2012/03/23/optimizing-solr-7x-your-search-speed/
I am now benchmarking my workload to compare replication vs. sharding
performance on a single machine.
... master/slave configuration). Will I still get roughly an
n-fold increase in query throughput with n replicas? And if so, why would
one do master/slave replication with multiple copies of the index at all?
Ok, great. Just wanted to make sure someone was aware. Thanks for
looking into this.
On Thu, Feb 16, 2012 at 8:26 AM, Mark Miller wrote:
On Feb 14, 2012, at 10:57 PM, Jamie Johnson wrote:
> Not sure if this is
> expected or not.
Nope - should be already resolved or will be today though.
- Mark Miller
lucidimagination.com
All of the nodes now show as being Active. When starting the replicas
I did receive the following message though. Not sure if this is
expected or not.
INFO: Attempting to replicate from
http://JamiesMac.local:8501/solr/slice2_shard2/
Feb 14, 2012 10:53:34 PM org.apache.solr.common.SolrException
Doing so now, will let you know if I continue to see the same issues
On Tue, Feb 14, 2012 at 4:59 PM, Mark Miller wrote:
Doh - looks like I was just seeing a test issue. Do you mind updating and
trying the latest rev? At the least there should be some better logging around
the recovery.
I'll keep working on tests in the meantime.
- Mark
On Feb 14, 2012, at 3:15 PM, Jamie Johnson wrote:
Sounds good, if I pull the latest from trunk and rerun will that be
useful or were you able to duplicate my issue now?
On Tue, Feb 14, 2012 at 3:00 PM, Mark Miller wrote:
Okay Jamie, I think I have a handle on this. It looks like an issue with what
config files are being used by cores created with the admin core handler - I
think it's just picking up default config and not the correct config for the
collection. This means they end up using config that has no Upda
Thanks Mark, not a huge rush, just me trying to get to use the latest
stuff on our project.
On Tue, Feb 14, 2012 at 10:53 AM, Mark Miller wrote:
Sorry, have not gotten it yet, but will be back trying later today - monday,
tuesday tend to be slow for me (meetings and crap).
- Mark
On Feb 14, 2012, at 9:10 AM, Jamie Johnson wrote:
Has there been any success in replicating this? I'm wondering if it
could be something with my setup that is causing the issue...
On Mon, Feb 13, 2012 at 8:55 AM, Jamie Johnson wrote:
Yes, I have the following layout on the FS:
./bootstrap.sh
./example   (standard example directory from the distro containing jetty
             jars, solr confs, solr war, etc.)
./slice1
  - start.sh
  - solr.xml
  - slice1_shard1
    - data
  - slice2_shard2
    - data
./slice2
  - start.sh
  - solr.xml
  - slice2_shard1
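For reference, each slice's start.sh presumably looks something like the sketch
below; the port, the ZooKeeper address, and the relative paths are guesses based
on the trunk examples of the time rather than Jamie's actual script:

#!/bin/sh
# Hypothetical start.sh for ./slice1: run a separate Jetty instance whose
# solr home is this slice directory (it holds its own solr.xml and core dirs),
# pointed at the shared ZooKeeper started by bootstrap.sh.
cd ../example
java -Djetty.port=8501 \
     -Dsolr.solr.home=../slice1 \
     -DzkHost=localhost:9983 \
     -jar start.jar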
Do you have unique dataDir for each instance?
On 13.2.2012 at 14.30, "Jamie Johnson" wrote:
I don't see any errors in the log. Here are the scripts I'm running; to
create the cores I run the following commands:
curl 'http://localhost:8501/solr/admin/cores?action=CREATE&name=slice1_shard1&collection=collection1&shard=slice1&collection.configName=config1'
curl 'http://local
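The second command is cut off above. For illustration only, the full set of
CREATE calls for the layout described earlier would presumably look something
like the following; the remaining core names, the 8502 port for the second
instance, and the dataDir note are assumptions, not taken from Jamie's scripts:

# Hypothetical CREATE calls: two cores per instance, one for each shard.
# First instance (solr home ./slice1, port 8501):
curl 'http://localhost:8501/solr/admin/cores?action=CREATE&name=slice1_shard1&collection=collection1&shard=slice1&collection.configName=config1'
curl 'http://localhost:8501/solr/admin/cores?action=CREATE&name=slice2_shard2&collection=collection1&shard=slice2&collection.configName=config1'
# Second instance (solr home ./slice2, port assumed to be 8502):
curl 'http://localhost:8502/solr/admin/cores?action=CREATE&name=slice2_shard1&collection=collection1&shard=slice2&collection.configName=config1'
curl 'http://localhost:8502/solr/admin/cores?action=CREATE&name=slice1_shard2&collection=collection1&shard=slice1&collection.configName=config1'
# CREATE can also be given an explicit data directory, e.g.
# '...&dataDir=/full/path/to/slice1/slice1_shard1/data', if you want to be
# certain every core writes to its own dataDir.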
Yeah, that is what I would expect - for a node to be marked as down, it either
didn't finish starting, or it gave up recovering...either case should be
logged. You might try searching for the recover keyword and see if there are
any interesting bits around that.
Meanwhile, I have dug up a coupl
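In case it helps, that search can be as simple as something like the following;
the log file locations are guesses and depend on how each instance's output is
being captured:

# Grep each instance's log for recovery-related messages (paths are guesses;
# if the instances only log to the console, redirect stdout to a file first).
grep -in "recover" slice1/solr.log slice2/solr.log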
I didn't see anything in the logs; would it be an error?
On Sat, Feb 11, 2012 at 3:58 PM, Mark Miller wrote:
On Feb 11, 2012, at 3:08 PM, Jamie Johnson wrote:
> I wiped the zk and started over (when I switch networks I get
> different host names and honestly haven't dug into why). That being
> said the latest state shows all in sync, why would the cores show up
> as down?
If recovery fails X times (s
I wiped the zk and started over (when I switch networks I get
different host names and honestly haven't dug into why). That being
said the latest state shows all in sync, why would the cores show up
as down?
On Sat, Feb 11, 2012 at 11:08 AM, Mark Miller wrote:
On Feb 10, 2012, at 9:40 PM, Jamie Johnson wrote:
> how'd you resolve this issue?
I was basing my guess on seeing "JamiesMac.local" and "jamiesmac" in your first
cluster state dump - your latest doesn't seem to mismatch like that though.
- Mark Miller
lucidimagination.com
hmm... perhaps I'm seeing the issue you're speaking of. I have
everything running right now and my state is as follows:
{"collection1":{
  "slice1":{
    "JamiesMac.local:8501_solr_slice1_shard1":{
      "shard_id":"slice1",
      "leader":"true",
      "state":"active",
      "core":
On Feb 10, 2012, at 9:33 AM, Jamie Johnson wrote:
> jamiesmac
Another note:
Have no idea if this is involved, but when I do tests with my Linux box and Mac
I run into the following:
My Linux box auto-finds the address of halfmetal, and my MacBook finds
mbpro.local. If I accept those defaults, my ma
Thanks.
If the given ZK snapshot was the end state, then two nodes are marked as
down. Generally that happens because replication failed - if you have not,
I'd check the logs for those two nodes.
- Mark
On Fri, Feb 10, 2012 at 7:35 PM, Jamie Johnson wrote:
nothing seems that different. In regards to the states of each I'll
try to verify tonight.
This was using a version I pulled from SVN trunk yesterday morning
On Fri, Feb 10, 2012 at 6:22 PM, Mark Miller wrote:
Also, it will help if you can mention the exact version of solrcloud you are
talking about in each issue - I know you have one from the old branch, and I
assume a version off trunk you are playing with - so a heads up on which and if
trunk, what rev or day will help in the case that I'm trying t
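For what it's worth, the exact trunk revision is easy to grab from the checkout
itself, e.g.:

# Run inside the trunk checkout to report the revision being built and tested
svn info | grep -E 'Revision|Last Changed Date'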
I'm trying, but so far I don't see anything. I'll have to try and mimic your
setup closer it seems.
I tried starting up 6 solr instances on different ports as 2 shards, each with
a replication factor of 3.
Then I indexed 20k documents to the cluster and verified doc counts.
Then I shutdown all
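For anyone trying to reproduce the same kind of test, a 2-shard / 6-node setup
like that would have been started with commands along these lines; the ports,
directory copies, and embedded-ZooKeeper address are assumptions based on the
trunk examples of that era, not Mark's actual commands:

# Node 1: bootstrap the config into ZooKeeper, run embedded ZK, fix numShards=2
cd example
java -Dbootstrap_confdir=./solr/conf -Dcollection.configName=config1 \
     -DzkRun -DnumShards=2 -Djetty.port=8983 -jar start.jar

# Nodes 2-6: copies of the example directory on other ports, all pointing at
# the same ZooKeeper (embedded ZK runs on jetty.port + 1000 = 9983)
cd ../example2
java -Djetty.port=7574 -DzkHost=localhost:9983 -jar start.jar
# ...repeat for four more copies on four more ports to get 3 replicas per shard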
Sorry for pinging this again; is more information needed on this? I
can provide more details but am not sure what to provide.
On Fri, Feb 10, 2012 at 10:26 AM, Jamie Johnson wrote:
Sorry, I shut down the full solr instance.
On Fri, Feb 10, 2012 at 9:42 AM, Mark Miller wrote:
Can you explain a little more how you are doing this? How are you bringing the
cores down and then back up? Shutting down a full solr instance, or unloading
the core?
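For context, unloading a single core, as opposed to stopping the whole
instance, goes through the CoreAdmin API; something like the following, reusing
the core name and port from earlier in the thread:

# Unload one core while the rest of the Solr instance keeps running
curl 'http://localhost:8501/solr/admin/cores?action=UNLOAD&core=slice1_shard1'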
On Feb 10, 2012, at 9:33 AM, Jamie Johnson wrote:
I know that the latest Solr Cloud doesn't use standard replication but
I have a question about how it appears to be working. I currently
have the following cluster state:
{"collection1":{
  "slice1":{
    "JamiesMac.local:8501_solr_slice1_shard1":{
      "shard_id":"slice1",
      "state":