That idea was short-lived. I excluded the document. The cluster isn't
syncing even after shutting everything down and restarting.
On Sun, Mar 18, 2012 at 2:58 PM, Matthew Parker <
mpar...@apogeeintegration.com> wrote:
I had tried importing data from Manifold, and one document threw a Tika
Exception.
If I shut everything down and restarted SOLR cloud, the system sync'd on
startup.
Could extraction errors be the issue?
On Sun, Mar 18, 2012 at 2:50 PM, Matthew Parker <
mpar...@apogeeintegration.com> wrote:
I have nodes running on ports: 8081-8084
A couple of the other SOLR cloud nodes were complaining about not being able to
talk with 8081, which is the first node brought up in the cluster.
The startup process is (rough equivalent commands are sketched below):
1. start 3 zookeeper nodes
2. wait until complete
3. start first solr node.
4. wait until complete
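For anyone trying to reproduce this sequence, here is a rough sketch of the equivalent commands against the stock Jetty-based example distribution of that era. The config name, the numShards value, and the ZooKeeper ports (2181-2183, taken from later in the thread) are assumptions, not necessarily Matthew's exact setup:

    # 1-2. start each of the three ZooKeeper instances (each with its own config and dataDir) and wait for the ensemble to come up
    # 3. start the first Solr node, pointing it at the ensemble and bootstrapping the shared config set
    cd example
    java -Djetty.port=8081 -DzkHost=localhost:2181,localhost:2182,localhost:2183 -Dbootstrap_confdir=./solr/conf -Dcollection.configName=conf1 -DnumShards=2 -jar start.jar
    # 4. wait until that node is up, then start the remaining nodes the same way, each on its own port
    java -Djetty.port=8082 -DzkHost=localhost:2181,localhost:2182,localhost:2183 -jar start.jar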
I think he's asking if all the nodes (same machine or not) return a
response. Presumably you have different ports for each node since they
are on the same machine.
On Sun, 2012-03-18 at 14:44 -0400, Matthew Parker wrote:
The cluster is running on one machine.
On Sun, Mar 18, 2012 at 2:07 PM, Mark Miller wrote:
From every node in your cluster you can hit http://MACHINE1:8084/solr in your
browser and get a response?
On Mar 18, 2012, at 1:46 PM, Matthew Parker wrote:
> My cloud instance finally tried to sync. It looks like it's having connection
> issues, but I can bring the SOLR instance up in the browser.
This might explain another thing I'm seeing. If I take a node down,
clusterstate.json still shows it as active. Also if I'm running 4 nodes,
take one down and assign it a new port, clusterstate.json will show 5 nodes
running.
On Sat, Mar 17, 2012 at 10:10 PM, Mark Miller wrote:
Nodes talk to ZooKeeper as well as to each other. You can see the addresses
they are trying to use to communicate with each other in the 'cloud' view of
the Solr Admin UI. Sometimes you have to override these, as the detected
default may not be an address that other nodes can reach. As a limited
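A minimal sketch of that kind of override, assuming the stock 4.x example solr.xml, which exposes the published address and port as the host and jetty.port system properties; the IP address here is made up:

    # publish an address the other nodes can actually reach instead of the auto-detected one
    java -Dhost=192.168.1.50 -Djetty.port=8081 -DzkHost=localhost:2181,localhost:2182,localhost:2183 -jar start.jar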
I'm still having issues replicating in my work environment. Can anyone
explain how the replication mechanism works? Is it communicating across
ports or through zookeeper to manage the process?
On Thu, Mar 8, 2012 at 10:57 PM, Matthew Parker <
mpar...@apogeeintegration.com> wrote:
All,
I recreated the cluster on my machine at home (Windows 7, Java 1.6.0.23,
apache-solr-4.0-2012-02-29_09-07-30), sent some documents through Manifold
using its crawler, and it looks like it's replicating fine once the
documents are committed.
This must be related to my environment somehow. Tha
I've ensured the SOLR data subdirectories and files were completely cleaned
out, but the issue still occurs.
On Fri, Mar 2, 2012 at 9:06 AM, Erick Erickson wrote:
Matt:
Just for paranoia's sake, when I was playing around with this (the
_version_ thing was one of my problems too) I removed the entire data
directory as well as the zoo_data directory between experiments (and
recreated just the data dir). This included various index.2012
files and the tlog
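A minimal sketch of that cleanup, assuming the stock example layout and Unix-style commands (use rmdir /s or Explorer on Windows); exact paths depend on the build and on the dataDir configured in solrconfig.xml:

    # with every Solr and ZooKeeper node stopped:
    rm -rf example/solr/data       # holds the index.* directories and the tlog
    rm -rf example/solr/zoo_data   # embedded ZooKeeper state; an external ensemble keeps its own dataDir
    mkdir example/solr/data        # recreate just the data dir, as described above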
> I'm assuming the Windows configuration looked correct?
Yeah, so far I cannot spot any smoking gun... I'm confounded at the moment.
I'll re-read through everything once more...
- Mark
I reindex every time I change something.
I also delete any zookeeper data too.
I'm assuming the Windows configuration looked correct?
On Thu, Mar 1, 2012 at 3:39 PM, Mark Miller wrote:
> P.S. FYI you will have to reindex after adding _version_ back into the schema...
I tried publishing to the /update/extract request handler using Manifold, but
got the same result.
I also tried swapping out the replication handlers too, but that didn't do
anything.
Otherwise, that's it.
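For comparison, a bare-bones way to post a file straight to the standard extracting handler, taking Manifold out of the picture; the port, document id, and file path are placeholders:

    curl "http://localhost:8081/solr/update/extract?literal.id=doc1&commit=true" -F "myfile=@/path/to/test.pdf"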
On Thu, Mar 1, 2012 at 3:35 PM, Mark Miller wrote:
> Any other customizations you are making to solrconfig?
P.S. FYI you will have to reindex after adding _version_ back into the schema...
On Mar 1, 2012, at 3:35 PM, Mark Miller wrote:
Any other customizations you are making to solrconfig?
On Mar 1, 2012, at 1:48 PM, Matthew Parker wrote:
Added it back in. I still get the same result.
On Wed, Feb 29, 2012 at 10:09 PM, Mark Miller wrote:
Do you have a _version_ field in your schema? I actually just came back to
this thread with that thought and then saw your error - so that remains my
guess.
I'm going to improve the doc on the wiki around what needs to be defined
for SolrCloud - so far we have things in the example defaults, but i
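For anyone hitting the same thing, this is the definition the 4.x example schema.xml uses for that field (it assumes the stock "long" field type is defined):

    <field name="_version_" type="long" indexed="true" stored="true"/>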
Mark/Sami
I ran the system with 3 zookeeper nodes, 2 solr cloud nodes, and left
numShards set to its default value (i.e. 1)
It looks like it finally sync'd with the other one after quite a while, but
it's throwing lots of errors like the following:
org.apache.solr.common.SolrException: missing _version_
Sami,
I have the latest as of the 26th. My system is running on a standalone
network so it's not easy to get code updates without a wave of paperwork.
I installed per the detailed instructions I laid out a couple of messages
ago, earlier today (2/29/2012).
I'm running the following query:
http:/
On Wed, Feb 29, 2012 at 7:03 PM, Matthew Parker
wrote:
> I also took out my requestHandler and used the standard /update/extract
> handler. Same result.
How did you install/start the system this time? The same way as
earlier? What kind of queries do you run?
Would it be possible for you to check
I also took out my requestHandler and used the standard /update/extract
handler. Same result.
On Wed, Feb 29, 2012 at 11:47 AM, Matthew Parker <
mpar...@apogeeintegration.com> wrote:
I tried running SOLR Cloud with the default number of shards (i.e. 1), and
I get the same results.
On Wed, Feb 29, 2012 at 10:46 AM, Matthew Parker <
mpar...@apogeeintegration.com> wrote:
Mark,
Nothing appears to be wrong in the logs. I wiped the indexes and imported
37 files from SharePoint using Manifold. All 37 make it in, but SOLR still
has issues with the results being inconsistent.
Let me run my setup by you and see whether that is the issue.
On one machine, I have three zookeeper nodes
Hmm...this is very strange - there is nothing interesting in any of the logs?
In clusterstate.json, all of the shards have an active state?
There are quite a few of us doing exactly this setup recently, so there must be
something we are missing here...
Any info you can offer might help.
- Mark
Mark,
I got the codebase from 2/26/2012, and I got the same inconsistent
results.
I have solr running on four ports 8081-8084
8081 and 8082 are the leaders for shard 1, and shard 2, respectively
8083 - is assigned to shard 1
8084 - is assigned to shard 2
queries come in and sometime it see
I'll have to check on the commit situation. We have been pushing data from
SharePoint the last week or so. Would that somehow block the documents
moving between the solr instances?
I'll try another version tomorrow. Thanks for the suggestions.
On Mon, Feb 27, 2012 at 5:34 PM, Mark Miller wrote:
Hmmm...all of that looks pretty normal...
Did a commit somehow fail on the other machine? When you view the stats for the
update handler, are there a lot of pending adds for one of the nodes? Do the
commit counts match across nodes?
You can also query an individual node with distrib=false to check.
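Concretely, that per-node check can look like the following; the host and port are assumptions based on the four-node setup described earlier in the thread:

    # ask a single node for its own view of the index, without fanning the query out to the other shards
    curl "http://localhost:8081/solr/select?q=*:*&rows=0&distrib=false"
    # and, if pending adds are piling up on a node, force a commit on it directly
    curl "http://localhost:8081/solr/update?commit=true"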
Here is most of the cluster state:
Connected to Zookeeper
localhost:2181, localhost:2182, localhost:2183
/(v=0 children=7) ""
/CONFIGS(v=0, children=1)
/CONFIGURATION(v=0 children=25)
< all the configuration files, velocity info, xslt, etc.
/NODE_STATES(v=0 child
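The same state can also be read straight out of ZooKeeper with its own command-line client, which is handy when the admin page will not render it; the ensemble address is taken from the lines above, and znode names can vary slightly between snapshots:

    # from the ZooKeeper installation directory
    bin/zkCli.sh -server localhost:2181
    # then, at the zk prompt:
    get /clusterstate.json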
I was trying to use the new interface. I see it using the old admin page.
Is there a piece of it you're interested in? I don't have access to the
Internet where it exists so it would mean transcribing it.
On Mon, Feb 27, 2012 at 2:47 PM, Mark Miller wrote:
On Feb 27, 2012, at 2:22 PM, Matthew Parker wrote:
Thanks for your reply Mark.
I believe the build was towards the beginning of the month. The
solr.spec.version is 4.0.0.2012.01.10.38.09
I cannot access the clusterstate.json contents. I clicked on it a couple of
times, but nothing happens. Is that stored on disk somewhere?
I configured a custom request handler
Hey Matt - is your build recent?
Can you visit the cloud/zookeeper page in the admin and send the contents of
the clusterstate.json node?
Are you using a custom index chain or anything out of the ordinary?
- Mark
On Feb 27, 2012, at 12:26 PM, Matthew Parker wrote:
> TWIMC:
>
> Environment
>