RE: trouble instantiating CloudSolrServer

2012-11-03 Thread Markus Jelsma
Hi, I added the following dependency to Apache Nutch:
org="org.apache.solr" name="solr-solrj" rev="4.0.0"
 
 
-Original message-
> From:Lance Norskog 
> Sent: Sat 03-Nov-2012 04:34
> To: solr-user@lucene.apache.org; markrmil...@gmail.com
> Subject: Re: trouble instantiating CloudSolrServer
> 
> What is the maven repo id & version for this?
> 
> - Original Message -
> | From: "Mark Miller" 
> | To: solr-user@lucene.apache.org
> | Sent: Friday, November 2, 2012 6:52:10 PM
> | Subject: Re: trouble instantiating CloudSolrServer
> | 
> | I think the maven jars must be out of whack?
> | 
> | On Fri, Nov 2, 2012 at 6:38 AM, Markus Jelsma
> |  wrote:
> | > Hi,
> | >
> | > We use trunk but got SolrJ 4.0 from Maven. Creating an instance of
> | > CloudSolrServer fails because its constructor calls a nonexistent
> | > LBHttpSolrServer constructor: it attempts to create an instance by
> | > passing only an HttpClient. How is LBHttpSolrServer supposed to work
> | > without passing a SolrServer URL to it?
> | >
> | >   public CloudSolrServer(String zkHost) throws
> | >   MalformedURLException {
> | >   this.zkHost = zkHost;
> | >   this.myClient = HttpClientUtil.createClient(null);
> | >   this.lbServer = new LBHttpSolrServer(myClient);
> | >   this.updatesToLeaders = true;
> | >   }
> | >
> | > java.lang.NoSuchMethodError:
> | > 
> org.apache.solr.client.solrj.impl.LBHttpSolrServer.&lt;init&gt;(Lorg/apache/http/client/HttpClient;[Ljava/lang/String;)V
> | > at
> | > 
> org.apache.solr.client.solrj.impl.CloudSolrServer.&lt;init&gt;(CloudSolrServer.java:84)
> | >
> | > Thanks,
> | > Markus
> | 
> | 
> | 
> | --
> | - Mark
> | 
> 
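A NoSuchMethodError like the one above usually means the class loaded at runtime is a different version than the one the caller was compiled against - here, an LBHttpSolrServer jar without the (HttpClient, String...) constructor. One quick way to confirm which constructors the jar on the classpath really provides is reflection; a generic sketch (nothing Solr-specific is assumed, the class name is passed in):

```java
// Diagnostic sketch for NoSuchMethodError: list the constructors that the
// class actually loaded from the classpath provides, to spot a jar/version
// mismatch between compile time and run time.
import java.lang.reflect.Constructor;
import java.util.ArrayList;
import java.util.List;

public class CtorCheck {
    public static List<String> ctors(String className) throws Exception {
        List<String> out = new ArrayList<>();
        for (Constructor<?> c : Class.forName(className).getConstructors()) {
            out.add(c.toString());
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        // On a Solr classpath you would pass
        // "org.apache.solr.client.solrj.impl.LBHttpSolrServer" here.
        String name = args.length > 0 ? args[0] : "java.lang.String";
        for (String sig : ctors(name)) {
            System.out.println(sig);
        }
    }
}
```

On the Nutch classpath you would pass the LBHttpSolrServer class name; if the HttpClient variant is missing from the output, the solr-solrj revision does not match the server-side classes.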


RE: SolrCloud indexing blocks if node is recovering

2012-11-03 Thread Markus Jelsma
Hi - yes, I should be able to make sense of them next Monday. I assume 
you're not too interested in the OOM machine, but rather in all the 
surrounding nodes that blocked? 
 


-Original message-
> From:Mark Miller 
> Sent: Sat 03-Nov-2012 03:14
> To: solr-user@lucene.apache.org
> Subject: Re: SolrCloud indexing blocks if node is recovering
> 
> Doesn't sound right. Still have the logs?
> 
> - Mark
> 
> On Fri, Nov 2, 2012 at 9:45 AM, Markus Jelsma
>  wrote:
> > Hi,
> >
> > We just tested indexing some million docs from Hadoop to a 10 node 2 rep 
> > SolrCloud cluster with this week's trunk. One of the nodes gave an OOM but 
> > indexing continued without interruption. When i restarted the node indexing 
> > stopped completely, the node tried to recover - which was unsuccessful. I 
> > restarted the node again but that wasn't very helpful either. Finally i 
> > decided to stop the node completely and see what happens - indexing resumed.
> >
> > Why or how won't the other nodes accept incoming documents when one node 
> > behaves really bad? The dying node wasn't the node we were sending 
> > documents to and we are not using CloudSolrServer yet (see other thread). 
> > Is this known behavior? Is it a bug?
> >
> > Thanks,
> > Markus
> 
> 
> 
> -- 
> - Mark
> 


Solr 3.6 -> 4.0

2012-11-03 Thread Nathan Findley

Hi all,

I have one machine running Solr 3.6.  I would like to move this data to 
Solr 4.0 and set up a SolrCloud.


I feel like I should replicate the existing data.  After that, it isn't 
clear to me what I need to do.


1) Create a slave (4.0) that replicates from the master (3.6).
2) Somehow turn the slave into a part of a solrcloud?

If there are any online articles about this process or you have any 
suggestions, I would appreciate it!


Thanks,
Nate


Re: Solr 3.6 -> 4.0

2012-11-03 Thread Otis Gospodnetic
Hi,

Check the archive for a similar Q&A yesterday.  Reindexing would be the
cleanest.

Otis
--
Performance Monitoring - http://sematext.com/spm
On Nov 3, 2012 8:22 AM, "Nathan Findley"  wrote:

> Hi all,
>
> I have one machine running solr 3.6.  I would like to move this data to
> solr 4.0 and set up a solrcloud.
>
> I feel like I should replicate the existing data.  After that, it isn't
> clear to me what I need to do.
>
> 1) Create a slave (4.0) that replicates from the master (3.6).
> 2) Somehow turn the slave into a part of a solrcloud?
>
> If there are any online articles about this process or you have any
> suggestions, I would appreciate it!
>
> Thanks,
> Nate
>


Re: SolrCloud indexing blocks if node is recovering

2012-11-03 Thread Mark Miller
The OOM machine and any surrounding nodes if possible (e.g. especially the 
leader of the shard).

Not sure what I'm looking for yet, so the more info the better.

- Mark

On Nov 3, 2012, at 5:23 AM, Markus Jelsma  wrote:

> Hi - yes, i should be able to make sense out of them next monday. I assume 
> you're not too interested in the OOM machine but all surrounding nodes that 
> blocked instead? 
> 
> 
> 
> -Original message-
>> From:Mark Miller 
>> Sent: Sat 03-Nov-2012 03:14
>> To: solr-user@lucene.apache.org
>> Subject: Re: SolrCloud indexing blocks if node is recovering
>> 
>> Doesn't sound right. Still have the logs?
>> 
>> - Mark
>> 
>> On Fri, Nov 2, 2012 at 9:45 AM, Markus Jelsma
>>  wrote:
>>> Hi,
>>> 
>>> We just tested indexing some million docs from Hadoop to a 10 node 2 rep 
>>> SolrCloud cluster with this week's trunk. One of the nodes gave an OOM but 
>>> indexing continued without interruption. When i restarted the node indexing 
>>> stopped completely, the node tried to recover - which was unsuccessful. I 
>>> restarted the node again but that wasn't very helpful either. Finally i 
>>> decided to stop the node completely and see what happens - indexing resumed.
>>> 
>>> Why or how won't the other nodes accept incoming documents when one node 
>>> behaves really bad? The dying node wasn't the node we were sending 
>>> documents to and we are not using CloudSolrServer yet (see other thread). 
>>> Is this known behavior? Is it a bug?
>>> 
>>> Thanks,
>>> Markus
>> 
>> 
>> 
>> -- 
>> - Mark
>> 



Re: Continuous Ping query caused exception: java.util.concurrent.RejectedExecutionException

2012-11-03 Thread Mark Miller

On Nov 1, 2012, at 5:39 AM, Markus Jelsma  wrote:

> File bug?

Please.

- Mark

Re: trunk is unable to replicate between nodes ( Unable to download ... completely)

2012-11-03 Thread Mark Miller
Likely some of the trunk work around allowing any Directory impl to replicate. 
JIRA pls :)

- Mark

On Oct 30, 2012, at 12:29 PM, Markus Jelsma  wrote:

> Hi,
> 
> We're testing again with today's trunk and using the new Lucene 4.1 format by 
> default. When nodes are not restarted things are kind of stable but 
> restarting nodes leads to a lot of mayhem. It seems we can get the cluster 
> back up and running by clearing ZK and restarting everything (another issue) 
> but replication becomes impossible for some nodes leading to a continuous 
> state of failing recovery etc.
> 
> Here are some excerpts from the logs:
> 
> 2012-10-30 16:12:39,674 ERROR [solr.servlet.SolrDispatchFilter] - 
> [http-8080-exec-5] - : null:java.lang.IndexOutOfBoundsException
>at java.nio.Buffer.checkBounds(Buffer.java:530)
>at java.nio.DirectByteBuffer.get(DirectByteBuffer.java:218)
>at 
> org.apache.lucene.store.ByteBufferIndexInput.readBytes(ByteBufferIndexInput.java:91)
>at 
> org.apache.solr.handler.ReplicationHandler$DirectoryFileStream.write(ReplicationHandler.java:1065)
>at 
> org.apache.solr.handler.ReplicationHandler$3.write(ReplicationHandler.java:932)
> 
> 
> 2012-10-30 16:10:32,220 ERROR [solr.handler.ReplicationHandler] - 
> [RecoveryThread] - : SnapPull failed :org.apache.solr.common.SolrException: 
> Unable to download _x.fdt completely. Downloaded 13631488!=13843504
>at 
> org.apache.solr.handler.SnapPuller$DirectoryFileFetcher.cleanup(SnapPuller.java:1237)
>at 
> org.apache.solr.handler.SnapPuller$DirectoryFileFetcher.fetchFile(SnapPuller.java:1118)
>at 
> org.apache.solr.handler.SnapPuller.downloadIndexFiles(SnapPuller.java:716)
>at 
> org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:387)
>at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:273)
>at 
> org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:152)
>at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:407)
> 
> 2012-10-30 16:12:51,061 WARN [solr.handler.ReplicationHandler] - 
> [http-8080-exec-3] - : Exception while writing response for params: 
> file=_p_Lucene41_0.doc&command=filecontent&checksum=true&generation=6&qt=/replication&wt=filestream
> java.io.EOFException: read past EOF: 
> MMapIndexInput(path="/opt/solr/cores/openindex_h/data/index.20121030152234973/_p_Lucene41_0.doc")
>at 
> org.apache.lucene.store.ByteBufferIndexInput.readBytes(ByteBufferIndexInput.java:100)
>at 
> org.apache.solr.handler.ReplicationHandler$DirectoryFileStream.write(ReplicationHandler.java:1065)
>at 
> org.apache.solr.handler.ReplicationHandler$3.write(ReplicationHandler.java:932)
> 
> 
> Needless to say I'm puzzled, so I'm wondering if anyone has seen this before 
> or has some hints that might help dig further.
> 
> Thanks,
> Markus



Re: No lockType configured for NRTCachingDirectory

2012-11-03 Thread Mark Miller
I think I've seen it on 4.X as well yesterday. Let's file a JIRA to track 
looking into it.

- Mark

On Oct 31, 2012, at 11:30 AM, Markus Jelsma  wrote:

> That's 5, the actual trunk/
> 
> -Original message-
>> From:Mark Miller 
>> Sent: Wed 31-Oct-2012 16:29
>> To: solr-user@lucene.apache.org
>> Subject: Re: No lockType configured for NRTCachingDirectory
>> 
>> By trunk do you mean 4X or 5X?
>> 
>> On Wed, Oct 31, 2012 at 7:47 AM, Markus Jelsma
>>  wrote:
>>> Hi,
>>> 
>>> Besides replication issues (see other thread) we're also seeing these 
>>> warnings in the logs on all 10 nodes and for all cores using today's or 
>>> yesterday's trunk.
>>> 
>>> 2012-10-31 11:01:03,328 WARN [solr.core.CachingDirectoryFactory] - [main] - 
>>> : No lockType configured for 
>>> NRTCachingDirectory(org.apache.lucene.store.MMapDirectory@/opt/solr/cores/shard_h/data
>>>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@5dd183b7; 
>>> maxCacheMB=48.0 maxMergeSizeMB=4.0) assuming 'simple'
>>> 
>>> The factory is configured like:
>>> 
>>> <config>
>>>   <luceneMatchVersion>LUCENE_50</luceneMatchVersion>
>>>   <directoryFactory name="DirectoryFactory"
>>>     class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>
>>>   ..
>>> </config>
>>> 
>>> And the locking mechanism is configured like:
>>> 
>>> <indexConfig>
>>>   ..
>>>   <lockType>native</lockType>
>>>   ..
>>> </indexConfig>
>>> 
>>> Any ideas as to why it doesn't seem to see my lockType?
>>> 
>>> Thanks
>>> Markus
>> 
>> 
>> 
>> -- 
>> - Mark
>> 



Re: Possible memory leak in recovery

2012-11-03 Thread Mark Miller
Nothing I know of - file a bug please. Might be related to the EOF issue, so 
you might add the details to that JIRA.

- Mark 

On Nov 2, 2012, at 10:13 AM, Markus Jelsma  wrote:

> Hi,
> 
> We wiped clean the data directories for one node. That node is never able to 
> recover and regularly runs OOM. On another cluster (with an older build, 
> September 10th) memory consumption on recovery is fairly low when recovering 
> and with only a 250MB heap allocated it's easy to recover two 4GB cores from 
> scratch at the same time. On this new test cluster we see the following 
> happening: 
> - no index, start recovery
> - recovery fails (see other thread, cannot read past EOF when reading index 
> files)
> - heap is not released
> - recovery is retried, fails
> - heap is not released
> .. OOM
> 
> The distinct sawtooth pattern is not there; heap consumption only grows in 
> significant steps when recovery is retried but fails. If I increase the heap, 
> recovery simply fails a few more times.
> 
> I cannot find an existing issue but may have overlooked it. Should I file a 
> bug, or did I miss a JIRA issue?
> 
> Thanks,
> Markus
> 



SolrCloud failover behavior

2012-11-03 Thread Nick Chase
I think there's a change in the behavior of SolrCloud vs. what's in the 
wiki, but I was hoping someone could confirm for me.  I checked JIRA and 
there were a couple of issues requesting partial results if one server 
comes down, but that doesn't seem to be the issue here.  I also checked 
CHANGES.txt and don't see anything that seems to apply.


I'm running "Example B: Simple two shard cluster with shard replicas" 
from the wiki at https://wiki.apache.org/solr/SolrCloud and everything 
starts out as expected.  However, things get a little wonky when I get to 
the part about failover behavior.


I added data to the shard running on 7475.  If I kill 7500, a query to 
any of the other servers works fine.  But if I kill 7475, rather than 
getting zero results on a search to 8983 or 8900, I get a 503 error:



   
<response>
  <lst name="responseHeader">
    <int name="status">503</int>
    <int name="QTime">5</int>
    <lst name="params">
      <str name="q">*:*</str>
    </lst>
  </lst>
  <lst name="error">
    <str name="msg">no servers hosting shard:</str>
    <int name="code">503</int>
  </lst>
</response>

I don't see any errors in the consoles.

Also, if I kill 8983, which includes the Zookeeper server, everything 
dies, rather than just staying in a steady state; the other servers 
continually show:


Nov 03, 2012 11:39:34 AM org.apache.zookeeper.ClientCnxn$SendThread 
startConnect

INFO: Opening socket connection to server localhost/0:0:0:0:0:0:0:1:9983
Nov 03, 2012 11:39:35 AM org.apache.zookeeper.ClientCnxn$SendThread run
WARNING: Session 0x13ac6cf87890002 for server null, unexpected error, 
closing socket connection and attempting reconnect

java.net.ConnectException: Connection refused: no further information
   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
   at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)
   at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)


Nov 03, 2012 11:39:35 AM org.apache.zookeeper.ClientCnxn$SendThread 
startConnect


over and over again, and a call to any of the servers shows a connection 
error to 8983.


This is the current 4.0.0 release, running on Windows 7.

If this is the proper behavior and the wiki needs updating, fine; I just 
need to know.  Otherwise if anybody has any clues as to what I may be 
missing, I'd be grateful. :)


Thanks...

---  Nick


All document keywords must match the query keywords

2012-11-03 Thread SR
Solr 4.0

I need to return documents only when all of their keywords match the query. In 
other words, every document keyword should appear among the query keywords.

e.g., query: best chinese food restaurant

doc1: chinese food
doc2: italian food
doc3: chinese store

Only doc1 should be returned ("chinese food" is matching the query).

Any idea on how this can be achieved?

Thanks,
-Steve

customize similarity function

2012-11-03 Thread SR
Solr 4.0

I want to avoid TF-IDF and use a "binary" model, i.e., if the keyword is in 
the document, the score is 1, no matter how frequent the keyword is in that 
document. If the keyword is not in the document, then the score is zero. I 
also want to avoid the IDF.

e.g.,

query: pizza

doc: pizza pizza

the score of "pizza" within doc should be 1.

Any idea on how this can be achieved?

Thanks,
-SR

Re: All document keywords must match the query keywords

2012-11-03 Thread Gora Mohanty
On 3 November 2012 22:17, SR  wrote:

> Solr 4.0
>
> I need to return documents when all their keywords are matching the query.
> In other words, all the document keywords should match the query keywords
>
> e.g., query: best chinese food restaurant
>
> doc1: chinese food
> doc2: italian food
> doc3: chinese store
>
> Only doc1 should be returned ("chinese food" is matching the query).
>
> Any idea on how this can be achieved?
>

Not sure what you mean by all the keywords should match, given your
examples above. doc2 will match because of "food" and doc3 will match
because of "chinese".

If you really want all search terms to be matched, you can change the
default operator for solrQueryParser in schema.xml from OR to AND,
but in your example even doc1 will not match as you are searching
for "best chinese food restaurant". If you searched for "chinese food"
it would match.

Regards,
Gora
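If requiring all *query* terms to match is acceptable, the change Gora describes is a one-line edit; a sketch of the Solr 4.0 schema.xml setting (element name taken from the stock example schema - note this is still the reverse of what the original poster asked for, which was all *document* terms required):

```xml
<!-- schema.xml: change the default operator from OR to AND,
     so every query term becomes required -->
<solrQueryParser defaultOperator="AND"/>
```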


Re: All document keywords must match the query keywords

2012-11-03 Thread SR

On 2012-11-03, at 12:55 PM, Gora Mohanty wrote:

> On 3 November 2012 22:17, SR  wrote:
> 
>> Solr 4.0
>> 
>> I need to return documents when all their keywords are matching the query.
>> In other words, all the document keywords should match the query keywords
>> 
>> e.g., query: best chinese food restaurant
>> 
>> doc1: chinese food
>> doc2: italian food
>> doc3: chinese store
>> 
>> Only doc1 should be returned ("chinese food" is matching the query).
>> 
>> Any idea on how this can be achieved?
>> 
> 
> Not sure what you mean by all the keywords should match, given your
> examples above. doc2 will match because of "food" and doc3 will match
> because of "chinese".
> 
> If you really want all search terms to be matched, you can change the
> default operator for solrQueryParser in schema.xml from OR to AND,
> but in your example even doc1 will not match as you are searching
> for "best chinese food restaurant". If you searched for "chinese food"
> it would match.
> 
> Regards,
> Gora

Hi Gora,

I really meant that: doc2 shouldn't match because "italian" is not in the 
query, and the same for doc3 with "store". It's like applying an AND to the 
document keywords instead of the query keywords.

Thanks,
-S
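The requested semantics invert the usual match direction: a document qualifies only when its term set is a subset of the query's term set. A plain-Java sketch of that predicate (not the Solr API; inside Solr you would still need to index or post-filter accordingly):

```java
// Plain-Java sketch of the requested semantics: a document matches only
// when every one of its keywords appears in the query.
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SubsetMatch {
    public static boolean matches(Set<String> docTerms, Set<String> queryTerms) {
        // AND over the *document* terms, not the query terms
        return queryTerms.containsAll(docTerms);
    }

    public static void main(String[] args) {
        Set<String> query = new HashSet<>(
            Arrays.asList("best", "chinese", "food", "restaurant"));
        System.out.println(matches(new HashSet<>(Arrays.asList("chinese", "food")), query));  // true  (doc1)
        System.out.println(matches(new HashSet<>(Arrays.asList("italian", "food")), query));  // false (doc2)
        System.out.println(matches(new HashSet<>(Arrays.asList("chinese", "store")), query)); // false (doc3)
    }
}
```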

Solr - Disk writes and set up suggestions

2012-11-03 Thread tictacs
Hi,

My site has 30,000 widgets and 500,000 widget users.  

I have created two solr indexes, one for widgets and one for users.  The
widgets index is 324MB and the users index is 9.3GB. 

We are optimizing the index every hour, and during this time the server slows
to a crawl, apparently due to the amount of disk writes - atop is showing:

PID  SYSCPU  USRCPU  VGROW  RGROW  RDDSK  WRDSK  ST EXC S  CPU CMD 1/2
15979 118m35s  17h28m   3.3G   1.2G 353.2G 1887.0G  N-   - S   0% java
 8674 441m37s 637m34s  15.9G   3.2G 122.6G 10805.6G  N-   - S   0% java
  484 189m50s   0.00s 0K 0K 94540K  91.3G  N-   - S   0% kjournald
  868 150m19s   0.00s 0K 0K 4K 4K  N-   - S   0% flush-104:0
  116 118m52s   0.00s 0K 0K 0K 383.9M  N-   - S   0% kswapd0
19079  21m38s  80m49s  33.5G   3.1G 113.2G 110.6G  N-   - S   0% mysqld
18955  33m22s  94.26s 77296K  9744K  66.7G  38.9G  N-   - S   0% perl

It is a good spec machine in terms of processor and memory - 24GB RAM and a
6 core Xeon proc but I am wondering if I have made a mistake with the disks,
it only has standard 7200 RPM SATA disks.  

Would I be much better off going with 15K RPM SAS drives?  If I could
get SSD disks, would they be an improvement?  My current host's charges
for SSD drives are obscene though, so that isn't likely to happen...

Currently the index that my application searches against sits on the server
where optimization takes place and search slows noticeably.  I could easily
run a slave on another lower powered machine but my host only has a 100 Mbps
connection between servers and I am concerned that due to the size of the
index copying it between machines will still cause disk writes on the slave
machine and I will be no better off.

Does anyone have any suggestions as to server set up to make my search fast
constantly for end users? 

Cheers,

Tictacs





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-Disk-writes-and-set-up-suggestions-tp4018031.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: Solr - Disk writes and set up suggestions

2012-11-03 Thread Michael Ryan
I'd recommend not optimizing every hour. Are you seeing a significant 
performance increase from optimizing this frequently?

-Michael


Re: customize similarity function

2012-11-03 Thread Otis Gospodnetic
Hi,

Look at where the Similarity implementation is specified in solrconfig.xml.
Find that class in Lucene and you will see the tf and idf methods you need
for your implementation, which you can then specify in solrconfig. Reindexing
is required.

Otis
--
Performance Monitoring - http://sematext.com/spm
On Nov 3, 2012 12:51 PM, "SR"  wrote:

> Solr 4.0
>
> I want to avoid the TF.IDF and use a "binary" model, i.e., if the keyword
> is in the document, the score is 1, no matter how frequent the keyword is
> in that document. If the keyword is not in the document, than the score is
> zero. I also want to avoid the idf.
>
> e.g.,
>
> query: pizza
>
> doc: pizza pizza
>
> the score of "pizza" within doc should be 1.
>
> Any idea on how this can be achieved?
>
> Thanks,
> -SR
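Along the lines Otis describes, in Lucene/Solr 4.0 one would subclass DefaultSimilarity and override its tf and idf methods to return a constant 1, then register the class via a `<similarity>` element (method and element names are from memory - treat them as an assumption and check the Similarity javadoc). The plain-Java toy below is not the Lucene API; it only illustrates the scoring difference the poster wants:

```java
// Toy illustration (plain Java, not the Lucene API) of "binary" scoring:
// the tf component is clamped to 1 and idf is ignored, so repeating a
// keyword in a document no longer raises its score.
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class BinaryScore {
    // Classic tf component: grows with the number of occurrences.
    public static double rawTf(List<String> docTerms, String term) {
        return Collections.frequency(docTerms, term);
    }

    // "Binary" model: 1 if the term occurs at all, else 0.
    public static double binaryTf(List<String> docTerms, String term) {
        return docTerms.contains(term) ? 1.0 : 0.0;
    }

    public static void main(String[] args) {
        List<String> doc = Arrays.asList("pizza", "pizza");
        System.out.println(rawTf(doc, "pizza"));    // 2.0
        System.out.println(binaryTf(doc, "pizza")); // 1.0
        System.out.println(binaryTf(doc, "pasta")); // 0.0
    }
}
```

With the binary model, "pizza pizza" and "pizza" score identically for the query "pizza", which is exactly the behavior asked for above.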


Re: Solr - Disk writes and set up suggestions

2012-11-03 Thread Otis Gospodnetic
Hi,

This should become a FAQ. Short version: don't optimize. Check ML archives
for recent messages and explanations.

If you have a monitoring tool, look at disk io during and after
optimization, check solr cache hit rates, etc.

Otis
--
Performance Monitoring - http://sematext.com/spm
On Nov 3, 2012 6:41 PM, "tictacs"  wrote:

> Hi,
>
> My site has 30,000 widgets and 500,000 widget users.
>
> I have created two solr indexes, one for widgets and one for users.  The
> widgets index is 324MB and the users index is 9.3GB.
>
> We are opimizing the index every hour and during this time the server is
> slowing to a crawl, looks like due to the amount of disk writes - atop is
> showing:
>
> PID  SYSCPU  USRCPU  VGROW  RGROW  RDDSK  WRDSK  ST EXC S  CPU CMD 1/2
> 15979 118m35s  17h28m   3.3G   1.2G 353.2G 1887.0G  N-   - S   0% java
>  8674 441m37s 637m34s  15.9G   3.2G 122.6G 10805.6G  N-   - S   0% java
>   484 189m50s   0.00s 0K 0K 94540K  91.3G  N-   - S   0% kjournald
>   868 150m19s   0.00s 0K 0K 4K 4K  N-   - S   0%
> flush-104:0
>   116 118m52s   0.00s 0K 0K 0K 383.9M  N-   - S   0% kswapd0
> 19079  21m38s  80m49s  33.5G   3.1G 113.2G 110.6G  N-   - S   0% mysqld
> 18955  33m22s  94.26s 77296K  9744K  66.7G  38.9G  N-   - S   0% perl
>
> It is a good spec machine in terms of processor and memory - 24GB RAM and a
> 6 core Xeon proc but I am wondering if I have made a mistake with the
> disks,
> it only has standard 7200 RPM SATA disks.
>
> Would I be much better off with going for 15K RPM SAS drives?  If I could
> get SSD disks would they be an improvement?  My current server hosts
> charges
> for SSD drives are obscene though so that isn't likely to happen...
>
> Currently the index that my application searches against sits on the server
> where optimization takes place and search slows noticably.  I could easily
> run a slave on another lower powered machine but my host only has a 100
> Mbps
> connection between servers and I am concerned that due to the size of the
> index copying it between machines will still cause disk writes on the slave
> machine and I will be no better off.
>
> Does anyone have any suggestions as to server set up to make my search fast
> constantly for end users?
>
> Cheers,
>
> Tictacs
>
>
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Solr-Disk-writes-and-set-up-suggestions-tp4018031.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>


Re: All document keywords must match the query keywords

2012-11-03 Thread Jack Krupansky
But neither "best" nor "restaurant" appears in any of the documents, so how are 
any of these documents reasonable matches?


You have the semantics of query backwards. The documents are the "data" and 
the query is the "operation" to be performed on the data. The intent of a 
query is to specify what documents should be selected. That is the 
function/purpose of any query, in any search system.


-- Jack Krupansky

-Original Message- 
From: SR

Sent: Saturday, November 03, 2012 5:09 PM
To: solr-user@lucene.apache.org
Subject: Re: All document keywords must match the query keywords


On 2012-11-03, at 12:55 PM, Gora Mohanty wrote:


On 3 November 2012 22:17, SR  wrote:


Solr 4.0

I need to return documents when all their keywords are matching the 
query.

In other words, all the document keywords should match the query keywords

e.g., query: best chinese food restaurant

doc1: chinese food
doc2: italian food
doc3: chinese store

Only doc1 should be returned ("chinese food" is matching the query).

Any idea on how this can be achieved?



Not sure what you mean by all the keywords should match, given your
examples above. doc2 will match because of "food" and doc3 will match
because of "chinese".

If you really want all search terms to be matched, you can change the
default operator for solrQueryParser in schema.xml from OR to AND,
but in your example even doc1 will not match as you are searching
for "best chinese food restaurant". If you searched for "chinese food"
it would match.

Regards,
Gora


Hi Gora,

I really meant that. doc 2 shouldn't match because "italian" is not in the 
query. Same thing for doc3 with "store". It's like applying an AND but on 
the document keywords, instead of the query keywords.


Thanks,
-S



Re: All document keywords must match the query keywords

2012-11-03 Thread SR
Thanks Jack.

This is not the ultimate goal of my search system; it's only one of the 
features I need. I don't need "best" and "restaurant" to match in this feature.

Yes, I do have the semantics of the query backwards, and that's what I need in 
my application.

-S


On 2012-11-03, at 10:05 PM, Jack Krupansky wrote:

> But neither "best" nor "restaurant" are in any of the documents, so how are 
> any of these documents reasonable matches?
> 
> You have the semantics of query backwards. The documents are the "data" and 
> the query is the "operation" to be performed on the data. The intent of a 
> query is to specify what documents should be selected. That is the 
> function/purpose of any query, in any search system.
> 
> -- Jack Krupansky
> 
> -Original Message- From: SR
> Sent: Saturday, November 03, 2012 5:09 PM
> To: solr-user@lucene.apache.org
> Subject: Re: All document keywords must match the query keywords
> 
> 
> On 2012-11-03, at 12:55 PM, Gora Mohanty wrote:
> 
>> On 3 November 2012 22:17, SR  wrote:
>> 
>>> Solr 4.0
>>> 
>>> I need to return documents when all their keywords are matching the query.
>>> In other words, all the document keywords should match the query keywords
>>> 
>>> e.g., query: best chinese food restaurant
>>> 
>>> doc1: chinese food
>>> doc2: italian food
>>> doc3: chinese store
>>> 
>>> Only doc1 should be returned ("chinese food" is matching the query).
>>> 
>>> Any idea on how this can be achieved?
>>> 
>> 
>> Not sure what you mean by all the keywords should match, given your
>> examples above. doc2 will match because of "food" and doc3 will match
>> because of "chinese".
>> 
>> If you really want all search terms to be matched, you can change the
>> default operator for solrQueryParser in schema.xml from OR to AND,
>> but in your example even doc1 will not match as you are searching
>> for "best chinese food restaurant". If you searched for "chinese food"
>> it would match.
>> 
>> Regards,
>> Gora
> 
> Hi Gora,
> 
> I really meant that. doc 2 shouldn't match because "italian" is not in the 
> query. Same thing for doc3 with "store". It's like applying an AND but on the 
> document keywords, instead of the query keywords.
> 
> Thanks,
> -S



Re: solr search issue

2012-11-03 Thread Erick Erickson
You really need to spend some time becoming familiar with
1> the results of putting &debugQuery=on in your queries in order to see
how your query terms are spread across various fields.
2> the admin/analysis page to understand field tokenization.

From your message, it looks like you're confusing several issues, analysis
and query parsing in particular.

For instance, your "id" field is probably a string. Which is completely
unanalyzed. Your data field may well have
something called WordDelimiterFilterFactory in it, which splits numbers and
letters into two tokens that are treated
independently. Do you know if this is occurring? If you look at the debug
output, you'll start to get a feel for this.

Second. Edismax happily allows field qualification, so you can enter
something like q=data:level2 if that is what
you want.

Best
Erick


On Fri, Nov 2, 2012 at 4:01 AM, Romita Saha wrote:

> Hi,
>
> I am new to solr. Could you kindly explain a bit about defining free text
> search.
>
> In my database I have two columns. One is id another is data.
> I want my query to spread across multiple fields. When i search for a
> parameter from id filed, it searches it in both the fields. However
> whenever I search a parameter from data field, it only searches in data.
> Below is my query.
>
> http://localhost:8983/solr/db/select/?defType=dismax&q=2&qf=data
> id^2&start=0&rows=11&fl=data,id
>
> In my table, id=2 for data=level2.
>id=4 for data=cashier2.
>
> When I search q=2&qf=data id, it searches for '2' in data field also and
> gives me both the results i.e data=level2 and data=cashier2.
> However, when i search for q=cashier2&qf=data id, it only gives me result
> as data=cashier2 and not data=level2 (please note that id=2 for data =
> level2. Ideally it should break the query into cashier+2 and search in id
> field as well)
>
>
> Thanks and regards,
> Romita Saha
>
> Panasonic R&D Center Singapore
> Blk 1022 Tai Seng Avenue #06-3530
> Tai Seng Ind. Est. Singapore 534415
> DID: (65) 6550 5383 FAX: (65) 6550 5459
> email: romita.s...@sg.panasonic.com
>
>
>
> From:   Erick Erickson 
> To: solr-user@lucene.apache.org,
> Date:   11/02/2012 02:42 PM
> Subject:Re: solr search issue
>
>
>
> First, define a "free text search". If what you're after is that your
> terms
> (i.e. q=term1 term2) get spread
> across multiple fields, simply add them to your qf parameter
> (qf=field1,field2). If you want the terms
> bound to a particular field, it's just the usual q=field:term, in which
> case any field term does NOT get
> spread amongst all the fields in your qf parameter.
>
> Best
> Erick
>
>
> On Fri, Nov 2, 2012 at 1:56 AM, Romita Saha
> wrote:
>
> > Hi,
> >
> > Thank you for your reply. What if I want to do a free text search?
> >
> > Thanks and regards,
> > Romita
> >
> >
> > From:   Gora Mohanty 
> > To: solr-user@lucene.apache.org,
> > Date:   11/02/2012 12:36 PM
> > Subject:Re: solr search issue
> >
> >
> >
> > On 2 November 2012 09:51, Romita Saha 
> > wrote:
> > >
> > > Hi,
> > >
> > > I am trying to search a database . In my database I have a field
> level2.
> > >
> > > My query:
> > >
> >
> >
>
> http://localhost:8983/solr/db/select/?defType=dismax&q=search%20level2&qf=data%20id
>
> > ^2%20&start=0&rows=11&fl=data,id
> >
> >
> > Where did you get this syntax from? If you want to search just on the
> > field level2, you should have:
> > http://localhost:8983/solr/db/select/?q=term&defType=dismax&qf=level2
> > where "term" is your search term. (I have omitted boosts, and extra
> > parameters.)
> >
> > Regards,
> > Gora
> >
> >
>
>
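Erick's WordDelimiterFilterFactory point is why "cashier2" can match a search for "2": the filter splits tokens at letter/digit boundaries. A rough stdlib imitation of just that one rule (the real Lucene filter handles far more - case transitions, delimiters, catenation options):

```java
// Rough stdlib imitation of one WordDelimiterFilterFactory rule: split a
// token wherever a letter/digit boundary occurs. The real Lucene filter
// handles many more cases (case changes, punctuation, catenation options).
import java.util.Arrays;
import java.util.List;

public class LetterDigitSplit {
    public static List<String> split(String token) {
        // Lookaround regex: split between a letter followed by a digit,
        // or a digit followed by a letter, without consuming characters.
        return Arrays.asList(
            token.split("(?<=\\p{L})(?=\\p{N})|(?<=\\p{N})(?=\\p{L})"));
    }

    public static void main(String[] args) {
        System.out.println(split("cashier2")); // [cashier, 2]
        System.out.println(split("level2"));   // [level, 2]
    }
}
```

This is why a query for "2" with qf spanning the data field can return both "level2" and "cashier2" once the field's analyzer splits tokens this way, while the unanalyzed id field matches only exact values.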


Re: All document keywords must match the query keywords

2012-11-03 Thread Ahmet Arslan
Hi Steve,

I would store my documents as queries in your case. You may find these 
relevant. 

http://lucene.apache.org/core/4_0_0-BETA/memory/org/apache/lucene/index/memory/MemoryIndex.html

http://www.elasticsearch.org/blog/2011/02/08/percolator.html


--- On Sun, 11/4/12, SR  wrote:

> From: SR 
> Subject: Re: All document keywords must match the query keywords
> To: solr-user@lucene.apache.org
> Date: Sunday, November 4, 2012, 4:16 AM
> Thanks Jack.
> 
> This is not the ultimate goal of my search system; it's only
> one of the features I need. I don't need "best" and
> "restaurant" to match in this feature.
> 
> Yes, I do have the semantic of query backwards, and that's
> what I need in my application.
> 
> -S
> 
> 
> On 2012-11-03, at 10:05 PM, Jack Krupansky wrote:
> 
> > But neither "best" nor "restaurant" are in any of the
> documents, so how are any of these documents reasonable
> matches?
> > 
> > You have the semantics of query backwards. The
> documents are the "data" and the query is the "operation" to
> be performed on the data. The intent of a query is to
> specify what documents should be selected. That is the
> function/purpose of any query, in any search system.
> > 
> > -- Jack Krupansky
> > 
> > -Original Message- From: SR
> > Sent: Saturday, November 03, 2012 5:09 PM
> > To: solr-user@lucene.apache.org
> > Subject: Re: All document keywords must match the query
> keywords
> > 
> > 
> > On 2012-11-03, at 12:55 PM, Gora Mohanty wrote:
> > 
> >> On 3 November 2012 22:17, SR 
> wrote:
> >> 
> >>> Solr 4.0
> >>> 
> >>> I need to return documents when all their
> keywords are matching the query.
> >>> In other words, all the document keywords
> should match the query keywords
> >>> 
> >>> e.g., query: best chinese food restaurant
> >>> 
> >>> doc1: chinese food
> >>> doc2: italian food
> >>> doc3: chinese store
> >>> 
> >>> Only doc1 should be returned ("chinese food" is
> matching the query).
> >>> 
> >>> Any idea on how this can be achieved?
> >>> 
> >> 
> >> Not sure what you mean by all the keywords should
> match, given your
> >> examples above. doc2 will match because of "food"
> and doc3 will match
> >> because of "chinese".
> >> 
> >> If you really want all search terms to be matched,
> you can change the
> >> default operator for solrQueryParser in schema.xml
> from OR to AND,
> >> but in your example even doc1 will not match as you
> are searching
> >> for "best chinese food restaurant". If you searched
> for "chinese food"
> >> it would match.
> >> 
> >> Regards,
> >> Gora
> > 
> > Hi Gora,
> > 
> > I really meant that. doc 2 shouldn't match because
> "italian" is not in the query. Same thing for doc3 with
> "store". It's like applying an AND but on the document
> keywords, instead of the query keywords.
> > 
> > Thanks,
> > -S
> 
> 

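The rule SR describes in the thread above (a document matches only when every one of its keywords appears in the query, the reverse of the usual AND semantics) can be sketched outside Solr and Lucene as plain set containment. The class and method names below are illustrative, not part of any Solr or Lucene API:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ReverseMatch {
    // A document matches when all of its terms are covered by the query's terms.
    static boolean matches(String query, String doc) {
        Set<String> queryTerms =
            new HashSet<>(Arrays.asList(query.toLowerCase().split("\\s+")));
        for (String term : doc.toLowerCase().split("\\s+")) {
            if (!queryTerms.contains(term)) {
                return false; // a document term the query does not mention
            }
        }
        return true;
    }

    public static void main(String[] args) {
        String query = "best chinese food restaurant";
        System.out.println(matches(query, "chinese food"));  // true  (doc1)
        System.out.println(matches(query, "italian food"));  // false (doc2)
        System.out.println(matches(query, "chinese store")); // false (doc3)
    }
}
```

This reproduces the expected outcome from the example: only doc1 matches. A production version would run the document through the same analyzer as the query, which is essentially what the MemoryIndex/percolator approach Ahmet links to does.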

Re: All document keywords must match the query keywords

2012-11-03 Thread SR
Thanks Ahmet, that's exactly what I need. Do you know whether this feature exists 
in Solr? Or do I have to go through Lucene directly?

Thanks,
-SR

On 2012-11-03, at 10:26 PM, Ahmet Arslan wrote:

> Hi Steve,
> 
> I would store my documents as queries in your case. You may find these 
> relevant. 
> 
> http://lucene.apache.org/core/4_0_0-BETA/memory/org/apache/lucene/index/memory/MemoryIndex.html
> 
> http://www.elasticsearch.org/blog/2011/02/08/percolator.html
> 



Re: Nested Join Queries

2012-11-03 Thread Erick Erickson
I'm going to go a bit sideways on you, partly because I can't answer the
question ...

But, every time I see someone doing what looks like substituting "core" for
"table" and
then trying to use Solr like a DB, I get on my soap-box and preach..

In this case, consider de-normalizing your DB so you can ask the query in
terms
of search rather than joins. e.g.

Make each document a combination of the author and the book, with an
additional
field "author_has_written_a_bestseller". Now your query becomes a really
simple
search, "author:name AND author_has_written_a_bestseller:true". True, this
kind
of approach isn't as flexible as an RDBMS, but it's a _search_ rather than
a query.
Yes, it replicates data, but unless you have a huge combinatorial
explosion, that's
not a problem.
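
As a sketch of the denormalized approach: one document per (author, book) pair, with the precomputed flag stored on every document. The field names follow Erick's example; the values are illustrative and assume a standard Solr XML update:

```xml
<add>
  <doc>
    <field name="id">author7-book42</field>
    <field name="author">Jane Doe</field>
    <field name="book_title">Some Title</field>
    <field name="author_has_written_a_bestseller">true</field>
  </doc>
</add>
```

The query then stays within one core, e.g. `q=author:"Jane Doe" AND author_has_written_a_bestseller:true`. The trade-off is that the flag must be recomputed and the author's documents re-indexed whenever one of their books becomes a bestseller: duplication in exchange for avoiding joins.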

And the join functionality isn't called "pseudo" for nothing. It was
written for a specific
use-case. It is often expensive, especially when the field being joined has
many unique
values.

FWIW,
Erick


On Fri, Nov 2, 2012 at 11:32 AM, Gerald Blanck <
gerald.bla...@barometerit.com> wrote:

> At a high level, I have a need to be able to execute a query that joins
> across cores, and that query during its joining may join back to the
> originating core.
>
> Example:
> Find all Books written by an Author who has written a best selling Book.
>
> In Solr query syntax
> A) against the book core - bestseller:true
> B) against the author core - {!join fromIndex=book from=id
> to=bookid}bestseller:true
> C) against the book core - {!join fromIndex=author from=id
> to=authorid}{!join fromIndex=book from=id to=bookid}bestseller:true
>
> A - returns results
> B - returns results
> C - does not return results
>
> Given that A and C use the same core, I started looking for join code that
> compares the originating core to the fromIndex and found this
> in JoinQParserPlugin (line #159).
>
> if (info.getReq().getCore() == fromCore) {
>
>   // if this is the same core, use the searcher passed in...
> otherwise we could be warming and
>
>   // get an older searcher from the core.
>
>   fromSearcher = searcher;
>
> } else {
>
>   // This could block if there is a static warming query with a
> join in it, and if useColdSearcher is true.
>
>   // Deadlock could result if two cores both had useColdSearcher
> and had joins that used eachother.
>
>   // This would be very predictable though (should happen every
> time if misconfigured)
>
>   fromRef = fromCore.getSearcher(false, true, null);
>
>
>   // be careful not to do anything with this searcher that requires
> the thread local
>
>   // SolrRequestInfo in a manner that requires the core in the
> request to match
>
>   fromSearcher = fromRef.get();
>
> }
>
> I found that if I were to modify the above code so that it always follows
> the logic in the else block, I get the results I expect.
>
> Can someone explain to me why the code is written as it is?  And if we were
> to run with only the else block being executed, what type of adverse
> impacts we might have?
>
> Does anyone have other ideas on how to solve this issue?
>
> Thanks in advance.
> -Gerald
>


Re: All document keywords must match the query keywords

2012-11-03 Thread Otis Gospodnetic
It doesn't exist in solr. We've built it for clients. Elasticsearch has it
built in.

Otis
--
Performance Monitoring - http://sematext.com/spm
On Nov 3, 2012 10:37 PM, "SR"  wrote:

> Thanks Ahmet that's exactly what I need. Do you now whether this feature
> exists in Solr? Or do I have to go through Lucene directly?
>
> Thanks,
> -SR
>


Re: All document keywords must match the query keywords

2012-11-03 Thread SR
Thanks Otis.
By "we" you mean "Lucid works"?

Is there a chance to get it sometime soon in the open source?

Thanks,
-S

On 2012-11-03, at 10:39 PM, Otis Gospodnetic wrote:

> It doesn't exist in solr. We've built it for clients. Elasticsearch has it
> built in.
> 
> Otis
> --
> Performance Monitoring - http://sematext.com/spm
> On Nov 3, 2012 10:37 PM, "SR"  wrote:



Re: SolrCloud failover behavior

2012-11-03 Thread Erick Erickson
SolrCloud doesn't work unless every shard has at least one server that is
up and running.

I _think_ you might be killing both nodes that host one of the shards. The
admin
page has a link showing you the state of your cluster. So when this happens,
does that page show both nodes for that shard being down?

And yeah, SolrCloud requires a quorum of ZK nodes up. So with only one ZK
node, killing that will bring down the whole cluster. Which is why the
usual recommendation is to run ZK externally, with an odd number of
nodes (three or more).
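
Erick's recommendation above, sketched as a minimal external three-node ensemble config (hostnames and paths are placeholders, not taken from this thread):

```
# zoo.cfg, identical on each of the three ZooKeeper hosts
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper   # each host's myid file lives here
clientPort=2181
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
```

Each Solr node is then pointed at the ensemble instead of an embedded ZK, e.g. `-DzkHost=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181`, so killing any single Solr node no longer takes ZooKeeper down with it.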

Anyone can create a login and edit the Wiki, so any clarifications are
welcome!

Best
Erick


On Sat, Nov 3, 2012 at 12:17 PM, Nick Chase  wrote:

> I think there's a change in the behavior of SolrCloud vs. what's in the
> wiki, but I was hoping someone could confirm for me.  I checked JIRA and
> there were a couple of issues requesting partial results if one server
> comes down, but that doesn't seem to be the issue here.  I also checked
> CHANGES.txt and don't see anything that seems to apply.
>
> I'm running "Example B: Simple two shard cluster with shard replicas" from
> the wiki at https://wiki.apache.org/solr/SolrCloud and everything starts
> out as expected.  However, when I get to the part about failover
> behavior, things get a little wonky.
>
> I added data to the shard running on 7475.  If I kill 7500, a query to any
> of the other servers works fine.  But if I kill 7475, rather than getting
> zero results on a search to 8983 or 8900, I get a 503 error:
>
> <response>
>   <lst name="responseHeader">
>     <int name="status">503</int>
>     <int name="QTime">5</int>
>     <lst name="params">
>       <str name="q">*:*</str>
>     </lst>
>   </lst>
>   <lst name="error">
>     <str name="msg">no servers hosting shard:</str>
>     <int name="code">503</int>
>   </lst>
> </response>
>
> I don't see any errors in the consoles.
>
> Also, if I kill 8983, which includes the Zookeeper server, everything
> dies, rather than just staying in a steady state; the other servers
> continually show:
>
> Nov 03, 2012 11:39:34 AM org.apache.zookeeper.ClientCnxn$SendThread
> startConnect
> INFO: Opening socket connection to server localhost/0:0:0:0:0:0:0:1:9983
> Nov 03, 2012 11:39:35 AM org.apache.zookeeper.ClientCnxn$SendThread run
> WARNING: Session 0x13ac6cf87890002 for server null, unexpected error,
> closing socket connection and attempting reconnect
> java.net.ConnectException: Connection refused: no further information
>at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)
>at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1143)
>
> Nov 03, 2012 11:39:35 AM org.apache.zookeeper.ClientCnxn$SendThread
> startConnect
>
> over and over again, and a call to any of the servers shows a connection
> error to 8983.
>
> This is the current 4.0.0 release, running on Windows 7.
>
> If this is the proper behavior and the wiki needs updating, fine; I just
> need to know.  Otherwise if anybody has any clues as to what I may be
> missing, I'd be grateful. :)
>
> Thanks...
>
> ---  Nick
>