*-enable and *-disable scripts
Hi, I'm trying to understand what the enable and disable scripts do, for example rsyncd-enable and rsyncd-disable. As far as I can tell, all they do is touch or remove a file, logs/rsyncd-enabled; they don't do anything more than that. The daemon start script checks for the presence of this file before starting the daemon. This seems like an unnecessary extra layer: if you can enable the daemon you can also start it, so why not just start it? What am I missing? Andrew. -- [EMAIL PROTECTED] / [EMAIL PROTECTED] http://www.andrewsavory.com/
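The sentinel-file pattern Andrew describes can be sketched roughly like this (the paths, function names, and messages are illustrative, not the actual scripts):

```shell
#!/bin/sh
# Sketch of the enable/start split described above (all names hypothetical).
STATE_DIR="${STATE_DIR:-/tmp/demo-state}"
mkdir -p "$STATE_DIR"

enable_daemon() {
    # The enable script only records intent by creating the sentinel;
    # it does not start anything itself.
    touch "$STATE_DIR/rsyncd-enabled"
}

disable_daemon() {
    rm -f "$STATE_DIR/rsyncd-enabled"
}

start_daemon() {
    # The start script refuses to run unless the sentinel exists, so a
    # machine-wide "start everything" loop (e.g. at boot) skips daemons
    # that an admin has disabled.
    if [ -f "$STATE_DIR/rsyncd-enabled" ]; then
        echo "starting rsyncd"
    else
        echo "rsyncd is disabled, skipping"
    fi
}

enable_daemon
start_daemon    # prints "starting rsyncd"
disable_daemon
start_daemon    # prints "rsyncd is disabled, skipping"
```

One plausible answer to Andrew's question: the split separates persistent policy ("this daemon should run on this host") from a one-off action ("run it now"), so enabled-ness survives reboots and bulk restarts without each admin having to remember which daemons belong on which machine.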
Re: Delete by multiple query doesn't seem to work
Thanks for the suggestion. It didn't do anything. I ended up redoing the deletes as a set of individual <delete><id>xxx</id></delete> requests, POSTed one at a time over an open HTTP connection (as suggested in another email thread on the issue).

On May 22, 2008, at 12:39 AM, Shalin Shekhar Mangar wrote: Not sure, but try using: <delete><query>document_id:"A-395" OR document_id:"A-1949"</query></delete>

On Thu, May 22, 2008 at 7:46 AM, Tracy Flynn <[EMAIL PROTECTED]> wrote: I'm trying to exploit 'Delete by Query' with multiple IDs in the query. I'm using vanilla SOLR 1.2. My schema specifies document_id as the unique key. My unique document ids are of the form 'A-xxx', 'T-xxx' and so on.

The following individual delete works:
curl http://work:8983/solr/update -H "Content-Type: text/xml" --data-binary '<delete><id>A-3545</id></delete>'
curl http://work:8983/solr/update -H "Content-Type: text/xml" --data-binary '<commit/>'

I've tried both of the following without successfully deleting anything.

Attempt 1:
curl http://work:8983/solr/update -H "Content-Type: text/xml" --data-binary '<delete><query>id:A-395 OR id:A-1949</query></delete>'
curl http://work:8983/solr/update -H "Content-Type: text/xml" --data-binary '<commit/>'

Attempt 2:
curl http://work:8983/solr/update -H "Content-Type: text/xml" --data-binary '<delete><query>document_id:A-395 OR document_id:A-1949</query></delete>'
curl http://work:8983/solr/update -H "Content-Type: text/xml" --data-binary '<commit/>'

Any hints / ideas as to what I'm doing wrong, or where to look for the problem? Tracy

-- Regards, Shalin Shekhar Mangar.
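For reference, the payloads involved in this thread can be written out as shell variables like this (host, port, and field names are the ones from the thread; the curl lines are shown but not executed here):

```shell
#!/bin/sh
# Solr 1.2 update payloads from the thread above.
# Delete a single document by its unique key:
DELETE_BY_ID='<delete><id>A-3545</id></delete>'
# Delete by query, quoting each term as Shalin suggests:
DELETE_BY_QUERY='<delete><query>document_id:"A-395" OR document_id:"A-1949"</query></delete>'
# Deletes are not visible until a commit:
COMMIT='<commit/>'

# The actual requests would look like (not run in this sketch):
#   curl http://work:8983/solr/update -H "Content-Type: text/xml" --data-binary "$DELETE_BY_QUERY"
#   curl http://work:8983/solr/update -H "Content-Type: text/xml" --data-binary "$COMMIT"
echo "$DELETE_BY_QUERY"
```

Note the commit after the delete: a delete-by-query that "does nothing" is often just a missing commit, which is worth ruling out before blaming the query syntax.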
Re: query for number of field entries in a multivalued field?
Probably the easiest way to do this is to keep track of the number of items yourself at index time and then retrieve it later. On Wed, May 21, 2008 at 7:57 AM, Brian Whitman <[EMAIL PROTECTED]> wrote: > Any way to query how many items are in a multivalued field? (Or use a > function query against that # or anything?) > > -- Regards, Cuong Hoang
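Cuong's suggestion amounts to maintaining the count in your own indexing code. A sketch of what the add document might look like, with hypothetical field names:

```xml
<add>
  <doc>
    <field name="id">doc-1</field>
    <field name="tag">red</field>
    <field name="tag">blue</field>
    <field name="tag">green</field>
    <!-- maintained by the indexer, not computed by Solr -->
    <field name="tag_count">3</field>
  </doc>
</add>
```

With tag_count stored as an ordinary numeric field, it can be returned, sorted on, range-queried, or fed into a function query like any other field.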
Re: SOLR OOM (out of memory) problem
One correction: I have set documentCache as: initialSize=512, size=710, autowarmCount=512. The total number of insertions into documentCache goes up to at most 45 in a day, with 0 evictions, which means it never grows to 710. Thanks Mike Klaas wrote: > > > On 22-May-08, at 4:27 AM, gurudev wrote: > >> >> Hi Rong, >> >> My cache hit ratio are: >> >> filtercache: 0.96 >> documentcache:0.51 >> queryresultcache:0.58 > > Note that you may be able to reduce the _size_ of the document cache > without materially affecting the hit rate, since typically some > documents are much more frequently accessed than others. > > I'd suggest starting with 700k, which I would still consider a large > cache. > > -Mike > > > -- View this message in context: http://www.nabble.com/SOLR-OOM-%28out-of-memory%29-problem-tp17364146p17424355.html Sent from the Solr - User mailing list archive at Nabble.com.
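For reference, the documentCache is configured in solrconfig.xml along these lines (sizes taken from this thread). Note that the document cache cannot actually be autowarmed: Lucene's internal document ids change when a new searcher opens, so the autowarmCount setting has no effect there.

```xml
<!-- documentCache in solrconfig.xml; sizes are the ones discussed above -->
<documentCache
    class="solr.LRUCache"
    size="710"
    initialSize="512"
    autowarmCount="0"/>
```

If a cache shows far fewer insertions than its configured size with zero evictions, as here, shrinking it frees heap at no cost to the hit rate.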
Re: solr sorting problem
Hi, How was alphaOnlySort defined before you indexed it? It must be untokenized and this must be set before indexing. Otis -- Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch

----- Original Message -----
> From: pmg <[EMAIL PROTECTED]>
> To: solr-user@lucene.apache.org
> Sent: Thursday, May 22, 2008 10:19:52 PM
> Subject: Re: solr sorting problem
>
> I forgot to mention that I made changes to schema after indexing.
>
> pmg wrote:
> >
> > I have problem sorting solr results. Here is my solr config:
> > [field and fieldType definitions stripped from the archived message; only stored="true" survives]
> >
> > search query:
> > select/?&rows=100&start=0&q=artistId:100346%20AND%20type:track&sort=alphaTrackSort%20desc&fl=track
> >
> > does not sort track. Don't understand what is missing from config
>
> --
> View this message in context: http://www.nabble.com/solr-sorting-problem-tp17417394p17417408.html
> Sent from the Solr - User mailing list archive at Nabble.com.
Re: solr sorting problem
Thanks Otis. Here is my alphaOnlySort [fieldType definition stripped from the archived message]. This is exactly the same as the default except for the pattern replace filter. What's the easiest way to re-index without cleaning and rebuilding the index from scratch? Thanks!

Otis Gospodnetic wrote:
> Hi,
>
> How was alphaOnlySort defined before you indexed it? It must be
> untokenized and this must be set before indexing.
>
> Otis
> -- Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
> ----- Original Message -----
>> From: pmg <[EMAIL PROTECTED]>
>> To: solr-user@lucene.apache.org
>> Sent: Thursday, May 22, 2008 10:19:52 PM
>> Subject: Re: solr sorting problem
>>
>> I forgot to mention that I made changes to schema after indexing.
>>
>> pmg wrote:
>> >
>> > I have problem sorting solr results. Here is my solr config:
>> > [field and fieldType definitions stripped from the archived message]
>> >
>> > search query:
>> > select/?&rows=100&start=0&q=artistId:100346%20AND%20type:track&sort=alphaTrackSort%20desc&fl=track
>> >
>> > does not sort track. Don't understand what is missing from config
>>
>> --
>> View this message in context: http://www.nabble.com/solr-sorting-problem-tp17417394p17417408.html
>> Sent from the Solr - User mailing list archive at Nabble.com.

-- View this message in context: http://www.nabble.com/solr-sorting-problem-tp17417394p17430118.html Sent from the Solr - User mailing list archive at Nabble.com.
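For comparison, the stock alphaOnlySort type shipped in the Solr example schema looks roughly like this. The KeywordTokenizer keeps the whole field value as a single token (i.e. "untokenized", as Otis says), which is what makes the field usable for sorting; the PatternReplaceFilter then strips non-letter characters:

```xml
<fieldType name="alphaOnlySort" class="solr.TextField"
           sortMissingLast="true" omitNorms="true">
  <analyzer>
    <!-- emit the entire value as one token -->
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.TrimFilterFactory"/>
    <!-- drop everything that is not a lowercase letter -->
    <filter class="solr.PatternReplaceFilterFactory"
            pattern="([^a-z])" replacement="" replace="all"/>
  </analyzer>
</fieldType>
```

Because analysis happens at index time, any change to this chain (including the pattern replace filter) only takes effect for documents indexed after the change, which is why re-indexing is required here.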
SOLR index size
Hi, I'm using SOLR to keep track of customer complaints. I only need to keep recent complaints, but I want to keep as many as I can fit on my hard drive. Is there any way I can configure SOLR to dump old entries in the index when the index reaches a certain size? I'm using a month old version from trunk. Thanks, Marshall
Re: SOLR index size
Hi Marshall, There is no such built-in feature, but you can easily build simple external tools to do this. You can check the disk space used by the index (du). You can check the total number of docs in the index (via the Luke request handler). You'll still need some mechanism for retrieving the oldest N docs for removal. Otis -- Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch

----- Original Message -----
> From: Marshall Weir <[EMAIL PROTECTED]>
> To: solr-user@lucene.apache.org
> Sent: Friday, May 23, 2008 3:58:18 PM
> Subject: SOLR index size
>
> Hi,
>
> I'm using SOLR to keep track of customer complaints. I only need to
> keep recent complaints, but I want to keep as many as I can fit on my
> hard drive. Is there any way I can configure SOLR to dump old entries
> in the index when the index reaches a certain size? I'm using a month
> old version from trunk.
>
> Thanks,
> Marshall
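The external pruning tool Otis describes might be sketched like this. Everything here is an assumption for illustration: the index path, the size limit, and a timestamp field named "created_at" on each document (the curl lines are shown as comments, not executed):

```shell
#!/bin/sh
# Rough sketch of a size-based pruning cron job (names hypothetical).
INDEX_DIR="${INDEX_DIR:-/tmp/demo-index}"
MAX_KB=1048576   # prune once the index exceeds ~1 GB on disk

index_size_kb() {
    du -sk "$INDEX_DIR" | awk '{print $1}'
}

needs_pruning() {
    [ "$(index_size_kb)" -gt "$MAX_KB" ]
}

# When over the limit, delete the oldest docs with a date-range query, e.g.:
#   curl http://localhost:8983/solr/update -H "Content-Type: text/xml" \
#     --data-binary '<delete><query>created_at:[* TO 2008-04-01T00:00:00Z]</query></delete>'
# followed by '<commit/>' and '<optimize/>'; deleted docs do not free disk
# space until segments are merged.

mkdir -p "$INDEX_DIR"
if needs_pruning; then
    echo "over limit"
else
    echo "under limit"
fi
```

The optimize step matters for the stated goal: without a merge, pruning reduces the document count but not necessarily the bytes on disk.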
Asking for help?
Dear CXF users, There are many versions of CXF sloshing around. We've got 2.0.6 and 2.1, and many people have picked up earlier versions. If the early universe underwent 'inflation', CXF could perhaps be described as having experienced 'deflation', in the sense that we worked on and resolved many issues in the last year. Thus, I write to ask all of you to please remember to put plenty of information in your requests to this list. Tell us the version of CXF you are using, at the very least. Send us your Spring (or other) configuration. Thanks, benson
Can't get wild card search to work against data from DataImporter
I have a simple test schema that has users with the following columns: id, first_name, last_name. I added the following two fields into schema.xml: [snippet stripped from the archived message]. I added the following query in my data-config.xml: [snippet stripped from the archived message]. I then execute a dataimport. Using the admin window I type in the search firstName:Jeff and I get a response (per my data). But typing J?ff or firstName:J?ff returns nothing. What else do I need to do to make wildcard searches work? Julio Castillo Edgenuity Inc.