Re: Any idea ??? I'm lost .... Thanks
Hi I've found the error, It was actually silly .. solr's jar's file was in the bad folder. I just removed them .. now it works ... GREAT :) sunnyfr wrote: > > Yes I tried to change the name manually but it didn't help, nothing > changed. > we spoke about the file in data/solr/ this directory which contain > solr.war. > Yes I did try !! > > > ryantxu wrote: >> >> hymm -- i've replied to this three times now... but it does not appear >> the list revieved it... >> http://www.nabble.com/Any-idea-I%27m-lost--Thanks-to19762598.html >> (now i'm trying from a different client) >> >> Have you tried "solr.xml" rather then "multicore.xml"? >> >> before 1.3 was released, the file was renamed >> >> >> On Wed, Oct 1, 2008 at 12:31 PM, sunnyfr <[EMAIL PROTECTED]> wrote: >>> >>> It use to work .. I updated it with the patch to make it works with >>> multicore >>> and it worked perfectly three days ago before this snapshooter bug ... >>> it's like if I missed jar files somewhere or ... >>> >>> >>> Brendan Grainger-2 wrote: Sorry Sunny, Will have to punt on this one. If I were you I'd try using 1.3. To be honest, if I remember correctly, 1.2 didn't have multicore support. Regards Brendan On Oct 1, 2008, at 12:20 PM, sunnyfr wrote: > > Thanks Brendan, > > I use solr 1.2 ... I will update to solr 1.3 soon .. I tried to > rename it > ... but still . > help i need somebody .. heppp LOL > > Thanks Brendan > > > > Brendan Grainger-2 wrote: >> >> Hi Sunny, >> >> Sorry, I've not use multicores with tomcat yet. However, I seem to >> remember that multicore.xml changed it's name to solr.xml. I take it >> you're using solr 1.3 or are you using a nightly build of some sort? >> >> Brendan >> >> On Oct 1, 2008, at 11:46 AM, sunnyfr wrote: >> >>> >>> Otherwise I've my solr.war in my foler /data/solr/ >>> I've no idea anymore ... Any idea Brendan? >>> >>> >>> sunnyfr wrote: I have solrconfig.xml in my folder /data/solr/books/conf/ and I've multicore.xml in /data/solr/ >>> sharedLib="lib" > otherwise I've my solr.xml in /etc/tomcat5.5/Catalina/localhost/ solr.xml >>> crossContext="true" > >>> value="/data/solr" override="true" /> and then I start tomcat5.5 ... do I miss something ? Brendan Grainger-2 wrote: > > Hi, > > I think: > > Can't find resource 'solrconfig.xml' in classpath or 'solr/conf/' > > is a major clue no? Do you actually have a solrconfig.xml and how > are > you starting solr? 
> > Regards > Brendan > > On Oct 1, 2008, at 11:11 AM, sunnyfr wrote: > >> >> Oct 1 16:45:10 solr-test jsvc.exec[23757]: eaa main >> Oct 1 16:45:10 solr-test jsvc.exec[23757]: Oct 1, 2008 4:45:10 >> PM >> org.apache.solr.servlet.SolrServlet init INFO: SolrServlet.init() >> Oct 1 16:45:10 solr-test jsvc.exec[23757]: Oct 1, 2008 4:45:10 >> PM >> org.apache.solr.servlet.SolrServlet init INFO: SolrServlet.init() >> done >> Oct 1 16:45:10 solr-test jsvc.exec[23757]: Oct 1, 2008 4:45:10 >> PM >> org.apache.solr.servlet.SolrUpdateServlet init INFO: >> SolrUpdateServlet.init() done >> Oct 1 16:45:10 solr-test jsvc.exec[23757]: Oct 1, 2008 4:45:10 >> PM >> org.apache.catalina.startup.HostConfig deployWAR INFO: Deploying >> web >> application archive apache-solr-1.3-dev.war >> Oct 1 16:45:10 solr-test jsvc.exec[23757]: Oct 1, 2008 4:45:10 >> PM >> org.apache.solr.servlet.SolrDispatchFilter init INFO: >> SolrDispatchFilter.init() >> Oct 1 16:45:10 solr-test jsvc.exec[23757]: Oct 1, 2008 4:45:10 >> PM >> org.apache.solr.core.SolrResourceLoader locateInstanceDir INFO: >> No >> /solr/home in JNDI >> Oct 1 16:45:10 solr-test jsvc.exec[23757]: Oct 1, 2008 4:45:10 >> PM >> org.apache.solr.core.SolrResourceLoader locateInstanceDir INFO: >> solr >> home >> defaulted to 'solr/' (could not find system property or JNDI) >> Oct 1 16:45:10 solr-test jsvc.exec[23757]: Oct 1, 2008 4:45:10 >> PM >> org.apache.solr.servlet.SolrDispatchFilter initMultiCore INFO: >> looking for >> multicore.xml: /solr/multicore.xml >> Oct 1 16:45:10 solr-test jsvc.exec[23757]: Oct 1, 2008 4:45:10 >> PM >> org.apache.solr.core.SolrResourceLoader locateInstanceDir INFO: >> No >> /solr/home in JNDI >>
solr servers synchronisation
Hello experts, I have a question with regards to synchronisation under Solr. I would like to have 2 Linux servers both running Solr, one that could act as master and the other as slave. Then I want to use Heartbeat to switch the IP when the master is down. My question is: do you know of any open source third-party tools that sync both Solrs (or maybe the data/index directories) whenever the master is re-indexed? In other words, I would like to use the master for indexing and searching, and a third-party sync tool to do the synchronisation between master and slave. I have Tsync in mind (as a sync tool), but I would surely like to know your suggestions. Many thanks, ak
Re: Replication on solr
Hi Bill, Just to know, so you use post commit and post optimize and did you create a cron job for snapshooter ? If yes when, the same minute as delta-import ? Thanks, Bill Au wrote: > > If you use cron, you should use the new "-c" option of snapshooter which > only takes a snapshot where there have been changes. My personal > preference > is to use postCommit and postOptimize event listeners. > > Bill > > On Wed, Oct 1, 2008 at 4:28 AM, sunnyfr <[EMAIL PROTECTED]> wrote: > >> >> Hi guys, >> >> Do you think it works better automaticly by solr after commit, fire >> snapshooter or start cron job. >> >> Thanks, >> >> >> hossman_lucene wrote: >> > >> > >> > : I want to run 3 to 4 instances of solr on different machines. the >> > other >> > : servers will be replicatin the index from the single server. >> > : how is that done and what options needed to modifies or added to >> config >> > : xml file of solr. >> > >> > I would start by looking at these wiki pages... >> > >> > http://wiki.apache.org/solr/CollectionDistribution >> > http://wiki.apache.org/solr/SolrCollectionDistributionOperationsOutline >> > >> > ...they explain everything you need to know about how the >> > creation/replication/installation of snapshots works. then if you look >> at >> > the example solrconfig.xml you'll see where the event listeners for >> > "snapshooter" are commented out ... just decide wether you want to >> create >> > snapshoots on each commit, or just after an optimize, and uncomment the >> > appropriate code. >> > >> > >> > >> > -Hoss >> > >> > >> > >> >> -- >> View this message in context: >> http://www.nabble.com/Replication-on-solr-tp5780286p19756456.html >> Sent from the Solr - User mailing list archive at Nabble.com. >> >> > > -- View this message in context: http://www.nabble.com/Replication-on-solr-tp5780286p19774832.html Sent from the Solr - User mailing list archive at Nabble.com.
Optimize
Hi, Can somebody explain to me a bit how optimize works? Is it automatic? I configured my snapshooter on postOptimize and not postCommit; did I miss something? I read the doc but didn't really get what fires optimize. I did that:
1 1000 /data/solr/video/bin/snapshooter /data/solr/video/bin true
Thanks a lot,
-- View this message in context: http://www.nabble.com/Optimize-tp19775177p19775177.html Sent from the Solr - User mailing list archive at Nabble.com.
Optimize
Hi, Can somebody explain to me a bit how optimize works? I read the doc but didn't really get what fires optimize. Thanks a lot,
-- View this message in context: http://www.nabble.com/Optimize-tp19775320p19775320.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: termFreq always = 1 ?
Yes, each one is a document. A real example : k1_en:men 0.81426066 ... 846 ... ;arm;arms;elbow;elbows;man;men;male;males;indoors;one;person;Men's;moods; ... ... 0.6232885 ... 652 ;portrait;portraits;young;adult;young;adults;*man*;*men*;male;males;male;males;young;*men*;young;*man*;identity;identities;self-confidence;assertiveness;male;beauty;masculine;beauty;*men's*;beauty;indoors;inside;day;daytime;one;person;one;individual;northern;european;caucasian ... .;. 0.81426066 = (MATCH) weight(k1_en:men in 35050), product of: 0.9994 = queryWeight(k1_en:men), product of: 2.3030772 = idf(docFreq=17576, numDocs=64694) 0.43420166 = queryNorm 0.8142607 = (MATCH) fieldWeight(k1_en:men in 35050), product of: *1.4142135 = tf(termFreq(k1_en:men)=2)* 2.3030772 = idf(docFreq=17576, numDocs=64694) 0.25 = fieldNorm(field=k1_en, doc=35050) ... 0.62328845 = (MATCH) weight(k1_en:men in 13312), product of: 0.9994 = queryWeight(k1_en:men), product of: 2.3030772 = idf(docFreq=17576, numDocs=64694) 0.43420166 = queryNorm 0.6232885 = (MATCH) fieldWeight(k1_en:men in 13312), product of: *1.7320508 = tf(termFreq(k1_en:men)=3)* 2.3030772 = idf(docFreq=17576, numDocs=64694) 0.15625 = fieldNorm(field=k1_en, doc=13312) ... You can see here for the first document termFreq = 2 and for the second document termFreq = 3 ... And I would like to have termFreq = 1 in each case for this field (k1_en). Thanks for in advance your help, On Wed, Oct 1, 2008 at 8:45 PM, Otis Gospodnetic <[EMAIL PROTECTED] > wrote: > In each of your examples (is each one a documen?) I see only 1 "men" > instance, so "men" term frequency should be 1 for that document. > > Otis > -- > Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch > > > > - Original Message > > From: KLessou <[EMAIL PROTECTED]> > > To: solr-user@lucene.apache.org > > Sent: Wednesday, October 1, 2008 11:43:59 AM > > Subject: Re: termFreq always = 1 ? > > > > Yes this may be my problem, > > > > But is there any solution to have only one "men" keyword indexed when > i''ve > > got something like this : > > > > 1 - k1_en = men;business;Men > > or : > > 2 - k1_en = man,business,men > > or : > > 3 - k1_en = Man,men,business,Men,man > > ... > > > > Thx in advance, > > > > On Wed, Oct 1, 2008 at 5:12 PM, Otis Gospodnetic > > > wrote: > > > > > Hi, > > > > > > Note that RemoveDuplicatesTokenFilterFactory "filters out any tokens > which > > > are at the same logical position in the tokenstream as a previous token > with > > > the same text." > > > > > > So if you have "men in black are real men" then > > > RemoveDuplicatesTokenFilterFactory will not remove duplicate "men". > > > > > > This may or may not be your problem. > > > > > > Otis > > > -- > > > Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch > > > > > > > > > > > > - Original Message > > > > From: KLessou > > > > To: solr-user@lucene.apache.org > > > > Sent: Wednesday, October 1, 2008 9:48:28 AM > > > > Subject: termFreq always = 1 ? > > > > > > > > Hi, > > > > > > > > I want to index a list of keywords. > > > > > > > > When I search "k1_en:men", I find a lot of documents like that : > > > > > > > > DocA : > > > > (k1_en = man;men;Men;business... termFreq=2) > > > > DocB : > > > > (k1_en = man;Men;business... termFreq=1) > > > > DocC : > > > > ... > > > > DocD : > > > > ... > > > > DocE : > > > > ... > > > > > > > > But I don't want to have a different termFreq for DocA & DocB. 
> > > > > > > > I try RemoveDuplicatesTokenFilterFactory but it doesn't seem to help > me > > > :-/ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ignoreCase="true"/> > > > > > > > > protected="protwords.txt" /> > > > > > > > > > > > > > > > > > > > > generateWordParts="0" > > > > generateNumberParts="0" > > > > catenateWords="0" > > > > catenateNumbers="0" > > > > catenateAll="0" > > > > /> > > > > > > > > > > > > > > > > > > > > /> > > > > > > > > > > > > > > > > > > > > ignoreCase="true"/> > > > > > > > > protected="protwords.txt" /> > > > > > > > > > > > > > > > > generateWordParts="0" > > > > generateNumberParts="0" > > > > catenateWords="0" > > > > catenateNumbers="0" > > > > catenateAll="0" > > > > /> > > > > > > > > > > > > > > > > > > > > > > > > ... > > > > > > > > > > > > > > > > required="false" /> > > > > > > > > > > > > If you have any idea, thx in advance. > > > > > > > > -- > > > > ~ > > > > | klessou | > > > > ~ > > > > > > > > > > > > -- > > ~ > > | klessou | > > ~ > > -- ~ | klessou | ~
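(A quick note on the explain output above: with Lucene's default Similarity, which Solr uses unless another one is configured, the term-frequency factor is tf(freq) = sqrt(freq). That is where the quoted numbers come from: sqrt(2) = 1.4142135 for the first document and sqrt(3) = 1.7320508 for the second. If the duplicate tokens were collapsed to a single one, both factors would drop back to sqrt(1) = 1, which is the behaviour being asked for here.)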
Fwd: CFP open for ApacheCon Europe 2009
Begin forwarded message: From: Noirin Shirley <[EMAIL PROTECTED]> Date: October 2, 2008 4:22:06 AM EDT To: [EMAIL PROTECTED] Subject: CFP open for ApacheCon Europe 2009 Reply-To: [EMAIL PROTECTED] Reply-To: [EMAIL PROTECTED] PMCs: Please send this on to your users@ lists! If you only have thirty seconds: The Call for Papers for ApacheCon Europe 2009, to be held in Amsterdam, from 23rd to 27th March, is now open! Submit your proposals at http://eu.apachecon.com/c/aceu2009/cfp/ before 24th October. Remember that early bird prices for ApacheCon US 2008, to be held in New Orleans, from 3rd to 7th November, will go up this Friday, at midnight Eastern time! Sponsorship opportunities for ApacheCon US 2008 and ApacheCon EU 2009 are still available. If you or your company are interested in becoming a sponsor, please contact Delia Frees at [EMAIL PROTECTED] for details. *** If you want all the details: ApacheCon Europe 2009 - Leading the Wave of Open Source Amsterdam, The Netherlands 23rd to 27th March, 2009 Call for Papers Opens for ApacheCon Europe 2009 The Apache Software Foundation (ASF) invites submissions to its official conference, ApacheCon Europe 2009. To be held 23rd to 27th March, 2009 at the Mövenpick Hotel Amsterdam City Centre, ApacheCon serves as a forum for showcasing the ASF's latest developments, including its projects, membership, and community. ApacheCon offers unparalleled educational opportunities, with dedicated presentations, hands-on trainings, and sessions that address core technology, development, business/marketing, and licensing issues in Open Source. ApacheCon's wide range of activities are designed to promote the exchange of ideas amongst ASF Members, innovators, developers, vendors, and users interested in the future of Open Source technology. The conference program includes competitively selected presentations, trainings/workshops, and a small number of invited speakers. All sessions undergo a peer review process by the ApacheCon Conference Planning team. The following information provides presentation category descriptions, and information about how to submit your proposal. Conference Themes and Topics APACHECON 2009 - LEADING THE WAVE OF OPEN SOURCE Building on the success of the last two years, we are excited to return to Amsterdam in 2009. We'll be continuing to offer our very popular two-day trainings, including certifications of completion for those who fulfill all the requirements of these trainings. The ASF comprises some of the most active and recognized developers in the Open Source community. By bringing together the pioneers, developers, and users of flagship Open Source technologies, ApacheCon provides an influential platform for dialogue, between the speaker and the audience, between project contributors and the community at large, traversing a wide range of ideas, expertise, and personalities. ApacheCon welcomes submissions from like-minded delegates across many fields, geographic locations, and areas of development. The breadth and loosely-structured nature of the Apache community lends itself to conference content that is also somewhat loosely- structured. Common themes of interest address groundbreaking technologies and emerging trends, successful practices (from development to deployment), and lessons learned (tips, tools, and tricks). In addition to technical content, ApacheCon invites Business Track submissions that address Open Source business, marketing, and legal/licensing issues. 
Topics appropriate for submission to this conference are manifold, and may include but are not restricted to: - Apache HTTP server topics such as installation, configuration, and migration - ASF-wide projects such as Lucene, SpamAssassin, Jackrabbit, and Maven - Scripting languages and dynamic content such as Java, Perl, Python, Ruby, XSL, and PHP - Security and e-commerce - Performance tuning, load balancing and high availability - New technologies and broader initiatives such as Web Services and Web 2.0 - ASF-Incubated projects such as Sling, UIMA, and Shindig Submission Guidelines Submissions must include - Title - Speaker name, with affiliation and email address - Speaker bio (100 words or less) - Short description (50 words or less) - Full description including abstract and objectives (200 words or less) - Expertise level (beginner to advanced) - Format and duration (trainings vs. general presentation; half-, full- or two-day workshop, etc.) - Intended audience and maximum number of participants (trainings only) - Background knowledge expected of the participants (trainings only) Types of Presentations - Trainings/Workshops - General Sessions - Case Studies/Industry Profiles - Invited Keynotes/Panels/Speakers - Corporate Showcases & Demonstrations BoF sessions and Fast Feather Track talks will be selected separately Pre Conference Trainin
SEVERE: Exception while adding: Document, an Idea?
Oct 2 11:35:02 solr-test jsvc.exec[11422]: Oct 2, 2008 11:35:02 AM org.apache.solr.handler.dataimport.SolrWriter upload SEVERE: Exception while adding: Document indexed,omitNorms indexed,tokenized indexed,omitNorms indexed,tokenized indexed,omitNorms indexed,tokenized indexed,omitNorms indexed,tokenized indexed,termVector,omitNorms indexed,termVector,omitNorms indexed,termVector,omitNorms indexed,termVector,omitNorms indexed,termVector,omitNorms indexed,termVector,omitNorms indexed,tokenized indexed,omitNorms stored/uncompressed,indexed,omitNorms indexed,omitNorms indexed,omitNorms stored/uncompressed,indexed,omitNorms indexed,omitNorms indexed,omitNorms
Hi, I got this error and really I have no idea. Thanks guys, please let me know if you have seen the same,
-- View this message in context: http://www.nabble.com/SEVERE%3A-Exception-while-adding%3A-Document%2C-an-Idea--tp19776266p19776266.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Optimize
No, optimize is not automatic. You have to invoke it yourself just like commits. Take a look at the following for examples: http://wiki.apache.org/solr/UpdateXmlMessages On Thu, Oct 2, 2008 at 2:03 PM, sunnyfr <[EMAIL PROTECTED]> wrote: > > > Hi, > > Can somebody explain me a bit how works optimize? > I read the doc but didn't get really what fire optimize. > > Thanks a lot, > -- > View this message in context: > http://www.nabble.com/Optimize-tp19775320p19775320.html > Sent from the Solr - User mailing list archive at Nabble.com. > > -- Regards, Shalin Shekhar Mangar.
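For anyone reading the archive, here is a minimal SolrJ sketch of doing exactly that, issuing the commit and optimize calls explicitly. The URL is only an example; the same thing can be done by posting <commit/> and <optimize/> update messages over HTTP, as described on the wiki page above.

    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

    public class CommitAndOptimize {
      public static void main(String[] args) throws Exception {
        // example URL; point this at your own Solr instance
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        server.commit();   // makes recently added documents visible to searchers
        server.optimize(); // merges index segments; like commit, it opens a new searcher
      }
    }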
complex XML structure problem
Hello, I would appreciate any suggestions on solving following problem: I'm trying to index newspaper. After processing logical structure and articles, I have similar structure to this... ... ... Obviously, I would like to have all the benefits of full-text search with proximity and other advanced options. After going through SCHEMA.XML and docs, I can see that I should split each "word" into something like this... ARTICLE 201 5 6 18560301 Une 1137 147 1665 951 1 TEXT 0 However, if I use this approach, it seems like I lost some core functionality of search... - multiword searching ? For example searching for "Une date" ? Since each word is treated as standalone document ? - Proximity search ? ... and so on. So I guess this approach isn't solution to my goal. Does anyone have some recommendations on how to solve this ? Goal would be to receive results that would have mentioned "attributes" for each hit...so for previous example "Une date", I would receive hits with all attributes that would allow me to correctly position them on image (t,l,b,r as coordinates for example). Kind Regards, Sasha
Error for creating Index folder?
Oct 2 14:09:30 solr-test jsvc.exec[12890]: Oct 2, 2008 2:09:30 PM org.apache.solr.core.SolrCore initIndex WARNING: [video] Solr index directory '/data/solr/video/data/index' doesn't exist. Creating new index...
Oct 2 14:09:30 solr-test jsvc.exec[12890]: Oct 2, 2008 2:09:30 PM org.apache.solr.common.SolrException log SEVERE: java.lang.RuntimeException: java.io.IOException: Cannot create directory: /data/solr/video/data/index
Hi, sorry, but I got this error after deleting the index files because they were damaged... do you know which kind of rights it needs? What's wrong? Thanks!
-- View this message in context: http://www.nabble.com/Error-for-creating-Index-folder--tp19778037p19778037.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Error for creating Index folder?
You probably have a permission problem. Check to make sure that the user id running Solr has write permission in the directory /data/solr/video/data. Bill On Thu, Oct 2, 2008 at 8:11 AM, sunnyfr <[EMAIL PROTECTED]> wrote: > > Oct 2 14:09:30 solr-test jsvc.exec[12890]: Oct 2, 2008 2:09:30 PM > org.apache.solr.core.SolrCore initIndex WARNING: [video] Solr index > directory '/data/solr/video/data/index' doesn't exist. Creating new > index... > Oct 2 14:09:30 solr-test jsvc.exec[12890]: Oct 2, 2008 2:09:30 PM > org.apache.solr.common.SolrException log SEVERE: > java.lang.RuntimeException: > java.io.IOException: Cannot create directory: /data/solr/video/data/index > > Hi sorry but I got this error after deleting cuz they were damaged... do > you > know which kind or right ?? what's wrong ? > Thanks ! > -- > View this message in context: > http://www.nabble.com/Error-for-creating-Index-folder--tp19778037p19778037.html > Sent from the Solr - User mailing list archive at Nabble.com. > >
Re: Replication on solr
Not we are not using a cron job for snapshooter. Bill On Thu, Oct 2, 2008 at 3:53 AM, sunnyfr <[EMAIL PROTECTED]> wrote: > > Hi Bill, > > Just to know, so you use post commit and post optimize and did you create > a > cron job for snapshooter ? > If yes when, the same minute as delta-import ? > > Thanks, > > > > Bill Au wrote: > > > > If you use cron, you should use the new "-c" option of snapshooter which > > only takes a snapshot where there have been changes. My personal > > preference > > is to use postCommit and postOptimize event listeners. > > > > Bill > > > > On Wed, Oct 1, 2008 at 4:28 AM, sunnyfr <[EMAIL PROTECTED]> wrote: > > > >> > >> Hi guys, > >> > >> Do you think it works better automaticly by solr after commit, fire > >> snapshooter or start cron job. > >> > >> Thanks, > >> > >> > >> hossman_lucene wrote: > >> > > >> > > >> > : I want to run 3 to 4 instances of solr on different machines. the > >> > other > >> > : servers will be replicatin the index from the single server. > >> > : how is that done and what options needed to modifies or added to > >> config > >> > : xml file of solr. > >> > > >> > I would start by looking at these wiki pages... > >> > > >> > http://wiki.apache.org/solr/CollectionDistribution > >> > > http://wiki.apache.org/solr/SolrCollectionDistributionOperationsOutline > >> > > >> > ...they explain everything you need to know about how the > >> > creation/replication/installation of snapshots works. then if you > look > >> at > >> > the example solrconfig.xml you'll see where the event listeners for > >> > "snapshooter" are commented out ... just decide wether you want to > >> create > >> > snapshoots on each commit, or just after an optimize, and uncomment > the > >> > appropriate code. > >> > > >> > > >> > > >> > -Hoss > >> > > >> > > >> > > >> > >> -- > >> View this message in context: > >> http://www.nabble.com/Replication-on-solr-tp5780286p19756456.html > >> Sent from the Solr - User mailing list archive at Nabble.com. > >> > >> > > > > > > -- > View this message in context: > http://www.nabble.com/Replication-on-solr-tp5780286p19774832.html > Sent from the Solr - User mailing list archive at Nabble.com. > >
Re: solr servers synchronisation
Have you seen these two Wiki pages: http://wiki.apache.org/solr/CollectionDistribution http://wiki.apache.org/solr/SolrCollectionDistributionOperationsOutline Solr comes with tools to let you sync the index directory. Bill On Thu, Oct 2, 2008 at 3:52 AM, dudes dudes <[EMAIL PROTECTED]> wrote: > > Hello experts, > > I've gotta a question with regards to synchronisation under solr. > > I would like to have 2 Linux servers both running Solr. One that could act > as master and the other one as slave.. > Then I want to use HeartBeat in order change the IP when the master is > down... > My question is : do you think of any open source third party tools that > sync both solrs ( or may be data/index directories ) when master is > re-indexed at any time ? > In another word, I would like to use the master for indexing and searching. > Then a third party sync tool to do the synchronisation between master/slave > > I have Tsync in mind ( as a snyc tool) , but surely I would like to know > your suggestions.. > > many thanks > ak > > > _ > Win New York holidays with Kellogg's & Live Search > http://clk.atdmt.com/UKM/go/111354033/direct/01/
RE: solr servers synchronisation
Thanks Bill, I'm aware of these links.. I have also deployed them in my environment,,, However; I'm looking for a complete sync between 2 server rather than using one server for indexing and the other one for searching. It would be nice to have a complete transparency.. thanks for your time ak > Date: Thu, 2 Oct 2008 08:39:02 -0400 > From: [EMAIL PROTECTED] > To: solr-user@lucene.apache.org > Subject: Re: solr servers synchronisation > > Have you seen these two Wiki pages: > > http://wiki.apache.org/solr/CollectionDistribution > http://wiki.apache.org/solr/SolrCollectionDistributionOperationsOutline > > Solr comes with tools to let you sync the index directory. > > Bill > > On Thu, Oct 2, 2008 at 3:52 AM, dudes dudes wrote: > >> >> Hello experts, >> >> I've gotta a question with regards to synchronisation under solr. >> >> I would like to have 2 Linux servers both running Solr. One that could act >> as master and the other one as slave.. >> Then I want to use HeartBeat in order change the IP when the master is >> down... >> My question is : do you think of any open source third party tools that >> sync both solrs ( or may be data/index directories ) when master is >> re-indexed at any time ? >> In another word, I would like to use the master for indexing and searching. >> Then a third party sync tool to do the synchronisation between master/slave >> >> I have Tsync in mind ( as a snyc tool) , but surely I would like to know >> your suggestions.. >> >> many thanks >> ak >> >> >> _ >> Win New York holidays with Kellogg's & Live Search >> http://clk.atdmt.com/UKM/go/111354033/direct/01/ _ Win New York holidays with Kellogg’s & Live Search http://clk.atdmt.com/UKM/go/111354033/direct/01/
Re: Anyproblem in running two solr instances on the same machine using the same directory ?
as one instance is only reading the index and the other is writing into it... It doesn't look like it is going to crash. The instance which is only reading the index needs its searcher to be updated. Assuming that this instance is listen on port 8984, I am achieving this by --- *curl http://localhost:8984/solr/update -s -H 'Content-type:text/xml; charset=utf-8' -d ""* My question is would this commit write anything to the index or crash the index ? Thanks Jagadish Rath On Thu, Oct 2, 2008 at 12:26 AM, Walter Underwood <[EMAIL PROTECTED]>wrote: > In many search engines, that would be a guaranteed crash. A postings > list is read into memory, then replaced on disk, so now the engine > has an inconsistent index. > > Even if it works, don't be surprised if it breaks in a later release. > > The snapshot system exists precisely to do this in a safe manner. > If you want a reliable search engine, I'd use the collection distribution > scripts to do same machine distribution. > > I'm always amazed that people even think this might work. Do people > try this with RDMBs, too? > > wunder > > On 10/1/08 11:47 AM, "Otis Gospodnetic" <[EMAIL PROTECTED]> > wrote: > > > You should be able to run like that. Most likely nobody can answer your > last > > question with certainty because it's likely very few people, if any, are > > running Solr in this type of a setup. > > > > Otis > > -- > > Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch > > > > > > > > - Original Message > >> From: Jagadish Rath <[EMAIL PROTECTED]> > >> To: solr-user@lucene.apache.org > >> Sent: Wednesday, October 1, 2008 11:49:27 AM > >> Subject: Re: Anyproblem in running two solr instances on the same > machine > >> using the same directory ? > >> > >> Ok. Can we run two solr instances(using the same data directory) one for > >> commits and the other for queries on the same machine ? Are there any > known > >> issues for this ? > >> > >> On Fri, Sep 26, 2008 at 11:48 AM, Jagadish Rath wrote: > >> > >>> Hi > >>> > >>> I am running two solr instances on the same machine using the same > data > >>> directory. one on port 8982 and the other on 8984. > >>> > >>>- 1st one *only accepts commits* (indexer) -- *port 8982* > >>> > >>> -- It has all tha cache size as 0, to get rid of warmup of > >>> searchers > >>> > >>>- 2nd one* accepts all the queries*.(searcher) -- *port 8984* > >>> > >>> -- It has non-zero cache size as it needs to handle queries > >>> > >>>- I have a cron *which does a dummy commit to the 2nd instance (on > port > >>>8984)* to update its searcher every 1 minute. > >>> > >>> --- *curl http://localhost:8984/solr/update -s -H > >>> 'Content-type:text/xml; charset=utf-8' -d ""* > >>> > >>> I am trying to use this as a *solution to the maxWarmingSearcher limit > >>> exceeded Error* that occurs as a result of a large no. of commits. I am > >>> trying to use this solution as an alternate to the conventional > master/slave > >>> solution. > >>> > >>> I have following questions > >>> > >>>- *Is there any known issue with this solution or any issues that > can > >>>be foreseen for this solution?* > >>> > >>> * -- does it result in a corrupted index ? > >>> * > >>> > >>>- *What are the other solutions to the problem of > "maxWarmingSearchers > >>>limit exceeded error " ?** * > >>> > >>> A would really appreciate a quick response. > >>> > >>> Thanks > >>> Jagadish Rath > >>> > > > >
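For reference, a SolrJ equivalent of that curl-based dummy commit might look roughly like the sketch below. The port is the read-only instance described above; whether two instances can safely share one index directory at all is exactly the open question in this thread, so treat this only as a translation of the curl command, not an endorsement of the setup.

    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

    public class RefreshSearcher {
      public static void main(String[] args) throws Exception {
        // the search-only instance from the setup above (example URL)
        SolrServer searchInstance = new CommonsHttpSolrServer("http://localhost:8984/solr");
        // an empty commit: no documents are sent, but the instance is asked
        // to reopen its searcher so it sees what the indexing instance wrote
        searchInstance.commit(true, true); // waitFlush, waitSearcher
      }
    }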
Re: SEVERE: Exception while adding: Document, an Idea?
Can you share more about what you are doing? An exception without any context is hard to figure out. What's the schema? Is there another exception associated with it (root cause)? -Grant On Oct 2, 2008, at 5:38 AM, sunnyfr wrote: Oct 2 11:35:02 solr-test jsvc.exec[11422]: Oct 2, 2008 11:35:02 AM org.apache.solr.handler.dataimport.SolrWriter upload SEVERE: Exception while adding: Document indexed,omitNorms indexed,tokenized indexed,omitNorms indexed,tokenized indexed,omitNorms indexed,tokenized indexed,omitNorms indexed,tokenized indexed,termVector,omitNorms indexed,termVector,omitNorms indexed,termVector,omitNorms indexed,termVector,omitNorms indexed,termVector,omitNorms indexed,termVector,omitNorms indexed,tokenized indexed,omitNorms stored/uncompressed,indexed,omitNorms indexed,omitNorms indexed,omitNorms stored/uncompressed,indexed,omitNorms indexed,omitNorms indexed,omitNorms Hi, I got this error and really I've no idea, Thangks guys to let me know if you got the same, -- View this message in context: http://www.nabble.com/SEVERE%3A-Exception-while-adding%3A-Document%2C-an-Idea--tp19776266p19776266.html Sent from the Solr - User mailing list archive at Nabble.com. -- Grant Ingersoll Lucene Helpful Hints: http://wiki.apache.org/lucene-java/BasicsOfPerformance http://wiki.apache.org/lucene-java/LuceneFAQ
Re: Anyproblem in running two solr instances on the same machine using the same directory ?
This is a completely false deduction. With Ultraseek, this would guarantee a crash. You cannot assume this will work for any indexed file structure, whether search or database. You need to find out whether multi-process synchronization is a design goal (and tested on each release) for Lucene. If it is not, then don't depend on it. wunder On 10/2/08 6:00 AM, "Jagadish Rath" <[EMAIL PROTECTED]> wrote: > as one instance is only reading the index and the other is writing into > it... It doesn't look like it is going to crash.
Re: solr on ubuntu 8.04
Hi, I've some issue with my tomcat, can you please tell me what you have in your folder ./var/lib/tomcat5.5/webapps ./usr/share/tomcat5.5/webapps Cuz really I'm a bit lost with tomcat55 and what's happening ... how did you manage it ?? Thanks a lot Jack Bates-2 wrote: > > Thanks for your suggestions. I have now tried installing Solr on two > different machines. On one machine I installed the Ubuntu solr-tomcat5.5 > package, and on the other I simply dropped "solr.war" > into /var/lib/tomcat5.5/webapps > > Both machines are running Tomcat 5.5 > > I get the same error message on both machines: > > SEVERE: Exception starting filter SolrRequestFilter > java.lang.NoClassDefFoundError: Could not initialize class > org.apache.solr.core.SolrConfig > > The full error message is attached. > > I can confirm that the /usr/share/solr/WEB-INF/lib/apache-solr-1.2.0.jar > jar file contains: org/apache/solr/core/SolrConfig.class > > - however I do not know why Tomcat does not find it. > > Thanks again, Jack > >> Hardy has solr packages already. You might want to look how they packaged >> solr if you cannot move to that version. >> Did you just drop the war file? Or did you use JNDI? You probably need to >> configure solr/home, and maybe fiddle with >> securitymanager stuff. >> >> Albert >> >> On Thu, May 1, 2008 at 6:46 PM, Jack Bates freezone.co.uk> >> wrote: >> >> > I am trying to evaluate Solr for an open source records management >> > project to which I contribute: http://code.google.com/p/qubit-toolkit/ >> > >> > I installed the Ubuntu solr-tomcat5.5 package: >> > http://packages.ubuntu.com/hardy/solr-tomcat5.5 >> > >> > - and pointed my browser at: http://localhost:8180/solr/admin (The >> > Ubuntu and Debian Tomcat packages run on port 8180) >> > >> > However, in response I get a Tomcat 404: The requested >> > resource(/solr/admin) is not available. >> > >> > This differs from the response I get accessing a random URL: >> > http://localhost:8180/foo/bar >> > >> > - which displays a blank page. >> > >> > From this I gather that the solr-tomcat5.5 package installed >> > *something*, but that it's misconfigured or missing something. >> > Unfortunately I lack the Java / Tomcat experience to track down this >> > problem. Can someone recommend where to look, to learn why the Ubuntu >> > solr-tomcat5.5 package is not working? 
>> > >> > I started an Ubuntu wiki page to eventually describe the process of >> > installing Solr on Ubuntu: https://wiki.ubuntu.com/Solr >> > >> > Thanks, Jack > > Apr 25, 2008 4:46:41 PM org.apache.catalina.core.StandardContext > filterStart > SEVERE: Exception starting filter SolrRequestFilter > java.lang.NoClassDefFoundError: Could not initialize class > org.apache.solr.core.SolrConfig > at > org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:74) > at > org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:221) > at > org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:302) > at > org.apache.catalina.core.ApplicationFilterConfig.(ApplicationFilterConfig.java:78) > at > org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:3635) > at > org.apache.catalina.core.StandardContext.start(StandardContext.java:4222) > at > org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:760) > at > org.apache.catalina.core.ContainerBase.access$0(ContainerBase.java:744) > at > org.apache.catalina.core.ContainerBase$PrivilegedAddChild.run(ContainerBase.java:144) > at java.security.AccessController.doPrivileged(Native Method) > at > org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:738) > at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:544) > at > org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:626) > at > org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:553) > at > org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:488) > at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1138) > at > org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:311) > at > org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:120) > at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1022) > at org.apache.catalina.core.StandardHost.start(StandardHost.java:736) > at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1014) > at > org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443) > at > org.apache.catalina.core.StandardService.start(StandardService.java:448) > at > org.apache.catalina.core.StandardServer.start(StandardServer.java:700) > at org.apache.catalina.startup.Catalina.start(Catalina.java:552) > at sun.reflect.NativeMethod
Quick RSS feed questions
Hi everybody, With regard to RSS feeds: I noticed that there's a stylesheet to convert the output of a Solr search into RSS format in the example\solr\conf\xslt directory. My questions are:
1) Where can I find docs on how to get Solr to feed RSS directly?
2) Correct me if I'm wrong here: Normal searches return results based on what's indexed at the time. If documents are only added to an index, then subsequent searches (ordered by date newest-to-oldest) will have the newer docs appear on top, and thus an RSS feed will have "new entries" when the search is re-performed. However, how do you handle RSS feeds for indexes where data can be both added and removed? For example, if I want to have an RSS feed of users on my site, I want new users to show up as new items in the RSS feed as they come along. However, users don't stick around forever; they can also disappear from the database. Similarly, users can change their information and thus they may not match a particular query anymore (and would thus disappear from the RSS feed, right?). Wouldn't this cause havoc for RSS readers if results changed often? Aren't they used to getting only new items and the old items hanging around? Lists of users are not like blogs, and yet (for my application) some people may want to have a feed of new users of a particular type (where users are free to change their type at any time). Any advice about how to approach this would be appreciated. Sincerely, Daryl.
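On question 1: that stylesheet is meant to be applied by Solr's XSLT response writer, so a plain query URL with wt=xslt and tr=<stylesheet file> should return RSS directly, assuming the xslt writer is registered in solrconfig.xml (it is in the example configuration). Below is a hedged Java sketch of fetching such a feed; the host, the timestamp sort field and the stylesheet name are assumptions to adapt to your own schema.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.net.URLEncoder;

    public class FetchRssFeed {
      public static void main(String[] args) throws Exception {
        // sort newest-first on an assumed timestamp field and let the
        // xslt response writer apply the RSS stylesheet to the results
        String url = "http://localhost:8983/solr/select?q=" + URLEncoder.encode("*:*", "UTF-8")
            + "&sort=" + URLEncoder.encode("timestamp desc", "UTF-8")
            + "&rows=20&wt=xslt&tr=example_rss.xsl";
        BufferedReader in = new BufferedReader(
            new InputStreamReader(new URL(url).openStream(), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {
          System.out.println(line); // raw RSS XML, ready to hand to a feed reader
        }
        in.close();
      }
    }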
Re: solr on ubuntu 8.04
No sweat - did you install the Ubuntu solr package or the solr.war from http://lucene.apache.org/solr/? When you say it doesn't work, what exactly do you mean? On Thu, 2008-10-02 at 07:43 -0700, [EMAIL PROTECTED] wrote: > Hi Jack, > Really I would love if you could help me about it ... and tell me what you > have in your file > ./var/lib/tomcat5.5/webapps > ./usr/share/tomcat5.5/webapps > > It doesn't work I dont know why :( > Thanks a lot > Johanna > > Jack Bates-2 wrote: > > > > Thanks for your suggestions. I have now tried installing Solr on two > > different machines. On one machine I installed the Ubuntu solr-tomcat5.5 > > package, and on the other I simply dropped "solr.war" > > into /var/lib/tomcat5.5/webapps > > > > Both machines are running Tomcat 5.5 > > > > I get the same error message on both machines: > > > > SEVERE: Exception starting filter SolrRequestFilter > > java.lang.NoClassDefFoundError: Could not initialize class > > org.apache.solr.core.SolrConfig > > > > The full error message is attached. > > > > I can confirm that the /usr/share/solr/WEB-INF/lib/apache-solr-1.2.0.jar > > jar file contains: org/apache/solr/core/SolrConfig.class > > > > - however I do not know why Tomcat does not find it. > > > > Thanks again, Jack > > > >> Hardy has solr packages already. You might want to look how they packaged > >> solr if you cannot move to that version. > >> Did you just drop the war file? Or did you use JNDI? You probably need to > >> configure solr/home, and maybe fiddle with > >> securitymanager stuff. > >> > >> Albert > >> > >> On Thu, May 1, 2008 at 6:46 PM, Jack Bates freezone.co.uk> > >> wrote: > >> > >> > I am trying to evaluate Solr for an open source records management > >> > project to which I contribute: http://code.google.com/p/qubit-toolkit/ > >> > > >> > I installed the Ubuntu solr-tomcat5.5 package: > >> > http://packages.ubuntu.com/hardy/solr-tomcat5.5 > >> > > >> > - and pointed my browser at: http://localhost:8180/solr/admin (The > >> > Ubuntu and Debian Tomcat packages run on port 8180) > >> > > >> > However, in response I get a Tomcat 404: The requested > >> > resource(/solr/admin) is not available. > >> > > >> > This differs from the response I get accessing a random URL: > >> > http://localhost:8180/foo/bar > >> > > >> > - which displays a blank page. > >> > > >> > From this I gather that the solr-tomcat5.5 package installed > >> > *something*, but that it's misconfigured or missing something. > >> > Unfortunately I lack the Java / Tomcat experience to track down this > >> > problem. Can someone recommend where to look, to learn why the Ubuntu > >> > solr-tomcat5.5 package is not working? 
> >> > > >> > I started an Ubuntu wiki page to eventually describe the process of > >> > installing Solr on Ubuntu: https://wiki.ubuntu.com/Solr > >> > > >> > Thanks, Jack > > > > Apr 25, 2008 4:46:41 PM org.apache.catalina.core.StandardContext > > filterStart > > SEVERE: Exception starting filter SolrRequestFilter > > java.lang.NoClassDefFoundError: Could not initialize class > > org.apache.solr.core.SolrConfig > > at > > org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:74) > > at > > org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:221) > > at > > org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:302) > > at > > org.apache.catalina.core.ApplicationFilterConfig.(ApplicationFilterConfig.java:78) > > at > > org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:3635) > > at > > org.apache.catalina.core.StandardContext.start(StandardContext.java:4222) > > at > > org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:760) > > at > > org.apache.catalina.core.ContainerBase.access$0(ContainerBase.java:744) > > at > > org.apache.catalina.core.ContainerBase$PrivilegedAddChild.run(ContainerBase.java:144) > > at java.security.AccessController.doPrivileged(Native Method) > > at > > org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:738) > > at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:544) > > at > > org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:626) > > at > > org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:553) > > at > > org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:488) > > at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1138) > > at > > org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:311) > > at > > org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:120) > > at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1022) > > at org.apache.catalina.core.StandardHost.start(StandardHost.java:736) > > at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1
Re: solr on ubuntu 8.04
I haven't tried installing the ubuntu package, but the releases from apache.org come with an example that contains a directory called "solr" which contains a directory called "conf" where schema.xml and solrconfig.xml are important. Is it possible these files do not exist in the path? Tricia Jack Bates wrote: No sweat - did you install the Ubuntu solr package or the solr.war from http://lucene.apache.org/solr/? When you say it doesn't work, what exactly do you mean? On Thu, 2008-10-02 at 07:43 -0700, [EMAIL PROTECTED] wrote: Hi Jack, Really I would love if you could help me about it ... and tell me what you have in your file ./var/lib/tomcat5.5/webapps ./usr/share/tomcat5.5/webapps It doesn't work I dont know why :( Thanks a lot Johanna Jack Bates-2 wrote: Thanks for your suggestions. I have now tried installing Solr on two different machines. On one machine I installed the Ubuntu solr-tomcat5.5 package, and on the other I simply dropped "solr.war" into /var/lib/tomcat5.5/webapps Both machines are running Tomcat 5.5 I get the same error message on both machines: SEVERE: Exception starting filter SolrRequestFilter java.lang.NoClassDefFoundError: Could not initialize class org.apache.solr.core.SolrConfig The full error message is attached. I can confirm that the /usr/share/solr/WEB-INF/lib/apache-solr-1.2.0.jar jar file contains: org/apache/solr/core/SolrConfig.class - however I do not know why Tomcat does not find it. Thanks again, Jack Hardy has solr packages already. You might want to look how they packaged solr if you cannot move to that version. Did you just drop the war file? Or did you use JNDI? You probably need to configure solr/home, and maybe fiddle with securitymanager stuff. Albert On Thu, May 1, 2008 at 6:46 PM, Jack Bates freezone.co.uk> wrote: I am trying to evaluate Solr for an open source records management project to which I contribute: http://code.google.com/p/qubit-toolkit/ I installed the Ubuntu solr-tomcat5.5 package: http://packages.ubuntu.com/hardy/solr-tomcat5.5 - and pointed my browser at: http://localhost:8180/solr/admin (The Ubuntu and Debian Tomcat packages run on port 8180) However, in response I get a Tomcat 404: The requested resource(/solr/admin) is not available. This differs from the response I get accessing a random URL: http://localhost:8180/foo/bar - which displays a blank page. From this I gather that the solr-tomcat5.5 package installed *something*, but that it's misconfigured or missing something. Unfortunately I lack the Java / Tomcat experience to track down this problem. Can someone recommend where to look, to learn why the Ubuntu solr-tomcat5.5 package is not working? 
I started an Ubuntu wiki page to eventually describe the process of installing Solr on Ubuntu: https://wiki.ubuntu.com/Solr Thanks, Jack Apr 25, 2008 4:46:41 PM org.apache.catalina.core.StandardContext filterStart SEVERE: Exception starting filter SolrRequestFilter java.lang.NoClassDefFoundError: Could not initialize class org.apache.solr.core.SolrConfig at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:74) at org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:221) at org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:302) at org.apache.catalina.core.ApplicationFilterConfig.(ApplicationFilterConfig.java:78) at org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:3635) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4222) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:760) at org.apache.catalina.core.ContainerBase.access$0(ContainerBase.java:744) at org.apache.catalina.core.ContainerBase$PrivilegedAddChild.run(ContainerBase.java:144) at java.security.AccessController.doPrivileged(Native Method) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:738) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:544) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:626) at org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:553) at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:488) at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1138) at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:311) at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:120) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1022) at org.apache.catalina.core.StandardHost.start(StandardHost.java:736) at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1014) at org.apache.catalina.core.StandardEngine.start(StandardEng
Re: termFreq always = 1 ?
You have: ;arm;arms;elbow;elbows;man;men;male;males;indoors;one;person;Men's;moods; Note these two: men Men's You probably tokenize that field and you probably lowercase it, and you probably stem it and you probably end up with 2 "men" tokens: men ==> men Men's ==> men Hence your term freq of 2. You could: 1) lowercase outside of Solr, before indexing 2) feed text with sorted words to Solr 3) use that token filter that removes duplicates after stemming That could work. Otis -- Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch - Original Message > From: KLessou <[EMAIL PROTECTED]> > To: solr-user@lucene.apache.org > Sent: Thursday, October 2, 2008 4:41:12 AM > Subject: Re: termFreq always = 1 ? > > Yes, each one is a document. > > A real example : > > k1_en:men > > > 0.81426066 > ... > 846 > ... > > > ;arm;arms;elbow;elbows;man;men;male;males;indoors;one;person;Men's;moods; > > ... > > > ... > > > 0.6232885 > > ... > > 652 > > > > ;portrait;portraits;young;adult;young;adults;*man*;*men*;male;males;male;males;young;*men*;young;*man*;identity;identities;self-confidence;assertiveness;male;beauty;masculine;beauty;*men's*;beauty;indoors;inside;day;daytime;one;person;one;individual;northern;european;caucasian > > > ... > > > > .;. > > > > 0.81426066 = (MATCH) weight(k1_en:men in 35050), product of: > 0.9994 = queryWeight(k1_en:men), product of: > 2.3030772 = idf(docFreq=17576, numDocs=64694) > 0.43420166 = queryNorm > 0.8142607 = (MATCH) fieldWeight(k1_en:men in 35050), product of: > *1.4142135 = tf(termFreq(k1_en:men)=2)* > 2.3030772 = idf(docFreq=17576, numDocs=64694) > 0.25 = fieldNorm(field=k1_en, doc=35050) > > ... > > 0.62328845 = (MATCH) weight(k1_en:men in 13312), product of: > 0.9994 = queryWeight(k1_en:men), product of: > 2.3030772 = idf(docFreq=17576, numDocs=64694) > 0.43420166 = queryNorm > 0.6232885 = (MATCH) fieldWeight(k1_en:men in 13312), product of: > *1.7320508 = tf(termFreq(k1_en:men)=3)* > 2.3030772 = idf(docFreq=17576, numDocs=64694) > 0.15625 = fieldNorm(field=k1_en, doc=13312) > > ... > > You can see here for the first document termFreq = 2 and for the > second document termFreq = 3 ... > > And I would like to have termFreq = 1 in each case for this field (k1_en). > > Thanks for in advance your help, > > > > > > > > On Wed, Oct 1, 2008 at 8:45 PM, Otis Gospodnetic > > wrote: > > > In each of your examples (is each one a documen?) I see only 1 "men" > > instance, so "men" term frequency should be 1 for that document. > > > > Otis > > -- > > Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch > > > > > > > > - Original Message > > > From: KLessou > > > To: solr-user@lucene.apache.org > > > Sent: Wednesday, October 1, 2008 11:43:59 AM > > > Subject: Re: termFreq always = 1 ? > > > > > > Yes this may be my problem, > > > > > > But is there any solution to have only one "men" keyword indexed when > > i''ve > > > got something like this : > > > > > > 1 - k1_en = men;business;Men > > > or : > > > 2 - k1_en = man,business,men > > > or : > > > 3 - k1_en = Man,men,business,Men,man > > > ... > > > > > > Thx in advance, > > > > > > On Wed, Oct 1, 2008 at 5:12 PM, Otis Gospodnetic > > > > wrote: > > > > > > > Hi, > > > > > > > > Note that RemoveDuplicatesTokenFilterFactory "filters out any tokens > > which > > > > are at the same logical position in the tokenstream as a previous token > > with > > > > the same text." > > > > > > > > So if you have "men in black are real men" then > > > > RemoveDuplicatesTokenFilterFactory will not remove duplicate "men". 
> > > > > > > > This may or may not be your problem. > > > > > > > > Otis > > > > -- > > > > Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch > > > > > > > > > > > > > > > > - Original Message > > > > > From: KLessou > > > > > To: solr-user@lucene.apache.org > > > > > Sent: Wednesday, October 1, 2008 9:48:28 AM > > > > > Subject: termFreq always = 1 ? > > > > > > > > > > Hi, > > > > > > > > > > I want to index a list of keywords. > > > > > > > > > > When I search "k1_en:men", I find a lot of documents like that : > > > > > > > > > > DocA : > > > > > (k1_en = man;men;Men;business... termFreq=2) > > > > > DocB : > > > > > (k1_en = man;Men;business... termFreq=1) > > > > > DocC : > > > > > ... > > > > > DocD : > > > > > ... > > > > > DocE : > > > > > ... > > > > > > > > > > But I don't want to have a different termFreq for DocA & DocB. > > > > > > > > > > I try RemoveDuplicatesTokenFilterFactory but it doesn't seem to help > > me > > > > :-/ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ignoreCase="true"/> > > > > > > > > > > protected="protwords.txt" /> > > > > > > > > > > > > > > > > > > > > > > > > > generateWordParts="0" > > > > > generateNumberParts="0" > > > > > c
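A small sketch of options 1 and 2 above, i.e. normalising and de-duplicating the keyword list on the client before it ever reaches Solr. The field name, document id and URL are only examples taken from earlier in the thread, and stemming inside Solr could still collapse other variants.

    import java.util.LinkedHashSet;
    import java.util.Set;
    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class IndexKeywords {
      public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        String raw = "arm;arms;elbow;elbows;man;men;male;males;indoors;one;person;Men's;moods";
        // lowercase and strip a trailing "'s" so "men" and "Men's" collapse,
        // then a LinkedHashSet drops exact duplicates while keeping order
        Set<String> keywords = new LinkedHashSet<String>();
        for (String k : raw.split(";")) {
          String norm = k.trim().toLowerCase();
          if (norm.endsWith("'s")) {
            norm = norm.substring(0, norm.length() - 2);
          }
          if (norm.length() > 0) {
            keywords.add(norm);
          }
        }
        StringBuilder joined = new StringBuilder();
        for (String k : keywords) {
          if (joined.length() > 0) joined.append(';');
          joined.append(k);
        }
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "846");               // example unique key
        doc.addField("k1_en", joined.toString()); // the cleaned keyword list
        server.add(doc);
        server.commit();
      }
    }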
Re: complex XML structure problem
Bok Saša, It sounds like you need to keep per-word metadata, plus the raw content so you can full-text search it. If so, consider keeping the meta data elsewhere - e.g. different index, external DB, etc. For full-text search you probably want to index the full content, something like: article Une date.. 123 You could create another index with words and each word Document have an ID of their "parent" (e.g. the article's ID), so you do a query against the above index, get the IDs of matches, and then get words for those matches. Of course, you can also use a RDBMS or some other storage for the second part. Otis -- Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch - Original Message > From: Saša Mutić <[EMAIL PROTECTED]> > To: solr-user@lucene.apache.org > Sent: Thursday, October 2, 2008 6:14:14 AM > Subject: complex XML structure problem > > Hello, > > I would appreciate any suggestions on solving following problem: > > I'm trying to index newspaper. After processing logical structure and > articles, I have similar structure to this... > > > date="18560301"> > > type="TEXT" cont="0"/> > > type="TEXT" cont="0"/> > > type="TEXT" cont="0"/> > ... > > date="18560301"> > > type="ADVERTISEMENT" cont="0"/> > ... > > Obviously, I would like to have all the benefits of full-text search with > proximity and other advanced options. > After going through SCHEMA.XML and docs, I can see that I should split each > "word" into something like this... > > ARTICLE > 201 > 5 > 6 > 18560301 > Une > 1137 > 147 > 1665 > 951 > 1 > TEXT > 0 > > > However, if I use this approach, it seems like I lost some core > functionality of search... > > - multiword searching ? For example searching for "Une date" ? Since each > word is treated as standalone document ? > > - Proximity search ? > > ... and so on. > > So I guess this approach isn't solution to my goal. Does anyone have some > recommendations on how to solve this ? > > Goal would be to receive results that would have mentioned "attributes" for > each hit...so for previous example "Une date", I would receive hits with all > attributes that would allow me to correctly position them on image (t,l,b,r > as coordinates for example). > > Kind Regards, > > Sasha
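To make the two-index idea a bit more concrete, here is a hedged SolrJ sketch; all field names, ids and core URLs are invented, and the second core would need a matching schema.

    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class IndexArticleAndWords {
      public static void main(String[] args) throws Exception {
        // one core holds the full article text for normal full-text/proximity search
        SolrServer articles = new CommonsHttpSolrServer("http://localhost:8983/solr/articles");
        // a second core holds one document per word with its image coordinates
        SolrServer words = new CommonsHttpSolrServer("http://localhost:8983/solr/words");

        SolrInputDocument article = new SolrInputDocument();
        article.addField("id", "article-201");
        article.addField("type", "ARTICLE");
        article.addField("date", "18560301");
        article.addField("content", "Une date ..."); // the full article text
        articles.add(article);

        SolrInputDocument word = new SolrInputDocument();
        word.addField("id", "article-201-word-1");
        word.addField("article_id", "article-201"); // pointer back to the parent article
        word.addField("word", "Une");
        word.addField("t", 1137);                   // image coordinates from the OCR output
        word.addField("l", 147);
        word.addField("b", 1665);
        word.addField("r", 951);
        words.add(word);

        articles.commit();
        words.commit();
      }
    }

A phrase query like "Une date" then runs against the articles core with normal proximity support, and the matching article ids are used to fetch the per-word coordinate documents (or rows from a database, as suggested above) for positioning the hits on the page image.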
Re: Problem restarting Solr after shutting it down.
Hi and thank you! This is what I got when I user the -QUIT flag. Does it say you anything? Regards Erik Full thread dump OpenJDK 64-Bit Server VM (1.6.0-b09 mixed mode): "DestroyJavaVM" prio=10 tid=0x010c7c00 nid=0x13c waiting on condition [0x..0x00507c60] java.lang.Thread.State: RUNNABLE "Timer-2" prio=10 tid=0x013f0400 nid=0x156 in Object.wait() [0x40ff6000..0x40ff6c60] java.lang.Thread.State: TIMED_WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x7f152b6a29f0> (a java.util.TaskQueue) at java.util.TimerThread.mainLoop(Timer.java:531) - locked <0x7f152b6a29f0> (a java.util.TaskQueue) at java.util.TimerThread.run(Timer.java:484) "pool-1-thread-1" prio=10 tid=0x7f14e8196000 nid=0x155 waiting on condition [0x4175e000..0x4175ebe0] java.lang.Thread.State: WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x7f152e0f5ad0> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1978) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:386) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1043) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1103) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:636) "Timer-1" prio=10 tid=0x7f14e8170c00 nid=0x154 in Object.wait() [0x41d8e000..0x41d8eb60] java.lang.Thread.State: TIMED_WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x7f152e08f070> (a java.util.TaskQueue) at java.util.TimerThread.mainLoop(Timer.java:531) - locked <0x7f152e08f070> (a java.util.TaskQueue) at java.util.TimerThread.run(Timer.java:484) "btpool0-9" prio=10 tid=0x7f14e8126400 nid=0x153 in Object.wait() [0x4165d000..0x4165dae0] java.lang.Thread.State: TIMED_WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x7f152e0aa038> (a org.mortbay.thread.BoundedThreadPool$PoolThread) at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:482) - locked <0x7f152e0aa038> (a org.mortbay.thread.BoundedThreadPool$PoolThread) "btpool0-8" prio=10 tid=0x7f14e8125000 nid=0x152 in Object.wait() [0x40669000..0x40669a60] java.lang.Thread.State: TIMED_WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x7f152e0aa280> (a org.mortbay.thread.BoundedThreadPool$PoolThread) at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:482) - locked <0x7f152e0aa280> (a org.mortbay.thread.BoundedThreadPool$PoolThread) "btpool0-7" prio=10 tid=0x7f14e80d6800 nid=0x151 in Object.wait() [0x40568000..0x405689e0] java.lang.Thread.State: TIMED_WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x7f152e0aa4c8> (a org.mortbay.thread.BoundedThreadPool$PoolThread) at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:482) - locked <0x7f152e0aa4c8> (a org.mortbay.thread.BoundedThreadPool$PoolThread) "btpool0-6" prio=10 tid=0x7f14e8116c00 nid=0x150 in Object.wait() [0x40ef5000..0x40ef5d60] java.lang.Thread.State: TIMED_WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x7f152e0aa710> (a org.mortbay.thread.BoundedThreadPool$PoolThread) at 
org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:482) - locked <0x7f152e0aa710> (a org.mortbay.thread.BoundedThreadPool$PoolThread) "btpool0-5" prio=10 tid=0x7f14e8115800 nid=0x14f in Object.wait() [0x40df4000..0x40df4ce0] java.lang.Thread.State: TIMED_WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x7f152e0aa958> (a org.mortbay.thread.BoundedThreadPool$PoolThread) at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:482) - locked <0x7f152e0aa958> (a org.mortbay.thread.BoundedThreadPool$PoolThread) "btpool0-4" prio=10 tid=0x7f14e810d800 nid=0x14e in Object.wait() [0x40cf3000..0x40cf3c60] java.lang.Thread.State: TIMED_WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x7f152e0aaba0> (a org.mortbay.thread.BoundedThreadPool$PoolThread)
Using filter to search in SOLR 1.3 with solrj
I can execute what I want simply by using Lucene directly: Hits hits = searcher.search(customScoreQuery, myQuery.getFilter()); However, I can't find the right class or method in the API to do this through Solr's searcher. I am using the SolrServer (embedded version) to execute the .query... QueryResponse queryResponse = SolrServer.query(customScoreQuery); // will work, BUT I NEED to use the filter as well... Thanks -- Jeryl Cook /^\ Pharaoh /^\ http://pharaohofkush.blogspot.com/ "Whether we bring our enemies to justice, or bring justice to our enemies, justice will be done." --George W. Bush, Address to a Joint Session of Congress and the American People, September 20, 2001
Re: Using filter to search in SOLR 1.3 with solrj
what about: SolrQuery query = ...; query.addFilterQuery( "type:xxx" ); On Oct 2, 2008, at 1:23 PM, Jeryl Cook wrote: i can execute what i want simply with using lucene directly Hits hits = searcher.search(customScoreQuery, myQuery.getFilter()); howerver, i can't find the right Class , or method in the API to do this for SOLR the searcher I am using the SOLRServer(Embeded version) to execute the .query... QueryResponse queryResponse = SolrServer.query(customScoreQuery); //will work, BUT I NEED to use the filter as well... Thanks -- Jeryl Cook /^\ Pharaoh /^\ http://pharaohofkush.blogspot.com/ "Whether we bring our enemies to justice, or bring justice to our enemies, justice will be done." --George W. Bush, Address to a Joint Session of Congress and the American People, September 20, 2001
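In case it helps, a small self-contained illustration of Ryan's suggestion; the query string and filter value are made up for the example, and 'server' can be the embedded SolrServer mentioned in the original post:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class FilterQueryExample {
        // 'server' could be an EmbeddedSolrServer or a CommonsHttpSolrServer.
        public static QueryResponse search(SolrServer server) throws Exception {
            SolrQuery query = new SolrQuery("justice");   // the scored main query
            query.addFilterQuery("type:article");         // fq: restricts the result set, cached separately, does not affect scores
            query.setRows(10);
            return server.query(query);
        }
    }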
SolrPluginRepository
Hi All, I didn't see anywhere to share the plugin I created for my multipart work (see https://issues.apache.org/jira/browse/SOLR-380 for more). So I created one here: http://wiki.apache.org/solr/SolrPluginRepository. I'm open to other ways of sharing plugins. Tricia
Re: Using filter to search in SOLR 1.3 with solrj
I don't have issues adding a filter query to a "SolrQuery"... i guess ill look at the source code, i just need to pass the a custom Filter object at runtime before i execute a search using the SolrServer.. currently this is all i can do the below with SOLR... SolrServer.query(customScoreQuery); i need a method that would accept this: searcher.search(customScoreQuery, myfilter ); , like i am able todo using lucene searcher. On Thu, Oct 2, 2008 at 1:43 PM, Ryan McKinley <[EMAIL PROTECTED]> wrote: > what about: > >SolrQuery query = ...; >query.addFilterQuery( "type:xxx" ); > > > On Oct 2, 2008, at 1:23 PM, Jeryl Cook wrote: > >> i can execute what i want simply with using lucene directly >> >> Hits hits = searcher.search(customScoreQuery, myQuery.getFilter()); >> >> >> howerver, i can't find the right Class , or method in the API to do >> this for SOLR the searcher >> I am using the SOLRServer(Embeded version) to execute the .query... >> >> >> QueryResponse queryResponse = SolrServer.query(customScoreQuery); >> //will work, BUT I NEED to use the filter as well... >> >> >> Thanks >> >> >> -- >> Jeryl Cook >> /^\ Pharaoh /^\ >> http://pharaohofkush.blogspot.com/ >> "Whether we bring our enemies to justice, or bring justice to our >> enemies, justice will be done." >> --George W. Bush, Address to a Joint Session of Congress and the >> American People, September 20, 2001 > > -- Jeryl Cook /^\ Pharaoh /^\ http://pharaohofkush.blogspot.com/ "Whether we bring our enemies to justice, or bring justice to our enemies, justice will be done." --George W. Bush, Address to a Joint Session of Congress and the American People, September 20, 2001
Re: Using filter to search in SOLR 1.3 with solrj
On Oct 2, 2008, at 2:24 PM, Jeryl Cook wrote: I don't have issues adding a filter query to a "SolrQuery"... i guess ill look at the source code, i just need to pass the a custom Filter object at runtime before i execute a search using the SolrServer.. currently this is all i can do the below with SOLR... SolrServer.query(customScoreQuery); i need a method that would accept this: searcher.search(customScoreQuery, myfilter ); , like i am able todo using lucene searcher. aaah -- that lands you in custom plugin territory... perhaps look at building a QueryComponent ryan
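A rough, untested sketch of what such a component might look like against the Solr 1.3 component API; the filter-building logic is a placeholder, and the class would still need to be registered as a searchComponent in solrconfig.xml and wired into a request handler:

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.lucene.search.ConstantScoreQuery;
    import org.apache.lucene.search.Filter;
    import org.apache.lucene.search.Query;
    import org.apache.solr.handler.component.QueryComponent;
    import org.apache.solr.handler.component.ResponseBuilder;

    public class CustomFilterQueryComponent extends QueryComponent {

        @Override
        public void prepare(ResponseBuilder rb) throws IOException {
            super.prepare(rb);                    // normal q / fq parsing

            Filter myFilter = buildMyFilter(rb);  // whatever runtime logic builds the Lucene Filter
            if (myFilter == null) {
                return;
            }
            List<Query> filters = rb.getFilters();
            if (filters == null) {
                filters = new ArrayList<Query>();
                rb.setFilters(filters);
            }
            // Solr keeps its filters as Queries, so wrap the Filter before adding it.
            filters.add(new ConstantScoreQuery(myFilter));
        }

        private Filter buildMyFilter(ResponseBuilder rb) {
            // e.g. inspect rb.req.getParams() and construct the custom Filter here
            return null;
        }
    }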
Re: SolrPluginRepository
Thanks! If there is interest, we could start a non-apache project for plugins that don't make sense in core or contrib... Apache Wicket has a project called "Wicket Stuff" on sourceforge that is a repository for non-core components. This is where components linking to non-Apache compatible libraries live and also has very low barrier to entry (anyone can have commit rights). I can also run live demos on solrstuff.org... for solrjs, we are running: http://example.solrstuff.org/solrjs/ (a solrjs example page should be up shortly) ryan On Oct 2, 2008, at 2:02 PM, Tricia Williams wrote: Hi All, I didn't see anywhere to share the plugin I created for my multipart work (see https://issues.apache.org/jira/browse/SOLR-380 for more). So I created one here: http://wiki.apache.org/solr/SolrPluginRepository . I'm open to other ways of sharing plugins. Tricia
Re: SolrPluginRepository
Nice, nice. I think that's what contrib/ is for, among other things, couldn't we use that? Otis -- Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch - Original Message > From: Ryan McKinley <[EMAIL PROTECTED]> > To: solr-user@lucene.apache.org > Sent: Thursday, October 2, 2008 2:46:20 PM > Subject: Re: SolrPluginRepository > > Thanks! > > If there is interest, we could start a non-apache project for plugins > that don't make sense in core or contrib... > > Apache Wicket has a project called "Wicket Stuff" on sourceforge that > is a repository for non-core components. This is where components > linking to non-Apache compatible libraries live and also has very low > barrier to entry (anyone can have commit rights). > > I can also run live demos on solrstuff.org... for solrjs, we are > running: > http://example.solrstuff.org/solrjs/ > > (a solrjs example page should be up shortly) > > ryan > > > On Oct 2, 2008, at 2:02 PM, Tricia Williams wrote: > > > Hi All, > > > > I didn't see anywhere to share the plugin I created for my > > multipart work (see https://issues.apache.org/jira/browse/SOLR-380 > > for more). So I created one here: > http://wiki.apache.org/solr/SolrPluginRepository > > . I'm open to other ways of sharing plugins. > > > > Tricia
Re: Using filter to search in SOLR 1.3 with solrj
i see, ..would be nice to build component within the code.. programmatically...rather than as a component to add to the configuration file..but i will read the docs on how to do this. thanks On Thu, Oct 2, 2008 at 2:37 PM, Ryan McKinley <[EMAIL PROTECTED]> wrote: > > On Oct 2, 2008, at 2:24 PM, Jeryl Cook wrote: > >> I don't have issues adding a filter query to a "SolrQuery"... >> >> i guess ill look at the source code, i just need to pass the a custom >> Filter object at runtime before i execute a search using the >> SolrServer.. >> currently this is all i can do the below with SOLR... >> SolrServer.query(customScoreQuery); >> >> i need a method that would accept this: >> searcher.search(customScoreQuery, myfilter ); , like i am able todo >> using lucene searcher. >> > > aaah -- that lands you in custom plugin territory... > > perhaps look at building a QueryComponent > > ryan > > -- Jeryl Cook /^\ Pharaoh /^\ http://pharaohofkush.blogspot.com/ "Whether we bring our enemies to justice, or bring justice to our enemies, justice will be done." --George W. Bush, Address to a Joint Session of Congress and the American People, September 20, 2001
Re: SolrPluginRepository
Yes, contrib should be for anything general and fits within apache guidelines. SOLR-380 may belong as a contrib (or core) -- i have not looked at it. Just throwing it out there as an option with fewer restrictions. In particular it would be nice to have off the shelf plugins that can work with: * hibernate (LGPL) * geotools (LGPL) * perhaps LingPipe * ... ryan On Oct 2, 2008, at 2:57 PM, Otis Gospodnetic wrote: Nice, nice. I think that's what contrib/ is for, among other things, couldn't we use that? Otis -- Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch - Original Message From: Ryan McKinley <[EMAIL PROTECTED]> To: solr-user@lucene.apache.org Sent: Thursday, October 2, 2008 2:46:20 PM Subject: Re: SolrPluginRepository Thanks! If there is interest, we could start a non-apache project for plugins that don't make sense in core or contrib... Apache Wicket has a project called "Wicket Stuff" on sourceforge that is a repository for non-core components. This is where components linking to non-Apache compatible libraries live and also has very low barrier to entry (anyone can have commit rights). I can also run live demos on solrstuff.org... for solrjs, we are running: http://example.solrstuff.org/solrjs/ (a solrjs example page should be up shortly) ryan On Oct 2, 2008, at 2:02 PM, Tricia Williams wrote: Hi All, I didn't see anywhere to share the plugin I created for my multipart work (see https://issues.apache.org/jira/browse/SOLR-380 for more). So I created one here: http://wiki.apache.org/solr/SolrPluginRepository . I'm open to other ways of sharing plugins. Tricia
Re: complex XML structure problem
Bok Otis, I was thinking about this approach, but was wondering if there is more elegant approach where I wouldn't have to recreate logic for proximity and quoted complex queries (identification of neighbor hits and quote queries for highlighting and positioning on image). If nobody comes up with better approach, I will use something similar as you described. Thanks for fast response :) Kind Regards, Saša On Thu, Oct 2, 2008 at 5:51 PM, Otis Gospodnetic <[EMAIL PROTECTED] > wrote: > Bok Saša, > > It sounds like you need to keep per-word metadata, plus the raw content so > you can full-text search it. > If so, consider keeping the meta data elsewhere - e.g. different index, > external DB, etc. > For full-text search you probably want to index the full content, something > like: > > article > Une date.. > 123 > > > You could create another index with words and each word Document have an ID > of their "parent" (e.g. the article's ID), so you do a query against the > above index, get the IDs of matches, and then get words for those matches. > Of course, you can also use a RDBMS or some other storage for the second > part. > > Otis > -- > Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch > > > > - Original Message > > From: Saša Mutić <[EMAIL PROTECTED]> > > To: solr-user@lucene.apache.org > > Sent: Thursday, October 2, 2008 6:14:14 AM > > Subject: complex XML structure problem > > > > Hello, > > > > I would appreciate any suggestions on solving following problem: > > > > I'm trying to index newspaper. After processing logical structure and > > articles, I have similar structure to this... > > > > > > date="18560301"> > > > > type="TEXT" cont="0"/> > > > > type="TEXT" cont="0"/> > > > > type="TEXT" cont="0"/> > > ... > > > > date="18560301"> > > > > type="ADVERTISEMENT" cont="0"/> > > ... > > > > Obviously, I would like to have all the benefits of full-text search with > > proximity and other advanced options. > > After going through SCHEMA.XML and docs, I can see that I should split > each > > "word" into something like this... > > > > ARTICLE > > 201 > > 5 > > 6 > > 18560301 > > Une > > 1137 > > 147 > > 1665 > > 951 > > 1 > > TEXT > > 0 > > > > > > However, if I use this approach, it seems like I lost some core > > functionality of search... > > > > - multiword searching ? For example searching for "Une date" ? Since each > > word is treated as standalone document ? > > > > - Proximity search ? > > > > ... and so on. > > > > So I guess this approach isn't solution to my goal. Does anyone have some > > recommendations on how to solve this ? > > > > Goal would be to receive results that would have mentioned "attributes" > for > > each hit...so for previous example "Une date", I would receive hits with > all > > attributes that would allow me to correctly position them on image > (t,l,b,r > > as coordinates for example). > > > > Kind Regards, > > > > Sasha > >
Re: solr on ubuntu 8.04
I had absolutely not luck with the jetty-solr package on Ubuntu 8.04. I haven't tried Tomcat for solr. I do have it running on Ubuntu though. Here's what I did. Hope this helps. Don't do this unless you understand the steps. When I say things like 'remove contents' I don't know what you have in there. But, if you don't have jetty or solr yet, you'll probably be safe 1) wget http://www.trieuvan.com/apache/lucene/solr/1.3.0/apache-solr-1.3.0.tgz 2) wget http://dist.codehaus.org/jetty/jetty-6.1.11/jetty-6.1.11.zip (possibily something newer will work, but so does this.) 3) Get Java 1.6 4) tar -zxvf jetty-6.1.11.zip into /opt 5) ln -s /opt/jetty-6.1.11 jetty 6) remove contents of /opt/jetty/context 7) mkdir /opt/jetty/context/solr 8) tar -zxvf apache-solr-1.3.0.tgz into /opt/jetty/context/solr 9) create a file called /opt/jetty/contexts/jetty-solr.xml with the following contents Change the path to jetty home to you're jetty home. My example here shows /var/solr/home http://jetty.mortbay.org/configure.dtd";> org.mortbay.jetty.webapp.WebInfConfiguration org.mortbay.jetty.plus.webapp.EnvConfiguration org.mortbay.jetty.plus.webapp.Configuration org.mortbay.jetty.webapp.JettyWebXmlConfiguration org.mortbay.jetty.webapp.TagLibConfiguration /solr /opt/jetty/contexts/solr/ false false /opt/jetty/contexts/solr/WEB-INF/web.xml solr/home /var/solr/home 10) start jetty a) java -jar start.jar etc/jetty.xml while developing b) nohup java -jar start.jar etc/jetty.xml > /dev/null 2>&1 & to make it quiet and headless. I know this doesn't help you get it going on Tomcat. But, perhaps you should considering Jetty. It works under Ubuntu fine. (Note this is a developoment setup. I don't have a production tested setup yet. I hope it's not too different, but I thought I shoudl mention that.) Hope this helps someone cheers gene On Fri, Oct 3, 2008 at 4:48 AM, Tricia Williams <[EMAIL PROTECTED]> wrote: > I haven't tried installing the ubuntu package, but the releases from > apache.org come with an example that contains a directory called "solr" > which contains a directory called "conf" where schema.xml and solrconfig.xml > are important. Is it possible these files do not exist in the path? > > Tricia > > Jack Bates wrote: >> >> No sweat - did you install the Ubuntu solr package or the solr.war from >> http://lucene.apache.org/solr/? >> >> When you say it doesn't work, what exactly do you mean? >> >> On Thu, 2008-10-02 at 07:43 -0700, [EMAIL PROTECTED] wrote: >> >>> >>> Hi Jack, >>> Really I would love if you could help me about it ... and tell me what >>> you have in your file >>> ./var/lib/tomcat5.5/webapps >>> ./usr/share/tomcat5.5/webapps >>> >>> It doesn't work I dont know why :( >>> Thanks a lot >>> Johanna >>> >>> Jack Bates-2 wrote: >>> Thanks for your suggestions. I have now tried installing Solr on two different machines. On one machine I installed the Ubuntu solr-tomcat5.5 package, and on the other I simply dropped "solr.war" into /var/lib/tomcat5.5/webapps Both machines are running Tomcat 5.5 I get the same error message on both machines: SEVERE: Exception starting filter SolrRequestFilter java.lang.NoClassDefFoundError: Could not initialize class org.apache.solr.core.SolrConfig The full error message is attached. I can confirm that the /usr/share/solr/WEB-INF/lib/apache-solr-1.2.0.jar jar file contains: org/apache/solr/core/SolrConfig.class - however I do not know why Tomcat does not find it. Thanks again, Jack > > Hardy has solr packages already. 
You might want to look how they > packaged > solr if you cannot move to that version. > Did you just drop the war file? Or did you use JNDI? You probably need > to > configure solr/home, and maybe fiddle with > securitymanager stuff. > > Albert > > On Thu, May 1, 2008 at 6:46 PM, Jack Bates freezone.co.uk> > wrote: > > >> >> I am trying to evaluate Solr for an open source records management >> project to which I contribute: http://code.google.com/p/qubit-toolkit/ >> >> I installed the Ubuntu solr-tomcat5.5 package: >> http://packages.ubuntu.com/hardy/solr-tomcat5.5 >> >> - and pointed my browser at: http://localhost:8180/solr/admin (The >> Ubuntu and Debian Tomcat packages run on port 8180) >> >> However, in response I get a Tomcat 404: The requested >> resource(/solr/admin) is not available. >> >> This differs from the response I get accessing a random URL: >> http://localhost:8180/foo/bar >> >> - which displays a blank page. >> >> From this I gather that the solr-tomcat5.5 package installed >> *something*, but that it's misconfigur
Re: solr filesystem dependencies
Anyone? On Thu, Sep 25, 2008 at 2:58 PM, Erlend Hamnaberg <[EMAIL PROTECTED]> wrote: > Hi list. > I am using the EmbeddedSolrServer to embed solr in my application, however > I have run into a snag. > > The only filesystem dependency that I want is the index itself. > > The current implementation of the SolrResource seems to suggest that i need > a filesystem dependency to keep my configuration in. > I manged to work around this using the code below, but it feels kind of > wrong. > > > SolrConfig config = new SolrConfig(null, null, > getClass().getResourceAsStream(SOLR_CONFIG)); > IndexSchema schema = new IndexSchema(config, null, > getClass().getResourceAsStream(SOLR_SCHEMA)); > > CoreContainer coreContainer = new CoreContainer(); > > SolrCore core = new SolrCore("EMS", indexPath.getAbsolutePath(), > config, schema, new CoreDescriptor(coreContainer, "EMS", SOLR_BASE)); > coreContainer.register("EMS", core, false); > SolrServer solrServer = new EmbeddedSolrServer(coreContainer, > "EMS"); > > > Is there a recommended way of embedding the solr server? > > > Thanks > > - Erlend >
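For anyone landing on this thread, here is Erlend's workaround repackaged as a self-contained class. The constructor calls are exactly the ones from the post above; the classpath resource names, core name, and index path are placeholders.

    import java.io.File;

    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
    import org.apache.solr.core.CoreContainer;
    import org.apache.solr.core.CoreDescriptor;
    import org.apache.solr.core.SolrConfig;
    import org.apache.solr.core.SolrCore;
    import org.apache.solr.schema.IndexSchema;

    public class ClasspathEmbeddedSolr {
        private static final String SOLR_CONFIG = "/solrconfig.xml"; // on the classpath
        private static final String SOLR_SCHEMA = "/schema.xml";     // on the classpath
        private static final String SOLR_BASE   = "solr";            // nominal instance dir

        public SolrServer create(File indexPath) throws Exception {
            // Load config and schema from the classpath instead of a solr.home directory.
            SolrConfig config = new SolrConfig(null, null,
                    getClass().getResourceAsStream(SOLR_CONFIG));
            IndexSchema schema = new IndexSchema(config, null,
                    getClass().getResourceAsStream(SOLR_SCHEMA));

            CoreContainer coreContainer = new CoreContainer();
            SolrCore core = new SolrCore("EMS", indexPath.getAbsolutePath(),
                    config, schema, new CoreDescriptor(coreContainer, "EMS", SOLR_BASE));
            coreContainer.register("EMS", core, false);

            return new EmbeddedSolrServer(coreContainer, "EMS");
        }
    }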
Re: solr on ubuntu 8.04
SEVERE: Exception starting filter SolrRequestFilter > java.lang.NoClassDefFoundError: Could not initialize class > org.apache.solr.core.SolrConfig btw, this looks like you are you using current 1.3 or head versions of classes in Schema.xml or solrconfig.xml, but you are running on a 1.2 version of solr. Perhaps if you look up the output a bit you'll see it finding and loading these files. And then blowing out on one of them. You basically need to get the jar files with the support you have. The easiest way I know is get 1.3. I'm still quite novice with solr admin, but this one I've seen before. Hope this helps. gene On Fri, Oct 3, 2008 at 10:14 AM, ristretto. rb <[EMAIL PROTECTED]> wrote: > I had absolutely not luck with the jetty-solr package on Ubuntu 8.04. > I haven't tried Tomcat for solr. > I do have it running on Ubuntu though. Here's what I did. Hope this > helps. Don't do this unless you > understand the steps. When I say things like 'remove contents' I > don't know what you have in there. > But, if you don't have jetty or solr yet, you'll probably be safe > > 1) wget > http://www.trieuvan.com/apache/lucene/solr/1.3.0/apache-solr-1.3.0.tgz > 2) wget http://dist.codehaus.org/jetty/jetty-6.1.11/jetty-6.1.11.zip > (possibily something newer will work, but so does this.) > 3) Get Java 1.6 > 4) tar -zxvf jetty-6.1.11.zip into /opt > 5) ln -s /opt/jetty-6.1.11 jetty > 6) remove contents of /opt/jetty/context > 7) mkdir /opt/jetty/context/solr > 8) tar -zxvf apache-solr-1.3.0.tgz into /opt/jetty/context/solr > 9) create a file called /opt/jetty/contexts/jetty-solr.xml with the > following contents > > Change the path to jetty home to you're jetty home. My example here > shows /var/solr/home > > > "http://jetty.mortbay.org/configure.dtd";> > > > > > > > > > > org.mortbay.jetty.webapp.WebInfConfiguration > org.mortbay.jetty.plus.webapp.EnvConfiguration > org.mortbay.jetty.plus.webapp.Configuration > org.mortbay.jetty.webapp.JettyWebXmlConfiguration > org.mortbay.jetty.webapp.TagLibConfiguration > > > >/solr >/opt/jetty/contexts/solr/ >false >false >/opt/jetty/contexts/solr/WEB-INF/web.xml > > > >solr/home >/var/solr/home > > > > > > > > 10) start jetty > a) java -jar start.jar etc/jetty.xml while developing > b) nohup java -jar start.jar etc/jetty.xml > /dev/null 2>&1 & to > make it quiet and headless. > > I know this doesn't help you get it going on Tomcat. But, perhaps you > should considering Jetty. It works under Ubuntu fine. (Note this is > a developoment setup. I don't have a production tested setup yet. I > hope it's not too different, but I thought I shoudl mention that.) > > Hope this helps someone > > cheers > gene > > On Fri, Oct 3, 2008 at 4:48 AM, Tricia Williams > <[EMAIL PROTECTED]> wrote: >> I haven't tried installing the ubuntu package, but the releases from >> apache.org come with an example that contains a directory called "solr" >> which contains a directory called "conf" where schema.xml and solrconfig.xml >> are important. Is it possible these files do not exist in the path? >> >> Tricia >> >> Jack Bates wrote: >>> >>> No sweat - did you install the Ubuntu solr package or the solr.war from >>> http://lucene.apache.org/solr/? >>> >>> When you say it doesn't work, what exactly do you mean? >>> >>> On Thu, 2008-10-02 at 07:43 -0700, [EMAIL PROTECTED] wrote: >>> Hi Jack, Really I would love if you could help me about it ... 
and tell me what you have in your file ./var/lib/tomcat5.5/webapps ./usr/share/tomcat5.5/webapps It doesn't work I dont know why :( Thanks a lot Johanna Jack Bates-2 wrote: > > Thanks for your suggestions. I have now tried installing Solr on two > different machines. On one machine I installed the Ubuntu solr-tomcat5.5 > package, and on the other I simply dropped "solr.war" > into /var/lib/tomcat5.5/webapps > > Both machines are running Tomcat 5.5 > > I get the same error message on both machines: > > SEVERE: Exception starting filter SolrRequestFilter > java.lang.NoClassDefFoundError: Could not initialize class > org.apache.solr.core.SolrConfig > > The full error message is attached. > > I can confirm that the /usr/share/solr/WEB-INF/lib/apache-solr-1.2.0.jar > jar file contains: org/apache/solr/core/SolrConfig.class > - however I do not know why Tomcat does not find it. > Thanks again, Jack > > >> >> Hardy has solr packages already. You might want to look how they >> packaged >> solr if you cannot move to that version. >> Did you just drop the war file? Or did you use JNDI? You probably need >> to >> configure solr/home, and maybe fiddle with >> securitymanager stuff. >> >> Albert >> >> On
Re: delta-import ??
: But now I just restart tomcat and it stay stuck on this : and minute just : increase and nothing is hit anymore every 5 minutes, where does that come : from ? Nothing change except minute are you sure your cron is still running? does the access log for tomcat indicate that the path /solr/books/dataimport is getting a request every 5 minutes? what is showing up in your error logs? : : : 0:31:32.139 : 246 : 70049 : 34 : 0 : 2008-10-01 10:00:01 : 2008-10-01 10:00:01 : 2008-10-01 10:00:38 : 2008-10-01 10:00:38 : 69012 : : : What should i do ? leave it or stop it or ?? : Thanks a lot guys for your help, : : : -- : View this message in context: http://www.nabble.com/delta-importtp19756538p19756538.html : Sent from the Solr - User mailing list archive at Nabble.com. : -Hoss
Re: Check solr/home property
: : I don't know why in my logs I've this error: : : Could not start SOLR. Check solr/home property java.lang.RuntimeException: : Can't find resource 'solrconfig.xml' in classpath or 'solr/conf/', what gets logged before and after that? what is the full error with the full stack trace? ... it's possible that you aren't specifying the Solr Home properly -- or it's possible you are specifying it properly, but there is a severe error in your configs ... we can't know without seeing the full error. -Hoss
Re: Dismax , "query phrases"
: > how would it fit c:"some phrase" into that structure? : : does this make sense? : : ( (a:some | b:some ) (a:phrase | b:phrase) ( c:"some phrase") ) that's pretty much exactly what pf does, the only distinction is you get... +( (a:some | b:some ) (a:phrase | b:phrase) ) ( c:"some phrase" ) ...where the "mm" param only applies to the (mandatory) boolean built using the qf. -Hoss
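For readers following along with SolrJ, a sketch of the kind of dismax request being discussed; the handler name "dismax" and the fields a, b, and c simply follow the example above and are not from a real schema:

    import org.apache.solr.client.solrj.SolrQuery;

    public class DismaxExample {
        public static SolrQuery build() {
            SolrQuery q = new SolrQuery("some phrase");  // the raw user query
            q.setQueryType("dismax");   // assumes a dismax request handler registered as "dismax"
            q.set("qf", "a b");         // builds the per-word disjunctions (a:some | b:some) (a:phrase | b:phrase)
            q.set("pf", "c");           // adds the optional phrase clause c:"some phrase"
            q.set("mm", "100%");        // mm constrains only the mandatory boolean built from qf
            return q;
        }
    }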
DIH - Full imports + ?entity=param
Just curious, Currently a full-import call does a delete all even when appending an entity param ... wouldn't it be possible to pick up the param and just delete on that entity somehow? It would be nice if there was something involved w/ having an entity field name that worked w/ DIH to do some better introspection like that ... Is that something which is currently doable? Thanks. - Jon
Re: Luke not working with Solr 1.3 index
: Subject: Luke not working with Solr 1.3 index : In-Reply-To: <[EMAIL PROTECTED]> http://people.apache.org/~hossman/#threadhijack Thread Hijacking on Mailing Lists When starting a new discussion on a mailing list, please do not reply to an existing message, instead start a fresh email. Even if you change the subject line of your email, other mail headers still track which thread you replied to and your question is "hidden" in that thread and gets less attention. It makes following discussions in the mailing list archives particularly difficult. See Also: http://en.wikipedia.org/wiki/Thread_hijacking -Hoss
Re: termFreq always = 1 ?
: Yes this may be my problem, : : But is there any solution to have only one "men" keyword indexed when i've : got something like this : SOLR-739 is working towards a new omitTf option for fields (taking advantage of a Lucene optimization for this case) but in the meantime the best options i can think of are 1) a custom TokenFilter that keeps track of every token it's ever seen and removes *all* dups 2) a custom Similarity with a tf() func that returns a constant value regardless of the input. (the termFreq stored in the index will be the same, but the scores will be equivalent) -Hoss
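Option 2 is only a few lines; a sketch against the Lucene 2.x API (the class name is made up), which would then be referenced from a <similarity> element at the bottom of schema.xml:

    import org.apache.lucene.search.DefaultSimilarity;

    /**
     * Scores every term as if it occurred once. Term frequencies are still
     * stored in the index; only the tf() contribution to the score changes.
     */
    public class ConstantTfSimilarity extends DefaultSimilarity {
        @Override
        public float tf(float freq) {
            return freq > 0 ? 1.0f : 0.0f;
        }
    }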
Re: Searching with Wildcards
: What would be the scope of the work to implement Erik's suggestion, I : would have to ask my boss, but I think we would then contribute the code : back to Solr. The QParser modifications would be fairly straightforward -- adding some setters to SolrQueryParser to set booleans telling it when to use its superclass's behavior by default, and when to use its "filter" based approaches for prefixes, wildcards, and range queries. Then add request param checks to LuceneQParserPlugin to decide when/if to call those setters... : This should probably be continued on solr-dev, right? ...or in this issue... https://issues.apache.org/jira/browse/SOLR-218 ...but this still wouldn't address the core risk that using the non-Filter based versions might cause TooManyClauses (or OOM if you set the clause limit really high) ... hence Mark's comment that ultimately the *right* thing to do is find a way to make all of these ConstantScore queries play nicely with the highlighter ... which is a discussion for [EMAIL PROTECTED] : On Wednesday, 17.09.2008, at 17:19 -0400, Mark Miller wrote: : > Alas no, the queryparser now uses an unhighlightable constantscore : > query. I'd personally like to make it work at the Lucene level, but not : > sure how thats going to proceed. The tradeoff is that you won't have max : > boolean clause issues and wildcard searches should be faster. It is a : > bummer though. -Hoss
Re: using DataImportHandler instead of POST?
: I chugg away at 1.5 million records in a single file, but solr never : commits. specifically, it ignores my <autoCommit> settings. (I can : commit separately at the end, of course :) the way the autocommit settings work is something i always get confused by -- the autocommit logic may not kick in until the <add> is finished, regardless of how many docs are in it -- but i'm not certain (and if i'm correct, i'm not sure if that's a bug or a feature) this may be a motivating reason to use DIH in your use case even though you've already got it in the XmlUpdateRequestHandler format. : but I might be misunderstanding autocommit. I have it set as the : default solrconfig.xml does, in the updateHandler section (mapped to : UpdateHandler2) but /update is mapped to XmlUpdateRequestHandler. : should I be shuffling some things around? due to some unfortunate naming decisions several years ago, an "update Handler" and a "Request handler" that does updates aren't the same thing ... the <updateHandler> (which would always be DirectUpdateHandler2) is the low level internal code that is responsible for actually making the index modifications -- XmlUpdateRequestHandler (or DataImportHandler) parses the raw input and hands off to DirectUpdateHandler2 to make the changes. -Hoss
Re: SolrPluginRepository
On Oct 2, 2008, at 3:38 PM, Ryan McKinley wrote: Yes, contrib should be for anything general and fits within apache guidelines. SOLR-380 may belong as a contrib (or core) -- i have not looked at it. Just throwing it out there as an option with fewer restrictions. In particular it would be nice to have off the shelf plugins that can work with: * hibernate (LGPL) * geotools (LGPL) There are some options we have for dealing w/ LGPL. For instance, I believe Lucene has some contribs that rely on LGPL, and it just provides the means for obtaining those libraries via the build script. So, we could do that if needed, but obviously, it's a case-by- case basis. * perhaps LingPipe * ... ryan On Oct 2, 2008, at 2:57 PM, Otis Gospodnetic wrote: Nice, nice. I think that's what contrib/ is for, among other things, couldn't we use that? Otis -- Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch - Original Message From: Ryan McKinley <[EMAIL PROTECTED]> To: solr-user@lucene.apache.org Sent: Thursday, October 2, 2008 2:46:20 PM Subject: Re: SolrPluginRepository Thanks! If there is interest, we could start a non-apache project for plugins that don't make sense in core or contrib... Apache Wicket has a project called "Wicket Stuff" on sourceforge that is a repository for non-core components. This is where components linking to non-Apache compatible libraries live and also has very low barrier to entry (anyone can have commit rights). I can also run live demos on solrstuff.org... for solrjs, we are running: http://example.solrstuff.org/solrjs/ (a solrjs example page should be up shortly) ryan On Oct 2, 2008, at 2:02 PM, Tricia Williams wrote: Hi All, I didn't see anywhere to share the plugin I created for my multipart work (see https://issues.apache.org/jira/browse/SOLR-380 for more). So I created one here: http://wiki.apache.org/solr/SolrPluginRepository . I'm open to other ways of sharing plugins. Tricia -- Grant Ingersoll Lucene Helpful Hints: http://wiki.apache.org/lucene-java/BasicsOfPerformance http://wiki.apache.org/lucene-java/LuceneFAQ
Re: SolrPluginRepository
On Oct 2, 2008, at 11:03 PM, Grant Ingersoll wrote: On Oct 2, 2008, at 3:38 PM, Ryan McKinley wrote: Yes, contrib should be for anything general and fits within apache guidelines. SOLR-380 may belong as a contrib (or core) -- i have not looked at it. Just throwing it out there as an option with fewer restrictions. In particular it would be nice to have off the shelf plugins that can work with: * hibernate (LGPL) * geotools (LGPL) There are some options we have for dealing w/ LGPL. For instance, I believe Lucene has some contribs that rely on LGPL, and it just provides the means for obtaining those libraries via the build script. So, we could do that if needed, but obviously, it's a case-by-case basis. ivy may be a good thing to look at... it will download dependencies for you and fits in with ant quite nicely. ryan
Re: DIH - Full imports + ?entity=param
DIH does not know which rows were created by that entity, so we do not really have any knowledge of how to delete specific rows. How about passing a deleteQuery=type:x in the request params, or having a deleteByQuery on each top-level entity which can be used when that entity is doing a full-import? --Noble On Fri, Oct 3, 2008 at 4:32 AM, Jon Baer <[EMAIL PROTECTED]> wrote: > Just curious, > > Currently a full-import call does a delete all even when appending an entity > param ... wouldn't it be possible to pick up the param and just delete on > that entity somehow? It would be nice if there was something involved w/ > having an entity field name that worked w/ DIH to do some better > introspection like that ... > > Is that something which is currently doable? > > Thanks. > > - Jon > > > -- --Noble Paul
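Neither suggested parameter exists yet, but something similar should already be possible with the existing entity and clean parameters, assuming each document carries a field identifying the entity that produced it (for example added with a TemplateTransformer) and assuming SolrJ routes a qt value starting with "/" to that handler path. The field and handler names below are hypothetical:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.SolrServer;

    public class SingleEntityReimport {
        public static void reimport(SolrServer server, String entityName) throws Exception {
            // Delete only this entity's documents; assumes an "entity" field on every doc.
            server.deleteByQuery("entity:" + entityName);

            SolrQuery cmd = new SolrQuery();
            cmd.setQueryType("/dataimport");   // send the request to the DIH handler
            cmd.set("command", "full-import");
            cmd.set("entity", entityName);     // run only this entity
            cmd.set("clean", "false");         // do NOT let DIH delete *:* first
            cmd.set("commit", "true");
            server.query(cmd);
        }
    }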
RE: How to select one entity at a time?
Hi Burnell As we know in the real enterprise application the queries will be always complex than what I have posted here. That time I fear, this approach may not be sufficient. Especially when the query has to handle multiple conditions or joins or more complex operations like that. So I suppose if there is a way by which we can handle "both" just like "user", that will be a more stable approach. Thanks for your suggestion con Neville Burnell wrote: > > Hi Con, > > I'm not sure if you need the 'both' entity. > > For example, perhaps the following query will work for you to retrieve > both? > > http://localhost:8983/solr/select/?q=bob AND (rowtype:user OR > rowtype:manager)&version=2.2&start=0&rows=10&indent=on > > > >> -Original Message- >> From: con [mailto:[EMAIL PROTECTED] >> Sent: Wednesday, 1 October 2008 7:54 PM >> To: solr-user@lucene.apache.org >> Subject: RE: How to select one entity at a time? >> >> >> And finally its almost fine.. :jumping: :jumping: :jumping: >> Thanks Burnell. >> >> My tables are in Oracle DB. >> So based on your suggestions the changes made are: >> 1) In the data-config.xml, >> > query="select * >> from USER"> >> >> >>> query="select * >> from MANAGER"> >> >> >> >> 2) In schema.xml >> > required="true" /> >> >> And when I call http://localhost:8983/solr/select/?q=(bob AND >> rowtype:user)&version=2.2&start=0&rows=10&indent=on >> only the user entity's values are returned and if I use manager, only >> the >> manager's values will be returned... >> It almost solved my issues.(95%). >> >> But I am struck at one point. >> In the data-config.xml I have an entry like, >> >> > query="select * >> from USER , MANAGER where USER.userID = MANAGER.userID "> >> >> >> when i try to search like, http://localhost:8983/solr/select/?q=(150 >> AND >> rowtype:both)&version=2.2&start=0&rows=10&indent=on >> will return zero responces. >> At the same time if I run >> http://localhost:8983/solr/select/?q=150&version=2.2&start=0&rows=10&in >> dent=on >> It will get values from both USER and MANAGER satisfying the condition. >> Is there any difference in applying transformer in such type of >> queries. >> >> Once again, >> Thanks a lot:handshake: >> >> Expecting reply >> con >> >> >> >> >> >> >> >> >> >> Neville Burnell wrote: >> > >> > BTW, You will also need to configure your schema.xml to index [and >> store?] >> > the rowtype attribute: >> > >> >> > required="true" /> >> > >> > Or alternatively change rowtype to be say rowtype_s to take advantage >> of >> > Solr's dynamic field definitions. >> > >> >> -Original Message- >> >> From: Neville Burnell [mailto:[EMAIL PROTECTED] >> >> Sent: Wednesday, 1 October 2008 6:06 PM >> >> To: solr-user@lucene.apache.org >> >> Subject: RE: How to select one entity at a time? >> >> >> >> Hi Con, >> >> >> >> what RDBMS are you using? >> >> >> >> This looks like a SQL syntax problem, perhaps the 'literal as >> column' >> >> is not right for your setup [while it works for my MS SQL Server]. >> >> >> >> An alternative to supplying the "rowtype" attribute as a literal in >> the >> >> SQL clause is to use a Solr DIH Template Transformer >> >> http://wiki.apache.org/solr/DataImportHandler#transformer >> >> >> >> This should allow you to keep the working SQL. For example >> >> >> >> >> >> >> >> >> >> >> >> >> >> > -Original Message- >> >> > From: con [mailto:[EMAIL PROTECTED] >> >> > Sent: Wednesday, 1 October 2008 5:48 PM >> >> > To: solr-user@lucene.apache.org >> >> > Subject: RE: How to select one entity at a time? 
>> >> > >> >> > >> >> > That is exactly what my problem is.:handshake: >> >> > Thanks for you reply. >> >> > >> >> > But I tried your suggestion: >> >> > Updated the data-config.xml as; >> >> > >> >> > >> >> > >> >> > >> >> > But when I perform the full import itself, it is throwing >> exception, >> >> > >> >> >SEVERE: Exception while processing: user document : >> >> > SolrInputDocumnt[{}] >> >> > org.apache.solr.handler.dataimport.DataImportHandlerException: >> >> > Unable to >> >> > execute >> >> > query: select 'user' as rowtype,* from USER Processing >> Document # >> >> 1 >> >> > ... >> >> > So, as expected, when I go to search it is giving- undefined field >> >> > rowtype- >> >> > error.!!! >> >> > Do I need to update any other files or fields? >> >> > >> >> > I am happy that it worked for you...:jumping::jumping: >> >> > Looking forward for your reply >> >> > Thanks >> >> > con >> >> > >> >> > >> >> > >> >> > >> >> > Neville Burnell wrote: >> >> > > >> >> > > Hi, >> >> > > >> >> > >> But while performing a search, if I want to search only the >> data >> >> > from >> >> > >> USER table, how can I acheive it. >> >> > > >> >> > > In my app+solr index, we solved this problem by "tagging" >> entities >> >> > with a >> >> > > "rowtype" attribute, something like this: >> >> > > >> >> > > >> >> > > >> >> > > >> >> >
More than one Tokenizer
Hi, Can we use more than one tokenizer factory for a single field? If we do, what will happen? In what situations might this be needed? Thanks in advance -sanraj -- View this message in context: http://www.nabble.com/More-than-one-Tokenizer-tp19792894p19792894.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Check solr/home property
Thanks Hoss, I thought it might come from tomcat if it doesn't find it : [EMAIL PROTECTED]:/etc/tomcat5.5/Catalina/localhost# ls solr.xml [EMAIL PROTECTED]:/var/lib/tomcat5.5/webapps# ls solr solr.war I have solrconfig.xml in my folder /data/solr/books/conf/ and I've multicore.xml in /data/solr/ solr.xml 288K(298688K)] 5121K->288K(2005376K), 0.0021570 secs] [Times: user=0.00 sys=0.00, real=0.01 secs] 0.060: [Full GC (System) [PSYoungGen: 288K->0K(298688K)] [PSOldGen: 0K->180K(1706688K)] 288K->180K(2005376K) [PSPermGen: 3002K->3002K(21248K)], 0.0067110 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] Oct 3 08:53:42 solr-test jsvc.exec[24013]: Oct 3, 2008 8:53:42 AM org.apache.catalina.core.AprLifecycleListener lifecycleEvent INFO: The Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: /usr/java/packages/lib/amd64:/lib:/usr/lib Oct 3 08:53:42 solr-test jsvc.exec[24013]: Oct 3, 2008 8:53:42 AM org.apache.coyote.http11.Http11BaseProtocol init INFO: Initializing Coyote HTTP/1.1 on http-8180 Oct 3 08:53:42 solr-test jsvc.exec[24013]: Oct 3, 2008 8:53:42 AM org.apache.catalina.startup.Catalina load INFO: Initialization processed in 350 ms Oct 3 08:53:42 solr-test jsvc.exec[24013]: Oct 3, 2008 8:53:42 AM org.apache.catalina.core.StandardService start INFO: Starting service Catalina Oct 3 08:53:42 solr-test jsvc.exec[24013]: Oct 3, 2008 8:53:42 AM org.apache.catalina.core.StandardEngine start INFO: Starting Servlet Engine: Apache Tomcat/5.5 Oct 3 08:53:42 solr-test jsvc.exec[24013]: Oct 3, 2008 8:53:42 AM org.apache.catalina.core.StandardHost start INFO: XML validation disabled Oct 3 08:53:43 solr-test jsvc.exec[24013]: Oct 3, 2008 8:53:43 AM org.apache.solr.servlet.SolrDispatchFilter init INFO: SolrDispatchFilter.init() Oct 3 08:53:43 solr-test jsvc.exec[24013]: Oct 3, 2008 8:53:43 AM org.apache.solr.core.SolrResourceLoader locateInstanceDir INFO: Using JNDI solr.home: /data/solr Oct 3 08:53:43 solr-test jsvc.exec[24013]: Oct 3, 2008 8:53:43 AM org.apache.solr.servlet.SolrDispatchFilter initMultiCore INFO: looking for multicore.xml: /data/solr/multicore.xml Oct 3 08:53:43 solr-test jsvc.exec[24013]: Oct 3, 2008 8:53:43 AM org.apache.solr.core.SolrResourceLoader locateInstanceDir INFO: Using JNDI solr.home: /data/solr Oct 3 08:53:43 solr-test jsvc.exec[24013]: Oct 3, 2008 8:53:43 AM org.apache.solr.core.SolrResourceLoader INFO: Solr home set to '/data/solr/' Oct 3 08:53:43 solr-test jsvc.exec[24013]: Oct 3, 2008 8:53:43 AM org.apache.solr.core.SolrResourceLoader createClassLoader INFO: Reusing parent classloader Oct 3 08:53:43 solr-test jsvc.exec[24013]: Oct 3, 2008 8:53:43 AM org.apache.solr.servlet.SolrDispatchFilter init SEVERE: Could not start SOLR. 
Check solr/home property java.lang.RuntimeException: Can't find resource 'solrconfig.xml' in classpath or '/data/solr/conf/', cwd=/ ^Iat org.apache.solr.core.SolrResourceLoader.openResource(SolrResourceLoader.java:168) ^Iat org.apache.solr.core.SolrResourceLoader.openConfig(SolrResourceLoader.java:136) ^Iat org.apache.solr.core.Config.(Config.java:97) ^Iat org.apache.solr.core.SolrConfig.(SolrConfig.java:108) ^Iat org.apache.solr.core.SolrConfig.(SolrConfig.java:65) ^Iat org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:89) ^Iat org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:221) ^Iat org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:302) ^Iat org.apache.catalina.core.ApplicationFilterConfig.(ApplicationFilterConfig.java:78) ^Iat org.apache.catalin Oct 3 08:53:43 solr-test jsvc.exec[24013]: text.java:3635) ^Iat org.apache.catalina.core.StandardContext.start(StandardContext.java:4222) ^Iat org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:760) ^Iat org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:740) ^Iat org.apache.catalina.core.StandardHost.addChild(StandardHost.java:544) ^Iat org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:626) ^Iat org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:553) ^Iat org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:488) ^Iat org.apache.catalina.startup.HostConfig.start(HostConfig.java:1138) ^Iat org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:311) ^Iat org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:120) ^Iat org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1022) ^Iat org.apache.catalina.core.StandardHost.start(StandardHost.java:736) ^Iat org.apache.catali Oct 3 08:53:43 solr-test jsvc.exec[24013]: 14) ^Iat org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443) ^Iat org.apache.catalina.core.StandardService.start(StandardService.java:448) ^Iat org.apache.catalina.core.StandardServer.start(StandardServer.java:700) ^Iat org.apache.