Re: Any way to view lucene files
I also tried with Solr 4.2 and with Luke 4.0.0-ALPHA, but got this error:

java.lang.IllegalArgumentException: A SPI class of type org.apache.lucene.codecs.Codec with name 'Lucene42' does not exist. You need to add the corresponding JAR file supporting this SPI to your classpath. The current classpath supports the following names: [Lucene40, Lucene3x, SimpleText, Appending]

With Regards
Aman Tandon

On Sat, Jun 7, 2014 at 12:22 PM, Aman Tandon wrote:
> My Solr version is 4.8.1 and Luke is 3.5.
>
> On Sat, Jun 7, 2014 at 12:21 PM, Chris Collins wrote:
>
>> What version of Solr / Lucene are you using? You have to match the Luke version to the same version of Lucene.
>>
>> C
>> On Jun 6, 2014, at 11:42 PM, Aman Tandon wrote:
>>
>>> Yes, I tried it, but it is not working at all; every time I choose my index directory it shows me a "past EOF" error.
>>>
>>> On Sat, Jun 7, 2014 at 12:01 PM, Chris Collins wrote:
>>>
>>>> Have you tried:
>>>>
>>>> https://code.google.com/p/luke/
>>>>
>>>> Best
>>>> Chris
>>>> On Jun 6, 2014, at 11:24 PM, Aman Tandon wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Is there any way I can view what information is in my _e.fnm and the other index files, perhaps with the help of some application or viewer tool?
Re: Deepy nested structure
You can search for block-join to see how to do this. But you may also find you need to flatten your structures for efficient search.

Regards,
    Alex

On 06/06/2014 8:37 pm, "harikrishna" wrote:
> We need to have a nested structure for the index, and the requirement is as follows:
>
> we have application at root, then customer location, and then we have some entities data
>
> [the nested XML example did not survive the mailing-list archive]
>
> I want to index the data in the above format, and want to retrieve it in the same way. Please help with this.
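For reference, a minimal sketch of the block-join approach Alex mentions, assuming Solr 4.5 or later and invented field names and values (id, type, city); the real schema would differ. Children are indexed inside their parent so the whole family lands in one block:

    <add>
      <doc>
        <field name="id">app-1</field>
        <field name="type">application</field>
        <doc>
          <field name="id">app-1-loc-1</field>
          <field name="type">customerLocation</field>
          <field name="city">Hyderabad</field>
        </doc>
        <doc>
          <field name="id">app-1-loc-2</field>
          <field name="type">customerLocation</field>
          <field name="city">Chennai</field>
        </doc>
      </doc>
    </add>

Parents whose children match can then be fetched with the block-join parent parser, and the children of a matching parent with the child parser:

    q={!parent which="type:application"}city:Hyderabad
    q={!child of="type:application"}id:app-1

Deeper hierarchies (application -> location -> entity) are often easier to handle by flattening the lowest level into the child documents, in line with Alex's advice.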
Re: timeout when create alias
Thank you for your reply.
Re: Deepy nested structure
Solr has only limited, clunky, and quirky support for structure - it works best with flat or denormalized documents.

As a starting point, focus on your queries - what do your users really want to do, what do they expect to do, how do they expect to accomplish it, and how would they like to ask for it? Understand the query side of the equation better, and then the indexing side will make a lot more sense.

For a start, give us a few example queries that concisely express the use cases your users will have: a couple of the simpler queries, some medium-complexity queries, and what you believe are the most complex queries your users are likely to need. Start by expressing them clearly in simple, plain English, unless the structured query is quite obvious.

-- Jack Krupansky

-----Original Message-----
From: harikrishna
Sent: Friday, June 6, 2014 9:35 AM
To: solr-user@lucene.apache.org
Subject: Deepy nested structure

We need to have a nested structure for the index, and the requirement is as follows:

we have application at root, then customer location, and then we have some entities data

[the nested XML example did not survive the mailing-list archive]

I want to index the data in the above format, and want to retrieve it in the same way. Please help with this.
Re: Any way to view lucene files
Did you try Luke 4.7?

> On Jun 6, 2014, at 11:59 PM, Aman Tandon wrote:
>
> I also tried with Solr 4.2 and with Luke 4.0.0-ALPHA, but got this error:
>
> java.lang.IllegalArgumentException: A SPI class of type org.apache.lucene.codecs.Codec with name 'Lucene42' does not exist. You need to add the corresponding JAR file supporting this SPI to your classpath. The current classpath supports the following names: [Lucene40, Lucene3x, SimpleText, Appending]
>
> With Regards
> Aman Tandon
>
> On Sat, Jun 7, 2014 at 12:22 PM, Aman Tandon wrote:
>
>> My Solr version is 4.8.1 and Luke is 3.5.
>>
>> On Sat, Jun 7, 2014 at 12:21 PM, Chris Collins wrote:
>>
>>> What version of Solr / Lucene are you using? You have to match the Luke version to the same version of Lucene.
>>>
>>> C
>>> On Jun 6, 2014, at 11:42 PM, Aman Tandon wrote:
>>>
>>>> Yes, I tried it, but it is not working at all; every time I choose my index directory it shows me a "past EOF" error.
>>>>
>>>> On Sat, Jun 7, 2014 at 12:01 PM, Chris Collins wrote:
>>>>
>>>>> Have you tried:
>>>>>
>>>>> https://code.google.com/p/luke/
>>>>>
>>>>> Best
>>>>> Chris
>>>>> On Jun 6, 2014, at 11:24 PM, Aman Tandon wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Is there any way I can view what information is in my _e.fnm and the other index files, perhaps with the help of some application or viewer tool?
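For the archives: the 'Lucene42' codec error above means the Luke build links an older Lucene than the codec used to write the index. The usual fix is to grab a Luke release built against the same Lucene version as the index (Lucene 4.7/4.8 here) and launch it directly; the jar name below is an assumption, use whatever the matching release is actually called:

    java -jar luke-with-deps.jar
    # then open the index directory in the UI, e.g. <solr_home>/collection1/data/index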
Re: Strange Behavior with Solr in Tomcat.
Thanks, Meraj, that was exactly the issue: setting useColdSearcher to true worked like a charm and the server starts up as usual.

Thanks again!

On Fri, Jun 6, 2014 at 2:42 PM, Meraj A. Khan wrote:
> This looks distinctly related to https://issues.apache.org/jira/browse/SOLR-4408 ; try coldSearcher = true as suggested in the JIRA issue and let us know.
>
> On Fri, Jun 6, 2014 at 2:39 PM, Jean-Sebastien Vachon <jean-sebastien.vac...@wantedanalytics.com> wrote:
>
>> I would try a thread dump and check the output to see what's going on. You could also strace the process if you're running on Unix, or change the log level in Solr to get more information logged.
>>
>>> -----Original Message-----
>>> From: S.L [mailto:simpleliving...@gmail.com]
>>> Sent: June-06-14 2:33 PM
>>> To: solr-user@lucene.apache.org
>>> Subject: Re: Strange Behavior with Solr in Tomcat.
>>>
>>> Anyone, folks?
>>>
>>> On Wed, Jun 4, 2014 at 10:25 AM, S.L wrote:
>>>
>>>> Hi Folks,
>>>>
>>>> I recently started using the spellchecker in my solrconfig.xml. I am able to build up an index in Solr.
>>>>
>>>> But if I ever shut down Tomcat, I am not able to restart it. The server never prints the usual startup time in the logs, nor does it print any error messages in the catalina.out file.
>>>>
>>>> The only way for me to get around this is by deleting the data directory of the index and then starting the server; obviously this makes me lose my index.
>>>>
>>>> Just wondering if anyone has faced a similar issue and whether they were able to solve it.
>>>>
>>>> Thanks.
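For anyone else hitting the same hang: useColdSearcher lives in the <query> section of solrconfig.xml. A minimal sketch of the relevant fragment (the rest of the section is omitted):

    <!-- solrconfig.xml, inside the <query> section -->
    <!-- If a request arrives before the first searcher has finished warming,
         serve it from a cold (unwarmed) searcher instead of blocking.
         The default is false. -->
    <useColdSearcher>true</useColdSearcher>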
Re: Strange Behavior with Solr in Tomcat.
Interesting, thanks for reporting back. I've re-opened SOLR-4408.

On Sat, Jun 7, 2014 at 10:50 PM, S.L wrote:
> Thanks, Meraj, that was exactly the issue: setting useColdSearcher to true worked like a charm and the server starts up as usual.
>
> Thanks again!
>
> On Fri, Jun 6, 2014 at 2:42 PM, Meraj A. Khan wrote:
>
>> This looks distinctly related to https://issues.apache.org/jira/browse/SOLR-4408 ; try coldSearcher = true as suggested in the JIRA issue and let us know.
>>
>> On Fri, Jun 6, 2014 at 2:39 PM, Jean-Sebastien Vachon <jean-sebastien.vac...@wantedanalytics.com> wrote:
>>
>>> I would try a thread dump and check the output to see what's going on. You could also strace the process if you're running on Unix, or change the log level in Solr to get more information logged.
>>>
>>>> -----Original Message-----
>>>> From: S.L [mailto:simpleliving...@gmail.com]
>>>> Sent: June-06-14 2:33 PM
>>>> To: solr-user@lucene.apache.org
>>>> Subject: Re: Strange Behavior with Solr in Tomcat.
>>>>
>>>> Anyone, folks?
>>>>
>>>> On Wed, Jun 4, 2014 at 10:25 AM, S.L wrote:
>>>>
>>>>> Hi Folks,
>>>>>
>>>>> I recently started using the spellchecker in my solrconfig.xml. I am able to build up an index in Solr.
>>>>>
>>>>> But if I ever shut down Tomcat, I am not able to restart it. The server never prints the usual startup time in the logs, nor does it print any error messages in the catalina.out file.
>>>>>
>>>>> The only way for me to get around this is by deleting the data directory of the index and then starting the server; obviously this makes me lose my index.
>>>>>
>>>>> Just wondering if anyone has faced a similar issue and whether they were able to solve it.
>>>>>
>>>>> Thanks.

--
Regards,
Shalin Shekhar Mangar.
SOLR java.io.IOException: cannot uncache file
I get this error while I'm trying to insert the row. Any tips on how to fix this issue?

[qtp191908836-19] ERROR org.apache.solr.core.SolrCore – java.io.IOException: cannot uncache file="_t0_Lucene41_0.pos": it was separately also created in the delegate directory
        at org.apache.lucene.store.NRTCachingDirectory.unCache(NRTCachingDirectory.java:297)
        at org.apache.lucene.store.NRTCachingDirectory.sync(NRTCachingDirectory.java:216)
        at org.apache.lucene.index.IndexWriter.startCommit(IndexWriter.java:4109)
        at org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2809)
        at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2897)
        at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2872)
        at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:549)
        at org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:95)
        at org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:64)
        at org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalCommit(DistributedUpdateProcessor.java:1240)
        at org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1219)
        at org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:157)
        at org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:266)
        at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:173)
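For context, the NRTCachingDirectory in this trace comes from the directoryFactory configured in solrconfig.xml; the stock Solr 4.x example config selects it by default, roughly as sketched below (the standard example line, not necessarily this user's actual config):

    <!-- solrconfig.xml: default directoryFactory in the Solr 4.x example config -->
    <directoryFactory name="DirectoryFactory"
                      class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>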
Re: Shard splitting error: cannot uncache file="_1.nvm"
Were you guys able to fix this issue?
Re: Solr Scale Toolkit Access Denied Error
Hi Mark,

Sorry for the trouble! I've now made the ami-1e6b9d76 AMI public; total oversight on my part :-(. Please try again.

Thanks Hoss for trying to help out on this one.

Cheers,
Tim

On Fri, Jun 6, 2014 at 6:46 PM, Mark Gershman wrote:
> Thanks, Hoss.
>
> I did substitute the previous AMI ID from the mid-May release of the toolkit, and the build process does proceed further; however, it appears that the AMI changed enough that it is not compatible with the new toolkit release. In doing a little more research, I'm inclined to believe that the permissions on the AMI may be the source of the problem, and I will post to the issue tracker per your suggestion.
>
> Mark Gershman
>
> On Fri, Jun 6, 2014 at 7:41 PM, Chris Hostetter wrote:
>
>> : My guess is that the customized toolkit AMI (ami-1e6b9d76) at AWS is not
>> : accessible by my AWS credentials. Is this an AMI permissioning issue, or is
>> : it a problem with my particular account or how it is configured at AWS? I
>> : did not experience this specific problem when working with the previous
>> : iteration of the Solr Scale Toolkit back toward the latter part of May. It
>> : appears that the AMI was updated from ami-96779efe to ami-1e6b9d76 with the
>> : newest version of the toolkit.
>>
>> I'm not much of an AWS expert, but I seem to recall that if you don't have your AWS security group set up properly this type of error can happen? Is it possible that when you were trying out solr-scale-tk before you had this set up, but now you don't?
>>
>> https://github.com/LucidWorks/solr-scale-tk
>>
>>> You'll need to set up a security group named solr-scale-tk (or update the fabfile.py to change the name).
>>>
>>> At a minimum you should allow TCP traffic to ports 8983, 8984-8989, SSH, and 2181 (ZooKeeper). However, it is your responsibility to review the security configuration of your cluster and lock it down appropriately.
>>>
>>> You'll also need to create a keypair (using the Amazon console) named solr-scale-tk (you can rename the key used by the framework; see AWS_KEY_NAME). After downloading the keypair file (solr-scale-tk.pem), save it to ~/.ssh/ and change permissions: chmod 600 ~/.ssh/solr-scale-tk.pem
>>
>> ...if I'm wrong, and there really is a problem with the security on the AMI, the best place to report that would be in the project's issue tracker...
>>
>> https://github.com/LucidWorks/solr-scale-tk/issues
>>
>> -Hoss
>> http://www.lucidworks.com/
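For anyone scripting the security-group and keypair setup Hoss quotes above, here is a hedged AWS CLI sketch of the same steps (the wide-open 0.0.0.0/0 CIDR is for illustration only; review and lock down the rules as the README says):

    # create the security group the toolkit expects
    aws ec2 create-security-group --group-name solr-scale-tk \
        --description "Solr Scale Toolkit"

    # open SSH, ZooKeeper, and the Solr ports mentioned in the README
    for port in 22 2181 8983 8984-8989; do
        aws ec2 authorize-security-group-ingress --group-name solr-scale-tk \
            --protocol tcp --port "$port" --cidr 0.0.0.0/0
    done

    # create the keypair and store it where the toolkit looks for it
    aws ec2 create-key-pair --key-name solr-scale-tk \
        --query 'KeyMaterial' --output text > ~/.ssh/solr-scale-tk.pem
    chmod 600 ~/.ssh/solr-scale-tk.pem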
Re: Mapping a field name before queryParser
Could a custom search components chain be used to modify the request before it hits the query parser? I have never used them myself, but it seems like potentially the right place.

Regards,
   Alex.
Personal website: http://www.outerthoughts.com/
Current project: http://www.solr-start.com/ - Accelerating your Solr proficiency

On Sat, Jun 7, 2014 at 5:10 AM, Jack Krupansky wrote:
> Edismax has field aliasing:
> http://wiki.apache.org/solr/ExtendedDisMax#Field_aliasing_.2F_renaming
>
> f.my_alias.qf=actual_field
>
> f.brand_name.qf=brand
>
> -- Jack Krupansky
>
> -----Original Message-----
> From: Antoine LE FLOC'H
> Sent: Friday, June 6, 2014 5:56 PM
> To: solr-user@lucene.apache.org
> Subject: Mapping a field name before queryParser
>
> Hello,
>
> I have a query like the following, where "brand" is a field in my schema:
>
> select?rows=1&start=0&sort=price+asc&q=brand:sony&qt=for-search&wt=xml
>
> But I want to do this instead:
>
> select?rows=1&start=0&sort=price+asc&q=brand_name:sony&qt=for-search&wt=xml
>
> and define something like "brand_name:brand" in my Solr config to change the field before or during the query parsing. Is there a way to do that?
>
> Ideally I would not want to do a copyField, since it would grow my index and would require re-indexing.
>
> Thank you
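To make the aliasing concrete for this thread, a sketch of the full request (the host and core name are placeholders, and it assumes defType=edismax is not already set by the for-search handler):

    http://localhost:8983/solr/collection1/select?rows=1&start=0&sort=price+asc&qt=for-search&wt=xml&defType=edismax&q=brand_name:sony&f.brand_name.qf=brand

The defType and f.brand_name.qf parameters could equally be placed in the for-search request handler's defaults in solrconfig.xml, so clients never see the mapping.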
Re: Error when using URLDataSource to index RSS items
It sounds like maybe when you run this from code you are getting an error page instead of the RSS feed, and that error page is malformed HTML. Do you have a proxy where you run the code? If so, your browser may be using the proxy while your DIH code does not.

You could try running something like WireShark, Fiddler, or a similar tool to inspect the request/response you are actually getting.

Regards,
   Alex.
Personal website: http://www.outerthoughts.com/
Current project: http://www.solr-start.com/ - Accelerating your Solr proficiency

On Sat, Jun 7, 2014 at 10:52 AM, ienjreny wrote:
> Hello,
>
> I am using the following script to index RSS items:
>
> pk="link"
> url="http://www.alarabiya.net/.mrss/ar.xml"
> processor="XPathEntityProcessor"
> forEach="/rss/channel/item"
>
> xpath="/rss/channel/item/title"
>
> But I am facing the following error:
>
> Caused by: com.ctc.wstx.exc.WstxParsingException: Unexpected close tag ; expected .
>
> Can anybody help?
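The XML markup of the quoted data-config was stripped by the list archive, so only the attributes survived. For readers, here is a hedged reconstruction of what such a URLDataSource + XPathEntityProcessor config usually looks like (the entity name, the column names, and the second field are assumptions; only the attribute values above are from the original message):

    <!-- data-config.xml: sketch of an RSS import with DIH -->
    <dataConfig>
      <dataSource type="URLDataSource" />
      <document>
        <entity name="rss"
                pk="link"
                url="http://www.alarabiya.net/.mrss/ar.xml"
                processor="XPathEntityProcessor"
                forEach="/rss/channel/item">
          <field column="title" xpath="/rss/channel/item/title" />
          <field column="link"  xpath="/rss/channel/item/link" />
        </entity>
      </document>
    </dataConfig>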
Code that handles merging results from a distributed query
Hi,

In the Solr in Action book, I read how distributed queries work. It looks like the node that receives the request executes a search, sends queries to the other shards in parallel, and then finally merges the results.

I've been trying to find where that piece of code lives.

1) Is the distributed functionality handled in the Solr web layer or at the core level?

2) It'd be great if you could also point me to the exact class(es) that take care of making distributed queries and merging the results.

Thanks.
Re: Code that handles merging results from a distributed query
The "mergeIds" method of the QueryComponent does the actual merging of the docs from the shards. Joel Bernstein Search Engineer at Heliosearch On Sun, Jun 8, 2014 at 1:31 AM, Phanindra R wrote: > Hi, > > In Solr in Action book, I read how the distributed queries work. Looks > like the node that receives the request executes a search, sends queries to > other shards in parallel, and then finally merges the results. > > I've been trying to find where that piece of code exists. > > 1) Does the distributed functionality handled in the Solr web-layer or at > the core-level? > > 2) It'd be great if you could also provide me the exact class(es) that take > care of making distributed queries and merging the results. > > Thanks. >