Re: Whys this query not working?
Aaah, thanks, that makes sense. I wasn't aware of that bug. Making the suggested changes manually in the query works perfectly (I just have to work out how to edit the Perl module I'm using to generate that query, so that it emits this slightly different syntax compared to what it outputs normally). Thanks :) Andy -- View this message in context: http://lucene.472066.n3.nabble.com/Whys-this-query-not-working-tp4000598p4000683.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Solritas in production
Apologies for dragging this conversation up; I want to share my experience.

I'm a code hacker. I have written about ten lines of code in my life, but I can figure out how most things work, or how to get them to work. I'm looking at deploying a vertical search engine. The data is a bit messy, but I'm normalising it as best I can, and Solr just seems to handle it so well. Who can argue with results in milliseconds?

Now I need to develop a front end. This is proving THE most difficult part of my Solr experience. My options are:

1. Default Solritas/Velocity.
2. Wrap this into Drupal or a similar CMS with Solr functionality.
3. Code this in PHP.

I don't have the knowledge to code in PHP. I can't use a CMS for a variety of reasons, most of which relate to how my data is handled in the database and how its format doesn't fit a CMS's framework, so I'm left looking at option 1 again.

The knowledge in this forum is without question, but when I see things like "not production ready", that doesn't mean much to me, someone hacking out a website. I'm bootstrapping, so finding someone to code option 3 for me is a bit hard, even with offshore freelance help. If there were a set of steps to get Velocity working with my system, that would give me a solution. Alternatively, if the PHP (or other) client-library authors provided a basic front-end template that guys like me could hack on, that would open up the Solr framework to a wider range of people.

Anyway, after reading this post, and the one on this blog, http://thoughtsasaservice.wordpress.com/2012/05/10/should-you-use-solritas-on-production/, I think I am going to look further down this path. The biggest drawback I can see is that Velocity consumes more memory; in my situation that's actually OK. Using host-side tools like Apache httpd or Varnish, as has been suggested, is possible in my situation, but a bolt-on solution would be ideal. Hopefully my experience helps others.
I wonder how many people like me (WordPress/Joomla template hackers at best) go to the Apache Solr project and turn away because they can't figure out parts of it, or because they can't just install a simple solution. Don't dump Velocity; it ticks so many boxes. Please add to it!
Re: Solritas in production
Just make really, really sure you don't allow queries like:

.../solr/update?stream.body=*:*

Best,
Erick

On Sun, Aug 12, 2012 at 8:42 AM, george123 wrote:
> Apologies to drag this conversation up.
> [...]
> Dont dump velocity, it ticks so many boxes. Please add to it!
Re: Running out of memory
> It would be vastly preferable if Solr could just exit when it gets a memory
> error, because we have it running under daemontools, and that would cause
> an automatic restart.

The JVM can do that for you:

-XX:OnOutOfMemoryError="<cmd args>;<cmd args>"

This runs user-defined commands when an OutOfMemoryError is first thrown.

> Does Solr require the entire index to fit in memory at all times?

No. But it's hard to say much about your particular problem without additional information. How often do you commit? Do you use faceting? Do you sort on Solr fields, and if so, which fields? You should also check your caches.
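For the daemontools setup described above, one way to wire this up is to have the JVM kill its own process on OOM so the supervisor restarts it. This is only a sketch: the heap size, start command, and paths are illustrative, not taken from the original post.

```shell
#!/bin/sh
# Hypothetical daemontools "run" script for Solr.
# %p expands to the JVM's own PID; killing it makes daemontools restart the service.
exec java -Xmx2g \
  -XX:OnOutOfMemoryError="kill -9 %p" \
  -jar start.jar
```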
Solr becomes unresponsive while merging
Hi,

We have been using Solr for the last three years without any problems. We recently migrated our servers to a new datacenter, and our servers were upgraded as well. We also used this opportunity to upgrade the software in our infrastructure, and as a result Solr was upgraded to version 3.6. Since then, we have been experiencing issues while merging.

We have about 130,000 documents, and the average document is about 80 pages. We run delta indexing every hour, and about 20,000-30,000 documents get re-indexed each time. The reason is that we have about 80k unique visitors, and we need to re-index deleted, added, and viewed documents (for the view count). We are considering creating a separate index for viewed documents and updating it daily, to reduce the number of documents in the hourly operation. However, that will not solve the problem, only make things better. (We have workarounds right now to prevent the issue, but no real solution.)

We have no problems while indexing. However, when merging starts, we simply lose the connection to the server, and Solr becomes unresponsive for 7-12 minutes. The average load on the server also increases. So far:

- We tried changing the compound file format.
- We tried TieredMergePolicy, but we haven't changed maxMergeAtOnce and segmentsPerTier. If they would help, what would be ideal values?
- We switched back to LogByteSizeMergePolicy, since it was the default in our old configuration, but it didn't help either.

What do you suggest? How can we prevent the unresponsiveness during merging? The reason I say merging is the problem is that we experience the issue right after indexing, and it lasts only 7-12 minutes; it could be something internal I don't know about, though. The new server is quite powerful, far more powerful than the old one, so this was a bit of a surprise, since we never had any issues with the old setup.

Thanks,
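For reference, the two TieredMergePolicy knobs mentioned above are set in solrconfig.xml. The values below are illustrative only (both settings default to 10); raising segmentsPerTier tends to trigger merges less often, at the cost of more segments to search:

```xml
<!-- Sketch: inside the <indexDefaults> (or <mainIndex>) section of solrconfig.xml, Solr 3.6 -->
<mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
  <!-- maximum number of segments merged in one merge operation (default 10) -->
  <int name="maxMergeAtOnce">10</int>
  <!-- segments allowed per tier before a merge is triggered (default 10) -->
  <double name="segmentsPerTier">20.0</double>
</mergePolicy>
```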
EmbeddedSolrServer and missing/unfound core
Hi all,

I have been playing with Solr 4.0 and trying to run some tutorials. Starting from the EmbeddedSolrServer example here, http://wiki.apache.org/solr/Solrj, I was trying to get something similar working. Below is my code:

package solrj.embedded;

import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
import org.apache.solr.client.solrj.response.UpdateResponse;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.core.CoreContainer;

public class Embedding {
    public static void main(String[] args) throws Exception {
        System.setProperty("solr.solr.home",
                "/Users/deniz/ServiceTeam/UserSearch/solr-4.0.0-ALPHA/example/solr");
        CoreContainer.Initializer initializer = new CoreContainer.Initializer();
        CoreContainer coreContainer = initializer.initialize();
        EmbeddedSolrServer server = new EmbeddedSolrServer(coreContainer, "");

        SolrInputDocument doc = new SolrInputDocument();
        String docID = "111221";
        doc.addField("id", docID, 1.0f);
        doc.addField("name", "my name1", 1.0f);

        System.out.println(server.getCoreContainer().getDefaultCoreName());
        UpdateResponse upres = server.add(doc);
        System.out.println(upres.getStatus());
    }
}

And this is the stack trace:

Exception in thread "main" org.apache.solr.common.SolrException: No such core:
    at org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:118)
    at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:117)
    at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:122)
    at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:107)
    at solrj.embedded.Embedding.main(Embedding.java:22)

I don't know which part I am ruining... does anyone have any ideas?

P.S. I am running the default configs, locally, for the Solr 4.0 alpha.

- Zeki ama calismiyor... Calissa yapar...
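The "No such core:" message (with nothing after the colon) points at the empty core name passed to the EmbeddedSolrServer constructor. A sketch of the likely fix is to pass the name of an actual core instead; "collection1" below is illustrative, assuming the default example setup, and is not taken from the original post:

```java
// Sketch only: instead of new EmbeddedSolrServer(coreContainer, ""),
// hand the constructor the name of a core that actually exists.
EmbeddedSolrServer server =
        new EmbeddedSolrServer(coreContainer, coreContainer.getDefaultCoreName());

// or, if you know the core's name explicitly (hypothetical name):
// EmbeddedSolrServer server = new EmbeddedSolrServer(coreContainer, "collection1");
```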
RE: multi-searching problem
I added this in schema.xml:

...
&defType = edismax
&qf = article_id article_nom article_id
...

But I get this error:

###
org.xml.sax.SAXParseException: The reference to entity "defType" must end with the ';' delimiter.
    at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source)
    at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(Unknown Source)
    at
###

Is this a syntax problem? Am I writing defType wrong? Thank you.

-----Original Message-----
From: Ahmet Arslan [mailto:iori...@yahoo.com]
Sent: Friday, August 10, 2012 16:22
To: solr-user@lucene.apache.org
Subject: RE: multi-searching problem

> It seems more complicated than I need. I just want, if the user specifies
> nothing, to search all the fields I declared in my schema.xml, like:
> article_nom
> but all fields, not only article_nom. There should be some simple way to
> do that without using all of this..? Or am I wrong?

It is not that complicated. Just list your fields in the qf parameter, that's all:

defType=edismax&qf=field1 field2 field3
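For anyone hitting the same SAXParseException: schema.xml is XML, so a bare "&" is parsed as the start of an entity reference, which is why the parser complains about "defType". These are query parameters, not schema settings; they belong either in the request URL or as request-handler defaults in solrconfig.xml. A sketch of the latter (handler name and field list are illustrative):

```xml
<!-- solrconfig.xml: make edismax over your fields the default for /select -->
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">edismax</str>
    <str name="qf">article_id article_nom</str>
  </lst>
</requestHandler>
```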
RE: multi-searching problem
Well, I don't know much about dismax, but for searching multiple fields by default you can use copyField, which is simpler than dismax (though performance could be affected; I am not so sure). Basically, you copy the other fields into one field and make that your default search field, and you are done. I have done a similar thing to provide a "universal search", where all of the fields on a document are checked by default.

- Zeki ama calismiyor... Calissa yapar...
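A minimal sketch of the copyField approach described above; the field and type names here are illustrative, not taken from the original posts:

```xml
<!-- schema.xml: copy every searchable field into one catch-all field -->
<field name="text_all" type="text_general" indexed="true" stored="false"
       multiValued="true"/>

<copyField source="article_nom" dest="text_all"/>
<copyField source="article_id"  dest="text_all"/>

<!-- make the catch-all field the default search field -->
<defaultSearchField>text_all</defaultSearchField>
```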