Re: HTTP ERROR: 500 - java.lang.ArrayIndexOutOfBoundsException
Hi Lance,

Thanks for the reply! I checked the settings and I don't think the field has a multiValued setting. Here is the current field configuration: [field definition stripped by the mailing list]

> Lance Norskog wrote:
>
> This can happen when there are multiple values in a field. Is 'first'
> a multi-valued field?
>
> Sorting only works on single-valued fields. After all, if there are
> multiple values, it can only sort on one value and there is no way to
> decide which one. So, make sure that the field has multiValued='false'
> in the field declaration. If this is the problem, you will have to fix
> your data and re-index.
>
> Is the field an analyzed text field? Then sorting definitely will not work.
>
> On Fri, Jul 16, 2010 at 6:54 PM, Girish Pandit wrote:
>
> > Hi,
> >
> > As soon as I add the "sort=first+desc" parameter to the select clause, it
> > throws an ArrayIndexOutOfBoundsException. Please suggest if I am missing
> > anything.
> >
> > http://localhost:8983/solr/select?q=girish&start=0&indent=on&wt=json&sort=first+desc
> >
> > I have close to 1 million records indexed.
> >
> > Thanks
> > Girish

--
Girish Pandit
610-517-5888
http://www.jiyasoft.com
http://www.photographypleasure.com/
http://www.photographypleasure.com/girish/
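For readers hitting the same error: Lance's advice translates into a schema.xml declaration along these lines. This is a sketch only -- the field name "first" is taken from the query above, and the "string" type is an assumption (sorting needs a single-valued, non-tokenized field):

```
<!-- schema.xml: a sortable field must be single-valued and not analyzed -->
<field name="first" type="string" indexed="true" stored="true" multiValued="false"/>
```

If the existing data was indexed with multiple values in that field, the declaration change alone is not enough; the data must be re-indexed afterwards.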
DIH - Insert another record After first load
Hi,

I loaded the data with DIH, and now that the initial load is done I want to add records dynamically as and when they are received.

Use case:
1. I did an initial load of 7MM records and everything is working fine.
2. A new record is received, and I want to add this new record to the indexed data.

Here is the difference in the processing and the logic:
* The initial data load is done from an Oracle materialized view.
* The new record is added into the tables from which the view is created, and is not yet available in the view.
* Now I want to add this new record to the index. I have a Java bean loaded with the data, including the index column.
* I looked at the index files and they are all encoded.

3. How do I load the Java bean above into the index? An example would really help.

Thanks
Girish
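One way to add a single record outside of DIH is to post an XML update message to the /update handler. A sketch, with hypothetical field names (your schema's fields will differ):

```
<!-- POST this body to http://localhost:8983/solr/update?commit=true -->
<add>
  <doc>
    <field name="id">12345</field>
    <field name="first">Girish</field>
    <field name="last">Pandit</field>
  </doc>
</add>
```

With SolrJ, the equivalent for a Java bean is server.addBean(yourBean) followed by server.commit(), provided the bean's fields or getters carry the @Field annotation mapping them to schema fields.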
Solr integration with Oracle Coherence caching
Is it possible? If so, how? Any steps would be appreciated. By the way, I have Java versions of both available for integration; I just need to plug them in!
Bad type on operand stack: SolrInputDocument not assignable to SolrDocumentBase
I have been facing the issue below since yesterday. I get this error when starting a Spring Boot application using version 2.1.1 release and apache solr-common 1.3.0. If anyone else has faced this issue, please help me out. Thanks in advance.

Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'solrTemplate' defined in class path resource [com/myapp/config/SolrConfig.class]: Invocation of init method failed; nested exception is java.lang.VerifyError: Bad type on operand stack
Exception Details:
  Location:
    org/springframework/data/solr/core/convert/MappingSolrConverter.lambda$write$1(Lorg/springframework/data/mapping/PersistentPropertyAccessor;Lorg/apache/solr/common/SolrDocumentBase;Lorg/springframework/data/solr/core/mapping/SolrPersistentProperty;)V @215: invokevirtual
  Reason:
    Type 'org/apache/solr/common/SolrInputDocument' (current frame, stack[2]) is not assignable to 'org/apache/solr/common/SolrDocumentBase'
  Current Frame:
    bci: @215
    flags: { }
    locals: { 'org/springframework/data/solr/core/convert/MappingSolrConverter', 'org/springframework/data/mapping/PersistentPropertyAccessor', 'org/apache/solr/common/SolrDocumentBase', 'org/springframework/data/solr/core/mapping/SolrPersistentProperty', 'java/lang/Object', 'java/util/List', 'java/util/Iterator', 'java/lang/Object', 'org/apache/solr/common/SolrInputDocument' }
    stack: { 'org/springframework/data/solr/core/convert/MappingSolrConverter', 'java/lang/Object', 'org/apache/solr/common/SolrInputDocument', 'org/springframework/data/solr/core/mapping/SolrPersistentEntity' }
  [bytecode and stackmap table omitted]
  at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1745)
  at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:576)
  at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:498)
  at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320)
  at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222)
  at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318)
  at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
  at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:367)
  ... 58 more
Caused by: java.lang.VerifyError: Bad type on operand stack
  [same VerifyError details repeated; truncated in the original message]
Re: Bad type on operand stack: SolrInputDocument not assignable to SolrDocumentBase
Thanks Shawn, it worked like a charm after removing the solr-common dependency. One of the sample tutorials I referred to when integrating with the application had included that jar.

--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
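For anyone hitting the same VerifyError: the ancient solr-common 1.3.0 artifact bundles old copies of classes such as SolrInputDocument that clash with the solr-solrj version Spring Data Solr expects, so the fix is to remove it from the classpath. A hedged sketch for a Maven pom -- the parent coordinates here are hypothetical, and whether the jar arrives directly or transitively depends on your own dependency tree:

```
<!-- If solr-common 1.3.0 is declared directly, simply delete its
     <dependency> block. If it arrives transitively, exclude it: -->
<dependency>
  <groupId>some.group</groupId>          <!-- hypothetical parent dependency -->
  <artifactId>some-artifact</artifactId>
  <exclusions>
    <exclusion>
      <groupId>org.apache.solr</groupId>
      <artifactId>solr-common</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Running `mvn dependency:tree` is a quick way to find which dependency is pulling the stale jar in.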
Solr 5.5.0 : SolrException: fieldType 'booleans' not found in the schema
Hi,

I am new to Solr; I started using it only today. When I tried to create a DIH configuration, I got the error below.

SolrException: fieldType 'booleans' not found in the schema

What does this mean, and how do I resolve it?

Regards,
GNT
Re: Solr 5.5.0 : SolrException: fieldType 'booleans' not found in the schema
Hi Binoy,

I copied the entire schema.xml file from the working example provided by Solr itself, and I am able to run the Solr-provided DIH example successfully. How could this be a problem?

On Fri, Apr 1, 2016 at 12:39 AM, Binoy Dalal wrote:
> Somewhere in your schema you've defined a field with type "booleans".
> You should check if you've made a typo somewhere by adding that extra s
> after boolean.
> Else, if it is a separate field that you're looking to add, define a new
> fieldType called booleans.
>
> All the info to help you with this can be found here:
> https://cwiki.apache.org/confluence/display/solr/Documents,+Fields,+and+Schema+Design
>
> I highly recommend that you go through the documentation before starting.
>
> On Fri, 1 Apr 2016, 00:34 Girish Tavag, wrote:
> > Hi,
> >
> > I am new to solr, I started using this only from today, when I wanted to
> > create dih, i'm getting the below error.
> >
> > SolrException: fieldType 'booleans' not found in the schema
> >
> > What does this mean? and How to resolve this.
> >
> > Regards,
> > GNT
>
> --
> Regards,
> Binoy Dalal
Re: Solr 5.5.0 : SolrException: fieldType 'booleans' not found in the schema
Hi Jack,

I copied schema.xml from solr-5.5.0\example\example-DIH\solr\db\conf\ to \solr-5.5.0\server\solr\myDatabase\conf\. I've attached the file too.

@Shawn
The file does not have any field defined as
<field name="somefield" type="booleans" indexed="true" stored="true"/>

On Fri, Apr 1, 2016 at 9:13 AM, Jack Krupansky wrote:
> Exactly which file did you copy? Please give the specific directory.
>
> -- Jack Krupansky
>
> [earlier messages in the thread trimmed]
Re: Solr 5.5.0 : SolrException: fieldType 'booleans' not found in the schema
Here is the error message:

myDatabase: org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: fieldType 'booleans' not found in the schema

On Fri, Apr 1, 2016 at 11:41 PM, Girish Tavag wrote:
> Hi Jack,
>
> I copied schema.xml from solr-5.5.0\example\example-DIH\solr\db\conf\
> to \solr-5.5.0\server\solr\myDatabase\conf\
> I've attached the file too.
>
> @Shawn
> The file does not have any field defined as
> <field name="somefield" type="booleans" indexed="true" stored="true"/>
>
> [earlier messages in the thread trimmed]
Re: Solr 5.5.0 : SolrException: fieldType 'booleans' not found in the schema
2016-04-01 18:24:17.494 INFO (qtp7980742-16) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/system params={wt=json&_=1459535057440} status=0 QTime=41

Regards,
GNT

On Sat, Apr 2, 2016 at 12:01 AM, Shawn Heisey wrote:
> On 4/1/2016 12:11 PM, Girish Tavag wrote:
> > I copied schema.xml from solr-5.5.0\example\example-DIH\solr\db\conf\
> > to \solr-5.5.0\server\solr\myDatabase\conf\
> > I've attached the file too.
> >
> > @Shawn
> > The file does not have any field defined as
> > <field name="somefield" type="booleans" indexed="true" stored="true"/>
>
> You are correct, "booleans" does not show up in that file.
>
> Either the schema you sent is not the schema that's actually being used,
> or the error message that you sent is not a precise copy/paste of what
> you are seeing. You may need to let us see the entire stacktrace for
> any error messages in your logfile, as well as several lines before and
> after each error. If you can share the entire logfile, that would be
> helpful.
>
> FYI -- attaching files often does not work. You got lucky -- typically
> such attachments do not make it to the list.
>
> Thanks,
> Shawn
Re: Solr 5.5.0 : SolrException: fieldType 'booleans' not found in the schema
Hi Shawn,

Finally I was able to figure out the problem. The issue was in solrconfig.xml, where 'booleans' was defined. I replaced 'booleans' with 'boolean' (and other similar field types) and it worked correctly :)

Regards,
GNT
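For context, the data_driven_schema_configs solrconfig.xml maps detected Java value types to schema fieldTypes in its add-unknown-fields update processor chain, and each mapping must point at a fieldType that actually exists in the schema in use. A hedged sketch of the relevant section -- the chain name and exact contents vary by Solr version, and the type names shown are illustrative:

```
<updateRequestProcessorChain name="add-unknown-fields-to-the-schema">
  <processor class="solr.AddSchemaFieldsUpdateProcessorFactory">
    <lst name="typeMapping">
      <str name="valueClass">java.lang.Boolean</str>
      <!-- was "booleans", which the example-DIH schema does not define -->
      <str name="fieldType">boolean</str>
    </lst>
    <!-- further typeMapping entries (longs, doubles, dates, ...) need the
         same treatment if the schema only defines the singular types -->
  </processor>
</updateRequestProcessorChain>
```

This is why Shawn's point about matched pairs matters: the plural type names ("booleans", "tdates", ...) exist in the data_driven schema but not in every example schema.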
Re: Solr 5.5.0 : SolrException: fieldType 'booleans' not found in the schema
Hi Shawn and Jack,

Yes, that is true. I was following the tutorial and practicing, and ended up with this. By the way, one more thing I would like to know: is it possible to schedule the full data import, or the delta import? I.e., the data should be updated at regular intervals.

Regards,
GNT

On Sat, Apr 2, 2016 at 2:21 AM, Shawn Heisey wrote:
> On 4/1/2016 1:24 PM, Girish Tavag wrote:
> > Finally i'm able to figure out the problem. The issue was in
> > solrconfig.xml where the booleans was defined. I replaced booleans with
> > boolean and other similar fields and it worked correctly :)
>
> This has happened because you mixed the solrconfig.xml file from
> data_driven_schema_configs with the schema from a completely different
> example.
>
> The config and schema in each example are intended to be used as a
> matched pair. If you mix pieces from examples without checking to make
> sure they are compatible and fixing anything you find, problems *will*
> happen.
>
> Thanks,
> Shawn
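On the scheduling question: Solr has no built-in DIH scheduler, so the usual approach is to invoke the dataimport handler from an external scheduler such as cron. A sketch, assuming the core is named myDatabase and the handler is registered at /dataimport:

```
# crontab entry: run a delta-import every 15 minutes
# (clean=false keeps existing documents; commit=true makes changes visible)
*/15 * * * * curl -s "http://localhost:8983/solr/myDatabase/dataimport?command=delta-import&clean=false&commit=true" > /dev/null
```

A full-import on a longer interval (e.g. nightly) uses command=full-import instead; be careful with clean=true there, since it deletes the existing index contents before reloading.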
How to unload solr collections?
We are using Solr 5.2.1 with the SolrJ API. To improve/minimize Solr heap utilization, we would like to explicitly unload Solr collections after completing the search queries. Is there an API to unload Solr collections in SolrCloud?

The real issue we are trying to solve is Solr running into out-of-memory errors when searching a large amount of data with a given heap setting. Keeping a fixed heap size, we plan to load/unload collections so that Solr does not run out of memory.

Any suggestions/help on this is highly appreciated. Thanks
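For reference, there is no collection-level "unload" in the Collections API; the closest existing operation is the CoreAdmin UNLOAD action, which works per core. Note the caveat: in SolrCloud, unloading a core removes that replica from the cluster state, it does not merely release heap, so this may not fit the use case above. A sketch with an assumed replica core name:

```
# CoreAdmin UNLOAD (per node); in SolrCloud this detaches the replica
# from the collection rather than just freeing memory:
curl "http://localhost:8983/solr/admin/cores?action=UNLOAD&core=mycollection_shard1_replica1"
```

For the underlying OOM problem, tuning cache sizes in solrconfig.xml and limiting rows/facet sizes per request is usually a more direct lever than loading and unloading collections.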
HTTP ERROR: 500 - java.lang.ArrayIndexOutOfBoundsException
Hi,

As soon as I add the "sort=first+desc" parameter to the select clause, it throws an ArrayIndexOutOfBoundsException. Please suggest if I am missing anything.

http://localhost:8983/solr/select?q=girish&start=0&indent=on&wt=json&sort=first+desc

I have close to 1 million records indexed.

Thanks
Girish
Re: how to change the default path of Solr Tomcat
It seems like you are using the default server (Jetty on port 8983), and it looks like you are running it with the command "java -jar start.jar". If so, then under the same directory there is another directory called "webapps": go in there, rename "solr.war" to "search.war", bounce the server, and you should be good to go!

Eben wrote:
> Firstly, I really appreciate your response to my question, Ken.
> I'm using Tomcat on Linux Debian. I can't find solr.xml in
> \program files\apache...\Tomcat\conf\catalina\localhost; there are only
> 2 files in the localhost folder: host-manager.xml and manager.xml.
> Any solutions?
>
> On 7/22/2010 10:41 AM, kenf_nc wrote:
> > Your environment may be different, but this is how I did it. (Apache
> > Tomcat on Windows 2008)
> > go to \program files\apache...\Tomcat\conf\catalina\localhost
> > rename solr.xml to search.xml
> > recycle Tomcat service
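For the Tomcat-on-Debian case above: the per-app context file Ken refers to may simply not exist yet, in which case it can be created by hand. A sketch -- all paths here are assumptions for a Debian-packaged Tomcat, and the docBase/solr home locations depend on where the war and index actually live:

```
<!-- e.g. /etc/tomcat6/Catalina/localhost/search.xml
     (the filename before .xml becomes the URL path, /search) -->
<Context docBase="/opt/solr/solr.war" debug="0" crossContext="true">
  <Environment name="solr/home" type="java.lang.String"
               value="/opt/solr/home" override="true"/>
</Context>
```

Renaming this file (e.g. solr.xml to search.xml) and restarting Tomcat changes the path the webapp is served under, which is what the original question asked for.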
how to Protect data
Hi,

I was asked about protecting data. The search index data is stored in index files, and when you open those index files you can clearly see the data: some text, e.g. name, address, postal code, etc. Is there any way I can hide the data, i.e. some kind of data encoding so that no raw text is visible at all?

-Girish
"SELECT" on a Rich Document to download/display content
Hi,

I indexed a Word document. When I do a select, it shows the file name. How can I display the content? Also, if I add "hl=true", is this going to show me the line with the highlight from the Word document?

I am using the URL below to do the select:

http://localhost:8983/solr/select/?q=Management

It shows a response like the one below (reconstructed from the garbled original):

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">1</int>
    <lst name="params">
      <str name="q">Management</str>
    </lst>
  </lst>
  <result name="response" numFound="1" start="0">
    <doc>
      <str name="id">Mgmt.doc</str>
    </doc>
  </result>
</response>

Indexing was done with the Java code below:

public void solrCellRequestDemo() throws IOException, SolrServerException {
    SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
    ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
    req.addFile(new File("/Users/Girish/Development/Web Server/apache-solr-1.4.1/example/exampledocs/Mgmt.doc"));
    req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
    req.setParam("literal.id", "Mgmt.doc");
    NamedList<Object> result = server.request(req);
    System.out.println("Result: " + result);
}
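To get the extracted body back in search results, the content has to be captured into a stored field at index time and then requested (and optionally highlighted) at query time. A sketch using the extract handler's fmap parameter -- the field name "content" is an assumption and must exist as a stored field in schema.xml:

```
# Index, mapping the extracted body into a stored field named "content":
curl "http://localhost:8983/solr/update/extract?literal.id=Mgmt.doc&fmap.content=content&commit=true" \
     -F "myfile=@Mgmt.doc"

# Query, returning the content field and highlighting matches in it:
curl "http://localhost:8983/solr/select?q=Management&fl=id,content&hl=true&hl.fl=content"
```

With this in place, hl=true returns highlighted snippets from the Word document's text rather than just the file name.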
Re: DIH transformer script size limitations with Jetty?
Have you tried bumping the -Xmx value, e.g. to -Xmx1300m? I had a problem with DIH loading the data, and when I bumped the memory everything worked fine!

harrysmith wrote:
> To follow up on my own question, it appears this is only an issue when
> using the DataImport console debugging tools. It looks like when
> submitting the debugging request, the data-config.xml is sent via a GET
> request, which would fail. However, using the exact same data-config.xml
> via a full-import operation (i.e. not a dry-run debug), the request is
> sent via POST and the import works fine.
how are you using Solr?
I am trying to understand the breadth of Solr's usage! I am from finance, and I am using it for content/material search. Initially we stored this content in a database, but we had performance issues with search, so later on we moved to Solr. How about you? Why did you choose Solr, and in what business area are you using it?
Index time boosts, payloads, and long query strings
Hi,

I'm relatively new to Solr/Lucene, and am using Solr (and not Lucene directly) primarily because I can use it without writing Java code (the rest of my project is coded in Python).

My application has the following requirements:
(a) the ability to search over multiple fields, each with a different weight
(b) if possible, the ability to add extra/diminished weight to particular tokens within a field
(c) my query strings are long (50-100 words)
(d) my index is 500K+ documents

1) The way to get (a) is field boosting (right?). My question is: is all field boosting done at query time, even if I give index-time boosts to fields? Is there a performance advantage in boosting fields at index time vs. using something like fieldname:querystring^boost?

2) From what I've read, it seems that I can do (b) using payloads. However, as this link (http://www.lucidimagination.com/blog/2009/08/05/getting-started-with-payloads/) suggests, I will have to write a payload-aware query parser. I wanted to confirm that this is indeed the case -- or is there an out-of-the-box way to implement payloads? (I am using Solr 1.4.)

3) For my project, the user fills multiple text boxes for each query. I combine these into a single query (with different treatment for the contents of each text box). Consequently, my query looks something like (fieldname1: queryterm1 queryterm2^2.0 queryterm3^3.0 +queryterm4)^1.0. Are there any guidelines for improving the performance of such a system? (Sorry, this bit is vague.)

Any help with this will be great!

Girish Redekar
http://girishredekar.net
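For requirement (a), one common approach in Solr 1.4 is the dismax query parser, which keeps the per-field weights out of the query string entirely and puts them in the qf parameter. A sketch with assumed field names:

```
http://localhost:8983/solr/select?defType=dismax&q=management+reports&qf=title^4.0+body^1.0
```

This is equivalent in spirit to writing title:(...)^4.0 OR body:(...)^1.0 by hand, but scales better to long, user-supplied query strings like those in (c).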
Re: Index time boosts, payloads, and long query strings
Hi Erick -

Maybe I mis-wrote. My question is: would "title:any_query^4.0" be faster/slower than applying an index-time boost to the field title? Basically, if I take *every* user query and search for it in title with a boost (say, 4.0) -- is that different from saying the field title has a boost of 4.0?

Cheers,
Girish Redekar
http://girishredekar.net

On Sun, Nov 22, 2009 at 2:02 AM, Erick Erickson wrote:
> I'll take a whack at index .vs. query boosting. They are expressing very
> different concepts. Let's claim we're interested in boosting the title
> field.
>
> Index time boosting is expressing "this document's title is X more
> important than a normal document title". It doesn't matter *what* the
> title is; any query that matches on anything in this document's title
> will give this document a boost. I might use this to give preferential
> treatment to all encyclopedia entries or something.
>
> Query time boosting, like "title:solr^4.0", expresses "Any document with
> solr in its title is more important than documents without solr in the
> title". This really only makes sense if you have other clauses that
> might cause a document *without* solr in the title to match.
>
> Since they are doing different things, efficiency isn't really relevant.
>
> HTH
> Erick
>
> [original question quoted in the archive; trimmed here]
Re: Index time boosts, payloads, and long query strings
Thanks Erick!

After reading your answer and re-reading the Solr wiki, I realized my folly. I used to think that index-time boosts applied on a per-field basis were equivalent to query-time boosts on that field.

To make sure my new understanding is correct, I'll state it in my own words: index-time boosts determine the boost for a *document* if it is counted as a hit; query-time boosts give you control over boosting the occurrence of a query in a specific field. Please correct me if I'm wrong (again) :-)

Girish Redekar
http://girishredekar.net

On Sun, Nov 22, 2009 at 8:25 PM, Erick Erickson wrote:
> I still think they are apples and oranges. If you boost *all* titles,
> you're effectively boosting none of them. Index time boosting
> expresses "this document's title is more important than other
> document titles." What I think you're after is "titles are more
> important than other parts of the document."
>
> For this latter, you're talking query-time boosting. Boosting only
> really makes sense if there are multiple clauses, something
> like title:important OR body:unimportant. If this is true, speed
> is irrelevant; you need correct behavior.
>
> Not that I think you'd notice either way. Modern computers
> can do a LOT of FLOPS/sec. Here's an experiment: time
> some queries (but beware of timing the very first ones, see
> the wiki) with boosts and without boosts. I doubt you'll see
> enough difference to matter (but please do report back if you
> do, it'll further my education).
>
> But, depending on your index structure, you may get this
> anyway. Generally, matches on shorter fields weigh more
> in the score calculations than on longer fields. If you have
> fields like title and body and you are querying on title:term OR
> body:term, documents with term in the title will tend toward
> higher scores.
>
> But before putting too much effort into this, do you have any
> evidence that the default behavior is unsatisfactory? Because
> unless and until you do, I think this is a distraction ...
>
> Best
> Erick
>
> [earlier messages in the thread trimmed]
Solr CPU usage
Hi,

I'm testing my Solr instance with multiple simultaneous requests. Here's my test: for an index of ~200K docs, I query Solr with 10 simultaneous threads. Can someone help me explain/improve the following observations?

1) Solr doesn't seem to use all the available CPU to improve response times (query times are good, but the time required to return documents isn't so good). My CPU seems to be running at ~30%.
2) As expected, the response time increases as the number of requested results increases. What's surprising (and perplexing) is that Solr seems to use *more* of the CPU when I'm requesting *fewer* docs. Consequently, its performance in returning a larger result set is very bad.
3) To counter 1, is there a way to make two Solr instances search the same index (so that concurrent requests are served faster)?

Any help in this regard would be very useful. Thanks!

Girish Redekar
http://girishredekar.net
Re: Solr CPU usage
Yonik,

I am running both my server and client on Ubuntu machines. The client is on a different box. The server CPU and RAM are both well below 50%.

Girish Redekar
http://girishredekar.net

On Fri, Nov 27, 2009 at 10:07 PM, Yonik Seeley wrote:
> On Fri, Nov 27, 2009 at 9:30 AM, Girish Redekar wrote:
> > [observations quoted from the original message; trimmed here]
>
> This may point to another bottleneck - if the OS is low on free RAM,
> it could be disk IO.
> If this is on Windows, you could have contention reading the index files.
> Otherwise, you may have a bottleneck in network IO. Is the client you
> are testing with on the same box?
> Is this Solr 1.4?
>
> -Yonik
> http://www.lucidimagination.com
Query time boosting with dismax
Hi,

Is it possible to weight specific query terms with the dismax query parser? I.e., is it possible to write queries of the sort field1:(term1)^2.0 + (term2^3.0) with dismax?

Thanks,
Girish Redekar
http://girishredekar.net
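For reference, dismax intentionally does not honor field-qualified syntax in q; per-field weights go in the qf parameter, and additional term-level emphasis is typically expressed with a boost query (bq). A sketch with assumed field and term names:

```
http://localhost:8983/solr/select?defType=dismax
    &q=term1 term2
    &qf=field1^2.0 field2^1.0
    &bq=field1:term2^3.0
```

This does not reproduce arbitrary per-term boosts inside q; for full Lucene syntax with boosts, the standard query parser (or, in later versions, edismax) is the usual route.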
How to configure "setExcludeCipherSuites" for org.eclipse.jetty.util.ssl.SslContextFactory$Client
Hello there:

I am using Solr 8.4.0. I am trying to figure out how I can set setExcludeCipherSuites to filter insecure cipher suites from org.eclipse.jetty.util.ssl.SslContextFactory$Client.

I am able to do this for org.eclipse.jetty.util.ssl.SslContextFactory$Server by adding the setExcludeCipherSuites call in the /etc/jetty-ssl.xml file:

..
..

But I am not sure whether the same can be done for org.eclipse.jetty.util.ssl.SslContextFactory$Client. If I add a similar config for the Client, it does not take effect. Any help is appreciated.

Thanks and Regards
Girish B Chandrasekhar
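For readers, the server-side configuration the poster describes works because jetty-ssl.xml instantiates SslContextFactory$Server via Jetty XML, where extra setter calls can be added. A sketch (the cipher suite names here are illustrative, not a recommended exclusion list):

```
<!-- jetty-ssl.xml: exclude cipher suites on SslContextFactory$Server -->
<Configure id="sslContextFactory" class="org.eclipse.jetty.util.ssl.SslContextFactory$Server">
  <Call name="setExcludeCipherSuites">
    <Arg>
      <Array type="java.lang.String">
        <Item>SSL_RSA_WITH_DES_CBC_SHA</Item>
        <Item>TLS_RSA_WITH_AES_128_CBC_SHA</Item>
      </Array>
    </Arg>
  </Call>
</Configure>
```

The client-side SslContextFactory$Client, by contrast, is constructed in Solr's Java code for internal node-to-node HTTP clients rather than via this XML, which is consistent with the observation that adding a similar block has no effect; restricting ciphers JVM-wide via the JDK's java.security properties (jdk.tls.disabledAlgorithms) is one alternative that does apply to both sides.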