Need help with Nested docs situation
Hello,

I have a situation I'm a bit stuck on. Take the following data structure:

*Deal* All Coca Cola 20% off
*Products* Coca Cola light, Coca Cola Zero 1L, Coca Cola Zero 20CL, Coca Cola 1L

When somebody searches for a "Cola" discount, I want the deal returned together with its related products.

Solution #1: I could index it with nested docs (Solr 4.9). The problem is that when a product changes (say "Zero" gets a new name, "Extra Light"), I have to re-index every deal containing that product.

Solution #2: I could make two collections, one with deals and one with products, where a product gets a parentid (dealid). Then I have to run two queries to collect the information. With a result page of 10 deals on which I want to preview the first 2 products of each deal, that means a lot of queries, but it avoids the update problem of solution #1.

Does anyone have a good solution for this? Thanks, any help is appreciated.

Roy

--
View this message in context: http://lucene.472066.n3.nabble.com/Need-help-with-Nested-docs-situation-tp4203190.html
Sent from the Solr - User mailing list archive at Nabble.com.
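For solution #1, Solr's block-join support can at least make the query side straightforward. A sketch of what indexing nested docs could look like in Solr 4.9; the field names (type, title, name) are illustrative, not from the original post:

```xml
<add>
  <doc>
    <field name="id">deal1</field>
    <field name="type">deal</field>
    <field name="title">All Coca Cola 20% off</field>
    <!-- child documents are indexed in the same block as the parent -->
    <doc>
      <field name="id">product1</field>
      <field name="type">product</field>
      <field name="name">Coca Cola light</field>
    </doc>
    <doc>
      <field name="id">product2</field>
      <field name="type">product</field>
      <field name="name">Coca Cola Zero 1L</field>
    </doc>
  </doc>
</add>
```

A query like q={!parent which="type:deal"}name:cola then returns matching deals, and in 4.9 the [child] doc transformer (fl=*,[child parentFilter=type:deal limit=2]) can attach the first couple of products to each hit. The update cost described above remains, though: a block must always be reindexed as a whole.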
Add core in solr.xml | Problem with starting SOLRcloud
Hello,

Our platform has 4 Solr instances and 3 ZooKeepers (Solr 4.1.0). I want to add a new core to my SolrCloud setup. I added the new core to the solr.xml file and put the config files in the directory collection2. I uploaded the new config to ZooKeeper and started Solr. Solr did not start up and gives the following error:

Oct 16, 2014 4:57:06 PM org.apache.solr.cloud.ZkController publish
INFO: publishing core=collection1 state=recovering
Oct 16, 2014 4:57:06 PM org.apache.solr.cloud.ZkController publish
INFO: numShards not found on descriptor - reading it from system property
Oct 16, 2014 4:57:06 PM org.apache.solr.client.solrj.impl.HttpClientUtil createClient
INFO: Creating new http client, config:maxConnections=128&maxConnectionsPerHost=32&followRedirects=false
Oct 16, 2014 4:59:06 PM org.apache.solr.common.SolrException log
SEVERE: Error while trying to recover. core=collection1:org.apache.solr.common.SolrException: I was asked to wait on state recovering for 31.114.2.237:8910_solr but I still do not see the requested state. I see state: active live:true
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:404)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:181)
        at org.apache.solr.cloud.RecoveryStrategy.sendPrepRecoveryCmd(RecoveryStrategy.java:202)
        at org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:346)
        at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:223)
Oct 16, 2014 4:59:06 PM org.apache.solr.cloud.RecoveryStrategy doRecovery
SEVERE: Recovery failed - trying again... (0) core=collection1
Oct 16, 2014 4:59:06 PM org.apache.solr.cloud.RecoveryStrategy doRecovery
INFO: Wait 2.0 seconds before trying to recover again (1)
Oct 16, 2014 4:59:08 PM org.apache.solr.cloud.ZkController publish
INFO: publishing core=collection1 state=recovering
Oct 16, 2014 4:59:08 PM org.apache.solr.cloud.ZkController publish
INFO: numShards not found on descriptor - reading it from system property
Oct 16, 2014 4:59:08 PM org.apache.solr.client.solrj.impl.HttpClientUtil createClient
INFO: Creating new http client, config:maxConnections=128&maxConnectionsPerHost=32&followRedirects=false

What's wrong with my setup? Any help would be appreciated!

Roy
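For reference, a legacy (Solr 4.1 style) solr.xml with a second core could look roughly like this; the instanceDir and attribute values are assumptions, not the original file:

```xml
<solr persistent="true">
  <cores adminPath="/admin/cores" defaultCoreName="collection1"
         host="${host:}" hostPort="${jetty.port:}" zkClientTimeout="15000">
    <core name="collection1" instanceDir="collection1"/>
    <!-- the collection attribute maps the new core to its own collection -->
    <core name="collection2" instanceDir="collection2" collection="collection2"/>
  </cores>
</solr>
```

Without the collection attribute, a new core joins the default collection, which is one common cause of unexpected recovery behaviour after adding a core.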
SOLR4 Spatial sorting and query string
Hello,

I use the following distance sorting in Solr 4 (solr.SpatialRecursivePrefixTreeFieldType):

fl=*,score&sort=score asc&q={!geofilt score=distance filter=false sfield=coords pt=54.729696,-98.525391 d=10}

(from the tutorial on http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4)

Now I want to query on a search string and still sort on distance. How can I combine this in the Solr request above? When I add something to the "q=" it doesn't work. I tried a _query_ subquery and other things, but I can't get it working.

I appreciate any help. Thanks
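One way to keep the free-text query in q and still sort by distance is to move the scoring geofilt into its own parameter and sort on it with the query() function. A sketch of the request, built in Python; "pizza" and the sortsq parameter name are made up for illustration:

```python
from urllib.parse import urlencode

params = {
    "q": "pizza",  # the free-text search
    "fq": "{!geofilt sfield=coords pt=54.729696,-98.525391 d=10}",  # keep the radius filter
    "sort": "query($sortsq) asc",  # sort by the geofilt subquery's score (= distance)
    "sortsq": "{!geofilt score=distance filter=false sfield=coords pt=54.729696,-98.525391 d=10}",
    "fl": "*,score",
}
query_string = urlencode(params)
```

The sortsq parameter is only referenced from the sort, so the main query scoring stays purely text-based.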
Re: SOLR4 Spatial sorting and query string
Great, it works very well. In Solr 4.5 I will use geodist() again!

Thanks David
Re: SOLR4 Spatial sorting and query string
Hello,

I have a question about performance with a lot of points and spatial search. First I will explain my situation: we have product data and want to store the geo location of every store that sells a product. I use a multiValued coordinates field with the geo data (one "lat,long" pair per store).

When I search for a product term I only want the products that are near the given location, so I use the following query:

fq=_query_:"{!geofilt sfield=store_coordinates pt=locationlat,locationlong d=25}"

It works great, but my question is: will it stay quick and smooth with 1000+ stores in my store_coordinates field?

Any help is appreciated. Thanks,
Roy
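For reference, the multiValued RPT setup described here looks roughly like the wiki example; the type name and precision values below are illustrative, not from the original post:

```xml
<fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
           geo="true" distErrPct="0.025" maxDistErr="0.000009" units="degrees"/>

<field name="store_coordinates" type="location_rpt"
       indexed="true" stored="true" multiValued="true"/>
```

RPT indexes each point into a prefix-tree grid, so a geofilt is a grid lookup rather than a per-document distance calculation, which is what makes many points per document feasible.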
Re: SOLR4 Spatial sorting and query string
Hello David,

For the first months there will not be that many points per doc; I will keep the topic in mind!

The next step is that I want to know which location matched my query. Example: Product A is available in 3 stores, so the doc looks like this:

Product A
store1_geo store2_geo store3_geo
London#store1_geo Amsterdam#store2_geo New York#store3_geo

I query the index with my location set to Berlin and a radius of 250km. I know this result comes back in first place because it is close to Amsterdam (store2_geo). But in general, how can I know which point matched my query as the closest one? Is it possible to get this back? I could do it in my application, but with 200 stores I don't think that's the best solution.

Thanks,
Roy
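Doing the closest-point computation client-side, as mentioned above, is at least cheap to sketch. A minimal example following the store naming from the post; the coordinates are my own approximations:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance between two points, in kilometres
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

stores = {
    "store1_geo": (51.5074, -0.1278),   # London (approx.)
    "store2_geo": (52.3676, 4.9041),    # Amsterdam (approx.)
    "store3_geo": (40.7128, -74.0060),  # New York (approx.)
}
berlin = (52.5200, 13.4050)
# the store that made the doc match a Berlin-centred radius query
closest = min(stores, key=lambda s: haversine_km(*berlin, *stores[s]))
```

For 200 points per doc this is a few microseconds of client work per result, so it may be the pragmatic option after all.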
Realtime updates solrcloud
Hello Guys,

I want to use the realtime update mechanism of SolrCloud. My setup is as follows: 3 Solr engines and 3 ZooKeeper instances (ensemble). The setup works great: recovery, leader election, etc. The problem is that the realtime updates become slow after the servers get some traffic. Let me explain.

I test the realtime update with the following command:

curl http://SOLRURL:SOLRPORT/solr/update -H "Content-Type: text/xml" --data-binary '3504811http://www.google.nl'

I see this in the log of the Solr server:

Mar 29, 2013 12:38:51 PM org.apache.solr.update.processor.LogUpdateProcessor finish
INFO: [collection1] webapp=/solr path=/update params={} {add=[3504811 (1430841858290876416)]} 0 35

The other Solr servers get the following line in their logs:

INFO: [collection1] webapp=/solr path=/update params={distrib.from=http://SOLRIP:SOLRPORT/solr/collection1/&update.distrib=FROMLEADER&wt=javabin&version=2} {add=[3504811 (1430844456234385408)]} 0 14

This looks good: the doc is added and the leader sends it to the other Solr servers. At first it takes about 1 second for the update to become visible :) But when I send some traffic to the server (200 q/s), the update takes about 30 seconds to become visible. Even after I stop the traffic, it still takes about 30 seconds. How is that possible?

The solrconfig parts: 60 false 2000

Did I miss something?

Best Regards,
Roy
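The solrconfig values above (60, false, 2000) lost their XML tags in the archive; they look like an autoCommit/autoSoftCommit block. A guess at the original shape, assuming 60-second hard commits that don't open a searcher and 2-second soft commits; treat every value here as an assumption:

```xml
<autoCommit>
  <maxTime>60000</maxTime>           <!-- hard commit: durability only -->
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>2000</maxTime>            <!-- soft commit: controls visibility -->
</autoSoftCommit>
```

If visibility lags far behind the soft-commit interval under load, searcher warming is a usual suspect: a new searcher cannot be registered until autowarming finishes, and maxWarmingSearchers limits how many may open concurrently, so commits queue up behind slow warms.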
Re: Find results with or without whitespace
Frankie,

Have you fixed this issue? I'm interested in your solution.
Synonyms problem
hello,

I have some problems with synonyms. I will show some examples to describe the problem.

Data:
High school Lissabon
High school Barcelona
University of applied science

When a user searches for IFD I want all these results back, so I want to use this synonym at query time:

IFD => high school lissabon, high school barcelona, University of applied science

The data is stored in the field "schools", whose type uses a pattern tokenizer that splits on whitespace. When I use the synonym at query time, the analysis screen shows me the expansions stacked by position:

position 1: high | high | university
position 2: school | school | of
position 3: lissabon | barcelona | applied
position 4: science

When I search for IFD I get no results. I found this in debugQuery:

schools:"(high high university) (school school of) (lissaban barcelona applied) (science)"

This shows the problem: Solr tries a lot of combinations, but not the right ones. I thought I could escape the whitespace in the synonyms (High\ school\ Lissabon). Then the analysis shows better results:

High school Lissabon
High school Barcelona
University of applied science

But then Solr searches for "high school Lissabon" as a single token, while my index is tokenized on whitespace, so still no results. I'm stuck, can someone help me?

Thanks
R
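Multi-word synonyms at query time are a well-known trouble spot: the query parser splits on whitespace before the synonym filter ever sees the full phrase. One common workaround is to apply the mapping at index time instead, injecting the abbreviation wherever the full name occurs in the token stream. A sketch; the analyzer chain around it is simplified, not the actual "schools" fieldType:

```xml
<analyzer type="index">
  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  <filter class="solr.LowerCaseFilterFactory"/>
  <!-- synonyms.txt (index time):
       high school lissabon => high school lissabon, ifd
       high school barcelona => high school barcelona, ifd
       university of applied science => university of applied science, ifd -->
  <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
          ignoreCase="true" expand="true"/>
</analyzer>
```

At index time the filter sees the whole token stream, so multi-token sources do match, and a plain query for "ifd" then hits the injected token. The trade-off: editing synonyms.txt requires reindexing.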
Slow autocomplete(terms)
Hello,

I use the terms request handler for autocomplete. It works fine with 200,000 records, but with 2 million docs it is very slow. I use a regex to support autocomplete in the middle of words, for example: chest -> manchester. My call (PECL PHP Solr):

$query = new SolrQuery();
$query->setTermsLimit("10");
$query->setTerms(true);
$query->setTermsField($field);
$term = SolrUtils::escapeQueryChars($term);
$query->set("terms.regex", "(.*)$term(.*)");
$query->set("terms.regex.flag", "case_insensitive");

URL: /solr/terms?terms.fl=autocompletewhat&terms.regex=(.*)chest(.*)&terms.regex.flag=case_insensitive&terms=true

I think the regex is the reason for the very high query time: Solr runs a regex over the terms of 2 million docs. The query takes 2 seconds, which is too much for autocomplete. When a user types "manchester united", Solr has to run 16 of these 2-second queries. Are there other options? Faster solutions? I use Solr 3.1.
Re: Autocomplete(terms) performance problem
thanks Klein,

If I understand correctly there is no solution for this problem right now. The best option for me is to limit the number of suggestions. I still want to use the regex, and with 100,000 docs it looks like that's no problem.
Re: Slow autocomplete(terms)
Hello Erick,

Thanks for your answer, but I have some problems with the NGramFilter. With my config I see this in the analysis for "manchester":

ma an nc ch he es st te er
man anc nch che hes est ste ter
manc anch nche ches hest este ster
manch anche nches chest heste ester
manche anches nchest cheste hester
manches anchest ncheste chester
manchest ancheste nchester

When I use the terms component I see all these results in the response. So when I type "ches" I get:

ches
nches
anches
nchest
ncheste

But I want one suggestion with the complete keyword: "manchester". Is this possible?
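The TermsComponent can only ever return raw indexed terms, which is why the grams themselves come back. One pattern that might help (a sketch, with made-up field and type names): n-gram only at index time in a dedicated field, query that field through a normal search handler, and display the stored original value:

```xml
<fieldType name="text_ngram" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.NGramFilterFactory" minGramSize="2" maxGramSize="25"/>
  </analyzer>
  <analyzer type="query">
    <!-- leave the user's input whole so "ches" matches an indexed gram -->
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

<field name="autocomplete_ngram" type="text_ngram" indexed="true" stored="true"/>
```

A request like q=autocomplete_ngram:ches&fl=autocomplete_ngram then returns "manchester" as the stored value, and an exact term lookup is far cheaper than a regex scan over the whole terms dictionary.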
Re: Slow autocomplete(terms)
Thanks for helping me so far.

Yes, I have seen the EdgeNGram possibility. Correct me if I'm wrong, but I thought it isn't possible to do infix searches with EdgeNGrams, like "chest" giving the suggestion "manchester"?
Errors in requesthandler statistics
Hello,

I was taking a look at my Solr statistics, and in the request handler section I see an error count of 23. How can I see which requests return these errors? Can I log this somewhere?

Thanks
Roy
RE: Errors in requesthandler statistics
Hi,

Thanks for your answer. I have some logging from Jetty. Every request looks like this:

2011-09-29T12:28:47 1317292127479 18470 org.apache.solr.core.SolrCore INFO org.apache.solr.core.SolrCore execute 20 [] webapp=/solr path=/select/ params={spellcheck=true&facet=true&sort=geodist()+asc&sfield=coord&spellcheck.q=test&facet.limit=20&version=2.2&fl=id,what,where} hits=0 status=0 QTime=12

How can I see which ones give an error? The file has 94,000 requests stored.

Roy
Performance issue: Frange with geodist()
Hello,

I use facet.query to count documents near the search location. It looks like this:

facet.query={!frange l=0 u=10}geodist()
facet.query={!frange l=0 u=20}geodist()
facet.query={!frange l=0 u=50}geodist()
facet.query={!frange l=0 u=100}geodist()

sfield and pt are set, and it works, but it's very slow: the QTime is between 1400ms and 1800ms. Is there a way to gain performance, or another way to get the same response?
Re: Performance issue: Frange with geodist()
Hello Mikhail,

Thanks for your answer. I think caching is enabled for geodist(): the first request takes 1440ms and the second one only 2ms, and in the statistics I see it hits the cache. The problem is that every request has another location, with other distances and results, so almost every request takes ~1400ms. Do I have to make the cache bigger? I don't think it's possible to cache it all.
Re: Performance issue: Frange with geodist()
I don't want basic facets. When the user doesn't get any results, I want to search in a radius around the search location. Example: "apple store in Manchester" gives no results. Then I want this:

Click here to see 2 results in a radius of 10km.
Click here to see 11 results in a radius of 50km.
Click here to see 19 results in a radius of 100km.

With geodist() and facet.query this is possible, but the performance isn't very good.
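What the stacked facet.query calls compute is a set of cumulative distance buckets, but each frange re-evaluates geodist() for every candidate document, so four of them cost roughly four full distance passes. The bucket logic itself, with made-up distances:

```python
# distance (in km) from the search location to each matching doc
distances = [3.2, 8.7, 12.5, 31.0, 47.9, 55.4, 80.1, 95.0]

# one cumulative bucket per radius, like the stacked facet.query calls
radii = [10, 50, 100]
counts = {r: sum(1 for d in distances if d <= r) for r in radii}
# counts[10] -> 2, counts[50] -> 5, counts[100] -> 8
```

A single pass can fill all buckets at once; on the Solr side, part of the win from switching to {!geofilt} (as suggested later in this thread) is that it can use the spatial index rather than evaluating a function per document.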
Re: Performance issue: Frange with geodist()
Hi Yonik,

I used your suggestion to implement a better radius search:

&facet.query={!geofilt d=10 key=d10}
&facet.query={!geofilt d=20 key=d20}
&facet.query={!geofilt d=50 key=d50}

It is a little faster than with geodist(), but I think it is still a bottleneck.
solr.PatternReplaceFilterFactory AND endoffset
Hi,

I have some problems with the PatternReplaceFilter. I can't use the WordDelimiterFilter because I only want to replace some special characters chosen by myself. An example:

Tottemham-hotspur (london)
Arsenal (london)

I want to replace "-" with " " and "(" or ")" with "". In the analysis I see this:

position 1
term text: tottemham hotspur london
startOffset: 0
endOffset: 26

So the replace filter works. Now I search for "tottemham hotspur london", which gives no results:

position 1
term text: tottemham hotspur london
startOffset: 0
endOffset: 24

It does work when I search for "tottemham-hotspur (london)". I think the problem is the difference in offset (24 vs 26). I need some help...
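Offsets always refer to the original input text, so a token filter that rewrites characters cannot change them. Moving the replacements in front of the tokenizer with PatternReplaceCharFilterFactory keeps the rewritten text and the offsets consistent on both index and query side; a sketch, where the KeywordTokenizer is an assumption based on the single whole-string term shown above:

```xml
<analyzer>
  <!-- char filters rewrite the text before tokenization -->
  <charFilter class="solr.PatternReplaceCharFilterFactory" pattern="-" replacement=" "/>
  <charFilter class="solr.PatternReplaceCharFilterFactory" pattern="[()]" replacement=""/>
  <tokenizer class="solr.KeywordTokenizerFactory"/>
  <filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
```

This may be worth trying on both analyzers of the field type, so the same normalized text is produced regardless of whether the input contains the punctuation.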
One field must match with edismax
Hello,

I have a problem with my application. I have several fields and use edismax to search across them. Now I want to configure that one particular field must match. An example:

firstname: Lionel
lastname: messi
nicknames: leo,pulga

When I search I only want results that match on lastname, plus possibly other fields. So "lionel pulga" should give no results, but "messi leo" should match. In other words: I want to configure that there must always be a match in one field (lastname). Is this possible?
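One common way to express "at least one query term must hit lastname" is an extra filter query that reuses the user's input via parameter dereferencing. A sketch with illustrative field names, not necessarily the answer given in this thread:

```
q=messi leo
defType=edismax
qf=firstname lastname nicknames
fq={!edismax qf=lastname mm=1 v=$q}
```

The fq only passes documents where at least one term from q matches lastname: "lionel pulga" fails it, "messi leo" passes, while scoring and ranking still come from the main edismax query over all fields.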
Re: One field must match with edismax
Thanks, it was that easy! I was thinking about a variant of the mm option in dismax, but this works great.
Re: Autocomplete(terms) performance problem
Thanks for your answer Nagendra,

The problem is that I want to do infix searches. When I search for "sisco" I want the autocomplete to suggest "san fran*sisco*". In the example you gave me that is also not possible.

Roy
Re: Autocomplete(terms) performance problem
Thanks, it looks great! In the near future I will give it a try.
Replication downtime?? - master slave
Hello,

I have one Solr instance and I'm very happy with it. Now we have multiple daily updates, and I see that the response time gets slower while an update is running. I think I need master-slave replication. My questions:

- Is a slave slower while a replication from master to slave is running?
- Is there any downtime when switching from old to new data?

I only need 1 slave for performance, but if replication slows it down I probably need 2.

Thanks
Roy
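For reference, the classic master/slave setup is just a ReplicationHandler on each side; a sketch with placeholder host name and polling interval:

```xml
<!-- on the master -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>

<!-- on the slave -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/replication</str>
    <str name="pollInterval">00:10:00</str>
  </lst>
</requestHandler>
```

The slave downloads index files in the background and only swaps in the new index once the copy completes, so it keeps serving queries throughout; the swap itself behaves like a normal commit.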
Re: Replication downtime?? - master slave
Thanks Erick,

It's good to hear that the slave doesn't notice anything.

Roy
Differences in debugQuery and results
Hello,

I have some configuration problems and can't get it working. I see differences in debugQuery.

When I search for: w.j

(DisjunctionMaxQuery(((name1_search:w name1_search:j)^5.0) | ((name2_search:w name2_search:j)^5.0))~1.0)

When I search for: w j

(DisjunctionMaxQuery((name1_search:w^5.0 | name2_search:w^5.0)~1.0) DisjunctionMaxQuery((name1_search:j^5.0 | name2_search:j^5.0)~1.0))

I use the WordDelimiterFilter to split on a dot. Why is there a difference? I want Solr to handle both cases the same way. How can I fix this?

Roy
From Solr3.1 to SolrCloud
hello,

We are using Solr 3.1 for the search on our webpage right now. We want to use the nice new features of Solr 4, in particular realtime search. Our current configuration looks like this:

Master
Slave1 Slave2 Slave3

We have 3 slaves and 1 master, and the data is replicated every night. In the future we want to update every ~5 seconds. I was looking at SolrCloud and have a few questions:

- We aren't using shards because our index only contains 1 million simple docs; we only need multiple servers because of the amount of traffic. The SolrCloud examples all use shards. Is numShards=1 possible? Is one big index faster than multiple shards? So I need 1 collection with multiple nodes?
- Should I run a single ZooKeeper instance (without Solr) on a separate server?
- Is the DIH still there in Solr 4?

Any help is welcome! Thanks
Roy
Re: From Solr3.1 to SolrCloud
Thanks Tomás, I will use numShards=1.

Are there instructions on how to install only ZooKeeper on a separate server? Or do I have to install Solr 4 on that server too? And how do I make the connection between the Solr instances and the ZooKeeper instance (server)?

Thanks so far,
Roy
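A standalone ZooKeeper needs no Solr at all: download the ZooKeeper distribution and give it a minimal config. A sketch, with hypothetical paths and host names:

```
# conf/zoo.cfg on the ZooKeeper host
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181

# start it:
#   bin/zkServer.sh start
#
# point each Solr node at it instead of the embedded ZooKeeper:
#   java -DzkHost=zkhost:2181 -DnumShards=1 -jar start.jar
```

The zkHost system property is the whole connection between Solr and ZooKeeper; with an ensemble it becomes a comma-separated list of host:port pairs.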
Re: From Solr3.1 to SolrCloud
I run a separate ZooKeeper instance now. It works great, and the nodes are visible in the admin UI. Two more questions:

- I change my synonyms.txt on a Solr node. How can I get ZooKeeper and the other Solr nodes in sync without a restart?
- I read some more about ZooKeeper ensembles. When I run 4 Solr nodes (replicas) I need 3 ZooKeepers in an ensemble (more than 50% must stay up). With ZooKeeper and Solr separated, that takes 7 servers, while in the past we only needed 4. Are there other options? The costs grow, and 3 dedicated ZooKeeper servers sounds like overkill.

Thanks
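For the synonyms.txt question, the usual flow (paths and names below are placeholders) is to re-upload the config set with the zkcli script from Solr's cloud-scripts directory and then reload the collection, with no restart needed:

```
# push the changed config directory back into ZooKeeper
cloud-scripts/zkcli.sh -zkhost zkhost:2181 -cmd upconfig \
    -confdir /path/to/solr/conf -confname myconf

# make the running nodes pick it up
curl 'http://solrhost:8983/solr/admin/collections?action=RELOAD&name=collection1'
```

The RELOAD action reopens the cores on every node of the collection, so all replicas read the fresh config from ZooKeeper at the same time.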
Re: From Solr3.1 to SolrCloud
Thanks Tomás for the information so far. You said:

"You can effectively run with only one zk instance, the problem with this is that if that instance dies, then your whole cluster will go down."

When the cluster goes down, can I still send queries to the Solr instances? We have a load balancer that picks a Solr instance round robin. Can Solr still handle queries when there is no ZooKeeper up, and are only updates a problem?
Re: From Solr3.1 to SolrCloud
Ok, that's important for the traffic.

Some questions about ZooKeeper. I have done some tests and now I wonder:

- How can I delete configs from ZooKeeper?
- I see some nodes in the clusterstate that are already gone. Why is this not up to date? The same goes for the graph.

Thanks again!
Re: From Solr3.1 to SolrCloud
Mark: I'm using a separate ZooKeeper instance, not the embedded one in Solr. I can't find the location where the configs are stored; I can log in to ZooKeeper and see the configs. The delete command works, but I can't delete a whole config directory at once, only file by file.

Erick: The nodes aren't live anymore and are not visible under "live_nodes", but they are still in the cloud graph. Why is that, and how can I remove them from there? I was testing with 10 nodes and now run only 4; I still see the 6 nodes that aren't there anymore.
Spelling output solr 4
Hello,

It looks like the direct spelling component returns the correction wrapped in parentheses: "(correction)". Why is this, and are there other differences in the spelling component in Solr 4 compared to Solr 3.1?

Thanks
Roy
Order SOLR 4 output
Hello,

I have a really simple question, I think: what is the order of the fields in the Solr response? In Solr 3.1 it was alphabetical, but in Solr 4 it isn't anymore. Is it configurable?

I want to know this because I have a test script that checks for differences in output between the Solr versions. When the order of the output fields differs, it is really hard to check/test.

Thanks
Scoring differences solr versions
Hi,

I have a question about scoring in Solr 4. I ran the same query against two versions of Solr (same indexed docs). The debug output of the scoring:

*SOLR4:*

3.3243241 = (MATCH) sum of:
  0.20717455 = (MATCH) max plus 1.0 times others of:
    0.19920631 = (MATCH) weight(firstname_search:g^50.0 in 783453) [DefaultSimilarity], result of:
      0.19920631 = score(doc=783453,freq=1.0 = termFreq=1.0), product of:
        0.11625154 = queryWeight, product of:
          50.0 = boost
          3.4271598 = idf(docFreq=195811, maxDocs=2217897)
          6.784133E-4 = queryNorm
        1.7135799 = fieldWeight in 783453, product of:
          1.0 = tf(freq=1.0), with freq of:
            1.0 = termFreq=1.0
          3.4271598 = idf(docFreq=195811, maxDocs=2217897)
          0.5 = fieldNorm(doc=783453)
    0.007968252 = (MATCH) weight(name_first_letter:g in 783453) [DefaultSimilarity], result of:
      0.007968252 = score(doc=783453,freq=1.0 = termFreq=1.0), product of:
        0.0023250307 = queryWeight, product of:
          3.4271598 = idf(docFreq=195811, maxDocs=2217897)
          6.784133E-4 = queryNorm
        3.4271598 = fieldWeight in 783453, product of:
          1.0 = tf(freq=1.0), with freq of:
            1.0 = termFreq=1.0
          3.4271598 = idf(docFreq=195811, maxDocs=2217897)
          1.0 = fieldNorm(doc=783453)
  3.1171496 = (MATCH) max plus 1.0 times others of:
    3.1171496 = (MATCH) weight(lastname_search:aalbers^50.0 in 783453) [DefaultSimilarity], result of:
      3.1171496 = score(doc=783453,freq=1.0 = termFreq=1.0), product of:
        0.3251704 = queryWeight, product of:
          50.0 = boost
          9.586204 = idf(docFreq=413, maxDocs=2217897)
          6.784133E-4 = queryNorm
        9.586204 = fieldWeight in 783453, product of:
          1.0 = tf(freq=1.0), with freq of:
            1.0 = termFreq=1.0
          9.586204 = idf(docFreq=413, maxDocs=2217897)
          1.0 = fieldNorm(doc=783453)

*SOLR3.1:*

3.3741257 = (MATCH) sum of:
  0.25697616 = (MATCH) max plus 1.0 times others of:
    0.2490079 = (MATCH) weight(firstname_search:g^50.0 in 1697008), product of:
      0.11625154 = queryWeight(firstname_search:g^50.0), product of:
        50.0 = boost
        3.4271598 = idf(docFreq=195811, maxDocs=2217897)
        6.784133E-4 = queryNorm
      2.141975 = (MATCH) fieldWeight(firstname_search:g in 1697008), product of:
        1.0 = tf(termFreq(firstname_search:g)=1)
        3.4271598 = idf(docFreq=195811, maxDocs=2217897)
        0.625 = fieldNorm(field=firstname_search, doc=1697008)
    0.007968252 = (MATCH) weight(name_first_letter:g in 1697008), product of:
      0.0023250307 = queryWeight(name_first_letter:g), product of:
        3.4271598 = idf(docFreq=195811, maxDocs=2217897)
        6.784133E-4 = queryNorm
      3.4271598 = (MATCH) fieldWeight(name_first_letter:g in 1697008), product of:
        1.0 = tf(termFreq(name_first_letter:g)=1)
        3.4271598 = idf(docFreq=195811, maxDocs=2217897)
        1.0 = fieldNorm(field=name_first_letter, doc=1697008)
  3.1171496 = (MATCH) max plus 1.0 times others of:
    3.1171496 = (MATCH) weight(lastname_search:aalbers^50.0 in 1697008), product of:
      0.3251704 = queryWeight(lastname_search:aalbers^50.0), product of:
        50.0 = boost
        9.586204 = idf(docFreq=413, maxDocs=2217897)
        6.784133E-4 = queryNorm
      9.586204 = (MATCH) fieldWeight(lastname_search:aalbers in 1697008), product of:
        1.0 = tf(termFreq(lastname_search:aalbers)=1)
        9.586204 = idf(docFreq=413, maxDocs=2217897)
        1.0 = fieldNorm(field=lastname_search, doc=1697008)

What is the reason for the difference in score? Is something really different in how Solr 4 calculates scores?

Thanks
Roy
Re: Scoring differences solr versions
Hello Shawn,

Thanks for the help. Indented, the explain output is the same as in my first message; the only per-field difference is the firstname_search fieldNorm (0.625 in Solr 3.1 vs 0.5 in Solr 4, giving fieldWeight 2.141975 vs 1.7135799).

Why does this doc score higher in Solr 3.1?
Fieldnorm solr 4 -> specialchars(worddelimiter)
Hello, I noticed some differences in Solr score between Solr 3.1 and Solr 4. I have a search field with the following type. An example of fieldnorms:

SearchTerm = *barcelona*

solr 3.1:
fc *barcelona* soccer club -> 0.5
fc-*barcelona* soccer club -> 0.5

solr 4:
fc *barcelona* soccer club -> 0.5
fc-*barcelona* soccer club -> 0.4375

It could be the catenateWords setting of the field type config: fc, barcelona, fcbarcelona, soccer, club (5 terms = 0.4375). Strange that in Solr 3.1 it only counted 4 terms with the same filter. Why is the fieldnorm different? I need some help with this :) Thanks Roy
Re: Fieldnorm solr 4 -> specialchars(worddelimiter)
I have done some more testing with different examples. It's really the WordDelimiter that influences the fieldnorm. When I search for "Barcelona", the doc with "FC Barcelona" scores higher than "FC-Barcelona". The fieldnorm for "FC Barcelona" is 0.625 and the fieldnorm for "FC-Barcelona" is 0.5.

Analysis:
fc | barcelona
fc | barcelona | fcbarcelona

So it's 2 terms against 3, and this explains the difference in score. In Solr 3.1 the score is the same: the fieldnorm is 0.625 for both docs. It looks like catenateWords had no influence in Solr 3.1. I want the score to be the same, with or without catenateWords, like it is in Solr 3.1. Is this possible? Thanks Roy
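One thing that might be worth trying (an assumption on my part, not something confirmed in this thread): in Solr 4.x the similarity factory accepts a discountOverlaps setting, which excludes tokens at positionIncrement=0, such as the catenated "fcbarcelona", from the length normalization. A hypothetical schema.xml fragment:

```xml
<!-- Assumption: discountOverlaps tells the similarity to ignore tokens at
     positionIncrement=0 (like the catenated "fcbarcelona") when computing
     the field norm, so "FC Barcelona" and "FC-Barcelona" normalize alike. -->
<similarity class="solr.DefaultSimilarityFactory">
  <bool name="discountOverlaps">true</bool>
</similarity>
```

Verify against your Solr version's schema documentation before relying on it.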
Re: Fieldnorm solr 4 -> specialchars(worddelimiter)
Hello Jack, I'm using exactly the same field type. It looks like catenateWords has a different influence in Solr 4.1 than in the previous version (3.1). The analysis output is the same in both versions. I want exactly the same results but can't get them.
Re: Fieldnorm solr 4 -> specialchars(worddelimiter)
Hello Jack, Thanks for your answer. It's clear now; I think it was a bug in 3.1. The difference in fieldnorm was just not what I expected. I will tweak the schema to get it closer to the expected results. Thanks Jack, Roy
SolrCloud: port out of range:-1
Hello, I have some problems with SolrCloud and ZooKeeper. I have 2 servers and I want a Solr instance on both. Each Solr instance runs an embedded ZooKeeper. When I try to start the first one I get the error "port out of range:-1". The command I run to start Solr with embedded ZooKeeper:

java -Djetty.port=4110 -DzkRun=10.100.10.101:5110 -DzkHost=10.100.10.101:5110,10.100.10.102:5120 -Dbootstrap_conf=true -DnumShards=1 -Xmx1024M -Xms512M -jar start.jar

It runs Solr on port 4110 and the embedded ZooKeeper on 5110. -DzkHost lists the URL of the local ZooKeeper (5110) and the URL of the other server (its ZooKeeper port). When I try to start this it gives the error "port out of range:-1". What's wrong? Thanks Roy
Re: SolrCloud: port out of range:-1
In the end I want 3 servers; this was only a test. I know that a majority of servers is needed to provide service. I read some tutorials about ZooKeeper and looked at the wiki. I installed ZooKeeper separately on each server and connected them with each other (zoo.cfg). In the logs I can see the ZooKeepers know each other. When I start Solr, I use the -DzkHost parameter to declare the ZooKeepers of the servers:

-DzkHost=ip:2181,ip:2181,ip:2181

It works great :)

ps. With embedded ZooKeepers I can't get it working; with a second server in the zkHost it returns an error. Strange, but for me the separate ZooKeepers are a great solution: separate logs and easy to use for other ZooKeeper clients (in the future I want to split into 3 Solr instances and 5 ZooKeeper instances). THANKS
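For anyone finding this later, a minimal sketch of the separate-ZooKeeper setup described above, with hypothetical IPs, ports and paths:

```
# zoo.cfg on each of the three ZooKeeper nodes; /var/lib/zookeeper/myid
# must contain that node's own server.N number
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
initLimit=10
syncLimit=5
server.1=10.100.10.101:2888:3888
server.2=10.100.10.102:2888:3888
server.3=10.100.10.103:2888:3888
```

```
# start each Solr instance against the external ensemble
java -Djetty.port=4110 \
     -DzkHost=10.100.10.101:2181,10.100.10.102:2181,10.100.10.103:2181 \
     -jar start.jar
```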
Advice: solrCloud + DIH
Hello, I need some advice with my SolrCloud cluster and the DIH. I have a cluster with 3 cloud servers; every server has a Solr instance and a ZooKeeper instance. I start it with the -DzkHost parameter. It works great. I send updates with curl (XML) like this:

curl http://ip:SOLRport/solr/update -H "Content-Type: text/xml" --data-binary '223232test'

Solr has 2 million docs in the index. Now I want an extra field: content2. I add this to my schema and upload it to the cluster again with -Dbootstrap_confdir and -Dcollection.configName. It's replicated to the whole cluster. Now I need a re-index to add the field to every doc. I have a database with all the data and want to use the full-import of DIH (this is how I did it in previous Solr versions). When I run this it goes at 3 docs/s (really slow). When I run Solr standalone (not SolrCloud) it goes at 600 docs/s. What's the best way to do a full re-index with SolrCloud? Does SolrCloud support DIH? Thanks
Re: Advice: solrCloud + DIH
Thanks for the support so far. I was running the dataimport on a replica! Now I start it on the leader and it goes at 590 docs/s. I think all docs were going to another node and then coming back. Is there a way to find the leader? If there is, I can detect the leader with a script and start the DIH every night on the right server. Roy
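A rough sketch of such a script, assuming the Solr 4.x clusterstate.json layout (fetched e.g. from http://host:port/solr/zookeeper?path=/clusterstate.json&wt=json — the endpoint and JSON shape are assumptions, so verify them against your version first):

```python
import json

def find_leaders(clusterstate, collection):
    """Return the base_url of each shard leader for the given collection."""
    leaders = []
    for shard in clusterstate[collection]["shards"].values():
        for replica in shard["replicas"].values():
            # in clusterstate.json the leader flag is the string "true"
            if replica.get("leader") == "true":
                leaders.append(replica["base_url"])
    return leaders

# hypothetical clusterstate snippet for illustration
state = json.loads("""{
  "collection1": {"shards": {"shard1": {"replicas": {
      "core_node1": {"base_url": "http://10.0.0.1:8983/solr", "leader": "true"},
      "core_node2": {"base_url": "http://10.0.0.2:8983/solr"}}}}}}""")
print(find_leaders(state, "collection1"))  # ['http://10.0.0.1:8983/solr']
```

A cron job could call this and then fire the DIH full-import at the returned URL.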
Re: Re:how to monitor solr in newrelic
Try this when you start Solr:

java -javaagent:/NEWRELICPATH/newrelic.jar -jar start.jar

Normally you will see your Solr installation on your New Relic dashboard within 2 minutes.
Boosting score by Geo distance
Hello, I want to boost the score of the matched documents by geo distance. I use this:

bf=recip(geodist(),2,1000,30)

It works, but I don't know what the parameters (2,1000,30) mean. Thanks Roy
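For reference, recip is documented as recip(x,m,a,b) = a/(m*x+b), so bf=recip(geodist(),2,1000,30) adds a boost of 1000/(2*distance+30). A quick sketch of what the three numbers do:

```python
# recip(x, m, a, b) = a / (m*x + b): `a` scales the whole boost, `b` sets the
# boost at distance 0 (a/b), and `m` controls how fast it decays with distance.
def recip(x, m, a, b):
    return a / (m * x + b)

for d in (0, 10, 50, 200):
    print(d, round(recip(d, 2, 1000, 30), 2))
# 0 km -> 33.33, 10 km -> 20.0, 50 km -> 7.69, 200 km -> 2.33
```

So with (2,1000,30) a document at the query point gets roughly a 33-point boost, which falls off smoothly with distance.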
Must match and terms with only one letter
Hello, I use the mm parameter on my edismax request handler (70%). This works great, but I have one problem: when I search for "A Cole", only one term has to match (mm = 70%). The problem is the "A": it returns 9200 documents with an "A" in them. Is there a possibility to skip terms with only one character? The mm value is OK (2 terms -> 1 match), but not when a term is only one character. Thanks Roy
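One possible approach (a sketch on my part, not a confirmed fix): drop one-character tokens with LengthFilterFactory, so a lone "A" never reaches the query and therefore never counts toward mm. The field type name and limits here are hypothetical:

```xml
<!-- Hypothetical sketch: tokens shorter than 2 characters are discarded
     at both index and query time, so "A Cole" effectively becomes "Cole". -->
<fieldType name="text_min2" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.LengthFilterFactory" min="2" max="100"/>
  </analyzer>
</fieldType>
```

Note this also makes genuine one-letter searches unmatchable in that field, so weigh that trade-off.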
Split token
Hello, I want to split my string when it contains "(". Example:

spurs (London)
Internationale (milan)

I want these split at the "(", e.g. "spurs (London)" into the tokens "spurs" and "London". What tokenizer can I use to fix this problem?
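A sketch of one option, assuming you want to split on "(", ")" and whitespace in a single pass: PatternTokenizerFactory with a regex of delimiter characters (the field type name is hypothetical):

```xml
<!-- Hypothetical sketch: runs of "(", ")" and whitespace act as token
     boundaries, so "spurs (London)" -> "spurs", "london" after lowercasing. -->
<fieldType name="text_paren" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.PatternTokenizerFactory" pattern="[()\s]+"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```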
PECL SOLR PHP extension, JSON output
Hello, I use the PECL PHP extension for Solr. I want my output in JSON. This is not working:

$query->set('wt', 'json');

How do I solve this problem?
Re: PECL SOLR PHP extension, JSON output
I have tried that, but it seems like JSON is not supported. From the documentation:

Parameters
responseWriter - One of the following:
- xml
- phpnative
WhitespaceTokenizer and scoring(field length)
Hello, I have a problem with the WhitespaceTokenizer and scoring. An example:

id | Title
1  | Manchester united
2  | Manchester

With the WhitespaceTokenizer, "Manchester united" will be split into "Manchester" and "united". When I search for "manchester" I get ids 1 and 2 in my results. What I want is that id 2 scores higher (field length). How can I fix this?
Re: WhitespaceTokenizer and scoring(field length)
I thought it was something simple. Here is my configuration. I search for "supermarket":

357 | LIDL Headoffice Supermarkt | LIDL | LIDL | LIDL Headoffice Supermarket
719 | LIDL Supermarket | LIDL | LIDL | LIDL Supermarket

debugQuery: both documents have the same score, but doc 357 has more characters in the searchField.

1.4330883 = (MATCH) fieldWeight(searchField:supermarket in 325), product of:
  1.0 = tf(termFreq(searchField:supermarket)=1)
  2.8661766 = idf(docFreq=3194, maxDocs=20651)
  0.5 = fieldNorm(field=searchField, doc=325)

1.4330883 = (MATCH) fieldWeight(searchField:supermarket in 678), product of:
  1.0 = tf(termFreq(searchField:supermarket)=1)
  2.8661766 = idf(docFreq=3194, maxDocs=20651)
  0.5 = fieldNorm(field=searchField, doc=678)
Re: WhitespaceTokenizer and scoring(field length)
Thanks!! It's clear now; sometimes the lengthNorm is the same. See the table below:

# of terms | lengthNorm
 1 | 1.0
 2 | 0.625
 3 | 0.5
 4 | 0.5
 5 | 0.4375
 6 | 0.375
 7 | 0.375
 8 | 0.3125
 9 | 0.3125
10 | 0.3125

Is it possible to change the lengthNorm?
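For anyone wondering why the values collapse like that: Lucene's lengthNorm is 1/sqrt(numTerms), but it is stored in a single byte with a 3-bit mantissa (SmallFloat.floatToByte315), which quantizes nearby values together. A Python sketch ported from my reading of Lucene's SmallFloat (treat as an illustration, not the exact shipped code):

```python
import math
import struct

def float_to_raw_int_bits(f):
    """IEEE-754 bit pattern of a 32-bit float (like Java's floatToRawIntBits)."""
    return struct.unpack(">i", struct.pack(">f", f))[0]

def int_bits_to_float(bits):
    return struct.unpack(">f", struct.pack(">i", bits))[0]

def float_to_byte315(f):
    """Encode a float into one byte: 3 mantissa bits, zero-exponent point 15."""
    bits = float_to_raw_int_bits(f)
    smallfloat = bits >> (24 - 3)
    if smallfloat <= ((63 - 15) << 3):
        return 0 if bits <= 0 else 1
    if smallfloat >= ((63 - 15) << 3) + 0x100:
        return 255
    return smallfloat - ((63 - 15) << 3)

def byte315_to_float(b):
    if b == 0:
        return 0.0
    bits = (b & 0xFF) << (24 - 3)
    bits += (63 - 15) << 24
    return int_bits_to_float(bits)

def length_norm(num_terms):
    """1/sqrt(numTerms) after the lossy one-byte round trip."""
    return byte315_to_float(float_to_byte315(1.0 / math.sqrt(num_terms)))

for n in range(1, 11):
    print(n, length_norm(n))
# reproduces the table: 1->1.0, 2->0.625, 3->0.5, 4->0.5, 5->0.4375,
# 6->0.375, 7->0.375, 8->0.3125, 9->0.3125, 10->0.3125
```

Changing the curve itself would mean plugging in a custom Similarity that overrides lengthNorm (or setting omitNorms="true" to switch length normalization off entirely).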
Autocomplete(terms) middle of words
Hello, I use the TermsComponent for autocomplete on my website. I use terms.prefix and get the following results when searching for "manch":

manchester city (10)
manchester united (2)

When a user searches for "ches" I want the following results:

chesterfield united (13)
manchester united (2)

So I want to match in the middle of words. How can I do that? I have tried the NGramFilter at index time but it doesn't seem to work with the TermsComponent. My current configuration:
Re: Autocomplete(terms) middle of words
Ok, I tried NGrams. My configuration looks like this: I run the query:

http://localhost:8983/solr/terms?terms.fl=suggestionField&terms.prefix=chest

Result:

chest
cheste
chester

The result is not what I expected. I think the query is not OK?
Re: Autocomplete(terms) middle of words
The words are now split into n-grams in the index. It looks like this:

m ma man manc manch manche manches manchest mancheste manchester

The TermsComponent does not see it as one word (manchester); it gives me the results back as n-grams (m, ma, man, etc.).
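For what it's worth, a common workaround (an assumption on my part, not tested against the TermsComponent) is to apply the n-grams at index time only, then run a normal query against that field and display the stored full title, instead of enumerating index terms:

```xml
<!-- Hypothetical sketch: n-grams only at index time; a normal query for
     "ches" then matches the "ches" gram of "chesterfield", and you show
     the stored field value rather than the raw terms. -->
<fieldType name="text_ngram" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.NGramFilterFactory" minGramSize="3" maxGramSize="15"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```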
Re: Autocomplete(terms) middle of words
terms.regex doesn't work for me; prefix works fine. I use Solr 1.4. Is it compatible?
Dismax scoring multiple fields TIE
Hello, I have a question about scoring when I use the dismax handler. Some examples:

   name                   | category | related category
1. Chelsea best club ever | Chelsea  | Sport
2. Chelsea                | Chelsea  | Sport

When I search for "Chelsea" I want a higher score for number 2; I think it is a better match on field length. With dismax, both records get the same score. I see some difference in fieldNorm, but still the score is the same. How can I fix this?

My config: dismax, fields name, category, related_category, tie 1.0

SCORE 1:
0.75269306 = (MATCH) sum of:
  0.75269306 = (MATCH) max of:
    0.75269306 = (MATCH) weight(category:chelsea in 680), product of:
      0.3085193 = queryWeight(category:chelsea), product of:
        2.4396951 = idf(docFreq=236, maxDocs=1000)
        0.12645814 = queryNorm
      2.4396951 = (MATCH) fieldWeight(category:chelsea in 680), product of:
        1.0 = tf(termFreq(category:chelsea)=1)
        2.4396951 = idf(docFreq=236, maxDocs=1000)
        1.0 = fieldNorm(field=category, doc=680)
    0.37634653 = (MATCH) weight(name:chelsea in 680), product of:
      0.3085193 = queryWeight(name:chelsea), product of:
        2.4396951 = idf(docFreq=236, maxDocs=1000)
        0.12645814 = queryNorm
      1.2198476 = (MATCH) fieldWeight(name:chelsea in 680), product of:
        1.0 = tf(termFreq(name:chelsea)=1)
        2.4396951 = idf(docFreq=236, maxDocs=1000)
        0.5 = fieldNorm(field=name, doc=680)

SCORE 2:
0.75269306 = (MATCH) sum of:
  0.75269306 = (MATCH) max of:
    0.75269306 = (MATCH) weight(category:chelsea in 678), product of:
      0.3085193 = queryWeight(category:chelsea), product of:
        2.4396951 = idf(docFreq=236, maxDocs=1000)
        0.12645814 = queryNorm
      2.4396951 = (MATCH) fieldWeight(category:chelsea in 678), product of:
        1.0 = tf(termFreq(category:chelsea)=1)
        2.4396951 = idf(docFreq=236, maxDocs=1000)
        1.0 = fieldNorm(field=category, doc=678)
    0.75269306 = (MATCH) weight(name:chelsea in 678), product of:
      0.3085193 = queryWeight(name:chelsea), product of:
        2.4396951 = idf(docFreq=236, maxDocs=1000)
        0.12645814 = queryNorm
      2.4396951 = (MATCH) fieldWeight(name:chelsea in 678), product of:
        1.0 = tf(termFreq(name:chelsea)=1)
        2.4396951 = idf(docFreq=236, maxDocs=1000)
        1.0 = fieldNorm(field=name, doc=678)
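One knob that can break exactly this kind of tie (a sketch, not a verified fix): dismax's tie parameter. In both explains the final score is the max over fields, which is the category match at fieldNorm 1.0 for both docs, so the weaker name match of doc 1 never counts. With tie > 0 the non-maximum fields contribute tie times their score, which would push doc 2 ahead. Hypothetical query params:

```
q=chelsea&defType=dismax&qf=name category related_category&tie=0.1
```

With tie=0.1, doc 1 would score roughly 0.7527 + 0.1*0.3763 and doc 2 roughly 0.7527 + 0.1*0.7527, so doc 2 wins.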
Re: Dismax scoring multiple fields TIE
No, but I think the difference in field length is large and the score is still the same. Same score for these results (q=chelsea):

1. Chelsea is a very very big club in london, england | Chelsea | Sport
2. Chelsea | Chelsea | Sport
Patch problems solr 1.4 - solr-2010
Hello, I want to patch my Solr installation (1.4.1) with SOLR-2010 (https://issues.apache.org/jira/browse/SOLR-2010). I need this feature: "Only return collations that are guaranteed to result in hits if re-queried". I run the following command:

wget https://issues.apache.org/jira/secure/attachment/12457683/SOLR-2010_141.patch -O - | patch -p0 --dry-run

I get the following output:

--13:51:35-- https://issues.apache.org/jira/secure/attachment/12457683/SOLR-2010_141.patch
Resolving issues.apache.org... 140.211.11.121
Connecting to issues.apache.org|140.211.11.121|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 55846 (55K) [text/x-patch]
Saving to: `STDOUT'
100%[>] 55,846 59.3K/s in 0.9s
13:51:37 (59.3 KB/s) - `-' saved [55846/55846]

(Stripping trailing CRs from patch.)
patching file src/common/org/apache/solr/common/params/SpellingParams.java
Hunk #1 FAILED at 81.
1 out of 1 hunk FAILED -- saving rejects to file src/common/org/apache/solr/common/params/SpellingParams.java.rej
(Stripping trailing CRs from patch.)
patching file src/java/org/apache/solr/handler/component/SpellCheckComponent.java
Hunk #1 FAILED at 24.
Hunk #2 FAILED at 141.
Hunk #3 FAILED at 155.
Hunk #4 FAILED at 214.
Hunk #5 FAILED at 252.
Hunk #6 FAILED at 262.
6 out of 6 hunks FAILED -- saving rejects to file src/java/org/apache/solr/handler/component/SpellCheckComponent.java.rej
(Stripping trailing CRs from patch.)
patching file src/java/org/apache/solr/spelling/PossibilityIterator.java
(Stripping trailing CRs from patch.)
patching file src/java/org/apache/solr/spelling/RankedSpellPossibility.java
(Stripping trailing CRs from patch.)
patching file src/java/org/apache/solr/spelling/SpellCheckCollation.java
(Stripping trailing CRs from patch.)
patching file src/java/org/apache/solr/spelling/SpellCheckCollator.java
(Stripping trailing CRs from patch.)
patching file src/java/org/apache/solr/spelling/SpellCheckCorrection.java
(Stripping trailing CRs from patch.)
patching file src/solrj/org/apache/solr/client/solrj/response/SpellCheckResponse.java
Hunk #1 FAILED at 31.
Hunk #2 FAILED at 45.
Hunk #3 FAILED at 108.
Hunk #4 FAILED at 210.
4 out of 4 hunks FAILED -- saving rejects to file src/solrj/org/apache/solr/client/solrj/response/SpellCheckResponse.java.rej
(Stripping trailing CRs from patch.)
patching file src/test/org/apache/solr/client/solrj/response/TestSpellCheckResponse.java
Hunk #1 FAILED at 23.
Hunk #2 FAILED at 109.
2 out of 2 hunks FAILED -- saving rejects to file src/test/org/apache/solr/client/solrj/response/TestSpellCheckResponse.java.rej
(Stripping trailing CRs from patch.)
patching file src/test/org/apache/solr/spelling/SpellCheckCollatorTest.java
(Stripping trailing CRs from patch.)
patching file src/test/org/apache/solr/spelling/SpellPossibilityIteratorTest.java
(Stripping trailing CRs from patch.)
patching file src/test/test-files/solr/conf/schema.xml
Hunk #1 FAILED at 19.
Hunk #2 FAILED at 50.
Hunk #3 FAILED at 100.
Hunk #4 FAILED at 408.
Hunk #5 FAILED at 427.
Hunk #6 FAILED at 453.
Hunk #7 FAILED at 535.
7 out of 7 hunks FAILED -- saving rejects to file src/test/test-files/solr/conf/schema.xml.rej
(Stripping trailing CRs from patch.)
patching file src/test/test-files/solr/conf/solrconfig.xml
Hunk #1 FAILED at 29.
Hunk #2 FAILED at 116.
Hunk #3 FAILED at 340.
Hunk #4 FAILED at 396.
4 out of 4 hunks FAILED -- saving rejects to file src/test/test-files/solr/conf/solrconfig.xml.rej
Re: Patch problems solr 1.4 - solr-2010
Hello, thanks for the answers. I use branch 1.4 and I have successfully patched SOLR-2010. Now I want to use the collation spellchecking. What should my URL look like? I tried this but it's not working (the behavior is the same as Solr without SOLR-2010):

http://localhost:8983/solr/select?q=man unitet&spellcheck.q=man unitet&spellcheck=true&spellcheck.build=true&spellcheck.collate=true&spellcheck.collateExtendedResult=true&spellcheck.maxCollations=10&spellcheck.maxCollationTries=10

I get the collation "man united" as a suggestion. "Man" is spelled correctly, but not in this phrase; it must be "manchester united". I want Solr to re-query the collation and only give the suggestion if it returns results. How can I fix this?
Spatial search - SOLR 3.1
Hello, I'm using the spatial Solr plugin from JTeam for Solr 1.4. Now I want to use Solr 3.1 because it contains a lot of bugfixes :) I want to get a distance field back in my results. How can I do that? My URL looks like:

q=testquery&fq={!geofilt pt=52.78556,3.4546 sfield=latlon d=50}

I only get the results within the distance (50 km), and that's great. I'm just missing the distance field for every document in the response.
Re: Spatial search - SOLR 3.1
Hello David, it's easy to calculate it myself, but it would be nice if Solr returned the distance in the response. I can sort on distance and calculate the distance with PHP to show it to the users.
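If I remember the 3.x spatial wiki correctly (treat this as an assumption to verify), one workaround is to make geodist() the main function query, so that the returned score IS the distance:

```
q={!func}geodist()&pt=52.78556,3.4546&sfield=latlon&fq={!geofilt d=50}&fl=*,score&sort=score asc
```

Here pt and sfield are set globally so both geodist() and geofilt pick them up; each document's score field then holds its distance in km.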
Spellcheck: Two dictionaries
Hello, I have 2 fields: what and where. For both fields I want spellchecking. I have 2 dictionaries in my config:

ws what what spellchecker_what
where where spellchecker_where

I can pick a dictionary with spellcheck.dictionary=what in my URL. How can I set up spellchecking for both fields? I see that Solr 3.1 has a spellcheck.<key>. parameter prefix. How can I use that in my URL?
Results with and without whitespace (soccer club and soccerclub)
Hello, my index looks like this:

Soccer club
Football club
etc.

Now I want users to be able to search for both "soccer club" and "soccerclub". "Soccer club" works, but without the whitespace there is no match. How can I fix this? What should my configuration look like? Is there a filter or something?
Re: Results with and without whitespace(soccer club and soccerclub)
Hmm, it's about 10,000 terms. It's possible, but not the best solution I think.
Re: Exact match
Try this: "search term"
Re: how to convert YYYY-MM-DD to YYY-MM-DD hh:mm:ss - DIH
Try this in your query:

TIME_FORMAT(timeDb, '%H:%i') as timefield

http://www.java2s.com/Tutorial/MySQL/0280__Date-Time-Functions/TIMEFORMATtimeformat.htm
Re: Faceting on distance in Solr: how do you generate links that search withing a given range of distance?
Hey, I had the same problem; it's fixed now. But it comes with a new issue: I get results for all companies in London. Now I want the following facets:

10km (20)
20km (34)
40km (55)

40km can be outside of London, but there are only companies in London in my results; the companies outside of London are not counted in my facets. Is this possible?
Re: Results with and without whitespace (soccer club and soccerclub)
Hello, thanks, I think both are good options. I prefer the option with the filter. What does a charfilter look like?
Re: Results with and without whitespace (soccer club and soccerclub)
Thanks for the help so far. I don't think this solves the problem. What if my data looks like this:

soccer club Manchester united

If I search for "soccerclub manchester" or for "soccer club manchester", I want this result back. A copyfield that removes whitespace is not an option. With the charfilter I get something like this:

1. Index time: "soccer club Manchester united" --> "soccerclubManchesterunited" indexed.
2. Search time: "soccer club" OR "soccerclub" --> "soccerclub" searched.

In this situation I still get no result when I search for "soccerclub": the indexed value is "soccerclubManchesterunited". How can I fix it?
Dynamic facet field
Hello, I have some problems with dynamic facets. I have a database with 1 million products and have indexed it with DIH. I have facets that are connected to one category of products. Example:

Category     | Facets
Television   | type (hd, plasma), inches (38, 42), color (black, grey)
Mobile phone | brand (HTC, APPLE), OS (android, ios, bb)

When a user searches for a television I want these facets:

Type: hd (203), plasma (32)
Inches: 42 (39), 38 (213)
Color: black (200), grey (30)

URL: facet.field=type&facet.field=inches&facet.field=color

At the moment I get the titles of the facets for a category from another DB and put them in the URL (I don't want this anymore). I have dynamic fields to fill the index (type_facet, inches_facet, color_facet). I thought maybe something like this is possible:

facet.field=*_facet

so that all fields ending in _facet are faceted?
Re: Results with and without whitespace (soccer club and soccerclub)
Ok, I will do it with synonyms. What does the list look like?

soccerclub, soccer club

The index looks like this:

Manchester united soccerclub
Chelsea soccer club

I want them both in my results when I search for "soccer club" or "soccerclub". How can I configure this in schema.xml?
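A sketch of how that could look in schema.xml (the field type name is hypothetical, and multi-word synonyms expanded at query time have known quirks, so test both directions):

```xml
<!-- Hypothetical sketch: expand synonyms at query time only, so
     "soccerclub" also matches docs indexed as "soccer club" and vice versa.
     synonyms.txt would contain the line:  soccerclub, soccer club -->
<fieldType name="text_syn" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```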
Re: Spellcheck: Two dictionaries
Any ideas on this?
Problem with spellchecking, dont want multiple request to SOLR
Hello, first I will explain my situation. I have 2 fields on my website: What and Where. When a user searches, I want spellcheck on both fields. I now have 2 dictionaries, one for what and one for where. I want to search with one request and spellcheck both fields. Is it possible, and how?
Re: Problem with spellchecking, dont want multiple request to SOLR
Hmm, ok. I configured 2 spellcheckers:

spell_what spell_what true spellchecker_what
spell_where spell_where true spellchecker_where

How can I enable them in my search request handler and search both in one request?
RE: Spellcheck: Two dictionaries
That uber dictionary is not what I want: I also get suggestions from the where in the what. An example:

what              | where
chelsea           | London
Soccerclub Bondon | London

When I type "soccerclub london" I want the suggestion from the what dictionary: did you mean "Soccerclub Bondon"? With the uber dictionary I don't get this suggestion because the term is spelled correctly (based on the where).
How many fields can SOLR handle?
Hello, I have a Solr implementation with 1M products. Every product has some information; say a television has information about pixels and inches, and a computer has information about harddisk, CPU, GPU. When a user searches for a computer I want to show the correct facets. An example, when a user searches for Computer:

CPU: AMD (10), Intel (300)
GPU: Nvidia (20), Ati (290)

Every product type has different facets. I have something like this in my schema, so in Solr I now have a lot of fields: CPU_FACET, GPU_FACET, etc. How many fields can Solr handle? Another question: is it possible to add the FACET fields automatically to my query, like facet.field=*_FACET? Right now I first do a request to a DB to get the FACET titles and add them to the request: facet.field=cpu_FACET,gpu_FACET. I'm afraid that *_FACET is an overkill solution.
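There is no facet.field=*_FACET wildcard as far as I know, but a thin client-side sketch can do the same: pull the field list once (e.g. from the Luke request handler, /admin/luke?numTerms=0&wt=json — endpoint assumed, verify for your version) and build the facet params from it:

```python
# Hypothetical sketch: given the index's field names, build the facet.field
# parameters for every field that follows the *_FACET naming convention.
def facet_params(field_names):
    facets = sorted(f for f in field_names if f.endswith("_FACET"))
    return "&".join("facet.field=" + f for f in facets)

fields = ["id", "title", "CPU_FACET", "GPU_FACET", "price"]
print(facet_params(fields))  # facet.field=CPU_FACET&facet.field=GPU_FACET
```

Caching the field list avoids the per-request DB lookup while keeping the query size bounded to the fields that actually exist.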
Re: [ANNOUNCEMENT] PHP Solr Extension 1.0.1 Stable Has Been Released
Hello, I have some problems with the installation of the new PECL package solr-1.0.1. I run these commands:

pecl uninstall solr-beta   (to uninstall the old version, 0.9.11)
pecl install solr

The install runs but then gives the following error message:

/tmp/tmpKUExET/solr-1.0.1/solr_functions_helpers.c: In function 'solr_json_to_php_native':
/tmp/tmpKUExET/solr-1.0.1/solr_functions_helpers.c:1123: error: too many arguments to function 'php_json_decode'
make: *** [solr_functions_helpers.lo] Error 1
ERROR: `make' failed

I have PHP version 5.2.17. How can I fix this?
Solr monitoring: Newrelic
Hello, I found this tool to monitor Solr queries, caches, etc.: http://newrelic.com/ I have some problems with the installation of it. I get the following errors:

Could not locate a Tomcat, Jetty or JBoss instance in /var/www/sites/royr
Try re-running the install command from /newrelic. If that doesn't work, locate and edit the start script manually.
Generated New Relic configuration file /var/www/sites/royr/newrelic/newrelic.yml
* Install incomplete

Does anybody have experience with New Relic in combination with Solr?
-- View this message in context: http://lucene.472066.n3.nabble.com/Solr-monitoring-Newrelic-tp3042889p3042889.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Solr monitoring: Newrelic
I use Jetty; it's the one that ships with the Solr package. Where can I find the "jetty" folder? Then I can run this command: java -jar newrelic.jar install
-- View this message in context: http://lucene.472066.n3.nabble.com/Solr-monitoring-Newrelic-tp3042889p3042981.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Solr monitoring: Newrelic
Yes, that's the problem: there is no jetty folder. I have tried the example/lib directory; it's not working. There is no jetty war file, only jetty-*.jar files. Same error: could not locate a Jetty instance.
-- View this message in context: http://lucene.472066.n3.nabble.com/Solr-monitoring-Newrelic-tp3042889p3043080.html Sent from the Solr - User mailing list archive at Nabble.com.
WordDelimiter and stemEnglishPossessive doesn't work
Hello, I have a problem with the WordDelimiterFilter. My data looks like this:

mcdonald's#burgerking#Free record shop#h&m

I want to tokenize this on "#". After that it has to split on whitespace; I use the WordDelimiterFilter for that (I can't use two tokenizers). This works, but there is one problem: it removes the apostrophe. My index looks like this:

mcdonald burgerking free record shop h&m

I don't want this, so I set stemEnglishPossessive. The description of this part of the filter reads:

stemEnglishPossessive="1" causes trailing "'s" to be removed for each subword. "Doug's" => "Doug". Default is true ("1"); set to 0 to turn off.

My field looks like this: It looks like stemEnglishPossessive=0 is not working. How can I fix this problem? Another filter? Did I forget something?
-- View this message in context: http://lucene.472066.n3.nabble.com/WordDelimiter-and-stemEnglishPossessive-doesn-t-work-tp3047678p3047678.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Query on Synonyms feature in Solr
Maybe you can try to escape the spaces in the synonyms so they are not tokenized on whitespace:

Private\ schools,NGO\ Schools,Unaided\ schools
-- View this message in context: http://lucene.472066.n3.nabble.com/Query-on-Synonyms-feature-in-Solr-tp3058197p3062392.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: WordDelimiter and stemEnglishPossessive doesn't work
OK, with catenateWords the indexed term will be "mcdonalds", but that's not what I want. I only use the WordDelimiterFilter to split on whitespace. I already use the PatternTokenizerFactory, so I can't use the WhitespaceTokenizer. I want my index to look like this:

dataset: mcdonald's#burgerking#Free record shop#h&m
indexed terms: mcdonald's, burgerking, free, record, shop, h&m

Can I configure the WordDelimiterFilter as a whitespace tokenizer, so it only splits on whitespace and nothing more (no removing of 's, etc.)?
-- View this message in context: http://lucene.472066.n3.nabble.com/WordDelimiter-and-stemEnglishPossessive-doesn-t-work-tp3047678p3062461.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: WordDelimiter and stemEnglishPossessive doesn't work
THANK YOU!! I thought I could only use one character for the pattern. Now I use a regular expression :) I don't need the WordDelimiterFilter anymore; it splits on "#" and whitespace.

dataset: mcdonald's#burgerking#Free record shop#h&m
indexed terms: mcdonald's, burgerking, free, record, shop, h&m

This is exactly how we want it.
-- View this message in context: http://lucene.472066.n3.nabble.com/WordDelimiter-and-stemEnglishPossessive-doesn-t-work-tp3047678p3062984.html Sent from the Solr - User mailing list archive at Nabble.com.
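For reference, the analyzer described in this thread could look roughly like this (the field type name and the lowercase filter are assumptions; the regex splits on "#" as well as on whitespace, leaving the apostrophe in "mcdonald's" untouched):

```xml
<!-- Sketch: PatternTokenizerFactory splitting on '#' or whitespace,
     replacing the WordDelimiterFilter approach from earlier in the thread. -->
<fieldType name="text_hash_split" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.PatternTokenizerFactory" pattern="[#\s]+"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

With the default group of -1, the pattern acts as a delimiter (like String.split), so "mcdonald's#burgerking#Free record shop" yields mcdonald's, burgerking, free, record, shop.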
SnowballPorterFilterFactory and apostrophes
Hello, I use the SnowballPorterFilter (Dutch) to stem the words in my index, like this:

restaurants => restaurant
restauranten => restaurant
apples => apple

Now I see on my Solr analysis page that this happens with mcdonald's:

mcdonald's => mcdonald'

I don't want stemming of words with apostrophes. Is that possible? I could use keepword.txt, but there are thousands of words with apostrophes.
-- View this message in context: http://lucene.472066.n3.nabble.com/SnowballPorterFilterFactory-and-apostrophes-tp3066709p3066709.html Sent from the Solr - User mailing list archive at Nabble.com.
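One possible workaround (not from the thread, and only a sketch): instead of listing every word, add a PatternReplaceFilterFactory after the stemmer to strip the dangling apostrophe the stemmer leaves behind:

```xml
<!-- Untested sketch: clean up "mcdonald'" => "mcdonald" after stemming,
     rather than protecting each apostrophe word individually. -->
<filter class="solr.SnowballPorterFilterFactory" language="Dutch"/>
<filter class="solr.PatternReplaceFilterFactory"
        pattern="'$" replacement="" replace="all"/>
```

This does not prevent the stemming itself; it only normalizes the leftover token, so the same filter chain must be applied at both index and query time for the terms to match.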
Complex situation
Hello, first I will try to explain the situation: I have some companies with opening hours, and some companies have multiple seasons with different opening hours. Some example data:

Companyid  Startdate(d-m)  Enddate(d-m)  Openinghours_end
1          01-01           01-04         17:00
1          01-04           01-08         18:00
1          01-08           31-12         17:30
2          01-01           31-12         20:00
3          01-01           01-06         17:00
3          01-06           31-12         18:00

What I want is some facets on the left side of my page. They have to look like this:

Closing today at:
17:00 (23)
18:00 (2)
20:00 (1)

So I need NOW to know which opening hours (seasons) I need in my facet results. How should my index look? Can anybody help me with how to store this data in the Solr index?
-- View this message in context: http://lucene.472066.n3.nabble.com/Complex-situation-tp3071936p3071936.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Complex situation
Hi, I want all the results, not only the results for the current season. Let's say I search for "supermarket" and get results 1, 2 and 3 in my response (see previous post). Then I want some facets with opening hours on the left. Let's say today is 02/08/2011; then my facets look like this:

18:00 (2)
20:00 (1)

Company 2 is open till 20:00; companies 1 and 3 are open till 18:00. So the opening hours differ per season. I hope you understand what I mean.
-- View this message in context: http://lucene.472066.n3.nabble.com/Complex-situation-tp3071936p3085129.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Complex situation
With this facet.query will I get all the results?

facet.query=startdate:[* TO NOW] AND enddate:[NOW TO *]

I currently get the startdate and enddate from my DB with the DIH. My schema.xml looks like this: When I use the facet.query I only get a count of companies. What I want is a count per opening hour. Maybe I forgot something?
-- View this message in context: http://lucene.472066.n3.nabble.com/Complex-situation-tp3071936p3086455.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Complex situation
Thanks, it works!! Now I want to change the format of NOW in Solr. Is that possible? The date format currently looks like this: yyyy-MM-dd'T'HH:mm:ss'Z'. In my DB the format is dd-MM. How can I fix NOW so I can do something like [* TO NOW] with dd-MM?
-- View this message in context: http://lucene.472066.n3.nabble.com/Complex-situation-tp3071936p3089632.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Complex situation
Yes, the current year. I understand that something like dd-MM-yy isn't possible. I will fix this in my DB. Thanks for your help!
-- View this message in context: http://lucene.472066.n3.nabble.com/Complex-situation-tp3071936p3090247.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: [ANNOUNCEMENT] PHP Solr Extension 1.0.1 Stable Has Been Released
Are you working on some changes to support earlier versions of PHP? -- View this message in context: http://lucene.472066.n3.nabble.com/ANNOUNCEMENT-PHP-Solr-Extension-1-0-1-Stable-Has-Been-Released-tp3024040p3090702.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Complex situation
Hello, I have changed my DB dates to the correct format, like 2011-01-11T00:00:00Z. Now I have the following data:

Manchester Store  2011-01-01T00:00:00Z  2011-31-03T00:00:00Z  18:00
Manchester Store  2011-01-04T00:00:00Z  2011-31-12T00:00:00Z  20:00

The "Manchester Store" has 2 seasons with different closing times (18:00 and 20:00). Now I have 4 fields in Solr:

Companyname: Manchester Store
startdate (multiValued): 2011-01-01T00:00:00Z, 2011-01-04T00:00:00Z
enddate (multiValued): 2011-31-03T00:00:00Z, 2011-31-12T00:00:00Z
closingTime (multiValued): 18:00, 20:00

I want some facets like this:

Open today (2011-23-06):
20:00 (1)

The facet query needs to look at the current date and use that season's closing time. My facet.query looks like this:

facet.query=startdate:[* TO NOW] AND enddate:[NOW TO *] AND closingTime:"18:00"

This returns 1 count, like this: 18:00 (1). But this facet.query also returns 1 result:

facet.query=startdate:[* TO NOW] AND enddate:[NOW TO *] AND closingTime:"20:00"

That result is not correct, because NOW (2011-23-06) the store is not open till 20:00. It looks like there is no link between the season and the closingTime. Can somebody help me? Are the fields in Solr not correct? Thanks, Roy
-- View this message in context: http://lucene.472066.n3.nabble.com/Complex-situation-tp3071936p3098875.html Sent from the Solr - User mailing list archive at Nabble.com.
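One way to restore that link (a sketch, not taken from the thread) is to index one Solr document per company-season instead of multivalued fields, so each startdate/enddate/closingTime triple stays together. Dates in the sketch are normalized to valid ISO form:

```xml
<!-- Sketch: one document per (company, season). A facet.query combining
     startdate, enddate and closingTime then matches within one season only. -->
<add>
  <doc>
    <field name="companyname">Manchester Store</field>
    <field name="startdate">2011-01-01T00:00:00Z</field>
    <field name="enddate">2011-03-31T00:00:00Z</field>
    <field name="closingTime">18:00</field>
  </doc>
  <doc>
    <field name="companyname">Manchester Store</field>
    <field name="startdate">2011-04-01T00:00:00Z</field>
    <field name="enddate">2011-12-31T00:00:00Z</field>
    <field name="closingTime">20:00</field>
  </doc>
</add>
```

The trade-off is that one company now appears as several documents, so result grouping or deduplication on companyname may be needed on the search side.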
Re: Complex situation
Hello Lee, I thought maybe this is a solution: I can index the correct opening hours for the next day every night. So tonight (00:01) I can index the opening hours for 2011-24-06. The query in my DIH could look like this:

select * from OPENINGHOURS o where o.startdate <= NOW() AND o.enddate >= NOW() AND o.companyid = '${OTHER_ENTITY.companyid}'

With this query I only store the opening hours for today, so I have only one field:

openinghours: 18:00

Then I can facet easily on opening hours (facet.field=openinghours). I don't know if I can update it every night without problems? Can I use the delta-import?
-- View this message in context: http://lucene.472066.n3.nabble.com/Complex-situation-tp3071936p3099468.html Sent from the Solr - User mailing list archive at Nabble.com.
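The nightly import could be wired up in data-config.xml roughly like this (the outer entity, table and column names are assumptions; note the XML-escaped comparison operators inside the attribute):

```xml
<!-- Sketch: nested DIH entities; the inner query picks only the season
     that is valid today, yielding a single openinghours value per company. -->
<entity name="company" query="select companyid, name from COMPANY">
  <entity name="openinghours"
          query="select closingtime from OPENINGHOURS o
                 where o.startdate &lt;= NOW() and o.enddate &gt;= NOW()
                   and o.companyid = '${company.companyid}'">
    <field column="closingtime" name="openinghours"/>
  </entity>
</entity>
```

A scheduled full-import (or delta-import, if the tables track modification times) run shortly after midnight would then refresh the openinghours field for the new day.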
Query may only contain [a-z][0-9]
Hello, is it possible to configure Solr so that only numbers and letters ([a-z][0-9]) are accepted? When a user enters a term like "+" or "-" I get Solr errors. How can I exclude these characters?
-- View this message in context: http://lucene.472066.n3.nabble.com/Query-may-only-contain-a-z-0-9-tp3103553p3103553.html Sent from the Solr - User mailing list archive at Nabble.com.
Re: Query may only contain [a-z][0-9]
Yes, I use the dismax handler, but I will fix this in my application layer. Thanks.
-- View this message in context: http://lucene.472066.n3.nabble.com/Query-may-only-contain-a-z-0-9-tp3103553p3103945.html Sent from the Solr - User mailing list archive at Nabble.com.
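A minimal application-layer sketch of the whitelist approach discussed here (the function name and the exact character set are assumptions; escaping the special characters instead of stripping them is another option):

```python
import re

def sanitize_query(user_input: str) -> str:
    """Keep only letters, digits and spaces before passing the term
    to Solr, so characters like '+' or '-' can't trigger
    query-parser errors."""
    return re.sub(r"[^a-zA-Z0-9 ]+", "", user_input).strip()

print(sanitize_query("c++ -solr"))  # -> "c solr"
```

This runs before the term is sent to Solr, so the dismax handler never sees the problematic characters.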