Re: Solr document missing or not getting indexed though we get 200 OK status from server
Please check the document's unique key (id). All keys should be unique; otherwise, documents with the same id will simply replace one another.

On 04-Sep-2016 12:13 PM, "Ganesh M" wrote:
> Hi,
> We keep sending documents to Solr from our app server, a single document per request, but roughly 10 requests hit the Solr cloud in parallel every second.
>
> We can see our POST (update) requests reaching Solr 5.4 in the localhost_access logs with a 200 OK response, and our app servers also get HTTP 200 OK back for every request we fire at the SolrCloud.
>
> But a few documents are not getting indexed. Out of 2000 documents sent, about 10 go missing. Even though there is no error, those few documents are missed.
>
> We use autoSoftCommit of 2 seconds and autoHardCommit of 30 seconds.
>
> Why are those 10 documents not getting indexed, and why is no error thrown back if the server is not able to index them?
>
> Regards,
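For illustration, a minimal SolrJ sketch of the overwrite behavior described above, assuming a local Solr at http://localhost:8983 and a hypothetical collection named "mycollection" whose uniqueKey is "id" (the constructor form matches the 5.x SolrJ API the thread is using):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class UniqueKeyOverwrite {
    public static void main(String[] args) throws Exception {
        // Hypothetical URL and collection name
        SolrClient client = new HttpSolrClient("http://localhost:8983/solr/mycollection");

        SolrInputDocument first = new SolrInputDocument();
        first.addField("id", "DOC-1");
        first.addField("title_s", "first version");

        SolrInputDocument second = new SolrInputDocument();
        second.addField("id", "DOC-1");              // same uniqueKey as the first doc
        second.addField("title_s", "second version");

        client.add(first);
        client.add(second);   // silently replaces the first document; Solr still returns 200 OK
        client.commit();

        // q=id:DOC-1 now returns numFound=1, not 2: the earlier version was overwritten,
        // which looks like a "missing" document if the ids were not actually unique.
        client.close();
    }
}

Counting distinct ids in a batch before sending it, or logging the ids on the app-server side, is usually enough to confirm whether overwrites rather than indexing failures explain the gap.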
Re: polygon self-crossing
Yes, decompose it into a multipolygon.

On 05-Jun-2017 6:11 AM, "rgamarra" wrote:
> Hi there all,
>
> I'm using the RPT field type to perform geospatial polygonal searches (e.g. https://cwiki.apache.org/confluence/display/solr/Spatial+Search).
>
> How do you recommend handling self-crossing polygons (a figure-8 or bow-tie, for example)?
>
> Decompose them into a multipolygon? Set the validation rule to "none" (and in which cases does the unexpected behavior arise)? Or configure different validation rules at the schema level?
>
> According to https://lucene.apache.org/core/6_2_0/core/org/apache/lucene/geo/Polygon.html:
>
> 1. The polygon must not be self-crossing, otherwise it may result in unexpected behavior.
>
> The expected behavior of the search, for the example given, is to find points in each of the lobes of the figure-8.
>
> Best regards
>
> --
> Rodolfo Federico Gamarra
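A hedged sketch of the multipolygon approach with SolrJ, assuming an RPT field named "geo" configured with JTS so WKT polygons are accepted; the field name and coordinates are made up. The bow-tie whose lobes meet at (1,1) is split into two simple triangles:

import org.apache.solr.client.solrj.SolrQuery;

public class BowTieQuery {
    public static void main(String[] args) {
        // The self-crossing ring (0 0, 2 0, 0 2, 2 2, 0 0) crosses itself at (1 1);
        // the same area expressed as two simple triangles in a MULTIPOLYGON:
        SolrQuery q = new SolrQuery("*:*");
        q.addFilterQuery("{!field f=geo}Intersects(MULTIPOLYGON("
            + "((0 0, 2 0, 1 1, 0 0)), "
            + "((0 2, 1 1, 2 2, 0 2))))");
        // Points in either lobe now match, without relying on the undefined
        // behavior of a self-crossing ring.
        System.out.println(q);
    }
}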
JOIN query
Hi,

Can we use a join query across more than 2 cores in Solr? If yes, please provide a reference or an example.

Thanks,
Nitin
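There is no single join spanning three cores, but {!join} queries can be nested so that each hop goes through one fromIndex. A hedged SolrJ sketch with made-up core names (customers, orders, products) and field names; it assumes plain non-cloud cores, since cross-core joins have extra constraints in SolrCloud (the "from" side generally must be single-shard and co-located):

import org.apache.solr.client.solrj.SolrQuery;

public class CrossCoreJoin {
    public static void main(String[] args) {
        // The query runs against the "customers" core; the outer join pulls customer ids
        // from "orders", and the inner join restricts those orders to ones whose
        // product (in the "products" core) is in the books category.
        SolrQuery q = new SolrQuery("{!join fromIndex=orders from=customer_id to=id v=$oq}");
        q.set("oq", "{!join fromIndex=products from=product_id to=product_id v=$pq}");
        q.set("pq", "category:books");
        System.out.println(q);
    }
}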
Re: Solr ignores configuration file
One workaround is to add +2 hours while indexing.

On Mon 8 Apr, 2019, 4:16 PM, wrote:
>
> Dear recipients,
>
> Can you help me with the following issue?
>
> I need to present my timestamps in Solr in UTC+2 instead of UTC. How can I do that?
>
> I've created the following question on StackOverflow:
>
> https://stackoverflow.com/questions/55530142/solr-7-6-0-ignores-configuration-file-bin-solr-in-sh?noredirect=1#comment97766221_55530142
>
> Br, Jaana Miettinen
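Solr always stores and returns date fields in UTC, so the workaround above amounts to shifting the value before it is indexed. A hedged sketch of that shift with SolrJ (the field and document names are made up); the stored value is still nominally UTC, just offset by two hours:

import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.Date;
import org.apache.solr.common.SolrInputDocument;

public class ShiftTimestamp {
    public static void main(String[] args) {
        // Original UTC timestamp coming from the source system
        Instant original = Instant.parse("2019-04-08T10:00:00Z");

        // Shift by +2 hours so the indexed value reads like UTC+2 wall-clock time
        Instant shifted = original.plus(2, ChronoUnit.HOURS);

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "doc-1");
        doc.addField("event_time_dt", Date.from(shifted)); // stored as 2019-04-08T12:00:00Z
        System.out.println(doc);
    }
}

The cleaner alternative is usually to keep UTC in the index and convert to UTC+2 in the application that displays the data.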
Re: Sql entity processor sortedmapbackedcache out of memory issue
Does caching work with other entity processors, like SolrEntityProcessor?

On Fri 12 Apr, 2019, 3:10 PM Srinivas Kashyap, wrote:
> Hi Shawn/Mikhail Khludnev,
>
> I was going through Jira https://issues.apache.org/jira/browse/SOLR-4799 and I see that I can do my intended activity by specifying zipper.
>
> I tried doing it, however I'm getting the error below:
>
> Caused by: org.apache.solr.handler.dataimport.DataImportHandlerException: java.lang.IllegalArgumentException: expect increasing foreign keys for Relation CHILD_KEY=PARENT.PARENT_KEY got: QA-HQ008880,HQ011782
>         at org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:62)
>         at org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:246)
>         at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
>         at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:514)
>         at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
>         ... 5 more
> Caused by: java.lang.IllegalArgumentException: expect increasing foreign keys for Relation CHILD_KEY=PARENT.PARENT_KEY got: QA-HQ008880,HQ011782
>         at org.apache.solr.handler.dataimport.Zipper.supplyNextChild(Zipper.java:70)
>         at org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:126)
>         at org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:74)
>         at org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:243)
>
> Below is my DIH config:
>
> <entity name="PARENT"
>         query="SELECT PQRS,PARENT_KEY,L,M,N,O FROM DEF order by PARENT_KEY DESC">
>   <field ... />
>   <field ... />
>   <field ... />
>   <entity name="childentity1" pk="PQRS"
>           query="SELECT A,B,C,D,E,F,CHILD_KEY,MODIFY_TS FROM ABC ORDER BY CHILD_KEY DESC"
>           processor="SqlEntityProcessor" join="zipper"
>           where="CHILD_KEY=PARENT.PARENT_KEY">
>     <field name="A" column="A" />
>     <field name="B" column="B" />
>   </entity>
> </entity>
>
> Thanks and Regards,
> Srinivas Kashyap
>
> -Original Message-
> From: Shawn Heisey
> Sent: 09 April 2019 01:27 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Sql entity processor sortedmapbackedcache out of memory issue
>
> On 4/8/2019 11:47 PM, Srinivas Kashyap wrote:
> > I'm using DIH to index the data, and the structure of the DIH for the Solr core is like below:
> >
> > 16 child entities
> >
> > During indexing, since the number of requests being made to the database was high (17 queries to process one document), it was using up most of the database connections, thereby blocking our web application.
>
> If you have 17 entities, then one document will indeed take 17 queries. That's the nature of multiple DIH entities.
>
> > To tackle it, we implemented SortedMapBackedCache with the cacheImpl parameter to reduce the number of requests to the database.
>
> When you use SortedMapBackedCache on an entity, you are asking Solr to store the results of the entire query in memory, even if you don't need all of the results. If the database has a lot of rows, that's going to take a lot of memory.
>
> In your excerpt from the config, your inner entity doesn't have a WHERE clause, which means that it's going to retrieve all of the rows of the ABC table for *EVERY* single entry in the DEF table. That's going to be exceptionally slow. Normally the SQL query on inner entities has some kind of WHERE clause that limits the results to rows that match the entry from the outer entity.
>
> You may need to write a custom indexing program that runs separately from Solr, possibly on an entirely different server. That might be a lot more efficient than DIH.
>
> Thanks,
> Shawn
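For context on the "expect increasing foreign keys" error: per SOLR-4799, the zipper join is a merge join over two result sets that are each read once, in order, so both sides have to be sorted ascending on the join key, and ORDER BY ... DESC (as in the config above) breaks that assumption. A rough Java sketch of the merge-join idea, with made-up keys; this is not the actual Zipper code:

import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class ZipperSketch {
    // Walks parent and child keys in a single pass, which only works when
    // both lists are sorted ascending; a child key that goes backwards is
    // rejected, much like the IllegalArgumentException in the trace above.
    static void zip(List<String> parentKeys, List<String> childKeys) {
        Iterator<String> children = childKeys.iterator();
        String child = children.hasNext() ? children.next() : null;

        for (String parent : parentKeys) {
            while (child != null && child.compareTo(parent) < 0) {
                String previous = child;
                child = children.hasNext() ? children.next() : null;
                if (child != null && previous.compareTo(child) > 0) {
                    throw new IllegalArgumentException(
                        "expect increasing foreign keys, got: " + previous + "," + child);
                }
            }
            if (child != null && child.equals(parent)) {
                System.out.println("parent " + parent + " joined with a child row");
            }
        }
    }

    public static void main(String[] args) {
        zip(Arrays.asList("A1", "B2", "C3"), Arrays.asList("A1", "C3")); // both ascending: fine
        zip(Arrays.asList("A1", "B2", "C3"), Arrays.asList("B2", "A1")); // child keys descend: throws
    }
}

In DIH terms, the usual fix is to sort both the parent and the child queries ascending on the join key (PARENT_KEY and CHILD_KEY here) rather than DESC.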
Re: Learning to Rank (LTR) with grouping
Can anybody please share in which Solr version LTR and grouping work fine together?

On Wed, Apr 18, 2018 at 3:47 PM, Diego Ceccarelli (BLOOMBERG/ LONDON) <dceccarel...@bloomberg.net> wrote:
> I just updated the PR to upstream. I still have to fix some things in distributed mode, but the unit tests in non-distributed mode work.
>
> Hope this helps,
> Diego
>
> From: solr-user@lucene.apache.org At: 04/15/18 03:37:54 To: solr-user@lucene.apache.org
> Subject: Re: Learning to Rank (LTR) with grouping
>
> People sometimes fill in the Fix/Version field when they're creating the JIRA; since anyone can open a JIRA, it's hard to control. I took that out just now.
>
> Basically, if the "Resolution" field doesn't indicate it's fixed, you should assume that it hasn't been addressed.
>
> Patches welcome.
>
> Best,
> Erick
>
> On Tue, Apr 3, 2018 at 9:11 AM, ilayaraja wrote:
> > Thanks Roopa.
> >
> > I was expecting that the issue had been fixed in Solr 7.0, as per https://issues.apache.org/jira/browse/SOLR-8776.
> >
> > Let me see why it is still not working on solr-ltr-7.2.1.
> >
> > -
> > --Ilay
> > --
> > Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
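For reference, this is roughly the kind of request the thread is about, an LTR rerank combined with result grouping, sketched with SolrJ; the model name and field names are hypothetical, and whether the combination behaves correctly depends on the Solr version and the patch discussed above:

import org.apache.solr.client.solrj.SolrQuery;

public class LtrWithGrouping {
    public static void main(String[] args) {
        SolrQuery q = new SolrQuery("ipod");
        // Rerank the top 100 results with a (hypothetical) LTR model
        q.set("rq", "{!ltr model=myModel reRankDocs=100 efi.user_query=ipod}");
        // ...while also grouping the results on a field
        q.set("group", "true");
        q.set("group.field", "product_type_s");
        System.out.println(q);
    }
}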
Solr 7.2.1 Master-slave replication Issue
Hi,

We are facing an issue with Solr 7.2.1 master-slave replication. Replication itself works fine, but if I disable replication from the master, the slaves show no data (numFound=0). The slave is not serving the data it had before replication was disabled. I suspect the index generation is getting updated on the slave, which was not the case in the previous Solr version. Please advise.

Thanks,
Nitin
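One hedged way to narrow this down is to compare what the ReplicationHandler reports on the master and the slave before and after disabling replication. The sketch below uses SolrJ's generic request against the /replication handler, with made-up host and core names:

import java.util.Collections;

import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.client.solrj.response.SimpleSolrResponse;
import org.apache.solr.common.params.MapSolrParams;

public class ReplicationCheck {
    static void printIndexVersion(String coreBaseUrl) throws Exception {
        try (HttpSolrClient client = new HttpSolrClient.Builder(coreBaseUrl).build()) {
            GenericSolrRequest req = new GenericSolrRequest(
                    SolrRequest.METHOD.GET, "/replication",
                    new MapSolrParams(Collections.singletonMap("command", "indexversion")));
            SimpleSolrResponse rsp = req.process(client);
            // The slave's indexversion/generation should not change merely because
            // replication was disabled on the master.
            System.out.println(coreBaseUrl + " -> " + rsp.getResponse());
        }
    }

    public static void main(String[] args) throws Exception {
        printIndexVersion("http://master-host:8983/solr/mycore");
        printIndexVersion("http://slave-host:8983/solr/mycore");
    }
}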