RE: Issue with scoreNodes stream expression

2017-09-21 Thread Aurélien MAZOYER
Hi,

Thank you for your advice. It helped me notice that the exception seems to be 
thrown when no data is gathered by the gatherNodes expression (not a very 
explicit error message 😊).
I modified the expression and it works well now.

Thank you,

Aurélien 

-Original Message-
From: Joel Bernstein [mailto:joels...@gmail.com] 
Sent: Wednesday, September 20, 2017 04:11
To: solr-user@lucene.apache.org
Subject: Re: Issue with scoreNodes stream expression

Have you tried running a very simple expression first? For example, does this 
run:

random(gettingstarted, q="*:*", fl="id", rows="200")



Joel Bernstein
http://joelsolr.blogspot.com/

On Tue, Sep 19, 2017 at 4:56 PM, Aurélien MAZOYER < 
aurelien.mazo...@francelabs.com> wrote:

> Hi,
>
>
>
> I wanted to try the new scoreNodes stream expression that is used to 
> make
> recommendations:
>
> https://cwiki.apache.org/confluence/display/solr/Graph+Traversal#GraphTraversal-UsingthescoreNodesFunctiontoMakeaRecommendation
>
> but encountered an issue with it.
>
>
>
> The following steps can easily reproduce the problem:
>
> I started Solr (6.6.1) in cloud mode:
>
> solr -e cloud -noprompt
>
> then ran the following command in exampledocs to index the sample data:
>
> java -Dc=gettingstarted -jar post.jar *.xml
>
> and finally pasted the following expression into the stream tab:
>
> scoreNodes(top(n=25,
>            sort="count(*) desc",
>            nodes(gettingstarted,
>                  random(gettingstarted, q="*:*", fl="id", rows="200"),
>                  walk="id->id",
>                  gather="id",
>                  count(*))))
>
> (yes, I know that my stream expression does nothing useful :-P).
>
> Anyway, I got the following exception when I ran the query:
>
> "EXCEPTION": "org.apache.solr.client.solrj.SolrServerException: No 
> collection param specified on request and no default collection has 
> been set.",
>
> Any idea what I did wrong?
>
>
>
> Thank you,
>
>
>
> Regards,
>
>
>
> Aurélien
>
>
>
>
>
>
>
>



Re: Rescoring from 0 - full

2017-09-21 Thread Emir Arnautović
Hi Dariusz,
You could use fq for filtering (you can disable caching to avoid polluting the 
filter cache) and q=*:*. That way you'll get score=1 for all docs and can 
rerank. The issue with this approach is that you rerank the top N, and without 
a score they wouldn't be ordered, so it is a no-go.
What you could do (I did not try it) is divide by the score when rescoring (not 
sure if you can access the calculated score, but you could recalculate it) to 
eliminate it.
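For illustration, the first part of that as a request (the field and collection 
names are made up): every matching doc gets score=1, and the rerank query then 
supplies the real ordering:

q=*:*&fq={!cache=false}category:books&rq={!rerank reRankQuery=$rqq reRankDocs=100 reRankWeight=1}&rqq=title:solr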

HTH,
Emir

> On 20 Sep 2017, at 21:38, Dariusz Wojtas  wrote:
> 
> Hi,
> When I use boosting functionality, it is always about adding to or
> multiplying the score calculated in the 'q' param.
> I may use function queries inside 'q', but this may hit performance when
> calling multiple nested functions.
> I thought that 'rerank' could help, but it is still about changing the
> original score, not a full recalculation.
> 
> How can I take full control of the score in rerank? Is it possible?
> 
> Best regards,
> Dariusz Wojtas



Re: Seeing very low ingestion performance for a single non-cloud Solr core

2017-09-21 Thread Emir Arnautović
Hi,
What are your commit configs? Maybe you are committing too frequently. 

Thanks,
Emir

> On 21 Sep 2017, at 06:19, saiks  wrote:
> 
> Hi,
> 
> Environment:
> - Solr is running in non-cloud mode on 6.4.2, Sun Java8, Linux
> 4.4.0-31-generic x86_64
> - Ingesting into a single core
> - SoftCommit = 5 seconds, HardCommit = 10 seconds
> - System has 16 Cpus and 32 Gb of memory (Solr is given 20 Gb of JVM heap)
> - text = StandardTokenizer, id = solr.StrField/docValues, hostname =
> solr.StrField/docValues, app = solr.StrField/docValues, epoch =
> solr.TrieLongField/docValues
> 
> I am using JMeter to ingest into the Solr core using UpdateRequestHandler
> ("/update/json"), sending a batch of 1000 messages (same message) in a
> single JSON array.
> 
> Sample message
> [{"text":"May 11 10:18:22 scrooge Web-Requests: May 11 10:18:22
> @IunAIir17k-- EVENT_WR-Y-attack-600 SG_child[823]: [event.error]
> Possible attack - 5 blocked requests within 120 seconds",
> "id":"id1",
> "hostname": "xx.com",
> "app": "",
> "epoch": 1483667347941
> },
> ]
> 
> Jmeter is configured to run 10 threads in parallel repeating the request
> 1000 times, which should ingest 10,000,000 messages in total.
> Jmeter post url:
> "/solr/mycore/update/json?overwrite=false&wt=json&commit=false"
> 
> Jmeter summary:
> summary =   5000 in 00:03:07 =   26.7/s Avg:   370 Min:27 Max:  1734
> Err: 0 (0.00%)
> 
> I am only able to ingest 26000 messages per second, looking at system
> resources only one or two cpus are at 25-30% and the rest are sitting idle
> and also Solr heap is flat at 3Gb with no iowait on the devices.
> Increasing parallelism in Jmeter to ingest using 20 threads did not increase
> ingested messages per second, but increased the latency by 2x for each
> request.
> 
> I don't understand why Solr is not able to use all the CPUs on the host if I
> increase JMeter parallelism from 10 -> 20 -> 40. What can I do to achieve a
> performance gain and make Solr utilize system resources to their maximum?
> 
> Please help.
> 
> Thank you
> 
> 
> 
> 
> 
> 
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html



How to build solr

2017-09-21 Thread srini sampath
Hi,
How do I build and compile Solr on my local machine? It seems the
https://wiki.apache.org/solr/HowToCompileSolr page has become obsolete.
Thanks in advance.


Re: How to build solr

2017-09-21 Thread Aman Tandon
Hi Srini,

Kindly refer to the README.md at this GitHub link; this should
work.
https://github.com/apache/lucene-solr/blob/master/README.md
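In short, the README's route boils down to something like this (assuming a JDK
and Apache Ant are already installed):

git clone https://github.com/apache/lucene-solr.git
cd lucene-solr
ant ivy-bootstrap   # one-time: installs Apache Ivy into ~/.ant/lib
cd solr
ant server          # compiles Solr and assembles a runnable server
bin/solr start -f   # start it in the foreground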

With regards,
Aman Tandon


On Sep 21, 2017 1:53 PM, "srini sampath" 
wrote:

> Hi,
> How do I build and compile Solr on my local machine? It seems the
> https://wiki.apache.org/solr/HowToCompileSolr page has become obsolete.
> Thanks in advance
>


Re: Is there a way to delete multiple documents using wildcard?

2017-09-21 Thread balmydrizzle
Doesn't work either. A wildcard query can't be used in a delete, at least in
old Solr 3.x.



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


[bulk]: Re: Dates and DataImportHandler

2017-09-21 Thread Mannott, Birgit
As far as I understood, you can use the locale so that DIH saves the last index 
time for the given time zone and not for UTC. So if you set the locale 
according to the timezone of your DB, you don't need to convert dates for 
comparison.

But for me it's not working, because every time I include settings for the 
propertyWriter (the concrete data doesn't matter - I tried a lot), the null 
pointer exception appears. So I had to find a workaround: I now convert the 
time saved by the DIH from UTC to the timezone of my DB when comparing for 
delta imports. But it's an ugly workaround I don't like. 

So, if anyone has an idea why this NPE occurs it would be great. Do I perhaps 
have to add something to solrconfig.xml?

Thanks,
Birgit



-Original Message-
From: Jamie Jackson [mailto:jamieja...@gmail.com] 
Sent: Tuesday, September 19, 2017 6:54 PM
To: solr-user@lucene.apache.org
Subject: [bulk]: Re: [bulk]: Dates and DataImportHandler

FWIW, I know mine worked, so maybe try:

<propertyWriter dateFormat="yyyy-MM-dd HH:mm:ss" type="SimplePropertiesWriter" />

I can't conceive of what the locale would possibly do when a dateFormat is 
specified, so I omitted the attribute. (Maybe one can specify dateFormat *or* 
locale--it seems like specifying both would cause a clash.) For what it's 
worth, the format you're trying to write seems identical to the default*, so 
I'm not sure what benefit you're getting from that propertyWriter.

*It's identical to *my* default, anyway. Maybe the default changes based on 
one's system configuration, I don't know. This stuff isn't very well documented.

On Tue, Sep 19, 2017 at 7:22 AM, Mannott, Birgit 
wrote:

> Hi,
>
> I have a similar problem. I try to change the timezone for the 
> last_index_time by setting
>
> <propertyWriter dateFormat="yyyy-MM-dd HH:mm:ss" type="SimplePropertiesWriter" locale="en_US" />
>
> in the  section of my data-config.xml file.
>
> But when doing this I always get a NullPointerException on Delta Import:
>
> 2017-09-15 14:04:00.825 INFO  (Thread-2938) [   x:mex_prd_dev1100-ap]
> o.a.s.h.d.DataImporter Starting Delta Import
> 2017-09-15 14:04:00.827 ERROR (Thread-2938) [   x:mex_prd_dev1100-ap]
> o.a.s.h.d.DataImporter Delta Import Failed
> org.apache.solr.handler.dataimport.DataImportHandlerException: Unable 
> to PropertyWriter implementation:SimplePropertiesWriter
> at org.apache.solr.handler.dataimport.DataImporter.
> createPropertyWriter(DataImporter.java:330)
> at org.apache.solr.handler.dataimport.DataImporter.
> doDeltaImport(DataImporter.java:439)
> at org.apache.solr.handler.dataimport.DataImporter.
> runCmd(DataImporter.java:476)
> at org.apache.solr.handler.dataimport.DataImporter.
> lambda$runAsync$0(DataImporter.java:457)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.text.SimpleDateFormat.(SimpleDateFormat.java:
> 598)
> at org.apache.solr.handler.dataimport.
> SimplePropertiesWriter.init(SimplePropertiesWriter.java:100)
> at org.apache.solr.handler.dataimport.DataImporter.
> createPropertyWriter(DataImporter.java:328)
> ... 4 more
>
> Has anyone an idea what is wrong or missing?
>
> Thanks,
> Birgit
>
>
>
> -Original Message-
> From: Jamie Jackson [mailto:jamieja...@gmail.com]
> Sent: Tuesday, September 19, 2017 3:42 AM
> To: solr-user@lucene.apache.org
> Subject: [bulk]: Dates and DataImportHandler
>
> Hi folks,
>
> My DB server is on America/Chicago time. Solr (on Docker) is running 
> on UTC. Dates coming from my (MariaDB) data source seem to get 
> translated properly into the Solr index without me doing anything special.
>
> However when doing delta imports using last_index_time ( 
> http://wiki.apache.org/solr/DataImportHandlerDeltaQueryViaFullImport 
> ), I can't seem to get the date, which Solr provides, to be understood 
> by the DB as being UTC (and translated back, accordingly). In other 
> words, the DB thinks the Solr UTC date is local, so it thinks the date 
> is ahead by six hours.
>
> '${dataimporter.request.clean}' != 'false'
>
> or dt > '${dataimporter.last_index_time}'
>
> I came up with this workaround, which seems to work:
>
> '${dataimporter.request.clean}' != 'false'
>
> /* ${user.timezone} is UTC, and the 
> ${custom.dataimporter.datasource.tz}
> property is set to America/Chicago */
>
> or dt > CONVERT_TZ('${dataimporter.last_index_time}','${user.timezone}','${custom.dataimporter.datasource.tz}')
>
> However, isn't there a way for this translation to happen more naturally?
>
> I thought maybe I could do something like this:
>
> <propertyWriter
>     dateFormat="yyyy-MM-dd HH:mm:ssZ"
>     type="SimplePropertiesWriter"
> />
>
> The above did set the property as expected (with a trailing `+`), 
> but that didn't seem to help the DB understand/translate the date.
>
> Thanks,
> Jamie
>


Replication Handler failing

2017-09-21 Thread Rubi Hali
Hi Team

We are getting continuous replication errors on our slaves. The stack trace
is the following:

Index fetch failed :org.apache.solr.common.SolrException:
openNewSearcher called on closed core
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1516)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1753)
at 
org.apache.solr.handler.IndexFetcher.openNewSearcherAndUpdateCommitPoint(IndexFetcher.java:734)
at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:511)
at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:251)
at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:388)
at 
org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$62(ReplicationHandler.java:1089)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(


Replication on the slaves is configured with a poll interval of 60 secs.


There are hardly more than 6k docs and we don't have frequent queries either,
but memory consumption is still going very high.


Please help.


Regards

Rubi Hali


Re: Rescoring from 0 - full

2017-09-21 Thread Diego Ceccarelli (BLOOMBERG/ LONDON)
Hi Dariusz,
If you use *:* you'll rerank only the top N random documents; as Emir said, 
that will probably not produce interesting results. 
If you want to replace the original score, you can take a look at the learning 
to rank module [1], which would allow you to assign a 
new score to the top N documents returned by your query and then reorder them 
based on that (ignoring the original score, if you want).

Cheers,
Diego  

[1] https://cwiki.apache.org/confluence/display/solr/Learning+To+Rank

From: solr-user@lucene.apache.org At: 09/21/17 08:49:13
To: solr-user@lucene.apache.org
Subject: Re: Rescoring from 0 - full

Hi Dariusz,
You could use fq for filtering (you can disable caching to avoid polluting the 
filter cache) and q=*:*. That way you'll get score=1 for all docs and can 
rerank. The issue with this approach is that you rerank the top N, and without 
a score they wouldn't be ordered, so it is a no-go.
What you could do (I did not try it) is divide by the score when rescoring (not 
sure if you can access the calculated score, but you could recalculate it) to 
eliminate it.

HTH,
Emir

> On 20 Sep 2017, at 21:38, Dariusz Wojtas  wrote:
> 
> Hi,
> When I use boosting functionality, it is always about adding to or
> multiplying the score calculated in the 'q' param.
> I may use function queries inside 'q', but this may hit performance when
> calling multiple nested functions.
> I thought that 'rerank' could help, but it is still about changing the
> original score, not a full recalculation.
> 
> How can I take full control of the score in rerank? Is it possible?
> 
> Best regards,
> Dariusz Wojtas




RE: [bulk]: Re: Dates and DataImportHandler

2017-09-21 Thread Mannott, Birgit
Forget it. I mixed up the timezone with the locale. I thought I could set the 
timezone for the DIH last index time, but that's not possible.

-Original Message-
From: Mannott, Birgit [mailto:b.mann...@klopotek.de] 
Sent: Thursday, September 21, 2017 1:09 PM
To: solr-user@lucene.apache.org
Subject: [bulk]: [bulk]: Re: Dates and DataImportHandler

As far as I understood, you can use the locale so that DIH saves the last index 
time for the given time zone and not for UTC. So if you set the locale 
according to the timezone of your DB, you don't need to convert dates for 
comparison.

But for me it's not working, because every time I include settings for the 
propertyWriter (the concrete data doesn't matter - I tried a lot), the null 
pointer exception appears. So I had to find a workaround: I now convert the 
time saved by the DIH from UTC to the timezone of my DB when comparing for 
delta imports. But it's an ugly workaround I don't like. 

So, if anyone has an idea why this NPE occurs it would be great. Do I perhaps 
have to add something to solrconfig.xml?

Thanks,
Birgit



-Original Message-
From: Jamie Jackson [mailto:jamieja...@gmail.com]
Sent: Tuesday, September 19, 2017 6:54 PM
To: solr-user@lucene.apache.org
Subject: [bulk]: Re: [bulk]: Dates and DataImportHandler

FWIW, I know mine worked, so maybe try:

<propertyWriter dateFormat="yyyy-MM-dd HH:mm:ss" type="SimplePropertiesWriter" />

I can't conceive of what the locale would possibly do when a dateFormat is 
specified, so I omitted the attribute. (Maybe one can specify dateFormat *or* 
locale--it seems like specifying both would cause a clash.) For what it's 
worth, the format you're trying to write seems identical to the default*, so 
I'm not sure what benefit you're getting from that propertyWriter.

*It's identical to *my* default, anyway. Maybe the default changes based on 
one's system configuration, I don't know. This stuff isn't very well documented.

On Tue, Sep 19, 2017 at 7:22 AM, Mannott, Birgit 
wrote:

> Hi,
>
> I have a similar problem. I try to change the timezone for the 
> last_index_time by setting
>
> <propertyWriter dateFormat="yyyy-MM-dd HH:mm:ss" type="SimplePropertiesWriter" locale="en_US" />
>
> in the  section of my data-config.xml file.
>
> But when doing this I always get a NullPointerException on Delta Import:
>
> 2017-09-15 14:04:00.825 INFO  (Thread-2938) [   x:mex_prd_dev1100-ap]
> o.a.s.h.d.DataImporter Starting Delta Import
> 2017-09-15 14:04:00.827 ERROR (Thread-2938) [   x:mex_prd_dev1100-ap]
> o.a.s.h.d.DataImporter Delta Import Failed
> org.apache.solr.handler.dataimport.DataImportHandlerException: Unable 
> to PropertyWriter implementation:SimplePropertiesWriter
> at org.apache.solr.handler.dataimport.DataImporter.
> createPropertyWriter(DataImporter.java:330)
> at org.apache.solr.handler.dataimport.DataImporter.
> doDeltaImport(DataImporter.java:439)
> at org.apache.solr.handler.dataimport.DataImporter.
> runCmd(DataImporter.java:476)
> at org.apache.solr.handler.dataimport.DataImporter.
> lambda$runAsync$0(DataImporter.java:457)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at java.text.SimpleDateFormat.(SimpleDateFormat.java:
> 598)
> at org.apache.solr.handler.dataimport.
> SimplePropertiesWriter.init(SimplePropertiesWriter.java:100)
> at org.apache.solr.handler.dataimport.DataImporter.
> createPropertyWriter(DataImporter.java:328)
> ... 4 more
>
> Has anyone an idea what is wrong or missing?
>
> Thanks,
> Birgit
>
>
>
> -Original Message-
> From: Jamie Jackson [mailto:jamieja...@gmail.com]
> Sent: Tuesday, September 19, 2017 3:42 AM
> To: solr-user@lucene.apache.org
> Subject: [bulk]: Dates and DataImportHandler
>
> Hi folks,
>
> My DB server is on America/Chicago time. Solr (on Docker) is running 
> on UTC. Dates coming from my (MariaDB) data source seem to get 
> translated properly into the Solr index without me doing anything special.
>
> However when doing delta imports using last_index_time ( 
> http://wiki.apache.org/solr/DataImportHandlerDeltaQueryViaFullImport
> ), I can't seem to get the date, which Solr provides, to be understood 
> by the DB as being UTC (and translated back, accordingly). In other 
> words, the DB thinks the Solr UTC date is local, so it thinks the date 
> is ahead by six hours.
>
> '${dataimporter.request.clean}' != 'false'
>
> or dt > '${dataimporter.last_index_time}'
>
> I came up with this workaround, which seems to work:
>
> '${dataimporter.request.clean}' != 'false'
>
> /* ${user.timezone} is UTC, and the
> ${custom.dataimporter.datasource.tz}
> property is set to America/Chicago */
>
> or dt > CONVERT_TZ('${dataimporter.last_index_time}','${user.timezone}','${custom.dataimporter.datasource.tz}')
>
> However, isn't there a way for this translation to happen more naturally?
>
> I thought maybe I could do something like this:
>
> <propertyWriter
>     dateFormat="yyyy-MM-dd HH:mm:ssZ"
>     type="SimplePropertiesWriter"
> />
>
> The a

Re: Is there a way to delete multiple documents using wildcard?

2017-09-21 Thread Emir Arnautović
Hi,
Delete by query should work - posting to /update

<delete><query>*:*</query></delete>

should delete all docs.
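For example, with curl (the core name is a placeholder):

curl 'http://localhost:8983/solr/mycore/update?commit=true' -H 'Content-Type: text/xml' --data-binary '<delete><query>*:*</query></delete>'

In recent Solr versions the query inside <query> can be anything the default
parser accepts, e.g. id:abc* for a wildcard delete.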

HTH,
Emir

> On 21 Sep 2017, at 05:25, balmydrizzle  wrote:
> 
> Doesn't work either. A wildcard query can't be used in a delete, at least in
> old Solr 3.x.
> 
> 
> 
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html



Re: Is there a way to delete multiple documents using wildcard?

2017-09-21 Thread Diego Ceccarelli (BLOOMBERG/ LONDON)
https://wiki.apache.org/solr/FAQ#How_can_I_delete_all_documents_from_my_index.3F

have a look also at the last post here: https://gist.github.com/nz/673027

I think there's a way to disallow delete by *:* in the solrconfig.xml but I 
can't find it (I would take a look in the solrconfig just in case). 


From: solr-user@lucene.apache.org At: 09/21/17 13:28:13
To: solr-user@lucene.apache.org
Subject: Re: Is there a way to delete multiple documents using wildcard?

Hi,
Delete by query should work - posting to /update

   <delete><query>*:*</query></delete>

should delete all docs.

HTH,
Emir

> On 21 Sep 2017, at 05:25, balmydrizzle  wrote:
> 
> Doesn't work either. A wildcard query can't be used in a delete, at least in
> old Solr 3.x.
> 
> 
> 
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html




Re: Seeing very low ingestion performance for a single non-cloud Solr core

2017-09-21 Thread Walter Underwood
5 seconds and 10 seconds are very short for auto commit.

20 Gb is probably too much heap.

Sending the exact same message for every update will create a few very long 
posting lists. Not sure if that is slow, but it is not realistic.

Finally, 26,000 per second is not that slow. That is over 1.5 million/minute. 
We are indexing bigger documents, but seeing 1 million/minute to a cluster with 
four shards.
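
For comparison, a more relaxed starting point in solrconfig.xml (the values are 
illustrative, not a one-size-fits-all recommendation) would look something like:

<autoCommit>
  <maxTime>60000</maxTime>            <!-- hard commit every 60s -->
  <openSearcher>false</openSearcher>  <!-- don't open a searcher on hard commit -->
</autoCommit>
<autoSoftCommit>
  <maxTime>30000</maxTime>            <!-- soft commit every 30s makes docs visible -->
</autoSoftCommit>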

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)


> On Sep 21, 2017, at 1:18 AM, Emir Arnautović  
> wrote:
> 
> Hi,
> What are your commit configs? Maybe you are committing too frequently. 
> 
> Thanks,
> Emir
> 
>> On 21 Sep 2017, at 06:19, saiks  wrote:
>> 
>> Hi,
>> 
>> Environment:
>> - Solr is running in non-cloud mode on 6.4.2, Sun Java8, Linux
>> 4.4.0-31-generic x86_64
>> - Ingesting into a single core
>> - SoftCommit = 5 seconds, HardCommit = 10 seconds
>> - System has 16 Cpus and 32 Gb of memory (Solr is given 20 Gb of JVM heap)
>> - text = StandardTokenizer, id = solr.StrField/docValues, hostname =
>> solr.StrField/docValues, app = solr.StrField/docValues, epoch =
>> solr.TrieLongField/docValues
>> 
>> I am using JMeter to ingest into the Solr core using UpdateRequestHandler
>> ("/update/json"), sending a batch of 1000 messages (same message) in a
>> single JSON array.
>> 
>> Sample message
>> [{"text":"May 11 10:18:22 scrooge Web-Requests: May 11 10:18:22
>> @IunAIir17k-- EVENT_WR-Y-attack-600 SG_child[823]: [event.error]
>> Possible attack - 5 blocked requests within 120 seconds",
>> "id":"id1",
>> "hostname": "xx.com",
>> "app": "",
>> "epoch": 1483667347941
>> },
>> ]
>> 
>> Jmeter is configured to run 10 threads in parallel repeating the request
>> 1000 times, which should ingest 10,000,000 messages in total.
>> Jmeter post url:
>> "/solr/mycore/update/json?overwrite=false&wt=json&commit=false"
>> 
>> Jmeter summary:
>> summary =   5000 in 00:03:07 =   26.7/s Avg:   370 Min:27 Max:  1734
>> Err: 0 (0.00%)
>> 
>> I am only able to ingest 26000 messages per second, looking at system
>> resources only one or two cpus are at 25-30% and the rest are sitting idle
>> and also Solr heap is flat at 3Gb with no iowait on the devices.
>> Increasing parallelism in Jmeter to ingest using 20 threads did not increase
>> ingested messages per second, but increased the latency by 2x for each
>> request.
>> 
>> I don't understand why Solr is not able to use all the CPUs on the host if I
>> increase JMeter parallelism from 10 -> 20 -> 40. What can I do to achieve a
>> performance gain and make Solr utilize system resources to their maximum?
>> 
>> Please help.
>> 
>> Thank you
>> 
>> 
>> 
>> 
>> 
>> 
>> --
>> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
> 



Re: Not able to import timestamp data into Solr

2017-09-21 Thread shankhamajumdar
Hi,

I have gone through the link below and tried different options, but I am
still getting the same error:
http://lucene.apache.org/solr/guide/6_6/working-with-dates.html

Can you please check the configuration below and let me know what is wrong
there?

managed-schema 
  
  
 
 

dataconfig.xml 
 query="SELECT test_data1,test_data2,test_data3, upserttime from 
 test_table" 
autoCommit="true"> 
 
  
  
  

The timestamp data in Cassandra looks like, for example: 2017-09-20
10:25:46.752000+
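
One thing to check: Solr date fields only accept ISO-8601 values such as 
2017-09-20T10:25:46.752Z, so a string in the Cassandra format above has to be 
converted during import. With DIH that is usually done with 
DateFormatTransformer; a sketch along the lines of the config above (the format 
string is a guess at the Cassandra output and may need adjusting):

<entity name="test_table" transformer="DateFormatTransformer"
        query="SELECT test_data1, test_data2, test_data3, upserttime FROM test_table">
  <field column="upserttime" dateTimeFormat="yyyy-MM-dd HH:mm:ss.SSSSSS" />
</entity>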

Regards,
Shankha
 



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


overwrite the parameter query in DIH

2017-09-21 Thread solr2020
Hi All,

We are retrieving mongodb data using Dataimport handler. We have a scenario
where we have to overwrite the mongodb query configured in data-config file.
We have to do this overwrite programmatically using solrj. For this we are
using ModifiableSolrParams to set the parameters. Here is the code snippet
used to create a dataimport http request.

String solrURL=
"http://:/solr/collectionname";
SolrClient solr = new HttpSolrClient.Builder(solrURL).build();
ModifiableSolrParams params = new ModifiableSolrParams();
params.set("qt", "/dataimport");
params.set("command", "full-import");
params.set("query=id:{ $in: ", idlist+ " }");
QueryResponse response = solr.query(params);

The expectation here is that it should use the query parameter value given in
the code snippet instead of the query parameter configured in the
data-config file.

Is there a way to do this? Please suggest.
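
For what it's worth, DIH can substitute request parameters into data-config.xml 
via ${dataimporter.request.<name>}, which may be a cleaner way to pass the 
query in; a sketch under that assumption (the entity and parameter names are 
made up):

<!-- data-config.xml -->
<entity name="docs" query="${dataimporter.request.mongoQuery}" ... />

// SolrJ side
params.set("qt", "/dataimport");
params.set("command", "full-import");
params.set("mongoQuery", "id:{ $in: " + idlist + " }");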




--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Rescoring from 0 - full

2017-09-21 Thread Erick Erickson
Sure, you can take full control of the scoring, just write a custom similarity.

What's not at all clear is why you want to. RerankQParserPlugin will
re-rank the top N documents by pushing them through a different query;
can you make that work?
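
For reference, a re-rank request looks like this (the queries are made up); the 
top 1000 docs matching q are re-scored against rqq:

q=greetings&rq={!rerank reRankQuery=$rqq reRankDocs=1000 reRankWeight=3}&rqq=(hi hello)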

Best,
Erick



On Thu, Sep 21, 2017 at 4:20 AM, Diego Ceccarelli (BLOOMBERG/ LONDON)
 wrote:
> Hi Dariusz,
> If you use *:* you'll rerank only the top N random documents; as Emir said, 
> that will probably not produce interesting results.
> If you want to replace the original score, you can take a look at the 
> learning to rank module [1], which would allow you to assign a
> new score to the top N documents returned by your query and then reorder them 
> based on that (ignoring the original score, if you want).
>
> Cheers,
> Diego
>
> [1] https://cwiki.apache.org/confluence/display/solr/Learning+To+Rank
>
> From: solr-user@lucene.apache.org At: 09/21/17 08:49:13
> To: solr-user@lucene.apache.org
> Subject: Re: Rescoring from 0 - full
>
> Hi Dariusz,
> You could use fq for filtering (you can disable caching to avoid polluting the 
> filter cache) and q=*:*. That way you'll get score=1 for all docs and can 
> rerank. The issue with this approach is that you rerank the top N, and without 
> a score they wouldn't be ordered, so it is a no-go.
> What you could do (I did not try it) is divide by the score when rescoring (not 
> sure if you can access the calculated score, but you could recalculate it) to 
> eliminate it.
>
> HTH,
> Emir
>
>> On 20 Sep 2017, at 21:38, Dariusz Wojtas  wrote:
>>
>> Hi,
>> When I use boosting functionality, it is always about adding to or
>> multiplying the score calculated in the 'q' param.
>> I may use function queries inside 'q', but this may hit performance when
>> calling multiple nested functions.
>> I thought that 'rerank' could help, but it is still about changing the
>> original score, not a full recalculation.
>>
>> How can I take full control of the score in rerank? Is it possible?
>>
>> Best regards,
>> Dariusz Wojtas
>
>


Re: overwrite the parameter query in DIH

2017-09-21 Thread Erick Erickson
Quite frankly if you can't just configure this in DIH I'd do all the
indexing from SolrJ.

Long blog on the subject here:
https://lucidworks.com/2012/02/14/indexing-with-solrj/
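
The core of that approach is only a few lines; a bare-bones sketch (the URL, 
collection, field names, and the MyRecord/fetchFromMongo data-access calls are 
placeholders for your own code):

import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build();
List<SolrInputDocument> batch = new ArrayList<>();
for (MyRecord rec : fetchFromMongo()) {           // hypothetical: your own MongoDB access code
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", rec.getId());
    doc.addField("title", rec.getTitle());
    batch.add(doc);
    if (batch.size() == 1000) {                   // send in batches, not one doc at a time
        client.add(batch);
        batch.clear();
    }
}
if (!batch.isEmpty()) client.add(batch);
client.commit();
client.close();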

Best,
Erick

On Thu, Sep 21, 2017 at 8:17 AM, solr2020  wrote:
> Hi All,
>
> We are retrieving mongodb data using Dataimport handler. We have a scenario
> where we have to overwrite the mongodb query configured in data-config file.
> We have to do this overwrite programmatically using solrj. For this we are
> using ModifiableSolrParams to set the parameters. Here is the code snippet
> used to create a dataimport http request.
>
> String solrURL=
> "http://:/solr/collectionname";
> SolrClient solr = new HttpSolrClient.Builder(solrURL).build();
> ModifiableSolrParams params = new ModifiableSolrParams();
> params.set("qt", "/dataimport");
> params.set("command", "full-import");
> params.set("query=id:{ $in: ", idlist+ " }");
> QueryResponse response = solr.query(params);
>
> Here the expectation is it should use this query parameter value given in
> the code snippet instead of using the query parameter configured in
> data-config file.
>
> Is there a way to do this? Please suggest.
>
>
>
>
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: How to build solr

2017-09-21 Thread Erick Erickson
And did you follow the link provided on that page?

Best,
Erick

On Thu, Sep 21, 2017 at 3:07 AM, Aman Tandon  wrote:
> Hi Srini,
>
> Kindly refer to the README.md at this GitHub link; this should
> work.
> https://github.com/apache/lucene-solr/blob/master/README.md
>
> With regards,
> Aman Tandon
>
>
> On Sep 21, 2017 1:53 PM, "srini sampath" 
> wrote:
>
>> Hi,
>> How do I build and compile Solr on my local machine? It seems the
>> https://wiki.apache.org/solr/HowToCompileSolr page has become obsolete.
>> Thanks in advance
>>


Re: Replicates not recovering after rolling restart

2017-09-21 Thread Erick Erickson
Hmmm, I didn't ask what version you're upgrading _from_. 5 years ago
would be Solr 4. Are you replacing Solr 5 or 4? I'm guessing 5, but
want to check unlikely possibilities.

Next question: I'm assuming all your nodes have been upgraded to Solr 6, right?

Best,
Erick

On Wed, Sep 20, 2017 at 7:18 PM, Bill Oconnor  wrote:
> I have no clue where that number comes from; it does not seem to be in the 
> actual post to the leader as seen in my tcpdump. It is a mystery.
>
> 
> From: Walter Underwood 
> Sent: Wednesday, September 20, 2017 7:00:53 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Replicates not recovering after rolling restart
>
>
>> On Sep 20, 2017, at 6:15 PM, Bill Oconnor  wrote:
>>
>> I restart using the standard "sudo service solr start/stop"
>
> You might look into what that actually does.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>


CloudSolrServer set http request timeout

2017-09-21 Thread Vincenzo D'Amore
Hi,

I have a huge problem with a few queries in SolrCloud 4.8.1 that hang the
client.

I'm actually unable to tell whether the cluster even receives the
requests.

How can I set a timeout so the SolrJ client doesn't wait too long?

Best regards,
Vincenzo

-- 
Vincenzo D'Amore
email: v.dam...@gmail.com
skype: free.dev
mobile: +39 349 8513251


Re: CloudSolrServer set http request timeout

2017-09-21 Thread Jason Gerlowski
Hi Vincenzo,

Have you tried setting the read/socket timeout on your client?
CloudSolrServer uses a LBHttpSolrServer under the hood, which you can
get with the getLBServer method
(https://lucene.apache.org/solr/4_1_0/solr-solrj/org/apache/solr/client/solrj/impl/CloudSolrServer.html#getLbServer()).
Once you have access to LBHttpSolrServer, you can use the
"setSoTimeout" method
(https://lucene.apache.org/solr/4_1_0/solr-solrj/org/apache/solr/client/solrj/impl/LBHttpSolrServer.html#setSoTimeout(int))
to choose an appropriate maximum timeout.
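
Roughly like this (untested against 4.8.1 specifically, but the methods are in 
the 4.x Javadocs; the ZooKeeper host string is a placeholder):

CloudSolrServer server = new CloudSolrServer("zkhost1:2181,zkhost2:2181");
server.getLbServer().setSoTimeout(30000);         // socket read timeout in ms
server.getLbServer().setConnectionTimeout(5000);  // connect timeout in ms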

At least, that's how the Javadocs make it look in 4.x, and how I know
it works in more recent versions.  Hope that helps.

Jason

On Thu, Sep 21, 2017 at 1:07 PM, Vincenzo D'Amore  wrote:
> Hi,
>
> I have a huge problem with a few queries in SolrCloud 4.8.1 that hang the
> client.
>
> I'm actually unable to tell whether the cluster even receives the
> requests.
>
> How can I set a timeout so the SolrJ client doesn't wait too long?
>
> Best regards,
> Vincenzo
>
> --
> Vincenzo D'Amore
> email: v.dam...@gmail.com
> skype: free.dev
> mobile: +39 349 8513251


Re:Strange Behavior When Extracting Features

2017-09-21 Thread Christine Poerschke (BLOOMBERG/ LONDON)
Hi Michael,

Thanks for reporting this here and via SOLR-11386 ticket!

I've just added a note to the https://issues.apache.org/jira/browse/SOLR-11386 
ticket.

Christine

- Original Message -
From: solr-user@lucene.apache.org
To: solr-user@lucene.apache.org
At: 09/20/17 20:07:24

Hi all,

I'm getting some extremely strange behavior when trying to extract features
for a learning to rank model. The following query incorrectly says all
features have zero values:

http://gss-test-fusion.usersys.redhat.com:8983/solr/access/query?q=added
couple of fiber channel&rq={!ltr model=redhat_efi_model reRankDocs=1
efi.case_summary=the efi.case_description=added couple of fiber channel
efi.case_issue=the efi.case_environment=the}&fl=id,score,[features]&rows=10

But this query, which simply moves the word "added" from the front of the
provided text to the back, properly fills in the feature values:

http://gss-test-fusion.usersys.redhat.com:8983/solr/access/query?q=couple
of fiber channel added&rq={!ltr model=redhat_efi_model reRankDocs=1
efi.case_summary=the efi.case_description=couple of fiber channel added
efi.case_issue=the efi.case_environment=the}&fl=id,score,[features]&rows=10

The explain output for the failing query can be found here:

https://gist.github.com/manisnesan/18a8f1804f29b1b62ebfae1211f38cc4

and the explain output for the properly functioning query can be found here:

https://gist.github.com/manisnesan/47685a561605e2229434b38aed11cc65

Have any of you run into this issue? Seems like it could be a bug.

Thanks,
Michael A. Alcorn



Re: CloudSolrServer set http request timeout

2017-09-21 Thread Vincenzo D'Amore
Thanks Jason, I'll give it a try immediately. 

Ciao,
Vincenzo

--
mobile: 3498513251
skype: free.dev

> On 21 Sep 2017, at 19:51, Jason Gerlowski  wrote:
> 
> Hi Vincenzo,
> 
> Have you tried setting the read/socket timeout on your client?
> CloudSolrServer uses a LBHttpSolrServer under the hood, which you can
> get with the getLBServer method
> (https://lucene.apache.org/solr/4_1_0/solr-solrj/org/apache/solr/client/solrj/impl/CloudSolrServer.html#getLbServer()).
> Once you have access to LBHttpSolrServer, you can use the
> "setSoTimeout" method
> (https://lucene.apache.org/solr/4_1_0/solr-solrj/org/apache/solr/client/solrj/impl/LBHttpSolrServer.html#setSoTimeout(int))
> to choose an appropriate maximum timeout.
> 
> At least, that's how the Javadocs make it look in 4.x, and how I know
> it works in more recent versions.  Hope that helps.
> 
> Jason
> 
>> On Thu, Sep 21, 2017 at 1:07 PM, Vincenzo D'Amore  wrote:
>> Hi,
>> 
>> I have a huge problem with a few queries in SolrCloud 4.8.1 that hang the
>> client.
>> 
>> I'm actually unable to tell whether the cluster even receives the
>> requests.
>> 
>> How can I set a timeout so the SolrJ client doesn't wait too long?
>> 
>> Best regards,
>> Vincenzo
>> 
>> --
>> Vincenzo D'Amore
>> email: v.dam...@gmail.com
>> skype: free.dev
>> mobile: +39 349 8513251


DocValues error when upgrading to 6.6.1 from 6.5

2017-09-21 Thread Xie, Sean
Hi,

When I upgraded the existing Solr from 6.5.1 to 6.6.1, I got:
cannot change DocValues type from SORTED to SORTED_SET for field “…..”

During the upgrade, there was no change to the schema or schema version (we are 
using schema version 1.5, so the useDocValuesAsStored defaults do not take 
effect).

Not sure why this is happening.

We are planning to upgrade the Solr version on other clusters, but don’t really 
want to re-index all the data.

Any suggestion?

Thanks
Sean



Re: Replicates not recovering after rolling restart

2017-09-21 Thread Bill Oconnor

  1.  We are moving from 4.x to 6.6.
  2.  Changed the schema - adding the version etc., nothing major.
  3.  Full re-index of documents into the cluster - so this is not a migration.
  4.  Changed the JVM parameter from 12GB to 16GB and did a restart.
  5.  Replicates go into recovery, which fails to complete after many hours. 
They still respond to queries, but the /update POST from the replicates fails 
with a 500 server error and a stack trace because of the number format 
failure.


My other cluster does not reuse any nodes. The restart went as expected with 
the JVM change.


From: Erick Erickson 
Sent: Thursday, September 21, 2017 8:25:32 AM
To: solr-user
Subject: Re: Replicates not recovering after rolling restart

Hmmm, I didn't ask what version you're upgrading _from_. 5 years ago
would be Solr 4. Are you replacing Solr 5 or 4? I'm guessing 5, but
want to check unlikely possibilities.

Next question: I'm assuming all your nodes have been upgraded to Solr 6, right?

Best,
Erick

On Wed, Sep 20, 2017 at 7:18 PM, Bill Oconnor  wrote:
> I have no clue where that number comes from; it does not seem to be in the 
> actual post to the leader as seen in my tcpdump. It is a mystery.
>
> 
> From: Walter Underwood 
> Sent: Wednesday, September 20, 2017 7:00:53 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Replicates not recovering after rolling restart
>
>
>> On Sep 20, 2017, at 6:15 PM, Bill Oconnor  wrote:
>>
>> I restart using the standard "sudo service solr start/stop"
>
> You might look into what that actually does.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)


>


Re: [bulk]: Re: Dates and DataImportHandler

2017-09-21 Thread Jamie Jackson
That's what I ended up doing too, for my MariaDB (MySQL derivative) DB:

CONVERT_TZ('${dataimporter.last_index_time}','${user.timezone}','${custom.dataimporter.datasource.tz}')

user.timezone is Solr's time zone and custom.dataimporter.datasource.tz is
a property I set on startup.

The other option was to change Solr's timezone to match the DB, but I
thought that was even more of a hack.

On Thu, Sep 21, 2017 at 7:08 AM, Mannott, Birgit 
wrote:

> As far as I understood, you can use the locale so that DIH saves the last
> index time for  the given time zone and not for UTC. So if you set the
> locale according to the timezone of your DB you don't need to convert dates
> for comparison.
>
> But for me it's not working because every time I include settings for the
> propertyWriter (concrete data doesn't matter - I tried a lot),  the null
> pointer exception appears. So I had to find a workaround and convert now
> the time saved by the DIH from UTC to the timezone of my DB when comparing
> for delta imports. But it's an ugly workaround I don't like.
>
> So, if anyone has an idea why this NPE occurs it would be great. Do I
> perhaps have to add something to solrconfig.xml?
>
> Thanks,
> Birgit
>
>
>
> -Original Message-
> From: Jamie Jackson [mailto:jamieja...@gmail.com]
> Sent: Tuesday, September 19, 2017 6:54 PM
> To: solr-user@lucene.apache.org
> Subject: [bulk]: Re: [bulk]: Dates and DataImportHandler
>
> FWIW, I know mine worked, so maybe try:
>
> <propertyWriter dateFormat="yyyy-MM-dd HH:mm:ss" type="SimplePropertiesWriter" />
>
> I can't conceive of what the locale would possibly do when a dateFormat is
> specified, so I omitted the attribute. (Maybe one can specify dateFormat
> *or* locale--it seems like specifying both would cause a clash.) For what
> it's worth, the format you're trying to write seems identical to the
> default*, so I'm not sure what benefit you're getting by using that
> propertyWriter.
>
> *It's identical to *my* default, anyway. Maybe the default changes based
> on one's system configuration, I don't know. This stuff isn't very well
> documented.
>
> On Tue, Sep 19, 2017 at 7:22 AM, Mannott, Birgit 
> wrote:
>
> > Hi,
> >
> > I have a similar problem. I try to change the timezone for the
> > last_index_time by setting
> >
> > <propertyWriter dateFormat="yyyy-MM-dd HH:mm:ss" type="SimplePropertiesWriter" locale="en_US" />
> >
> > in the  section of my data-config.xml file.
> >
> > But when doing this I always get a NullPointerException on Delta Import:
> >
> > 2017-09-15 14:04:00.825 INFO  (Thread-2938) [   x:mex_prd_dev1100-ap]
> > o.a.s.h.d.DataImporter Starting Delta Import
> > 2017-09-15 14:04:00.827 ERROR (Thread-2938) [   x:mex_prd_dev1100-ap]
> > o.a.s.h.d.DataImporter Delta Import Failed
> > org.apache.solr.handler.dataimport.DataImportHandlerException: Unable
> > to PropertyWriter implementation:SimplePropertiesWriter
> > at org.apache.solr.handler.dataimport.DataImporter.
> > createPropertyWriter(DataImporter.java:330)
> > at org.apache.solr.handler.dataimport.DataImporter.
> > doDeltaImport(DataImporter.java:439)
> > at org.apache.solr.handler.dataimport.DataImporter.
> > runCmd(DataImporter.java:476)
> > at org.apache.solr.handler.dataimport.DataImporter.
> > lambda$runAsync$0(DataImporter.java:457)
> > at java.lang.Thread.run(Thread.java:745)
> > Caused by: java.lang.NullPointerException
> > at java.text.SimpleDateFormat.(SimpleDateFormat.java:
> > 598)
> > at org.apache.solr.handler.dataimport.
> > SimplePropertiesWriter.init(SimplePropertiesWriter.java:100)
> > at org.apache.solr.handler.dataimport.DataImporter.
> > createPropertyWriter(DataImporter.java:328)
> > ... 4 more
> >
> > Has anyone an idea what is wrong or missing?
> >
> > Thanks,
> > Birgit
> >
> >
> >
> > -Original Message-
> > From: Jamie Jackson [mailto:jamieja...@gmail.com]
> > Sent: Tuesday, September 19, 2017 3:42 AM
> > To: solr-user@lucene.apache.org
> > Subject: [bulk]: Dates and DataImportHandler
> >
> > Hi folks,
> >
> > My DB server is on America/Chicago time. Solr (on Docker) is running
> > on UTC. Dates coming from my (MariaDB) data source seem to get
> > translated properly into the Solr index without me doing anything
> special.
> >
> > However when doing delta imports using last_index_time (
> > http://wiki.apache.org/solr/DataImportHandlerDeltaQueryViaFullImport
> > ), I can't seem to get the date, which Solr provides, to be understood
> > by the DB as being UTC (and translated back, accordingly). In other
> > words, the DB thinks the Solr UTC date is local, so it thinks the date
> > is ahead by six hours.
> >
> > '${dataimporter.request.clean}' != 'false'
> >
> > or dt > '${dataimporter.last_index_time}'
> >
> > I came up with this workaround, which seems to work:
> >
> > '${dataimporter.request.clean}' != 'false'
> >
> > /* ${user.timezone} is UTC, and the
> > ${custom.dataimporter.datasource.tz}
> > property is set to America/Chicago */
> >
> > o

Error Opening new IndexSearcher - LockObtainFailedException

2017-09-21 Thread Shashank Pedamallu
Hi,

I’m seeing the following exception in Solr that gets automatically resolved 
eventually.
2017-09-22 00:18:17.243 ERROR (qtp1702660825-17) [   x:spedamallu1-core-1] 
o.a.s.c.CoreContainer Error creating core [spedamallu1-core-1]: Error opening 
new searcher
org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.(SolrCore.java:952)
at org.apache.solr.core.SolrCore.(SolrCore.java:816)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:890)
at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:1167)
at org.apache.solr.servlet.HttpSolrCall.init(HttpSolrCall.java:252)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:418)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:296)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:534)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1891)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2011)
at org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1041)
at org.apache.solr.core.SolrCore.(SolrCore.java:925)
... 32 more
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock held by this 
virtual machine: 
/Users/spedamallu/Desktop/mount-1/spedamallu1-core-1/data/index/write.lock
at 
org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:127)
at 
org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41)
at 
org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45)
at 
org.apache.lucene.store.FilterDirectory.obtainLock(FilterDirectory.java:104)
at org.apache.lucene.index.IndexWriter.(IndexWriter.java:804)
at 
org.apache.solr.update.SolrIndexWriter.(SolrIndexWriter.java:125)
at 
org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:100)
at 
org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:240)
at 
org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:114)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1852)

I kind of have a theory of why this is happening. Can someone please confirm 
whether this is indeed the case:

  *   I was trying to run a long-running operation on a transient core and it 
was getting evicted because of LRU.
  *   So, to ensure my operation completes, I added a solrCore.open() 
call before t

Re: Error Opening new IndexSearcher - LockObtainFailedException

2017-09-21 Thread Luiz Armesto
Hi Shashank,

There is an open issue about this exception [1]. Can you take a look and
test the patch to see if it works in your case?

[1] https://issues.apache.org/jira/browse/SOLR-11297

On Sep 21, 2017 10:19 PM, "Shashank Pedamallu" 
wrote:

Hi,

I’m seeing the following exception in Solr that gets automatically resolved
eventually.
2017-09-22 00:18:17.243 ERROR (qtp1702660825-17) [   x:spedamallu1-core-1]
o.a.s.c.CoreContainer Error creating core [spedamallu1-core-1]: Error
opening new searcher
org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.(SolrCore.java:952)
at org.apache.solr.core.SolrCore.(SolrCore.java:816)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:890)
at org.apache.solr.core.CoreContainer.getCore(
CoreContainer.java:1167)
at org.apache.solr.servlet.HttpSolrCall.init(HttpSolrCall.java:252)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:418)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(
SolrDispatchFilter.java:345)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(
SolrDispatchFilter.java:296)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.
doFilter(ServletHandler.java:1691)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(
ServletHandler.java:582)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(
ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(
SecurityHandler.java:548)
at org.eclipse.jetty.server.session.SessionHandler.
doHandle(SessionHandler.java:226)
at org.eclipse.jetty.server.handler.ContextHandler.
doHandle(ContextHandler.java:1180)
at org.eclipse.jetty.servlet.ServletHandler.doScope(
ServletHandler.java:512)
at org.eclipse.jetty.server.session.SessionHandler.
doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.
doScope(ContextHandler.java:1112)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(
ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(
ContextHandlerCollection.java:213)
at org.eclipse.jetty.server.handler.HandlerCollection.
handle(HandlerCollection.java:119)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(
HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:534)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
at org.eclipse.jetty.server.HttpConnection.onFillable(
HttpConnection.java:251)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(
AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(
SelectChannelEndPoint.java:93)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.
executeProduceConsume(ExecuteProduceConsume.java:303)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.
produceConsume(ExecuteProduceConsume.java:148)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(
ExecuteProduceConsume.java:136)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(
QueuedThreadPool.java:671)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(
QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1891)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2011)
at org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1041)
at org.apache.solr.core.SolrCore.(SolrCore.java:925)
... 32 more
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock held by
this virtual machine: /Users/spedamallu/Desktop/mount-1/spedamallu1-core-1/
data/index/write.lock
at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(
NativeFSLockFactory.java:127)
at org.apache.lucene.store.FSLockFactory.obtainLock(
FSLockFactory.java:41)
at org.apache.lucene.store.BaseDirectory.obtainLock(
BaseDirectory.java:45)
at org.apache.lucene.store.FilterDirectory.obtainLock(
FilterDirectory.java:104)
at org.apache.lucene.index.IndexWriter.(IndexWriter.java:804)
at org.apache.solr.update.SolrIndexWriter.(
SolrIndexWriter.java:125)
at org.apache.solr.update.SolrIndexWriter.create(
SolrIndexWriter.java:100)
at org.apache.solr.update.DefaultSolrCoreState.
createMainIndexWriter(DefaultSolrCoreState.java:240)
at org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(
DefaultSolrCoreState.java:114)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1852)

I kind of have a theory of why this is happening. Can someone please

AEM SOLR integaration

2017-09-21 Thread Gunalan V
Hello,

I'm looking for suggestions on building the Solr infrastructure, so kindly
let me know if anyone has integrated AEM (Adobe Experience Manager) with
Solr.



Thanks,
GVK


Re: DocValues error when upgrading to 6.6.1 from 6.5

2017-09-21 Thread Erick Erickson
This error is not about useDocValuesAsStored, but about
multiValued=true|false. It indicates that multiValued is set to
"false" for the current index but "true" in the new schema. At least
that's my guess.
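
If that's the cause, the two docValues types correspond to schema definitions 
like these (the field name is hypothetical):

<!-- SORTED: a single-valued string field with docValues -->
<field name="category" type="string" docValues="true" multiValued="false"/>

<!-- SORTED_SET: the same field declared multi-valued -->
<field name="category" type="string" docValues="true" multiValued="true"/>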

Best,
Erick

On Thu, Sep 21, 2017 at 11:56 AM, Xie, Sean  wrote:
> Hi,
>
> When I upgrade the existing SOLR from 6.5.1 to 6.6.1, I’m getting:
> cannot change DocValues type from SORTED to SORTED_SET for field “…..”
>
> During the upgrade, there was no change to the schema or schema version (we are 
> using schema version 1.5, so the useDocValuesAsStored defaults do not take 
> effect).
>
> Not sure why this is happening.
>
> Planning to upgrade the SOLR version on other clusters, but don’t really want 
> to do re-index for all the data.
>
> Any suggestion?
>
> Thanks
> Sean
>


Re: Error Opening new IndexSearcher - LockObtainFailedException

2017-09-21 Thread Shashank Pedamallu
Hi Luiz,

Unfortunately, I’m on version Solr-6.4.2 and the patch does not apply straight 
away.

Thanks,
Shashank

On 9/21/17, 8:35 PM, "Luiz Armesto"  wrote:

Hi Shashank,

There is an open issue about this exception [1]. Can you take a look and
test the patch to see if it works in your case?

[1] https://issues.apache.org/jira/browse/SOLR-11297
 

On Sep 21, 2017 10:19 PM, "Shashank Pedamallu" 
wrote:

Hi,

I’m seeing the following exception in Solr that gets automatically resolved
eventually.
2017-09-22 00:18:17.243 ERROR (qtp1702660825-17) [   x:spedamallu1-core-1]
o.a.s.c.CoreContainer Error creating core [spedamallu1-core-1]: Error
opening new searcher
org.apache.solr.common.SolrException: Error opening new searcher
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:952)
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:816)
    at org.apache.solr.core.CoreContainer.create(CoreContainer.java:890)
    at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:1167)
    at org.apache.solr.servlet.HttpSolrCall.init(HttpSolrCall.java:252)
    at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:418)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:296)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:534)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
    at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
    at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1891)
    at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2011)
    at org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1041)
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:925)
    ... 32 more
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock held by
this virtual machine:
/Users/spedamallu/Desktop/mount-1/spedamallu1-core-1/data/index/write.lock
    at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:127)
    at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41)
    at org.apache

Re: How to build solr

2017-09-21 Thread srini sampath
Thanks Aman,
Erick, I followed the link and I am getting the following error:

Buildfile: ${user.home}\git\lucene-solr\build.xml

compile:

-check-git-state:

-git-cleanroot:

-copy-git-state:

git-autoclean:

resolve:

ivy-availability-check:

BUILD FAILED
${user.home}\git\lucene-solr\build.xml:309: The following error occurred
while executing this line:
${user.home}\git\lucene-solr\lucene\build.xml:124: The following error
occurred while executing this line:
${user.home}\git\lucene-solr\lucene\common-build.xml:424:
${user.home}\.ant\lib does not exist.

Total time: 0 seconds

Any idea?
Also, how can I run the Solr server in debug mode?

Here is what I am trying to do: change a custom plugin called SolrTextTagger
and add some extra query parameters to it.

I defined my custom handler in the following way:

   - a requestHandler entry for the tagger in solrconfig.xml

   - and a lib directive, also in solrconfig.xml, pointing at the
   solr-text-tagger.jar location (a sketch of both entries follows after
   this list)

   - I made some changes to SolrTextTagger and built a jar using Maven.
   - I am running Solr as a service and sending requests using HTTP POST.
   - But the problem is: how can I debug the solr-text-tagger.jar code to
   check and make changes? (I mean, how do I do remote debugging? See the
   sketch further below.)
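
For reference, the two solrconfig.xml entries look roughly like the sketch
below (the lib path is a placeholder and the handler class name is an
assumption taken from the SolrTextTagger project, not copied from my setup):

   <lib dir="/path/to/plugins" regex="solr-text-tagger.*\.jar" />

   <requestHandler name="/tag"
                   class="org.opensextant.solrtexttagger.TaggerRequestHandler" />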


I am using the Eclipse IDE for development.
I found a similar problem described here, but I could not understand the
solution.
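
One common approach to remote debugging (a sketch, assuming a standard Solr
6.x install started with the bin/solr script; port 18983 is an arbitrary
choice):

   bin/solr start -a "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=18983"

If Solr runs as a service instead, the same JDWP argument can go into
SOLR_OPTS in solr.in.sh. Eclipse can then attach via Run > Debug
Configurations > Remote Java Application against localhost:18983, with the
SolrTextTagger sources added to the debug configuration's source lookup
path.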

Best,
Srini Sampath.





On Thu, Sep 21, 2017 at 8:51 PM, Erick Erickson 
wrote:

> And did you follow the link provided on that page?
>
> Best,
> Erick
>
> On Thu, Sep 21, 2017 at 3:07 AM, Aman Tandon 
> wrote:
> > Hi Srini,
> >
> > Kindly refer to the README section of this GitHub link; this should
> > work.
> > https://github.com/apache/lucene-solr/blob/master/README.md
> >
> > With regards,
> > Aman Tandon
> >
> >
> > On Sep 21, 2017 1:53 PM, "srini sampath" 
> > wrote:
> >
> >> Hi,
> >> How do I build and compile Solr on my local machine? It seems the
> >> https://wiki.apache.org/solr/HowToCompileSolr page has become obsolete.
> >> Thanks in advance
> >>
>


Re: How to build solr

2017-09-21 Thread srini sampath
PS: I have installed both Ant and Ivy on my system, but there is no
${user.home}\.ant\lib folder.
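
(The ivy-availability-check failure means Ant cannot find the Ivy jar in its
own library directory, even when Ivy is installed elsewhere on the system.
Assuming a standard lucene-solr checkout, running the bootstrap target from
the checkout root creates ${user.home}\.ant\lib and downloads Ivy into it:

   ant ivy-bootstrap

After that, ant compile should get past the resolve step.)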



Re: CloudSolrServer set http request timeout

2017-09-21 Thread Vincenzo D'Amore
Thanks for the suggestion, it's working like a charm. 
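
For anyone else hitting this, the fix looks roughly like the sketch below
(SolrJ 4.x; the zkHost, collection name and timeout values are placeholders,
not the real ones):

   import org.apache.solr.client.solrj.impl.CloudSolrServer;

   // Build the cloud-aware client and bound how long requests may block:
   // setConnectionTimeout caps connection setup, setSoTimeout caps socket
   // reads, so a hung node can no longer stall the client indefinitely.
   CloudSolrServer server = new CloudSolrServer("zkhost1:2181");
   server.setDefaultCollection("collection1");
   server.getLbServer().setConnectionTimeout(5000); // 5s connect timeout
   server.getLbServer().setSoTimeout(30000);        // 30s read timeout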

Ciao,
Vincenzo


> On 21 Sep 2017, at 19:51, Jason Gerlowski  wrote:
> 
> Hi Vincenzo,
> 
> Have you tried setting the read/socket timeout on your client?
> CloudSolrServer uses a LBHttpSolrServer under the hood, which you can
> get with the getLBServer method
> (https://lucene.apache.org/solr/4_1_0/solr-solrj/org/apache/solr/client/solrj/impl/CloudSolrServer.html#getLbServer()).
> Once you have access to LBHttpSolrServer, you can use the
> "setSoTimeout" method
> (https://lucene.apache.org/solr/4_1_0/solr-solrj/org/apache/solr/client/solrj/impl/LBHttpSolrServer.html#setSoTimeout(int))
> to choose an appropriate maximum timeout.
> 
> At least, that's how the Javadocs make it look in 4.x, and how I know
> it works in more recent versions.  Hope that helps.
> 
> Jason
> 
>> On Thu, Sep 21, 2017 at 1:07 PM, Vincenzo D'Amore  wrote:
>> Hi,
>> 
>> I have a huge problem with a few queries in SolrCloud 4.8.1 that hang the
>> client.
>> 
>> Actually, I'm unable to tell whether the cluster even receives the
>> requests.
>> 
>> How can I set a timeout when the SolrJ client waits too long?
>> 
>> Best regards,
>> Vincenzo
>> 
>> --
>> Vincenzo D'Amore
>> email: v.dam...@gmail.com
>> skype: free.dev
>> mobile: +39 349 8513251


Re: AEM SOLR integaration

2017-09-21 Thread Nicole Bilić
Hi,

Maybe this could help you out: http://www.aemsolrsearch.com/

Regards,
Nicole

On Sep 22, 2017 05:41, "Gunalan V"  wrote:

> Hello,
>
> I'm looking for suggestions on building the Solr infrastructure, so kindly
> let me know if anyone has integrated AEM (Adobe Experience Manager) with
> Solr.
>
>
>
> Thanks,
> GVK
>