Re: Solr 8.4.0 Cloud Graph is not shown due to CSP

2020-01-08 Thread Jörn Franke
I have to admit it was the cache. Sorry, I believed I had deleted it. Thanks for the
effort and testing! I will update the Jira.

> Am 07.01.2020 um 22:14 schrieb Jörn Franke :
> 
> 
> here you go:
> https://issues.apache.org/jira/browse/SOLR-14176
> 
> a detailed screenshot of the message can be made tomorrow, but it looks 
> pretty much like CSP to me - I have seen similar messages from other 
> applications that use CSP.
> 
>> On Tue, Jan 7, 2020 at 6:45 PM Kevin Risden  wrote:
>> So this is caused by SOLR-13982 [1] and specifically SOLR-13987 [2]. Can
>> you open a new Jira specifically for this? It would be great if you could
>> capture from Chrome Dev Tools (or Firefox) the error message around what
>> specifically CSP is complaining about.
>> 
>> The other thing to ensure is that you force refresh the UI to make sure
>> nothing is cached. I don't know if that is in play here, but it doesn't hurt.
>> 
>> [1] https://issues.apache.org/jira/browse/SOLR-13982
>> [2] https://issues.apache.org/jira/browse/SOLR-13987
>> 
>> Kevin Risden
>> 
>> On Tue, Jan 7, 2020, 11:15 Jörn Franke  wrote:
>> 
>> > Dear all,
>> >
>> > I noted that in Solr Cloud 8.4.0 the graph is not shown due to the
>> > Content-Security-Policy. Apparently it violates unsafe-eval.
>> > It is a minor UI thing, but should I create an issue for it? Maybe it
>> > is rather easy to avoid in the source code of the admin page?
>> >
>> > Thank you.
>> >
>> > Best regards
>> >
>> >
>> >


support need in solr for min and max

2020-01-08 Thread Mohamed Azharuddin
Hi team,

We are migrating from MySQL to Apache Solr since Solr is fast at searching.
We have a scenario where we need to:

1) find the difference (max - min)
2) grouped by date(timeStamp)


Given below is our MySQL table (screenshot omitted).

And the MySQL query is:

SELECT DATE(eventTimeStamp), MAX(field) - MIN(field) AS Energy FROM
PowerTable GROUP BY DATE(eventTimeStamp);

which returns the per-day energy differences (result screenshot omitted).

So we have to calculate the difference per day, where the date column is in
datetime format. We are currently using result grouping:

group=true&group.query=eventTimeStamp:[2019-12-11T00:00:00Z TO
2019-12-11T23:59:59Z]&group.query=eventTimeStamp:[2019-12-12T00:00:00Z TO
2019-12-12T23:59:59Z]

Using the Apache Solr statistics option, we are able to calculate max and min
over the whole result set, but we need the max and min values on a per-day
basis (screenshot omitted).

When we try to get the max and min values per day, we are able to fetch
either min or max using the following query (screenshot omitted):

&group.sort=event1 desc or &group.sort=event1 asc

But we need both min and max in a single query.

So kindly help us to go ahead.

-- 

Regards,
Azar@EJ


JOIN query

2020-01-08 Thread Paresh
Hi,

I have two collections: collection1 and collection2
I have fields like -
collection1: id, prop1, prop2, prop3
collection2: id, col1, col2, col3

I am doing a join query with collection1.prop1 = collection2.col1 on
collection2.
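
For reference, the join query presumably looks something like this (a rough
sketch using the {!join} parser with fromIndex; the prop2 filter is only an
illustration):

q={!join fromIndex=collection1 from=prop1 to=col1}prop2:some_value&fl=id,col1,col2,col3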

As a result, I can get any field from collection2 in 'fl'.

Is there any way to get a field from collection1 while querying collection2
joined with collection1?


Regards,
Paresh



--
Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: support need in solr for min and max

2020-01-08 Thread Walter Underwood
I hope you do not plan to use Solr as a primary repository. Solr is NOT a 
database. If you use Solr as a database, you will lose data at some point.

The Solr feature set is very different from MySQL. There is no guarantee that a 
SQL query can be translated into a Solr query.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Jan 8, 2020, at 3:07 AM, Mohamed Azharuddin  wrote:
> 
> Hi team,
> 
> We are migrating from mysql to apache solr since solr is fast in searching. 
> Thank you. We had a scenario to 
>  
> find 1) difference (max-min) 
> 2) with group by date(timeStamp)
>  
> Given below is our mysql table :
> 
> 
> And mysql query is,
> SELECT Date(eventTimeStamp), MAX(field) - MIN(field) AS Energy FROM 
> PowerTable GROUP BY DATE(eventTimeStamp);
> 
> will results,
> 
> 
> So we have to calculate difference per day, where date column is in datetime 
> format where we are using result grouping as 
> group=true&group.query=eventTimeStamp:[2019-12-11T00:00:00Z TO 
> 2019-12-11T23:59:59Z]&group.query=eventTimeStamp:[2019-12-12T00:00:00Z TO 
> 2019-12-12T23:59:59Z]
> 
> Using Apache solr statistics option, we are able to calculate max and min for 
> whole result, But we need max and min value per day basis.
> 
> 
> When we try to get max and min value per day basis, we are able to fetch 
> either min or max using following query. 
> &group.sort=event1 desc or &group.sort=event1 asc
> 
> 
> 
> But we need both min and max in single query.
> 
> So kindly help us to go ahead.
> 
> -- 
> Regards,
> Azar@EJ



RE: Edismax ignoring queries containing booleans

2020-01-08 Thread Claire Pollard
It would be lovely to be able to use a range to complete my searches, but sadly
the documents aren't necessarily sequential, so I might want, say, 18, 24 or 30
in future.

I've re-run the query with debug on. Is there anything here that looks unusual? 
Thanks.

{
  "responseHeader":{
"status":0,
"QTime":75,
"params":{
  "mm":"\r\n   0<1 2<-1 5<-2 6<90%\r\n  ",
  "spellcheck.collateExtendedResults":"true",
  "df":["text",
"text"],
  "q.alt":"*:*",
  "ps":"100",
  "spellcheck.dictionary":["default",
"wordbreak"],
  "bf":"",
  "echoParams":"all",
  "fl":"*,score",
  "spellcheck.maxCollations":"5",
  "rows":"10",
  "spellcheck.alternativeTermCount":"5",
  "spellcheck.extendedResults":"true",
  "q":"recordID:(18 OR 19 OR 20)",
  "defType":"edismax",
  "spellcheck.maxResultsForSuggest":"5",
  "qf":"\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\ttext^0.4 recordID^10.0 
annotations^0.5 collectionTitle^1.9 collectionDescription^0.9 title^2.0 
Test_FR^1.0 Test_DE^1.0 Test_AR^1.0 genre^1.0 genre_fr^1.0 
french2^1.0\r\n\n\t\t\t\t\n\t\t\t",
  "spellcheck":"on",
  "pf":"\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\ttext^0.2 recordID^10.0 
annotations^0.6 collectionTitle^2.0 collectionDescription^1.0 title^2.1 
Test_FR^1.1 Test_DE^1.1 Test_AR^1.1 genre^1.1 genre_fr^1.1 
french2^1.1\r\n\n\t\t\t\t\n\t\t\t",
  "spellcheck.count":"10",
  "debugQuery":"on",
  "_":"1578499092576",
  "spellcheck.collate":"true"}},
  "response":{"numFound":0,"start":0,"maxScore":0.0,"docs":[]
  },
  "spellcheck":{
"suggestions":[],
"correctlySpelled":false,
"collations":[]},
  "debug":{
"rawquerystring":"recordID:(18 OR 19 OR 20)",
"querystring":"recordID:(18 OR 19 OR 20)",
"parsedquery":"+((recordID:[18 TO 18]) (recordID:[19 TO 19]) (recordID:[20 
TO 20]))~2 DisjunctionMaxQuery(((text:\"19 20\"~100)^0.2 | (annotations:\"19 
20\"~100)^0.6 | (collectionTitle:\"19 20\"~100)^2.0 | 
collectionDescription:\"19 20\"~100 | (title:\"19 20\"~100)^2.1 | (Test_FR:\"19 
20\"~100)^1.1 | (Test_DE:\"19 20\"~100)^1.1 | (Test_AR:\"19 20\"~100)^1.1))",
"parsedquery_toString":"+((recordID:[18 TO 18] recordID:[19 TO 19] 
recordID:[20 TO 20])~2) ((text:\"19 20\"~100)^0.2 | (annotations:\"19 
20\"~100)^0.6 | (collectionTitle:\"19 20\"~100)^2.0 | 
collectionDescription:\"19 20\"~100 | (title:\"19 20\"~100)^2.1 | (Test_FR:\"19 
20\"~100)^1.1 | (Test_DE:\"19 20\"~100)^1.1 | (Test_AR:\"19 20\"~100)^1.1)",
"explain":{},
"QParser":"ExtendedDismaxQParser",
"altquerystring":null,
"boost_queries":null,
"parsed_boost_queries":[],
"boostfuncs":[""],
"timing":{
  "time":75.0,
  "prepare":{
"time":35.0,
"query":{
  "time":35.0},
"facet":{
  "time":0.0},
"facet_module":{
  "time":0.0},
"mlt":{
  "time":0.0},
"highlight":{
  "time":0.0},
"stats":{
  "time":0.0},
"expand":{
  "time":0.0},
"terms":{
  "time":0.0},
"spellcheck":{
  "time":0.0},
"debug":{
  "time":0.0}},
  "process":{
"time":38.0,
"query":{
  "time":29.0},
"facet":{
  "time":0.0},
"facet_module":{
  "time":0.0},
"mlt":{
  "time":0.0},
"highlight":{
  "time":0.0},
"stats":{
  "time":0.0},
"expand":{
  "time":0.0},
"terms":{
  "time":0.0},
"spellcheck":{
  "time":6.0},
"debug":{
  "time":1.0}

-Original Message-
From: Edward Ribeiro  
Sent: 07 January 2020 01:05
To: solr-user@lucene.apache.org
Subject: Re: Edismax ignoring queries containing booleans

Hi Claire,

You can add the following parameter `&debug=all` on the URL to bring back 
debugging info and share with us (if you are using the Solr admin UI you should 
check the `debugQuery` checkbox).

Also, if you are searching a sequence of values you could perform a range
query: recordID:[18 TO 20]

Best,
Edward

On Mon, Jan 6, 2020 at 10:46 AM Claire Pollard 
wrote:
>
> Ok... It doesn't work for me. I'm fairly new to Solr so any help would be
> appreciated!
>
> My managed-schema field and field type look like this:
>
> 
> 
>
> And my solrconfig.xml select/query handlers look like this:
>
> 
> 
> all
> 
> edismax
> 
> 	text^0.4 recordID^10.0
annotations^0.5 collectionTitle^1.9 collectionDescription^0.9 title^2.0
Test_FR^1.0 Test_DE^1.0 Test_AR^1.0 genre^1.0 genre_fr^1.0 
french2^1.0

> 
> text
> *:*
> 10
> *,score
> 
> 	text^0.2 recordID^10.0
annot

Re: support need in solr for min and max

2020-01-08 Thread Mel Mason
Try looking at range JSON facets: 
https://lucene.apache.org/solr/guide/8_2/json-facet-api.html#range-facet. 
If you facet over the eventTimeStamp with a gap of 1 day, you should 
then be able to use a sub facet to return a min and max value 
(https://lucene.apache.org/solr/guide/8_2/json-facet-api.html#stat-facet-functions) 
for each day bucket.
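
A rough sketch of such a request (the collection name and the field called
"field" are taken from your MySQL example, so adjust them to your schema):

curl http://localhost:8983/solr/powertable/query -d '{
  "query": "*:*",
  "facet": {
    "days": {
      "type": "range",
      "field": "eventTimeStamp",
      "start": "2019-12-11T00:00:00Z",
      "end": "2019-12-13T00:00:00Z",
      "gap": "+1DAY",
      "facet": {
        "minEnergy": "min(field)",
        "maxEnergy": "max(field)"
      }
    }
  }
}'

Each day bucket in the response then contains minEnergy and maxEnergy; the
max - min difference itself would be computed on the client side.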


On 08/01/2020 11:07, Mohamed Azharuddin wrote:

Hi team,

We are migrating from mysql to apache solr since solr is fast in 
searching. Thank you. We had a scenario to


find 1) difference (max-min)
        2) with group by date(timeStamp)

Given below is our mysql table :

And mysql query is,
SELECT Date(eventTimeStamp), MAX(field) - MIN(field) AS Energy FROM 
PowerTable GROUP BY DATE(eventTimeStamp);


will results,

So we have to calculate difference per day, where date column is in 
datetime format where we are using result grouping as
group=true&group.query=eventTimeStamp:[2019-12-11T00:00:00Z TO 
2019-12-11T23:59:59Z]&group.query=eventTimeStamp:[2019-12-12T00:00:00Z 
TO 2019-12-12T23:59:59Z]


Using Apache solr statistics option, we are able to calculate max and 
min for whole result, But we need max and min value per day basis.


When we try to get max and min value per day basis, we are able to 
fetch either min or max using following query.

&group.sort=event1 desc or &group.sort=event1 asc

But we need both min and max in single query.

So kindly help us to go ahead.

--

Regards,
Azar@EJ



Fwd: Solr spatial search - overlapRatio of polygons

2020-01-08 Thread David Smiley
My response to a direct email (copying here with permission):

It's possible; you'll certainly have to write some code here to make this
work, including some new Solr plugin; perhaps ValueSourceParser that can
compute a more accurate overlap.  Such a thing would have to get the
Spatial4J Shape from the RptWithGeometrySpatialField (see getValues).  Then
some casting to unwrap it to get to a JTS Geometry.  All this is going to
be slow, so I propose you use Solr query re-ranking to only do this on the
top results that are based on the bounding-box overlap ratio as an
approximation.

https://lucene.apache.org/solr/guide/8_3/query-re-ranking.html
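
A rough sketch of what such a request might look like, assuming a hypothetical
function named exactOverlap() registered through your own ValueSourceParser,
and illustrative field names (bbox being a BBoxField, geom being the
RptWithGeometrySpatialField):

q={!field f=bbox score=overlapRatio}Intersects(ENVELOPE(7.0, 9.0, 48.0, 46.0))
  &rq={!rerank reRankQuery=$rrq reRankDocs=200 reRankWeight=1.0}
  &rrq={!func}exactOverlap(geom,"POLYGON((7 46, 9 46, 9 48, 7 48, 7 46))")

The main query ranks by the built-in bounding-box overlap ratio; the re-rank
query then rescores only the top 200 documents with the exact polygon overlap
computed by your plugin.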

-- Forwarded message -
From: Marc
Date: Tue, Jan 7, 2020 at 6:14 AM
Subject: Solr spatial search - overlapRatio of polygons
To: David Smiley 



Dear Mr Smiley,

I have a tricky question concerning the spatial search features of
Solr and therefore I am directly contacting you, as a specialist.

Currently I am developing a new catalogue for our map collection with
Solr. I would like to sort the search results by the overlap ratio of
the search rectangle and the polygon of the map corners. Solr provides
such a feature for comparing and sorting bounding boxes only.
But it should be possible to compare polygons with the help of JTS
functions
(locationtech.github.io/jts/javadoc/org/locationtech/jts/geom/Geometry.html).

With intersection() you can compute the geometry of the overlapping
part. Afterwards you may calculate the size of it with getArea() and
compare it with the size of the search rectangle.
Is there a way to use such JTS functions in a Solr query? Or do you
know another option to sort by the overlap ratio of polygons?


Re: Solr spatial search - overlapRatio of polygons

2020-01-08 Thread David Smiley
Your English is perfect.

I forwarded my response without your contact info.

I *do* follow solr-users, but only for certain key words like "spatial" (and
some other topics) and some words related to that domain (e.g. polygon,
etc.), so your post would have gotten my attention anyway.

On Wed, Jan 8, 2020 at 1:16 PM David Smiley  wrote:

> My response to a direct email (copying here with permission):
>
> It's possible; you'll certainly have to write some code here to make this
> work, including some new Solr plugin; perhaps ValueSourceParser that can
> compute a more accurate overlap.  Such a thing would have to get the
> Spatial4J Shape from the RptWithGeometrySpatialField (see getValues).  Then
> some casting to unwrap it to get to a JTS Geometry.  All this is going to
> be slow, so I propose you use Solr query re-ranking to only do this on the
> top results that are based on the bounding-box overlap ratio as an
> approximation.
>
> https://lucene.apache.org/solr/guide/8_3/query-re-ranking.html
>
> -- Forwarded message -
> From: Marc
> Date: Tue, Jan 7, 2020 at 6:14 AM
> Subject: Solr spatial search - overlapRatio of polygons
> To: David Smiley 
>
>
>
> Dear Mr Smiley,
>
> I have a tricky question concerning the spatial search features of
> Solr and therefore I am directly contacting you, as a specialist.
>
> Currently I am developing a new catalogue for our map collection with
> Solr. I would like to sort the search results by the overlap ratio of
> the search rectangle and the polygon of the map corners. Solr provides
> such a feature for comparing and sorting bounding boxes only.
> But it should be possible to compare polygons with the help of JTS
> functions
> (
> locationtech.github.io/jts/javadoc/org/locationtech/jts/geom/Geometry.html).
>
> With intersection() you can compute the geometry of the overlapping
> part. Afterwards you may calculate the size of it with getArea() and
> compare it with the size of the search rectangle.
> Is there a way to use such JTS functions in a Solr query? Or do you
> know another option to sort by the overlap ratio of polygons?
>
>
>


tlogs are not purged when CDCR is enabled

2020-01-08 Thread Louis
Using Solr 7.7.3-snapshot, 1 shard + 3 replicas on both the source and target clusters.

With unidirectional CDCR enabled and the buffer disabled, my understanding is
that once data is successfully forwarded to the target and committed, tlogs on
both source and target should be purged.

However, the source node doesn't purge tlogs no matter what I try (including
committing manually), while tlogs on the target are purged. (If I turn off CDCR
and import data, the tlogs are cleaned up nicely.)

So I tested with the CDCR API queries below, and there are no errors: the queue
size is 0, and the last processed version is not -1 either.

I also double-checked that the CDCR buffer is disabled on both source and
target, and CDCR (unidirectional) data replication is working fine (except for
the fact that tlogs keep growing).

What am I missing and what else should I check next?

$ curl -k
https://localhost:8983/solr/tbh_manuals_uni_shard1_replica_n2/cdcr?action=QUEUES
{
  "responseHeader":{
"status":0,
"QTime":0},
  "queues":[
"host1:8981,host2:8981,host3:8981/solr",[
  "tbh_manuals_uni",[
"queueSize",0,
"lastTimestamp","2020-01-08T23:16:26.899Z"]]],
  "tlogTotalSize":503573,
  "tlogTotalCount":278,
  "updateLogSynchronizer":"stopped"}

$ curl -k
https://localhost:8983/solr/tbh_manuals_uni_shard1_replica_n2/cdcr?action=ERRORS
{
  "responseHeader":{
"status":0,
"QTime":1},
  "errors":[
"host1:8981,host2:8981,host3:8981/solr",[
  "tbh_manuals_uni",[
"consecutiveErrors",0,
"bad_request",0,
"internal",0,
"last",[}

$ curl -k
https://localhost:8983/solr/tbh_manuals_uni_shard1_replica_n2/cdcr?action=LASTPROCESSEDVERSION
{
  "responseHeader":{
"status":0,
"QTime":0},
  "lastProcessedVersion":1655203836093005824}



I actually see some errors in the zookeeper.out file, but only on the target's
leader node, as follows. Honestly, though, I don't know what they mean.


2020-01-08 15:11:42,740 [myid:2] - INFO  [ProcessThread(sid:2
cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when
processing sessionid:0x301d2ecaf590008 type:create cxid:0xd2
zxid:0x300b4 txntype:-1 reqpath:n/a Error Path:/solr/collections
Error:KeeperErrorCode = NodeExists for /solr/collections
2020-01-08 15:11:42,742 [myid:2] - INFO  [ProcessThread(sid:2
cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when
processing sessionid:0x301d2ecaf590008 type:create cxid:0xd3
zxid:0x300b5 txntype:-1 reqpath:n/a Error
Path:/solr/collections/tbh_manuals_uni Error:KeeperErrorCode = NodeExists
for /solr/collections/tbh_manuals_uni
2020-01-08 15:11:42,744 [myid:2] - INFO  [ProcessThread(sid:2
cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when
processing sessionid:0x301d2ecaf590008 type:create cxid:0xd4
zxid:0x300b6 txntype:-1 reqpath:n/a Error
Path:/solr/collections/tbh_manuals_uni/terms Error:KeeperErrorCode =
NodeExists for /solr/collections/tbh_manuals_uni/terms
2020-01-08 15:11:42,745 [myid:2] - INFO  [ProcessThread(sid:2
cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when
processing sessionid:0x301d2ecaf590008 type:create cxid:0xd5
zxid:0x300b7 txntype:-1 reqpath:n/a Error
Path:/solr/collections/tbh_manuals_uni/terms/shard1 Error:KeeperErrorCode =
NodeExists for /solr/collections/tbh_manuals_uni/terms/shard1
2020-01-08 15:11:42,821 [myid:2] - INFO  [ProcessThread(sid:2
cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when
processing sessionid:0x301d2ecaf590005 type:create cxid:0x23c
zxid:0x300ba txntype:-1 reqpath:n/a Error Path:/solr/collections
Error:KeeperErrorCode = NodeExists for /solr/collections
2020-01-08 15:11:42,823 [myid:2] - INFO  [ProcessThread(sid:2
cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when
processing sessionid:0x301d2ecaf590005 type:create cxid:0x23d
zxid:0x300bb txntype:-1 reqpath:n/a Error
Path:/solr/collections/tbh_manuals_uni Error:KeeperErrorCode = NodeExists
for /solr/collections/tbh_manuals_uni
2020-01-08 15:11:42,825 [myid:2] - INFO  [ProcessThread(sid:2
cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when
processing sessionid:0x301d2ecaf590005 type:create cxid:0x23e
zxid:0x300bc txntype:-1 reqpath:n/a Error
Path:/solr/collections/tbh_manuals_uni/terms Error:KeeperErrorCode =
NodeExists for /solr/collections/tbh_manuals_uni/terms
2020-01-08 15:11:42,827 [myid:2] - INFO  [ProcessThread(sid:2
cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when
processing sessionid:0x301d2ecaf590005 type:create cxid:0x23f
zxid:0x300bd txntype:-1 reqpath:n/a Error
Path:/solr/collections/tbh_manuals_uni/terms/shard1 Error:KeeperErrorCode =
NodeExists for /solr/collections/tbh_manuals_uni/terms/shard1
2020-01-08 15:11:45,185 [myid:2] - INFO  [ProcessThread(sid:2
cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when
processing sessionid:0x301d2ecaf590005 type:setData cxid:0x274
zxid:0x300ce txntype:-1 reqpath:n/a Error
Pa

Re: Tlogs are not purged when CDCR is enabled

2020-01-08 Thread Louis
Another finding: no matter how I try to disable the buffer with the following
setup on the target node, it always starts out enabled.

<requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
  <lst name="buffer">
    <str name="defaultState">disabled</str>
  </lst>
</requestHandler>

Once I call the CDCR API to disable the buffer, it becomes disabled. I wonder if
https://issues.apache.org/jira/browse/SOLR-11652 is related to this issue.

How can I make the default state of the buffer disabled if this setup doesn't
work?
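
For reference, the buffer is disabled with an API call along these lines (a
sketch against the same core as in my previous mail):

$ curl -k
https://localhost:8983/solr/tbh_manuals_uni_shard1_replica_n2/cdcr?action=DISABLEBUFFER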



--
Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Edismax ignoring queries containing booleans

2020-01-08 Thread Edward Ribeiro
Hi Claire,

Unfortunately, I didn't see anything in the debug output that could
potentially be the source of the problem. Like Saurabh, I tested on a core
and it worked for me.

I suggest that you simplify the solrconfig (commenting out qf, mm, the
spellchecker config and pf, for example) and reload the core. If the query then
works, reinsert the config entries one by one, reloading the core each time, to
see when the query stops working.

A few remarks based on the snippet of the solrconfig you posted in a previous
e-mail:

* Your solrconfig.xml defines df twice (the debug shows "df":["text",
"text"]);

* There is stray whitespace (tabs, carriage returns and newlines) embedded in
the qf, pf and mm values (the debug shows sequences like "\r\n" and
"\t\t\t\t"); it would be nice to remove it - see the sketch below.
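
For example, a cleaned-up qf entry could look like this (just a sketch; the
field list is copied from your debug output, on a single line with plain
spaces):

<str name="qf">text^0.4 recordID^10.0 annotations^0.5 collectionTitle^1.9 collectionDescription^0.9 title^2.0 Test_FR^1.0 Test_DE^1.0 Test_AR^1.0 genre^1.0 genre_fr^1.0 french2^1.0</str>

and similarly for pf and mm.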

Please let us know if you find out why. :)

Best,
Edward


On Wed, Jan 8, 2020 at 13:00, Claire Pollard wrote:

> It would be lovely to be able to use range to complete my searches, but
> sadly documents aren't necessarily sequential so I might want say 18, 24 or
> 30 in future.
>
> I've re-run the query with debug on. Is there anything here that looks
> unusual? Thanks.
>
> {
>   "responseHeader":{
> "status":0,
> "QTime":75,
> "params":{
>   "mm":"\r\n   0<1 2<-1 5<-2 6<90%\r\n  ",
>   "spellcheck.collateExtendedResults":"true",
>   "df":["text",
> "text"],
>   "q.alt":"*:*",
>   "ps":"100",
>   "spellcheck.dictionary":["default",
> "wordbreak"],
>   "bf":"",
>   "echoParams":"all",
>   "fl":"*,score",
>   "spellcheck.maxCollations":"5",
>   "rows":"10",
>   "spellcheck.alternativeTermCount":"5",
>   "spellcheck.extendedResults":"true",
>   "q":"recordID:(18 OR 19 OR 20)",
>   "defType":"edismax",
>   "spellcheck.maxResultsForSuggest":"5",
>   "qf":"\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\ttext^0.4 recordID^10.0
> annotations^0.5 collectionTitle^1.9 collectionDescription^0.9 title^2.0
> Test_FR^1.0 Test_DE^1.0 Test_AR^1.0 genre^1.0 genre_fr^1.0
> french2^1.0\r\n\n\t\t\t\t\n\t\t\t",
>   "spellcheck":"on",
>   "pf":"\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t\ttext^0.2 recordID^10.0
> annotations^0.6 collectionTitle^2.0 collectionDescription^1.0 title^2.1
> Test_FR^1.1 Test_DE^1.1 Test_AR^1.1 genre^1.1 genre_fr^1.1
> french2^1.1\r\n\n\t\t\t\t\n\t\t\t",
>   "spellcheck.count":"10",
>   "debugQuery":"on",
>   "_":"1578499092576",
>   "spellcheck.collate":"true"}},
>   "response":{"numFound":0,"start":0,"maxScore":0.0,"docs":[]
>   },
>   "spellcheck":{
> "suggestions":[],
> "correctlySpelled":false,
> "collations":[]},
>   "debug":{
> "rawquerystring":"recordID:(18 OR 19 OR 20)",
> "querystring":"recordID:(18 OR 19 OR 20)",
> "parsedquery":"+((recordID:[18 TO 18]) (recordID:[19 TO 19])
> (recordID:[20 TO 20]))~2 DisjunctionMaxQuery(((text:\"19 20\"~100)^0.2 |
> (annotations:\"19 20\"~100)^0.6 | (collectionTitle:\"19 20\"~100)^2.0 |
> collectionDescription:\"19 20\"~100 | (title:\"19 20\"~100)^2.1 |
> (Test_FR:\"19 20\"~100)^1.1 | (Test_DE:\"19 20\"~100)^1.1 | (Test_AR:\"19
> 20\"~100)^1.1))",
> "parsedquery_toString":"+((recordID:[18 TO 18] recordID:[19 TO 19]
> recordID:[20 TO 20])~2) ((text:\"19 20\"~100)^0.2 | (annotations:\"19
> 20\"~100)^0.6 | (collectionTitle:\"19 20\"~100)^2.0 |
> collectionDescription:\"19 20\"~100 | (title:\"19 20\"~100)^2.1 |
> (Test_FR:\"19 20\"~100)^1.1 | (Test_DE:\"19 20\"~100)^1.1 | (Test_AR:\"19
> 20\"~100)^1.1)",
> "explain":{},
> "QParser":"ExtendedDismaxQParser",
> "altquerystring":null,
> "boost_queries":null,
> "parsed_boost_queries":[],
> "boostfuncs":[""],
> "timing":{
>   "time":75.0,
>   "prepare":{
> "time":35.0,
> "query":{
>   "time":35.0},
> "facet":{
>   "time":0.0},
> "facet_module":{
>   "time":0.0},
> "mlt":{
>   "time":0.0},
> "highlight":{
>   "time":0.0},
> "stats":{
>   "time":0.0},
> "expand":{
>   "time":0.0},
> "terms":{
>   "time":0.0},
> "spellcheck":{
>   "time":0.0},
> "debug":{
>   "time":0.0}},
>   "process":{
> "time":38.0,
> "query":{
>   "time":29.0},
> "facet":{
>   "time":0.0},
> "facet_module":{
>   "time":0.0},
> "mlt":{
>   "time":0.0},
> "highlight":{
>   "time":0.0},
> "stats":{
>   "time":0.0},
> "expand":{
>   "time":0.0},
> "terms":{
>   "time":0.0},
> "spellcheck":{
>   "time":6.0},
> "debug":{
>   "time":1.0}
>
> -Original Message-
> From: Edward Ribeiro 
> Sent: 07 January 2020 01:05
> To: solr-user@lucene.apache.org
> Subject: Re: Edismax ignoring queries containing booleans
>
> Hi Claire,
>
> You can add the following parameter `&debug=all` on the URL to bring back
> debugging info and share with us (if you are using the Solr admin UI you
> should check the `debugQuery`

Re: JOIN query

2020-01-08 Thread Mikhail Khludnev
Hi, Paresh.

I'm afraid the only way is to join them back in post-processing with the
[subquery] document transformer:
https://lucene.apache.org/solr/guide/6_6/transforming-result-documents.html#TransformingResultDocuments-_subquery_
Although, I'm not sure whether it will work with your particular collections.
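
A rough sketch of what it might look like (field names taken from your example;
exact parameters may need adjusting, and in SolrCloud you would pass the target
collection via fromC1.collection rather than fromIndex):

q=*:*&fl=id,col1,col2,col3,fromC1:[subquery fromIndex=collection1]
  &fromC1.q={!terms f=prop1 v=$row.col1}
  &fromC1.fl=id,prop2,prop3

Each collection2 document in the response then carries a nested fromC1 result
with the matching collection1 document(s).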

On Wed, Jan 8, 2020 at 3:42 PM Paresh  wrote:

> Hi,
>
> I have two collections: collection1 and collection2
> I have fields like -
> colleciton1: id, prop1, prop2, prop3
> collection2: id, col1, col2, col3
>
> I am doing a join query with collection1.prop1 = collection2.col1 on
> collection2.
>
> As a result, I can get any field from collection2 in 'fl'.
>
> Is there any way to get field from collection1 while performing query from
> collection2 joining with collection1?
>
>
> Regards,
> Paresh
>
>
>
> --
> Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>


-- 
Sincerely yours
Mikhail Khludnev