Hello,
I have the following statement:
({!join from=project_uuid to=id}type:EM_PM_Timerecord AND
created:[2015-01-01T01:00:00Z TO 2016-01-01T01:00:00Z]) OR ({!join
from=project_uuid to=id}type:EM_CM_Request_Member AND
created:[2015-01-01T01:00:00Z TO 2016-01-01T01:00:00Z])
It doesn't return any results.
I tested it with Solr versions 6.1.0 and 6.2.1.
Thanks,
Mathias
Couple more good links for this:
https://lucidworks.com/blog/2013/06/13/solr-cloud-document-routing/
and
http://stackoverflow.com/questions/15678142/how-to-add-shards-dynamically-to-collection-in-solr
(see Jay's answer about implicit routers - it's a better explanation than the
docs in my view!)
As I understand it, for non-SolrCloud-aware clients you have to manually
load-balance your searches; see ymonad's answer here:
http://stackoverflow.com/questions/22523588/loadbalancer-and-solrcloud
This is from 2014, so maybe this has changed by now - I'd be interested to
know as well.
Also, for in
Hello! I've got a problem with excluding a filter query when using the JSON
Facet API.
My query:
q=*
fq={!tag=fieldA} fieldA:"valueA"
fq={!tag=fieldB} fieldB:"valueB"
and there are no documents with both fieldA:"valueA" and fieldB:"valueB",
so the docs list is empty.
Then, if I use
facet=true&facet.field={!ex=fi
On 07/10/2016 10:52, Charlie Hull wrote:
Hi all,
We're running a Lucene hackday in London - you can follow along with
Twitter using hashtag #LuceneSolrLondon and see what we're doing on
Github at https://github.com/flaxsearch/london-hackday-2016 - as the
README shows we're currently looking at:
That was great fun, especially being able to talk to contributors and
committers without them running off to another (or their own)
presentation.
Just as a quick update on JIRA reports: JIRA does provide some of the
additional information I need (with the expand flag).
However, http://jirasearch.mikemc
try ({!join from=project_uuid to=id}(type:EM_PM_Timerecord AND
created:[2015-01-01T01:00:00Z TO 2016-01-01T01:00:00Z])) OR ({!join
from=project_uuid to=id}(type:EM_CM_Request_Member AND
created:[2015-01-01T01:00:00Z TO 2016-01-01T01:00:00Z]))
or
({!join from=project_uuid to=id v=$q1}) OR ({!join
f
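Spelled out in full, that parameter-dereferencing form might look like this
(a sketch; the q1/q2 parameter names are illustrative):

  q=({!join from=project_uuid to=id v=$q1}) OR ({!join from=project_uuid to=id v=$q2})
  q1=type:EM_PM_Timerecord AND created:[2015-01-01T01:00:00Z TO 2016-01-01T01:00:00Z]
  q2=type:EM_CM_Request_Member AND created:[2015-01-01T01:00:00Z TO 2016-01-01T01:00:00Z]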
Thanks John. I got it sorted, but the part you pointed out still looks
confusing. IMHO it should be "You could also use the _route_ parameter
to name a specific shard *when ingesting documents, so SolrCloud will
route your document to a specific shard*."
Cheers.
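For reference, with the implicit router that might look something like this
(collection and shard names are made up):

  curl "http://localhost:8983/solr/mycollection/update?_route_=shard2&commit=true" \
    -H "Content-Type: application/json" -d '[{"id":"doc1"}]'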
On 20/10/16 19:14, John Bickerstaf
With the first version I get the following error:
"org.apache.solr.search.SyntaxError: Cannot parse '(type:EM_PM_Timerecord':
Encountered \"\" at line 1, column 22.\nWas expecting one of:\n
...\n ...\n ...\n\"+\" ...\n\"-\" ...\n
...\n\"(\" ...\n\")\" ...\n\"*\"
On Fri, Oct 21, 2016 at 7:07 AM, Mathias
wrote:
> With the first version I get the following error:
>
> "org.apache.solr.search.SyntaxError: Cannot parse '(type:EM_PM_Timerecord':
> Encountered \"<EOF>\" at line 1, column 22.\nWas expecting one of:\n
> <AND> ...\n <OR> ...\n <NOT> ...\n\"+\" ...\n\"-\"
That's this issue:
https://issues.apache.org/jira/browse/SOLR-9519
-Yonik
On Fri, Oct 21, 2016 at 5:34 AM, Никита Веневитин
wrote:
> Hello! I've got a problem with excluding a filter query when using the JSON
> Facet API.
> My query:
>
> q=*
> fq={!tag=fieldA} fieldA:"valueA"
> fq={!tag=fieldB} fieldB:
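For reference, in the JSON Facet API the analogue of facet.field={!ex=...} is
the excludeTags domain option. A minimal sketch, reusing the fieldA/fieldB
tags from the query above:

  json.facet={
    fieldA: { type: terms, field: fieldA, domain: { excludeTags: fieldA } },
    fieldB: { type: terms, field: fieldB, domain: { excludeTags: fieldB } }
  }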
Yonik Seeley wrote
> On Fri, Oct 21, 2016 at 7:07 AM, Mathias
> <
> mathias.mahlknecht@
> > wrote:
>> With the first version I get the following error:
>>
>> "org.apache.solr.search.SyntaxError: Cannot parse
>> '(type:EM_PM_Timerecord':
>> Encountered \"<EOF>\" at line 1, column 22.\nWas expecti
Hi All,
I have a requirement where in SQL we have two different sets of data, like
Company and Contact.
We are planning to move this to Solr, and I wanted to know whether we can have
two separate collections in Solr and have a link between them with, say, the id
of one collection, or if there an
Hi,
Check the documentation on the join parser:
https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherParsers-JoinQueryParser
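For example, a minimal cross-collection join sketch (assuming a "companies"
collection and a "contacts" collection with a company_id field; names are
made up):

  q={!join from=id fromIndex=companies to=company_id}name:Acme

Run against the contacts collection, this would return contacts whose
company_id matches the id of a company named Acme.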
Regards,
Adi
On Fri, Oct 21, 2016, 5:11 PM Preeti Bhat wrote:
> Hi All,
>
> I have a requirement where in SQL we have two different sets of data, like
> Company and
Thank you!
2016-10-21 14:31 GMT+03:00 Yonik Seeley :
> That's this issue:
> https://issues.apache.org/jira/browse/SOLR-9519
>
> -Yonik
>
>
> On Fri, Oct 21, 2016 at 5:34 AM, Никита Веневитин
> wrote:
> > Hello! I've got a problem with excluding a filter query when using the
> > JSON Facet API.
> > My qu
I just realized that I made an assumption about your initial question that may
not be true.
Everything I've said has been based on handling requests to add/update
documents during the indexing process. That process involves the "leader
first" concept I've been mentioning.
So to answer your or
You may want to check out these options below as well:
https://cwiki.apache.org/confluence/display/solr/Advanced+Distributed+Request+Options
https://cwiki.apache.org/confluence/display/solr/Streaming+Expressions#StreamingExpressions-innerJoin
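For example, a streaming innerJoin sketch (collection and field names are made
up; both streams must be sorted on their join keys):

  innerJoin(
    search(companies, q="*:*", fl="id,name", sort="id asc", qt="/export"),
    search(contacts, q="*:*", fl="company_id,email", sort="company_id asc", qt="/export"),
    on="id=company_id"
  )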
On Fri, Oct 21, 2016 at 7:49 AM, Adi wrote:
> Hi,
>
Hi,
I'm running SolrCloud in foreground mode (-f). Does it make a difference
for Solr if I stop it by pressing Ctrl-C, sending it a SIGTERM, or using
"solr stop"?
regards,
Hendrik
>>> Yes, that's possible. It's what I was thinking about when I mentioned
>>>"...general case flow". That capability is relatively new, and not the
>>>default, which is why I didn't mention it.
Yes, thought you probably meant that, was just adding it explicitly.
>>> And load balancing for relia
On 10/21/2016 6:56 AM, Hendrik Haddorp wrote:
> I'm running solrcloud in foreground mode (-f). Does it make a
> difference for Solr if I stop it by pressing ctrl-c, sending it a
> SIGTERM or using "solr stop"?
All of those should produce the same result in the end -- Solr's
shutdown hook will be
bq: I did hope that SolrCloud would have a standard load balancing
mechanism for all client types rather than just those using a specific
Java library.
It does, for queries. There is a software load balancer, as Garth
mentioned; the "aggregator" node can be farmed out. But for queries
you want to u
Join queries don't work across sharded collections. Well,
there's a special case where the "from" collection can be hosted
in toto on every replica the "to" collection is hosted on, but
If you can denormalize the data, that's always the first option.
Whenever I find myself trying to
express s
Hi Daniel. I noticed a post you had about using solr_url in logstash. We
just started attempting to index into solr yesterday.
Previously we were using logstash to index .csv files into elastic search.
I cannot get indexing into Solr working. I can't find any examples of what
the conf fil
Hi Shawn,
Thanks for the thoughtful response on middleware and the solr philosophy.
You are correct and I intend to handle this outside of Solr. This inquiry
was me doing some forethought on a distant project. When I see an
XSLTResponseWriter the jump-to-conclusions part of my brain jumps to PDF.
bq: So, a node is part of the cluster but no collections? How can we
add a node to cloud without active participation?
See the collections API create command, in particular the
createNodeSet. You can specify exactly what Solr instances the
collection is created on so you can have two collections u
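Something like this, for instance (host names made up):

  http://localhost:8983/solr/admin/collections?action=CREATE&name=collectionA&numShards=1&replicationFactor=1&createNodeSet=host1:8983_solr,host2:8983_solr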
On 21 October 2016 at 09:58, Matthew Roth wrote:
> . I could always process the upstream relational data to
> produce my PDF reports.
I think this is the best option. This allows you to
mangle/de-normalize your data stored in Solr to be the best fit for
search.
Regards,
Alex.
Solr Exampl
Don Tavoletti,
I'm not sure you mean "me" by Daniel, despite that being my name. There is a
LogStash output plugin to output to Solr:
https://www.elastic.co/guide/en/logstash/current/plugins-outputs-solr_http.html
For really simple use cases, there is also a LogStash input plugin for JDBC:
htt
The best way is to look at your Solr logs. When you see the commit
message, you'll see things like
"start
commit{,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}"
that ought to work, as should something like:
curl blah blah/update?sof
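Spelled out, assuming a collection named "collection1", that curl might be:

  curl "http://localhost:8983/solr/collection1/update?softCommit=true"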
If the PDF report is truly a report, I agree with this. We have a use-case
with IBM InfoSphere Watson Explorer where our users want a PDF report on the
results for their query to be generated on the fly. They can then save the
query and have the report emailed to them :) Not only is Solr m
Thanks Shawn - We've had to increase this to 300 seconds when using a
large cache size with HDFS, and a fairly heavily loaded index routine (3
million docs per day). I don't know if that's why it takes a long time
to shut down, but it can take a while for SolrCloud to shut down
gracefully. If
Upon further investigation this is a bug in Solr.
If I change the order of my interval definitions to be Negative, Zero,
Positive, instead of Negative, Positive, Zero it correctly assigns the
document with the zero value to the Zero interval.
I dug into the 5.3.1 code and the problem is in the
or
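For context, the kind of interval definitions involved might look like this
(the field name is made up):

  facet=true&facet.interval=amount
    &f.amount.facet.interval.set={!key=Negative}(*,0)
    &f.amount.facet.interval.set={!key=Zero}[0,0]
    &f.amount.facet.interval.set={!key=Positive}(0,*)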
Useless shit which should be deleted from the Internet, because this
confuses people instead of helping them.
On 21/10/16 09:46, hairymccla...@yahoo.com.INVALID wrote:
Couple more good links for this:
https://lucidworks.com/blog/2013/06/13/solr-cloud-document-routing/
and
http://stackoverflow
Which link are you talking about?
On Friday, October 21, 2016 8:09 PM, Customer
wrote:
Useless shit which should be deleted from the Internet, because this
confuses people instead of helping them.
On 21/10/16 09:46, hairymccla...@yahoo.com.INVALID wrote:
> Couple more good links for
> I think this is the best option.
I really do too once I think about it some more. Rubber Ducky strikes
again. Once I say it aloud--in this case type it out--it seems much clearer
what the answer is to this question.
Thanks again. I've really appreciated all the feedback on this question.
Matt
Can someone please help me troubleshoot my Solr 6.0 highlighting issue? I
have a production Solr 4.9.0 unit configured to highlight responses and it
has worked for a long time now without issues. I have recently been testing
Solr 6.0 and have been unable to get highlighting to work. I used my 4.9
c
Sowmya,
My memory is that the cache feature does not work with Delta Imports. In fact,
I believe that nearly all DIH features except straight JDBC imports do not work
with Delta Imports. My advice is to not use the Delta Import feature at all as
the same result can (often more-efficiently) be
Hi Andy
> How should I proceed from here?
I'd say this qualifies as an issue in JIRA - if you're able to come up with
a test, that would be great, but not needed
Patches are typically created against the master branch, but as long as you
include all needed information (version, file, ..) - we'r
Thanks Joel.
I will migrate to Solr 6.0.0.
However, I have one more question. Have you come across any discussion
about Spark-on-Solr corrupting the data?
So, I am getting the JSONParse exceptions only for a collection on which I
tried loading the data using Spark Dataframe API (which internally
Just to add to my previous question: I used dynamic shard splitting
while consuming data from the Solr collection using /export handler.
On Fri, Oct 21, 2016 at 2:27 PM, Chetas Joshi
wrote:
> Thanks Joel.
>
> I will migrate to Solr 6.0.0.
>
> However, I have one more question. Have you come
I've got a DocExpirationUpdateProcessorFactory configured to periodically
remove expired documents from the Solr index, which is working in that the
documents no longer show up in queries once they've reached their expiration
date. But the index size isn't reduced when they expire, and I'm wondering if i
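For reference, a typical configuration sketch for that factory (field names
and the delete period are made up):

  <updateRequestProcessorChain name="add-expiration" default="true">
    <!-- periodically deletes docs whose expiration timestamp has passed -->
    <processor class="solr.processor.DocExpirationUpdateProcessorFactory">
      <str name="ttlFieldName">time_to_live</str>
      <str name="expirationFieldName">expire_at</str>
      <int name="autoDeletePeriodSeconds">86400</int>
    </processor>
    <processor class="solr.LogUpdateProcessorFactory"/>
    <processor class="solr.RunUpdateProcessorFactory"/>
  </updateRequestProcessorChain>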
Are you indexing to the collection? In the "usual" case,
as documents get added to the index, background
merging will reclaim the occupied space eventually, see
McCandless' excellent visualization here:
http://blog.mikemccandless.com/2011/02/visualizing-lucenes-segment-merges.html
The third animat
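If you need the space back sooner, one option is a commit with expungeDeletes
(a sketch, assuming a collection named "mycollection"; it forces merging of
segments containing deleted docs and can be expensive):

  curl "http://localhost:8983/solr/mycollection/update?commit=true&expungeDeletes=true"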