How are you submitting the query to the two collections? Aliasing to them both?
The simplest approach would be just to index the name of the collection with
each doc and return that field.
Best,
Erick
On Thu, Jun 22, 2017 at 6:05 PM, Jagrut Sharma wrote:
> I'm submitting a search term to SolrCloud to query 2
I'm submitting a search term to SolrCloud to query 2 collections. The
response that comes back does not have the collection name from which the
result came.
Is it possible to know the collection which returned the result?
Thanks.
--
Jagrut
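Erick's suggestion can be sketched outside Solr. The field name `collection_s` and the document shapes below are illustrative, not part of any real schema; the idea is simply that a stored field stamped at index time comes back with every hit, no matter which collection an alias fans the query out to:

```python
# Sketch: stamp each document with a stored field naming its source
# collection before indexing; the field then returns with every hit.
def tag_with_collection(docs, collection_name):
    return [dict(doc, collection_s=collection_name) for doc in docs]

books = tag_with_collection([{"id": "b1", "title": "Solr in Action"}], "books")
films = tag_with_collection([{"id": "f1", "title": "Metropolis"}], "films")

# After querying both collections through an alias, the merged results
# still say where each doc came from:
for doc in books + films:
    print(doc["collection_s"], doc["id"])
```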
deniz,
I was going to add something here. The reason that what you want is probably
hard to do is that you are asking Solr, which stores documents, to return
documents using an attribute of document pairs. As a thought exercise only,
if you stored record pairs as a single document, you could proba
OK. We’re going with a separate call to /suggest. For those of us with
controlled vocabularies, a suggest.distrib would be a handy thing.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Jun 22, 2017, at 4:32 PM, Alessandro Benedetti
> wrote:
>
>
Hi Walter,
As I mentioned in the first mail, I don't think [1] will help; I was
referring to the source code to explain that, in my opinion, such a feature
is not available.
Looking at the source code (the JavaDoc is not enough), that class
presents the suggester params, and there is no param for the
First, I wouldn't use regex queries; they're expensive.
WordDelimiter(Graph)Filter is designed for these use cases; have you
considered it?
And what do you mean by "special dash character issue"? Yes, the dash is the
NOT operator, but you can always escape it.
Best,
Erick
On Thu, Jun 22, 2017 at 1:54 PM
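To make the escaping point concrete, here is a small client-side helper. This is my own sketch, not a Solr or SolrJ API; it backslash-escapes the Lucene/Solr query syntax characters, including the dash that otherwise acts as the NOT operator:

```python
import re

def escape_solr_term(term):
    """Backslash-escape Lucene/Solr query syntax characters, including
    the '-' that otherwise acts as the NOT operator."""
    return re.sub(r'([+\-&|!(){}\[\]^"~*?:\\/])', r'\\\1', term)

print(escape_solr_term("123-45-6789"))  # 123\-45\-6789
```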
Yes, that was the missing piece. Thanks a lot!
On Thu, Jun 22, 2017 at 5:20 PM, Joel Bernstein wrote:
> Here is the pseudo code:
>
> rollup(sort(fetch(gatherNodes(
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Thu, Jun 22, 2017 at 5:19 PM, Joel Bernstein
> wrote:
>
> > You'll ne
Here is the pseudo code:
rollup(sort(fetch(gatherNodes(
Joel Bernstein
http://joelsolr.blogspot.com/
On Thu, Jun 22, 2017 at 5:19 PM, Joel Bernstein wrote:
> You'll need to use the sort expression to sort the nodes by schemaType
> first. The rollup expression is doing a MapReduce rollup th
You'll need to use the sort expression to sort the nodes by schemaType
first. The rollup expression is doing a MapReduce rollup that requires the
records to be sorted by the "over" fields.
Joel Bernstein
http://joelsolr.blogspot.com/
On Thu, Jun 22, 2017 at 2:49 PM, Pratik Patel wrote:
> Hi
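Joel's point about sorting can be modeled in plain Python. This is a toy sketch of a streaming rollup, not Solr code: a MapReduce-style rollup aggregates contiguous runs of equal keys, which is exactly why unsorted input produces duplicate buckets:

```python
from itertools import groupby

def rollup(tuples, over):
    """Toy rollup: one aggregate per *contiguous* group, which is why
    the input must already be sorted by the 'over' field."""
    for key, group in groupby(tuples, key=lambda t: t[over]):
        yield {over: key, "count(*)": sum(1 for _ in group)}

nodes = [{"schemaType": "B"}, {"schemaType": "A"}, {"schemaType": "B"}]
print(list(rollup(nodes, "schemaType")))   # duplicate "B" buckets
nodes.sort(key=lambda t: t["schemaType"])
print(list(rollup(nodes, "schemaType")))   # one tuple per bucket
```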
Hi,
How can I search for an SSN regex pattern in a way that overcomes the
special dash character issue?
As you know, /[0-9]{3}-[0-9]{2}-[0-9]{4}/ will not work as intended.
Kind Regards,
Furkan KAMACI
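For comparison, the pattern itself is sound outside Solr's query parser; the trouble is the query syntax, not the regex. A quick check in Python (the sample text is made up):

```python
import re

# The same SSN shape the Solr regex query was attempting to match.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
text = "Contact: 123-45-6789 (sample data, not a real SSN)"
print(SSN.findall(text))  # ['123-45-6789']
```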
Solr doesn't have built-in support for NER, but you can try its UIMA integration
with external third-party suppliers:
https://cwiki.apache.org/confluence/display/solr/UIMA+Integration
-Original message-
> From:FOTACHE CHRISTIAN
> Sent: Thursday 22nd June 2017 19:03
> To: Solr-user
> S
Hi Joel,
I am able to reproduce this in a simple way. It looks like the let stream
has some issues. The complement function below works fine if I execute it
outside let and returns an EOF:true tuple, but if a tuple with EOF:true is
assigned to a let variable, it gets changed to EXCEPTION "Index 0, Size 0"
etc.
The problem with the suggest response is that the suggest.q value is used as an
attribute in the JSON response. That is just weird.
Is there some way to put in a wildcard in the Velocity selector?
“$response.response.terms.name” works for /terms, but /suggest is different.
And I’m running two s
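The dynamic-key problem Walter describes can be worked around in client code by iterating over the keys instead of hard-coding a path. The response shape below is a hand-written approximation of a suggester response, with illustrative names throughout:

```python
# Approximate shape of a /suggest response: the suggest.q value itself
# ("solr" here) appears as a JSON key, so fixed selectors can't reach it.
response = {
    "suggest": {
        "mySuggester": {
            "solr": {  # <- the suggest.q term, awkward for templates
                "numFound": 1,
                "suggestions": [{"term": "solr in action", "weight": 10}],
            }
        }
    }
}

def suggestions_for(resp, suggester):
    block = resp["suggest"][suggester]
    # The single dynamic key is the query term; unpack it rather than
    # hard-coding a path the way a Velocity selector would have to.
    (term_block,) = block.values()
    return [s["term"] for s in term_block["suggestions"]]

print(suggestions_for(response, "mySuggester"))  # ['solr in action']
```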
Hi,
I have a streaming expression which uses the rollup function. My understanding
is that rollup takes an incoming stream and aggregates over the given buckets.
However, with the following query the result contains duplicate tuples.
Following is the streaming expression.
rollup(
fetch(
collecti
Please let me know if I should create a JIRA; I can provide both
expressions and data to reproduce the issue.
On Thu, Jun 22, 2017 at 11:23 AM, Susheel Kumar
wrote:
> Yes, I tried building up the expression piece by piece, but it looks like
> there is an issue with how complement expects / behaves with sort.
>
>
I really don’t understand [1]. I read the JavaDoc for that, but how does it
help? What do I put in the solrconfig.xml?
I’m pretty good at figuring out Solr stuff. I started with Solr 1.2.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Jun 22, 2017
I need to enable an NER plugin in Solr 6.x in order to extract locations from
the text when committing documents to Solr.
How can I achieve this in the simplest way possible? Please help.
Christian Fotache Tel: 0728.297.207
"They're just files, man." If you can afford a bit of downtime, you can
shut your Solr down and recursively copy the data directory from your
source to your destination: scp, rsync, whatever, then restart Solr.
Do take some care when copying between Windows and *nix that you do a
_binary_ transfer.
I
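Erick's "just files" advice as a runnable sketch. The temp directories below stand in for the real Solr data dirs (e.g. something like `/var/solr/data/<core>/data`); across machines, rsync or scp would do the same job:

```python
import pathlib, shutil, tempfile

# Sketch: with Solr stopped, an index really is "just files" -- a
# recursive, binary-safe copy of the core's data directory is enough.
src = pathlib.Path(tempfile.mkdtemp()) / "data"
(src / "index").mkdir(parents=True)
(src / "index" / "segments_1").write_bytes(b"\x3f\xd7\x6c\x17")  # fake segment
dst = pathlib.Path(tempfile.mkdtemp()) / "data"
shutil.copytree(src, dst)  # byte-for-byte copy, no text-mode mangling
print((dst / "index" / "segments_1").read_bytes())
```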
Usually we index directly into the prod Solr rather than copying from
local/lower environments. If that works in your scenario, I would suggest
indexing directly into prod rather than copying/restoring from the local
Windows env to Linux.
On Thu, Jun 22, 2017 at 12:13 PM, Moritz Michael
wrote:
BTW, is there a better/recommended way to transfer an index to
another Solr?
On Thu, Jun 22, 2017 at 6:09 PM +0200, "Moritz Michael"
wrote:
Hello Michael,
I used the backup functionality to create a snapshot and uploaded this
snapshot, so I feel it should be safe.
I'll try it again. Maybe the copy operation wasn't successful.
Best,
Moritz
Sorry for the typo.
Facing a weird behavior when using hashJoin / innerJoin etc. The below
expression displays the tuples from variable a, as shown below:
let(a=fetch(SMS,having(rollup(over=email,
count(email),
select(search(SMS,
q=*:*,
Hello Joel,
Facing a weird behavior when using hashJoin / innerJoin etc. The below
expression displays the tuples from variable a, but the moment I use get on
an innerJoin / hashJoin expr on variable c
let(a=fetch(SMS,having(rollup(over=email,
count(email),
select(searc
Yes, I tried building up the expression piece by piece, but it looks like
there is an issue with how complement expects / behaves with sort.
If I use the below g and h exprs inside complement, which are already
sorted (via sort), then it doesn't work:
e=select(get(c),id,email),
f=select(get(d),id,email),
g=sort(get(e)
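For intuition about why complement cares about sort order: like rollup, it is a streaming, merge-style operation that assumes both inputs arrive sorted on the comparison field. A toy Python model (not Solr code; the field name "id" is illustrative):

```python
def complement(a, b, on="id"):
    """Emit tuples from sorted stream a whose key never appears in sorted
    stream b -- a single merge pass, which only works on sorted input."""
    out, j = [], 0
    for t in a:
        while j < len(b) and b[j][on] < t[on]:
            j += 1
        if j >= len(b) or b[j][on] != t[on]:
            out.append(t)
    return out

a = [{"id": 1}, {"id": 2}, {"id": 4}]
b = [{"id": 2}, {"id": 3}]
print(complement(a, b))  # [{'id': 1}, {'id': 4}]
```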
I suspect something is wrong in the syntax but I'm not seeing it.
Have you tried building up the expression piece by piece until you get the
syntax error?
Joel Bernstein
http://joelsolr.blogspot.com/
On Wed, Jun 21, 2017 at 3:20 PM, Susheel Kumar
wrote:
> While simple complement works in this
Running "ant eclipse" or "ant test" in verbose mode will tell you the
exact lib in the ivy2 cache which is corrupt. Delete that particular lib and
run "ant" again. Also, don't exit the "ant" command via Ctrl+C or Ctrl+V
while it is downloading the libraries to the ivy2 folder.
Sometimes I've seen something like this when the ivy cache is corrupt. It's
a pain since it takes a while to re-download things, but you might try
removing that entire cache. On my Mac it's 'rm -rf ~/.ivy2/cache'.
Erick
On Thu, Jun 22, 2017 at 3:39 AM, Susheel Kumar
wrote:
> Hello,
>
> Am i miss
Thanks for your help Alessandro!
Ryan
On Wed, 21 Jun 2017 at 19:25 alessandro.benedetti
wrote:
> Hi Ryan,
> first thing to know is that Learning To Rank is about relevancy and
> specifically it is about to improve your relevancy function.
> Deciding if to use or not LTR has nothing to do with
I suspect Erik's right that clean=true is the problem. That's the default
in the DIH interface.
I find that when using DIH, it's best to set preImportDeleteQuery for every
entity. This safely scopes the clean variable to just that entity.
It doesn't look like the docs have examples of using preIm
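A hedged sketch of what that looks like in a DIH config. The entity name, SQL query, and the `entity_type` marker field are all illustrative; `preImportDeleteQuery` itself is a real DIH entity attribute:

```xml
<document>
  <!-- clean=true now only deletes this entity's own docs -->
  <entity name="product"
          query="SELECT id, name, 'product' AS entity_type FROM products"
          preImportDeleteQuery="entity_type:product">
    <field column="id" name="id"/>
    <field column="name" name="name"/>
  </entity>
</document>
```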
Hi Moritz,
did you stop your local Solr server before copying? Copying data from a
running instance may cause headaches.
If yes, what happens if you copy everything again? It seems that your
copy operation wasn't successful.
Best,
Michael
Am 22.06.2017 um 14:37 schrieb Moritz Munte:
> Hello,
>
>
>
>
Hello,
I created an index on my local machine (Windows 10) and it works fine there.
After uploading the index to the production server (Linux), the server shows
an error:
java.util.concurrent.ExecutionException:
org.apache.solr.common.SolrException: Unable to create core
[contentselect_v3]
Hello,
Am I missing something, or is the source code broken? I took the latest code
from master, and when doing "ant eclipse" or "ant test" I get the error below.
ivy-configure:
[ivy:configure] :: loading settings :: file =
/Users/kumars5/src/git/code/lucene-solr/lucene/top-level-ivy-settings.xml
res
That would indeed be great! Does anyone know if there is a specific reason
for this or has it just not been implemented?
Jeffery Yuan schrieb am Di., 20. Juni 2017, 22:54:
>
> FuzzyLookupFactory is great as it can still find matches even if users
> mis-spell.
>
> context filtering is also great,
I am trying to confirm my understanding of MLT after going through the
following page:
https://cwiki.apache.org/confluence/display/solr/MoreLikeThis.
Three approaches are mentioned:
1) Use it as a request handler and send text to the MoreLikeThis request
handler as needed.
2) Use it as a search compo
A short answer seems to be no [1].
On the other hand, I discussed this in a couple of related Jira issues in the
past, as I (and other people) believe we should always return unique
suggestions anyway [2].
Although a year has passed, neither I nor others have actually progressed on
that issue :(
[1] o
Ok. It should be something like the scratch below (I'm sorry for the full
package names).
The snippet below requests /dataimport status on the db core and yields the
"idle" string:
org.apache.solr.client.solrj.impl.HttpSolrClient solrj=
new org.apache.solr.client.solrj.impl.HttpSolrClient.Builder()
.with