Thanks both!
I already tried "&debug=true", but it doesn't tell me that much... Or at
least, I don't see any problem...
Below are the responses...
1. /select?q=hospital AND _query_:"{!q.op=AND v=$a}"&fl=abstract,title&a=hospital Leapfrog&debug=true
0
280
hospital AND _query_:"{!q.op=AND
Hi Joel,
I wiped my repo and started fresh with the hope of documenting my steps and
producing a stacktrace for you - of course, this time innerJoin() works
fine. :-) I must have gotten something sideways the first time.
Thanks for your time. I look forward to diving into this really cool
featu
Hello, all! I'm a BloombergBNA employee and need to obtain/write a
dtSearch parser for Solr (and probably a bunch of other things a little
later).
I've looked at the available parsers and thought that the surround parser
may do the trick, but it apparently doesn't like nested N or W subqueries.
I
Hi,
I'm trying to update the set-property option in security.json
authentication section. As per the documentation,
"Set arbitrary properties for authentication plugin. The only supported
property is 'blockUnknown'"
https://cwiki.apache.org/confluence/display/solr/Basic+Authentication+Plugin
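For reference, the set-property command from that documentation is a small JSON body posted to the authentication API. A minimal Python sketch of building and checking that payload (the endpoint path and credentials in the comment are the standard defaults, used here as placeholders):

```python
import json

# Payload for the Basic Authentication plugin's set-property command, per the
# wiki page above. 'blockUnknown' is the only supported property it documents.
payload = {"set-property": {"blockUnknown": True}}
body = json.dumps(payload)
print(body)

# It would typically be POSTed to the authentication endpoint, e.g.:
#   curl -u solr:SolrRocks http://localhost:8983/solr/admin/authentication \
#        -H 'Content-type:application/json' \
#        -d '{"set-property": {"blockUnknown": true}}'
```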
: https://lucidworks.com/blog/2014/05/07/document-expiration/
:
: I followed each step, but nothing happens. I was expecting to see the expiry
: automatically set by the multiple mechanisms configured.
if you are not seeing the configured expirationFieldName added to your
docs, then it sounds like
Hi Team,
I followed the walkthrough below for auto-expiry configuration:
https://lucidworks.com/blog/2014/05/07/document-expiration/
I followed each step, but nothing happens. I was expecting to see the expiry
automatically set by the multiple mechanisms configured.
Does it work differently in 6.0.0 than
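For context, the walkthrough configures an update processor chain along these lines. This is a hedged sketch: the chain name, field names, and delete period are examples, and the chain must actually be applied to your updates (e.g. by being the default chain) for the expiry field to be added:

```xml
<updateRequestProcessorChain name="add-expiration" default="true">
  <processor class="solr.processor.DocExpirationUpdateProcessorFactory">
    <!-- example names: the per-document TTL field and the computed expiry field -->
    <str name="ttlFieldName">time_to_live_s</str>
    <str name="expirationFieldName">expire_at_dt</str>
    <!-- how often the background delete of expired docs runs -->
    <int name="autoDeletePeriodSeconds">30</int>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory" />
  <processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>
```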
Shawn, that's a great idea for how to integrate f5 with Solr. I'd thought
about having Apache httpd in-front of Solr, but I suppose I could just have f5
BigIP on its own.
-Original Message-
From: Sandy Foley [mailto:sandy.fo...@verndale.com]
Sent: Thursday, May 12, 2016 2:38 PM
To: so
We are getting the following error:
Full Import failed:java.lang.RuntimeException:
org.apache.solr.handler.dataimport.DataImportHandlerException:
java.lang.NullPointerException
at
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:270)
at
org.apache.solr.handler.dataimport.DataI
Thanks Shawn for the response. This was helpful.
-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Wednesday, May 11, 2016 7:02 PM
To: solr-user@lucene.apache.org
Subject: Re: Using Ping Request Handler in SolrCloud within a load balancer
On 5/9/2016 10:56 PM, S
On 5/12/2016 9:08 AM, Horváth Péter Gergely wrote:
> As part of a benchmark, I attempted to create about 2500 collections to
> see how well that would work for us. Unfortunately, the experiment
> yielded some disappointing results: after about 2000 were created,
> Solr hung; REST requests started
On 5/12/2016 10:46 AM, Lars Noschinski wrote:
> does Solr (6.0) give any guarantees about the atomicity of commits in Solr
> Cloud? I.e., if I add a large number of documents, then commit, does the
> changed state become visible on all shards at the same time? I could not
> find anything about that,
I'm not a dev, but I would assume the following if I were concerned with
speed and atomicity:
A. A commit WILL be reflected in all appropriate shards / replicas in a
very short time.
I believe Solr Cloud guarantees this, although the time frame
will be dependent on "B"
B. Network, proces
If I understand the question correctly...
I'm assuming you are indexing rich documents (PDF/DOC/MSG, etc) with DIH's Tika
handler. Some of those documents have attachments.
If that's the case, all of the content of embedded docs _should_[0] be
extracted, but then all of that content across the
Hi everyone,
does Solr (6.0) give any guarantees about the atomicity of commits in Solr
Cloud? I.e., if I add a large number of documents, then commit, does the
changed state become visible on all shards at the same time? I could not
find anything about that, so I assume that there is no such guara
Could you please let us know which crawler you are using to fetch data from
the document and its attachments?
On Thu, May 12, 2016 at 3:26 PM, Solr User wrote:
> Hi
>
> If I index a document with a file attachment attached to it in solr, can I
> visualise data of that attached file attachment also w
: I have a few classes that are Analyzers, Readers, and TokenFilters. These
: classes use a large hashmap to map tokens to another value. The code is
: working great. I go to the Analysis page on the Solr dashboard and everything
: works as I would like. The problem is that the first time each one
Interesting ... yeah, it's totally possible that those combinations of
features may not play nicely with each other, because the
DocExpirationUpdateProcessor initiates its own Solr requests under the
covers - it has no "original request" to copy authentication credentials
from.
Can you please
I have a few classes that are Analyzers, Readers, and TokenFilters.
These classes use a large hashmap to map tokens to another value. The
code is working great. I go to the Analysis page on the Solr dashboard
and everything works as I would like. The problem is that the first time
each one of t
Thank you so much, Erick. The CollapsingQParser was exactly what I needed.
I'm able to group by the field and then do the collapse from products into
items and get the correct answer. The Collapsing is also more appropriate
for the general grouping we need to do all the time now as well, so we'll
pro
Backup simply by copying the files? Or is there some option by which to say
"include analyzingInfixSuggesterIndexDir as well"?
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Wednesday, May 11, 2016 11:53 PM
To: solr-user
Subject: Re: backups of analyzingI
Try adding &debug=query to your query and look at the parsed results.
This shows you exactly what Solr sees, rather than what you think
it should see.
Best,
Erick
On Thu, May 12, 2016 at 6:24 AM, Ahmet Arslan wrote:
> Hi,
>
> Well, what happens
>
> q=hospital&fq={!lucene q.op=AND v=$a}&a=hospital Lea
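Erick's suggestion amounts to re-sending the same request with debug=query added. A small sketch of building that URL with proper percent-encoding, since the thread's queries contain raw spaces and '$' (host, core, and parameter values are placeholders taken from the thread):

```python
from urllib.parse import urlencode

# The query from the thread, plus debug=query, with every parameter
# percent-encoded so the local-params syntax survives the trip over HTTP.
params = {
    "q": "hospital",
    "fq": "{!lucene q.op=AND v=$a}",
    "a": "hospital Leapfrog",
    "debug": "query",
}
url = "http://localhost:8983/solr/my_core/select?" + urlencode(params)
print(url)
```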
You are correct Alan. My cluster seems to have not started correctly. Debugging
now...
Thanks
-Original Message-
From: Alan Woodward [mailto:a...@flax.co.uk]
Sent: 12 May 2016 13:18
To: solr-user@lucene.apache.org
Subject: Re: http request to MiniSolrCloudCluster
Are you sure that the
Hi Abdel,
Your configuration looks OK regarding the cdcr update log.
Could you tell us a bit more about your Solr installation? More
specifically, do the Solr instances, both source and target, contain
a collection that was created prior to the configuration of cdcr?
Best,
--
Renaud Delbru
Hi
If I index a document with a file attachment in Solr, can I
also see the data of that attached file while querying that
particular document? Please help me with this.
Thanks & Regards
Vidya Nadella
Hi,
Well, what happens
q=hospital&fq={!lucene q.op=AND v=$a}&a=hospital Leapfrog
OR
q=+_query_:"{!lucene q.op=AND v='hospital'}" +_query_:"{!lucene q.op=AND
v=$a}"&a=hospital Leapfrog
Ahmet
On Thursday, May 12, 2016 3:28 PM, Bastien Latard - MDPI AG
wrote:
Hi Ahmet,
Thanks for your ans
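Ahmet's second form, with both clauses as _query_ subqueries and $a dereferencing the 'a' parameter, needs its spaces, quotes, and '$' percent-encoded when sent over HTTP. A sketch of building it (host and core are placeholders):

```python
from urllib.parse import urlencode

# Two required clauses, each a nested {!lucene} subquery; v=$a is resolved
# from the 'a' request parameter at query time.
params = {
    "q": '+_query_:"{!lucene q.op=AND v=\'hospital\'}" '
         '+_query_:"{!lucene q.op=AND v=$a}"',
    "a": "hospital Leapfrog",
    "fl": "abstract,title",
}
url = "http://localhost:8983/solr/my_core/select?" + urlencode(params)
print(url)
```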
All,
I'm facing some difficulties utilizing both document expiration and the
security plug-ins in Solr 5.5.0. Looking at the log file for the shard1
leader, I can see it initiate the delete process. Unfortunately, it
rapidly emits errors for all of the other nodes, as those requests get
reject
Hi Ahmet,
Thanks for your answer, but this doesn't work on my local index.
q1 returns 2 results.
http://localhost:8983/solr/my_core/select?q=hospital AND
_query_:"{!q.op=AND%20v=$a}"&fl=abstract,title&a=hospital Leapfrog
==> returns 254 results (the same as
http://localhost:8983/solr/my_core/s
Are you sure that the cluster is running properly? Probably worth checking its
logs to make sure Solr has started correctly?
Alan Woodward
www.flax.co.uk
On 12 May 2016, at 12:48, Rohana Rajapakse wrote:
> Wait.
> With correct port, curl says : "curl: (52) Empty reply from server"
>
>
> ---
Wait.
With correct port, curl says : "curl: (52) Empty reply from server"
-Original Message-
From: Alan Woodward [mailto:a...@flax.co.uk]
Sent: 12 May 2016 11:35
To: solr-user@lucene.apache.org
Subject: Re: http request to MiniSolrCloudCluster
Hi Rohana,
What error messages do you get
"Unfortunately our documents now need to be grouped as well (product
variants into items) and that grouping query needs to work on that grouping
instead. As far as I'm aware you can't do nested grouping in Solr."
What about collapsing the product variants into a group Head which will
become the "p
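Conceptually, collapsing keeps one "head" document per group; in Solr this is done with fq={!collapse field=...}. A tiny sketch of that head-selection logic (picking the highest-scoring variant per product; field names are hypothetical):

```python
# Keep one "head" document per group, mimicking what a collapse filter does:
# here the head is the variant with the highest score in its group.
def collapse(docs, field):
    heads = {}
    for doc in docs:
        key = doc[field]
        if key not in heads or doc["score"] > heads[key]["score"]:
            heads[key] = doc
    return list(heads.values())

docs = [
    {"id": "v1", "product_id": "p1", "score": 0.9},
    {"id": "v2", "product_id": "p1", "score": 0.5},
    {"id": "v3", "product_id": "p2", "score": 0.7},
]
print([d["id"] for d in collapse(docs, "product_id")])  # ['v1', 'v3']
```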
On browser I get : "The 127.0.0.1 page isn't working. 127.0.0.1 didn't send any
data."
On curl I get : "curl: (7) Failed connect to 127.0.0.1:63175; No error"
-Original Message-
From: Alan Woodward [mailto:a...@flax.co.uk]
Sent: 12 May 2016 11:35
To: solr-user@lucene.apache.org
Subje
Hi Rohana,
What error messages do you get from curl? MiniSolrCloudCluster just runs
jetty, so you ought to be able to talk to it over HTTP.
Alan Woodward
www.flax.co.uk
On 12 May 2016, at 09:36, Rohana Rajapakse wrote:
> Hi,
>
> Is it possible to make http requests (e.g. from cURL) to an ac
Hi,
Is it possible to make http requests (e.g. from cURL) to an active/running
MiniSolrCloudCluster?
One of my existing projects uses HTTP requests to an EmbeddedSolrServer. Now I
am migrating to Solr 6/7 and trying to use MiniSolrCloudCluster. I have got a
MiniSolrCloudCluster up and running,