We are implementing a JDBC driver on Drill with Solr as a storage plugin.
The code is here, and we would appreciate help from anyone who can contribute
a code review and performance testing:
https://github.com/apache/drill/pull/201/files
Thanks,
Tirthankar
On Nov 20, 2015, at 12:25 PM, William Bell
mail
Thanks
It seems to work when there is no security.json, so perhaps there's some typo
in the initial version.
I notice that the version you sent is different from the documentation at
cwiki.apache.org/confluence/display/solr/Authentication+and+Authorization+Plugins
in that the Wiki version has
Hi Shawn,
We have already switched the request method to POST.
I am going to try the term query parser soon. I will post the performance
difference against the IN syntax here later.
Thanks!
2015-11-20 15:23 GMT-08:00 Shawn Heisey :
> On 11/20/2015 4:09 PM, jichi wrote:
> > Thanks for the quick
On 11/20/2015 4:09 PM, jichi wrote:
> Thanks for the quick replies, Alex and Jack!
>
>> definitely can improve on the ORing the ids with
> Going to try that! But I guess it would still hit the maxBooleanClauses=1024
> threshold.
The terms query parser does not have a limit like boolean queries do.
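As Shawn notes, the terms query parser sidesteps maxBooleanClauses. A minimal sketch of building such a filter string (the field name `id` is taken from the thread; the class and method names are illustrative):

```java
import java.util.List;

public class TermsQuery {
    // Build a terms-query-parser filter for a large id list; unlike a
    // chain of ORs it is not subject to maxBooleanClauses.
    static String termsFilter(String field, List<String> ids) {
        return "{!terms f=" + field + "}" + String.join(",", ids);
    }

    public static void main(String[] args) {
        System.out.println(termsFilter("id", List.of("100", "2", "5", "81", "10")));
        // → {!terms f=id}100,2,5,81,10
    }
}
```

Sending this as an `fq` via POST (as discussed above) avoids URL-length limits for the 30k-id case.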
This seems unrelated and more like a user error somewhere. Can you just
follow the steps without any security settings, i.e. not even uploading
security.json, and see if you still see this? Sorry, but I don't have access
to the code right now; I'll try to look at this later tonight.
On Fri, Nov 20, 2
Thanks for the quick replies, Alex and Jack!
> definitely can improve on the ORing the ids with
Going to try that! But I guess it would still hit the maxBooleanClauses=1024
threshold.
> 1. Are you trying to retrieve a large number of documents, or simply
perform queries against a subset of the in
Thank you for opening SOLR-8326
As a side note, in the procedure you listed, even before adding the
collection-admin-edit authorization, I'm already hitting trouble: stopping and
restarting a node results in the following
INFO - 2015-11-20 22:48:41.275; [c:solr8326 s:shard2 r:core_node4
x:sol
1. Are you trying to retrieve a large number of documents, or simply
perform queries against a subset of the index?
2. How many unique queries are you expecting to perform against each
specific filter set of IDs?
3. How often does the set of IDs change?
4. Is there more than one filter set of ID
I don't know what to do about 30K ids, but you definitely can improve
on the ORing the ids with
https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherParsers-TermsQueryParser
Regards,
Alex.
Newsletter and resources for Solr beginners and intermediates:
http://www.solr-start.c
Hi,
I am using Solr 4.7.0 to search text with an id filter, like this:
id:(100 OR 2 OR 5 OR 81 OR 10 ...)
The number of IDs in the boolean filter is usually less than 100, but
could sometimes be very large (around 30k IDs).
We currently set maxBooleanClauses to 1024, partitioned the IDs
That could be. I had a flaky network and I had to rerun my script twice. Maybe
it got into some inconsistent state. I will keep an eye on this. If I am able
to reproduce, then I will create a JIRA.
Bosco
On 11/20/15, 10:34 AM, "Anshum Gupta" wrote:
>This uses the Collections API and should
This uses the Collections API and shouldn't have led to that state. Have
you had similar issues before?
I'm also wondering if you already had something from previous runs/installs
on the fs/zk.
On Fri, Nov 20, 2015 at 10:26 AM, Don Bosco Durai wrote:
> Anshum,
>
> Thanks for the workaround. It
The Collections API was available before November 2014, if that is when you
took the class. However, it was only with Solr 5.0 (released in February 2015)
that the Collections API became the only supported mechanism for creating a
collection.
Here is the list of steps that you'd need to run to see
Anshum,
Thanks for the workaround. It resolved my issue.
Here is the command I used. It is pretty standard and has worked for me almost
all the time (so far)...
bin/solr create -c my_collection -d /tmp/solr_configsets/my_collection/conf -s
3 -rf 1
Thanks
Bosco
On 11/20/15, 9:56 AM, "Ans
Thank you again for the reply.
Below is the email I was about to send prior to your reply a moment ago: shall
I try again without "read" in security.json?
The Collections API method was not discussed in the "Unleashed" class at the
conference in DC in 2014 (probably because it was not yet
From my tests, it seems like the 'read' permission interferes with the
Replication and so the ADDREPLICA also fails. You're also bound to run into
issues if you have 'read' permission setup and restart your cluster,
provided you have a collection that has a replication factor > 1 for at
least one
You can manually update the cluster state so that the range for shard1 says
8000-d554. Also remove the "parent" tag from there.
Can you tell me how you created this collection? This shouldn't really
happen unless you didn't use the Collections API to create the collection.
On Fri,
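For reference, the expected ranges for a healthy 3-shard collection can be derived by splitting the hash ring evenly. A simplified sketch, using only the top 16 bits to match the "8000-d554" shorthand above; this is an illustration of how the ranges come out, not Solr's CompositeIdRouter source:

```java
public class HashRanges {
    // Split the 32-bit hash ring into N contiguous shard ranges, shown
    // here via the top 16 bits. The ring starts at Integer.MIN_VALUE
    // (0x8000...) and wraps around past Integer.MAX_VALUE (0x7fff...).
    static String[] ranges(int numShards) {
        String[] out = new String[numShards];
        long span = 0x10000L;          // top-16-bit space
        long start = 0x8000L;          // ring starts at the most negative hash
        long step = span / numShards;  // 0x5555 for 3 shards
        for (int i = 0; i < numShards; i++) {
            long lo = (start + i * step) & 0xFFFF;
            long hi = (i == numShards - 1) ? 0x7FFF
                    : (start + (i + 1) * step - 1) & 0xFFFF;
            out[i] = String.format("%04x-%04x", lo, hi);
        }
        return out;
    }

    public static void main(String[] args) {
        for (String r : ranges(3)) System.out.println(r);
        // prints 8000-d554, d555-2aa9, 2aaa-7fff
    }
}
```

If the cluster state's range for shard1 is empty, the value computed this way is what you would paste back in when editing it manually, as suggested above.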
I am using Solr 5.2 version.
Thanks
Bosco
On 11/20/15, 9:39 AM, "Don Bosco Durai" wrote:
>I created a 3-shard cluster, but it seems that for one of the shards the range is
>empty. Is there any way to fix it without deleting and recreating the collection?
>
>2015-11-20 08:59:50,901 [solr,writer=0] ERROR
>a
I created a 3-shard cluster, but it seems that for one of the shards the range is
empty. Is there any way to fix it without deleting and recreating the collection?
2015-11-20 08:59:50,901 [solr,writer=0] ERROR
apache.solr.client.solrj.impl.CloudSolrClient (CloudSolrClient.java:902) -
Request to collection my_col
How is performance on Calcite?
On Fri, Nov 20, 2015 at 5:12 AM, Joel Bernstein wrote:
> After reading https://calcite.apache.org/docs/tutorial.html, I think it
> should be possible to use the Solr's JDBC Driver with Calcites JDBC
> adapter.
>
> If you give it a try and run into any problems, ple
Steve,
Another thing debugQuery will give you is a breakdown of how much each
field contributed to the final score of each hit. That's going to give you
a nice shopping list of qf to weed out.
k/r,
Scott
On Fri, Nov 20, 2015 at 9:26 AM, Mikhail Khludnev <
mkhlud...@griddynamics.com> wrote:
> H
Hi, I have an index that contains products and tags as nested children,
like below:
{
  "id":"413863",
  "ProductID":"413863",
  "type":"product",
  "MainProductID":"",
  "SKU":"8D68595B",
  "Name":"mere",
  "alltext":["mere",
    "",
    "",
Todd,
With the DIH request, are you specifying "cacheDeletePriorData=false"? Looking
at the BerkleyBackedCache code, if this is set to true, it deletes the cache and
assumes the current update will fully repopulate it. If you want to do an
incremental update to the cache, it needs to be false
Hi Ian,
Yes you are right. This feature is only available in 5.3.0. It may be
possible to backport the patches on SOLR-4212 to the 4.x release
branch but I am afraid you are on your own on this one.
On Fri, Nov 20, 2015 at 7:16 PM, Ian Harrigan wrote:
> Hi guys
>
> So i have a question about usi
Hello Steve,
debugQuery=true shows whether the time goes to facets or to the query, and
whether it's query parsing or searching (prepare vs process); cache statistics
can tell about their efficiency; sometimes a problem is obvious from the
request parameters. Simple sampling with jconsole, or even with jstack, can point on a smo
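A sketch of what such a debug request might look like (the base URL and core name are placeholders; debugQuery and debug.explain.structured are standard Solr parameters):

```java
public class DebugParams {
    // Append Solr's debug parameters to an existing select request URL.
    // Purely a string helper for illustration.
    static String withDebug(String baseSelectUrl) {
        return baseSelectUrl + "&debugQuery=true&debug.explain.structured=true";
    }

    public static void main(String[] args) {
        // "products" core and host are hypothetical:
        System.out.println(withDebug("http://localhost:8983/solr/products/select?q=name:mere"));
    }
}
```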
Hi all,
I have a working suggester component and request handler in my Solr 5.2.1
instance. It is working as I expected, but I need a solution which handles
multiple query terms "correctly".
I have a string field, title. Let's see the following case:
title 1: Green Apple Color
title 2: Apple the maste
Hi guys
So I have a question about using facet queries but getting stats for each
facet item; it seems this is possible on Solr v5+. Something like this:
q=*:*&facet=true&facet.pivot={!stats=t1}servicename&stats.field={!tag=t1}dur
ation&rows=0&stats=true&wt=json&indent=true
It also seems t
Hi,
I was wondering if anyone has already created Java 8 streams
from a CloudSolrClient, a la
SolrQuery query = new SolrQuery();
query.setQuery("id:*");
java.util.stream.Stream docs = cloudSolrClient.stream(query);
any pointer greatly appreciated.
wkr j
*Jürgen Jakobitsch*
Inn
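As far as I know, CloudSolrClient has no stream(query) method as written in the snippet above; but QueryResponse.getResults() returns a SolrDocumentList, which is Iterable, so a small generic adapter yields a Java 8 Stream. A sketch with illustrative names, demonstrated on a plain list standing in for the document list:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public class IterableStreams {
    // Wrap any Iterable (e.g. the SolrDocumentList from
    // queryResponse.getResults()) into a java.util.stream.Stream.
    static <T> Stream<T> toStream(Iterable<T> it) {
        return StreamSupport.stream(it.spliterator(), false);
    }

    public static void main(String[] args) {
        // Stand-in for a SolrDocumentList:
        List<String> docs = List.of("doc1", "doc2", "doc3");
        String joined = toStream(docs).collect(Collectors.joining(","));
        System.out.println(joined); // → doc1,doc2,doc3
    }
}
```

Note this streams only the one page of results already fetched; streaming an entire result set would need cursorMark pagination underneath.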
On 11/20/2015 5:21 AM, Alexandre Rafalovitch wrote:
> Actually I think / is a special character as of recent version of Solr.
> Can't remember why though.
Surrounding a query with slashes, at least when using the standard
parser, makes it a regex query. I don't know if this happens with any
of th
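If IDs contain slashes (as in the "/ABCDZ123/123456" examples elsewhere in this thread), each slash can be backslash-escaped before querying with the standard parser. A minimal sketch; SolrJ's ClientUtils.escapeQueryChars does something similar and more completely:

```java
public class QueryEscape {
    // Minimal escaper for Lucene/Solr standard query parser special
    // characters, including '/' (a regex delimiter in recent Solr).
    static String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            if ("\\+-!():^[]\"{}~*?|&;/ ".indexOf(c) >= 0) sb.append('\\');
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println("id:" + escape("/ABCDZ123/123456"));
        // → id:\/ABCDZ123\/123456
    }
}
```

Alternatively, the term query parser (`{!term f=id}/ABCDZ123/123456`) takes the value verbatim with no escaping at all.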
On 11/20/2015 12:33 AM, Midas A wrote:
> As we are using this server as a master server, there are no queries running on
> it. In that case, should I remove these configurations from the config file?
The following cache info says that there ARE queries being run on this
server:
> QueryResultCache:
>
> lo
Thanks Erick.
The 1500 fields is a design that I inherited. I'm trying to figure out why
it was done as such and what it will take to fix it.
What about my other question: how does one go about debugging performance
issues in Solr to find out where time is mostly spent? How do I know my
Solr pa
Actually I think / is a special character as of recent version of Solr.
Can't remember why though.
This could be the kind of things that would trigger an edge case bug.
What happens if you request id3,id2,id1? In the opposite order? Are the
same documents missing? Or same by request position? If
After reading https://calcite.apache.org/docs/tutorial.html, I think it
should be possible to use the Solr's JDBC Driver with Calcites JDBC adapter.
If you give it a try and run into any problems, please create a jira.
Joel Bernstein
http://joelsolr.blogspot.com/
On Thu, Nov 19, 2015 at 7:58 PM,
Hi,
Since this is the master node, and it is not expected to serve queries, you can
disable the caches completely. However, judging from the numbers, cache autowarming
is not the issue here; more likely it is the frequency of commits and/or warmup
queries. How do you do commits? Since this is master-slave, I don't see a reason
to have them too
Thanks for your answer...
I don't think it's a problem due to special characters. All my IDs have the same
format, "/ABCDZ123/123456", with no characters that need to be escaped.
And when I use a "normal" query on the key field, it works fine; Solr finds the
document...
Regards,
Monsinjon Je