rJ against a newer Solr server (or
> vice versa).
>
> Mike
>
> On Fri, Nov 20, 2020 at 2:25 PM Susmit Shukla
> wrote:
>
> > Hi,
> > Got this error using streaming with SolrJ 8.6.3. Does it use noggit 0.8?
> > It was not mentioned in the dependencies:
>
Hi,
Got this error using streaming with SolrJ 8.6.3. Does it use noggit 0.8?
It was not mentioned in the dependencies:
https://github.com/apache/lucene-solr/blob/branch_8_6/solr/solrj/ivy.xml
Caused by: java.lang.NoSuchMethodError: 'java.lang.Object
org.noggit.ObjectBuilder.getValStrict()'
at org.apa
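If it helps, one way to check whether an older noggit jar is sneaking onto
the classpath ahead of the classes bundled with solr-solrj (assuming a Maven
build):

    mvn dependency:tree -Dincludes=org.noggit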
I used the JSON Facet API for a similar requirement. It can ignore filters
from the main query if needed and roll up the hit counts to any field.
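Something along these lines, as a rough sketch (collection, fields, and the
tag are made up):

    curl http://localhost:8983/solr/emails/query -d '
    {
      "query": "subject:report",
      "filter": ["{!tag=MAINQ}from:alice@example.com"],
      "facet": {
        "by_to": {
          "type": "terms",
          "field": "to",
          "domain": { "excludeTags": "MAINQ" }
        }
      }
    }'

The excludeTags domain ignores the tagged filter from the main query while
the terms facet rolls up hit counts per 'to' value.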
> On Feb 11, 2020, at 6:19 PM, Fischer, Stephen
> wrote:
>
> Thanks very much! By the way, we are using eDisMax, and the queries our UI
> supports do
Hi,
Trying to use the Solr streaming 'gatherNodes' function. It is for extracting
an email graph based on the 'from' and 'to' fields.
It requires the 'to' field to be a single-valued field with docValues enabled,
since it is used internally for sorting and unique streams.
The 'to' field can contain multiple email addr
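For context, the kind of expression in question looks like this (collection,
seed address, and field names are made up):

    gatherNodes(emails,
                walk="alice@example.com->from",
                gather="to",
                scatter="branches, leaves")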
ranges are for router.field so it does not find the doc.
I can think of a workaround: computing the shards on the client and sending
the shards parameter with the query.
Would it also impact replica sync?
Thanks,
Susmit
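A minimal sketch of that workaround (shard URLs are made up; any SolrClient
works):

    import org.apache.solr.client.solrj.SolrQuery;

    SolrQuery q = new SolrQuery("id:doc1");
    // route the query only to the shards computed on the client
    q.set("shards",
        "host1:8983/solr/coll_shard1_replica1,host2:8983/solr/coll_shard2_replica1");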
Hi Aroop,
I created a utility using the SolrZkClient API to read state.json, enumerated
one replica for each shard, used the /replication handler for the size, and
added the sizes up.
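Roughly like this, as a sketch (collection name and ZK host are made up; it
goes through CloudSolrClient's cluster state rather than raw SolrZkClient,
and error handling is omitted):

    import java.util.Collections;
    import java.util.Optional;
    import org.apache.solr.client.solrj.SolrRequest;
    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.request.GenericSolrRequest;
    import org.apache.solr.common.cloud.DocCollection;
    import org.apache.solr.common.cloud.Replica;
    import org.apache.solr.common.cloud.Slice;
    import org.apache.solr.common.params.ModifiableSolrParams;
    import org.apache.solr.common.util.NamedList;

    CloudSolrClient cloud = new CloudSolrClient.Builder(
        Collections.singletonList("zkhost:2181"), Optional.empty()).build();
    cloud.connect();
    DocCollection coll =
        cloud.getZkStateReader().getClusterState().getCollection("emails");
    for (Slice slice : coll.getSlices()) {
      // enumerate one replica per shard
      Replica replica = slice.getReplicas().iterator().next();
      try (HttpSolrClient core =
               new HttpSolrClient.Builder(replica.getCoreUrl()).build()) {
        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set("command", "details");
        NamedList<Object> rsp = core.request(
            new GenericSolrRequest(SolrRequest.METHOD.GET, "/replication", params));
        // /replication?command=details reports the index size for that core
        NamedList<?> details = (NamedList<?>) rsp.get("details");
        System.out.println(slice.getName() + " -> " + details.get("indexSize"));
      }
    }
    cloud.close();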
Sent from my iPhone
> On Jun 25, 2018, at 7:24 PM, Aroop Ganguly wrote:
>
> Hi Team
>
> I am not sure how to ascertain
Hi,
This may be expected if one of the streams is closed early and does not reach
the EOF tuple.
Sent from my iPhone
> On Jun 14, 2018, at 9:53 AM, Christian Spitzlay
> wrote:
>
> Here ist one I stripped down as far as I could:
>
> innerJoin(sort(search(kmm,
> q="sds_endpoint_uuid:(2f927a0b\-f
fault limits. Could
be useful to turn on debug logging and check.
Thanks,
Susmit
On Thu, Nov 9, 2017 at 8:35 PM, Lanny Ripple wrote:
> First, Joel, thanks for your help on this.
>
> 1) I have to admit we really haven't played with a lot of system tuning
> recently (before DocVal
/
Thanks,
Susmit
On Wed, Sep 6, 2017 at 10:45 PM, Imran Rajjad wrote:
> My only concern is the performance as the cursor moves forward in
> resultset with approximately 2 billion records
>
> Regards,
> Imran
>
> Sent from Mail for Windows 10
>
> From: Joel Bernstein
>
a new constructor on
master branch.
I guess it is all good going forward on master.
Thanks,
Susmit
On Tue, Jun 27, 2017 at 10:14 AM, Joel Bernstein wrote:
> Ok, I see where it's not set the stream context. This needs to be fixed.
>
> I'm curious about where you're seei
int count = 0;
while (true) {
    ps.read();
    if (++count == 2) break;  // break after 2 iterations
}
ps.close();
// close() reads through to the end of the tupleStream.
I tried with an HttpClient created by
org.apache.http.impl.client.HttpClientBuilder.create() and close() works for
that.
Thanks,
Susmit
On Wed, May 17, 2017 at 7:33 AM
Hi,
Which version of Solr are you on?
Increasing memory may not be useful, as the streaming API does not keep things
in memory (except maybe hash joins).
Increasing replicas (not sharding) and pushing the join computation onto a
worker Solr cluster with #workers > 1 would definitely make things faster.
Are
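To sketch the worker-collection idea (collections, fields, and worker count
are made up):

    parallel(workerColl,
             innerJoin(
               search(coll1, q="*:*", fl="id,f1", sort="id asc",
                      qt="/export", partitionKeys="id"),
               search(coll2, q="*:*", fl="id,f2", sort="id asc",
                      qt="/export", partitionKeys="id"),
               on="id"),
             workers="4",
             sort="id asc")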
t, it can't override the hardcoded 8kb buffer on
sun.nio.cs.StreamDecoder
Thanks,
Susmit
On Wed, May 17, 2017 at 5:49 AM, Joel Bernstein wrote:
> Susmit,
>
> You could wrap a LimitStream around the outside of all the relational
> algebra. For example:
>
> parallel(limit
number of replicas?
Thanks,
Susmit
On Tue, May 16, 2017 at 5:59 AM, Joel Bernstein wrote:
> Your approach looks OK. The single sharded worker collection is only needed
> if you were using CloudSolrStream to send the initial Streaming Expression
> to the /stream handler. You are not doing
grading if number was 3 and above. That is
counterintuitive, since the joins are huge and adding more workers should
have improved the performance.
Thanks,
Susmit
On Mon, May 15, 2017 at 6:47 AM, Joel Bernstein wrote:
> Ok please do report any issues you run into. This is quite a good bug
&g
n a ticket for this?
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Sat, May 13, 2017 at 2:51 PM, Susmit Shukla
> wrote:
>
> > Hi Joel,
> >
> > I was using CloudSolrStream for the above test. Below is the call stack.
> >
> >
sun.nio.cs.StreamDecoder.close(StreamDecoder.java:193)
at java.io.InputStreamReader.close(InputStreamReader.java:199)
at org.apache.solr.client.solrj.io.stream.JSONTupleStream.close(JSONTupleStream.java:91)
at org.apache.solr.client.solrj.io.stream.SolrStream.close(SolrStream.java:186)
Thanks,
Susmit
On Sat, May 13
think there should be an abort() API for Solr streams that hooks into
HttpMethod.abort(). That would enable the client to disconnect early, and
that would probably disconnect the underlying socket, so there would be no
leaks.
Thanks,
Susmit
On Sat, May 13, 2017 at 7:42 AM, Joel Bernstein wrote:
>
Hi Joel,
Thanks for the insight. How can this exception be thrown/forced from the
client side? The client can't do a System.exit() as it is running as a webapp.
Thanks,
Susmit
On Fri, May 12, 2017 at 4:44 PM, Joel Bernstein wrote:
> In this scenario the /export handler continues to export
.
Another option would be to use the /select handler and get into the business
of managing a custom cursor mark that is based on the stream sort and is
re-issued until it fetches the required records at the topmost level.
Any thoughts?
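For illustration, a minimal loop of that kind (collection name and uniqueKey
are made up; client is any SolrClient):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.params.CursorMarkParams;

    SolrQuery q = new SolrQuery("*:*");
    q.setRows(500);
    // cursors require a sort on the uniqueKey
    q.setSort(SolrQuery.SortClause.asc("id"));
    String cursorMark = CursorMarkParams.CURSOR_MARK_START;
    while (true) {
        q.set(CursorMarkParams.CURSOR_MARK_PARAM, cursorMark);
        QueryResponse rsp = client.query("collection1", q);
        // consume rsp.getResults() here ...
        String next = rsp.getNextCursorMark();
        if (cursorMark.equals(next)) break;  // no new results; done
        cursorMark = next;
    }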
Thanks,
Susmit
If you constrain the random sample to a fixed number instead of a percentage,
reservoir sampling can be used without even calculating the total match count.
This can be done on the client side; you could stop sampling after a max of
e.g. 10 million.
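A minimal sketch of that (the element type and sample size k are
placeholders):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    static <T> List<T> reservoirSample(Iterable<T> docs, int k) {
        List<T> reservoir = new ArrayList<>(k);
        Random rnd = new Random();
        long seen = 0;
        for (T doc : docs) {
            seen++;
            if (reservoir.size() < k) {
                // fill the reservoir with the first k elements
                reservoir.add(doc);
            } else {
                // replace a random slot with probability k/seen
                long j = (long) (rnd.nextDouble() * seen);
                if (j < k) reservoir.set((int) j, doc);
            }
        }
        return reservoir;
    }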
> On Sep 28, 2016, at 10:15 AM, Pushkar Raste wrote:
>
>
Hi,
I'm using a string field in the sort parameters of a Solr query. The query is
used with the /export handler to stream data using CloudSolrStream. When the
data in the field contains a double quote, CloudSolrStream fails to read the
data and throws this error -
field data = "first (alias) last"
org.nogg
, 2016 at 1:09 PM, Joel Bernstein wrote:
> This sounds like a bug. I'm pretty sure there are no tests that use
> collapse with the export handler.
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Fri, Jun 10, 2016 at 3:59 PM, Susmit Shukla
> wrote:
>
> &g
Hi,
I'm running this export query and it is working fine; f1 is the uniqueKey,
running Solr 5.3.1:
/export?q=f1:term1&sort=f1+desc&fl=f1,f2
If I add the collapsing filter, it gives a NullPointerException:
/export?q=f1:term1&sort=f1+desc&fl=f1,f2&fq={!collapse field=f2}
Does the collapsing filter work
d for CloudSolrStream params?
Thanks,
Susmit
On Wed, Jun 8, 2016 at 8:58 AM, Joel Bernstein wrote:
> CloudSolrStream doesn't really understand the concept of paging. It just
> sees a stream of Tuples coming from a collection and merges them.
>
> If you're using the default /selec
*sending with correct subject*
Does Solr streaming aggregation support pagination?
Some documents seem to be skipped if I set the "start" parameter on
CloudSolrStream for a sharded collection.
Thanks,
Susmit
Please take a look at this blog, specifically "Leapfrog Anyone?" section-
http://yonik.com/advanced-filter-caching-in-solr/
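As a rough illustration of the post-filtering discussed there (field name and
values are made up): with cache=false and cost >= 100, a query that supports
post-filtering (e.g. frange) is checked only against documents that already
match the main query and the cheaper filters -

    q=text:solr&fq={!frange cache=false cost=200 l=10}popularity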
Thanks,
Susmit
On Thu, May 5, 2016 at 10:54 PM, Bastien Latard - MDPI AG <
lat...@mdpi.com.invalid> wrote:
> Hi guys,
>
> Just a quick questio
Hi Prasanna,
What is the exact number you set it to?
What error did you get on the Solr console and in the Solr logs?
Did you reload the core / restart Solr after bumping up the value in
solrconfig?
Thanks,
Susmit
On Wed, May 4, 2016 at 9:45 PM, Prasanna S. Dhakephalkar <
prasann...@merajob.in> wrote:
Which SolrJ version are you using? Could you try with SolrJ 6.0?
On Tue, Apr 26, 2016 at 10:36 AM, sudsport s wrote:
> @Joel
> >Can you describe how you're planning on using Streaming?
>
> I am mostly using it for the distributed join case. We were planning to use
> similar logic (hash id and join) i
I have done it by extending the Solr join plugin. I needed to override two
methods from the join plugin and it works out.
Thanks,
Susmit
On Thu, Apr 21, 2016 at 12:01 PM, Mikhail Khludnev <
mkhlud...@griddynamics.com> wrote:
> Hello,
>
> There is no much progress on
> https://issu
id to=doc_id}parent_doc_id:*&group.limit=10
Thanks,
Susmit
On Tue, Apr 19, 2016 at 10:29 AM, Shamik Bandopadhyay
wrote:
> Hi,
>
>I have a set of documents indexed which has a pseudo parent-child
> relationship. Each child document had a reference to the parent document.
&g
nt out what's not
> as you expect.
>
> You might want to review:
> http://wiki.apache.org/solr/UsingMailingLists
>
> Best,
> Erick
>
> On Sat, Apr 16, 2016 at 9:54 AM, Jack Krupansky
> wrote:
> > Remove that line of code from your client, or... add the remove blank
file a jira.
Thanks,
Susmit
On Sat, Apr 16, 2016 at 8:56 AM, Jack Krupansky
wrote:
> "UUID processor factory is generating uuid even if it is empty."
>
> The processor will generate the UUID only if the id field is not specified
> in the input document. Empty value and
Hi Chris/Erick,
It does not work in the sense that the order of documents does not change on
changing the sort from asc to desc.
This could be just a trivial bug where the UUID processor factory generates a
uuid even if the id field is empty.
This is on Solr 5.3.0.
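For reference, this is the kind of update chain involved (a sketch; the chain
name is arbitrary and the fieldName is assumed to be the uniqueKey):

    <updateRequestProcessorChain name="uuid">
      <processor class="solr.UUIDUpdateProcessorFactory">
        <str name="fieldName">id</str>
      </processor>
      <processor class="solr.LogUpdateProcessorFactory"/>
      <processor class="solr.RunUpdateProcessorFactory"/>
    </updateRequestProcessorChain>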
Thanks,
Susmit
On Thu, Apr 14, 2016 at 2:30 PM
for a uuid field.
The issues do not happen if I omit the id field from the SolrInputDocument .
SolrInputDocument solrDoc = new SolrInputDocument();
solrDoc.addField("id", "");
...
I am using schema similar to below-
id
id
uuid
Thanks,
Susmit
"Response","some resp"],
"http://server2/solr/multishard_shard2_replica1/";,[
"QTime","1",
"ElapsedTime","6",
"RequestPurpose","GET_TOP_IDS",
"NumFound","0",
"Response","some"]],
"GET_FIELDS":[
"http://server1/solr/multishard_shard1_replica1/";,[
"QTime","0",
"ElapsedTime","4",
"RequestPurpose","GET_FIELDS,GET_DEBUG",
"NumFound","1",
Thanks,
Susmit
file, they are correctly added to the cloud as
replicas.
Is there a way to do it automatically? The Solr documentation says so:
https://cwiki.apache.org/confluence/display/solr/Nodes%2C+Cores%2C+Clusters+and+Leaders
(in the Leaders and Replicas section)
Thanks,
Susmit
The URL in the Solr atomic update documentation should end with "json".
Here is the page -
https://wiki.apache.org/solr/UpdateJSON#Solr_4.0_Example
curl http://localhost:8983/solr/update/json -H 'Content-type:application/json'