Hi,
I have configured the Solr schema to generate a unique id for a collection
using UUIDUpdateProcessorFactory.
I am seeing a peculiar behavior - if the unique 'id' field is explicitly
set to an empty string in the SolrInputDocument, the document still gets
indexed, with the UUID update processor generating the id. I can see a good
UUID in the Solr query console.
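A minimal SolrJ sketch of what the client side is doing (the URL, collection,
and field names are placeholders; assuming SolrJ 6.x-style client construction):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

// The 'id' field is explicitly set to an empty string, yet the document is
// indexed and the UUID update processor generates the id.
SolrClient client = new HttpSolrClient.Builder(
    "http://localhost:8983/solr/mycollection").build();
SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", "");            // explicitly empty unique key
doc.addField("title_s", "test");
client.add(doc);
client.commit();
client.close();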
are equal, so the tiebreaker is
> : the internal Lucene doc ID, which may change as merges
> : happen. You can specify secondary sort fields to make the
> : sort predictable (the id field is popular for this).
> :
> : Best,
> : Erick
> :
> : On Thu, Apr 14, 2016 at 12:18 PM, Sus
value not present are not the same
> thing.
>
> So, please clarify your specific situation.
>
>
> -- Jack Krupansky
>
> On Thu, Apr 14, 2016 at 7:20 PM, Susmit Shukla
> wrote:
>
> > Hi Chris/Erick,
> >
> > Does not work in the sense the order of documen
nt out what's not
> as you expect.
>
> You might want to review:
> http://wiki.apache.org/solr/UsingMailingLists
>
> Best,
> Erick
>
> On Sat, Apr 16, 2016 at 9:54 AM, Jack Krupansky
> wrote:
> > Remove that line of code from your client, or... add the remove blank
> > field update processor (RemoveBlankFieldUpdateProcessorFactory) to the chain
Hi Shamik,
You could try Solr grouping using the group.query construct. You could
discard the child match from the result (i.e., any doc that has the
parent_doc_id field) and use a join to fetch the parent record, along the
lines of the sketch below:
q=*:*&group=true&group.query=title:title2&group.query={!join
from=parent_doc_id to=doc_id}parent_
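A hedged SolrJ sketch of the same idea - the right-hand side of the join
clause below is an assumption, since the line above was cut off:

import org.apache.solr.client.solrj.SolrQuery;

SolrQuery q = new SolrQuery("*:*");
q.set("group", true);
// first group: child docs matching the title
q.add("group.query", "title:title2");
// second group: join from child to parent to fetch the parent records
// (the query after the join clause is an assumption)
q.add("group.query", "{!join from=parent_doc_id to=doc_id}title:title2");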
I have done it by extending the Solr join plugin. I needed to override two
methods from the join plugin and it works out.
Thanks,
Susmit
On Thu, Apr 21, 2016 at 12:01 PM, Mikhail Khludnev <
mkhlud...@griddynamics.com> wrote:
> Hello,
>
> There is not much progress on
> https://issues.apache.org/jira/brow
Which SolrJ version are you using? Could you try with SolrJ 6.0?
On Tue, Apr 26, 2016 at 10:36 AM, sudsport s wrote:
> @Joel
> >Can you describe how you're planning on using Streaming?
>
> I am mostly using it for the distributed join case. We were planning to use
> similar logic (hash id and join) i
Hi Prasanna,
What is the exact number you set it to?
What error did you get on the Solr console and in the Solr logs?
Did you reload the core/restart Solr after bumping up the value in
solrconfig?
Thanks,
Susmit
On Wed, May 4, 2016 at 9:45 PM, Prasanna S. Dhakephalkar <
prasann...@merajob.in> wrote:
> Hi
Please take a look at this blog, specifically the "Leapfrog Anyone?" section -
http://yonik.com/advanced-filter-caching-in-solr/
Thanks,
Susmit
On Thu, May 5, 2016 at 10:54 PM, Bastien Latard - MDPI AG <
lat...@mdpi.com.invalid> wrote:
> Hi guys,
>
> Just a quick question, that I did not find an easy
*sending with correct subject*
Does Solr streaming aggregation support pagination?
Some documents seem to be skipped if I set the "start" parameter on
CloudSolrStream for a sharded collection.
Thanks,
Susmit
port for the
> OFFSET SQL clause.
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Tue, Jun 7, 2016 at 5:08 PM, Susmit Shukla
> wrote:
>
> > *sending with correct subject*
> >
> > Does solr streaming aggregation support pagination?
> > Some doc
Hi,
I'm running this export query and it is working fine; f1 is the uniqueKey,
and I am running Solr 5.3.1:
/export?q=f1:term1&sort=f1+desc&fl=f1,f2
If I add the collapsing filter, it gives a NullPointerException:
/export?q=f1:term1&sort=f1+desc&fl=f1,f2&fq={!collapse field=f2}
Does the collapsing filter work with the /export handler?
, 2016 at 1:09 PM, Joel Bernstein wrote:
> This sounds like a bug. I'm pretty sure there are no tests that use
> collapse with the export handler.
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Fri, Jun 10, 2016 at 3:59 PM, Susmit Shukla
> wrote:
>
Hi,
I'm using a string field in the sort parameters of a Solr query. The query
is used with the /export handler to stream data using CloudSolrStream. When
the data in the field contains a double quote, CloudSolrStream fails to
read the data and throws the error below -
field data = "first (alias) last"
org.nogg
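For context, the stream is opened and read roughly like this (zkHost,
collection, and field names are placeholders; a sketch assuming SolrJ 6.x):

import org.apache.solr.client.solrj.io.SolrClientCache;
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.stream.CloudSolrStream;
import org.apache.solr.client.solrj.io.stream.StreamContext;
import org.apache.solr.common.params.ModifiableSolrParams;

ModifiableSolrParams params = new ModifiableSolrParams();
params.set("q", "*:*");
params.set("qt", "/export");
params.set("fl", "name_s");
params.set("sort", "name_s asc");

CloudSolrStream stream = new CloudSolrStream("zkhost:2181", "collection1", params);
StreamContext context = new StreamContext();
context.setSolrClientCache(new SolrClientCache());
stream.setStreamContext(context);
try {
  stream.open();
  for (Tuple t = stream.read(); !t.EOF; t = stream.read()) {
    // fails while parsing when the field value contains a double quote
    System.out.println(t.getString("name_s"));
  }
} finally {
  stream.close();
}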
, Susmit Shukla
wrote:
> Thanks Joel, will try that.
> A binary response would be more performant.
> I observed the server sends responses in 32 kb chunks and the client reads
> them with an 8 kb buffer on the input stream. I don't know whether changing
> that would impact performance.
You also mentioned that the SolrStream and the SolrClientCache were using
> the same approach to create the client. In that case changing the
> ParallelStream to set the streamContext shouldn't have any effect on the
> close() issue.
>
> Joel B
You could use a filter clause to create a custom cursor, since the results
are sorted; the rough shape is sketched below. I had used this approach with
a raw CloudSolrStream, not with ParallelSQL though.
This would be useful -
https://lucidworks.com/2013/12/12/coming-soon-to-solr-efficient-cursor-based-iteration-of-large-result-sets/
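A sketch of the idea, assuming f1 is the sorted uniqueKey and a page size of
1000 (all names and the range syntax are illustrative):

import org.apache.solr.client.solrj.io.SolrClientCache;
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.stream.CloudSolrStream;
import org.apache.solr.client.solrj.io.stream.StreamContext;
import org.apache.solr.common.params.ModifiableSolrParams;

String cursor = null;                 // last f1 value of the previous page
SolrClientCache cache = new SolrClientCache();
while (true) {
  ModifiableSolrParams params = new ModifiableSolrParams();
  params.set("q", "*:*");
  params.set("qt", "/export");
  params.set("fl", "f1");
  params.set("sort", "f1 asc");
  if (cursor != null) {
    // exclusive lower bound: resume strictly after the last key seen
    params.set("fq", "f1:{" + cursor + " TO *]");
  }
  CloudSolrStream stream = new CloudSolrStream("zkhost:2181", "collection1", params);
  StreamContext ctx = new StreamContext();
  ctx.setSolrClientCache(cache);
  stream.setStreamContext(ctx);
  int read = 0;
  try {
    stream.open();
    for (Tuple t = stream.read(); !t.EOF && read < 1000; t = stream.read()) {
      cursor = t.getString("f1");     // advance the cursor
      read++;
    }
  } finally {
    stream.close();
  }
  if (read == 0) break;               // no more results
}
cache.close();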
Thanks,
Susmit
Hi Lanny,
For long-running streaming queries with many shards and huge result sets,
SolrJ's default settings for HTTP max connections/connections per host may
not be enough. If you are using the worker collection (/stream), it depends
on dispensing HTTP clients via SolrClientCache with default limits.
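If you are constructing the clients yourself, something along these lines
raises the limits (a sketch using SolrJ's HttpClientUtil; the numbers are
illustrative, not recommendations):

import org.apache.http.client.HttpClient;
import org.apache.solr.client.solrj.impl.HttpClientUtil;
import org.apache.solr.common.params.ModifiableSolrParams;

// Build an HttpClient with higher connection limits before any streams open.
ModifiableSolrParams params = new ModifiableSolrParams();
params.set(HttpClientUtil.PROP_MAX_CONNECTIONS, 1000);
params.set(HttpClientUtil.PROP_MAX_CONNECTIONS_PER_HOST, 200);
HttpClient httpClient = HttpClientUtil.createClient(params);
// The client can then be handed to new SolrClientCache(httpClient) so that
// streams dispense clients with these limits.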
Hi,
I have a question regarding the Solr /export handler. Here is the scenario -
I want to use the /export handler - I only need sorted data, and this is the
fastest way to get it. I am doing multiple-level joins using streams over the
/export handler. I know the number of top-level records to be retrieved
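The pattern I have in mind is roughly this - read the known number of
top-level tuples, then close ('stream' is a CloudSolrStream built as in the
earlier sketches; process() is a hypothetical consumer):

int needed = 100;                 // the known number of top-level records
stream.open();
try {
  for (int i = 0; i < needed; i++) {
    Tuple t = stream.read();
    if (t.EOF) break;             // fewer results than expected
    process(t);                   // hypothetical consumer
  }
} finally {
  stream.close();                 // server-side export stops with a broken pipe
}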
results until it
> encounters a "Broken Pipe" exception. This exception is trapped and ignored
> rather than logged, as it's not considered an exception if the client
> disconnects early.
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Fri, May 12, 20
If the client closes the connection to the export handler then this
> exception will occur automatically on the server.
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Sat, May 13, 2017 at 1:46 AM, Susmit Shukla
> wrote:
>
> > Hi Joel,
> >
> > Than
http://joelsolr.blogspot.com/
>
> On Sat, May 13, 2017 at 12:28 PM, Susmit Shukla
> wrote:
>
> > Hi Joel,
> >
> > I did not observe that. On calling close() on stream, it cycled through
> all
> > the hits that /export handler calculated.
> > e.g. with a *:*
n a ticket for this?
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Sat, May 13, 2017 at 2:51 PM, Susmit Shukla
> wrote:
>
> > Hi Joel,
> >
> > I was using CloudSolrStream for the above test. Below is the call stack.
> >
> >
SolrStream that
> expression will be sent to each shard to be run and each shard will be
> duplicating the work and return duplicate results.
>
>
> Joel Bernstein
> http://joelsolr.blogspot.
different partitions of the streams will be served by
> different replicas.
>
> If performance doesn't improve with the NullStream after increasing both
> workers and replicas then we know the bottleneck is the network.
>
> Joel Bernstein
> http://joelsolr.blogspot.c
> >> I created https://issues.apache.org/jira/browse/SOLR-10698 to track the
> >> issue
> >>
> >> @Susmit looking at the stack trace I see the expression is using
> >> JSONTupleStream
> >> . I wonder if you tried using JavabinTupleStreamParser co
Hi,
Which version of Solr are you on?
Increasing memory may not be useful, as the streaming API does not keep
stuff in memory (except maybe hash joins).
Increasing replicas (not sharding) and pushing the join computation onto a
worker Solr cluster with #workers > 1 would definitely make things faster,
along the lines of the sketch below.
Are
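For example, a join expression of this shape (collection, field, and worker
collection names are all examples):

// Push an innerJoin onto a worker collection with workers > 1; both inner
// streams are sorted and partitioned on the join key.
String expr =
    "parallel(workerCollection,"
  + "  innerJoin("
  + "    search(people, q=\"*:*\", fl=\"personId,name\","
  + "           sort=\"personId asc\", qt=\"/export\", partitionKeys=\"personId\"),"
  + "    search(pets, q=\"*:*\", fl=\"personId,petName\","
  + "           sort=\"personId asc\", qt=\"/export\", partitionKeys=\"personId\"),"
  + "    on=\"personId\"),"
  + "  workers=4, sort=\"personId asc\")";
// partitionKeys gives each worker a disjoint hash partition of the join key,
// so the join is computed in parallel on the worker cluster.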
The URL for the Solr atomic update documentation should end with json.
Here is the page -
https://wiki.apache.org/solr/UpdateJSON#Solr_4.0_Example
curl http://localhost:8983/solr/update/*json* -H 'Content-type:application/json'
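(For SolrJ users, the JSON 'set' operation that page documents corresponds
roughly to the following; the document id and field name are examples.)

import java.util.Collections;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

// Atomic update: replace only the 'author' field of document 'book1'.
SolrClient client = new HttpSolrClient.Builder(
    "http://localhost:8983/solr/collection1").build();
SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", "book1");
doc.addField("author", Collections.singletonMap("set", "Neal Stephenson"));
client.add(doc);
client.commit();
client.close();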
Hi Solr experts,
I am building out a Solr cluster with this configuration:
3 external ZooKeepers
15 Solr instances (nodes)
3 shards
I need to start out with 3 nodes; the remaining 12 nodes would be added to
the cluster later. I am able to create a collection with 3 shards. This
process works fine using colle
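The create step, roughly, via SolrJ (collection/config names and hosts are
placeholders; a sketch assuming SolrJ 6.x):

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

// Create a 3-shard collection pinned to the first 3 nodes; replicas on the
// remaining 12 nodes can be added later with ADDREPLICA.
CloudSolrClient cloud = new CloudSolrClient.Builder()
    .withZkHost("zk1:2181,zk2:2181,zk3:2181")
    .build();
CollectionAdminRequest.Create create =
    CollectionAdminRequest.createCollection("mycollection", "myconfig", 3, 1)
        .setCreateNodeSet("host1:8983_solr,host2:8983_solr,host3:8983_solr");
create.process(cloud);
cloud.close();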
Hi,
I'm building out a multi-shard Solr collection, as the index size is likely
to grow fast.
I was testing out the setup with 2 shards on 2 nodes with test data.
I indexed a few documents with "id" as the unique key.
Collection create command -
/solr/admin/collections?action=CREATE&name=multishard&num
Hi,
Trying to use the Solr streaming 'gatherNodes' function. It is for
extracting an email graph based on the from and to fields.
It requires the 'to' field to be a single-valued field with docValues
enabled, since it is used internally for sorting and unique streams.
The 'to' field can contain multiple email addr
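For reference, the shape of the expression being attempted (collection, root
node, and field names are examples):

String expr =
    "gatherNodes(emails,"
  + "  walk=\"john@example.com->from\","
  + "  gather=\"to\")";
// gatherNodes sorts and uniques on the gather field internally, which is
// why 'to' must be single-valued with docValues.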
Hi,
I got this error using streaming with SolrJ 8.6.3. Does it use noggit 0.8?
It was not mentioned in the dependencies:
https://github.com/apache/lucene-solr/blob/branch_8_6/solr/solrj/ivy.xml
Caused by: java.lang.NoSuchMethodError: 'java.lang.Object
org.noggit.ObjectBuilder.getValStrict()'
at org.apa
SolrJ against a newer Solr server (or
> vice versa).
>
> Mike
>
> On Fri, Nov 20, 2020 at 2:25 PM Susmit Shukla
> wrote:
>
> > Hi,
> > got this error using streaming with solrj 8.6.3 . does it use noggit-0.8.
> > It was not mentioned in dependencies
>