In my quest to support indexing from files located in Azure storage (as
opposed to standard disk-based files), I have the following question.
The SOLR request parser (the request is configured for remote load, but
CommonParams.STREAM_FILE is still required, as it references the relative
Azure path) al
ectories to see if there are duplicates
> and/or if the expected jars are in them.
>
> Not a great deal of help, but "jar hell" is ugly.
>
> Best,
> Erick
>
> On Thu, Dec 29, 2016 at 12:49 PM, Vinay B, wrote:
> > Thanks,
> >
> > I think I already t
On Thu, Dec 29, 2016 at 12:51 PM, Emir Arnautovic <
emir.arnauto...@sematext.com> wrote:
> Hi Vinay,
>
> You need to include libs using lib directives in Solr config:
> https://cwiki.apache.org/confluence/display/solr/Lib+Directives+in+SolrConfig
>
> Regards,
> Emir
>
I'm modifying our custom update handler, and the modifications need access
to a third-party jar (Microsoft Azure).
For what it's worth, I use mvn as my build / packaging tool.
At runtime, I've been encountering class-not-found errors in the plugin
related to the Azure library.
1. Is there so
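A quick way to sanity-check this kind of jar problem is to log where the class
is actually being loaded from at runtime. A minimal sketch, assuming the legacy
Azure SDK's CloudBlockBlob as the example class name (substitute whichever
class the actual error names):

import java.security.CodeSource;

public class WhichJar {
    // Checks whether a class is visible to a given classloader and, if so,
    // which jar it was loaded from. Inside a plugin you would pass
    // getClass().getClassLoader() to see what the plugin's loader can reach.
    public static void report(String className, ClassLoader loader) {
        try {
            Class<?> clazz = Class.forName(className, false, loader);
            CodeSource src = clazz.getProtectionDomain().getCodeSource();
            System.out.println(className + " loaded from "
                + (src == null ? "JDK/bootstrap" : src.getLocation())
                + " via " + clazz.getClassLoader());
        } catch (ClassNotFoundException e) {
            System.out.println(className + " is NOT visible to " + loader);
        }
    }

    public static void main(String[] args) {
        // Example Azure SDK class name (an assumption; use the class from the error).
        report("com.microsoft.azure.storage.blob.CloudBlockBlob",
               WhichJar.class.getClassLoader());
    }
}

If the class resolves from two different locations under Solr's home versus the
webapp, that is the "jar hell" Erick mentioned.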
For a remote streaming scenario (opening / reading / processing / closing a
file's contents on the SOLR server, based upon the information passed in the
request), I'd like to be able to handle the following use case.
SOLR currently supports three concrete implementations for ContentStream -
one for Fi
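To make the use case concrete, here is a rough sketch of the kind of
ContentStream I have in mind, assuming the legacy com.microsoft.azure.storage
SDK and placeholder connection string / container / blob names (an
illustration only, not a finished implementation):

import java.io.IOException;
import java.io.InputStream;

import org.apache.solr.common.util.ContentStreamBase;

import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.blob.CloudBlockBlob;

// Sketch of a ContentStream backed by an Azure blob instead of a local file.
public class AzureBlobContentStream extends ContentStreamBase {

    private final String connectionString;
    private final String container;
    private final String blobPath;

    public AzureBlobContentStream(String connectionString, String container, String blobPath) {
        this.connectionString = connectionString;
        this.container = container;
        this.blobPath = blobPath;
    }

    @Override
    public String getName() {
        return blobPath;
    }

    @Override
    public String getSourceInfo() {
        return "azure:" + container + "/" + blobPath;
    }

    @Override
    public InputStream getStream() throws IOException {
        try {
            // Resolve the blob and hand its stream to whatever consumes the
            // ContentStream; the caller is responsible for closing it.
            CloudStorageAccount account = CloudStorageAccount.parse(connectionString);
            CloudBlockBlob blob = account.createCloudBlobClient()
                .getContainerReference(container)
                .getBlockBlobReference(blobPath);
            return blob.openInputStream();
        } catch (Exception e) { // URISyntaxException, StorageException, InvalidKeyException
            throw new IOException("Could not open Azure blob " + blobPath, e);
        }
    }
}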
Yes, that works (apart from the typo in PatternReplaceCharFilterFactory).
Here is my config:
On Wed, Nov 30, 2016 at 2:08 PM, Steve Rowe wrote:
> Hi Vinay,
>
> You should be able to use a char filter to conv
Prior discussion at
http://stackoverflow.com/questions/40877567/using-standardtokenizerfactory-with-currency
I'd like to maintain other aspects of the StandardTokenizer functionality,
but I'm wondering whether, to do what I want, the task boils down to being
able to instruct the StandardTokenizer not to dis
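In plain Lucene terms, the char-filter-before-tokenizer approach suggested
above boils down to something like the sketch below; the pattern, replacement
and sample text are placeholders, not the exact mapping needed here:

import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.regex.Pattern;

import org.apache.lucene.analysis.pattern.PatternReplaceCharFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

// Rewrites "$" to a textual marker before StandardTokenizer sees the text,
// so the currency information survives tokenization.
public class CurrencyCharFilterDemo {
    public static void main(String[] args) throws IOException {
        Reader filtered = new PatternReplaceCharFilter(
            Pattern.compile("\\$"), "usd", new StringReader("price is $42 today"));

        StandardTokenizer tokenizer = new StandardTokenizer();
        tokenizer.setReader(filtered);
        CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
        tokenizer.reset();
        while (tokenizer.incrementToken()) {
            System.out.println(term.toString()); // price, is, usd42, today
        }
        tokenizer.end();
        tokenizer.close();
    }
}

The schema.xml equivalent is a PatternReplaceCharFilterFactory charFilter ahead
of the StandardTokenizerFactory in the field type's analyzer chain.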
s extremely fast. The
> search results require child documents but faceting has to be done on text
> attributes which belong to parents. So we do this mapping by customizing
> the FacetComponent.
>
> On 18 July 2014 04:11, Vinay B, wrote:
>
> > Som
Some background info:
In our application, we have a requirement to update a large number of records
often. I investigated SOLR child documents, but that requires updating both
the child and the parent document. Therefore, I'm investigating adding
frequently updated information in an "auxiliary docume
"defType": "edismax" } }, "response": { "numFound": 0, "start":
0, "docs": [] } }
Debug-enabled query at
https://gist.github.com/anonymous/625e7669918deba4a071
Thanks
On Tue, Jun 24, 2014 at 7:35 PM, Alexandre Rafalovitch
wrote:
>
When I edit a child document, a block join query for the parent no longer
returns any hits. I thought I read that this was the way things worked but
needed to know for sure.
If so, is there any other way to achieve this functionality (I can deal
with creating the child doc with the parent, but wou
Sorry, previous post got sent prematurely.
Here is the complete post:
This is easy if I only need to define a custom field to identify the desired
patterns (numbers, in my case)
For example, I could define a field thus:
Input:
hello, world bye 123-45 abcd sdfssdf -
This is easy if I only need to define a custom field to identify the desired
patterns (numbers, in my case)
For example, I could define a field thus:
Input:
hello, world bye 123-45 abcd sdfssdf --- aaa
Output:
123-45 ,
However, I also want to retain the behavi
> you didn't do it yet... I really don't know
>
> stored="true" required="false" multiValued="true" />
>
> Just a hint: to debug block joins, use wt=csv, which shows the block
> alignment pretty well.
>
> On Tue, Jun 24, 2014 at 10:3
> On Tue, Jun 24, 2014 at 9:43 PM, Vinay B, wrote:
>
> > Hi,
> > Yes, the query ATTRIBUTES.STATE:TX returns the child doc (see response
> > below). Is there something else that I'm missing to link the parent and
> > the child? I followed your advice from my
d":"1-A",
"ATTRIBUTES.STATE":["LA",
"TX"]},
{
"id":"1",
"content_type":"parentDocument",
"_version_":1471814208097091584}]
}}
On Tue, Jun 24, 2014
{!parent which="content_type:parentDocument"}ATTRIBUTES.STATE:TX&wt=json&indent=true
On Mon, Jun 23, 2014 at 4:04 PM, Erick Erickson
wrote:
> Well, what do you mean by "not working"? You might review:
> http://wiki.apache.org/solr/UsingMailingLists
>
> Best,
> E
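For reference, the same block join query issued from SolrJ looks roughly like
the sketch below; the client URL and collection name are placeholders, and it
assumes a recent SolrJ client (HttpSolrClient.Builder):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

// Sketch: block join parent query from SolrJ.
public class BlockJoinQueryDemo {
    public static void main(String[] args) throws Exception {
        try (SolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
            // Return parent docs whose children match ATTRIBUTES.STATE:TX
            SolrQuery q = new SolrQuery(
                "{!parent which=\"content_type:parentDocument\"}ATTRIBUTES.STATE:TX");
            QueryResponse rsp = client.query(q);
            rsp.getResults().forEach(doc -> System.out.println(doc.getFieldValue("id")));
        }
    }
}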
Hi,
I've been trying to experiment with block joins and parent/child docs as
described in this thread (input described in my first post of the thread,
and block join in my second post, as per the suggestions given). What
else am I missing?
Thanks
http://lucene.472066.n3.nabble.com/Why-aren-t
> but you need
> https://issues.apache.org/jira/browse/SOLR-5285
>
>
> On Thu, Jun 19, 2014 at 3:20 AM, Vinay B, wrote:
>
> > Probably a silly error. Can someone point out my mistake? Code and output
> > gists at https://gist.github.com/anonymous/fb9cdb5b44e76b2
SolrJ allows a direct linkage between parent and child documents using
SolrInputDocument.addChildDocument(...).
We, however, construct our request via a raw UpdateRequest(), as that gives
us a bit more flexibility. I'm investigating how best to add nested docs
using this approach.
From my underst
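Roughly what I have in mind, as a sketch only (client URL, collection and the
field values, which are borrowed from the example earlier in this thread, are
placeholders): keep attaching the child with addChildDocument() and let the
raw UpdateRequest carry the parent.

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

// Sketch: nested docs sent through a raw UpdateRequest.
public class NestedDocUpdateDemo {
    public static void main(String[] args) throws Exception {
        SolrInputDocument parent = new SolrInputDocument();
        parent.addField("id", "1");
        parent.addField("content_type", "parentDocument");

        SolrInputDocument child = new SolrInputDocument();
        child.addField("id", "1-A");
        child.addField("ATTRIBUTES.STATE", "TX");

        parent.addChildDocument(child);          // nesting lives on the document itself

        UpdateRequest req = new UpdateRequest(); // the "raw" request we already build
        req.add(parent);

        try (SolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
            req.process(client);
            client.commit();
        }
    }
}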
Probably a silly error. Can someone point out my mistake? Code and output
gists at https://gist.github.com/anonymous/fb9cdb5b44e76b2c308d
Thanks
Code:
SolrInputDocument solrDoc = new SolrInputDocument();
solrDoc.addField("id", documentId);
solrDoc.addField("content_type",
As I understand it, the "lots of cores" feature enables dynamic loading
and unloading of cores.
This is how I set up my solr.xml for a test where I created more cores than
the transientCacheSize.
Here is a link to the config in case it doesn't format well via this post.
https://gist.github.com/ano
What we need is similar to what is discussed here, except not as a filter
but as an actual query:
http://lucene.472066.n3.nabble.com/filter-query-from-external-list-of-Solr-unique-IDs-td1709060.html
We'd like to implement a query parser/scorer that would allow us to combine
SOLR searches with sear
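To make the idea concrete, here is a very rough sketch of such a parser
plugin. The class and parameter names are made up, the IDs arrive as a
comma-separated string rather than from the real external source, and a real
version would need something better than a BooleanQuery for very large lists
(clause limits); older Solr versions may also require an init(NamedList)
override.

import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.search.QParser;
import org.apache.solr.search.QParserPlugin;

// Sketch: turn an external list of unique IDs into a real query clause.
public class ExternalIdQParserPlugin extends QParserPlugin {

    @Override
    public QParser createParser(String qstr, SolrParams localParams,
                                SolrParams params, SolrQueryRequest req) {
        return new QParser(qstr, localParams, params, req) {
            @Override
            public Query parse() {
                String field = localParams == null ? "id" : localParams.get("f", "id");
                BooleanQuery.Builder builder = new BooleanQuery.Builder();
                for (String id : qstr.split(",")) {
                    builder.add(new TermQuery(new Term(field, id.trim())),
                                BooleanClause.Occur.SHOULD);
                }
                return builder.build();
            }
        };
    }
}

It would still have to be registered as a queryParser plugin in solrconfig.xml
like any other parser.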
As I understand it, SOLR allows us to plug in language detection
processors: http://wiki.apache.org/solr/LanguageDetection
Given that our use case involves a collection of mixed-language documents:
Q1: Assuming that we plug in language detection, will this affect the
stemming and other language specifi
I'm trying to explore part-of-speech tagging with SOLR. Firstly, am I
right in assuming that OpenNLP integration is the right direction in
which to proceed?
With respect to getting OpenNLP to work with SOLR
(http://wiki.apache.org/solr/OpenNLP), I tried following the
instructions, only to be
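Independent of the SOLR wiring, the underlying OpenNLP calls I'm after look
roughly like the sketch below; "en-pos-maxent.bin" is a placeholder for
whichever pre-trained POS model is downloaded, and the sample sentence is
arbitrary:

import java.io.FileInputStream;
import java.io.InputStream;

import opennlp.tools.postag.POSModel;
import opennlp.tools.postag.POSTaggerME;
import opennlp.tools.tokenize.SimpleTokenizer;

// Plain OpenNLP part-of-speech tagging, outside of Solr, to show the moving parts.
public class PosTagDemo {
    public static void main(String[] args) throws Exception {
        try (InputStream modelIn = new FileInputStream("en-pos-maxent.bin")) {
            POSTaggerME tagger = new POSTaggerME(new POSModel(modelIn));

            String[] tokens = SimpleTokenizer.INSTANCE.tokenize("Solr indexes documents quickly");
            String[] tags = tagger.tag(tokens);

            for (int i = 0; i < tokens.length; i++) {
                System.out.println(tokens[i] + "/" + tags[i]); // e.g. Solr/NNP
            }
        }
    }
}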