Hi Mohan,
I haven’t looked at the latest problems, but the ICU folding filter should be
the last filter, to allow the Arabic normalization and stemming filters to see
the original words.
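As a sketch, assuming a typical Arabic text field type (the tokenizer and the other filter choices here are illustrative, not the actual schema from this thread), that ordering would look like:

```xml
<fieldType name="text_ar" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.ArabicNormalizationFilterFactory"/>
    <filter class="solr.ArabicStemFilterFactory"/>
    <!-- ICU folding last, so the Arabic filters see the original words -->
    <filter class="solr.ICUFoldingFilterFactory"/>
  </analyzer>
</fieldType>
```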
--
Steve
www.lucidworks.com
> On Feb 8, 2017, at 10:58 PM, mohanmca01 wrote:
>
> Hi Steve,
>
> Thanks fo
Hi Steve,
Thanks for your continued investigation of this issue.
I added the ICU Folding Filter to the schema.xml file and re-indexed all the
data. I noticed some improvements in search, but it's not really as expected.
Below is the configuration change in the schema file:
-
Tom Evans-2 wrote
> I don't think there is such a thing as an interval JSON facet.
> Whereabouts in the documentation are you seeing an "interval" as JSON
> facet type?
>
>
> You want a range facet surely?
>
> One thing with range facets is that the gap is fixed size. You can
> actually do your
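For reference, a range facet over the height field from the original question could be written in the JSON Facet API roughly like this (field name taken from the thread; the start/end/gap values are illustrative, and note the fixed-size gap mentioned above):

```json
{
  "height_facet": {
    "type": "range",
    "field": "height",
    "start": 160,
    "end": 190,
    "gap": 10
  }
}
```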
As per the documentation:
"So all downstream components (faceting, highlighting, etc...) will
work with the collapsed result set."
So, no, you cannot facet on the expanded groups. Partly because they are
not really fully expanded (there is a limit on the number of items in each
group). But also, are you trying to facet
/update/json expects Solr JSON update format.
/update is an auto-route that should be equivalent to /update/json
with the right content type/extension.
/update/json/docs expects random JSON and tries to extract fields for
indexing from it.
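A sketch of the difference, reusing the core name from this thread (the documents and field names here are made up):

```shell
# /update/json expects Solr's own update format: an array of flat documents
curl 'http://localhost:8983/solr/testcore2/update/json?commit=true' \
  -H 'Content-Type: application/json' \
  -d '[{"id":"1","title":"hello"}]'

# /update/json/docs accepts arbitrary JSON and tries to extract fields from it
curl 'http://localhost:8983/solr/testcore2/update/json/docs?commit=true' \
  -H 'Content-Type: application/json' \
  -d '{"article":{"id":"1","meta":{"title":"hello"}}}'
```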
https://cwiki.apache.org/confluence/display/solr/Transform
Alexander - thanks! It seems to work great.
I still have one question: if I want to do a facet query that also includes
the documents in the expanded section, is this possible? If I apply a facet
query like "facet=true&facet.field=modality", it counts only the head
documents.
Thanks,
Cristian.
On Sun
> Thank you, I will follow Erick's steps.
> BTW I am also trying to ingest using Flume; Flume uses Morphlines along
> with Tika.
> Will Flume SolrSink have the same issue?
Yes, when using Tika you run the risk of it choking on a document, eating CPU
and/or RAM until everything dies. This i
Dear Solr users,
can somebody explain the exact difference between the two update handlers? I'm
asking because with some curl commands Solr fails to identify the fields of the
JSON doc and indexes everything in _str_:
Those work perfectly:
curl 'http://localhost:8983/solr/testcore2/update/json?comm
Shawn,
Thank you, I will follow Erick's steps.
BTW I am also trying to ingest using Flume; Flume uses Morphlines along
with Tika.
Will Flume SolrSink have the same issue?
Currently my SolrSink does not ingest the data, and I also do not see any
errors in my logs.
I am seeing a lot of issues w
On 2/8/2017 9:08 AM, Anatharaman, Srinatha (Contractor) wrote:
> Thank you for your reply.
> The other archive message you mentioned was posted by me.
> I am new to Solr. When you say process it outside Solr in a program, what
> exactly should I do?
>
> I have lots of text documents which I need to inde
In my requirement, when a Solr search finds the string, it has to return the
entire text document (emails in RTF format). If I process it outside Solr,
how do I achieve this?
When you say process outside, what do I do with the RTF document? And also
the search result has to return the original documen
> It is *strongly* recommended to *not* use the Tika that's embedded within
> Solr, but instead to do the processing outside of Solr in a program of your
> own and index the results.
+1
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201601.mbox/%3CBY2PR09MB11210EDFCFA297528940B07C
Yes, all three fields should be docValues. The point of docValues is
to keep from "uninverting" the index into Java's heap. Any
time you have to answer the question "what is the value of
docX.fieldY?", it should be a docValues field. The way facets (and
function queries, for that matter) w
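Concretely, that means declaring the fields with docValues="true" in the schema. A sketch, borrowing the modality field mentioned earlier in this digest (type and attribute values are illustrative; string types support docValues):

```xml
<field name="modality" type="string" indexed="true" stored="true" docValues="true"/>
```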
Hi -
I have a schema that looks like:
(text_nost and text_st are just defined field types without/with stopwords...
irrelevant to the issues here)
These 3 fields are parallel in terms of their values. I want to be able to
match these values and be able to search something like:
give me all attach
Erick,
I have tested it in Solr standalone mode and it works perfectly fine.
To answer your other question: yes, I have uploaded all my config files,
including the tikaConfig file, to ZooKeeper using the solr upconfig command as below
./solr zk -upconfig -n gsearch -d
/app/platform/solr1/server/solr/confi
Shawn,
Thank you for your reply.
The other archive message you mentioned was posted by me.
I am new to Solr. When you say process it outside Solr in a program, what
exactly should I do?
I have lots of text documents which I need to index; what should I apply to
these documents before loading them into Sol
On 2/5/2017 9:21 PM, Arun Kumar wrote:
> We are facing an error "Cannot write to config directory
> /var/solr/data/marketing_prod_career_all_index/conf; switching to use
> InMemory storage instead." on our Solr box. When it occurs, the Solr
> service stops responding and we have to restart it.
On 2/6/2017 3:45 PM, Anatharaman, Srinatha (Contractor) wrote:
> I am getting the below error while trying to index using DataImportHandler.
>
> The data-config file is mentioned below. ZooKeeper is not able to read
> "tikaConfig.xml" in the below statement
>
> processor="TikaEntityProcessor" tikaConfig="tika
Dear Apache Enthusiast,
This is your FINAL reminder that the Call for Papers (CFP) for ApacheCon
Miami is closing this weekend - February 11th. This is your final
opportunity to submit a talk for consideration at this event.
This year, we are running several mini conferences in conjunction with
t
I've been trying to figure out how exactly docValues help with facet
queries, and I only seem to find mention that they are beneficial to facet
performance without many specifics. What I'd like to know is whether it
applies to all fields used in the facet or just fields that are faceted on.
For ex
Can you post the final iteration of the model?
Also the expression you used to train the model?
How much training data do you have? How many positive and negative
examples?
Joel Bernstein
http://joelsolr.blogspot.com/
On Tue, Feb 7, 2017 at 2:14 PM, Susheel Kumar wrote:
> Hello,
>
>
Hi all,
thanks to Andrea Gazzarini's suggestion, I solved it using local params (which
are different from macro expansion, even if conceptually similar).
Local params were already available in Solr 4.10.x.
I appended this filter query in the request handler of interest:
{!lucene df=filterField v=$allow
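In solrconfig.xml such a filter sits in the handler's appends section, roughly like this (the parameter name allowedValues is a placeholder, since the actual name is cut off above; the client then supplies it on the request, e.g. &allowedValues=42):

```xml
<lst name="appends">
  <str name="fq">{!lucene df=filterField v=$allowedValues}</str>
</lst>
```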
On Tue, Feb 7, 2017 at 8:54 AM, deniz wrote:
> Hello,
>
> I am trying to run JSON facets with an interval query as follows:
>
>
> "json.facet":{"height_facet":{"interval":{"field":"height","set":["[160,180]","[180,190]"]}}}
>
> And related field is stored="true" />
>
> But I keep seeing errors li
Hi John,
let me try to recap:
your Solr document is an Item with a price as one of the fields, a
purchaseGroupId and a groupId.
You filter by purchaseGroupId and then you group by (or collapse on) the
groupId.
At this point, how do you want to assign the score?
For each document in a groupId you want
Hi Mugeesh,
my fault: a point is missing there, as suggested by
"-ea was not specified but "
You need to add the "-ea" VM argument. If you are in Eclipse,
Run >> Run Configurations,
then in the dialog that appears, select the run configuration
corresponding to that class (StartD
Thank you Fuad,
with dbcp2 BasicDataSource it is working.
First I needed to add these libraries to server/lib/ext:
commons-dbcp2-2.1.1.jar
commons-logging-1.2.jar
commons-pool2-2.4.2.jar
I found the current versions at http://mvnrepository.com/search?q=dbcp
Then my DataSource looks like this
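For the archives, one common shape for such a DataSource, registered via Jetty JNDI with dbcp2 (the JNDI name, driver, and connection details below are placeholders, and exact Jetty syntax varies by version):

```xml
<New id="exampleDS" class="org.eclipse.jetty.plus.jndi.Resource">
  <Arg>jdbc/exampleDS</Arg>
  <Arg>
    <New class="org.apache.commons.dbcp2.BasicDataSource">
      <Set name="driverClassName">org.postgresql.Driver</Set>
      <Set name="url">jdbc:postgresql://localhost:5432/exampledb</Set>
      <Set name="username">solr</Set>
      <Set name="password">secret</Set>
    </New>
  </Arg>
</New>
```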