Hi,
It seems that using the DataImportHandler with an XPathEntityProcessor
config on a managed-schema setup only imports the id and _version_ fields.
data-config.xml: processor="XPathEntityProcessor" stream="true" forEach="/posts/row/" url=""
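For reference, a fuller data-config.xml for this kind of import might look
like the sketch below (url, entity name and xpath mappings are placeholders):

  <dataConfig>
    <dataSource type="URLDataSource"/>
    <document>
      <entity name="post"
              processor="XPathEntityProcessor"
              stream="true"
              forEach="/posts/row"
              url="file:///path/to/posts.xml">
        <!-- without explicit xpath mappings, only id/_version_ tend to appear -->
        <field column="id"    xpath="/posts/row/@Id"/>
        <field column="title" xpath="/posts/row/@Title"/>
      </entity>
    </document>
  </dataConfig>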
I have a couple of filters that are text-input based, where the user will
input a value into the text boxes of these filters.
The condition is that these filters should only be displayed if the
corresponding facets exist in the search result.
E.g. the Min Order Qty filter will be displayed if the Min Order Qty facet
exists in the
Do you have the actual fields defined? If not, then I am guessing that
your 'post' test was against a different collection that had
schemaless mode enabled and your DIH one is against one where
schemaless mode is not enabled (look for
'add-unknown-fields-to-the-schema' in the solrconfig.xml to conf
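For context, the schemaless chain in the stock data_driven configs looks
roughly like this (abridged; the exact processors vary by version):

  <updateRequestProcessorChain name="add-unknown-fields-to-the-schema">
    <!-- the stock chain also parses booleans, numbers and dates first -->
    <processor class="solr.AddSchemaFieldsUpdateProcessorFactory">
      <str name="defaultFieldType">strings</str>
    </processor>
    <processor class="solr.LogUpdateProcessorFactory"/>
    <processor class="solr.DistributedUpdateProcessorFactory"/>
    <processor class="solr.RunUpdateProcessorFactory"/>
  </updateRequestProcessorChain>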
Hi All,
We have 3 ZK VMs and 3 Solr VMs on Solr 6, and we have implemented CDCR
(Windows). A dedicated drive has been set up for Solr and ZK separately.
The issue we are facing is that 2 nodes show up together and 1 node shows up
separately in the same external ZooKeeper. Please note that restarting ZK w
Hi Alex,
thanks for your answer.
Yes, my solrconfig.xml contains the add-unknown-fields-to-the-schema chain.
I created my core using this command:
curl http://192.168.99.100:8999/solr/admin/cores?action=CREATE&name=solrexchange&instanceDir=/opt/s
Your initParams section does not apply to the /dataimport handler as
defined. Try modifying it to say:
path="/update/**,/dataimport"
Hopefully, that's all it takes.
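i.e., something like this in solrconfig.xml (assuming the stock schemaless
chain name):

  <initParams path="/update/**,/dataimport">
    <lst name="defaults">
      <str name="update.chain">add-unknown-fields-to-the-schema</str>
    </lst>
  </initParams>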
Managed schema is enabled by default, but schemaless mode is the next
layer on top. With managed schema, you can use the API to add yo
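e.g., via the Schema API (field name and type are placeholders):

  curl -X POST -H "Content-Type: application/json" \
       "http://192.168.99.100:8999/solr/solrexchange/schema" -d '{
    "add-field": { "name": "title", "type": "text_general", "stored": true }
  }'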
It did not work.
I tried many things and ended up trying this:
solr-data-config.xml
add-unknown-fields-to-the-schema
Regards,
Pierre
> On 10 Aug 2016, at 18:08, Alexandre Rafalovitch wrote:
>
> Your initParams section does not apply to /dataimp
Ok, to reduce the magic, you can just stick the "update.chain" parameter
inside the defaults of the dataimport handler directly.
You can also pass it as a URL parameter. That's what the 'defaults'
section means.
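A sketch of both options (handler class and file name follow the stock DIH
setup; the URL is illustrative):

  <requestHandler name="/dataimport"
                  class="org.apache.solr.handler.dataimport.DataImportHandler">
    <lst name="defaults">
      <str name="config">solr-data-config.xml</str>
      <str name="update.chain">add-unknown-fields-to-the-schema</str>
    </lst>
  </requestHandler>

or, per request:

  curl "http://localhost:8983/solr/solrexchange/dataimport?command=full-import&update.chain=add-unknown-fields-to-the-schema"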
And, just to be paranoid, you did reload the core after each of those
changes to test it?
I am rebuilding a new Docker image with each change to the config file, so
Solr starts fresh every time.
add-unknown-fields-to-the-schema
solr-data-config.xml
I am still getting documents like this:
"response":{"numFound":8,"start":0,"docs":[
{
"id":"38
Hi Midas,
According to your autocommit configuration and your worry about commit
time, I assume that you are doing explicit commits from client code and
that 1.3s is the client-observed commit time. If that is the case, then
it might be the opening of the searcher that is taking time.
How do you index data
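For reference, a common solrconfig.xml pattern that keeps hard commits cheap
is to let only soft commits open a searcher (times are placeholders):

  <autoCommit>
    <maxTime>15000</maxTime>
    <openSearcher>false</openSearcher>  <!-- hard commit flushes, no new searcher -->
  </autoCommit>
  <autoSoftCommit>
    <maxTime>60000</maxTime>            <!-- visibility comes from soft commits -->
  </autoSoftCommit>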
Seems you might be right, according to the source:
https://github.com/apache/lucene-solr/blob/master/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/DocBuilder.java#L662
Sometimes, the magic (and schemaless is rather magical) fails when
combined with older assumptions (an
Hi,
I am using Solr 5.2.1 in cloud mode with 3 shards on 3 different servers.
Each server has around 20 GB of index data. Total memory on each server is
around 50 GB.
Continuous updates and queries are being fired at Solr.
We have been facing OOM issues due to heap pressure.
args we use: giving 3 GB
Hi Preeti,
3GB heap is too small for such a setup. I would try 10-15GB, but that
depends on usage patterns. You have a 50GB machine, and assuming that you
do not run anything other than Solr, you have 30GB to spare for Solr and
still leave enough for the OS to cache the entire index.
The best way to do heap
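For reference, the heap is usually set in the include script picked up by
bin/solr (sizes here are illustrative; keep Xms equal to Xmx):

  # solr.in.sh (solr.in.cmd on Windows)
  SOLR_JAVA_MEM="-Xms10g -Xmx10g"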
Hi Derek,
Not sure if there is some shortcut, but you could try setting
facet.sort=index and, for sure, use facet.limit=1.
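For example (collection and field names are placeholders; facet.mincount=1
is my addition so that empty facets drop out):

  curl "http://localhost:8983/solr/products/select?q=*:*&rows=0&facet=true&facet.field=min_order_qty&facet.limit=1&facet.sort=index&facet.mincount=1"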
Regards,
Emir
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/
On 10.08.2016 09:32, Derek Poh
Hi Mayur,
Not sure if I get your case completely, but if you need the query results
not sorted by score, you can use boost factors of 0 in your edismax
definition (e.g. qf=title^0) or you can order by doc id (sort=_docid_ asc).
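For example (field names are placeholders):

  curl "http://localhost:8983/solr/products/select?q=foo&defType=edismax&qf=title^0+description^0&sort=_docid_+asc"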
HTH,
Emir
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Ma
On 09/08/2016 18:11, Rose, John B wrote:
We are looking at Solr for a Drupal web site. We have never installed Solr.
From my readings it is not clear exactly what we need to implement a search in
Drupal with Solr. Some sites have implied Lucene and/or Tomcat are needed.
Can someone point me
Thanks Alexandre,
I solved the problem using the xslt transform and the /update handler.
I attach the xsl that I put in conf/xslt/ (for documentation)
Then the command:
curl "http://192.168.99.100:8999/solr/solrexchange/update?commit=true&tr=updateXmlSolrExchange.xsl" \
     -H "Content-Type: text/x
Hello,
I wonder if Solr offers a feature (class) to handle different orthography
variants?
For the German language, for example ... in order to find the same documents
when searching for "Foto" or "Photo".
I appreciate any help!
Rainer
Rainer Gn
ICU normalization (ICUFoldingFilterFactory) will at least handle "ß" -> "ss"
(IIRC) and some other language-general variants that might get you close.
There are, of course, language-specific analyzers
(https://wiki.apache.org/solr/LanguageAnalysis#German), but I don't think
they'll get you Fo
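For what it's worth, a field type wired for ICU folding might look like
this (type name is a placeholder; ICUFoldingFilterFactory needs the
analysis-extras contrib jars):

  <fieldType name="text_de_folded" class="solr.TextField" positionIncrementGap="100">
    <analyzer>
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <!-- folds case, diacritics, and e.g. the German sharp s -->
      <filter class="solr.ICUFoldingFilterFactory"/>
    </analyzer>
  </fieldType>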
Can you use Solr's synonym feature? You can find a German synonym file here:
https://sites.google.com/site/kevinbouge/synonyms-lists
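For example, a one-line synonyms file plus the filter in the analyzer chain
(file name is a placeholder):

  # synonyms_de.txt
  Foto,Photo

  <filter class="solr.SynonymFilterFactory" synonyms="synonyms_de.txt"
          ignoreCase="true" expand="true"/>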
Alexandre Drouin
-Original Message-
From: Rainer Gnan [mailto:rainer.g...@bsb-muenchen.de]
Sent: Wednesday, August 10, 2016 10:21 AM
To: solr-user@lucene
BeiderMorse supports phonetic variations like Foto / Photo and has
support for many languages, including German. Please see
https://cwiki.apache.org/confluence/display/solr/Phonetic+Matching
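A sketch of the filter as typically dropped into an analyzer chain (the
attribute values are illustrative; see the page above for the full set):

  <filter class="solr.BeiderMorseFilterFactory"
          nameType="GENERIC" ruleType="APPROX"
          concat="true" languageSet="auto"/>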
Thanks,
Susheel
On Wed, Aug 10, 2016 at 2:47 PM, Alexandre Drouin <
alexandre.dro...@orckestra.com
Hi All,
Please provide some help with this issue.
Ritesh K
Infrastructure Sr. Engineer - Jericho Team
Sales & Marketing Digital Services
t +91-7799936921 v-kur...@microsoft.com
-Original Message-
From: Ritesh Kumar (Avanade) [mailto:v-kur...@microsoft.com]
Sent: 10 August 2016 15:38
To
I still haven't found the reason for the NPE in my post filter when it runs
against a sharded collection, so I'm posting my code in the hopes that a
seasoned Solr pro might notice something. I thought perhaps not treating the
doc values as multi doc values when indexes are segmented might have been
Quick update: the NPE was related to the way in which I passed params into
the Query via solrconfig.xml. It works fine for a single-sharded collection,
but something about it was masking the unique ID field in a multi-sharded
environment. Anyway, I was able to fix that by cleaning up the request
handler config:
Hi All,
Thanks so much for your inputs. We have a MySQL data source, and I think we
will try to re-index using the MySQL data.
I wanted something where I can export all my current data, say to an Excel
file or some other data source, and then import it on another node with the
same collection with empty d
Right... SOLR doesn't work quite that way...
Keep in mind the value of the data import jar if you have the data from
MySQL stored in a text file, although that would require a little
programming to get the data into the proper format...
But once you get everything into a text file or similar, you
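If every field you need is stored, a CSV round-trip is one low-tech option
(collection name and row count are placeholders; non-stored fields and
copyField targets won't survive this):

  # dump all stored fields as CSV
  curl "http://localhost:8983/solr/collection1/select?q=*:*&wt=csv&rows=1000000" > dump.csv
  # load it into the collection on the other node
  curl "http://localhost:8983/solr/collection1/update?commit=true" \
       -H "Content-Type: application/csv" --data-binary @dump.csv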
Hi,
I get this error when Solr is indexing. After this error appears, no
collection can index to Solr.
org.apache.lucene.index.CorruptIndexException: checksum failed (hardware
problem?) : expected=7a1d93c3 actual=c44ec423
(resource=BufferedChecksumIndexInput(RAMInputStream(name=_27zi.tvd)))
Hi ,
we are indexing to 2 cores, say core1 and core2, with the help of curl POSTs.
When we post, core1 takes much less time than core2,
while the doc size is the same on both servers.
This makes core2 indexing very slow. The only difference is that core2 has a
heavier indexing rate; we index more docs/sec on core
Emir,
we post JSON documents through curl, and that is what takes the time (at the
same time, I would like to say that we are not hard committing). That curl
takes 1.3 sec.
On Wed, Aug 10, 2016 at 2:29 PM, Emir Arnautovic <
emir.arnauto...@sematext.com> wrote:
> Hi Midas,
>
> According to your autocommi
Emir,
other queries:
a) Solr cloud: NO
b)
c)
d)
e) we are using a multi-threaded system.
On Thu, Aug 11, 2016 at 11:48 AM, Midas A wrote:
> Emir,
>
> we post JSON documents through curl, and that is what takes the time (at
> the same time, I would like to say that we are not hard committing). That
> curl takes