Looks like it is available through an HTTP POST request, as described in
https://lucidworks.com/blog/2014/08/12/indexing-custom-json-data/
Hence I assume a corresponding JSON data import from MySQL should also be
available. Can someone point me to the related docs?
Thanks,
Sriram
Hi All,
I have a use case where I want to index a JSON field from MySQL into
Solr. The JSON field will contain entries as key-value pairs. The JSON can
be nested, but I want to index only the first-level field-value pairs of
the JSON into Solr keys; nested levels can be present as the value of
c
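A minimal sketch of the first-level flattening described above, assuming the JSON field arrives from MySQL as a text column (the example keys are hypothetical):

```python
import json

def first_level_fields(json_text):
    """Keep only first-level key/value pairs; nested objects and arrays
    are re-serialized as JSON strings so Solr sees flat values."""
    data = json.loads(json_text)
    flat = {}
    for key, value in data.items():
        if isinstance(value, (dict, list)):
            flat[key] = json.dumps(value)  # nested level stays a string value
        else:
            flat[key] = value
    return flat

# The nested "address" object is kept as a single string value:
doc = first_level_fields('{"name": "a", "address": {"city": "b"}}')
```

The flattened dict can then be sent to Solr like any other flat document.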
Thanks, Gora, for your suggestions. Since my table contains a lot of fields, and
all the other fields have the same name in Solr and MySQL, I thought I
could give a mapping for the one that is different and leave the rest as is.
But is not selecting the id field in the returned query the only way to
Hi,
I am using Solr 4.6.1 and I am trying to import my data from MySQL to Solr.
In MySQL, I have a table with columns:
id, legacyid, otherfields...
In Solr I have columns: id, other fields. I want to map the legacyid field
in my MySQL table to Solr's id column and skip the "id" field of MySQL
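One way to do this with the DataImportHandler is a single explicit field mapping in data-config.xml, so the remaining columns still map by name; a sketch (table and column names are placeholders, and MySQL's own id column is left out of the SELECT so it cannot clash with Solr's id):

```xml
<entity name="item" query="SELECT legacyid, otherfields FROM mytable">
  <!-- map MySQL's legacyid column onto Solr's id field -->
  <field column="legacyid" name="id"/>
  <!-- columns without an explicit <field> mapping match by name -->
</entity>
```

Aliasing in SQL (`SELECT legacyid AS id, ...`) is an equivalent alternative if you prefer to keep the mapping in the query itself.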
Hi Jeyaprakash,
Thanks for your suggestions. Are you referring to DataImportHandler
properties, to configure this in data-config.xml? Since I am currently
referring to partial date search in my online queries, I am not sure whether
this will help achieve that. Can you please explain a bit more?
Thanks,
Hi Benedetti Alessandro,
Thanks for your comments. In our application, Solr search is used in
multiple places. With respect to using a middle layer, our online requests
go through the search API (middle layer), which is built on top of Solr,
whereas the editorial tool, along with a few other custom t
I am actually using one such component to take in partial dates like
2015-10, create full UTC dates out of them, and query using those. But since
I was checking that wiki about partial date search, and since I couldn't
find it (it seems to be available only from 5.x), I was curious to know if by some
w
Yes, Eric. I have been using that full-date-form date range query until now, and
we have a requirement change to search based on partial date ranges. Hence I
was looking at these options.
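The kind of expansion described above (a hypothetical sketch, for a Solr 4.x TrieDateField where full UTC timestamps are required) could look like this:

```python
import calendar

def month_range_query(field, partial):
    """Expand a partial date like '2015-10' into a full UTC range query."""
    year, month = (int(p) for p in partial.split("-"))
    last_day = calendar.monthrange(year, month)[1]  # days in that month
    start = f"{year:04d}-{month:02d}-01T00:00:00Z"
    end = f"{year:04d}-{month:02d}-{last_day:02d}T23:59:59Z"
    return f"{field}:[{start} TO {end}]"

q = month_range_query("date_field", "2015-10")
# q == "date_field:[2015-10-01T00:00:00Z TO 2015-10-31T23:59:59Z]"
```

The field name here is just illustrative; the point is that the partial date is turned into an inclusive full-timestamp range before it reaches Solr.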
Kind Regards,
-Sriram
Probably I should not have said it cannot be achieved, as we can still
achieve it by using multiple OR queries with regex matching on that String
field, though it doesn't look good :-)
-Sriram
Thanks, Shawn, for providing more info. It looks like, to support partial
date range search, I would need to rely on a String regex search like
fieldName:2016-01*
Though this can support part of the functionality, if I would like to
search between a start and an end date, this might not work well,
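As a sketch of the workaround above, an OR of per-month prefix patterns on the string field could be generated between two partial-date bounds (the field name is just the one from the example):

```python
def month_prefix_or_query(field, start_ym, end_ym):
    """Build fieldName:2015-11* OR fieldName:2015-12* ... for string-field
    matching between two year-month bounds (inclusive)."""
    y, m = (int(p) for p in start_ym.split("-"))
    ey, em = (int(p) for p in end_ym.split("-"))
    clauses = []
    while (y, m) <= (ey, em):
        clauses.append(f"{field}:{y:04d}-{m:02d}*")
        m += 1
        if m > 12:  # roll over into the next year
            y, m = y + 1, 1
    return " OR ".join(clauses)

q = month_prefix_or_query("fieldName", "2015-11", "2016-01")
# q == "fieldName:2015-11* OR fieldName:2015-12* OR fieldName:2016-01*"
```

As the post says, this is not pretty, and the clause count grows with the span, so it only suits short ranges.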
Hi,
I am using Solr 4.6.1. I have a date field (TrieDateField) in my schema and
I am trying to perform a partial date range search as given in
https://cwiki.apache.org/confluence/display/solr/Working+with+Dates
Query = date_field:[2016-01-11 TO NOW]
But I am getting,
"error": {
"msg": "Invalid
Hi Eric,
Thanks for your response. I was planning to do the same: store the data
in a single collection with a site parameter differentiating duplicated
content for different sites. But my use case is that in the future the content
would run into millions, and potentially there could be a large number o
Hi All,
Consider this scenario: I have around 100K content items and I want to
launch 5 sites with that content. For example, around 50K items for site1,
40K for site2, 30K for site3, 20K for site4, and 10K for site5.
As seen from this example, these sites have some overlapping content a
Great! Thanks for providing more info, Toke Eskildsen.
Thanks,
Sriram
--
View this message in context:
http://lucene.472066.n3.nabble.com/Best-way-to-dump-out-entire-solr-content-tp4192734p4192892.html
Sent from the Solr - User mailing list archive at Nabble.com.
Thanks, Alex, for the explanation. Actually, since I am scraping all the content
from Solr, I am doing a generic query of *:*, so I think it should not take
so much time, right?
But as you say, probably the internal skips using the cursor might be more
efficient than the skip done by increasing the start
Thanks, Alex, for the quick response. I wanted to avoid reading the Lucene index
directly to prevent the complications of merging deleted info. Also, I would like
to do this on a very frequent basis, like once every two or three days.
I am wondering if the issues that I faced while scraping the index towards
higher
Hi All,
I have a SolrCloud cluster of 20 nodes, with each node holding close to
20 million records; the total index size is around 400GB (20GB per node x 20
nodes). I am trying to find the best way to dump out the entire Solr data
in, say, CSV format.
I use successive queries by incrementing
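The successive-query loop can be restructured around cursorMark (deep paging, added in Solr 4.7) instead of an increasing start offset; a sketch with the actual HTTP call abstracted behind a fetch function (all names here are illustrative):

```python
def dump_all(fetch, page_size=1000):
    """Page through the whole index with cursorMark. `fetch` is assumed to
    perform the real Solr request (q=*:*, sort on the uniqueKey, rows,
    cursorMark) and return the decoded JSON response. Iteration stops
    when nextCursorMark repeats, which signals the last page."""
    mark = "*"
    while True:
        resp = fetch(q="*:*", sort="id asc", rows=page_size, cursorMark=mark)
        for doc in resp["response"]["docs"]:
            yield doc
        next_mark = resp["nextCursorMark"]
        if next_mark == mark:  # mark repeated -> no more results
            break
        mark = next_mark

# Usage with a stand-in fetch that fakes two pages of results:
pages = {
    "*": {"response": {"docs": [{"id": 1}, {"id": 2}]}, "nextCursorMark": "AoE"},
    "AoE": {"response": {"docs": [{"id": 3}]}, "nextCursorMark": "AoE"},
}
docs = list(dump_all(lambda **params: pages[params["cursorMark"]], page_size=2))
```

Unlike start-based paging, each cursor page costs roughly the same regardless of how deep into the result set it is, which matters at 400 million documents.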
Thanks Chris for additional info.
Thanks,
Sriram
Thanks, Chris, for your quick reply. As you said, I need to do some conversion
to get the UTC back, which I thought might not be required, since the
cDate field in that Date class already holds the UTC date.
toString() doesn't actually give me the timestamp in UTC format. It gives,
Mon Sep 15 12:52:
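In SolrJ terms the usual fix is to format the java.util.Date with a formatter pinned to UTC (e.g. SimpleDateFormat with TimeZone "UTC") rather than relying on toString(), which uses the JVM's local zone. The underlying conversion, sketched in Python from the epoch-milliseconds value that Date#getTime() returns:

```python
from datetime import datetime, timezone

def to_solr_utc(epoch_millis):
    """Format an epoch-milliseconds value (what java.util.Date#getTime
    returns) in Solr's canonical UTC form, e.g. 2014-09-15T12:52:00Z."""
    dt = datetime.fromtimestamp(epoch_millis / 1000, tz=timezone.utc)
    return dt.strftime("%Y-%m-%dT%H:%M:%SZ")

s = to_solr_utc(0)
# s == "1970-01-01T00:00:00Z"
```

The point is that the stored instant is already UTC; only the rendering step needs to be told not to apply the local time zone.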
Hi,
I have a date field in my Solr schema and I am indexing a proper UTC
date to that field. If I query Solr directly, I am able to see the
field with the UTC time in the JSON response.
But when I use SolrJ and get it as an object, I see that the UTC date is
of type Date and I a
Thanks Anshum for additional info.
- Sriram
I tried deleteByQuery, but a new searcher is still not opened on the replicas.
Hence I configured Solr to issue a soft commit every second. I didn't try
this with the latest Solr 4.10.3.
Thanks,
V.Sriram
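The every-second soft commit mentioned above is normally configured in solrconfig.xml; a sketch (the 1000 ms value mirrors the one-second interval described):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- open a new searcher via a soft commit every second -->
  <autoSoftCommit>
    <maxTime>1000</maxTime>
  </autoSoftCommit>
</updateHandler>
```

A soft commit makes new (and deleted) documents visible without the cost of a full hard commit, which is why it works around the searcher not reopening on replicas.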
Thanks, Shawn. I am not sure whether I will be able to test it with 4.10.3. I
will try the workarounds and update.
Thanks,
V.Sriram
Hi All,
I am using SolrCloud 4.6.1. If I use CloudSolrServer to add a record
to Solr, then I see the following commit update command on both the master and
the slave node:
2015-01-27 15:20:23,625 INFO org.apache.solr.update.UpdateHandler: start
commit{,optimize=false,openSearcher=true,waitSear
Yes, Erick. Correctly pointed out.
Thanks,
Sriram
Thanks Eric. I tried q.op=AND and noticed that it is equivalent to
specifying,
q=f1:"word1 word2" AND f2:"word3 word4" AND f3:"word5 word6"
Actually I found out how to form the query. I just need to use,
q=f1:(word1 word2) AND f2:(word3 word4) AND f3:(word5 word6)
Thanks,
V.Sriram
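The multi-field AND form above can also be assembled programmatically; a small sketch using the field and term names from the example:

```python
def and_query(field_terms):
    """Build q=f1:(w1 w2) AND f2:(w3 w4) ... from a field -> terms mapping."""
    clauses = [f"{field}:({' '.join(terms)})"
               for field, terms in field_terms.items()]
    return " AND ".join(clauses)

q = and_query({
    "f1": ["word1", "word2"],
    "f2": ["word3", "word4"],
    "f3": ["word5", "word6"],
})
# q == "f1:(word1 word2) AND f2:(word3 word4) AND f3:(word5 word6)"
```

Note that inside f1:(word1 word2) the terms are combined with the default operator, so with q.op=AND this matches documents containing both words in that field.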
Hi All,
This might be a simple question. I tried to find a solution but could not find
exactly what I want. I have the following fields: f1, f2 and f3. I want to do
an AND query on these fields.
If I want to search for a single word in these 3 fields, then I face no
problem. I can simply constru