The date_created field is being indexed as a string and as text_general, making it
difficult to apply date range queries.
date_created_s and date_created_t are the dynamic-field suffixes, and trying to copy
them to *_tdate throws an error.
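As a hedged sketch of what the schema side might look like (the stock `*_tdt` TrieDate dynamic field exists in default Solr schemas of that era; the specific field names here are illustrative, not from the original post):

```
<!-- Default-style TrieDate dynamic field from the stock schema -->
<dynamicField name="*_tdt" type="tdate" indexed="true" stored="true"/>

<!-- Copying into a date-typed field only works if the source values are
     already valid ISO-8601 date strings (e.g. 2017-07-29T14:35:00Z);
     otherwise the copy fails at index time -->
<copyField source="date_created_s" dest="date_created_tdt"/>
```

With a properly typed field, a range query would then look something like `date_created_tdt:[2017-01-01T00:00:00Z TO 2017-12-31T23:59:59Z]`.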
"copyFields can take glob-type source specifications if that helps. "
Can
Hello,
Kindly check the Solr logs when you hit the query. Attach them
here so that I can give more insight.
To me it looks like an OOM, but check the Solr logs; I hope we can get
more information from there.
On Sat, Jul 29, 2017, 14:35 SOLR6932 wrote:
> Hey all,
> I am using Sol
bq: We want to use a copy field as a source for another copy field.
As asked, no, this is not supported. You can, however, copy the same source field
to multiple destination fields.
copyFields can take glob-type source specifications if that helps.
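To illustrate the glob point, a minimal sketch (field names hypothetical): the wildcard applies to the *source* side, while chaining, i.e. using one copyField's destination as another copyField's source, is what is not supported.

```
<!-- Copy every field matching *_t into a single catch-all field -->
<copyField source="*_t" dest="text_all"/>
```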
It would help if you gave concrete examples. You say "date c
This is a variant of "the sizing question", here's the long
explanation of why there's no hard rule as Aman says:
https://lucidworks.com/2012/07/23/sizing-hardware-in-the-abstract-why-we-dont-have-a-definitive-answer/
One variant of your question is "when do you have to shard a
collection". SolrC
There is no solid rule. Honestly, standalone Solr can handle quite a bit; I
don't think there's a valid reason to go to SolrCloud unless you are starting from
scratch and want to use the newest buzzword. Standalone can handle well over
half a terabyte of index at sub-second speeds all day long.
Hello Sara,
There is no hard and fast rule; performance depends on caches, RAM, HDD, etc., and
how many resources you can invest to keep performance acceptable.
Information on the number of indexed documents and the number of dynamic fields can
be viewed at the link below. I hope this helps.
http://lucene.4
We want to use a copy field as a source for another copy field.
The problem is that the source field is a text (dynamic) field and the
destination field should be a date.
We tried changing the dynamic field's data type, but it throws an error.
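A hedged sketch of the workaround suggested elsewhere in this thread: instead of chaining copyFields, copy the original source field into each destination directly (field names here are illustrative, not from the original schema):

```
<!-- Not supported: source -> copy A -> copy B.
     Supported: copy the original source into each destination. -->
<copyField source="date_created" dest="date_created_t"/>
<copyField source="date_created" dest="date_created_tdt"/>
```

Note that the date-typed destination still requires the raw values to be parseable as ISO-8601 dates; if they are arbitrary text, the copy will fail regardless of how it is wired.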
Ideally, my search index data source is Outlook PST files, as the index
Hey all,
I am using Solr 4.10.3 and my collection consists of around 2,300 large
documents that are distributed across a number of shards. Each document is
estimated to be around 50-70 megabytes. The queries that I run are
sophisticated, involving a range of parameters and diverse query filters.
Whenever
Hi all,
I want to know when standalone Solr is no longer sufficient for storing data
and we need to migrate to SolrCloud — for example, when standalone Solr takes too
much time to return query results or to store documents, etc.
In other words, what is the best capacity and index data size in standalone
Solr tha