The data displayed when doing a query is in the correct case. The fieldType
doesn't do any case manipulation, and the requestHandler/searchComponent
don't have any relevant settings declared that I can see.
Why is my spellcheck returning results that are all lower case?
Is there a way for me to stop this from happening?
Hi all
We have some issues with our Solr servers spending too much time
paused doing GC. From turning on gc debug, and extracting numbers from
the GC log, we're getting an idea of just how much of a problem it is.
I'm currently doing this in a hacky, inefficient way:
grep -h 'Total time for which appl
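A less hacky way to get the same numbers is a small parser. This is only a sketch: it assumes the classic `-XX:+PrintGCApplicationStoppedTime` line format ("Total time for which application threads were stopped: N seconds"), which your GC flags may or may not produce.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Extracts JVM safepoint pause times from GC log lines and reports percentiles. */
public class GcPauses {
    // Matches lines like:
    // "Total time for which application threads were stopped: 0.0123456 seconds"
    private static final Pattern PAUSE = Pattern.compile(
        "Total time for which application threads were stopped: ([0-9.]+) seconds");

    /** Returns all pause durations found in the given log lines, sorted ascending. */
    public static List<Double> parse(List<String> logLines) {
        List<Double> pauses = new ArrayList<>();
        for (String line : logLines) {
            Matcher m = PAUSE.matcher(line);
            if (m.find()) {
                pauses.add(Double.parseDouble(m.group(1)));
            }
        }
        Collections.sort(pauses);
        return pauses;
    }

    /** Nearest-rank percentile of an already-sorted list (pct in 0..100). */
    public static double percentile(List<Double> sorted, double pct) {
        int idx = (int) Math.ceil(pct / 100.0 * sorted.size()) - 1;
        return sorted.get(Math.max(0, idx));
    }
}
```

Feeding it the whole log gives the max, 90th, 95th, etc. in one pass rather than repeated greps.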
Greetings, I'm new to Solr. I'm having trouble creating a client
application. How should I go about it? Do I need to use a framework, or
does Solr provide a way to create client applications?
Welcome to the Solr world.
Yes, usually you use a client application. If you are working in Java,
you use SolrJ or you can look into Spring Data. For other languages,
there are libraries too. You can see a reasonable list at:
https://wiki.apache.org/solr/IntegratingSolr. Be aware that not all
cli
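To make "client application" concrete, here is a minimal SolrJ sketch. The URL and collection name are placeholders, it follows the SolrJ 5.x API, and it needs a running Solr instance to do anything:

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class SolrHello {
    public static void main(String[] args) throws Exception {
        // Point at one core/collection; "gettingstarted" is a placeholder name.
        HttpSolrClient solr = new HttpSolrClient("http://localhost:8983/solr/gettingstarted");
        QueryResponse rsp = solr.query(new SolrQuery("*:*"));
        for (SolrDocument doc : rsp.getResults()) {
            System.out.println(doc.getFieldValue("id"));
        }
        solr.close();
    }
}
```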
On 11/13/2015 8:00 AM, Tom Evans wrote:
> We have some issues with our Solr servers spending too much time
> paused doing GC. From turning on gc debug, and extracting numbers from
> the GC log, we're getting an idea of just how much of a problem.
Try loading your gc log into gcviewer.
https://git
Also, what GC settings are you using? We may be able to make some suggestions.
Cumulative GC pauses aren’t very interesting to me. I’m more interested in the
longest ones, 90th percentile, 95th, etc.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
Let's see:
1> The fieldType. Possibly you're missing something there.
2> The fact that you see the doc returned without lowercasing means
nothing; it's returning the _stored_ field, which is a verbatim copy.
The spellcheck is returning an _indexed_ value.
Best,
Erick
On Fri, Nov 13, 2015 at 5:39 AM, Q
We have an existing collection with a field called lastpublishdate of type
tdate. It already has a lot of data indexed, and we want to add docValues to
improve our sorting performance on the field.
The old field definition was:
We recently changed it to
Is that considered a breaking change?
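For reference, the change in question would look roughly like this in schema.xml. The field definitions here are a hypothetical reconstruction, since the originals didn't survive the archive:

```xml
<!-- before -->
<field name="lastpublishdate" type="tdate" indexed="true" stored="true"/>

<!-- after: docValues added; the existing index must be rebuilt for it to take effect -->
<field name="lastpublishdate" type="tdate" indexed="true" stored="true" docValues="true"/>
```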
Hi Devansh,
Yes, you'd need to reindex your data in order to use DocValues. It's
highlighted in the official ref guide:
https://cwiki.apache.org/confluence/display/solr/DocValues
On Fri, Nov 13, 2015 at 10:00 AM, Dhutia, Devansh
wrote:
> We have an existing collection with a field called l
Hi Gurus,
I am trying to use the Solr DIH CachedSqlEntityProcessor. In my case, I
also need to reference another column from the parent entity in the child
entity. How can that be done?
Observe the product_name column from the parent entity that I am trying to
use in the child entity.
Thi
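For context, the usual DIH shape for a cached parent/child join looks like this. This is a hypothetical sketch with illustrative table and column names:

```xml
<document>
  <entity name="product" query="SELECT id, product_name FROM product">
    <entity name="detail"
            processor="SqlEntityProcessor"
            cacheImpl="SortedMapBackedCache"
            query="SELECT product_id, detail FROM product_detail"
            cacheKey="product_id"
            cacheLookup="product.id">
      <!-- ${product.product_name} is the normal DIH syntax for referencing a
           parent column, but note that cached child queries are only joined on
           cacheKey/cacheLookup, which is exactly the limitation at issue here -->
    </entity>
  </entity>
</document>
```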
Ugh! I totally missed the highlight.
Thanks for clarifying.
On 11/13/15, 1:07 PM, "Anshum Gupta" wrote:
>Hi Devansh,
>
>Yes you'd need to reindex your data in order to use DocValues. It's
>highlighted here @ the official ref guide :
>
>https://cwiki.apache.org/confluence/display/solr/DocVa
Hi,
Is there a way to make Solr not cache the results when we send the query
(mainly for the query result cache)? I still need document and filter
caching enabled.
Let me know if this is possible,
Thanks
Nitin
Please do push your script to github - I (re)compile custom code
infrequently and never remember how to set up the environment.
On Thu, Nov 12, 2015 at 5:14 AM, Upayavira wrote:
> Okay, makes sense. As to your question - making a new ValueSourceParser
> that handles 'equals' sounds pretty straig
Hi Alessandro,
Thanks for answering. Unfortunately bq is not enough, as I have several roles
that I need to score in different ways. I was thinking of building a custom
function that reads the weights of the roles from the Solr config and applies
them at runtime. I am a bit concerned about performance.
Why do you want to do this? Worst-case testing?
But you can always set your size parameter for the queryResultCache to 0.
On Fri, Nov 13, 2015 at 10:31 AM, KNitin wrote:
> Hi,
>
> Is there a way to make solr not cache the results when we send the query?
> (mainly for query result cache). I need
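In solrconfig.xml, setting the size to 0 would be something like this (a sketch, using the stock LRUCache class):

```xml
<queryResultCache class="solr.LRUCache"
                  size="0"
                  initialSize="0"
                  autowarmCount="0"/>
```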
It depends on what you want to do with it. There are tons of ways to skin the
proverbial cat. You really need to start by learning the very basics of Solr,
and then move forward from there. Once you understand Solr, you must also
understand y
Hi Davis, I wanted to thank you for suggesting Ansible as one of the
automation tools, as it has been working very well in automating the
deployments of Zookeeper and Solr on our clusters.
Thanks,
Susheel
On Wed, Oct 21, 2015 at 10:47 AM, Davis, Daniel (NIH/NLM) [C] <
daniel.da...@nih.gov> wrote:
>
We currently index using DIH along with the SortedMapBackedCache cache
implementation which has worked well until recently when we needed to index
a much larger table. We were running into memory issues using the
SortedMapBackedCache so we tried switching to the BerkleyBackedCache but
appear to hav
Does q={!cache=false}foo:bar&... work in this case?
On Fri, Nov 13, 2015 at 9:31 PM, KNitin wrote:
> Hi,
>
> Is there a way to make solr not cache the results when we send the query?
> (mainly for query result cache). I need to still enable doc and filter
> caching.
>
> Let me know if this is p
Yes, for worst case analysis. I usually do this by setting it in the zk config
but wanted to check if we can do this at runtime. We tried
q={!cache=false} but it does not help.
On Fri, Nov 13, 2015 at 12:53 PM, Mikhail Khludnev <
mkhlud...@griddynamics.com> wrote:
> Does q={!cache=false}foo:bar&..
Hello Todd,
"External merge" join helps to avoid boilerplate caching in such simple
cases.
It should be something
On Fri, Nov 13, 2015 at 10:54 PM, Todd Long wrote:
> We currently index using DIH along with the SortedMapBackedCache cache
> implementation which has worked wel
Hi,
Is there a way to use CloudSolrClient to connect to a Zookeeper instance where
ACL is enabled and resources/files like /live_nodes, etc. are ACL-protected?
I couldn't find a way to set the ACL credentials.
Thanks,
Kevin
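For reference, server-side Solr wires ZK ACLs through provider classes selected via system properties; the class names below are the ones shipped in org.apache.solr.common.cloud as of Solr 5.x, but treat the exact property spellings as something to verify against your version. A standalone CloudSolrClient would need the same properties on its own JVM, since it goes through SolrZkClient:

```
-DzkCredentialsProvider=org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider
-DzkACLProvider=org.apache.solr.common.cloud.VMParamsAllAndReadonlyDigestZkACLProvider
-DzkDigestUsername=admin-user
-DzkDigestPassword=admin-password
```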
We've had success with LZ4 compression in a custom ShardHandler to reduce
network overhead, getting ~25% compression with low CPU impact. LZ4 or
Snappy seem like reasonable choices[1] for minimizing total compression +
transfer + decompression time in the data center.
Would it make sense to integrate c
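The pattern is easy to sketch with the JDK's Deflater standing in for LZ4 (lz4-java isn't in the JDK; a real ShardHandler would swap in LZ4Compressor/LZ4FastDecompressor, but the round-trip shape is the same):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

/** Round-trips a payload the way a compressing shard handler might; Deflater
 *  stands in here for LZ4, which has a similar API at lower CPU cost. */
public class WireCompression {
    public static byte[] compress(byte[] input) {
        Deflater d = new Deflater(Deflater.BEST_SPEED); // favor CPU over ratio, as LZ4 does
        d.setInput(input);
        d.finish();
        byte[] buf = new byte[input.length + 64]; // enough headroom for small payloads
        int n = d.deflate(buf);
        d.end();
        return Arrays.copyOf(buf, n);
    }

    public static byte[] decompress(byte[] packed, int originalLength) {
        try {
            Inflater inf = new Inflater();
            inf.setInput(packed);
            byte[] out = new byte[originalLength];
            int n = inf.inflate(out);
            inf.end();
            return Arrays.copyOf(out, n);
        } catch (DataFormatException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The original length would travel alongside the payload in the wire format, as it does in most LZ4 framings.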
Hello:
I am seeing following Exception during call export handler, is anyone
familiar with it?
at org.apache.lucene.util.BitSetIterator.&lt;init&gt;(BitSetIterator.java:58)
at
org.apache.solr.response.SortingResponseWriter.write(SortingResponseWriter.java:138)
at
org.apache.solr.response.QueryResponseWrite
Hi,
I have documents with very large fields which I want to highlight.
When a word matches near the far end of one of those fields, the
FastVectorHighlighter is unable to pull any fragment.
I'm able to pull fragments using the original highlighter by setting
maxAnalyzedChars to a very high value, but I can't
Hi Tom,
SPM for Solr should be helpful here. See http://sematext.com/spm
Otis
> On Nov 13, 2015, at 10:00, Tom Evans wrote:
>
> Hi all
>
> We have some issues with our Solr servers spending too much time
> paused doing GC. From turning on gc debug, and extracting numbers from
> the GC log,
How about we just add a new function called equals() and put it into the
solution?
On Fri, Nov 13, 2015 at 11:36 AM, simon wrote:
> Please do push your script to github - I (re)-compile custom code
> infrequently and never remember how to setup the environment.
>
> On Thu, Nov 12, 2015 at 5:14 AM,
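A sketch of what that ValueSourceParser could look like. The class name and semantics here are illustrative, and it is written against the Solr 5.x function-query APIs, so verify the signatures on your version:

```java
import org.apache.lucene.queries.function.FunctionValues;
import org.apache.lucene.queries.function.ValueSource;
import org.apache.lucene.queries.function.valuesource.DualFloatFunction;
import org.apache.solr.search.FunctionQParser;
import org.apache.solr.search.SyntaxError;
import org.apache.solr.search.ValueSourceParser;

/** Hypothetical equals(a, b) function: 1.0 when both arguments' string values match. */
public class EqualsValueSourceParser extends ValueSourceParser {
    @Override
    public ValueSource parse(FunctionQParser fp) throws SyntaxError {
        final ValueSource a = fp.parseValueSource();
        final ValueSource b = fp.parseValueSource();
        return new DualFloatFunction(a, b) {
            @Override
            protected String name() { return "equals"; }
            @Override
            protected float func(int doc, FunctionValues aVals, FunctionValues bVals) {
                String av = aVals.strVal(doc);
                return av != null && av.equals(bVals.strVal(doc)) ? 1f : 0f;
            }
        };
    }
}
```

It would be registered in solrconfig.xml with `<valueSourceParser name="equals" class="com.example.EqualsValueSourceParser"/>` (the package name is a placeholder).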