Hi Deepthi,
I never tried this case with a multivalued field.
One note about join: your dictionary index must be a single-shard index and must
be replicated on all Solr nodes where your other index is.
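(For reference, a cross-core join of that shape uses the join query parser; the core and field names below are made up for illustration:)

```text
q={!join fromIndex=dictionary from=dict_id to=id}*:*
```

Here `dictionary` is the single-shard dictionary core, and the join maps its `dict_id` field onto the `id` field of the main index.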
Let us know how it worked out for you.
Thanks,
Emir
Hi all,
We would like to perform a benchmark of
https://issues.apache.org/jira/browse/SOLR-11831
The patch improves the performance of grouped queries asking for only one
result per group (aka group.limit=1).
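(For context, a grouped query of the kind the patch targets looks like the following; the field name is hypothetical:)

```text
q=*:*&group=true&group.field=category&group.limit=1
```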
I remember seeing a page showing a benchmark of the query performance on
Wikipedia,
I have observed this behavior with several versions of Solr (4.10, 6.1, and
now 7.2).
I look in the admin console at a core and see that it is not
"current".
I also notice that there are lots of segments etc...
I look at the latest timestamp on a record in the collection and see that
it is over 24 hours old.
On 2/9/2018 8:44 AM, Webster Homer wrote:
I look at the latest timestamp on a record in the collection and see that
it is over 24 hours old.
I send a commit to the collection, and then see that the core is now
current, and the segments are fewer. The commit worked
This is the setting in solrconfig.xml:
We do have autoSoftCommit set to 3 seconds. It is NOT the visibility of
the records that is my primary concern. What I am concerned about is the
accumulation of uncommitted tlog files and the large number of deleted
documents.
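(For reference, the hard/soft commit intervals being discussed are configured in solrconfig.xml roughly like this; the hard-commit value here is illustrative, the 3s soft commit matches the message above:)

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- hard commit: rolls over tlogs and flushes segments to disk -->
  <autoCommit>
    <maxTime>60000</maxTime>          <!-- 60s, illustrative -->
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- soft commit: controls document visibility only -->
  <autoSoftCommit>
    <maxTime>3000</maxTime>           <!-- the 3 seconds mentioned above -->
  </autoSoftCommit>
</updateHandler>
```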
I am VERY familiar with the Solr documentation on this.
On Fri, Feb 9, 2
A little more background: our production SolrClouds are populated via CDCR.
CDCR does not replicate commits; commits on the target clouds happen via
autoCommit settings.
We see relevancy scores get inconsistent when there are too many deletes,
which seems to happen when hard commits don't happen.
On
We need the sum of experiences from all child documents. Using the expression
"exp:[4 TO 7]" we are directly querying a single child document with
experience between 4 and 7, but not the sum of experiences of all child
documents. For that we have to do something. Could you please help me out
with that?
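(In case it helps: assuming the documents are indexed as parent/child blocks, the block join parent parser can aggregate child scores via its score local parameter. This is only a sketch — it sums scores, using a function query to make each child contribute its exp value, and the "doc_type:parent" filter is a hypothetical parent marker:)

```text
q={!parent which="doc_type:parent" score=total v='+exp:[4 TO 7] {!func}exp'}
```

With score=total, each parent's score becomes the sum of its matching children's scores, i.e. roughly the sum of their exp values.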
Hi Shawn
We tried one more round of testing after we increased CPU cores on each
server from 4 to 16, which amounts to 64 cores in total across 4 slaves. But
CPU usage was still high, so we took thread dumps on one of our slaves
and found that threads were blocked. I have also attached them.
As
I am trying to use synonym expansion as a feature in LTR.
Any input on a feature using synonym expansion, given a field and the
synonym file, would be helpful.
Thanks,
Roopa
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
Hi,
I am trying multi-word query-time synonyms with Solr 6.6.2 and the
SynonymGraphFilterFactory filter, as explained in this article:
https://lucidworks.com/2017/04/18/multi-word-synonyms-solr-adds-query-time-support/
My field type is:
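(For comparison, a query-time-only SynonymGraphFilterFactory field type along the lines of that article looks like the following; the type name, tokenizer choice, and synonyms file name are illustrative:)

```xml
<fieldType name="text_syn" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- graph-aware synonyms applied at query time only -->
    <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```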
Thanks, Shawn, Eric. I see the same using swapon -s. Looks like during
the OS setup, it was set as 2 GB (Solr 6.0) and the other as 16 GB (Solr 6.6).
Our 6.0 instance has been running for 1+ year, but recently our monit
started reporting swap usage above 30% and the Solr dashboard showing the
same. I have
Sorry, I meant Emir.
On Fri, Feb 9, 2018 at 3:15 PM, Susheel Kumar wrote:
> Thanks, Shawn, Eric. I see that same using swapon -s. Looks like during
> the OS setup, it was set as 2 GB (Solr 6.0) and other 16GB (Solr 6.6)
>
> Our 6.0 instance has been running since 1+ year but recently our monit
Shawn, Eric,
Were you able to look at the thread dump?
https://github.com/mohsinbeg/datadump/blob/master/threadDump-7pjql_1.zip
Or is there additional data I may provide.
Hi all,
Any response on the below would be really helpful.
Thanks,
Deepak
-Original Message-
From: Deepak Udapudi [mailto:dudap...@delta.org]
Sent: Thursday, February 08, 2018 2:02 PM
To: solr-user@lucene.apache.org
Subject: lat/long (location ) field context filters for autosuggestion
Do you by any chance have buffering turned on for CDCR? That parameter
is misleading: if true, tlogs will accumulate forever. The blanket
recommendation is becoming "turn buffering off and leave it off"; the
original intention there has really been replaced by bootstrapping.
Buffering was there for m
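(For reference, buffering can be turned off at runtime through the CDCR API; the host and collection name below are placeholders:)

```text
http://localhost:8983/solr/<collection>/cdcr?action=DISABLEBUFFER
```

This needs to be issued against each cluster where buffering is enabled, and the setting should also be removed from the cdcr request handler config so it stays off after restarts.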
On 2/8/2018 5:50 PM, mmb1234 wrote:
I then removed my custom <mergeScheduler> element from my solrconfig.xml and
both the hard commit and /solr/admin/cores hang issues seemed to go away for a
couple of hours.
The mergeScheduler config you had is not likely to be causing any
issues. It's a good config. But removin
On 2/9/2018 9:29 AM, Webster Homer wrote:
A little more background. Our production Solrclouds are populated via CDCR,
CDCR does not replicate commits, Commits to the target clouds happen via
autoCommit settings
We see relvancy scores get inconsistent when there are too many deletes
which seems t
On 2/9/2018 12:48 AM, rubi.hali wrote:
As threads are blocked, our CPU is reaching 90% even when heap or
OS memory is not getting used at all.
One of the BLOCKED Thread Snippet
Most of the threads are blocked either on the Jetty connector or FieldCacheImpl
for DocValues. Slave1ThreadDump5
No worries, I don't mind being confused with Erick ;)
Emir
On Feb 9, 2018 9:16 PM, "Susheel Kumar" wrote:
> Sorry, I meant Emir.
>
> On Fri, Feb 9, 2018 at 3:15 PM, Susheel Kumar
> wrote:
>
> > Thanks, Shawn, Eric. I see that same using swapon -s. Looks like during
> > the OS setup, it was s
Ran /solr/58f449cec94a2c75-core-248/admin/luke at 7:05 pm PST.
It showed "lastModified: 2018-02-10T02:25:08.231Z", indicating commit had been
blocked for about 41 mins.
Hard commit is set to 10 secs in solrconfig.xml.
Other cores are also now blocked.
https://jstack.review analysis of the thread dump says "Po
Hi,
When a segment is flushed to disk because it exceeds available memory,
is it still updated when new documents are added? I also read somewhere that
a segment is not committed even if it is flushed. How is a
flushed-but-not-committed segment different from a committed segment?
For example,
For me, Solr sort was working fine with boost increments of 1. But after
some recent changes to do atomic updates to existing Solr documents, when
I try to get ordered results I get them in improper order. I have tried to
keep a large difference in the boosting values between individual terms, but
Is it possible to restore a snapshot without exporting it, using the
Collection API?
We have a very large index and want to provide a quick option for restore
(instead of exporting + restoring + fixing the topology).
I think about developing a script that would read the snapshot's XML
description
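(For what it's worth, the Collections API RESTORE action works against a backup made with action=BACKUP rather than a bare snapshot; names and paths below are illustrative:)

```text
/admin/collections?action=RESTORE&name=myBackup&location=/backups&collection=restoredCollection
```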
I further tried to understand the results I see: since the idf value is
smaller, a higher boost is not helping. Can we ensure that the idf value
does not impact the ordering?
73690.7 = sum of:
73690.7 = weight(pageID:5d368d4f-0c16-11e8-a4e0-067beceeb88f in 36)
[SchemaSimilarity], result of:
7
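(One way to take idf out of the picture — a sketch, assuming the standard lucene/edismax query syntax — is the constant-score operator ^=, which replaces the computed tf/idf score of a clause with a fixed value:)

```text
q=pageID:5d368d4f-0c16-11e8-a4e0-067beceeb88f^=100
```

With ^= the clause always contributes exactly the given score, so the relative ordering is determined purely by the boosts you assign.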
I am experiencing this too. For me, the "solr" user is running "fs-manager"
from within the directory "/var/tmp/.X1M-Unix". There are "config.json",
"out.log" and "xmrig.log" files present. The JSON looks like this:
{
"algo": "cryptonight",
"av": 0,
"background": true,
"colors": fals