Hi,
A rookie question. We have a Solr cluster that doesn't get much
traffic. We see that our queries take a long time unless we run a script to
send more traffic to Solr.
We are indexing data all the time and use autoCommit.
I am wondering if there is a way to warm up the new searcher on commit by
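The usual hook for this is a newSearcher event listener in solrconfig.xml; a
minimal sketch, where the warming queries are placeholders that should mirror
your real traffic:

  <listener event="newSearcher" class="solr.QuerySenderListener">
    <arr name="queries">
      <lst>
        <str name="q">a representative query</str>
        <str name="sort">price asc</str>
      </lst>
    </arr>
  </listener>

There is also a firstSearcher event for warming the very first searcher after
startup, and cache autowarmCount settings that repopulate caches on commit.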
Hi,
Is it possible to add a custom comparator to a field for sorting? E.g.
let's say I have a field 'name' and the following documents:
{
id : "doc1",
name : "1"
}
{
id : "doc2",
name : "S1"
}
{
id : "doc2",
name : "S2"
}
if I sort using field 'name', the order would be: ["doc1", "doc2"
Hi,
In master/slave we can send queries to the slaves only. Now that we have tlog
and pull replicas, can we send queries to those replicas to achieve scaling
similar to master/slave for large search volumes?
--
— Pushkar Raste
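Solr 7.4 added the shards.preference request parameter for exactly this; a
sketch, assuming a collection named 'mycoll':

  curl "http://localhost:8983/solr/mycoll/select?q=*:*&shards.preference=replica.type:PULL,replica.type:TLOG"

This asks the cluster to route the query to PULL replicas first (then TLOG),
which mimics querying only the slaves in a master/slave setup.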
d make a single-shard collection in SolrCloud,
> copy the index to the right place (I’d shut down Solr while
> I copied it), and then use SPLITSHARD on it, but that implies
> you’d be going to SolrCloud.
>
> Best,
> Erick
>
> > On May 21, 2020, at 10:35 AM, Pushkar Raste
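For reference, the SolrCloud route described above ends with a SPLITSHARD
call; a sketch, with collection and shard names as placeholders:

  curl "http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=mycoll&shard=shard1"

The two new sub-shards are built alongside the parent shard, which stays
active until the split completes.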
Hi,
Does Solr support shard splitting in the master/slave setup? I understand that
there is no shard concept in master/slave and we just have cores, but can we
split a core into two?
If yes, is there a way to specify the new mapping based on the unique key?
--
— Pushkar Raste
Hi,
Can someone help me with my question?
On Tue, Nov 12, 2019 at 10:20 AM Pushkar Raste
wrote:
> Hi,
> How about in the master/slave setup? If I enable SSL in a master/slave
> setup, would the segment and config files be copied using TLS?
>
> On Sat, Nov 9, 2019 at 3:31 PM Jan
luding replication will be secure (https). It is still
> TCP, but using TLS :)
>
> Jan Høydahl
>
> > On 6 Nov 2019, at 00:03, Pushkar Raste wrote:
> >
> > Hi,
> > When slaves/pull replicas copy index files from the master, is it done
> > using a secure protocol or just over plain TCP?
> > --
> > — Pushkar Raste
>
Hi,
When slaves/pull replicas copy index files from the master, is it done using
a secure protocol or just over plain TCP?
--
— Pushkar Raste
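As Jan notes above, this comes down to enabling SSL for the nodes themselves.
A sketch of the relevant solr.in.sh settings, with paths and passwords as
placeholders:

  SOLR_SSL_KEY_STORE=/path/to/solr-ssl.keystore.jks
  SOLR_SSL_KEY_STORE_PASSWORD=secret
  SOLR_SSL_TRUST_STORE=/path/to/solr-ssl.keystore.jks
  SOLR_SSL_TRUST_STORE_PASSWORD=secret

Once Solr itself runs on https, replication requests travel over the same
TLS-protected channel.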
at 3:08 PM Shawn Heisey wrote:
> On 8/27/2019 8:22 AM, Pushkar Raste wrote:
> > I am trying to run Solr 4 on JDK11, although this version is not
> supported
> > on JDK11 it seems to be working fine except for the error/exception
> "Unmap
> > hack not supported on th
Can someone help me with this?
On Tue, Aug 27, 2019 at 10:22 AM Pushkar Raste
wrote:
> Hi,
> I am trying to run Solr 4 on JDK11, although this version is not supported
> on JDK11 it seems to be working fine except for the error/exception "Unmap
> hack not supported on this plat
Hi,
I am trying to run Solr 4 on JDK 11. Although this version is not supported
on JDK 11, it seems to be working fine except for the error/exception "Unmap
hack not supported on this platform".
What are the risks/downsides of running into this?
but in this case
> it’d be OK since they’re simple types) by examining the inverted index and
> pulling out the values. Painfully slow and you’d have to write custom code
> probably at the Lucene level to make it all work.
>
> Best,
> Erick
>
> > On May 22, 2019, at 8:11 AM
these fields
can be retrieved while iterating over the index and reading the documents.
--
— Pushkar Raste
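A minimal sketch of the kind of low-level iteration being discussed, using
the Lucene 6/7-era API (the field name "id" and the index path are
placeholders; only stored fields are read here):

  import java.nio.file.Paths;
  import org.apache.lucene.document.Document;
  import org.apache.lucene.index.DirectoryReader;
  import org.apache.lucene.index.MultiFields;
  import org.apache.lucene.store.FSDirectory;
  import org.apache.lucene.util.Bits;

  public class DumpStoredFields {
    public static void main(String[] args) throws Exception {
      try (DirectoryReader reader =
               DirectoryReader.open(FSDirectory.open(Paths.get(args[0])))) {
        Bits liveDocs = MultiFields.getLiveDocs(reader);
        for (int i = 0; i < reader.maxDoc(); i++) {
          if (liveDocs != null && !liveDocs.get(i)) continue; // skip deleted docs
          Document doc = reader.document(i); // stored fields only
          System.out.println(doc.get("id"));
        }
      }
    }
  }

Fields that are indexed but not stored (and not docValues) would indeed need
the painful term-by-term un-inverting described above.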
Hi,
In the master/slave setup, as soon as I start a new slave it starts to
serve requests. Often the searches return no documents because the
index has not been replicated yet. Is there a way to stop the replica from
serving requests (marking the node unhealthy) until the index is replicated for
the fi
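One common workaround (not a built-in replication gate) is the ping handler's
health-check file: the load balancer polls /admin/ping, and the node only
answers OK once the file exists. A sketch for solrconfig.xml:

  <requestHandler name="/admin/ping" class="solr.PingRequestHandler">
    <str name="healthcheckFile">server-enabled.txt</str>
  </requestHandler>

A deploy script can then flip a node in or out of rotation with
/admin/ping?action=enable (or action=disable) once replication has caught up.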
ed at:
> https://issues.apache.org/jira/browse/SOLR-445
>
> I have _not_ worked with this personally in prod SolrCloud systems, so
> I can't say much more
> than it exists. It's only available in Solr 6.1+
>
> Best,
> Erick
>
> On Wed, Jan 23, 2019 at 5:55 PM Pushka
You mean I can use SolrJ 7.x for indexing documents to both Solr 4 and
Solr 7, as well as the SolrInputDocument class from SolrJ 7.x?
Wouldn't there be issues if there are any backwards-incompatible changes?
On Wed, Jan 23, 2019 at 8:09 PM Shawn Heisey wrote:
> On 1/23/2019 5:49 PM, Push
Thanks for the quick response Shawn. We are migrating from Solr 4.10
master/slave to SolrCloud 7.x.
On Wed, Jan 23, 2019 at 7:41 PM Shawn Heisey wrote:
> On 1/23/2019 5:05 PM, Pushkar Raste wrote:
> > We are setting up cluster with new version Solr and going to reindex
> data.
Hi,
We are setting up a cluster with a new version of Solr and are going to
reindex data. However, until all the data is indexed I need to keep indexing
data in the old cluster as well. We are currently using the SolrJ client and
constructing SolrInputDocument objects to index data.
To avoid conflicts with the
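A minimal sketch of the dual-write idea, assuming one SolrJ version can talk
to both clusters (the open question in this thread) and with URLs, core and
field names as placeholders:

  import org.apache.solr.client.solrj.impl.HttpSolrClient;
  import org.apache.solr.common.SolrInputDocument;

  public class DualWriter {
    public static void main(String[] args) throws Exception {
      try (HttpSolrClient oldCluster =
               new HttpSolrClient.Builder("http://old-host:8983/solr/core1").build();
           HttpSolrClient newCluster =
               new HttpSolrClient.Builder("http://new-host:8983/solr/coll1").build()) {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "42");
        doc.addField("name", "example");
        oldCluster.add(doc); // build the document once, send it to both clusters
        newCluster.add(doc);
        oldCluster.commit();
        newCluster.commit();
      }
    }
  }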
Or let me rephrase the question: what is the minimum Solr version that is
JDK 11 compatible?
On Tue, Jan 15, 2019 at 10:27 AM Pushkar Raste
wrote:
> I probably already know the answer for this but was still wondering.
>
I probably already know the answer for this but was still wondering.
Hi,
I have questions about the IndexUpgrader tool.
- I want to upgrade from Solr 4 to Solr 7. Can I upgrade the index from
4 to 5, then 5 to 6, and finally 6 to 7 using the appropriate version of the
IndexUpgrader, but without loading the index into Solr at all between the
successive upgrades?
- Th
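For reference, each version hop is a standalone command run with that Lucene
version's jars; a sketch (version numbers and paths are placeholders):

  java -cp lucene-core-5.5.5.jar:lucene-backward-codecs-5.5.5.jar \
    org.apache.lucene.index.IndexUpgrader -delete-prior-commits /path/to/index

The index does not have to be loaded into a running Solr between hops.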
As mentioned in the JIRA, the exception seems to be coming from a log
statement. The issue was fixed in 6.3; here is the relevant line from 6.3:
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.3.0/solr/core/src/java/org/apache/solr/update/PeerSync.java#L707
On Wed, Nov 22, 2017 at 1:18
t take any action when DC1 comes
> back, we are still operational with a 5-node quorum, isn't it? Or am I
> missing something.
>
>
>
> On Fri, May 26, 2017 at 10:07 AM, Pushkar Raste
> wrote:
>
> > Damn,
> > Math is hard
> >
> > DC1 : 3 non o
Damn,
Math is hard.
DC1: 3 non-observers
DC2: 2 non-observers
3 + 2 = 5 non-observers
Observers don't participate in voting; non-observers participate in voting.
5 non-observers = 5 votes
In addition to the 2 non-observers, DC2 also has an observer, which as you
pointed out does not participat
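For anyone setting this up, the observer role is plain ZooKeeper
configuration; a sketch with hypothetical host names:

  # zoo.cfg on the DC2 observer node only:
  peerType=observer

  # server list in every node's zoo.cfg:
  server.1=dc1-zk1:2888:3888
  server.2=dc1-zk2:2888:3888
  server.3=dc1-zk3:2888:3888
  server.4=dc2-zk1:2888:3888
  server.5=dc2-zk2:2888:3888
  server.6=dc2-zk3:2888:3888:observer

Promoting the observer after DC1 goes down means editing these entries and
restarting it, which is the manual intervention described above.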
chitect
Cominvent AS - www.cominvent.com
> On 25 May 2017, at 00:35, Pushkar Raste wrote:
>
> A setup I have used in the past was to have an observer in DC2. If DC1
> goes boom you need manual intervention to change the observer's role to
> make it a follower.
>
> When DC1
A setup I have used in the past was to have an observer in DC2. If DC1
goes boom you need manual intervention to change the observer's role to make
it a follower.
When DC1 comes back up, change one instance in DC2 to make it an observer
again.
On May 24, 2017 6:15 PM, "Jan Høydahl" wrote:
> Sure, Z
What version are you on? There was a bug where if you used cache size 0, it
would still create a cache with size 2 (or maybe just 1). It was fixed
under https://issues.apache.org/jira/browse/SOLR-9886?filter=-2
On Apr 3, 2017 9:26 AM, "Nilesh Kamani" wrote:
> @Yonik even though the code change
Hi Walter,
We have been using G1GC for more than a year now and are very happy with
it.
The only flag we have enabled is 'ParallelRefProcEnabled'
On Jan 23, 2017 3:00 PM, "Walter Underwood" wrote:
> We have a workload with very long queries, and that can drive the CMS
> collector into using abo
I think we should add the suggestion about docValues to the cursorMark wiki
(documentation); we too ran into the same problem.
On Jan 18, 2017 5:52 PM, "Erick Erickson" wrote:
> Is your ID field docValues? Making it a docValues field should reduce
> the amount of JVM heap you need.
>
>
> But the e
Try bouncing the overseer for your cluster.
On Jan 17, 2017 12:01 PM, "Kelly, Frank" wrote:
> Solr Version: 5.3.1
>
> Configuration: 3 shards, 3 replicas each
>
> After running out of heap memory recently (cause unknown) we’ve been
> successfully restarting nodes to recover.
>
> Finally we did o
Seems like you have only the console appender enabled. I remember there was a
change made to disable the console appender if Solr is started in background
mode.
On Jan 10, 2017 5:55 AM, "Markus Jelsma" wrote:
> Hello,
>
> I used to enable debug logging in my Maven project's unit tests by just
> setting
You should probably have as small a swap as possible. I still feel long GCs
are either due to swapping or thread contention.
Did you try removing all other G1GC tuning parameters except for
ParallelRefProcEnabled?
On Dec 19, 2016 1:39 AM, "forest_soup" wrote:
> Sorry for my wrong memory. T
This kind of separation is not supported yet. There is however some work
going on; you can read about it at
https://issues.apache.org/jira/browse/SOLR-9835
This unfortunately would not support soft commits and hence would not be a
good solution for near-real-time indexing.
On Dec 16, 2016 7:44 AM,
We use the jdeb Maven plugin to build Debian packages; we use it for Solr
as well.
On Dec 12, 2016 9:03 AM, "Adjamilton Junior" wrote:
> Hi folks,
>
> I am new here and I wonder why there are no Solr 6.x packages for
> ubuntu/debian?
>
> Thank you.
>
> Adjamilton Junior
>
1MaxNewSizePercent=5 \
>
> The aim here is to only limit the maximum and still allow some adaptation.
>
> --Ere
>
> On 8.12.2016 at 16.07, Pushkar Raste wrote:
>
>> Disable all the G1GC tuning you are doing except for
>> ParallelRefProcEnabled
>>
>> G1GC i
Disable all the G1GC tuning you are doing except for ParallelRefProcEnabled.
G1GC is an adaptive algorithm and will keep tuning itself to reach the default
pause goal of 200 ms, which should be good for most applications.
Can you also tell us how much RAM you have on your machine and if you have
s
Did you index any documents while the node was being restarted? There was an
issue introduced due to the IndexFingerprint comparison. Check SOLR-9310. I am
not sure if the fix made it to Solr 6.2.
On Nov 25, 2016 3:51 AM, "Arkadi Colson" wrote:
> I am using SolrCloud on version 6.2.1. I will upgrade to 6.3.0 n
Did you turn on/off docValues on an already existing field?
On Nov 16, 2016 11:51 AM, "Jaco de Vroed" wrote:
> Hi,
>
> I made a typo. The Solr version number in which this error occurs is 5.5.3.
> I also checked 6.3.0, same problem.
>
> Thanks, bye,
>
> Jaco.
>
> On 16 November 2016 at 17:39, Jac
Try a commit with expungeDeletes="true".
I am not sure if it will merge old segments that have deleted documents.
In the worst case you can 'optimize' your index, which should take care of
removing deleted documents.
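A sketch of such a commit sent from the command line (the collection name is
a placeholder):

  curl http://localhost:8983/solr/mycoll/update -H 'Content-Type: text/xml' \
    --data-binary '<commit expungeDeletes="true"/>'

expungeDeletes should only rewrite segments that actually contain deletes,
which is cheaper than a full optimize.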
On Oct 27, 2016 4:20 AM, "Arkadi Colson" wrote:
> Hi
>
> As you can see in the scre
Nodes will still go into recovery but only for a short duration.
On Oct 26, 2016 1:26 PM, "jimtronic" wrote:
It appears this has all been resolved by the following ticket:
https://issues.apache.org/jira/browse/SOLR-9446
My scenario fails in 6.2.1, but works in 6.3 and Master where this bug has
This is due to leader-initiated recovery. Take a look at
https://issues.apache.org/jira/browse/SOLR-9446
On Oct 24, 2016 1:23 PM, "jimtronic" wrote:
> We are running into a timing issue when trying to do a scripted deployment
> of
> our Solr Cloud cluster.
>
> Scenario to reproduce (someti
You should look into using docValues. docValues are stored off-heap and
hence you would be better off than just bumping up the heap.
Don't enable docValues on existing fields unless you plan to reindex data
from scratch.
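A sketch of what enabling docValues looks like in the schema (field and type
names are placeholders):

  <field name="category" type="string" indexed="true" stored="true"
         docValues="true"/>

Sorting, faceting, and grouping on such a field then read memory-mapped
docValues files instead of building a field cache on the JVM heap.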
On Oct 25, 2016 3:04 PM, "Susheel Kumar" wrote:
> Thanks, Toke. Analyzin
Did you look into the heap dump ?
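If no dump was taken automatically, a sketch of grabbing one by hand (the
PID is a placeholder):

  jmap -dump:live,format=b,file=/tmp/solr-heap.hprof <solr-pid>

The resulting .hprof file can be opened in a profiler such as Eclipse MAT to
see which objects dominate the heap.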
On Mon, Oct 24, 2016 at 6:27 PM, Susheel Kumar
wrote:
> Hello,
>
> I am seeing the OOM script kill Solr (Solr 6.0.0) on a couple of our VMs
> today. So far our Solr cluster has been running fine but suddenly today
> many of the VMs' Solr instances got killed. I had
The reason the node is in recovery for a long time could be related to
https://issues.apache.org/jira/browse/SOLR-9310
On Tue, Oct 4, 2016 at 9:14 PM, Rallavagu wrote:
> Solr Cloud 5.4.1 with embedded Jetty - jdk 8
>
> Is there a way to disable incoming updates (from leader) during startup
> until "fi
This error is thrown when you add (or remove) docValues on an existing field
but do not reindex your data from scratch. It is a result of the field cache
being removed from Lucene. Although you were not getting an error with Solr
4.8, I am pretty sure that you were getting incorrect results.
Stand up a small test cluster
If Solr has GC pauses greater than 15 seconds, ZooKeeper is going to assume
the node is down and hence will send it into recovery when the node comes out
of a GC pause and reconnects to ZooKeeper.
You should look into keeping GC pauses as short as possible.
Using G1GC with ParallelRefProcEnabled has help
Hi Dominique,
Unfortunately Solr doesn't support the metrics you are interested in. You can
however have another process that makes JMX queries against the Solr process,
does the required transformation, and stores the data in some kind of data
store.
Just make sure you are not DDoSing your Solr instances :-)
On Oct
A couple of questions/suggestions:
- This normally happens after leader election; when the new leader gets
elected, it will force all the nodes to sync with itself.
Check the logs to see when this happens and whether the leader changed. If so,
you will have to investigate why the leader change takes place
One of the tricks I had read somewhere was to cat all the files in the index
directory so the OS will have the files in its disk cache.
On Thu, Oct 6, 2016 at 11:55 AM, Rallavagu wrote:
> Looking for clues/recommendations to help warm up during startup. Not
> necessarily Solr caches but mmap as well. I have
Purely from an algorithmic point of view: look into reservoir sampling for
unbiased sampling.
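A minimal sketch of the classic Algorithm R, which keeps a uniform sample of
k items from a stream of unknown length:

  import java.util.ArrayList;
  import java.util.List;
  import java.util.Random;

  public class ReservoirSampler {
    /** Returns k items chosen uniformly at random from the stream. */
    public static <T> List<T> sample(Iterable<T> stream, int k, Random rnd) {
      List<T> reservoir = new ArrayList<>(k);
      int seen = 0;
      for (T item : stream) {
        seen++;
        if (reservoir.size() < k) {
          reservoir.add(item);               // fill the reservoir first
        } else {
          int j = rnd.nextInt(seen);         // uniform in [0, seen)
          if (j < k) reservoir.set(j, item); // replace with probability k/seen
        }
      }
      return reservoir;
    }
  }

Every element ends up in the sample with probability k/n, no matter how long
the stream turns out to be.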
On Sep 28, 2016 11:00 AM, "Yongtao Liu" wrote:
Alexandre,
Thanks for the reply.
The use case is that the customer wants to review documents based on search
results. But they do not want to review all of them, since that is costly.
Solr is RAM hungry. Make sure that you have enough RAM to keep most of the
index of a core in RAM itself.
You should also consider using really good SSDs.
That would be a good start. Like others said, test and verify your setup.
--Pushkar Raste
On Sep 23, 2016 4:58 PM, "Jeffery
If you are creating a collection these warnings are harmless. There is a
patch being worked on under SOLR-9446 (although for a different scenario); it
would help suppress this error.
Markus,
Can you pick one of the values in the facets and try running a query using
it? Ideally numFound should match the facet count. If those don't match, I
guess your index is still sorta damaged but you aren't really noticing it.
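A sketch of the check, with collection, field, and value as placeholders:

  # facet counts for each value of the field:
  curl "http://localhost:8983/solr/mycoll/select?q=*:*&rows=0&facet=true&facet.field=category"
  # numFound for one specific value, to compare against its facet count:
  curl "http://localhost:8983/solr/mycoll/select?q=category:somevalue&rows=0"

If the two numbers disagree, the index is likely still damaged.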
On Sep 15, 2016 4:44 AM, "Markus Jelsma" wrote:
> Mikhail
Damn, I didn't put comments in the ticket but replied to the question "Is it
safe to upgrade an existing field to docValues?" on the mailing list.
Check that out.
On Sep 14, 2016 5:59 PM, "Pushkar Raste" wrote:
> We experienced the exact opposite issue on Solr 4.10
We experienced the exact opposite issue on Solr 4.10.
Check my comments in https://issues.apache.org/jira/browse/SOLR-9437
I am not sure if the issue was fixed in Solr 6.
I'd be interested in tracking down a patch for this.
On Sep 14, 2016 3:04 PM, "Erick Erickson" wrote:
> Weird indeed. Optimize _shoul
Hi Ronald,
Turning on docValues for an existing field works in Solr 4. As you mentioned,
it will use the un-inverting method if docValues are not found for an existing
document. This all works fine until segments that have documents without
docValues merge with segments that have docValues for the field. In the
It would be worth looking at iostat output for your disks.
On Aug 22, 2016 10:11 AM, "Alessandro Benedetti"
wrote:
> I agree with the suggestions so far.
> The cache auto-warming doesn't seem the problem as the index is not massive
> and the auto-warm is for only 10 docs.
> Are you using any warming
-- Forwarded message --
From: "Pushkar Raste"
Date: Jul 20, 2016 11:08 AM
Subject: About SOLR-9310
To:
Cc:
Hi,
https://issues.apache.org/jira/browse/SOLR-9310
PeerSync replication in Solr seems to be completely broken since the
fingerprint check was introduced. (or m
If you have GC logs, check if you have long GC pauses that make ZooKeeper
think that node(s) are going down. If this is the case then your nodes are
going into recovery, and based on your settings in
solr.xml you may end up in a situation where no node gets promoted to be a
leader.
On 22 D
You must have this field in your schema with some default value assigned to
it (most probably the default value is NOW). This field is usually used to
record the latest time at which the document was indexed.
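A sketch of such a schema entry ('timestamp' is the conventional name, but
check your own schema):

  <field name="timestamp" type="date" indexed="true" stored="true"
         default="NOW"/>

Any document added without an explicit value gets the indexing-time timestamp
filled in automatically.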
On 17 December 2015 at 04:51, Guillermo Ortiz wrote:
> I'm indexing documents in solr
Hi Philippa,
Try taking a heap dump (when heap usage is high) and then, using a profiler,
look at which objects are taking up most of the memory. I have seen that if
you are using faceting/sorting on a large number of documents then the
fieldCache grows very big and dominates most of the heap. Enabling
is
> zero.
>
> Another option might be to add {!cache=false} to your fq clauses on
> the client in this case if that is possible/convenient.
>
> Best,
> Erick
>
> On Thu, Dec 3, 2015 at 11:19 AM, Pushkar Raste
> wrote:
> > Hi,
> > I want to make turning filt
Will 'wget http://host:port/solr/admin/collections?action=LIST' help?
On 3 December 2015 at 12:12, rashi gandhi wrote:
> Hi all,
>
> I have setup two solr-4.7.2 server instances on two diff machines with 3
> zookeeper severs in solrcloud mode.
>
> Now, I want to retrieve list of all the collect
Hi,
I want to make turning the filter cache on/off configurable (I really have a
use case for turning off the filter cache). Can I use property placeholders
like ${someProperty} in the filter cache config, i.e.
In short, can I use property placeholders for attributes of an XML node in
solrconfig? Follow u
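For what it's worth, solrconfig.xml does substitute ${property:default}
system properties into attribute values; a sketch:

  <filterCache class="solr.FastLRUCache"
               size="${filterCache.size:512}"
               initialSize="${filterCache.size:512}"
               autowarmCount="0"/>

started with e.g. -DfilterCache.size=0. Note the size=0 bug (SOLR-9886)
mentioned elsewhere in this archive; per query, fq={!cache=false}... is
another way to bypass the cache, as suggested above.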
Hi,
To minimize GC pauses, try using G1GC and turn on the 'ParallelRefProcEnabled'
JVM flag. G1GC works much better for heaps > 4 GB. Lowering
'InitiatingHeapOccupancyPercent'
will also help avoid long GC pauses at the cost of more short pauses.
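A sketch of how these flags are typically passed to Solr, via the GC_TUNE
variable in solr.in.sh (the IHOP value is an example, not a recommendation):

  GC_TUNE="-XX:+UseG1GC \
    -XX:+ParallelRefProcEnabled \
    -XX:InitiatingHeapOccupancyPercent=45"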
On 3 November 2015 at 12:12, Björn Häuser wrote:
>
I may be wrong, but I think 'delete' and 'optimize' cannot be executed
concurrently on a Lucene index.
On 4 November 2015 at 15:36, Shawn Heisey wrote:
> On 11/4/2015 1:17 PM, Yonik Seeley wrote:
> > On Wed, Nov 4, 2015 at 3:06 PM, Shawn Heisey
> wrote:
> >> I had understood that since 4.0, Solr
Is there a special option I have to set to turn on segment merging
information?
-- Pushkar Raste
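One way to get it, a sketch for the <indexConfig> section of solrconfig.xml
(recent Solr versions):

  <indexConfig>
    <infoStream>true</infoStream>
  </indexConfig>

This routes Lucene's IndexWriter infoStream, including merge decisions, to
the Solr log via the org.apache.solr.update.LoggingInfoStream logger.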
0-alpha)? How do you swap a zookeeper
> instance from being an observer to a voting member?
>
> On 30 October 2015 at 09:34, Matteo Grolla
> wrote:
>
> > Pushkar... I love this solution
> > thanks
> > I'd just go with 3 zk nodes on each side
> >
&
short outage as far as indexing is concerned, but queries
should continue to work, and you don't have to take all the ZooKeeper nodes
down.
-- Pushkar Raste
On Oct 29, 2015 4:33 PM, "Matteo Grolla" wrote:
> Hi Walter,
> it's not a problem to take down zk for a short
add "-Dsolr.log=" to your command line
On 27 October 2015 at 08:13, Steven White wrote:
> How do I specify a different log directory by editing "log4j.properties"?
>
> Steve
>
> On Mon, Oct 26, 2015 at 9:08 PM, Pushkar Raste
> wrote:
>
> > It depe
It depends on your case. If you don't mind logs from 3 different instances
intermingled with each other, you should be fine.
You can add "-Dsolr.log=" to make logs go to different
directories. If you want logs to go to the same directory but different files,
try updating log4j.properties.
On 26 October 201
Do you have GC logging turned on? If yes, can you provide an excerpt from the
GC log for a pause that took > 30 sec?
On 19 October 2015 at 04:16, Jeff Wu wrote:
> Hi all,
>
> we are using solr4.7 on top of IBM JVM J9 Java7, max heap to 32G, system
> RAM 64G.
>
> JVM parameters: -Xgcpolicy:balanced -ve
Once you have GC logs, look for the string "Total time for which application
threads were stopped" to check if you have long pauses (you may get long
pauses even with young-generation GC).
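A sketch of HotSpot-style flags that produce such logs (the path is a
placeholder; the IBM J9 JVM from the original post uses -Xverbosegclog:<file>
instead):

  -Xloggc:/var/log/solr/gc.log \
  -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  -XX:+PrintGCApplicationStoppedTime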
-- Pushkar Raste
On Wed, Oct 14, 2015 at 11:47 AM, Lorenzo Fundaró <
lorenzo.fund...@dawandama
- Turn on GC logging and see if there are any long pauses, and tune your GC
strategy accordingly.
-- Pushkar Raste
On Wed, Oct 14, 2015 at 5:03 AM, Lorenzo Fundaró <
lorenzo.fund...@dawandamail.com> wrote:
> Hello,
>
> I have following conf for filters and commits :
>
> Concurrent
Thu, Sep 10, 2015 at 5:43 PM, Pushkar Raste
> wrote:
>
> Did you see my previous response to you today?
> http://markmail.org/message/wt6db4ocqmty5a42
>
> Try querying a different way, like from the command line using curl,
> or from your browser, but not through the solr admi
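The rounded value 9223372036854776000 is exactly what Long.MAX_VALUE becomes
after a round trip through a 64-bit double (53-bit mantissa), which is what
JavaScript-based clients such as the admin UI do to JSON numbers. A sketch of
a check that bypasses that, with the collection name as a placeholder:

  curl "http://localhost:8983/solr/mycoll/select?q=id:411&wt=xml"

If the XML response shows 9223372036854775807, the index is fine and only the
display layer is rounding.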
I am trying to add the following document (the value for price.long is Long.MAX_VALUE):
411
one
9223372036854775807
However, upon querying my collection, the value I get back for "price.long"
is 9223372036854776000.
(I got the same behavior when I used a JSON file.)
Definition for 'price.l
Hi,
I am trying to add the following document (the value for price.long is
Long.MAX_VALUE):
411
one
9223372036854775807
However, upon querying my collection, the value I get back for "price.long" is
9223372036854776000.
Definition for the 'price.long' field and 'long' look like follo