Hi All,
I have a cluster where the overseer leader is gone. This is on Solr version
4.10.3.
It's completely gone from ZooKeeper, and bouncing any instance does not start a
new election process.
Has anyone experienced this issue before? Any ideas on how to fix it?
Thanks,
Rishi.
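For anyone debugging the same state, a hedged sketch of the checks available on 4.10.x follows. Hostnames and node names below are placeholders, not from this thread; OVERSEERSTATUS and ADDROLE do exist in the 4.10.x Collections API.

```python
# Sketch: build the inspection/repair requests for a missing overseer.
# Run the printed commands with curl against your own cluster.

def overseer_status_url(host):
    # OVERSEERSTATUS reports the current overseer and its work queue
    return "http://%s/solr/admin/collections?action=OVERSEERSTATUS&wt=json" % host

def addrole_url(host, node):
    # ADDROLE nominates a node as a preferred overseer, which can re-seed
    # the election when /overseer_elect/election is empty in ZooKeeper
    return ("http://%s/solr/admin/collections?action=ADDROLE&role=overseer&node=%s"
            % (host, node))

# Also worth inspecting the election queue directly with ZooKeeper's CLI:
#   zkCli.sh -server zk1:2181 ls /overseer_elect/election
print(overseer_status_url("solr1:8983"))
print(addrole_url("solr1:8983", "solr1:8983_solr"))
```

If the election znode is empty and OVERSEERSTATUS shows no leader, ADDROLE is one documented knob for nominating a node; restarting the nominated node afterward normally re-enters it into the election.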
g up disk space
On 5/5/2015 7:29 AM, Rishi Easwaran wrote:
> Worried about data loss makes sense. If I understand the way Solr behaves,
the new directory should only have missing/changed segments.
> I guess since our application is extremely write-heavy, with lots of inserts
and deletes, almost eve
Hi All,
AOL is hosting a meetup in Dulles, VA. The topic this time is Solr/SolrCloud.
http://www.meetup.com/Code-Brew/events/53217/
Thanks,
Rishi.
still come up with a mechanism to drop existing files, but that won't hold up
in case of serious issues with the cloud; you could end up losing data. That's
worse than using a bit more disk space!
On 4 May 2015 11:56, "Rishi Easwaran"
wrote:
Thanks for the responses Mark
: Mon, May 4, 2015 9:11 am
Subject: Re: Solr Cloud reclaiming disk space from deleted documents
On 5/4/2015 4:55 AM, Rishi Easwaran wrote:
> Sadly, with the size of our complex, splitting and adding more HW is not a
viable long-term solution.
> I
guess the options we have are to run op
failures more often
because it always merges the largest segment.
Walter
Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my
blog)
On May 4, 2015, at 3:53 AM, Rishi Easwaran
wrote:
> Thanks for the responses Mark and Ramkumar.
>
> The question I
had was, why does Solr need
copy of the index at any
point. There's no way around this, I would
> suggest you provision the
additional disk space needed.
> On 20 Apr 2015 23:21, "Rishi Easwaran"
wrote:
>
> > Hi All,
> >
> > We are seeing this
problem with solr 4.6 and solr 4.10.3.
between existing and new machines.
On Apr 20, 2015 9:10 PM, "Rishi Easwaran"
wrote:
> So is there anything that can be done from a tuning perspective, to
> recover a shard that is 75%-90% full, other than getting rid of the index
> and rebuilding the data?
> Also to prevent this i
Hi All,
We are seeing this problem with solr 4.6 and solr 4.10.3.
For some reason, SolrCloud tries to recover and creates a new index directory
(ex: index.20150420181214550) while keeping the older index as is. This
creates an issue where the disk space fills up and the shard never ends up
factor
Thanks,
Rishi.
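A hedged sketch of how one might list the leftover index directories that this failure mode leaves behind (paths and the index.properties convention are assumptions, not from this thread) -- inspect before deleting anything:

```python
# Sketch: report index.* directories under a core's data dir that
# index.properties no longer points at. These are the leftovers from
# failed recoveries that eat the disk.
import os

def stale_index_dirs(data_dir):
    """Directories named index* under data_dir other than the live one.

    index.properties names the live directory (e.g. index=index.2015...);
    if the file is absent, the plain "index" directory is live.
    """
    live = "index"
    props = os.path.join(data_dir, "index.properties")
    if os.path.exists(props):
        with open(props) as f:
            for line in f:
                if line.startswith("index="):
                    live = line.strip().split("=", 1)[1]
    return sorted(
        os.path.join(data_dir, d)
        for d in os.listdir(data_dir)
        if d.startswith("index")
        and os.path.isdir(os.path.join(data_dir, d))
        and d != live
    )

# Usage (against a core's data dir, ideally with the node stopped):
# print(stale_index_dirs("/opt/solr/index_shard16_replica1/data"))
```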
-Original Message-
From: Shawn Heisey
To: solr-user
Sent: Mon, Apr 20, 2015 11:25 am
Subject: Re: Solr Cloud reclaiming disk space from deleted documents
On 4/20/2015 8:44 AM, Rishi Easwaran wrote:
> Yeah I noticed that. Looks like
optimize won't work
il optimization
completes.
Is it a problem? Not if optimization occurs over the shards serially and your
index is broken into many small shards.
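The serial approach can be sketched as follows (host and core names are placeholders): hit each replica core directly, one at a time, so only one shard needs the temporary extra disk at any moment. Whether distrib=false is honored for optimize on your exact version is worth verifying first.

```python
# Sketch: build per-core optimize requests to run one at a time.
def optimize_core_url(host, core):
    # distrib=false is intended to keep the optimize local to this one core
    return ("http://%s/solr/%s/update?optimize=true&distrib=false&waitSearcher=true"
            % (host, core))

for core in ["index_shard1_replica1", "index_shard2_replica1"]:
    print("curl '%s'" % optimize_core_url("solr1:8983", core))
```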
On Apr 18, 2015 1:54 AM, "Rishi
Easwaran" wrote:
> Thanks Shawn for the quick
reply.
> Our indexes are running on SSD, so 3 should be ok.
> A
-user
Sent: Fri, Apr 17, 2015 6:22 pm
Subject: Re: Solr Cloud reclaiming disk space from deleted documents
On 4/17/2015 2:15 PM, Rishi Easwaran wrote:
> Running into an issue and wanted
to see if anyone had some suggestions.
> We are seeing this with both solr 4.6
and 4.10.3 code.
> We ar
Hi All,
Running into an issue and wanted to see if anyone had some suggestions.
We are seeing this with both solr 4.6 and 4.10.3 code.
We are running an extremely update-heavy application, with millions of writes
and deletes happening to our indexes constantly. An issue we are seeing is
that so
you have German, a filter length of 25 might be too low (because of
compounding). You might want to analyze a sample of your German text to
find a good length.
Tom
http://www.hathitrust.org/blogs/Large-scale-Search
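Tom's sampling suggestion can be sketched in a few lines of Python (whitespace tokenization stands in for the real analyzer here; run it over a sample of your own indexed text):

```python
# Sketch: find the smallest LengthFilter ceiling that keeps at least
# `pct` of the tokens in a text sample.
from collections import Counter

def length_ceiling(text, pct=0.999):
    """Smallest max-length that covers at least `pct` of the tokens."""
    lengths = Counter(len(tok) for tok in text.split())
    total = sum(lengths.values())
    seen = 0
    for n in sorted(lengths):
        seen += lengths[n]
        if seen >= pct * total:
            return n
    return max(lengths)

sample = "Donaudampfschifffahrtsgesellschaft ist ein sehr langes Wort"
print(length_ceiling(sample, 0.8))
```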
On Wed, Feb 25, 2015 at 10:31 AM, Rishi Easwaran
wrote:
> Hi Alex,
>
> Thank
Hi Alex,
Thanks for the suggestions. These steps will definitely help out with our use
case.
Thanks for the idea about the lengthFilter to protect our system.
Thanks,
Rishi.
-Original Message-
From: Alexandre Rafalovitch
To: solr-user
Sent: Tue, Feb 24, 2015 8:50 am
Subject:
ind of do OK with a language-insensitive approach. But
> it hits the wall pretty fast.
>
> One thing that does work pretty well is trademarked names (LaserJet, Coke,
> etc). Those are spelled the same in all languages and usually not inflected.
>
> wunder
> Walter Underwood
&
der
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
On Feb 23, 2015, at 8:00 PM, Rishi Easwaran wrote:
> Hi Alex,
>
> There is no specific language list.
> For example: the documents that need to be indexed are emails or any
> messages
for a
Solr resources newsletter at http://www.solr-start.com/
On 23 February 2015 at 16:19, Rishi Easwaran wrote:
> Hi All,
>
> For our use case we don't really need to do a lot of manipulation of incoming
text during index time. At most removal of common stop words, tokenize emails
Hi All,
For our use case we don't really need to do a lot of manipulation of incoming
text at index time: at most, removal of common stop words and tokenizing of
emails/filenames, etc., if possible. We get text documents from our end users,
which can be in any language (sometimes a combination), and we c
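A minimal sketch of what such a language-agnostic analysis chain might look like in schema.xml. The type name, file names, and the 25-character ceiling are illustrative (the ceiling echoes the LengthFilter idea that comes up in this thread), not from the original messages:

```xml
<!-- sketch: language-agnostic text type; names and values are illustrative -->
<fieldType name="text_multi" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <!-- Unicode-aware tokenizer; splits mixed-language text reasonably well -->
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>
    <!-- guard against pathological tokens; tune max for compounding languages -->
    <filter class="solr.LengthFilterFactory" min="1" max="25"/>
  </analyzer>
</fieldType>
```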
search behaviour when upgrading to 4.10.3
On 2/20/2015 4:24 PM, Rishi Easwaran wrote:
> Also, the tokenizer we use is very similar to the following.
> ftp://zimbra.imladris.sk/src/HELIX-720.fbsd/ZimbraServer/src/java/com/zimbra/cs/index/analysis/UniversalTokenizer.java
> ftp://zimbra.im
On 2/20/2015 9:37 AM, Rishi Easwaran wrote:
> We are trying to upgrade from Solr 4.6 to 4.10.3. When testing search 4.10.3
search results are not being returned, actually looks like only the first word
in a sentence is getting indexed.
> Ex: inserting "This is a test message" only
Hi,
We are trying to upgrade from Solr 4.6 to 4.10.3. When testing 4.10.3, search
results are not being returned; it actually looks like only the first word in
a sentence is getting indexed.
Ex: inserting "This is a test message" only returns results when searching for
content:this*. search
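One way to pin this down is Solr's field-analysis handler (assuming the stock /analysis/field handler from the example solrconfig is registered; the host, core, and field-type names below are placeholders). Running the same request against 4.6 and 4.10.3 shows whether the tokenizer itself stops after the first word:

```python
# Sketch: build a field-analysis request that shows the exact token stream
# a field type produces for a given input string.
try:
    from urllib.parse import urlencode   # Python 3
except ImportError:
    from urllib import urlencode         # Python 2

def analysis_url(host, fieldtype, value):
    params = urlencode({
        "analysis.fieldtype": fieldtype,
        "analysis.fieldvalue": value,
        "wt": "json",
    })
    return "http://%s/solr/collection1/analysis/field?%s" % (host, params)

print("curl '%s'" % analysis_url("solr1:8983", "text_general",
                                 "This is a test message"))
```

If the JSON shows only one token on 4.10.3, the custom tokenizer's port to the Lucene 4.x TokenStream API (the incrementToken/reset contract) is the likely place to look, rather than query-side parsing.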
All,
There is a tech talk on AOL Dulles campus tomorrow. Do swing by if you can and
share it with your colleagues and friends.
www.meetup.com/Code-Brew/events/192361672/
There will be free food and beer served at this event :)
Thanks,
Rishi.
14 5:51 pm
Subject: Re: SOLR Cloud 4.6 - PERFORMANCE WARNING: Overlapping onDeckSearchers=2
On 3/30/2014 2:59 PM, Rishi Easwaran wrote:
> RAM shouldn't be a problem.
> I have a box with 144GB RAM, running 12 instances with 4GB Java heap each.
> There are 9 instances writing to 1TB of SSD
-user
Sent: Fri, Mar 28, 2014 8:35 pm
Subject: Re: SOLR Cloud 4.6 - PERFORMANCE WARNING: Overlapping onDeckSearchers=2
On 3/28/2014 4:07 PM, Rishi Easwaran wrote:
>
> Shawn,
>
> I changed the autoSoftCommit value to 15000 (15 sec).
> My index size is pretty small ~4GB and
on SSD drives is a long time to
handle a 4GB index.
Thanks,
Rishi.
-Original Message-
From: Shawn Heisey
To: solr-user
Sent: Fri, Mar 28, 2014 3:28 pm
Subject: Re: SOLR Cloud 4.6 - PERFORMANCE WARNING: Overlapping onDeckSearchers=2
On 3/28/2014 1:03 PM, Rishi Easwaran wrote:
>
me to avoid issues with this in the future.
Dmitry
On Thu, Mar 27, 2014 at 11:16 PM, Rishi Easwaran wrote:
> All,
>
> I am running SOLR Cloud 4.6, everything looks ok, except for this warn
> message constantly in the logs.
>
>
> 2014-03-27 17:09:03,982 WARN [commitSchedule
All,
I am running SOLR Cloud 4.6, and everything looks OK except for this WARN
message appearing constantly in the logs.
2014-03-27 17:09:03,982 WARN [commitScheduler-15-thread-1] [] SolrCore -
[index_shard16_replica1] PERFORMANCE WARNING: Overlapping onDeckSearchers=2
2014-03-27 17:09:05,517 WARN [commi
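For later readers: this warning usually means commits are opening searchers faster than warming completes. The relevant solrconfig.xml knobs look roughly like this (values are illustrative, not from the thread):

```xml
<!-- sketch, values illustrative. In <updateHandler>: -->
<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>  <!-- flush the tlog without opening a searcher -->
</autoCommit>
<autoSoftCommit>
  <maxTime>15000</maxTime>            <!-- at most one new visible searcher per 15s -->
</autoSoftCommit>
<!-- and in <query>: keep this low so overlap fails loudly instead of piling up -->
<maxWarmingSearchers>2</maxWarmingSearchers>
```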
, which is very small for a
server app.
One of our install scripts had changed.
I had to raise the ulimits (-n, -u, -v), and for now no other issues have been
seen.
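For reference, a sketch of what the raised limits might look like in /etc/security/limits.conf. The values and the "solr" account name are illustrative, not the ones from this install; -n maps to nofile, -u to nproc, and -v to as:

```
# sketch of /etc/security/limits.conf entries (values illustrative)
solr  soft  nofile  65536
solr  hard  nofile  65536
solr  soft  nproc   65536
solr  hard  nproc   65536
solr  soft  as      unlimited
solr  hard  as      unlimited
```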
-Original Message-
From: Rishi Easwaran
To: solr-user
Sent: Tue, Jun 18, 2013 10:40 am
Subject: Re: Solr Cloud Hangs consistently
its (with openSearcher
true or false) to be quite long. Do you have any proof at
all that the tlogs are placing enough load on the system
to go down this road?
Best
Erick
On Tue, Jun 18, 2013 at 10:49 AM, Rishi Easwaran wrote:
> SolrJ already has access to zookeeper cluster state. Network I/O
more
slower network IO. The transaction log is an append-only log -- it is
pretty cheap, especially if you compare it with the indexing process.
Plus your write request/sec will drop a lot once you start doing
synchronous replication.
On Tue, Jun 18, 2013 at 2:18 AM, Rishi Easwaran
hat could be a first report afaik.
>
> - Mark
>
> On Jun 17, 2013, at 5:52 PM, Rishi Easwaran mailto:rishi.easwa...@aol.com)> wrote:
>
> > Update!!
> >
> > This happens with replicationFactor=1
> > Just for kicks I created a collection with a 24 shards, replicati
shut down one node that hosts a replica of the shard to recover the indexing
capability.
--
Yago Riveiro
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
On Monday, June 17, 2013 at 6:44 PM, Rishi Easwaran wrote:
>
>
> Hi All,
>
> I am trying to benchmark SOLR Cloud and
Hi All,
With the economy the way it is and many folks still looking, I figured this is
as good a place as any to publish this.
Just today, we got an opening for a mid-senior level Software Engineer on our
team.
Experience with SOLR is a big plus.
Feel free to have a look at this position.
http://www.link
13 3:43 pm
Subject: Re: SOLR Cloud - Disable Transaction Logs
It is also necessary for near real-time replication, peer sync and recovery.
On Tue, Jun 18, 2013 at 1:04 AM, Rishi Easwaran wrote:
> Hi,
>
> Is there a way to disable transaction logs in SOLR cloud. As far as I can
> tell
Hi,
Is there a way to disable transaction logs in SOLR Cloud? As far as I can tell,
no.
Just curious: why do we need transaction logs? It seems like an I/O-intensive
operation.
As long as I have replicationFactor > 1, if a node (leader) goes down, the
replica can take over and maintain a durable state
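For readers wondering where this lives: the tlog is enabled by the updateLog element in solrconfig.xml (4.x-era layout sketched below). Removing or commenting it out disables the log, but as the replies note, SolrCloud's peer sync, near-real-time get, and recovery depend on it, so this is only safe outside cloud mode:

```xml
<!-- sketch: the <updateLog> element is what turns the transaction log on -->
<updateHandler class="solr.DirectUpdateHandler2">
  <updateLog>
    <str name="dir">${solr.ulog.dir:}</str>
  </updateLog>
</updateHandler>
```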
FYI, you can ignore the http4ClientExpiryService thread in the stack dump.
It's a dummy executor service I created to test out something unrelated to
this issue.
-Original Message-
From: Rishi Easwaran
To: solr-user
Sent: Mon, Jun 17, 2013 2:54 pm
Subject: Re: Solr Cloud Hangs
l hope to look at it soon).
- Mark
On Jun 17, 2013, at 1:44 PM, Rishi Easwaran wrote:
>
>
> Hi All,
>
> I am trying to benchmark SOLR Cloud and it consistently hangs.
> Nothing in the logs, no stack trace, no errors, no warnings, just seems stuck.
>
> A little bit about
Hi All,
I am trying to benchmark SOLR Cloud and it consistently hangs.
Nothing in the logs -- no stack trace, no errors, no warnings; it just seems stuck.
A little bit about my set up.
I have 3 benchmark hosts, each with 96GB RAM, 24 CPU's and 1TB SSD. Each host
is configured to have 8 SOLR cloud
From my understanding:
In SOLR Cloud the compositeId router is hash-based (CompositeIdRouter extends
HashBasedRouter). It is the default when numShards > 1 at collection creation,
and it generates a hash from the uniqueKey defined in your schema.xml to route
your documents to a dedicated shard.
You c
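The routing scheme can be illustrated with a small sketch. This is not Solr's actual implementation -- Solr hashes with MurmurHash3, and for "user!doc" style ids takes the high 16 bits of the route hash from the prefix and the low 16 from the suffix; crc32 stands in here purely to show the bit-combination idea:

```python
# Sketch of compositeId-style routing (crc32 stands in for MurmurHash3).
import zlib

def composite_hash(doc_id):
    """Route hash for 'prefix!rest' ids: high 16 bits from the prefix,
    low 16 bits from the rest, so one prefix clusters onto one hash region."""
    if "!" in doc_id:
        prefix, rest = doc_id.split("!", 1)
        return ((zlib.crc32(prefix.encode()) & 0xFFFF0000)
                | (zlib.crc32(rest.encode()) & 0x0000FFFF))
    return zlib.crc32(doc_id.encode()) & 0xFFFFFFFF

def shard_for(doc_id, num_shards):
    # each shard owns an equal, contiguous slice of the 32-bit hash ring
    return composite_hash(doc_id) * num_shards >> 32

# One user's documents share the top 16 hash bits, so they land on at most
# two adjacent slices of the ring:
print(sorted({shard_for("user42!doc%d" % i, 6) for i in range(100)}))
```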
this is a bug or a feature or simply undefined.
-- Jack Krupansky
-Original Message-----
From: Rishi Easwaran
Sent: Tuesday, May 28, 2013 3:54 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr Composite Unique key from existing fields in schema
I thought the same, but that doesn't seem
order of the field
names in the processor configuration:
docid_s
userid_s
-- Jack Krupansky
-Original Message-
From: Rishi Easwaran
Sent: Tuesday, May 28, 2013 2:54 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr Composite Unique key from existing fields in schema
Jack,
Not sure if
e
update processor, and pick your composite key field name as well. And set
the delimiter string as well in the concat update processor.
I managed to reverse the field order from what you requested (userid,
docid).
I used the standard Solr example schema, so I used dynamic fields for the
two ids, but use your own field names.
-- Jack Krupansky
-Orig
Hi All,
Historically we have used a single field in our schema as the uniqueKey:
docid
We wanted to change this to a composite key, something like
userid-docid.
I know I can auto-generate the composite key at document insert time, using
custom code to generate a new field, but wanted to know if th
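The clone/concat approach discussed in the replies can be sketched in solrconfig.xml roughly like this. The chain and field names are illustrative (the dynamic-field names echo the ones mentioned above), and the clone order is what controls the order of the parts:

```xml
<!-- sketch: build "userid-docid" into the id field at index time with stock
     update processors instead of custom client code -->
<updateRequestProcessorChain name="composite-id">
  <processor class="solr.CloneFieldUpdateProcessorFactory">
    <str name="source">userid_s</str>
    <str name="dest">id</str>
  </processor>
  <processor class="solr.CloneFieldUpdateProcessorFactory">
    <str name="source">docid_s</str>
    <str name="dest">id</str>
  </processor>
  <processor class="solr.ConcatFieldUpdateProcessorFactory">
    <str name="fieldName">id</str>
    <str name="delimiter">-</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```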
Results.
Hi Rishi,
Have you done any tests with Solr 4.3 ?
Regards,
Cordialement,
BOUHLEL Noureddine
On 17 May 2013 21:29, Rishi Easwaran wrote:
>
>
> Hi All,
>
> Its Friday 3:00pm, warm & sunny outside and it was a good week. Figured
> I'd share some good
We use commodity H/W, which we procured over the years as our complex grew.
Running on JDK 6 with Tomcat 5 (planning to upgrade to JDK 7 and Tomcat 7 soon).
We run them with about a 4GB heap, using the CMS GC.
-Original Message-
From: adityab
To: solr-user
Sent: Sat, May 18, 2013 10:37
, Rishi Easwaran wrote:
>
>
> Hi All,
>
> Its Friday 3:00pm, warm & sunny outside and it was a good week. Figured
> I'd share some good news.
> I work for AOL mail team and we use SOLR for our mail search backend.
> We have been using it since pre-SOLR 1.4 and stro
Hi All,
It's Friday 3:00pm, warm & sunny outside, and it was a good week. Figured I'd
share some good news.
I work for the AOL mail team and we use SOLR for our mail search backend.
We have been using it since pre-SOLR 1.4 and are strong supporters of the SOLR
community.
We deal with millions of indexes and b
will get evenly assigned to the shards. As of now, the replication factor is
not persisted.
On Wed, May 15, 2013 at 1:07 AM, Rishi Easwaran
wrote:
OK, it looks like I have to go to every node, add a replica individually,
create the cores, and add them to the collection.
ex:
http://newNode1
://newNode2:port/solr/admin/cores?action=CREATE&name=testCloud1_shard2_replica3&collection=testCloud1&shard=shard2&collection.configName=myconf
Is there an easier way to do this?
Any ideas?
Thanks,
Rishi.
-Original Message-
From: Rishi Easwaran
To: solr-user
Sent: T
Hi,
I am beginning to work on a SOLR Cloud implementation.
I created a collection using the Collections API:
http://myhost:port/solr/admin/collections?action=CREATE&name=testCloud1&numShards=6&replicationFactor=2&collection.configName=myconf&maxShardsPerNode=1
My cluster now has 6 shards and 2 rep
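The per-node dance described above can at least be scripted; a sketch with placeholder hosts follows, reusing the names from the collection above. (Later 4.x releases add an ADDREPLICA collections action that does this in one call.)

```python
# Sketch: build the per-core CREATE calls for adding one extra replica
# per shard on new nodes.
def create_replica_url(host, collection, shard, core_name, config_name="myconf"):
    return ("http://%s/solr/admin/cores?action=CREATE&name=%s"
            "&collection=%s&shard=%s&collection.configName=%s"
            % (host, core_name, collection, shard, config_name))

for s in range(1, 7):
    core = "testCloud1_shard%d_replica3" % s
    print("curl '%s'" % create_replica_url("newNode%d:8983" % s, "testCloud1",
                                           "shard%d" % s, core))
```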