http://localhost:8983/solr/collection1/select?q=solr&wt=xml
However, what if my Solr instance has both a collection1 and a collection2, and
I want the XML files to be posted only to collection2?
If possible, please advise.
Thanks,
Mark
Yes, more details…
Solr version, which garbage collector, how does heap usage look, cpu, etc.
- Mark
On Nov 21, 2013, at 6:46 PM, Erick Erickson wrote:
> How real time is NRT? In particular, what are you commit settings?
>
> And can you characterize "periodic slowness"? Q
So then,
$ java -Durl=http://localhost:8983/solr/collection2/update -jar post.jar \
  solr.xml monitor.xml
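If you would rather do it from SolrJ, here is a rough equivalent (a sketch only; the
collection2 URL and file names come from the example above, everything else is illustrative):

import java.io.File;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

public class PostToCollection2 {
  public static void main(String[] args) throws Exception {
    // Point the client at collection2 only; collection1 is untouched.
    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection2");
    ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update");
    req.addFile(new File("solr.xml"), "application/xml");
    req.addFile(new File("monitor.xml"), "application/xml");
    // Commit at the end of the request so the docs become searchable.
    req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
    req.process(server);
    server.shutdown();
  }
}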
On 11/21/13, 8:14 AM, "xiezhide" wrote:
>
>add -Durl=http://localhost:8983/solr/collection2/update when running post.jar,
>This email was sent from a 189 mailbox
>
>"Reyes, Mark" wrote
generally.
To your question though - it is fine to send a commit while updates are coming
in from another source - it’s just not generally necessary to do that anyway.
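For illustration, a minimal SolrJ sketch of the situation being described (URLs and the
collection name are placeholders, not from the original thread):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class CommitFromAnotherClient {
  public static void main(String[] args) throws Exception {
    // One client keeps sending updates...
    SolrServer indexer = new HttpSolrServer("http://localhost:8983/solr/collection1");
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "doc-1");
    indexer.add(doc);

    // ...while another client issues an explicit commit. This is safe,
    // just usually unnecessary if autoCommit/commitWithin are in use.
    SolrServer other = new HttpSolrServer("http://localhost:8983/solr/collection1");
    other.commit();

    indexer.shutdown();
    other.shutdown();
  }
}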
- Mark
On Nov 24, 2013, at 1:01 PM, adfel70 wrote:
> Hi everyone,
>
> I am wondering how commit operation works in
as
they are still loading is simply when you soft commit and how many docs have
been indexed when the soft commit happens.
- Mark
On Nov 25, 2013, at 1:03 AM, adfel70 wrote:
> Hi Mark, Thanks for the answer.
>
> One more question though: You say that if I get a success from t
> system? Only those which got processed before the fail, or none of the docs in
> this batch?
Generally, it will be those processed before the fail if you are using the bulk
add methods. Somewhat depends on impls and such - for example CloudSolrServer
can use multiple threads to route documents
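A rough SolrJ sketch of what that means in practice, assuming a plain bulk add over HTTP
(the URL, ids, and batch size are made up):

import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class BulkAddExample {
  public static void main(String[] args) throws Exception {
    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
    List<SolrInputDocument> batch = new ArrayList<SolrInputDocument>();
    for (int i = 0; i < 1000; i++) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "doc-" + i);
      batch.add(doc);
    }
    try {
      server.add(batch);   // bulk add: docs are streamed in order
      server.commit();
    } catch (SolrServerException e) {
      // On failure, docs processed before the error may already be indexed and
      // the rest of the batch was not. Re-sending the whole batch is safe when
      // ids are unique keys, since later adds overwrite earlier ones.
    } finally {
      server.shutdown();
    }
  }
}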
t.
We talked about wanting a similar option for SolrCloud and local filesystem a
while back. If there is no JIRA issue for it, please file one!
- Mark
Are there any GOOD client-side solutions to proxy a Solr 4.5.0 instance so that
the end-user can see their queries w/o being able to directly access :8983?
Applications/frameworks used:
- Solr 4.5.0
- AJAX Solr (javascript library)
Thank you,
Mark
use Nginx, it is very
>fast and very feature rich. Its config scripting is usually enough to
>restrict access and limit input parameters. We also use Nginx's embedded
>Perl and Lua scripting besides its config scripting to implement more
>difficult logic.
>
>
>
>-
hardware,
setting it to 3MB and uploading your 2MB syn file is not going to be a problem.
Solr doesn’t read and write those files often, nor use ZooKeeper much at all in
a stable state. Upping that limit and putting in a few config files that are a
few MB is not going to break anything.
- Mark
Are there any good tutorials that touch base on how to integrate the suggested
PHP proxy for JavaScript framework AJAX Solr?
Here is the proxy, https://gist.github.com/evolvingweb/298580
Also on Stackoverflow,
http://stackoverflow.com/questions/20338073/proxy-php-tutorials-for-ajax-solr
Keep in mind, there have been a *lot* of bug fixes since 4.3.1.
- Mark
On Dec 4, 2013, at 7:07 PM, Tim Vaillancourt wrote:
> Hey all,
>
> Now that I am getting correct results with "distrib=false", I've identified
> that 1 of my nodes has just 1/3rd of the total da
cates collection1?
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Replicating-from-the-correct-collections-in-SolrCloud-on-solr-start-tp4105754.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
--
- Mark
Sounds like a bug. If you are seeing this happen in 4.6, I'd file a JIRA
issue.
- Mark
On Sun, Dec 8, 2013 at 3:49 PM, William Bell wrote:
> Any thoughts? Why are we getting duplicate items in solr.xml ?
>
> -- Forwarded message --
> From: William Bell
>
Hmm… I think we shipped 4.4.0 on a pre-Hadoop-2.2.0 version? I’d assume that is
the problem - for instance, the Google protobuf lib probably has to be updated to
match what’s expected by 2.2.0 at the least, if I remember right.
- Mark
On Dec 13, 2013, at 4:41 AM, javozzo wrote:
> Hi,
> I
of
thing I think we should look at a JIRA issue at a time.
- Mark
On Dec 14, 2013, at 2:15 PM, Alan Woodward wrote:
> Evening all,
>
> I discovered the Apache Curator project yesterday
> (http://curator.apache.org/index.html), which seems to make interaction with
> Zookee
even then
there are many difficulties in testing a distributed system.
If it was an all or nothing change (I'm not convinced it is), I would certainly
vote against at this point in time. We have years of hardening around the
current zk code at this point
Mark
> On Dec 14, 2013, at
ainly for oversharding (you
might have more than one core on a node from the same collection and shard).
Probably better to use a naming scheme that is friendly to this in the long
run, and the default naming scheme is. Also less confusing to match the default
naming scheme.
- Mark
As long as you are not using the bulk or streaming APIs. SolrJ does not
currently respect delete/add ordering in those cases, though each of the two
types is ordered. For the standard update per request, as long as it’s the
same client, this is a guarantee.
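To make the guarantee concrete, a minimal sketch of the safe pattern, one operation per
request from the same client instance (the URL and ids are illustrative, not from this thread):

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class OrderedUpdates {
  public static void main(String[] args) throws Exception {
    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");

    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "doc-1");

    // Separate requests from the same client: the delete is applied after the add.
    server.add(doc);
    server.deleteById("doc-1");
    server.commit();
    server.shutdown();
  }
}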
- Mark
On Dec 17, 2013, at 9:54
Sounds like you need to raise your ZooKeeper connection timeout.
Also, make sure you are using a concurrent garbage collector as a side note -
stop the world pauses should be avoided. Just good advice :)
- Mark
On Dec 18, 2013, at 5:48 AM, Anca Kopetz wrote:
> Hi,
>
> In our
(cores are not defined in solr.xml)?
- Mark
On Dec 18, 2013, at 6:30 PM, Ryan Wilson wrote:
> Hello all,
>
> I am currently in the process of building out a solr cloud with solr 4.5 on
> 4 nodes with some pretty hefty hardware. When we create the collection we
> have a replicat
What Solr version?
- Mark
On Dec 20, 2013, at 1:14 PM, Fred Drake wrote:
> Here's another sequence of messages I frequently see where replication
> isn't happening with no clearly identified cause:
>
> INFO org.apache.solr.handler.SnapPuller; Starting repl
(rolling starting nodes one by one), all nodes start properly and all cores
> are properly loaded ("active"). But after that, first restart of any Solr
> node causes issues on that node.
>
> Any ideas about possible cause? And shouldn't Solr maybe try to recover
> from such situation?
>
> Thanks,
>
> Bojan
>
--
- Mark
be less strict
(eg not start rejecting updates for a few seconds).
- Mark
On Mon, Dec 23, 2013 at 12:49 PM, Christine Poerschke (BLOOMBERG/ LONDON) <
cpoersc...@bloomberg.net> wrote:
> Hello.
>
> The behaviour we observed was that a zookeeper election took about 2s plus
> 1.
If you are seeing an NPE there, sounds like you are on to something. Please
file a JIRA issue.
- Mark
> On Dec 26, 2013, at 1:29 AM, YouPeng Yang wrote:
>
> Hi
> Merry Christmas.
>
> Before this mail, I have been struggling with a weird problem for a few days
> when to c
Can you file a JIRA issue?
- Mark
> On Dec 24, 2013, at 2:57 AM, YouPeng Yang wrote:
>
> Hi users
>
> Solr has support for writing and reading its index and transaction log files
> to the HDFS distributed filesystem.
> **I am curious whether there are any other further im
Cloudera has plans here. I'll be working on further hdfs / Solrcloud options in
the near future.
- Mark
> On Dec 26, 2013, at 11:33 AM, Greg Walters wrote:
>
> YouPeng,
>
> While I'm unable to help you with the issue that you're seeing I did want to
>
This would be bad.
4. You really want to use the solr.hdfs.home setting described in the
documentation IMO.
- Mark
> On Dec 26, 2013, at 1:56 PM, Greg Walters wrote:
>
> Mark,
>
> I'd be happy to but some clarification first; should this issue be about
> creating core
Do you have auto commit (not softAutoCommit) on? At what value? Are you
ever opening a new searcher?
- Mark
On 12/27/2013 05:17 AM, YouPeng Yang wrote:
> Hi
> There is a failed core in my SolrCloud cluster (Solr 4.6 with HDFS 2.2)
> when I start my solrcloud . I noticed that there ar
It can fail because it may contain a partial record - that is why that
is warn level rather than error. A fail does not necessarily indicate a
problem.
- Mark
On 12/29/2013 09:04 AM, YouPeng Yang wrote:
> Hi Mark Miller
>
> How can a log replay fail?
> And I can not fi
ng akin to the HBase region server model. A shared
index among leader and replicas is a longer term item we might explore.
A lot of trade offs depending on what we try and do, so nothing set in
stone yet, but I'll start on phase 1 impl any day now.
- Mark
On 12/29/2013 09:08 AM, YouPeng Yang wro
Thanks - that is more bad code that needs to be removed. I'll reopen that
issue and add to it.
- Mark
On Sun, Dec 29, 2013 at 8:56 AM, YouPeng Yang wrote:
> Hi Mark Miller
>
> It's great that you have fixed the bug. By the way, there is another
> point I want to remind
Just an FYI, newer versions of Solr will report the proper error message rather
than that cryptic one.
- Mark
On Jan 3, 2014, at 12:54 AM, Shawn Heisey wrote:
> On 1/2/2014 10:22 PM, gpssolr2020 wrote:
>> Caused by: java.lang.RuntimeException: Invalid version (expected 2, but 60)
Check suffix-urlfilter.txt in your conf directory for Nutch. You might be
prohibiting those filetypes from the crawl.
- Mark
On 1/3/14, 10:29 AM, "Teague James" wrote:
>I am using Nutch 1.7 with Solr 4.6.0 to index websites that have links to
>binary files, such as Wor
changed.
- Mark
On Jan 5, 2014, at 5:33 PM, Gopal Patwa wrote:
> I gave another try with Solr 4.4 which ships with the Cloudera VM as Cloudera
> Search, but same result. It seems there is a compatibility issue with the
> protobuf library dependency in the Hadoop Java client and the HDFS server itself.
>
area needs
to bake.
- Mark
On Jan 15, 2014, at 11:01 AM, Alan Woodward wrote:
> I think solr.xml is the correct place for it, and you can then set up
> substitution variables to allow it to be set by environment variables, etc.
> But let's discuss on the JIRA ticket.
>
What’s the benefit? So you can avoid having a simple core properties file? I’d
rather see more value than that before exposing something like this to the
user. It’s a can of worms that I personally have not seen a lot of value in yet.
Whether we mark it experimental or not, this adds a burden
You can configure the Solr client to use a replication factor of 1 for hdfs and
then let Solr replicate for you if you want to avoid this.
Other than that, we will be adding further options over time.
- Mark
On Jan 15, 2014, at 9:46 PM, longsan wrote:
> Hi, i'm newer for solr cloud.
We are currently voting on the first release candidate:
http://markmail.org/thread/6kzfs3l3z6jgtqpv
http://people.apache.org/~markrmiller/lucene_solr_4_6_1r1559132/
Vote runs 3 days. I’ll publish the artifacts once it passes.
- Mark
On Jan 18, 2014, at 4:37 PM, Bill Bell wrote:
> We j
What version are you running?
- Mark
On Jan 20, 2014, at 5:43 PM, Software Dev wrote:
> We also noticed that disk IO shoots up to 100% on 1 of the nodes. Do all
> updates get sent to one machine or something?
>
>
> On Mon, Jan 20, 2014 at 2:42 PM, Software Dev
> wrote:
>
If that is the case, we could probably use a JIRA issue, Svante. The component
should really give a nice user error in this scenario.
- Mark
On Jan 21, 2014, 8:00:55 PM, Tim Potter wrote: Hi
Svante,
It seems like the TermVectorComponent is in the search component chain of your
/select
, and this will not work out.
By setting solr.hdfs.home and leaving the relative defaults, all of the
locations are correctly set for each different collection under solr.hdfs.home
without any effort on your part.
- Mark
On Jan 22, 2014, 7:22:22 AM, Lajos wrote: Uugh. I just
realised I
Yonik has brought up this feature a few times as well. I’ve always felt about
the same as Shawn. I’m fine with it being optional, default to off. A cluster
reload can be a fairly heavy operation.
- Mark
On Jan 22, 2014, 4:36:19 AM, Mohit Jain wrote: Thanks
Shawn. I appreciate you sharing
Looking at the list of changes on the 21st and 22nd, I don’t see a smoking gun.
- Mark
On Jan 22, 2014, 11:13:26 AM, Markus Jelsma wrote:
Hi - this likely belongs to an existing open issue. We're seeing the stuff
below on a build of the 22nd. Until just now we used builds of the 20t
I just created a JIRA issue for the first bit I’ll be working on:
- Mark
On Jan 22, 2014, 12:57:46 PM, Lajos wrote: Thanks Mark ...
indeed, some doc updates would help.
Regarding what seems to be a popular question on sharding. It seems that
it would be a Good Thing that the shards
Whoops, hit the send keyboard shortcut.
I just created a JIRA issue for the first bit I’ll be working on:
SOLR-5656: When using HDFS, the Overseer should have the ability to reassign
the cores from failed nodes to running nodes.
- Mark
On Jan 22, 2014, 12:57:46 PM, Lajos wrote
What version of Solr are you running?
- Mark
On Jan 22, 2014, 5:42:30 PM, Utkarsh Sengar wrote: I am
not sure what happened, I updated merchant collection and then
restarted all the solr machines.
This is what I see right now: http://i.imgur.com/4bYuhaq.png
merchant collection looks fine
can google for.
Many, many SolrCloud bug fixes (we are about to release 4.6.1) since 4.4, so
you might consider an upgrade if possible at some point soon.
- Mark
On Jan 22, 2014, 6:14:10 PM, Utkarsh Sengar wrote: solr
4.4.0
On Wed, Jan 22, 2014 at 3:12 PM, Mark Miller wrote:
> W
Yeah, I think we removed support in the new solr.xml format. It should still
work with the old format.
If you have a good use case for it, I don’t know that we couldn’t add it back
with the new format.
- Mark
On Jan 23, 2014, 3:26:05 AM, Per Steffensen wrote: Hi
In Solr 4.0.0 I used
.
Trappy default for standard use, unfortunately.
- Mark
On Jan 23, 2014, at 9:20 AM, stevenNabble wrote:
> Hello,
>
> I am finding that if any fields in a document returned by a Solr query
> (*wt=json* to get a JSON response) contain backslash *'\'* characters, they
>
mode and being in ZooKeeper in SolrCloud
mode.
There are also low-level APIs you could use, but I wouldn’t normally recommend
that.
- Mark
On Jan 24, 2014, at 11:16 AM, Ugo Matrangolo wrote:
> Hi,
>
> we have a quite large SOLR 3.6 installation and we are trying to update t
done before).
- Mark
http://about.me/markrmiller
On Jan 24, 2014, at 12:01 PM, Nathan Neulinger wrote:
> I have an environment where new collections are being added frequently
> (isolated per customer), and the backup is virtually guaranteed to be missing
> some of them.
>
>
the state to know the real state.
- Mark
http://www.about.me/markrmiller
> On Jan 28, 2014, at 12:31 PM, Greg Preston wrote:
>
> ** Using solrcloud 4.4.0 **
>
> I had to kill a running solrcloud node. There is still a replica for that
> shard, so everything is functional
What's in the logs of the node that won't recover on restart after clearing the
index and tlog?
- Mark
On Jan 29, 2014, at 11:41 AM, Greg Preston wrote:
>> If you removed the tlog and index and restart it should resync, or
> something is really crazy.
>
> It doesn
for each release.
I wrote a little about 4.6.1 as it relates to SolrCloud here:
https://plus.google.com/+MarkMillerMan/posts/CigxUPN4hbA
- Mark
http://about.me/markrmiller
On Jan 31, 2014, at 10:13 AM, David Santamauro
wrote:
>
> Hi,
>
> I have a strange situation.
m itself.
I would actually expect that to easily corrupt the index. It’s easy enough to
check though. Simply try starting a Solr instance against it and take a look.
- Mark
http://about.me/markrmiller
On Jan 31, 2014, at 10:31 AM, Mark Miller wrote:
> Seems unlikely by the way. Sounds like what probably happened is that for
> some reason it thought when you restarted the shard that you were creating it
> with numShards=2 instead of 1.
No, that’s not right. Sorry.
It must
http://about.me/markrmiller
On Jan 31, 2014, at 11:11 AM, David Santamauro
wrote:
>
> There is nothing of note in the zookeeper logs. My solr.xml (sanitized for
> privacy) is identical on all 4 nodes.
>
> zkHost="xx.xx.xx.xx:2181,xx.xx.xx.xx:2181,xx.xx.xx.xx:2181,xx.xx.xx.xx:2181,xx.xx.xx.
On Jan 31, 2014, at 11:15 AM, David Santamauro
wrote:
> On 01/31/2014 10:22 AM, Mark Miller wrote:
>
>> I’d also highly recommend you try moving to Solr 4.6.1 when you can though.
>> We have fixed many, many, many bugs around SolrCloud in the 4 releases since
>>
docs via commitWithin.
- Mark
http://about.me/markrmiller
On Jan 31, 2014, at 12:45 PM, Software Dev wrote:
> Is there a way to disable commit/hard-commit at runtime? For example, we
> usually have our hard commit and soft-commit set really low but when we do
> bulk indexing we woul
def add some javadoc, but this sends updates to shards in
parallel rather than with a single thread. Can really increase update speed.
Still not as powerful as using CloudSolrServer from multiple threads, but a
nice improvement nonetheless.
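If it helps, a rough sketch of turning that on from SolrJ, assuming your SolrJ version
exposes CloudSolrServer.setParallelUpdates (the ZooKeeper addresses and collection name
are placeholders):

import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class ParallelUpdates {
  public static void main(String[] args) throws Exception {
    CloudSolrServer server = new CloudSolrServer("zkhost1:2181,zkhost2:2181,zkhost3:2181");
    server.setDefaultCollection("collection1");
    server.setParallelUpdates(true);  // send the per-shard sub-requests in parallel

    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "doc-1");
    server.add(doc);
    server.commit();
    server.shutdown();
  }
}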
- Mark
http://about.me/markrmiller
>
> I
max).
- Mark
http://about.me/markrmiller
On Jan 31, 2014, at 3:50 PM, Software Dev wrote:
> Which of any of these settings would be beneficial when bulk uploading?
>
>
> On Fri, Jan 31, 2014 at 11:05 AM, Mark Miller wrote:
>
>>
>>
>> On Jan 31, 2014, a
days before the collections api, if you wanted
to then create the collection again, just create a solrcore, same way a
collection was created initially.
Without the zk=cluster_truth mode, we have to support both collections api
collections, and pre configured implicit collections, created
be removed.
--
- Mark
http://about.me/markrmiller
On Sun, Feb 2, 2014 at 12:01 AM, Mark Miller wrote:
> It's expected behaviour. It's expected to change with the zk=cluster_truth
> mode that I hope we can start squeezing into 4.7.
>
> In the first couple releases of Sol
You should contribute that and spread the dev load with others :)
We need something like that at some point, it’s just no one has done it. We
currently expect you to aggregate in the monitoring layer and it’s a lot to ask
IMO.
- Mark
http://about.me/markrmiller
On Feb 3, 2014, at 10:49 AM
satisfy all use cases though.
At some point, multi data center support will happen.
I can’t remember where ZooKeeper’s support for it is at, but with that and some
logic to favor nodes in your data center, that might be a viable route.
- Mark
http://about.me/markrmiller
On Feb 3, 2014, at 11
- *everyone* wants stats
that make sense for the collections and cluster on top of the per shard stats.
*Everyone* wouldn’t mind seeing these without having to set up a monitoring
solution first.
If you want more than that, then you can fiddle with your monitoring solution.
- Mark
http://about.me
Based on our current use of it and the nature of the issue, I don’t think we
have anything to worry about.
- Mark
http://about.me/markrmiller
On Jan 27, 2014, 9:52:05 PM, Shawn Heisey wrote: The
Internet is buzzing about the change in Java 7u51 that breaks Google
Guava. Guava is used in
for this type of
thing.
- Mark
http://about.me/markrmiller
On Feb 7, 2014, 7:01:24 PM, Brett Hoerner wrote: I
have Solr 4.6.1 on the server and just upgraded my indexer app to SolrJ
4.6.1 and indexing ceased (indexer returned "No live servers for shard" but
the real root fro
If that is the case we really have to dig in. Given the error, the first thing
I would assume is that you have an old solrj jar or something before 4.6.1
involved with a 4.6.1 solrj jar or install.
- Mark
http://about.me/markrmiller
On Feb 7, 2014, 7:15:24 PM, Mark Miller wrote: Hey
If you look at the stack trace, the line numbers match 4.6.0 in the src, but
not 4.6.1. That code couldn’t have been 4.6.1 it seems.
- Mark
http://about.me/markrmiller
On Feb 8, 2014, at 11:12 AM, Brett Hoerner wrote:
> Hmmm, I'm assembling into an uberjar that forces uniqueness of
Doing a standard commit after every document is a Solr anti-pattern.
commitWithin is a “near-realtime” commit in recent versions of Solr and not a
standard commit.
https://cwiki.apache.org/confluence/display/solr/Near+Real+Time+Searching
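For example, a minimal SolrJ sketch of indexing with commitWithin instead of a hard commit
per document (the URL and field are illustrative):

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class CommitWithinExample {
  public static void main(String[] args) throws Exception {
    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "doc-1");
    // Ask Solr to make the doc visible within ~10 seconds rather than
    // issuing a standard commit after every document.
    server.add(doc, 10000);
    server.shutdown();
  }
}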
- Mark
http://about.me/markrmiller
On Feb 12, 2014, at
Can you share the full stack trace dump?
- Mark
http://about.me/markrmiller
On Feb 17, 2014, at 7:07 AM, Pawel Rog wrote:
> Hi,
> I have quite annoying problem with Solr cloud. I have a cluster with 8
> shards and with 2 replicas in each. (Solr 4.6.1)
> After some time cluster doe
to
receive updates, it has to maintain a connection with ZooKeeper. You can either
raise the timeout, or dig into why the connection heartbeat cannot be
maintained (it's very lightweight).
- Mark
http://about.me/markrmiller
on an unrelated issue I think.
But that was not the correct fix (though it's better than the previous dangerous
behavior) - it really just needs to be more selective.
Can you file a JIRA issue?
- Mark
http://about.me/markrmiller
On Feb 21, 2014, at 12:25 AM, Chia-Chun Shih wrote:
> Hi
Thanks Guido - any chance you could file a JIRA issue for this?
- Mark
http://about.me/markrmiller
On Feb 26, 2014, at 6:28 AM, Guido Medina wrote:
> I think it would need Guava v16.0.1 to benefit from the ported code.
>
> Guido.
>
> On 26/02/14 11:20, Guido Medina wrote:
>
with
SolrCloud.
- Mark
http://about.me/markrmiller
I’m pretty sure the default config will unlock on startup.
- Mark
http://about.me/markrmiller
On Feb 28, 2014, at 3:50 AM, Chen Lion wrote:
> Dear all,
> I have a problem I can't understand.
>
> I use solr 4.6.1, and 2 nodes, one leader and one follower, both have the
a background thread that periodically
checks some local readings and depending on the results, pulls itself out of
the mix as best it can (removes itself from clusterstate.json or simply closes
its ZK connection).
- Mark
http://about.me/markrmiller
On Mar 2, 2014, at 3:42 PM, Gregg Donovan
Yeah, sorry :( the fix applied is only for compatibility in one direction.
Older code won’t know what this type 19 is.
- Mark
http://about.me/markrmiller
On Mar 4, 2014, at 2:42 AM, Thomas Scheffler
wrote:
> On 04.03.2014 07:21, Thomas Scheffler wrote:
>> On 27.02.2014 09:15
Are you using an old version?
- Mark
http://about.me/markrmiller
On Mar 6, 2014, at 11:50 AM, KNitin wrote:
> Hi
>
> When restarting a node in SolrCloud, I run into scenarios where both the
> replicas for a shard get into "recovering" state and never come up causing
>
It sounds like the distributed update deadlock issue.
It’s fixed in 4.6.1 and 4.7.
- Mark
http://about.me/markrmiller
On Mar 6, 2014, at 3:10 PM, Avishai Ish-Shalom wrote:
> Hi,
>
> We've had a strange mishap with a solr cloud cluster (version 4.5.1) where
> we observed hi
On Mar 6, 2014, at 5:37 PM, Martin de Vries wrote:
> IndexSchema is using 62% of the memory but we don't know if that's a
> problem:
That seems odd. Can you see what objects are taking all the RAM in the
IndexSchema?
- Mark
http://about.me/markrmiller
Would probably need to see some logs to say much. Need to understand why they
are inoperable.
What version is this?
- Mark
http://about.me/markrmiller
On Mar 6, 2014, at 6:15 PM, Nazik Huq wrote:
> Hello,
>
>
>
> I have a question from a colleague who's managing a 3
me, username "MarkSun" as a contributor to the wiki?
Thank you!
Cheers,
Mark Sun
CTO
MotionElements Pte Ltd
190 Middle Road, #10-05 Fortune Centre
Singapore 188979
mark...@motionelements.com
www.motionelements.com
=
Asia-inspired Stock Animation | Video
On August 19, 2014 at 2:39:32 AM, Lee Chunki (lck7...@coupang.com) wrote:
> > the sooner the better? i.e. version 4.9.0.
Yes, certainly.
--
Mark Miller
about.me/markrmiller
with SolrCloud, sounds
like we should write a test and make it work.
--
Mark Miller
about.me/markrmiller
On August 19, 2014 at 1:20:54 PM, Timothy Potter (thelabd...@gmail.com) wrote:
> Hi,
>
> Using the coreAdmin mergeindexes command to merge an index into a
> leader (SolrCloud m
On August 19, 2014 at 1:33:10 PM, Mark Miller (markrmil...@gmail.com) wrote:
> > sounds like we should write a test and make it work.
Keeping in mind that when using a shared filesystem like HDFS or especially if
using the MapReduce contrib, you probably won’t want this new behavior.
--
The state is actually a combo of the state in clusterstate and the live nodes.
If the live node is not there, it's gone regardless of the last state it
published.
- Mark
> On Aug 23, 2014, at 6:00 PM, Nathan Neulinger wrote:
>
> In particular, a shard being 'active'
Sounds like you should file 3 JIRA issues. They all look like legit stuff we
should dig into at a glance.
--
Mark Miller
about.me/markrmiller
On August 24, 2014 at 12:35:13 PM, ralph tice (ralph.t...@gmail.com) wrote:
> Hi all,
>
> Two issues, first, when I issue an ADDREPLICA cal
h user configuration. With a little
effort, there is a lot of great information and summarization that can be
pulled out of Solr logs.
https://github.com/markrmiller/SolrLogReader
--
- Mark
http://about.me/markrmiller
n of the cloud is not done parallel or distributed. Is this
> already addressed by https://issues.apache.org/jira/browse/SOLR-5473 or is
> there more needed?
2. No, but it should have been fixed by another issue that will be in 4.10.
- Mark
http://about.me/markrmiller
to drop the container
thread pool too much. There are other control points though.
- Mark
http://about.me/markrmiller
On Sun, Aug 31, 2014 at 11:53 AM, Ramkumar R. Aiyengar <
andyetitmo...@gmail.com> wrote:
> On 31 Aug 2014 13:24, "Mark Miller" wrote:
> >
> >
>
lr instead of storing
in traditional databases?
Thanks in advance
*Nipen Mark *
Solr team, I am indexing geographic points in decimal degrees lat/lon using the
location_rpt type in my index. The type is set up like this
my field definition is this
my problem is that the return is a very narrow but tall ellipse, likely due
to the degrees and geo true... but when I change those
I think it's just cruft I left in and never ended up using anywhere. You can
ignore it.
- Mark
> On Oct 13, 2014, at 8:42 PM, Martin Grotzke
> wrote:
>
> Hi,
>
> can anybody tell me the meaning of ZkStateReader.SYNC? All other state
> related constants are clea
Best is to pass the Java command-line option that kills the process on OOM and
set up a supervisor on the process to restart it. You need a somewhat recent
release for this to work properly though.
- Mark
> On Oct 14, 2014, at 9:06 AM, Salman Akram
> wrote:
>
> I know th
> On Oct 28, 2014, at 9:31 AM, Shawn Heisey wrote:
>
> exceed a 15 second zkClientTimeout
Which is too low even with good GC settings. Anyone with config still using 15
or 10 seconds should move it to at least 30.
- Mark
http://about.me/markrmiller
If someone wants to file a JIRA, we really should detect and help the user
on that.
- Mark
On Wed Nov 19 2014 at 10:39:56 AM Robert Kent
wrote:
> Yes, Alan's comment was correct. Using the correct Zookeeper string made
> things work correctly, e.g.:
>
> SOLR_ZK_ENSEMBL
bq. esp. since we've set max threads so high to avoid distributed
dead-lock.
We should fix this for 5.0 - add a second thread pool that is used for
internal requests. We can make it optional if necessary (simpler default
container support), but it's a fairly easy improvement I think.