Hi Chris,
Thanks for the link to Patrick's github (looks like some good stuff in there).
One thing to try (and this isn't the final word on this, but is helpful) is to
go into the tree view in the Cloud panel and find out which node is hosting the
Overseer (/overseer_elect/leader). When restart
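In case the command line is handier than clicking through the UI, here's a rough
sketch using the zkcli.sh that ships with Solr (the script path and ZooKeeper
address below are just placeholders and vary by install):

cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd get /overseer_elect/leader
# the znode's data is a small JSON blob whose "id" field names the node
# currently acting as Overseer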
Hi Metin,
I think removing the softCommit=true parameter on the client side will
definitely help as NRT wasn't designed to re-open searchers after every
document. Try every second (or even every few seconds); I doubt your users
will notice. To get an idea of what threads are running in your J
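One way to get roughly one-second visibility without the client committing per
document is commitWithin on the update request (or an autoSoftCommit maxTime in
solrconfig.xml); a quick sketch, with host and collection name as placeholders:

curl 'http://localhost:8983/solr/collection1/update?commitWithin=1000' \
  -H 'Content-Type: application/json' \
  -d '[{"id":"doc1","title":"example"}]'
# no explicit commit from the client; Solr makes the doc searchable within ~1s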
Hi Daniel,
I'm not sure how this would apply to an existing collection (in your case
collection1). Try using the collections API to create a new collection and pass
the router.field parameter. Grep'ing over the code, the parameter is named:
router.field (not routerField or routeField).
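Something along these lines should do it (collection name, shard/replica counts,
and port below are just placeholders):

curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=collection2&numShards=2&replicationFactor=2&router.field=myRouteField'
# documents are then routed by the value of myRouteField rather than the uniqueKey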
Cheers,
Apologies for chiming in late on this one ... just wanted to mention what I've
used with good success in the past is supervisord (http://supervisord.org/).
It's easy to install and configure and has the benefit of restarting nodes if
they crash (such as due to an OOM). I'll also mention that you
Hi Michael,
Can you /get clusterstate.json again to see the contents? Also, maybe just a
typo but you have `cate clusterstate.json` vs. `cat ..`
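If it helps, this is roughly how I dump it with the zkcli.sh that ships with
Solr (script path and ZooKeeper address are placeholders):

cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd get /clusterstate.json
# or save it to a local file to diff between runs:
cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd getfile /clusterstate.json /tmp/clusterstate.json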
Timothy Potter
Sr. Software Engineer, LucidWorks
www.lucidworks.com
From: michael.boom
Sent: Wednesday, Dece
I'm not sure at this point as what you're describing seems fine to me ... I'm
not too familiar with Solr's UI implementation, but I suspect the cloud graph
stuff may be client side, so are you seeing any JavaScript errors in the dev
console in your browser?
Timothy Potter
Sr. Software Engineer,
Hi Chen,
I'm not aware of any direct integration between the two at this time. You might
ping the Hive user list with this question too. That said, I've been thinking
about whether it makes sense to build a Hive StorageHandler for Solr. That at least
seems like a quick way to go. However, it might al
Hi Chris,
The easiest approach is to just create a new core on the new machine that
references the collection and shard you want to migrate. For example, say you
split shard1 of a collection named "cloud", which results in having: shard1_0
and shard1_1. Now let's say you want to migrate shard 1
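Roughly, on the new machine (host, port, and the core name below are just
placeholders):

curl 'http://newhost:8983/solr/admin/cores?action=CREATE&name=cloud_shard1_1_replica2&collection=cloud&shard=shard1_1'
# the new core registers as a replica of shard1_1 and pulls the index over
# from the current shard leader; once it's active you can unload the old core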
?
--
Yago Riveiro
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
On Monday, December 16, 2013 at 4:23 PM, Tim Potter wrote:
> Hi Chris,
>
> The easiest approach is to just create a new core on the new machine that
> references the collection and shard you want to migrate. For example
There have been some recent refactorings in this area of the code. The
following class name should work:
org.apache.solr.spelling.suggest.tst.TSTLookupFactory
Cheers,
Timothy Potter
Sr. Software Engineer, LucidWorks
www.lucidworks.com
From: Trevor Handl
Cloud Suggester java ClassNotFoundException:
org.apache.solr.suggest.tst.TSTLookup
Brilliant, thanks Timothy!
Changing the solrconfig.xml lookupImpl (not className) to the
org.apache.solr.spelling.suggest.tst.TSTLookupFactory fixed this issue for me.
Thanks, Trevor
-Original Message-
From
Yes, SolrCloud uses a transaction log to keep track of ordered updates to a
document. The latest update will be immediately visible from the real-time get
handler /get?id=X even without a commit.
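For example (host, collection, and id are placeholders):

curl 'http://localhost:8983/solr/collection1/get?id=X'
# returns the latest version of doc X, served from the transaction log if it
# hasn't been committed / opened in a searcher yet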
Cheers,
Timothy Potter
Sr. Software Engineer, LucidWorks
www.lucidworks.com
___
Any chance you still have the logs from the servers hosting 1 & 2? I would open
a JIRA ticket for this one as it sounds like something went terribly wrong on
restart.
You can update the /clusterstate.json to fix this situation.
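Roughly: pull it down with zkcli, hand-edit the bad state, and push it back
(script path and ZooKeeper address are placeholders):

cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd getfile /clusterstate.json /tmp/clusterstate.json
# edit /tmp/clusterstate.json, then upload the corrected version:
cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd putfile /clusterstate.json /tmp/clusterstate.json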
Lastly, it's recommended to use an OOM killer script with SolrClou
I'm using logstash4solr (http://logstash4solr.org) for something similar ...
I set up my Solr to use Log4J by passing the following on the command-line when
starting Solr:
-Dlog4j.configuration=file:///$SCRIPT_DIR/log4j.properties
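i.e. the full start command ends up looking something like this (assuming the
stock Jetty start.jar and that $SCRIPT_DIR points at the directory holding the
properties file):

java -Dlog4j.configuration=file:///$SCRIPT_DIR/log4j.properties -jar start.jar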
Then I use a custom Log4J appender that writes to RabbitMQ:
htt
because I used SocketAppender directly to write to
logstash and the logstash disk got full.
That's why I moved to using AsyncAppender, and I plan on moving to using
rabbit.
But this is also why I wanted to filter some of the logs. Indexing 150K docs
produced 50GB of logs,
which seemed like too much.
Tim
Subject: RE: monitoring solr logs
And are you using any tool like kibana as a dashboard for the logs?
Tim Potter wrote
> We (LucidWorks) are actively developing on logstash4solr so if you have
> issues, let us know. So far, so good for me but I upgraded to logstash
> 1.3.2 even though t
You can wire-in a custom UpdateRequestProcessor -
http://wiki.apache.org/solr/UpdateRequestProcessor
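Once your chain is registered in solrconfig.xml you can target it per request;
a quick sketch where the chain name my-custom-chain is just a placeholder:

curl 'http://localhost:8983/solr/collection1/update?update.chain=my-custom-chain&commit=true' \
  -H 'Content-Type: application/json' \
  -d '[{"id":"doc1"}]'
# update.chain selects which processor chain handles this update request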
Timothy Potter
Sr. Software Engineer, LucidWorks
www.lucidworks.com
From: elmerfudd
Sent: Monday, December 30, 2013 10:26 AM
To: solr-user@lucene.apache.
Absolutely, adding replicas helps you scale query load. Queries do not need to
be routed to leaders; they can be handled by any replica in a shard. Leaders
are only needed for handling update requests.
In general, a distributed query has two phases, driven by a controller node
(what you called c
Hi Svante,
It seems like the TermVectorComponent is in the search component chain of your
/select search handler but you haven't indexed docs with term vectors enabled
(at least from what's in the schema you provided). Admittedly, the NamedList
code could be a little more paranoid but I think t
Hi Josh,
Try adding: -XX:+CMSPermGenSweepingEnabled as I think for some VM versions,
permgen collection was disabled by default.
Also, I use: -XX:MaxPermSize=512m -XX:PermSize=256m with Solr, so 64M may be
too small.
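i.e. something like this on the startup command line (stock Jetty start.jar
assumed; these flags apply when running the CMS collector):

java -XX:+CMSPermGenSweepingEnabled -XX:MaxPermSize=512m -XX:PermSize=256m \
     -jar start.jar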
Timothy Potter
Sr. Software Engineer, LucidWorks
www.lucidworks.com
___
Unfortunately, there is no out-of-the-box solution for this at the moment.
In the past, I solved this using a couple of different approaches, which
weren't all that elegant but served the purpose and were simple enough to allow
the ops folks to set up monitors and alerts if things didn't work.
Just my 2 cents on this while I wait for a build ... I think we have to ensure
that an older client will work with a newer server, or a newer client with an
older server, to support hot rolling upgrades. It's not unheard of these
days for an org to have tens (or even hundreds) of SolrCloud serv
Try adding shards.info=true and debug=track to your queries ... these will
give more detailed information about what's going on behind the scenes.
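For example (host and collection are placeholders):

curl 'http://localhost:8983/solr/collection1/select?q=*:*&shards.info=true&debug=track'
# shards.info reports per-shard hit counts and response times;
# debug=track adds a breakdown of the distributed request stages per shard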
On Mon, Oct 13, 2014 at 11:11 PM, S.L wrote:
> Erick,
>
> I have upgraded to SolrCloud 4.10.1 with the same topology, 3 shards and 2
> replication facto
jfyi - the bin/solr script does the following:
-XX:OnOutOfMemoryError="$SOLR_TIP/bin/oom_solr.sh $SOLR_PORT" where
$SOLR_PORT is the port Solr is bound to, e.g. 8983
The oom_solr.sh script looks like:
SOLR_PORT=$1
# grab the PID of the Solr (start.jar) process bound to $SOLR_PORT
SOLR_PID=`ps waux | grep start.jar | grep $SOLR_PORT | grep -v grep | awk '{print $2}'`
kill -9 $SOLR_PID
A couple of things to check:
1) How many znodes are under the /overseer/queue (which you can see in the
Cloud Tree panel in the Admin UI, or with the zkCli sketch below)
2) How often are you committing? The general advice is that your indexing
client(s) should not send commits and instead rely on auto-commit settings
in solrconf
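For 1), a rough way to eyeball it from the command line is ZooKeeper's own
zkCli.sh (the ZK address is a placeholder):

/path/to/zookeeper/bin/zkCli.sh -server localhost:2181 ls /overseer/queue
# a very long list of child znodes usually means the Overseer has fallen
# behind processing state updates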
restarts and
> > troubleshooting manually
> >
> >
> > e.g. something like the following for a java_error.sh will drop an email
> > with a timestamp
> >
> >
> >
> > echo `date` | mail -s "Java Error: General - $HOSTNAME"
> not...@domain.com
> >