The config is stored in ZooKeeper.
/configs/myconf/velocity/pagination_bottom.vm is a ZooKeeper path, not a
filesystem path. The data on disk is in ZK's binary format. Solr uses the ZK
client library to talk to the embedded server and read config data.
On May 16, 2014, at 2:47 AM, Aman Tandon
Doing this doesn't avoid the need to configure and administer ZK. Running a
special snowflake setup to avoid downloading a tar.gz doesn't seem like a good
trade-off to me.
On May 15, 2014, at 3:27 PM, Upayavira wrote:
> Hi,
>
> I need to set up a zookeeper ensemble. I could download Zookeep
Solr can add the filter for you:
timestamp:[* TO NOW-30SECOND]
Increasing soft commit frequency isn't a bad idea, though. I'd probably do
both. :)
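The effect of that date-math filter can be sketched with a toy model (Python, purely illustrative — not Solr code): a document matches only if its timestamp is at least 30 seconds in the past, so very recent documents stay hidden until they've had time to be soft-committed.

```python
from datetime import datetime, timedelta, timezone

def matches_filter(doc_timestamp, now=None):
    """Toy model of fq=timestamp:[* TO NOW-30SECOND]: the doc matches
    only if its timestamp is at least 30 seconds old."""
    now = now or datetime.now(timezone.utc)
    return doc_timestamp <= now - timedelta(seconds=30)

now = datetime(2014, 5, 24, 12, 0, 0, tzinfo=timezone.utc)
print(matches_filter(now - timedelta(minutes=5), now))   # True: visible
print(matches_filter(now - timedelta(seconds=5), now))   # False: hidden
```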
On May 23, 2014, at 6:51 PM, Michael Tracey wrote:
> Hey all,
>
> I've got a number of nodes (Solr 4.4 Cloud) that I'm balanc
ZooKeeper allows clients to put watches on paths in the ZK tree. When the
cluster state changes, every Solr client is notified by the ZK server and then
each client reads the updated state. No polling is needed or even helpful.
In any event, reading from ZK is much more lightweight than writing,
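The watch mechanism can be sketched with a toy in-memory model (Python, purely illustrative — not the real ZK client API): a read optionally registers a one-shot callback on a path, and a write fires the registered callbacks, so clients re-read only when notified instead of polling.

```python
class ToyZk:
    """Toy model of ZooKeeper watches. Watches are one-shot: firing a
    watch removes it, so the client re-registers on its next read."""
    def __init__(self):
        self.data = {}
        self.watches = {}

    def get(self, path, watch=None):
        if watch is not None:
            self.watches.setdefault(path, []).append(watch)
        return self.data.get(path)

    def set(self, path, value):
        self.data[path] = value
        for cb in self.watches.pop(path, []):  # one-shot delivery
            cb(path)

zk = ToyZk()
seen = []

def on_change(path):
    # A Solr node would re-read cluster state here, re-registering the watch.
    seen.append(zk.get(path, watch=on_change))

zk.set("/clusterstate.json", "v1")
zk.get("/clusterstate.json", watch=on_change)  # read + register watch
zk.set("/clusterstate.json", "v2")             # server notifies the client
print(seen)  # ['v2']
```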
Dedicated machines are a good idea. The main thing is to make sure that ZK
always has IOPS available for transaction log writes. That's easy to ensure
when each ZK instance has its own hardware. The standard practice, as far as I
know, is to have 3 physical boxes spread among racks/datacenters/c
Three fields: AllChamp_ar, AllChamp_fr, AllChamp_en. Then query them with
dismax.
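For example, the request parameters could look like this (a sketch that just builds the parameter map; the field names are the ones from the thread):

```python
def build_dismax_params(user_query):
    # Query all three per-language copies of the catch-all field at once.
    return {
        "q": user_query,
        "defType": "dismax",
        "qf": "AllChamp_ar AllChamp_fr AllChamp_en",
    }

params = build_dismax_params("hello")
print(params["qf"])  # AllChamp_ar AllChamp_fr AllChamp_en
```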
On Jun 30, 2014, at 11:53 AM, benjelloun wrote:
> here is my schema:
>
> required="false" stored="false"/>
> required="false" multiValued="true"/>
>
> required="false" multiValued="true"/>
>
> required="fa
Seconding this. Solr works fine on Jetty. Solr also works fine on Tomcat. The
Solr community largely uses Jetty, so most of the resources on the Web are for
running Solr on Jetty, but if you have a reason to use Tomcat and know what
you're doing then Tomcat is a fine choice.
On Jun 30, 2014, at
Stored doesn't mean "stored to disk", more like "stored verbatim". When you
index a field, Solr analyzes the field value and makes it part of the index.
The index is persisted to disk when you commit, which is why it sticks around
after a restart. Searching the index, mapping from search terms t
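The indexed/stored distinction can be sketched with a toy model (Python, purely illustrative — real analysis is far richer): analysis feeds the inverted index that searching uses, while the stored value is kept verbatim for returning in results.

```python
def add_doc(index, stored, doc_id, field_value):
    # "Indexed": analyze (here: lowercase + whitespace tokenize) and record
    # each term -> doc posting in the inverted index.
    for term in field_value.lower().split():
        index.setdefault(term, set()).add(doc_id)
    # "Stored": keep the original value verbatim for retrieval.
    stored[doc_id] = field_value

index, stored = {}, {}
add_doc(index, stored, 1, "Quick Brown Fox")
print(index["quick"])  # {1} -- found via analyzed terms
print(stored[1])       # Quick Brown Fox -- returned verbatim
```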
Sure sounds like a socket bug, doesn't it? I turn to tcpdump when Solr starts
behaving strangely in a socket-related way. Knowing exactly what's happening at
the transport level is worth a month of guessing and poking.
On Jul 8, 2014, at 3:53 AM, Harald Kirsch wrote:
> Hi all,
>
> This is wha
Atomic updates fetch the doc with RealTimeGet, apply the updates to the fetched
doc, then reindex. Whether you use atomic updates or send the entire doc to
Solr, it has to deleteById then add. The perf difference between atomic and
"normal" updates is likely minimal.
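The server-side merge step can be sketched like this (Python, a deliberate simplification of what happens after the RealTimeGet fetch; only the set and add operations are shown, and the field names are made up):

```python
def apply_atomic_update(existing, update):
    """Merge an atomic update into the fetched doc; Solr would then
    reindex the merged doc as a whole (deleteById + add)."""
    doc = dict(existing)
    for field, op in update.items():
        if isinstance(op, dict) and "set" in op:
            doc[field] = op["set"]                       # replace value
        elif isinstance(op, dict) and "add" in op:
            doc.setdefault(field, []).append(op["add"])  # multiValued append
        else:
            doc[field] = op                              # plain value
    return doc

existing = {"employeeId": "05991", "office": "Bridgewater", "salary": 100}
update = {"office": {"set": "Walla Walla"}}
merged = apply_atomic_update(existing, update)
print(merged)  # salary survives: untouched fields come from the fetched doc
```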
Atomic updates
Take a look at this update XML:
<add>
  <doc>
    <field name="employeeId">05991</field>
    <field name="name" update="set">Steve McKay</field>
    <field name="office" update="set">Walla Walla</field>
    <field name="skills" update="add">Python</field>
  </doc>
</add>
Let's say employeeId is the key. If there's a fourth field, salary, on the
existing doc, should it be deleted or retained? With this update it will
obviously be deleted:
<add>
  <doc>
    <field name="employeeId">05991</field>
    <field name="name">S
document" apply to the client side only? On
> the server side the add means that the entire document is re-indexed, right?
>
> Bill
>
>
> On Tue, Jul 8, 2014 at 7:32 PM, Steve McKay wrote:
>
>> Take a look at this update XML:
>>
>>
>>
&g
BTW, Ameya, jhighlight-1.0.jar is in the Solr binary distribution, in
contrib/extraction/lib. There are a bunch of different libraries that
Tika uses for content extraction, so this seems like a good time to make
sure that Tika has all the jars available that it might need to process
the files you'
Perhaps the requirement means a total of 10 years of experience spread across
Solr, HTML, XML, Java, Tomcat, JBoss, and MySQL. This doesn't seem likely, but
it is satisfiable, so if we proceed on the assumption that a job posting
doesn't contain unsatisfiable requirements then it's more reasonab
Solr is complaining about receiving a malformed HTTP request. What
happens when you send a correctly-formed multipart/form-data request?
Also, is there anything you can add about the circumstances? Who's
sending the requests that fail, is there any correlation between
requests that fail, how of
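For a known-good request to compare against, here's a hand-rolled multipart/form-data body (Python sketch; the field name, filename, and payload are made up):

```python
import uuid

def build_multipart(field_name, filename, payload):
    # Minimal multipart/form-data body: one file part, CRLF line endings,
    # and a closing boundary with the trailing "--".
    boundary = uuid.uuid4().hex
    head = (
        "--" + boundary + "\r\n"
        'Content-Disposition: form-data; name="' + field_name + '"; '
        'filename="' + filename + '"\r\n'
        "Content-Type: application/octet-stream\r\n"
        "\r\n"
    ).encode("ascii")
    tail = ("\r\n--" + boundary + "--\r\n").encode("ascii")
    headers = {"Content-Type": "multipart/form-data; boundary=" + boundary}
    return headers, head + payload + tail

headers, body = build_multipart("file", "doc.pdf", b"%PDF-1.4 ...")
# POST this body with these headers and see whether Solr still complains.
```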
> -----Original Message-----
> From: Esteban Donato [mailto:esteban.don...@gmail.com]
> Sent: Monday, September 26, 2011 2:08 PM
> To: solr-user@lucene.apache.org
> Subject: aggregate functions in Solr?
>
> Hello guys,
>
> I need to implement a functionality which requires something similar
> t