because
they are not used anyway)
* java $SOLR_ZK_CREDS_AND_ACLS -Dsolr.solr.home=$SOLR_HOME/server/solr \
    -Dsolr.data.dir=$SOLR_HOME/server/solr/gettingstarted_shard1_replica1 \
    -Dsolr.log=$SOLR_HOME/server/solr/logs -DzkHost=localhost:2181/solr \
    -Djetty.port=8983 -jar start.jar
Voila!
R
in the
high-end of #replica and #docs, I guess
Regards, Per Steffensen
On 24/03/15 14:02, Ian Rose wrote:
Hi all -
I'm sure this topic has been covered before but I was unable to find any
clear references online or in the mailing list.
Are there any rules of thumb for how many cores (aka
On 25/03/15 15:03, Ian Rose wrote:
Per - Wow, 1 trillion documents stored is pretty impressive. One
clarification: when you say that you have 2 replica per collection on each
machine, what exactly does that mean? Do you mean that each collection is
sharded into 50 shards, divided evenly over al
I think it is very likely that it is due to Solr-nodes losing
ZK-connections (after timeout). We have experienced that a lot. One
thing you want to do is to make sure your ZK-servers do not run on
the same machines as your Solr-nodes - that helped us a lot.
On 24/03/15 13:57, Gopal Jee wrot
On 23/03/15 20:05, Erick Erickson wrote:
you don't run a SQL engine from a servlet
container, why should you run Solr that way?
https://twitter.com/steff1193/status/580491034175660032
https://issues.apache.org/jira/browse/SOLR-7236?focusedCommentId=14383624&page=com.atlassian.jira.plugin.system.
two admins know a part each, so that they have
to both agree in order to operate as root. Be creative yourself.
Regards, Per Steffensen
On 13/04/15 12:13, Suresh Vanasekaran wrote:
Hi,
We are having the solr index maintained in a central server and multiple users
might be able to access the
That said, it might be nice with a wiki-page (or something) explaining
how it can be done, including maybe concrete cases about exactly how it
has been done on different installations around the world using Solr
On 14/04/15 14:03, Per Steffensen wrote:
Hi
I might misunderstand you, but if
opposite) that Solrs, upon graceful shutdown, OUGHT TO do a
(hard) commit, leaving tlogs empty (= nothing to replay when starting
again)?
Regards, Per Steffensen
Sorry, I did not see the responses here because I found out myself. It
definitely seems like a hard commit is performed when shutting down
gracefully. The info I got from production was wrong.
It is not necessarily obvious that you will lose data on "kill -9". The
tlog ought to save you, but it
Solr query, but the number of documents fulfilling it will
(potentially) be huge.
Regards, Per Steffensen
oc-value) to filter out
the ones among the 500-1000 that do not match the timestamp-part of
the query.
But what does Solr/Lucene actually do? Is it Solr- or Lucene-code that
makes the decision on what to do? Can you somehow "hint" the
search-engine that you want one or the other method used?
Solr 4.4 (and corresponding Lucene), BTW, if that makes a difference
Regards, Per Steffensen
00-1000 docs and
"timestamp_dlng_doc_ind_sto" hit about 3-4 billion.
Regards, Per Steffensen
On 19/05/14 13:33, Per Steffensen wrote:
Hi
Lets say I have a Solr collection (running across several servers)
containing 5 billion documents. A.o. each document have
ing to do a facet search etc. Well, here is the full story:
http://solrlucene.blogspot.dk/2014/05/performance-of-and-queries-with-uneven.html
Regards, Per Steffensen
On 23/05/14 17:37, Toke Eskildsen wrote:
Per Steffensen [st...@designware.dk] wrote:
* It IS more efficient to just use the index for the
"n
lrlucene.blogspot.dk/2014/05/performance-of-and-queries-with-uneven.html.
Hope you do not mind that I reference you and the link you pointed out.
Thanks a lot!
Regards, Per Steffensen
On 23/05/14 18:13, Yonik Seeley wrote:
On Fri, May 23, 2014 at 11:37 AM, Toke Eskildsen
wrote:
Well, the only "search" I did was ask this question on this
mailing-list :-)
On 26/05/14 17:05, Alexandre Rafalovitch wrote:
Did not follow the whole story but " post-query-value-filter" does exist in
Solr. Have you tried searching for pretty much that expression? And maybe
something about cos
easily achieved much faster than the 1-1 on
collection-level.
Any input is very much appreciated! Thanks
Regards, Per Steffensen
On 23/07/14 17:13, Erick Erickson wrote:
Per:
Given that you said that the field redefinition also includes routing
info
Exactly. It would probably be much faster to make sure that the new
collection has the same number of shards on each Solr-machine and that
the routing-ranges are identi
ust out of curiosity
* Will I have the same OOM problem using the CURSOR-feature in later Solrs?
* Will the "poor mans" cursor approach still be efficient if my
uniqueKey was DocValued, knowing that all values for uniqueKey (the
DocValue file) cannot fit in memory (OS file cache)?
Reg
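The "poor man's cursor" referred to above can be sketched like this: instead of deep paging with start=N (which forces the engine to sort N+rows documents), sort by uniqueKey and fetch each next page with a filter uniqueKey > lastSeen. The in-memory list here is just a stand-in for the actual Solr query:

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of "poor man's cursor" paging: fetch the next page by filtering on
// uniqueKey greater than the last id seen, with results sorted by uniqueKey.
// The sorted list stands in for a Solr query; names are illustrative only.
public class PoorMansCursor {
    // One "page": ids greater than lastSeen, in sort order, limited to rows.
    static List<String> nextPage(List<String> sortedIds, String lastSeen, int rows) {
        return sortedIds.stream()
                .filter(id -> lastSeen == null || id.compareTo(lastSeen) > 0)
                .limit(rows)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> ids = List.of("a", "b", "c", "d", "e");
        List<String> page1 = nextPage(ids, null, 2);
        List<String> page2 = nextPage(ids, page1.get(page1.size() - 1), 2);
        assert page1.equals(List.of("a", "b"));
        assert page2.equals(List.of("c", "d"));
    }
}
```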
Hi
Where can I find documentation on how to use Bloom filters in Solr
(4.4). http://wiki.apache.org/solr/BloomIndexComponent seems to be
outdated - there is no BloomIndexComponent included in 4.4 code.
Regards, Per Steffensen
still very much appreciated.
Regards, Per Steffensen
On 28/07/14 15:42, Lukas Drbal wrote:
Hi Per,
link to jira - https://issues.apache.org/jira/browse/SOLR-1375 Unresolved
;-)
L.
On Mon, Jul 28, 2014 at 1:17 PM, Per Steffensen wrote:
Hi
Where can I find documentation on how to use Bloom
nsactionlog and the
actual index (UpdateLog). We would like to use Bloom Filter to quickly
tell that a document with a particular id is NOT present.
Regards,
Jim
Regards, Per Steffensen
he bloom filter depends on how frequently you can live with
false positives (where you have to actually look it up in the real index).
Regards, Per Steffensen
On 30/07/14 10:05, Shalin Shekhar Mangar wrote:
Hi Per,
There's LUCENE-5675 which has added a new postings format for IDs. Trying
it
t does not already exist when we do this
duplicate check (using the unique-id feature), but it just takes
relatively long time to verify it, because you have to visit the index.
We can get a quick "document with this id does not exist" using
bloom-filter on id.
Regards, Per Steffensen
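The quick "document with this id does not exist" check described above can be sketched with a tiny Bloom filter. This is not Solr or Lucene code and the hashing scheme is arbitrary; it only illustrates the principle: a negative answer is definitive, a positive answer means "maybe, go check the real index":

```java
import java.util.BitSet;

// Toy Bloom filter over document ids. Answers "definitely absent" (skip the
// expensive index lookup) or "maybe present" (fall through to the index).
public class IdBloomFilter {
    private final BitSet bits;
    private final int size;
    private final int hashes;

    public IdBloomFilter(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    // Derive k bit positions from the id's hashCode via double hashing.
    private int position(String id, int i) {
        int h1 = id.hashCode();
        int h2 = (h1 >>> 16) | 1;            // odd second hash
        return Math.abs((h1 + i * h2) % size);
    }

    public void add(String id) {
        for (int i = 0; i < hashes; i++) bits.set(position(id, i));
    }

    // false => id definitely NOT indexed; true => maybe, verify in the index.
    public boolean mightContain(String id) {
        for (int i = 0; i < hashes; i++)
            if (!bits.get(position(id, i))) return false;
        return true;
    }

    public static void main(String[] args) {
        IdBloomFilter f = new IdBloomFilter(1 << 16, 3);
        f.add("doc-1");
        assert f.mightContain("doc-1");   // no false negatives, ever
    }
}
```

As the thread notes, the bit-array size and hash count control the false-positive rate, i.e. how often you still pay for a real index lookup.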
On
... Assume you are running
"cloud"-mode and that the shards belong to the same collection?
Any custom routing?
Regards, Per Steffensen
On 11/22/13 8:32 PM, Mingfeng Yang wrote:
BTW: it's a 4-shard SolrCloud cluster using ZooKeeper 3.3.5
On Fri, Nov 22, 2013 at 11:07 AM, Ming
If you are upgrading from SolrCloud 4.x to a later version 4.y, and
basically want your end-system to seem as if it had been running 4.y (no
legacy mode or anything) all along, you might find some inspiration here
http://solrlucene.blogspot.dk/2014/01/upgrading-from-solrcloud-4x-to-4y-as-if.htm
mean that I
have to configure it somewhere else?
Thanks!
Regards, Per Steffensen
completed in almost the same time as before, so it is not
a big issue for us.
Regards, Per Steffensen
On 1/23/14 6:09 PM, Mark Miller wrote:
Yeah, I think we removed support in the new solr.xml format. It should still
work with the old format.
If you have a good use case for it, I don’t know
thing
that happened today. Any idea about what might be wrong? A solution?
Help to debug?
Regards Per Steffensen
--- console when running "ant eclipse" -
...
resolve:
[echo] Building solr-example-DIH...
ivy-availability-check:
[echo] Building solr-
org/maven2/ in our Artifactory.
Well never mind - it works for me now.
Thanks for the help!
Regards, Per Steffensen
On 1/30/14 1:11 PM, Steve Rowe wrote:
Hi Per,
You may be seeing the stale-Ivy-lock problem (see IVY-1388). LUCENE-4636
upgraded the bootstrapped Ivy to 2.3.0 to reduce the lik
requests. CloudSolrServer uses LBHttpSolrServer behind
the scenes. If you use CloudSolrServer as a client everything should be
smooth and transparent with respect to querying when servers are down.
CloudSolrServer will find out where to (and not to) route your requests.
Regards, Per Steffensen
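The routing behaviour described above can be sketched as a round-robin balancer over a live-server set. This is a toy illustration of the idea, not SolrJ code; the real CloudSolrServer/LBHttpSolrServer watch ZooKeeper's cluster state instead of a hand-maintained set:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Toy sketch of the load-balancing idea behind LBHttpSolrServer: route each
// request to the next live node, so callers never need to know which
// Server:Port is down. Illustrative only.
public class ToyLoadBalancer {
    private final List<String> servers;
    private final Set<String> dead = new LinkedHashSet<>();
    private int next = 0;

    public ToyLoadBalancer(List<String> servers) {
        this.servers = new ArrayList<>(servers);
    }

    public void markDead(String server)  { dead.add(server); }
    public void markAlive(String server) { dead.remove(server); }

    // Pick the next live server round-robin; fail only if every node is down.
    public String pick() {
        for (int i = 0; i < servers.size(); i++) {
            String s = servers.get(next++ % servers.size());
            if (!dead.contains(s)) return s;
        }
        throw new IllegalStateException("no live servers");
    }

    public static void main(String[] args) {
        ToyLoadBalancer lb = new ToyLoadBalancer(List.of("host1:8983", "host2:8983"));
        lb.markDead("host1:8983");
        assert lb.pick().equals("host2:8983");   // dead node is skipped
    }
}
```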
On 19/02/14 07:57, Vineet Mishra wrote:
Thanks for all your responses, but my doubt is which *Server:Port* the
query should be made to, as we don't know which server crashed or which
server might crash in the future (as any server can go down).
That is what CloudSolrServer will deal with for you. It knows
ome other language (if the reason you do not want to use
CloudSolrServer, is that your client is not java). Else you need to do
other clever stuff, like e.g. what Shalin suggests.
Regards, Per Steffensen
, but I do not know, because we do not use replication.
I might be able to find something for you. Which version are you using -
I have some scripts that work on 4.0 and some other scripts that work
for 4.4 (and maybe later).
Regards, Per Steffensen
On 28/02/14 16:17, Jan Van Besien wrote:
Hi
Hi
Is there a way I can configure Solr so that it handles its shards
completely in memory? If yes, how? No writing to disk - neither
transactionlog nor lucene indices. Of course I accept that data is lost
if Solr crashes or is shut down.
Regards, Per Steffensen
On 8/7/13 9:04 AM, Shawn Heisey wrote:
On 8/7/2013 12:13 AM, Per Steffensen wrote:
Is there a way I can configure Solr so that it handles its shards
completely in memory? If yes, how? No writing to disk - neither
transactionlog nor lucene indices. Of course I accept that data is lost
if the
certain period of time, and also, for each
distinct "a", have the limited set of distinct "b"-values returned? I
guess this will require grouping/faceting on multiple fields, but can you do
that? Other suggestions on how to achieve this?
Regards, Per Steffensen
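The result shape asked for above - for each distinct "a", a limited set of distinct "b" values - can be illustrated client-side in plain Java. In Solr this would map to grouping or pivot faceting; the code below only shows the desired outcome, not how Solr computes it:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Client-side sketch: for each distinct "a", collect up to maxB distinct "b"
// values. Each doc is modeled as {a, b}. Illustrative only.
public class DistinctAB {
    static Map<String, List<String>> limitedBPerA(List<String[]> docs, int maxB) {
        Map<String, List<String>> out = new LinkedHashMap<>();
        for (String[] doc : docs) {               // doc[0] = a, doc[1] = b
            List<String> bs = out.computeIfAbsent(doc[0], k -> new ArrayList<>());
            if (!bs.contains(doc[1]) && bs.size() < maxB) bs.add(doc[1]);
        }
        return out;
    }

    public static void main(String[] args) {
        List<String[]> docs = List.of(
            new String[]{"a1", "b1"}, new String[]{"a1", "b2"},
            new String[]{"a1", "b3"}, new String[]{"a2", "b1"});
        Map<String, List<String>> r = limitedBPerA(docs, 2);
        assert r.get("a1").equals(List.of("b1", "b2"));  // capped at 2 distinct b's
        assert r.get("a2").equals(List.of("b1"));
    }
}
```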
der to make it possible? You do not have to hand me the solution, but
a few comments on how easy/hard it would be, and ideas on how to attack
the challenge would be nice.
Thanks!
Regards, Per Steffensen
of disabling the FieldCache (taking the performance penalty of
course) or make it behave in a nicer way where it only uses up to e.g.
80% of the memory available to the JVM? Or other suggestions?
Regards, Per Steffensen
point to
documentation where I will be able to read that I am wrong. Thanks!
Regards, Per Steffensen
On 9/11/13 1:38 PM, Erick Erickson wrote:
I don't know any more than Michael, but I'd _love_ some reports from the
field.
There are some restrictions on DocValues though, I believe one
Thanks, guys. Now I know a little more about DocValues and realize that
they will do the job wrt FieldCache.
Regards, Per Steffensen
On 9/12/13 3:11 AM, Otis Gospodnetic wrote:
Per, check zee Wiki, there is a page describing docvalues. We used them
successfully in a solr for analytics
level, while still making sure that searches will perform fairly
well when data-amounts become big? (guess without merging you will end
up with lots and lots of "small" files, and I guess this is not good for
search response-time)
Regards, Per Steffensen
Maybe the fact that we are never ever going to delete or update
documents, can be used for something. If we delete we will delete entire
collections.
Regards, Per Steffensen
On 9/12/13 8:25 AM, Per Steffensen wrote:
Hi
SolrCloud 4.0: 6 machines, quadcore, 8GB ram, 1T disk, one Solr-node
on
Seems like the attachments didn't make it through to this mailing list
https://dl.dropboxusercontent.com/u/25718039/doccount.png
https://dl.dropboxusercontent.com/u/25718039/iowait.png
On 9/12/13 8:25 AM, Per Steffensen wrote:
Hi
SolrCloud 4.0: 6 machines, quadcore, 8GB ram, 1T disk, one Solr
ing out of swap space"-problems.
Regards, Per Steffensen
On 9/12/13 12:48 PM, Erick Erickson wrote:
Per:
One thing I'll be curious about. From my reading of DocValues, it uses
little or no heap. But it _will_ use memory from the OS if I followed
Simon's slides correctly. So I
On 9/12/13 3:28 PM, Toke Eskildsen wrote:
On Thu, 2013-09-12 at 14:48 +0200, Per Steffensen wrote:
Actually, some months back I made a PoC of a FieldCache that could expand
beyond the heap. Basically imagine a FieldCache with room for
"unlimited" data-arrays, that just behind the scen
On 9/12/13 4:26 PM, Shawn Heisey wrote:
On 9/12/2013 2:14 AM, Per Steffensen wrote:
Starting from an empty collection. Things are fine wrt
storing/indexing speed for the first two-three hours (100M docs per
hour), then speed goes down dramatically, to an, for us, unacceptable
level (max 10M per
IndexWriter.updateDocument
but not with IndexWriter.addDocument?
Regards, Per Steffensen
On 9/12/13 10:14 AM, Per Steffensen wrote:
Seems like the attachments didn't make it through to this mailing list
https://dl.dropboxusercontent.com/u/25718039/doccount.png
https://dl.dropboxusercontent.com/u/25718039/iowait.png
he penalty of more disk-IO as soon as the entire thing does not
fit in memory, but I would rather accept that than accept OOM's.
Regards, Per Steffensen
On 3/21/13 9:48 AM, Toke Eskildsen wrote:
On Thu, 2013-03-21 at 09:13 +0100, Per Steffensen wrote:
We have a lot of docs in Solr. Each particular Solr-node handles a lot
of docs distributed among several replica. When you issue a sort query,
it seems to me that the value of the sort-field of
On 3/21/13 10:52 AM, Toke Eskildsen wrote:
On Thu, 2013-03-21 at 09:57 +0100, Per Steffensen wrote:
Thanks Toke! Can you please elaborate a little bit? How to use it? What
it is supposed to do for you?
Sorry, no, I only know about it on the abstract level. The release notes
for Solr 4.2 says
On 3/21/13 10:50 PM, Shawn Heisey wrote:
On 3/21/2013 4:05 AM, Per Steffensen wrote:
Can anyone else elaborate? How to "activate" it? How to make sure, for
sorting, that sort-field-value for all docs are not read into memory for
sorting - leading to OOM when you have a lot of docs
is a "real" OOM indicating no more space on java heap, but is more
an exception saying that OS has no more memory (in some interpretation
of that).
Regards, Per Steffensen
On 4/2/13 11:32 AM, Arkadi Colson wrote:
It is running as root:
root@solr01-dcg:~# ps aux | grep tom
root
... data.response.numFound ...
}
)
Go figure how to extract it in JavaScript without jQuery
Regards, Per Steffensen
On 4/5/13 3:20 PM, Alexandre Rafalovitch wrote:
I'd add rows=0, just to avoid the actual records serialization if size is
all that matters.
Rega
On 4/10/13 12:17 PM, Per Steffensen wrote:
"number of documents found" can be found in a field called "numFound"
in the response.
If you do use SolrJ you will likely have a QueryResponse qr and can
just do a qr.setNumFound().
qr.getResults().getNumFound() :-)
If you
some kind of
explanation but a reply to this mailing-list posting will also do.
Thanks!
Regards, Per Steffensen
ut) work going on on branch "solrcloud" getting us from
http://wiki.apache.org/solr/SolrCloud to
http://wiki.apache.org/solr/NewSolrCloudDesign.
Regards, Per Steffensen
Tomás Fernández Löbbe skrev:
Hi, AFAIK SolrCloud still doesn't support replication, that's why in the
example you have to copy the directory manually. Replication has to be
implemented by using the SolrReplication as you mentioned or use some kind
of distributed indexing (you'll have to do it you
c property between master and slaves
(primary and replica if you like). What is the easiest way? Will I just
have to make sure myself that indexing goes on directly on all "replica"
of a shard, and then drop using the replication explained on
http://wiki.apache.org/solr/SolrReplication?
Regards, Per Steffensen
Thanks for answering
Mark Miller skrev:
Guess that is the whole point. Guess that I do not have to replicate
configuration files, since SolrCloud (AFAIK) does not use local
configuration files but information in ZK. And then it gets a little hard to
guess how to do it, since the explanation o
Yonik Seeley skrev:
On Mon, Dec 5, 2011 at 6:23 AM, Per Steffensen wrote:
Will it be possible to maintain a how-to-use section on
http://wiki.apache.org/solr/NewSolrCloudDesign with examples, e.g. like to ones
on http://wiki.apache.org/solr/SolrCloud,
Yep, it was on my near-term
______
From: Per Steffensen
To: solr-user@lucene.apache.org
Sent: Monday, December 5, 2011 6:23 AM
Subject: Continuous update on progress of "New SolrCloud Design" work
Hi
My guess is that the work for achieving http://wiki.apache.org/solr/NewSolrCloudDesign has begun on b
that different threads
can add documents to the same solr, and when one of them says "commit"
it is only the documents added by this thread that gets committed? Or is
it always "all documents added by any thread since last commit" that
gets committed?
Regards, Per Steffensen
Regards, Per Steffensen
Sushil jain skrev:
Hello Everyone,
I am a Solr beginner.
I just want to know if versioning of data is possible in Solr, if yes then
please share the procedure.
Thanks & Regards,
Sushil Jain
ormation asked for
above will also help others to help you. I will try to remember though.
Regards, Per Steffensen
adm1n skrev:
Hello,
I'm running solrcloud with 2 shards.
Let's assume I've 100 documents indexed in total, which are divided 55/45 by
the shards...
when I query, for e
In general you probably want to add a parameter "distrib=true" to your
search requests.
adm1n wrote:
I have 1 collection called index.
I created it like explained here: http://wiki.apache.org/solr/SolrCloud in
Example A: Simple two shard cluster section
here are the start up commands:
1)java -
It doesn't sound exactly like a problem we experienced some time ago,
where long requests were mixed up during transport. Jetty was to blame.
It might be Jetty that messes up your requests too? SOLR-4031. Are you
still running 8.1.2?
Regards, Per Steffensen
Markus Jelsma skrev:
Hi,
We
Billy Newman skrev:
I have deployed the solr.war to my application server. On deploy I
can see the solr server and my core "general" start up.
I have a timer that fires every so often to go out and 'crawl' some
services and index into Solr. I am using Solrj in my application and
I am having tr
Per Steffensen skrev:
Billy Newman skrev:
I have deployed the solr.war to my application server. On deploy I
can see the solr server and my core "general" start up.
I have a timer that fires every so often to go out and 'crawl' some
services and index into Solr. I
compile CloudSolrServer to javascript (I would imagine it
will be hard to make it work though)
Regards, Per Steffensen
Luis Cappa Banda skrev:
Hello!
I've always used Java as the backend language to program search modules,
and I know that CloudSolrServer implementation is the way to int
Luis Cappa Banda skrev:
Do you know if SolrCloud replica shards have 100% the same data as the
leader ones every time? Probably wen synchronizing with leaders there
exists a delay, so executing queries to replicas won't be a good idea.
As long as the replica is in state "active" it will be 100
Luis Cappa Banda skrev:
Thanks a lot, Per. Now I understand the whole scenario. One last question:
I've been searching trying to find some kind of request handler that
retrieves cluster status information, but no luck. I know that there exists
a JSON called clusterstate.json, but I don't know the
hopefully we
will succeed in collaboration with the rest of the Solr community, and
hopefully Solr Cloud replication will be production ready within the
next half year.
Regards, Per Steffensen
On 12/18/12 3:28 PM, Otis Gospodnetic wrote:
Hi,
If you are not in a rush, I'd wait for
each Solr node in the cluster. We still do
not know how the system will behave when we have and cross-search many
(up to 24 since we are supposed to keep data for 2 years before we can
throw it away) collections with 1+ billion documents each.
Regards, Per Steffensen
On 12/18/12 8:20 PM, Scott
are searchable before "configured auto-commit time-period" has
passed since you indexed your last document.
Regards, Per Steffensen
On 12/20/12 6:37 PM, Lili wrote:
Mark, yes, they have unique ids. Most the time, after the 2nd json http
post, query will return complete results.
I b
any way to
cross-search X slices across many collections, than it is to
cross-search X slices under the same collection.
Besides that see my answer for topic "Will SolrCloud always slice by ID
hash?" a few days back.
Regards, Per Steffensen
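The slicing-by-ID-hash idea mentioned above can be sketched as follows: each document id is hashed into a 32-bit space that is split into equal ranges, one per shard. Real SolrCloud uses a MurmurHash-based compositeId router; this toy version just uses String.hashCode to show the principle:

```java
// Toy version of routing by id hash: the 32-bit hash space is split into
// equal ranges, one per shard. Illustrative only -- not Solr's actual router.
public class ToyRouter {
    // Map an id to a shard number in [0, numShards).
    static int shardFor(String id, int numShards) {
        long h = id.hashCode() & 0xffffffffL;      // unsigned 32-bit hash
        long rangeSize = (1L << 32) / numShards;   // equal hash ranges
        return (int) Math.min(h / rangeSize, numShards - 1);
    }

    public static void main(String[] args) {
        int s = ToyRouter.shardFor("doc-1", 4);
        assert s >= 0 && s < 4;                    // always a valid shard
        assert s == ToyRouter.shardFor("doc-1", 4); // same id, same shard
    }
}
```

The point of hash routing is exactly the determinism the assertion shows: any node can compute, from the id alone, which shard holds (or should hold) a document.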
On 12/24/12 1:07 AM, Erick Erickson wrote:
On 1/1/13 2:07 PM, hupadhyay wrote:
I was reading a solr wiki located at
http://wiki.apache.org/solr/NearRealtimeSearch
It says all commitWithin are now soft commits.
can anyone explain what it means?
Soft commit means that the documents indexed before the soft commit will
become searchable
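The soft/hard commit distinction can be modeled with a toy in-memory "index". This is not Lucene code, just an illustration: a soft commit makes buffered documents visible to searchers without flushing to stable storage, while a hard commit also persists them:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of commit semantics. softCommit() = cheap visibility (new
// searcher view); hardCommit() = visibility plus durability ("fsync").
public class ToyIndex {
    private final List<String> pending = new ArrayList<>();  // indexed, not yet visible
    private final List<String> visible = new ArrayList<>();  // searchable view
    private final List<String> durable = new ArrayList<>();  // "on disk"

    public void add(String doc) { pending.add(doc); }

    public void softCommit() {          // cheap: expose pending docs to searchers
        visible.addAll(pending);
        pending.clear();
    }

    public void hardCommit() {          // expensive: also flush to stable storage
        softCommit();
        durable.clear();
        durable.addAll(visible);
    }

    public boolean isSearchable(String doc) { return visible.contains(doc); }
    public boolean isDurable(String doc)    { return durable.contains(doc); }

    public static void main(String[] args) {
        ToyIndex idx = new ToyIndex();
        idx.add("d1");
        idx.softCommit();
        assert idx.isSearchable("d1") && !idx.isDurable("d1");
        idx.hardCommit();
        assert idx.isDurable("d1");
    }
}
```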
Furthermore, if you plan to index "a lot" of data per application, and
you are using Solr 4.0.0+ (including Solr Cloud), you probably want to
consider creating a collection per application instead of a core per
application.
On 1/2/13 2:38 PM, Erick Erickson wrote:
This is a common approach to
There are defaults for both replicationFactor and maxShardsPerNode, so
none of them HAS to be provided - default is 1 in both cases.
int repFactor = msgStrToInt(message, REPLICATION_FACTOR, 1);
int maxShardsPerNode = msgStrToInt(message, MAX_SHARDS_PER_NODE, 1);
Remember that replica
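The arithmetic behind those two defaults can be spelled out: a collection with numShards shards and replicationFactor copies of each needs numShards * replicationFactor cores, and a cluster can only host it when nodes * maxShardsPerNode covers that. Method and field names below are illustrative, not Solr API:

```java
// Sketch of the core-count arithmetic behind replicationFactor and
// maxShardsPerNode. Names are illustrative; the check mirrors what the
// Collections API enforces when creating a collection.
public class CoreMath {
    static int coresNeeded(int numShards, int replicationFactor) {
        return numShards * replicationFactor;
    }

    static boolean fits(int numShards, int replicationFactor,
                        int nodes, int maxShardsPerNode) {
        return coresNeeded(numShards, replicationFactor) <= nodes * maxShardsPerNode;
    }

    public static void main(String[] args) {
        // 4 shards x 2 replicas = 8 cores; 4 nodes with maxShardsPerNode=2 suffice.
        assert coresNeeded(4, 2) == 8;
        assert fits(4, 2, 4, 2);
        assert !fits(4, 2, 4, 1);   // with the default maxShardsPerNode=1 this is refused
    }
}
```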
On 1/3/13 3:05 AM, davers wrote:
This is what I get from the leader overseer log:
2013-01-02 18:04:24,663 - INFO [ProcessThread:-1:PrepRequestProcessor@419]
- Got user-level KeeperException when processing sessionid:0x23bfe1d4c280001
type:create cxid:0x58 zxid:0xfffe txntype:unknown
On 1/3/13 2:50 AM, Mark Miller wrote:
Unfortunately, for 4.0, the collections API was pretty bare bones. You don't
actually get back responses currently - you just pass off the create command to
zk for the Overseer to pick up and execute.
So you actually have to check the logs of the Overseer
iately
obvious what this is as long as it is called "Replica". A "Replica" is
basically a Solr Cloud managed Core and behind every Replica/Core lives
a physical Lucene index. So Replica=Core) contains/maintains Lucene
index behind the scenes. The term "Replica" also
n with
replicationFactor=1 - WTF!?!?). If we want to insist that you specify
the total number of cores at least use "replicaPerShard" instead of
"replicationFactor", or even better rename "Replica" to "Shard-instance"
and use "instancesPerShard" in
Ok, sorry. Easy to misunderstand, though.
On 1/3/13 3:58 PM, Mark Miller wrote:
MAX_INT is just a place holder for a high value given the context of this guy
wanting to add replicas for as many machines as he adds down the line. You are
taking it too literally.
- Mark
On 1/3/13 4:33 PM, Mark Miller wrote:
This has pretty much become the standard across other distributed systems and
in the literat…err…books.
Hmmm, I'm not sure you are right about that. Maybe more than one
distributed system calls them "Replica", but there are also a lot that
don't. But if you
On 1/3/13 4:55 PM, Mark Miller wrote:
Trying to forge our own path here seems more confusing than helpful
IMO. We have enough issues with terminology right now - where we can
go with the industry standard, I think we should. - Mark
Fair enough.
I dont think our biggest problem is whether we d
On 1/3/13 5:26 PM, Yonik Seeley wrote:
I agree - it's pointless to have two replicas of the same shard on a
single node. But I'm talking about having replicationFactor as a
target, so when you start up *new* nodes they will become a replica
for any shard where the number of replicas is currentl
On 1/3/13 5:58 PM, Walter Underwood wrote:
A "factor" is multiplied, so multiplying the leader by a replicationFactor of 1
means you have exactly one copy of that shard.
I think that recycling the term "replication" within Solr was confusing, but it
is a bit late to change that.
wunder
Yes, t
tomer = (as long as we do not consider replication) one
lucene index per customer = one data-disk-folder per customer. You
should be able to do join queries inside the specific customer's shard.
Regards, Per Steffensen
I believe Alexandre Rafalovitch got his answer already :-) To the level a
clean answer exists at the moment.
Regards, Per Steffensen
On 1/4/13 2:54 PM, Jack Krupansky wrote:
Replication makes perfect sense even if our explanations so far do not.
A shard is an abstraction of a subset of the dat
n 4.0.0, but will be in 4.1)
Regards, Per Steffensen
On 1/5/13 11:55 AM, Hassan wrote:
Thanks Per and Otis,
It is much clearer now but I have a question about adding new solr
nodes and collections.
I have a dedicated zookeeper instance. Lets say I have uploaded my
configuration to zookeep
collection per customer, with one shard and many replicas, A query will be
handled by one shard (or replica) on one node only and scalability here is
really about load balancing queries between the replicas only. i.e no
distributed search. is this correct?
Hassan
On 05/01/13 15:47, Per Steffensen
successfully recover when recover situations occur, and we see like
4-times indexing times compared to non-redundancy (even though a max of
2-times should be expected).
Regards, Per Steffensen
On 1/7/13 6:14 PM, f.fourna...@gibmedia.fr wrote:
Hello,
I'm new in SOLR and I've a collecti
JIRA about the fix for 4.1: https://issues.apache.org/jira/browse/SOLR-4140
On 1/8/13 4:01 PM, Jay Parashar wrote:
Thanks Mark...I will use it with 4.1. For now, I used httpclient to call the
Collections api directly (do a Get on
http://127.0.0.1:8983/solr/admin/collections?action=CREATE etc). T
-memory ClusterState with changes.
Regards, Per Steffensen
On 1/9/13 4:38 PM, Shahar Davidson wrote:
Hi All,
I have a client app that uses SolrJ and which requires to collect the names
(and just the names) of all loaded cores.
I have about 380 Solr Cores on a single Solr server (net indices
ulated on server-side from the timestamp-interval in the
search-query. We handle this in a Solr SearchComponent which we place
"early" in the chain of SearchComponents. Maybe you can get some
inspiration by this approach, if it is also relevant for you.
Regards, Per Steffensen
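The collection-per-period setup behind this approach can be sketched as deriving, from the query's timestamp interval, the monthly collections that need to be searched. The collection-name pattern ("data_yyyy_MM") is an assumption for illustration; the real naming scheme is whatever your installation uses:

```java
import java.time.YearMonth;
import java.util.ArrayList;
import java.util.List;

// Sketch: map a timestamp interval onto the monthly collections it touches,
// the kind of calculation a server-side SearchComponent could do early in
// the chain. Name pattern "data_yyyy_MM" is an assumption.
public class MonthRouter {
    static List<String> collectionsFor(YearMonth from, YearMonth to) {
        List<String> result = new ArrayList<>();
        for (YearMonth m = from; !m.isAfter(to); m = m.plusMonths(1)) {
            result.add(String.format("data_%04d_%02d", m.getYear(), m.getMonthValue()));
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> cols = collectionsFor(YearMonth.of(2013, 11), YearMonth.of(2014, 1));
        assert cols.equals(List.of("data_2013_11", "data_2013_12", "data_2014_01"));
    }
}
```

With monthly collections, dropping old history then becomes a whole-collection delete rather than a delete-by-query, matching the deletion scheme described in the next fragment.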
we will always have at least 3 months of historic data, and at most
close to 4 months of history. It does not matter that we have a little
too much history, as long as we do not go below the lower limit on
length of historic data. We also use the new Collection API for deletion.
Regards, Per
ding the credentials.
Does this just work out of the box, or ... ?
Regards, Per Steffensen
b-requests that he is not authorized to do. Forward of
credentials is a must. So what you are saying is that I should expect to
have to do some modifications to Solr in order to achieve what I want?
Regards, Per Steffensen
On 1/11/13 2:11 PM, Markus Jelsma wrote:
Hi,
If your credentials are fix
I will figure out. The essence of the question was whether it was there
out-of-the-box. Thanks!
Regards, Per Steffensen
On 1/11/13 5:38 PM, Markus Jelsma wrote:
Hmm, you need to set up the HttpClient in HttpShardHandlerFactory but you
cannot access the HttpServletRequest from there, it is only available
Regards, Per Steffensen
On 1/16/13 4:02 PM, mizayah wrote:
Is there a way to lock Solr for writes?
I don't wanna use Solr's integrated backup because I'm using a Ceph cluster.
What I need is to have consistent data for a few seconds to make a backup.