Since SolrCloud is a master-free architecture, you can send both queries and updates to ANY node, and SolrCloud will ensure that the data gets to where it belongs.
It's way faster to send them to the right node, though.
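A minimal SolrJ sketch of that "send to any node" model, assuming a ZooKeeper ensemble at localhost:2181 and a collection named collection1 (both placeholders):

import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class CloudIndexExample {
    public static void main(String[] args) throws Exception {
        // the ZK address is a placeholder; the client reads the cluster
        // state from ZooKeeper and sends requests to live nodes
        CloudSolrServer server = new CloudSolrServer("localhost:2181");
        server.setDefaultCollection("collection1");

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "1");
        doc.addField("title", "hello solrcloud");

        // whichever node receives this forwards it to the right shard leader
        server.add(doc);
        server.commit();
        server.shutdown();
    }
}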
Is a custom org.apache.lucene.store.Directory supported in Solr? I want to try Infinispan.
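For what it's worth, Solr plugs Directory implementations in through a DirectoryFactory in solrconfig.xml, so an Infinispan-backed Directory would need a small factory wrapper. A sketch (the class name is hypothetical):

<!-- hypothetical factory extending solr.DirectoryFactory that returns
     an Infinispan-backed org.apache.lucene.store.Directory -->
<directoryFactory name="DirectoryFactory"
                  class="com.example.InfinispanDirectoryFactory"/>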
On 27.4.2012 19:59, Jeremy Taylor wrote:
DataStax offers a Solr integration that isn't master/slave and is near-real-time.
Is it rebranded Solandra?
On 6.5.2012 14:02, András Bártházi wrote:
We're currently evaluating Solr as a Sphinx replacement. The development is almost done, and it seems to be working fine.
Why do you want to replace Sphinx with Solr?
My company is thinking about buying a search algorithm from Petr Hejl, a famous expert in searching - http://www.milionovastranka.net/
But I see RankingAlgorithm has fantastic results too, and looking at its reference page, it even powers sites like oracle.com and ebay.com.
What reference page are you referring to?
http://tgels.com/wiki/en/Sites_using/downloaded_RankingAlgorithm_or_Solr-RA
Mark,
You are certainly not using the Solr mark in an approved manner, and I'd hope that if you are going to take advantage of our mailing list to promote your product, you would not violate our trademark.
The Apache Foundation does not own the SOLR (R) trademark. I looked into the registry (USA and Wo
Trunk does not compile due to javadoc warnings. Should I create a ticket and submit a patch, or are these trivial errors fixed quickly?
[javadoc] Building tree for all the packages and classes...
[javadoc]
/usr/local/jboss/.jenkins/jobs/Solr/workspace/solr/core/src/java/org/apache/solr/cloud/Leade
On 10.9.2012 18:28, Jack Krupansky wrote:
How are you compiling trunk?
Jenkins - ant task - arguments:
clean compile dist create-package package-local-src-tgz
Anyway, I've removed the empty @return tags in the files mentioned in the report below, in trunk r1382976.
It builds fine now.
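For illustration, the kind of change involved (hypothetical class): an empty @return tag fails strict javadoc runs, so it either gets text or gets deleted:

public class JavadocFixExample {
    /**
     * Before the fix, the return tag below had no text after it,
     * which strict javadoc runs reject as an error.
     *
     * @return the number of live nodes
     */
    public int liveNodeCount() {
        return 0; // placeholder body
    }
}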
> After investigating more, here is the Tomcat log below. It is indeed the same problem: "exceeded limit of maxWarmingSearchers=2,".
Couldn't Solr close the oldest warming searcher and replace it with the new one?
> Couldn't Solr close the oldest warming searcher and replace it with the new one?
That approach can easily lead to starvation (i.e. you never get a new
searcher usable for queries).
It will not, if there is more than one warming searcher. Look at this scheme:
1. current in-use searcher
2. 1st
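For reference, the limit being hit is configured in solrconfig.xml; the stock Solr 4.x setting:

<!-- solrconfig.xml: cap on searchers warming at once; commits that would
     exceed it fail with the "exceeded limit" error quoted above -->
<maxWarmingSearchers>2</maxWarmingSearchers>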
I can't compile Solr 4.0, but I can compile trunk fine.
ant create-package fails with:
BUILD FAILED
/usr/local/jboss/.jenkins/jobs/Solr/workspace/solr/common-build.xml:229:
The following error occurred while executing this line:
/usr/local/jboss/.jenkins/jobs/Solr/workspace/solr/common-build.xml
I believe the problem is not that you need BSF -- because you are using Java 6 (which you have to, to compile Solr), BSF isn't needed. The problem (I think) is that FreeBSD's JDK doesn't include JavaScript by default - I believe you just need to install the Rhino "js.jar".
You are right. FreeBSD
Is it possible to use SolrCloud but without real-time features? In my application I do not need real-time features, and old-style processing should be more efficient.
On 24.9.2012 14:05, Erick Erickson wrote:
I'm pretty sure all you need to do is disable autoSoftCommit. Or rather, don't un-comment it in solrconfig.xml.
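If it helps, this is the element Erick means; the stock solrconfig.xml ships it commented out, and leaving it that way disables soft commits (the maxTime value is illustrative):

<!-- leave commented out to disable NRT soft commits -->
<!--
<autoSoftCommit>
  <maxTime>1000</maxTime>
</autoSoftCommit>
-->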
And what about solr.NRTCachingDirectoryFactory? Is solr.MMapDirectoryFactory faster if there are no NRT search requirements?
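For context, the directory implementation is picked in solrconfig.xml; the stock 4.x example defaults to NRTCachingDirectoryFactory, which as far as I know wraps an MMapDirectory on 64-bit JVMs:

<directoryFactory name="DirectoryFactory"
                  class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>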
Is it possible to use the Solr binary protocol instead of XML for talking TO Solr? I know that it can be used in the Solr reply.
Have you looked at javabin?
http://wiki.apache.org/solr/javabin
I checked that page:
"binary format used to write out solr's response"
So it's for the Solr response only, not for requests sent to Solr.
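As far as I know, SolrJ can send updates in javabin too, via BinaryRequestWriter; a sketch against a placeholder http://localhost:8983/solr URL:

import org.apache.solr.client.solrj.impl.BinaryRequestWriter;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class JavabinUpdateExample {
    public static void main(String[] args) throws Exception {
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
        // serialize update requests as javabin instead of XML
        server.setRequestWriter(new BinaryRequestWriter());

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "42");
        server.add(doc);
        server.commit();
    }
}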
I am reading this: http://wiki.apache.org/solr/SolrCloud (section "Re-sizing a Cluster").
Is it possible to add a shard to an existing index? I do not need the data redistributed; it can stay where it is. It's enough for me if new entries are distributed into the new number of shards. Restarting
Do it as it is done in the Cassandra database. Adding a new node and redistributing data can be done in a live system without problems. It looks like this: every Cassandra node has a key range assigned. Instead of assigning keys to nodes like hash(key) mod nodes, every node has its portion of the hash k
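A toy sketch of that token-range idea (my illustration, not Cassandra's actual code): each node owns the ring segment ending at its token, and adding a node splits only one segment instead of remapping every key:

import java.util.SortedMap;
import java.util.TreeMap;

public class HashRing {
    // token -> node owning the range that ends at that token
    private final TreeMap<Integer, String> ring = new TreeMap<Integer, String>();

    void addNode(int token, String node) {
        ring.put(token, node);
    }

    String nodeFor(String key) {
        // first node whose token is >= the key's hash, wrapping around
        SortedMap<Integer, String> tail = ring.tailMap(key.hashCode());
        return tail.isEmpty() ? ring.firstEntry().getValue()
                              : tail.get(tail.firstKey());
    }

    public static void main(String[] args) {
        HashRing ring = new HashRing();
        ring.addNode(0, "node1");
        ring.addNode(Integer.MAX_VALUE / 2, "node2");
        // node3 takes over part of node2's range; keys on node1 stay put
        ring.addNode(Integer.MAX_VALUE / 4 * 3, "node3");
        System.out.println(ring.nodeFor("somekey"));
    }
}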
On 11.10.2012 1:12, Upayavira wrote:
That is what is being discussed already. The thing is, at present, Solr
requires an even distribution of documents across shards, so you can't
just add another shard, assign it to a hash range, and be done with it.
You can use shard size as part of scori
Can you share more please?
I do not know exactly what the formula for calculating the ratio is.
If you have something like: (term count in shard 1 + term count in shard 2) / num documents in all shards,
then just use shard size as a weight while computing this:
(term count in shard 1 * shard1 keyspace
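In my notation (one plausible reading of the truncated line above): with c_i the term count in shard i, d_i its document count, and w_i the fraction of the hash keyspace shard i owns, the keyspace-weighted version would be something like:

ratio = (sum_i w_i * c_i) / (sum_i w_i * d_i)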
> You would only *need* the field if updateLog is turned on in your
updateHandler.
Doesn't SolrCloud need this?
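For reference, these are the two stock pieces that exchange is about, as they appear in the Solr 4.x examples (SolrCloud does rely on them for recovery/sync, I believe):

<!-- solrconfig.xml, inside <updateHandler>: the transaction log -->
<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
</updateLog>

<!-- schema.xml: the field the updateLog relies on -->
<field name="_version_" type="long" indexed="true" stored="true"/>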
Can someone provide an example configuration showing how to use the new compression in Solr 4.1?
http://blog.jpountz.net/post/33247161884/efficient-compressed-stored-fields-with-lucene
I found this ticket: https://issues.apache.org/jira/browse/SOLR-3927
Is compression currently in the Lucene 4.1 branch only and not yet in the Solr 4.1 branch?
Can I do something like this? (It fails with "fieldType: missing mandatory attribute 'class'".)
<fieldType ... termVectors="true" termPositions="true" termOffsets="true"/>
One field type would extend another type, to save copy and paste.
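As far as I know, field types can't extend one another; class is mandatory on each fieldType, so the shared attributes have to be repeated. A sketch (the name text_tv is hypothetical):

<fieldType name="text_tv" class="solr.TextField" positionIncrementGap="100"
           termVectors="true" termPositions="true" termOffsets="true">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
  </analyzer>
</fieldType>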
I have problems with very low indexing speed as soon as the core size grows over 15 GB. I suspect it can be due to IO-intensive segment merging.
Is there a way to set up logging to output something when segment merging runs?
Can segment merges be throttled?
On 26.10.2012 3:47, Tomás Fernández Löbbe wrote:
> Is there a way to set up logging to output something when segment merging runs?
I think segment merging is logged when you enable infoStream logging (you should see it commented out in solrconfig.xml).
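The element Tomás means; as far as I remember, the stock Solr 4.x solrconfig.xml ships it commented out and set to false, so enabling it means un-commenting it and flipping the value:

<!-- solrconfig.xml, inside <indexConfig>: low-level Lucene debug output,
     which includes merge activity -->
<infoStream file="INFOSTREAM.txt">true</infoStream>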
No, segment merging is not logged at INFO.
On 29.10.2012 0:09, Lance Norskog wrote:
1) Do you use compound files (CFS)? This adds a lot of overhead to merging.
I do not know. What's the Solr configuration statement for turning them on/off?
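I believe the setting in question is this one, inside <indexConfig> in solrconfig.xml:

<useCompoundFile>false</useCompoundFile>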
2) Does ES use the same merge policy code as Solr?
ES rate limiting:
http://www.elasticsearch.or
Is there a JIRA ticket dedicated to throttling segment merges? I could not find any, but JIRA search kinda sucks.
It should be ported from ES because it's not much code.
On 29.10.2012 12:18, Michael McCandless wrote:
With Lucene 4.0, FSDirectory now supports merge bytes/sec throttling (FSDirectory.setMaxMergeWriteMBPerSec): it rate-limits the max bytes/sec load on the IO system due to merging.
Not sure if it's been exposed in Solr / ElasticSearch yet ...
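A minimal sketch of the Lucene 4.0 API Michael mentions (the index path and the 5 MB/sec figure are placeholders):

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class MergeThrottleExample {
    public static void main(String[] args) throws Exception {
        FSDirectory dir = FSDirectory.open(new File("/path/to/index"));
        // cap the IO load from merges at 5 MB/sec (placeholder value)
        dir.setMaxMergeWriteMBPerSec(5.0);

        IndexWriterConfig cfg = new IndexWriterConfig(Version.LUCENE_40,
                new StandardAnalyzer(Version.LUCENE_40));
        IndexWriter writer = new IndexWriter(dir, cfg);
        // ... add documents ...
        writer.close();
    }
}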