(Given that hardware is sufficient) The upper limit on documents in Norch is 
determined by the capacity of LevelDB, the underlying data store. I have heard 
tell of a slight performance drop-off in LevelDB after roughly 200,000,000 
entries. If you assume that one Norch document generates roughly 200 LevelDB 
keys, then a fair guess is that every Norch instance can handle about one 
million documents with no drop-off in read speed.
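
To spell out the arithmetic behind that guess (both the entry count and the 
keys-per-document figure are rough assumptions):

  200,000,000 entries / 200 keys per document = 1,000,000 documents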

As far as I know there has been no real-world benchmarking of this, so feedback 
is welcome!

F


On Jul 6, 2013, at 12:09 AM, "Ali, Saqib" <docbook....@gmail.com> wrote:

> Very interesting. What is the upper limit on the number of documents?
> 
> Thanks! :)
> 
> 
> On Fri, Jul 5, 2013 at 11:53 AM, Fergus McDowall
> <fergusmcdow...@gmail.com>wrote:
> 
>> Here is some news that might be of interest to users and implementers of
>> Solr
>> 
>> 
>> http://blog.comperiosearch.com/blog/2013/07/05/norch-a-search-engine-for-node-js/
>> 
>> Norch (http://fergiemcdowall.github.io/norch/) is a search engine written
>> for Node.js. Norch uses the Node search-index module, which is in turn
>> written using the super-fast LevelDB library that Google open-sourced in
>> 2011.
>> 
>> The aim of Norch is to make a simple, fast search server that requires
>> minimal configuration to set up. Norch sacrifices complex functionality for
>> a limited but robust feature set that can be used to set up a free test
>> search engine for most enterprise scenarios.
>> 
>> Currently Norch features:
>> 
>> Full text search
>> Stopword removal
>> Faceting
>> Filtering
>> Relevance weighting (tf-idf)
>> Field weighting
>> Paging (offset and resultset length)
>> 
>> Norch can index any data that is marked up in the appropriate JSON format.
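>> 
>> A hypothetical example of such input (the field names here are
>> illustrative, not an exact Norch schema; see the docs for the precise
>> format):
>> 
>> [
>>   {
>>     "id": "1",
>>     "title": "Example document",
>>     "body": "Some searchable text about enterprise search"
>>   }
>> ]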
>> 
>> Download the first release of Norch (0.2.1) here (
>> https://github.com/fergiemcdowall/norch/releases)
>> 
