On Sat, May 5, 2012 at 8:39 AM, Jan Høydahl wrote:
> ZK is not really designed for keeping large data files, from
> http://zookeeper.apache.org/doc/current/zookeeperProgrammers.html#Data+Access:
>> ZooKeeper was not designed to be a general database or large object
>> store. If large data storage
> support for CouchDb, Voldemort or whatever.
Hmmm... Or Solr!
-Yonik
ZK is not really designed for keeping large data files, from
http://zookeeper.apache.org/doc/current/zookeeperProgrammers.html#Data+Access:
> ZooKeeper was not designed to be a general database or large object
> store. If large data storage is needed, the usual pattern of dealing
> with suc
On Fri, May 4, 2012 at 12:50 PM, Mark Miller wrote:
>> And how should we detect if data is compressed when
>> reading from ZooKeeper?
>
> I was thinking we could somehow use file extensions?
>
> eg synonyms.txt.gzip - then you can use different compression algs depending
> on the ext, etc.
>
> We
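Mark's extension idea could look something like the following minimal Python sketch. The helper names (`is_compressed_name`, `maybe_decompress`) are hypothetical, not existing Solr/SolrZkClient API; checking the GZIP magic bytes (`0x1f 0x8b`) is shown as a possible fallback when the znode name doesn't carry an extension:

```python
import gzip

# GZIP streams start with the two magic bytes 0x1f 0x8b.
GZIP_MAGIC = b"\x1f\x8b"

def is_compressed_name(path: str) -> bool:
    """Detect compression from the znode name, e.g. synonyms.txt.gzip."""
    return path.endswith((".gzip", ".gz"))

def maybe_decompress(path: str, data: bytes) -> bytes:
    """Decompress znode data when either the name or the bytes say GZIP."""
    if is_compressed_name(path) or data[:2] == GZIP_MAGIC:
        return gzip.decompress(data)
    return data

raw = b"foo => bar\nbaz => qux\n"
stored = gzip.compress(raw)
assert maybe_decompress("synonyms.txt.gzip", stored) == raw
assert maybe_decompress("synonyms.txt", raw) == raw
```

The magic-byte check means even a misnamed znode would still be read correctly, at the cost of treating any payload that happens to start with `0x1f 0x8b` as compressed.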
On May 3, 2012, at 8:30 AM, Markus Jelsma wrote:
> Hi.
>
> Compression is a good suggestion. All large dictionaries are compressed well
> below 1MB with GZIP. Where should this be implemented? SolrZkClient or
> ZkController?
Hmm... I'm not sure; we want to be careful with this feature. Offhan
Hi.
Compression is a good suggestion. All large dictionaries are compressed well
below 1MB with GZIP. Where should this be implemented? SolrZkClient or
ZkController? Which good compressor is already in Solr's lib? And what's the
difference between SolrZkClient setData and create? Should it auto
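Markus's point that large dictionaries compress well below 1MB could be sketched like this in Python; `prepare_znode_data` is a hypothetical pre-write hook, not Solr API, and the 1MB figure reflects ZooKeeper's default `jute.maxbuffer` znode limit:

```python
import gzip

# ZooKeeper's default jute.maxbuffer allows roughly 1 MB per znode.
MAX_ZNODE_BYTES = 1024 * 1024

def prepare_znode_data(data: bytes) -> bytes:
    """Hypothetical pre-write hook: GZIP the payload and verify it fits."""
    compressed = gzip.compress(data)
    if len(compressed) > MAX_ZNODE_BYTES:
        raise ValueError(
            f"still {len(compressed)} bytes after compression; "
            "the znode limit would need raising or the file splitting"
        )
    return compressed

# A large, repetitive dictionary compresses far below the limit.
dictionary = b"aardvark\n" * 500_000   # ~4.5 MB raw
payload = prepare_znode_data(dictionary)
assert len(payload) < MAX_ZNODE_BYTES
```

Real-world dictionaries compress less dramatically than this repetitive example, so the size check after compression is the important part: it fails loudly instead of letting ZooKeeper reject the write.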
On May 3, 2012, at 5:15 AM, Markus Jelsma wrote:
> Hi,
>
> We've increased ZooKeeper's znode size limit to accommodate some larger
> dictionaries and other files. It isn't the best idea to increase the maximum
> znode size. Any plans for splitting up larger files and storing them with
> multi? Does anyone have another suggestion?
Hi,
We've increased ZooKeeper's znode size limit to accommodate some larger
dictionaries and other files. It isn't the best idea to increase the maximum
znode size. Any plans for splitting up larger files and storing them with
multi? Does anyone have another suggestion?
Thanks,
Markus
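The splitting Markus asks about could be sketched as below: break a large payload into chunks that each fit a znode, store them under ordered child paths (e.g. `file/part-00000`, `file/part-00001`, ...), and write them atomically in a single ZooKeeper multi() transaction. The chunk-path scheme and helper names here are illustrative assumptions, not an existing Solr or ZooKeeper API:

```python
CHUNK_BYTES = 1024 * 1024  # stay under the default ~1 MB znode limit

def split_for_znodes(data: bytes, chunk_size: int = CHUNK_BYTES):
    """Return (path_suffix, chunk) pairs covering the whole payload."""
    return [
        (f"part-{i:05d}", data[off:off + chunk_size])
        for i, off in enumerate(range(0, len(data), chunk_size))
    ]

def join_chunks(parts):
    """Reassemble; the zero-padded suffixes make lexicographic sort correct."""
    return b"".join(chunk for _, chunk in sorted(parts))

blob = bytes(3_500_000)  # 3.5 MB -> 4 chunks
parts = split_for_znodes(blob)
assert len(parts) == 4
assert join_chunks(parts) == blob
```

Writing all parts in one multi() keeps readers from ever seeing a half-written file, which is the main thing splitting would otherwise lose compared to a single znode.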