On Tue, Feb 24, 2009 at 07:39:01PM -0800, David Fox wrote:
> Are there a lot of files in each directory, or are there a lot of directories?
>
> One thing I can think of, off the top of my head, when
> organizing/retrieving data this way (other than using an RDBMS) is
> that the directory read function
On Tue, Feb 24, 2009 at 5:28 PM, Mag Gam wrote:
> Paul:
> For instance, grep "something" country/2005/??/01/foo.txt
>
> It gives an instant result. That's how we are using it, and we love it.
Mag Gam wrote:
For instance, country/2005/01/01/foo.txt
...
For instance, grep "something" country/2005/??/01/foo.txt
It gives an instant result. That's how we are using it, and we love it.
I would suspect it's two-fold.
1. Your data is already separated and well organized down to
country
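The glob lookup Mag Gam describes can be sketched end to end. This is a minimal, illustrative example: the `demo/us/...` layout and the file contents are made up here to mirror the `country/2005/??/01/foo.txt` pattern from the thread, not taken from the actual data set.

```shell
# Build a miniature of the assumed layout: <country>/<year>/<month>/<day>/foo.txt
mkdir -p demo/us/2005/01/01 demo/us/2005/02/01
echo "something interesting" > demo/us/2005/01/01/foo.txt
echo "something else"        > demo/us/2005/02/01/foo.txt

# The shell's glob expansion does the "index lookup" before grep ever runs:
# ?? expands to every month directory, so grep only opens day-01 files.
grep -l "something" demo/us/2005/??/01/foo.txt
```

The point is that the directory hierarchy acts as a pre-built index: the kernel resolves the path components, and grep touches only the files the glob selected.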
Paul:
Thanks for the response.
> I will guess that, at your company, there are very few updates of this
> transaction data. New transactions are added to the record as they
> happen. The *.txt files may contain references to prior transactions,
> but these are human-readable text, not some sort
On 2009-02-23 23:28:22, Mag Gam wrote:
> I was curious why this was faster:
>
> At our company we store close to 50TB of certain transaction data and
> we stored it on a UNIX filesystem raw without any DBMS help.
I will guess that, at your company, there are very few updates of this
transaction
If it works for you, why change?
Let's face it: not every problem needs a relational database. Atomic
transactions and crash recovery are examples of what a DBMS will do
better than a filesystem, but if you do not need them, a filesystem is
fine. Furthermore, you may be able to save some space by eliminating
redundant information
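For the append-mostly workload described earlier in the thread, you can get a useful approximation of atomic updates without a DBMS by writing to a temporary file and renaming it into place; `rename(2)` is atomic within a single filesystem. This is a generic sketch, not the company's actual procedure, and the file name and record format are invented for illustration.

```shell
# Write the new version of the record to a temp file on the SAME filesystem,
# then rename it over the old one. Readers see either the complete old file
# or the complete new file, never a half-written one.
printf 'txn=42 amount=10.00\n' > foo.txt.tmp.$$
mv foo.txt.tmp.$$ foo.txt
```

What this does not give you is multi-file transactions or crash recovery of in-flight writes, which is exactly where a real DBMS earns its keep.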
On 02/23/2009 10:28 PM, Mag Gam wrote:
I was curious why this was faster:
At our company we store close to 50TB of certain transaction data and
we stored it on a UNIX filesystem raw without any DBMS help.
This doesn't sound very Linuxy...
For example:
country/A/name/A.txt
country/B/name/B.t
I was curious why this was faster:
At our company we store close to 50TB of certain transaction data and
we stored it on a UNIX filesystem raw without any DBMS help.
For example:
country/A/name/A.txt
country/B/name/B.txt
country/C/name/C.txt
and so on...
We have close to 500 million entries in t
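A miniature of the `country/<X>/name/<X>.txt` layout above makes the access pattern concrete. The contents written here are placeholders; only the directory shape follows the example in the message.

```shell
# Recreate the layout from the example in miniature.
mkdir -p country/A/name country/B/name country/C/name
echo "A data" > country/A/name/A.txt
echo "B data" > country/B/name/B.txt
echo "C data" > country/C/name/C.txt

# A glob over the fixed hierarchy enumerates every entry: the directory
# tree itself plays the role of the table, one file per row.
ls country/*/name/*.txt
```

At 500 million entries the same idea holds, but per-directory file counts start to matter: readdir and glob expansion over huge directories is precisely the cost David Fox was asking about.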