Re: [Virtuoso-users] Large N3 files

2009-10-02 Thread Marc-Alexandre Nolin
Thanks. If you implement something with compression, tell me; I would be happy to test it, because it is a problem for me :) Ideas: - It could be a rule to compress every literal (but indexing would need to be done before it is compressed) - It could be compression based on a selection of predicates
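
For illustration only, here is a rough client-side sketch of the second idea: compressing the literal values of a hand-picked set of predicates before the triples are generated. The thread is really asking for this inside Virtuoso itself, so this is just the concept under stated assumptions; the predicate IRI and the helper name are hypothetical.

#!/usr/bin/perl
# Rough illustration: compress long literals for selected predicates
# before they are written out as N3 (not a Virtuoso-internal feature).
use strict;
use warnings;
use Compress::Zlib ();
use MIME::Base64 qw(encode_base64);

# Hypothetical set of predicates whose long literals are worth compressing.
my %compress_for = map { $_ => 1 } (
    'http://bio2rdf.org/ns/genbank#sequence',
);

# Return the literal to store: compressed + base64 for selected predicates,
# unchanged for everything else.
sub encode_literal {
    my ($predicate, $value) = @_;
    return $value unless $compress_for{$predicate};
    return encode_base64(Compress::Zlib::compress($value), '');
}

print encode_literal('http://bio2rdf.org/ns/genbank#sequence',
                     'ACGT' x 1000), "\n";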

Re: [Virtuoso-users] Large N3 files

2009-10-01 Thread Ivan Mikhailov
> If I understand correctly, B3S is the Billion Triple Challenge > dataset, correct? Located here: http://challenge.semanticweb.org/ . It > is stated to be 1.14 billion statements. Yes, B3S was 1.14G, but we have some additional data (still much less than your 10G). > The current amount

Re: [Virtuoso-users] Large N3 files

2009-10-01 Thread Marc-Alexandre Nolin
Thanks for the fast answer. If I understand correctly, B3S is the Billion Triple Challenge dataset, correct? Located here: http://challenge.semanticweb.org/ . It is stated to be 1.14 billion statements. The current amount of statements for the GenBank N3 dump is 6,561,103,030 triples. For

Re: [Virtuoso-users] Large N3 files

2009-10-01 Thread Ivan Mikhailov
> Has somebody ever loaded something as large as a complete N3 version > of GenBank or RefSeq into a single Virtuoso triple store? I'm using > the ttlp_mt function as mentioned in how to load Bio2RDF data, but I > call it from a Perl script. We've loaded the B3S dataset plus some extra data without

[Virtuoso-users] Large N3 files

2009-10-01 Thread Marc-Alexandre Nolin
Hello, Has somebody ever loaded something as large as a complete N3 version of GenBank or RefSeq into a single Virtuoso triple store? I'm using the ttlp_mt function as mentioned in how to load Bio2RDF data, but I call it from a Perl script. The current Virtuoso.db is by far the biggest I've ever
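
For readers who want to reproduce the setup described here, a minimal sketch of calling Virtuoso's DB.DBA.TTLP_MT loader from a Perl script over ODBC follows; it is not the poster's actual script. The DSN, credentials, file path and graph IRI are placeholders, and the file must sit under DirsAllowed in virtuoso.ini for file_to_string_output() to read it.

#!/usr/bin/perl
# Minimal sketch: invoke Virtuoso's TTLP_MT loader from Perl over ODBC.
# DSN, credentials, file path and graph IRI below are placeholders.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:ODBC:VOS', 'dba', 'dba',
                       { RaiseError => 1, AutoCommit => 1 });

my $file  = '/data/genbank/part-0001.n3';        # must be under DirsAllowed
my $graph = 'http://bio2rdf.org/graph/genbank';  # placeholder graph IRI

# TTLP_MT (text, base_iri, graph_iri, flags); flags 255 relaxes the parser
# so common IRI/escaping problems do not abort a long-running load.
$dbh->do('DB.DBA.TTLP_MT (file_to_string_output (?), ?, ?, 255)',
         undef, $file, '', $graph);

$dbh->disconnect;

The same statement can equally be run directly from isql; a Perl wrapper like this mainly makes it easy to loop over many N3 chunk files.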