Thanks,
If you implement something with compression, please tell me; I would be
happy to test it, because it is a problem for me :)
Ideas:
- It could be a rule to compress every literal (but indexing would need
to be done before the compression)
- It could be compression based on a selection of predicates (a rough
Perl sketch of both ideas follows below)
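Neither of these is something I know Virtuoso does out of the box, so
here is only a rough, client-side Perl sketch of what I mean: gzip long
literal values (optionally only for a chosen set of predicates) before
writing the triples out. The size threshold, the example predicate, the
base64 storage and the N-Triples output are all my own assumptions for
illustration, not Virtuoso features:

#!/usr/bin/perl
# Sketch of the two ideas, done on the client before loading:
#  1) compress every (long) literal
#  2) only compress literals of selected predicates
# Once compressed, the literal is opaque to the store, so any indexing
# or full-text search would have to happen before this step.
use strict;
use warnings;
use Compress::Zlib qw(compress);
use MIME::Base64 qw(encode_base64);

my $THRESHOLD = 256;    # assumed cut-off in bytes

# Assumed predicate whitelist for the second idea.
my %COMPRESS_PREDICATES = map { $_ => 1 } (
    '<http://purl.org/dc/elements/1.1/description>',
);

sub maybe_compress_literal {
    my ($predicate, $value) = @_;
    return $value unless $COMPRESS_PREDICATES{$predicate};
    return $value if length($value) < $THRESHOLD;
    # Store the gzip stream as single-line base64; a real scheme would
    # also need a marker (e.g. a datatype) to tell compressed literals
    # apart from plain ones.
    return encode_base64(compress($value), '');
}

# Example triple; N-Triples escaping of the raw literal is omitted here.
my $s = '<http://bio2rdf.org/geneid:1234>';
my $p = '<http://purl.org/dc/elements/1.1/description>';
my $o = 'some very long GenBank description ... ' x 20;
printf "%s %s \"%s\" .\n", $s, $p, maybe_compress_literal($p, $o);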
> If I understand correctly, B3S is the Billion Triple Challenge
> dataset, correct? Located here: http://challenge.semanticweb.org/ . It
> is stated to be 1.14 billion statements.
Yes, B3S was 1.14G triples, but we have some additional data (still
much less than your 10G).
> The current number of statements
Thanks for the quick answer,
If I understand correctly, B3S is the Billion Triple Challenge
dataset, correct? Located here: http://challenge.semanticweb.org/ . It
is stated to be 1.14 billion statements.
The current number of statements in the GenBank N3 dump is
6,561,103,030 triples.
For
> Has anybody ever loaded something as large as a complete N3 version
> of GenBank or RefSeq into a single Virtuoso triplestore? I'm using the
> ttlp_mt function as mentioned in the instructions on how to load
> Bio2RDF data, but I call it from a Perl script.
We've loaded the B3S dataset plus some extra data without
Hello,
Has anybody ever loaded something as large as a complete N3 version
of GenBank or RefSeq into a single Virtuoso triplestore? I'm using the
ttlp_mt function as mentioned in the instructions on how to load
Bio2RDF data, but I call it from a Perl script.
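In case it helps to see the shape of that, below is a minimal, untested
sketch of such a Perl call going through DBI/ODBC. The DSN, credentials,
file path, graph IRI and the flags value are placeholders I made up, and
the exact TTLP_MT argument list should be double-checked against the
Virtuoso documentation:

#!/usr/bin/perl
# Minimal sketch: load one N3/Turtle file into Virtuoso by calling
# DB.DBA.TTLP_MT on the server side through an ODBC connection.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:ODBC:DSN=VirtuosoLocal',    # placeholder DSN
                       'dba', 'dba',                     # placeholder credentials
                       { RaiseError => 1 })
    or die "connect failed: $DBI::errstr";

my $file  = '/data/genbank/part-0001.n3';           # placeholder path
my $graph = 'http://bio2rdf.org/graph/genbank';     # placeholder graph IRI

# file_to_string_output() reads the file on the server side, so the
# directory has to be listed in DirsAllowed in virtuoso.ini. The last
# argument (0) is the flags value; check the TTLP/TTLP_MT documentation
# for the error-handling behaviour you want.
$dbh->do('DB.DBA.TTLP_MT (file_to_string_output (?), ?, ?, 0)',
         undef, $file, '', $graph);

$dbh->disconnect;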
The current Virtuoso.db files are by far the biggest I've ev