Mark James wrote:
> Hi Guys,
>
> I need one further point of clarification. Is it possible to restrict
> the triples dumped via this procedure?

Yes, of course.

> Some of the RDF Views being used will have billions of triples within
> them. It is not practical to dump the lot daily.

We are working on a solution that will simply sync between the Virtual
Triples and the Physical Triples. Basically, we will have triggers on the
source SQL tables. This is coming very soon :-)

> Ideally, only triples required by the individual reports would be
> dumped (or as best can be guessed, given that the final reports will
> probably require triples from other views).

See my comments above; a sketch of one way to restrict the dump follows
below.

Kingsley

> Thanks
> Mark
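
A minimal sketch of one way to restrict what gets materialized, run from
isql: copy only the predicate(s) a given report needs from the view's
Named Graph into a physical graph. The target graph IRI below is
hypothetical, and the predicate is the one from the Demo example later in
this thread.

  -- copy only the shipname triples from the RDF View's Named Graph
  -- into a physical graph (target graph IRI is illustrative)
  SPARQL
  INSERT IN GRAPH <http://localhost:8890/DemoPhysical#>
    { ?s <http://localhost:8890/schemas/Demo/shipname> ?name }
  WHERE
    { GRAPH <http://localhost:8890/Demo#>
        { ?s <http://localhost:8890/schemas/Demo/shipname> ?name } };
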
On 20 April 2010 02:36, Mitko Iliev <imi...@gmail.com> wrote:
Hi Mark,
The attached script will create a stored procedure which allows
you to dump an RDF View based graph to N3 dumps; see the script
head for help, and at the end there are examples of its use.
The produced dumps can then be loaded on any Virtuoso instance
using the TTLP() import function.
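
For instance, a produced dump could be loaded from isql along these
lines (the file name and target graph IRI are illustrative; TTLP takes
the Turtle/N3 text, a base IRI, the target graph, and a flags integer):

  -- load an N3 dump into a physical graph; the file must live
  -- under a path listed in DirsAllowed in virtuoso.ini
  DB.DBA.TTLP (file_to_string_output ('dumps/Demo.n3'), '',
               'http://localhost:8890/DemoPhysical#', 0);
  checkpoint;
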
Best Regards,
Mitko
On Apr 19, 2010, at 7:25 PM, Kingsley Idehen wrote:
> Mark James wrote:
>> Hi,
>>
>> Is it possible to cache data (in the quad store) retrieved by a SPARQL
>> query against an RDF View, much like a Sponger query?
>>
>> E.g. (using the Northwind demo DB):
>>
>> select *
>> from <http://localhost:8890/Demo#>
>> where { ?s <http://localhost:8890/schemas/Demo/shipname> 'Rancho grande' }
>>
>> returns 5 rows. However, if I then run -
>
> You asked for a solution (resultset) with your data source scope set
> to a Named Graph IRI (the partition hosting the records). This Named
> Graph was created as part of the RDF View generation process.
>
>>
>> select *
>> where { ?s <http://localhost:8890/schemas/Demo/shipname> 'Rancho grande' }
>>
>> no rows are returned, as expected.
>
> This query is scoped to the Default Graph, which doesn't have any
> matches for the query.
>
>>
>> If, however, I browse one of the originally returned rows directly, e.g.
>> http://localhost:8890/about/html/http://localhost:8890/Demo/orders/OrderID/10448#this
>
> The RDF View generation process has a re-write rule scoped to the Named
> Graph IRI hosting the records (triples) in question.
>>
>> and then run
>>
>> select *
>> where { ?s <http://localhost:8890/schemas/Demo/shipname> 'Rancho grande' }
>>
>> a row is returned, as it is now cached in the quad store.
>
> When you de-referenced the above via the Sponger, you used the Linked
> Data published via the RDF View to create a Triple in the Quad Store
> scoped to the Default Graph, which is why you get data.
>
> You now have a Named Graph hosting Virtual Triples (produced by the RDF
> Views creation process) and a conventional Quad Store Default Graph,
> populated as an effect of sponging the published RDF View entity URIs.
>>
>> I would like to get the same cache functionality via my original query.
>>
> What you need is the ability, at RDF View creation time, to generate
> Triples in the Default Graph (in a sense, to make Persistent Triples
> from the Virtual ones in the RDF View's Named Graph). This is coming;
> in the meantime, there is a script for doing this, and I'll have
> someone post it here.
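
Until that script is posted, the effect can be approximated by hand with
Virtuoso's SPARUL extensions, e.g. (the physical target graph IRI below
is illustrative):

  -- materialize every virtual triple from the RDF View's Named Graph
  -- into a physical graph in the Quad Store
  SPARQL
  INSERT IN GRAPH <http://localhost:8890/DemoPhysical#>
    { ?s ?p ?o }
  WHERE
    { GRAPH <http://localhost:8890/Demo#> { ?s ?p ?o } };
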
>> My reasoning is that there is an opportunity to hit some operational
>> databases overnight and generate reports using RDF Views. However, as
>> these databases will be unavailable for querying during the day, all
>> investigations on the data used for the reports would need to rely on
>> cached data.
>>
>> Is there a way to do this? It would also be nice to have mechanisms to
>> only get data if it didn't already exist or was of a certain age,
>> much like the Sponger does.
>
> Yes, as per my comments above (a scheduling sketch follows below). This
> also gives you the additional benefits of using the Faceted Browsers,
> Inference Rules, etc. :-)
>
>
> Kingsley
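
One way to approximate the overnight-refresh pattern today, as a sketch
(the procedure name, target graph IRI, and schedule below are all
hypothetical): wrap the materialization in a stored procedure and run it
nightly via Virtuoso's scheduled events, clearing the cached graph before
re-filling it.

  -- hypothetical nightly refresh of the cached (physical) graph
  create procedure DEMO_REFRESH_CACHE ()
  {
    -- drop yesterday's copy, then re-materialize from the RDF View
    sparql clear graph <http://localhost:8890/DemoPhysical#>;
    sparql insert in graph <http://localhost:8890/DemoPhysical#>
      { ?s ?p ?o }
    where { graph <http://localhost:8890/Demo#> { ?s ?p ?o } };
  };

  -- run it once a day (SE_INTERVAL is in minutes)
  insert into DB.DBA.SYS_SCHEDULED_EVENT (SE_NAME, SE_START, SE_SQL, SE_INTERVAL)
    values ('demo_refresh_cache', now (), 'DB.DBA.DEMO_REFRESH_CACHE ()', 1440);
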
>>
>> Cheers
>> Mark
>>
>
>
> --
>
> Regards,
>
> Kingsley Idehen
> President & CEO
> OpenLink Software
> Web: http://www.openlinksw.com
> Weblog: http://www.openlinksw.com/blog/~kidehen
> Twitter/Identi.ca: kidehen
>
--
Mitko Iliev
Developer Virtuoso Team
OpenLink Software
http://www.openlinksw.com/virtuoso
Cross Platform Web Services Middleware
--
Regards,
Kingsley Idehen
President & CEO
OpenLink Software
Web: http://www.openlinksw.com
Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca: kidehen