Jem Rayfield wrote:
Hi Kingsley,

I want to push content into Virtuoso via HTTP rather than a crawl
mechanism. I don't think the content crawler is what I am after.
Yes.

When you PUT data into Virtuoso, sponging can occur; it depends on where you PUT the data :-)
Our content gets created and pushed onto publish/rendering queues. I
need to run the triple extraction asynchronously, as and when content is
published, rather than waiting for a publishing mechanism, exposure onto
public-facing web servers and a subsequent crawl. Ideally I would like the
extraction process to work on new content documents (RDF) and add the
extracted RDF to an existing graph. (Will an HTTP POST approach always
create a new graph? Maybe I have configured something incorrectly?)
New graph if you create resources in the RDF_Sink folder. Otherwise not.

Of course you can tailor all of this to your specific needs, e.g. putting the triples in one graph.
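
As an aside, here is a minimal sketch of the server-side equivalent of such an HTTP PUT, run from isql via DB.DBA.DAV_RES_UPLOAD; the DAV account, password, file name and rdf_sink path below are illustrative assumptions, not part of the original exchange:

-- sketch: push an XHTML+RDFa document into an RDF_Sink folder so it gets sponged on upload
-- the target path, owner and credentials are assumptions for illustration
select DB.DBA.DAV_RES_UPLOAD (
         '/DAV/home/dav/rdf_sink/article-001.xhtml',   -- hypothetical RDF_Sink target
         file_to_string ('article-001.xhtml'),         -- document to push (file must be under DirsAllowed)
         'application/xhtml+xml',                      -- content type, used for cartridge selection
         '110100100R',                                 -- DAV permission string
         'dav', 'administrators',                      -- owner user and group
         'dav', 'dav');                                -- authenticating user and password

An HTTP PUT against the same /DAV/.../rdf_sink/ URL has the same effect, which matches the push model described above.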
(The crawl mechanism certainly looks like a very interesting
feature, but I don't think it fits this use case. Although I could have
missed something?)
After pushing the XHTML2/RDFa content into Virtuoso via HTTP I am able
to use SPARQL on the quad store. The origin document is also available
via the DAV interface.
Yes.
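
As a quick sanity check on the "new graph per POST" question above, from isql you can list which named graphs the pushed documents ended up in; a minimal sketch (the LIMIT is arbitrary, and this enumeration can be slow on a large quad store):

-- list the named graphs currently in the quad store
SPARQL SELECT DISTINCT ?g WHERE { GRAPH ?g { ?s ?p ?o } } LIMIT 100;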
However I am assuming that your DAV store is built using database tables,
and thus the sponging process also consumes the XHTML into a table
(XMLType (text index)).
This table/XMLType could then be queried using SQL/SPARQL? I could then
maybe even expose these as a stored proc using the Virtuoso web service->PL mapping?
Yes, but you should work with WebDAV via its API functions first. Also note that we do have a separate WebDAV Cartridge that simply makes an RDF graph of DAV resources. This particular cartridge is about the DAV information resources. Thus, you have SPARQL access to these; you just use the http://<cname>/webdav Graph IRI. Even better, if you install ODS-Briefcase, all your DAV resources are exposed via a more granular graph using the SIOC Ontology.

Only if the ODS-Briefcase or basic WebDAV graphs don't meet your needs should you consider writing Virtuoso PL against the WebDAV tables.
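
For instance, a sketch of browsing that DAV metadata graph from isql, with <cname> standing in for your server's host name as in the note above:

-- browse the RDF view of DAV resources exposed via the WebDAV graph (sketch)
SPARQL
SELECT ?s ?p ?o
  FROM <http://<cname>/webdav>   # placeholder; substitute your own host name
 WHERE { ?s ?p ?o }
 LIMIT 25;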


Kingsley

So my question is: if this assumption is correct, can I query this table,
and if so, what is this table?
Or do I need to create a new cartridge which persists the origin
document into a specific table, with the correct ACL, which can then be
queried?
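
For what it's worth: in a default table-backed DAV store the resources live in WS.WS.SYS_DAV_RES (RES_FULL_PATH, RES_CONTENT, RES_TYPE, ...), so a Virtuoso/PL wrapper along the lines Kingsley mentions is possible. A minimal sketch; the procedure name and DAV path are made up for illustration:

-- sketch: pull the <title> of a DAV-stored XHTML document via XPath
create procedure DEMO_DAV_DOC_TITLE (in dav_path varchar)
{
  declare doc varchar;
  -- read the stored XHTML straight from the WebDAV resource table
  select blob_to_string (RES_CONTENT) into doc
    from WS.WS.SYS_DAV_RES
   where RES_FULL_PATH = dav_path;
  -- parse it and evaluate an XPath expression against it;
  -- local-name() is used so the XHTML namespace does not get in the way
  return xpath_eval ('string (//*[local-name() = ''title''])', xtree_doc (doc));
}
;

-- usage, assuming the hypothetical path from the earlier sketch:
-- select DEMO_DAV_DOC_TITLE ('/DAV/home/dav/rdf_sink/article-001.xhtml');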


I hope this makes some sort of sense. (It's getting late on a Friday ;-))


I will read over the tutorial links you sent and something will probably
spring out at me after a weekend of sleep!



Thanks again... any pointers/ideas appreciated
Jem



-----Original Message-----
From: Kingsley Idehen [mailto:kide...@openlinksw.com]
Sent: 13 February 2009 15:41
To: Jem Rayfield
Cc: virtuoso-users@lists.sourceforge.net
Subject: Re: [Virtuoso-users] XPATH/XSLT on sponged RDFa

Jem,

<<
I have managed to sponge XHTML/RDFa via DAV (using the steps below);
however, I want to be able to query the origin XHTML2 using XPath or
transform it using XSLT. I can only see the origin content within DAV at
the moment and cannot see the XML stored as an XMLType within a Virtuoso
table. Ideally I would like to be able to query the triples using SPARQL
and (maybe in combination with that) query the origin XHTML using XPath.
Have you come across anything like this (examples)? This would be most
useful, as I could then transform required content from the result of a
SPARQL query and maybe even expose the content via a Virtuoso stored
procedure (Virtuoso's web service->stored-proc mapping). This would allow
some pretty funky logic and would enable me to constrain and control
access to the content.
 >>

Please confirm that this is what you seek:

1. Grab XHTML content from an HTTP accessible source into Virtuoso (WebDAV Content Management realm)
2. Have an RDF graph(s) generated from the imported resource(s)
3. Have SPARQL access to the RDF in the Quad Store
4. XQuery/XPath access to the XHTML via WebDAV or any other means.


If the above is true, the key to this is the Virtuoso Content Crawler, which can do the following on a scheduled basis:

1. Grab/Sponge Web content into a location of your choice within WebDAV
2. Indicate to the Crawler that it should use one or more Sponger Cartridges during the crawl

Result:

1. WebDAV accessible XHTML to which you can apply XSLT, XQuery, and XPath queries (see the sketch below)
2. Triples in the Quad Store (with Graph IRIs matching the content source URLs)
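
A sketch of point 1, applying an XSLT stylesheet to an XHTML resource held in WebDAV directly from SQL; the stylesheet URL and DAV path are assumptions for illustration:

-- transform a DAV-stored XHTML resource with XSLT (sketch)
select xslt ('http://<cname>/DAV/xslt/teaser.xsl',          -- hypothetical stylesheet location
             xtree_doc (blob_to_string (RES_CONTENT)))      -- parse the stored XHTML into an XML tree
  from WS.WS.SYS_DAV_RES
 where RES_FULL_PATH = '/DAV/home/dav/rdf_sink/article-001.xhtml';   -- hypothetical resource path

xpath_eval () and xquery_eval () can be used in the same way when an XPath or XQuery result is wanted instead of a transformed document.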


Also see http://demo.openlinksw.com/tutorial for examples of XML data manipulation etc. You can install a local version of this via the Tutorial VAD package.



--


Regards,

Kingsley Idehen       Weblog: http://www.openlinksw.com/blog/~kidehen
President & CEO OpenLink Software Web: http://www.openlinksw.com




