Hi folks,
What's the best way to retrieve the types (rdf:type) of a large number of
concepts, together with the concept itself? For instance:
[Barack_Obama:Person, United_States:Place,...]
I came up with this query:
SELECT ?c ?t {
  { ?s a ?t . ?s foaf:isPrimaryTopicOf ?c } UNION
  { ?s a ?t . foaf
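For the pattern asked about above (a batch of concepts, each paired with its types), one option is a VALUES block, which is SPARQL 1.1 and supported by recent Virtuoso versions. A minimal sketch, assuming the standard DBpedia resource namespace as the dbr: prefix and the two example concepts from above:

```sparql
PREFIX dbr: <http://dbpedia.org/resource/>

SELECT ?c ?t WHERE {
  # list every concept of interest here; Virtuoso handles large VALUES blocks
  VALUES ?c { dbr:Barack_Obama dbr:United_States }
  ?c a ?t .
}
```

This returns one row per concept/type pair, which can be regrouped client-side into the [Barack_Obama:Person, United_States:Place, ...] shape.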
Hi folks,
(context = DBPedia data loaded in Virtuoso)
Imagine topic A has categories (dcterms:subject) X, Y and Z. How do you
find other topics that have one or more of these categories, sorted by the
number of mutual categories? So a topic that has all three of them should be
on top, and so on.
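One way to express this is a self-join on dcterms:subject with an aggregate, which Virtuoso supports. A minimal sketch, where <http://dbpedia.org/resource/A> stands in for the topic A from the question:

```sparql
PREFIX dct: <http://purl.org/dc/terms/>

SELECT ?other (COUNT(?cat) AS ?shared)
WHERE {
  # categories of topic A
  <http://dbpedia.org/resource/A> dct:subject ?cat .
  # other topics sharing at least one of those categories
  ?other dct:subject ?cat .
  FILTER (?other != <http://dbpedia.org/resource/A>)
}
GROUP BY ?other
ORDER BY DESC(?shared)
```

Topics sharing all three categories get ?shared = 3 and sort first; topics sharing one category sort last.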
--
Hi folks,
I've gone through this (loading the whole DBpedia dataset) a few times
already, but I still have no idea about the best tuning for this kind of
bulk import.
What's your suggestion? (anything from memory allocation to dropping
specific indexes to autocommit settings, etc.)
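For reference, the usual starting point is Virtuoso's built-in bulk loader rather than row-by-row inserts, since it batches commits itself. A sketch via isql, where the directory path and graph IRI are placeholders (the directory must also be listed under DirsAllowed in virtuoso.ini):

```sql
-- register all matching files in the directory for loading
ld_dir ('/data/dbpedia', '*.ttl', 'http://dbpedia.org');

-- run the loader; starting one rdf_loader_run() per core
-- in parallel isql sessions is the commonly advised setup
rdf_loader_run ();

-- persist the loaded state once loading finishes
checkpoint;
```

Memory-wise, the stock advice is to raise NumberOfBuffers and MaxDirtyBuffers in virtuoso.ini to match available RAM before starting the load.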
Regards,
Parsa
Hi fellas,
After re-installing Virtuoso I'm still facing the same problem. Any
suggestions?
On Fri, Sep 24, 2010 at 5:28 PM, Parsa Ghaffari wrote:
> Hi Kingsley,
>
> Thanks for your advice, I think there's a problem with my copy of Virtuoso
> since there was no cartridg
Thanks for your time and sorry for my wall of text.
Best,
Parsa
On Thu, Sep 23, 2010 at 7:18 PM, Parsa Ghaffari wrote:
Dear all,
I'm trying to make a mashup of DBpedia and Freebase. At the query level I
know I can use SPARQL pragmas for owl:sameAs, but I think that limits me to
2.4 million interlinked concepts, and on Virtuoso's built-in crawler side, I
think I'll get some noise (i.e. data other than rdf.freebase.co
Hi Hugh,
After checking the "Semantic Web Crawling" option, the issue was solved.
Best,
Parsa
lection/resource as documented at:
>
>http://docs.openlinksw.com/virtuoso/fn_dav_api_add.html
>
> Best Regards
> Hugh Williams
> Professional Services
> OpenLink Software
> Web: http://www.openlinksw.com
> Support: http://support.openlinksw.com
> Forums: http://boards.o
I'm using the Crawler with "Ask for RDF", "Store content locally" and "Store
metadata" all enabled. But after crawling is done, I get the data only in my
user's DAV path (e.g. /DAV/home/dba/rdf_sink/) and not in the Quad Store.
"select count(*) where {?s ?p ?o}" returns exactly the same number bef