Hello, Adam --

I have reservations about using Fedora to track workflow; in my mind, 
Fedora is a repository of digital objects in a fairly stable state, an 
(almost) immutable archive of material that can, in theory, be shared 
and re-used elsewhere. Workflow information, on the other hand, is 
highly volatile and highly specific; your workflow information will be 
useless to me, so much noise, if I ever decide I'd like a copy of your 
objects. Also, the overhead of keeping workflow datastreams current in 
Fedora, while not prohibitive, is higher than it needs to be; there are 
tools better suited to tracking that kind of information.

All that said, I know there are institutions out there that do embed 
workflow metadata in their digital objects, and have had some success 
doing so (the work that Stanford and the University of Hull have 
accomplished immediately comes to mind).

Here at UW, we've been developing a workflow system that is loosely 
coupled to Fedora: on one end, an outside application tracks and 
manages all our local pre-ingest and update workflow processes, and is 
responsible for getting the final product (the pristine digital 
object) into Fedora. On the other end, we use Fedora's built-in 
messaging API to trigger other workflow steps once an object is 
created, updated, or purged. Our intent has been to treat Fedora like 
a data warehouse, and to keep data/metadata creation and management 
tasks and information at arm's length.
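To make the second half of that concrete, here is a minimal sketch of 
the kind of message-driven routing I mean. It is an illustration, not 
our actual code: the Atom entry below is a simplified stand-in for the 
messages Fedora 3.x publishes on API-M events (where the entry title 
carries the method name and the summary carries the PID), and the step 
names are invented for the example.

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

# Simplified stand-in for a Fedora API-M notification message;
# real messages carry more fields (timestamps, datastream IDs, etc.).
SAMPLE_MESSAGE = """<entry xmlns="http://www.w3.org/2005/Atom">
  <title>ingest</title>
  <summary>demo:123</summary>
</entry>"""

def route(message_xml):
    """Pick the next workflow step based on the API method and PID."""
    entry = ET.fromstring(message_xml)
    method = entry.find(ATOM_NS + "title").text
    pid = entry.find(ATOM_NS + "summary").text
    # Hypothetical step names -- substitute your own post-ingest tasks.
    steps = {
        "ingest": "run-post-ingest-checks",
        "modifyObject": "refresh-index",
        "purgeObject": "remove-from-index",
    }
    return pid, steps.get(method, "no-op")

print(route(SAMPLE_MESSAGE))
```

In practice the message would arrive from Fedora's JMS broker rather 
than a hard-coded string, but the routing logic stays the same: the 
repository stays ignorant of the workflow, and the workflow reacts to 
repository events.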

That's just one approach, however; many other approaches are possible, 
including giving Fedora a more prominent role in managing your digital 
objects.

-- Scott

Adam Wead wrote:
> Dear all,
> 
> Is anyone using Fedora to track ingestion workflow and could share his or
> her experiences?  I'm currently trying out a method that uses a simple
> object datastream that tracks what has been done to a SIP.  A submission
> would be in the form of a folder with some files.  A parent object in Fedora
> would hold descriptive metadata datastreams and a workflow metadata
> datastream, and each file would be ingested as a child object of the parent.
>  The number and location of files would be put in the workflow datastream
> prior to ingest.  When the ingest process runs, any errors or other results
> would be logged to the workflow datastream.
> 
> Has anyone tried such an approach, or could offer other ideas in terms of
> what are considered best practices, etc.?
> 
> Thanks in advance,
> 
> Adam Wead
> 
> 
> 
> ------------------------------------------------------------------------
> 
> ------------------------------------------------------------------------------
> Special Offer-- Download ArcSight Logger for FREE (a $49 USD value)!
> Finally, a world-class log management solution at an even better price-free!
> Download using promo code Free_Logger_4_Dev2Dev. Offer expires 
> February 28th, so secure your free ArcSight Logger TODAY! 
> http://p.sf.net/sfu/arcsight-sfd2d
> 
> 
> ------------------------------------------------------------------------
> 
> _______________________________________________
> Fedora-commons-users mailing list
> [email protected]
> https://lists.sourceforge.net/lists/listinfo/fedora-commons-users


-- 
Scott Prater
Library, Instructional, and Research Applications (LIRA)
Division of Information Technology (DoIT)
University of Wisconsin - Madison
[email protected]

