I see, vacuum() does not help in Virtuoso 7, thanks a lot though!

On Sat, Apr 18, 2015 at 1:02 PM, Hugh Williams <hwilli...@openlinksw.com>
wrote:

> Hi Gang,
>
> The latest Virtuoso 6+ database engines automatically compact themselves,
> so while earlier releases provided the “vacuum” [1] function, it is no
> longer necessary.
>
> The smallest you can make the database is by dumping and restoring it,
> after which it will be at its most compact.
>
> [1] http://docs.openlinksw.com/virtuoso/fn_vacuum.html
>
> Best Regards
> Hugh Williams
> Professional Services
> OpenLink Software, Inc.      //              http://www.openlinksw.com/
> Weblog   -- http://www.openlinksw.com/blogs/
> LinkedIn -- http://www.linkedin.com/company/openlink-software/
> Twitter  -- http://twitter.com/OpenLink
> Google+  -- http://plus.google.com/100570109519069333827/
> Facebook -- http://www.facebook.com/OpenLinkSoftware
> Universal Data Access, Integration, and Management Technology Providers
>
> On 17 Apr 2015, at 15:05, Gang Fu <gangfu1...@gmail.com> wrote:
>
> By the way, I have tried compressing the big db file with gzip, and I can
> get a compression ratio of better than 4:1, so I think there is still a
> lot of space in the db file that is not very important.
>
> Is there a function in Virtuoso that can perform post-loading optimization
> to reduce the db file size, which might boost performance as well?
>
> Best,
> Gang
>
> On Fri, Apr 17, 2015 at 8:32 AM, Gang Fu <gangfu1...@gmail.com> wrote:
>
>> Thank you very much, Morty! You are right, 'split' plus 'cat' is a better
>> option, since the server can start immediately with the rebuilt db file.
>>
>> Is there a way to test whether a stored procedure exists? I have another
>> ticket about this question, but I have not gotten any reply yet there :)
>>
>> On Wed, Apr 15, 2015 at 3:51 PM, Morty <morty+virtu...@frakir.org> wrote:
>>
>>> Yes, you can "split" a large file into many small files.  At the colo,
>>> you can put them back together again.  The command to put them back
>>> together is "cat".  The "join" command does something else, so you
>>> don't want to try to use it.
>>>
>>> NB: this is actually what the cat command is for.  "cat" is short for
>>> "concatenate".  Although it's rarely used for this purpose!  ;)
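As a concrete illustration of the split/cat/verify workflow (the file names and piece size here are stand-ins, not the actual 500 GB database):

```shell
# Create a small stand-in for the real db file
dd if=/dev/urandom of=virtuoso.db bs=1M count=8 2>/dev/null

# Split into 2 MB pieces: virtuoso.db.part.aa, .ab, ...
split -b 2M virtuoso.db virtuoso.db.part.

# (transfer the pieces individually, then on the destination host:)
cat virtuoso.db.part.* > virtuoso.db.rebuilt

# Verify the rebuilt file matches the original
sha1sum virtuoso.db virtuoso.db.rebuilt
```

The alphabetical suffixes that split generates sort correctly, so the shell glob hands the pieces to cat in the right order.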
>>>
>>> Alternatively, there are options for rsync that turn off the checksum
>>> stuff.  So if a file transfer gets interrupted, it picks up right
>>> where it left off.  You can then do file verification outside the
>>> scope of rsync, e.g. by doing sha1sum on both sides and comparing the
>>> results.
>>>
>>> Contact your local sysadmins for assistance with either of these
>>> options.  :)
>>>
>>> - Morty
>>>
>>>
>>> On Wed, Apr 15, 2015 at 02:50:31PM -0400, Gang Fu wrote:
>>> > We want to transfer the files to another location, 'colo' for disaster
>>> > recovery. The long distance transfer is time-consuming and may fail
>>> > sometimes.
>>> >
>>> > We are using rsync, and we believe that rsyncing one 500 GB file
>>> > versus rsyncing many small files does make a difference: rsync does a
>>> > checksum validation before transfer, so if a large portion of the many
>>> > small files have unchanged checksums, we only need to transfer a small
>>> > portion of them.
>>> >
>>> > Can we just 'split' and 'join' db files before and after transferring?
>>> >
>>> > Best,
>>> > Gang
>>> >
>>> > On Wed, Apr 15, 2015 at 1:17 PM, Morty <morty+virtu...@frakir.org>
>>> wrote:
>>> >
>>> > > On Tue, Apr 14, 2015 at 12:24:22PM -0400, Gang Fu wrote:
>>> > >
>>> > > > We want to copy a large virtuoso db from one server to another in
>>> > > > different location. We cannot copy single 500 GB db file, which is
>>> > > > slow and unstable.  So we want to break the db files in different
>>> > > > segments. I have tried with virtuoso striping: each segment has 20
>>> > > > GB, and in total we have over 25 segments.
>>> > >
>>> > > What issue are you seeing with transferring a 500GB file?
>>> > > Transferring one 500GB file should not be significantly slower than
>>> > > transferring 25x 20GB files.
>>> > >
>>> > > If you are concerned about a transfer interruption, you could use
>>> > > rsync.  rsync has options to resume a failed transfer.
>>> > >
>>> > > Alternatively, you could use the Linux/Unix "split" command to split
>>> > > the one large file into a bunch of smaller files.
>>> > >
>>> > > Or you could use the commercial version of virtuoso with built-in
>>> > > replication.
>>> > >
>>> > > - Morty
>>> > >
>>>
>>> --
>>>                            Mordechai T. Abzug
>>> Linux red-sonja 3.11.0-24-generic #42-Ubuntu SMP Fri Jul 4 21:19:31 UTC
>>> 2014 x86_64 x86_64 x86_64 GNU/Linux
>>> "A verbal contract isn't worth the paper it's written on." - Samuel
>>> Goldwyn
>>>
>>
>>
>
>
_______________________________________________
Virtuoso-users mailing list
Virtuoso-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/virtuoso-users
