On Friday 30 May 2008 9:59:09 pm Sandro Tosi wrote:
> this is a really annoying bug (sometimes even grave; consider my
> situation: on a 56k line, if I ctrl+c sitecopy, all info is lost and
> then I need to re-upload all the files or reinitialize from remote,
> both of which are unacceptable).
>
> The problem is that sitecopy resets the ~/.sitecopy/<site> file to 0
> bytes while uploading (I suppose it keeps the list in memory and
> flushes it only at the end), so if you ctrl+c sitecopy, the <site>
> information is lost.

Hi Sandro,

Thanks for your input. This is a known issue; upstream has confirmed it as well.

> A simple workaround is just to create a ~/.sitecopy/<site>.bak when
> the upload starts, and to remove it when ~/.sitecopy/<site> is
> flushed from memory.

Thanks for the tip. Can you prepare a patch for that? We can send it upstream too.
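In the meantime, something along these lines could guard the state file from outside sitecopy; a minimal wrapper sketch, assuming the site name and the `--update` invocation (the function name `safe_update` is just illustrative):

```shell
#!/bin/sh
# Sketch of the proposed workaround: keep a backup of the per-site
# state file while sitecopy runs, and restore it if the run is
# interrupted or fails before the state is flushed back to disk.

safe_update() {
    site="$1"
    state="$HOME/.sitecopy/$site"

    # Save the state file before sitecopy truncates it at upload start.
    cp -p "$state" "$state.bak" || return 1

    if sitecopy --update "$site"; then
        # Clean finish: sitecopy has rewritten the state file itself.
        rm -f "$state.bak"
    else
        # Interrupted or failed run: put the saved state back.
        mv "$state.bak" "$state"
        return 1
    fi
}

safe_update "$1"
```

The real fix belongs inside sitecopy, of course (write to a temporary file and rename it into place), but a wrapper like this needs no code changes to the package.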

> Please implement something: we cannot always wait for upstream to fix it.

True. But I am in touch with upstream to solve it. Let me look into it and 
try from my side (our side, indeed!).

-- 
 Cheers,
 Kartik Mistry | GPG: 0xD1028C8D | IRC: kart_
 Blogs: {ftbfs,kartikm}.wordpress.com
