Hi, I've spent the last two hours working on this bug. I found that even after modifying the variable to be off_t instead of int, on the i386 architecture the created file would be much smaller than the original, because fread is used to read the whole file, and fread can only read a maximum of size_t bytes (4 GiB on i386).
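For illustration, here is a minimal sketch of what the size check and proper error handling could look like; read_whole_file is a made-up helper, not bcrypt's actual code:

  /* Sketch only: names are hypothetical, not taken from bcrypt's source. */
  #define _FILE_OFFSET_BITS 64   /* make off_t 64-bit on i386 */
  #include <stdio.h>
  #include <stdint.h>
  #include <stdlib.h>
  #include <sys/stat.h>

  static unsigned char *read_whole_file(const char *path, off_t *out_size)
  {
      struct stat st;
      if (stat(path, &st) != 0 || !S_ISREG(st.st_mode))
          return NULL;

      /* fread() and malloc() take a size_t, so anything at or above
       * SIZE_MAX (4 GiB on i386) cannot be read into one buffer: skip
       * it and tell the user why. */
      if ((uintmax_t)st.st_size >= (uintmax_t)SIZE_MAX) {
          fprintf(stderr, "%s: too large to encrypt in memory, skipping\n", path);
          return NULL;
      }

      FILE *fp = fopen(path, "rb");
      if (fp == NULL)
          return NULL;   /* the caller must check this, not encrypt garbage */

      unsigned char *buf = malloc((size_t)st.st_size);
      if (buf == NULL ||
          fread(buf, 1, (size_t)st.st_size, fp) != (size_t)st.st_size) {
          free(buf);
          fclose(fp);
          return NULL;
      }
      fclose(fp);
      *out_size = st.st_size;
      return buf;
  }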
Also, the program tries to encrypt the whole file in memory, and malloc can likewise only allocate size_t bytes. The only way to fix this right now (without redoing the whole program) would be to add an early check: if the file is larger than the maximum value of size_t, the file can't be encrypted and is simply skipped.

However, I've found several other bugs that make this program behave badly in unexpected ways. For example, if bcrypt is run over a file that the user doesn't have read permission on, it reports that the file can't be opened, but the calling code never checks the error code returned by the function and starts encrypting whatever garbage was in the allocated buffer, eventually dying with a segfault. I suspect that if the buffer contents happened to be different, it could even finish "successfully" and try to remove the original file.

Checking the code, I found that there are many reasons why a file might be skipped during processing, but none of them are reported to the user, so the user has no idea what is going on, which files were encrypted, which files were skipped, and so on.

The fact that it tries to put the whole file in memory is the worst part. For very large files, this almost certainly means the machine runs out of memory. This behaviour is neither documented in the man page nor expected by the user.

So, summing up: the fix for this bug is to change the variable to off_t and check that the size is smaller than the maximum of size_t. However, the program is in such bad shape that it is not of release quality. I'll file a request for removal from testing.

-- 
Besos,
Marga