On 16/01/2008, martin f krafft <[EMAIL PROTECTED]> wrote:
> I just scanned 170 pages and saved them to PDF before running e.g.
> unpaper on them, just so that I would not have to rescan if anything
> went wrong. Just as well I did, because my X server crashed. So I
> tried to reimport the PDF, but found that for each page, gscan2pdf
> dumps a 12Mb PPM file, like x-085.ppm, into /tmp. This caused my 2Gb
> /tmp to fill up. I now wonder why I was able to scan all these pages
> in the first place and whether there could be a better way to handle
> importing.
I take it that the X server crash was nothing to do with gscan2pdf?

Importing is handled by first calling pdfimages, which extracts the
images from the PDF. By default, PPMs are extracted (the scanner itself
produces PNMs). It would be possible to extract the images for just a
range of pages, but gscan2pdf doesn't support that at the moment.

A better option would have been to save the 170 pages as either a TIFF,
which would be reimported as TIFF and compresses better, or as a DjVu
file, which has better compression still.

Regards

Jeff
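For reference, pdfimages can already limit extraction to a page range
with its -f (first page) and -l (last page) options. Below is a minimal
sketch of how an import step could call it from Python in batches, so
that only a handful of PPMs sit in a temporary directory at any one
time; the extract_page_images helper is hypothetical and is not how
gscan2pdf itself performs the import.

    import os
    import subprocess
    import tempfile

    def extract_page_images(pdf_path, first_page, last_page):
        """Extract the embedded images for one page range of a PDF via
        pdfimages. Hypothetical helper for illustration only."""
        outdir = tempfile.mkdtemp(prefix="pdf-import-")
        prefix = os.path.join(outdir, "page")
        # -f/-l restrict pdfimages to the requested pages; output files
        # are written as <prefix>-NNN.ppm (or .pbm for bitonal images).
        subprocess.run(
            ["pdfimages",
             "-f", str(first_page),
             "-l", str(last_page),
             pdf_path, prefix],
            check=True,
        )
        return sorted(os.path.join(outdir, f) for f in os.listdir(outdir))

    # Example: work through a 170-page scan twenty pages at a time,
    # processing and deleting each batch before extracting the next.
    if __name__ == "__main__":
        for start in range(1, 171, 20):
            batch = extract_page_images("scan.pdf", start, min(start + 19, 170))
            for ppm in batch:
                print(ppm)

Batching like this keeps peak disk use to one batch of pages rather
than the whole document, which is what filled the 2Gb /tmp above.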