J.C. Roberts wrote on Fri, Jun 29, 2007 at 12:46:02PM -0700:
> The unarj v2.43 archiver we have for use with clamav virus scanning does
> not really work. The same is true for the newer 2.65 version released
> by the author. The problem is unarj is unable to extract with paths,
> hence it will overwrite files and stuff won't actually be scanned.
>
> At the moment, I've got a working port of 2.65 patched to extract with
> full paths. The last problem to solve is preventing path traversal
> exploits. I suspect that just searching for double dot ".." in the to
> be created path string is not enough but since I've never done this
> sort of thing, I'm not sure where/what to ask.
I just checked what Mark Dowd, John McDonald, and Justin Schuh say about
path traversal vulnerabilities in "The Art of Software Security
Assessment". They treat the topic at several points, but don't give any
reference implementation saying "do it like this".

If you want to keep unarj portable, keep in mind that different
platforms use different path syntax: just to give two non-exhaustive
examples, "/" vs. "\" for separators and "^/" vs. "^[A-Z]:\\" for file
system roots come to mind. In the following, I shall restrict myself to
Unix, as I have done too little programming on other platforms.

On Unix, as far as I know, the only ways to achieve path traversal are
- either starting the path with "/"
- or including ".." in the path.

When checking the path, make sure to fully concatenate it first before
checking it. Otherwise, dir="." + file="./myfile" might get you. Check
for just "..", not merely "../", "/..", or even "/../". Keep in mind
that "//" is equivalent to "/".

Keep in mind that handling paths may expose other vulnerabilities
besides path traversal, in particular buffer overflows or path
truncation triggered by paths containing long strings like
"////.////././//". Depending on how you use the path at the end of the
day, also give thought to shell globbing, which is a much more
difficult problem than path traversal alone. Note that shell globbing
is shell-dependent: try `ls -d .*.` on ksh and bash.

Finally, consider whether you only need path checking or whether you
also need path normalisation. Path normalisation is considerably more
difficult than just checking for path traversal. On the other hand,
depending on the particular context, using realpath(3) and checking the
result may or may not be a nice way to guard against path traversal.

You might also consider looking at the tar(1) sources in
/usr/src/bin/pax to understand how initial slashes can be handled. On
the other hand, even venerable tar(1) does not bother preventing path
traversal. Why?
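The Unix checks just listed might be sketched in C like this. This is
only an illustration of the rules above, not code from unarj, and the
function name path_is_unsafe is my own invention. It expects the fully
concatenated path, treats runs of slashes like a single slash, and
catches a bare ".." as well as ".." components:

```c
/* Sketch of a path traversal check, assuming Unix path syntax.
 * Returns 1 if the (fully concatenated) path is unsafe, 0 otherwise. */
int
path_is_unsafe(const char *path)
{
	const char *p = path;

	if (*p == '/')		/* absolute path escapes the target dir */
		return 1;

	while (*p != '\0') {
		/* skip any run of slashes; "//" is equivalent to "/" */
		while (*p == '/')
			p++;
		/* p now points at the start of a component (or the end) */
		if (p[0] == '.' && p[1] == '.' &&
		    (p[2] == '/' || p[2] == '\0'))
			return 1;	/* ".." component found */
		/* advance to the next separator */
		while (*p != '\0' && *p != '/')
			p++;
	}
	return 0;
}
```

Note the test on p[2]: a component like "..data" is a legitimate file
name and must not be rejected, so ".." only counts when followed by a
slash or the end of the string.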
You can regard having ".." in file names in the archive as a feature
rather than a bug. Unless running privileged, you cannot clobber
/root/.profile anyway. In case you have installed tar SUID, you get
what you deserve. When running anything as root, you should only be
using trusted input files anyway. Look here:

[EMAIL PROTECTED] $ mkdir -p oldroot/olddir newroot/newdir
[EMAIL PROTECTED] $ touch oldroot/olddir/myfile
[EMAIL PROTECTED] $ cd oldroot/olddir/
[EMAIL PROTECTED] $ tar -cvf /tmp/my.tar ..
..
../olddir
../olddir/myfile
[EMAIL PROTECTED] $ cd ../../newroot/newdir/
[EMAIL PROTECTED] $ tar -xvf /tmp/my.tar
..
../olddir
../olddir/myfile
[EMAIL PROTECTED] $ ls
[EMAIL PROTECTED] $ cd ..
[EMAIL PROTECTED] $ find .
.
./newdir
./olddir
./olddir/myfile

> I would like to find a standardized, well tested way to test strings
> for potential path traversal sequences. Searching with google has
> been fruitless. If you'd be so kind as to drop kick me in the right
> direction, possibly example code, it would be much appreciated.

Perhaps someone more experienced can comment on this one. I'm not
exactly sure, but I suspect you found nothing for the following simple
reason: if all you want to do is check for simple path traversal under
Unix, m/^\// and m/\.\./ are all you need.

