On Jul 27, 2011 4:28 AM, "Ivan Shmakov" <i...@gray.siamics.net> wrote:
>
> >>>>> shawn wilson <ag4ve...@gmail.com> writes:
> > However, I'd look at some of the bio perl modules if this was the
> > type of data I was looking at. Either way, learning dozens of tools
> > to manipulate lots of data is quite time consuming, prone to failure,
> > and quite frankly senseless.
>
> How it's different to learning dozens of functions documented in
> perlfunc(3)? Or even more, should CPAN modules be taken into
> account? How could it be that the Shell commands do not form a
> library, or a set of, of a sort?

A few differences: different commands take different switches to do the same job (sed vs awk vs grep overlap for tons of uses), and bash is slower. I also find it easier for bad or inconsistent data to break a shell script; in most languages I can at least trap errors with try / catch (a plus, though not the main point), and validating data in bash is a pain. Also, I don't know of a proper debugger for bash (nothing like perl -d or gdb).

However, none of this answers the OP's question. So, since I started this, I'll open a new thread if we want to continue (preferably with code examples :) ). And I do hope you want to continue, as I find the debate fun, but it's way OT for the OP's question at this point.
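To give that hypothetical new thread a starting point, here's a rough sketch of what I mean by trapping errors and validating data before using it. The "id / count" record layout and the sample lines are made up purely for illustration, and Try::Tiny is just one CPAN option for try / catch (a plain eval {} block works too):

#!/usr/bin/perl
use strict;
use warnings;
use Try::Tiny;    # try/catch from CPAN; plain eval {} would also do

# Hypothetical input: whitespace-separated "id count" records.
# Bad records get caught and reported instead of silently producing
# garbage downstream, which is much harder to arrange in a shell pipeline.
while ( my $line = <DATA> ) {
    chomp $line;
    try {
        my ( $id, $count ) = split ' ', $line;

        # Validate before using the data -- the part that's a pain in bash.
        die "missing fields\n"    unless defined $id && defined $count;
        die "count not numeric\n" unless $count =~ /\A\d+\z/;

        printf "%s => %d\n", $id, $count * 2;
    }
    catch {
        # $_ holds the error message inside Try::Tiny's catch block.
        warn "skipping bad line '$line': $_";
    };
}

__DATA__
seq1 10
seq2 oops
seq3 7

And that script can be stepped through with perl -d, which is the debugging half of the argument.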