> I don't know anything about PIL and its implementation, but I would not
> be surprised if the cost is mostly in accessing items which are not
> contiguous in memory and in bounds checking (to check where you are in the
> subimage). Conditionals inside loops often kill performance, and the
> actual computation is probably only a small part of the total cost.
>> The image I tested initially is a 2000x2000 RGB TIFF, ~11 MB in size.
I continued testing with the initial PIL approach
and 3 alternative numpy scripts:
# Script 1 - indexing
for i in range(10):
    imarr[:, :, 0].mean()
    imarr[:, :, 1].mean()
    imarr[:, :, 2].mean()
# Script 2 - slicing
for i in range(10):
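(The body of Script 2, and any further scripts, was cut off in the quote above. As a sketch only, not the original code, here is one plausible reshape-based variant of the per-channel mean; the random stand-in array and all names in it are assumptions.)

import numpy as np

# Stand-in for the test image (the posts above load "10.tif" with PIL);
# a random 2000x2000 RGB array is used here so the sketch runs on its own.
imarr = np.random.randint(0, 256, size=(2000, 2000, 3), dtype=np.uint8)

for i in range(10):
    # Flatten the two spatial axes and average down the rows:
    # one pass over the data gives all three channel means at once.
    rgb_means = imarr.reshape(-1, 3).mean(axis=0)

print(rgb_means)   # three per-channel means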
> > arr=asarray(img)
> > arr.shape
> > (1600,1900,3)
> No, it means that you have 1600 rows, 1900 columns and 3 colour channels.
According to the scipy documentation at
http://pages.physics.cornell.edu/~myers/teaching/ComputationalMethods/python/arrays.html
you are right. In this case I import numpy.
Testing PIL vs numpy at calculating the mean value of each color channel of
an image, I timed the following:
from PIL import Image, ImageStat
from numpy import asarray

impil = Image.open("10.tif")
imnum = asarray(impil)

# in PIL
for i in range(1, 10):
    stats = ImageStat.Stat(impil)
    stats.mean

# in numpy
for i in range(1, 10):
    imnum.reshape(-1, 3).mean(axis=0)
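(The actual timings were cut off above. For reference, here is a minimal sketch of how the two approaches could be timed side by side with the standard timeit module; the synthetic in-memory image and the repeat count of 10 are assumptions standing in for the 10.tif runs above.)

import timeit

# Shared setup: a synthetic RGB image so the sketch runs without "10.tif".
setup = """
from PIL import Image, ImageStat
from numpy import asarray
impil = Image.new("RGB", (1900, 1600))
imnum = asarray(impil)
"""

# Per-channel mean via PIL's ImageStat, repeated 10 times.
t_pil = timeit.timeit("ImageStat.Stat(impil).mean", setup=setup, number=10)

# Per-channel mean via the reshaped numpy array, repeated 10 times.
t_np = timeit.timeit("imnum.reshape(-1, 3).mean(axis=0)", setup=setup, number=10)

print("PIL   : %.4f s" % t_pil)
print("numpy : %.4f s" % t_np)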
Hi,
I'm using PIL for image processing, but lately I've also been trying numpy for
the flexibility and superior speed it offers. The first thing I noticed is that
for an RGB image with height=1600 and width=1900, while
img=Image.open('something.tif')
img.size
(1900,1600)
then
arr=asarray(img)
arr.shape
(1600, 1900, 3)
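(For what it's worth, the two conventions can be checked without a file on disk; a small sketch, with Image.new standing in for 'something.tif':)

from PIL import Image
from numpy import asarray

# Hypothetical in-memory RGB image standing in for 'something.tif'.
img = Image.new("RGB", (1900, 1600))

print(img.size)             # (1900, 1600)    -> PIL reports (width, height)
print(asarray(img).shape)   # (1600, 1900, 3) -> numpy reports (rows, columns, channels)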