ven...@gmail.com wrote:
>> On Thursday, October 17, 2013 at 10:50:49 AM UTC-4, Josh Aas wrote:
>>> Blog post is here:
>>>
>>> https://blog.mozilla.org/research/2013/10/17/studying-lossy-image-compression-efficiency/
>>>
>>> Study is here:
>>
e.
~Daniel
On 2/24/18 9:51 AM, audioscaven...@gmail.com wrote:
> On Thursday, October 17, 2013 at 10:50:49 AM UTC-4, Josh Aas wrote:
>> Blog post is here:
>>
>> https://blog.mozilla.org/research/2013/10/17/studying-lossy-image-compression-efficiency/
>>
>> Stud
On 2018-02-24 12:51 PM, audioscaven...@gmail.com wrote:
On Thursday, October 17, 2013 at 10:50:49 AM UTC-4, Josh Aas wrote:
Blog post is here:
https://blog.mozilla.org/research/2013/10/17/studying-lossy-image-compression-efficiency/
Study is here:
http://people.mozilla.org/~josh
On Thursday, October 17, 2013 at 10:50:49 AM UTC-4, Josh Aas wrote:
> Blog post is here:
>
> https://blog.mozilla.org/research/2013/10/17/studying-lossy-image-compression-efficiency/
>
> Study is here:
>
> http://people.mozilla.org/~josh/lossy_compressed_image_study_octobe
On 26/12/2014 08:38, mikethedudishd...@gmail.com wrote:
>> color blindness
> I know this is a common term for color vision deficiency, but it's
> the wrong term. So-called "color blindness" really means you see
> colors *differently* than other people; sometimes it means you cannot
> see some sh
> color blindness
I know this is a common term for color vision deficiency, but it's the wrong
term. So-called "color blindness" really means you see colors *differently*
than other people; sometimes it means you cannot see some shades that others
do, but it never means you don't see colors.
On Tuesday, July 15, 2014 7:34:35 AM UTC-7, Josh Aas wrote:
> This is the discussion thread for Mozilla's July 2014 Lossy Compressed Image
> Formats Study and the Mozilla Research blog post entitled "Mozilla Advances
> JPEG Encoding with mozjpeg 2.0".
It would help if you would use much more dis
On Tuesday, July 15, 2014 1:38:00 PM UTC-6, stone...@gmail.com wrote:
> Would be nice if you guys just implemented JPEG2000. It's 2014.
Based on what data?
> Not only would you get a lot more than a 5% encoding boost, but you'd get
> much higher quality images to boot.
Based on what data?
On Tuesday, July 15, 2014 8:34:35 AM UTC-6, Josh Aas wrote:
> This is the discussion thread for Mozilla's July 2014 Lossy Compressed Image
> Formats Study and the Mozilla Research blog post entitled "Mozilla Advances
> JPEG Encoding with mozjpeg 2.0".
Could you post the command lines used for th
On Thursday, July 24, 2014 at 11:59:58 PM UTC+2, Josh Aas wrote:
> > I selected 10,000 random JPEGs that we were caching for customers and ran
> > them through mozjpeg 2.0 via jpegtran. Some interesting facts:
>
> With mozjpeg you probably want to re-encode with cjpeg rather than jpegtran.
> W
On Friday, July 18, 2014 10:05:19 AM UTC-5, j...@cloudflare.com wrote:
> I selected 10,000 random JPEGs that we were caching for customers and ran
> them through mozjpeg 2.0 via jpegtran. Some interesting facts:
With mozjpeg you probably want to re-encode with cjpeg rather than jpegtran. We
add
> Are there any plans to integrate into other tools, specifically imagemagick?
>
> Or would you leave that up to others?
For now we're going to stay focused on improving compression in mozjpeg's
library. I think a larger improved toolchain for optimizing JPEGs would be
great, but it's probably
On Tuesday, July 15, 2014 3:15:13 PM UTC-5, perez@gmail.com wrote:
> #1 Would it be possible to have the same algorithm that is applied to webP to
> be applied to JPEG?
I'm not sure. WebP was created much later than JPEG, so I'd think/hope they're
already using some equivalent to trellis q
One option that I haven't seen compared is the combination of JPEG w/ packJPG
(http://packjpg.encode.ru/?page_id=17). packJPG can further compress JPEG
images another 20%+ and still reproduce the original bit-for-bit.
More details on how this is done can be found here:
http://mattmahoney.net/d
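packJPG's headline claim is lossless round-tripping: decompressing the .pjg must reproduce the original JPEG bit for bit. A minimal sketch of how one might check that claim, assuming a packJPG-style command line (the command names in the comments are assumptions, not the tool's documented interface):

```python
import hashlib

def file_digest(path):
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def files_identical(path_a, path_b):
    """True if the two files are bit-for-bit identical."""
    return file_digest(path_a) == file_digest(path_b)

# Hypothetical round trip (command names are assumptions; check the
# packJPG docs for the real interface):
#   packjpg photo.jpg   -> photo.pjg  (lossless recompression)
#   packjpg photo.pjg   -> photo.jpg  (restore the original)
# files_identical("photo.jpg", "restored/photo.jpg") should then be True.
```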
On 19/07/2014 22:40, Ralph Giles wrote:
> Probably not for Firefox OS, if you mean mozjpeg. Not necessarily
> because it uses hardware, but because mozjpeg is about spending more cpu
> power to compress images. It's more something you'd use server-side or
> in creating apps. The phone uses libjpeg-
On 2014-07-19 1:14 PM, Caspy7 wrote:
> Would this code be a candidate for use in Firefox OS or does most of that
> happen in the hardware?
Probably not for Firefox OS, if you mean mozjpeg. Not necessarily
because it uses hardware, but because mozjpeg is about spending more cpu
power to compress
Would this code be a candidate for use in Firefox OS or does most of that
happen in the hardware?
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
On Tuesday, July 15, 2014 3:34:35 PM UTC+1, Josh Aas wrote:
> This is the discussion thread for Mozilla's July 2014 Lossy Compressed Image
> Formats Study and the Mozilla Research blog post entitled "Mozilla Advances
> JPEG Encoding with mozjpeg 2.0".
Josh,
I work for CloudFlare on many things
Cool
Re: decoding.
I'm replying to this note:
"1. We're fans of libjpeg-turbo - it powers JPEG decoding in Firefox because
its focus is on being fast, and that isn't going to change any time soon. The
mozjpeg project focuses solely on encoding, and we trade some CPU cycles for
smaller fi
On 7/15/14 12:38 PM, stonecyp...@gmail.com wrote:
> Similarly there's a reason that people are still hacking video into
> JPEGs and using animated GIFs.
People are using animated GIFs, but the "animated GIFs" people are using may
not actually be GIF files [1].
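The point above is that many "animated GIFs" served today are actually short videos in a different container. A minimal sketch of sniffing the real container from the magic bytes (the format list is illustrative, not exhaustive):

```python
def sniff_animation_container(data: bytes) -> str:
    """Guess the container of an 'animated GIF' from its magic bytes."""
    if data[:6] in (b"GIF87a", b"GIF89a"):
        return "gif"
    if data[4:8] == b"ftyp":             # ISO BMFF box header (MP4 and friends)
        return "mp4"
    if data[:4] == b"\x1a\x45\xdf\xa3":  # EBML header (WebM/Matroska)
        return "webm"
    return "unknown"
```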
(2014/07/16 5:43), Chris Peterson wrote:
> Do C
On 7/15/14 12:38 PM, stonecyp...@gmail.com wrote:
On Tuesday, July 15, 2014 7:34:35 AM UTC-7, Josh Aas wrote:
This is the discussion thread for Mozilla's July 2014 Lossy Compressed Image Formats
Study and the Mozilla Research blog post entitled "Mozilla Advances JPEG Encoding
with mozjpeg 2.0"
On Tuesday, July 15, 2014 10:34:35 AM UTC-4, Josh Aas wrote:
> This is the discussion thread for Mozilla's July 2014 Lossy Compressed Image
> Formats Study and the Mozilla Research blog post entitled "Mozilla Advances
> JPEG Encoding with mozjpeg 2.0".
#1 Would it be possible to have the same al
On Tuesday, July 15, 2014 7:34:35 AM UTC-7, Josh Aas wrote:
> This is the discussion thread for Mozilla's July 2014 Lossy Compressed Image
> Formats Study and the Mozilla Research blog post entitled "Mozilla Advances
> JPEG Encoding with mozjpeg 2.0".
Would be nice if you guys just implemented J
On Tuesday, July 15, 2014 7:34:35 AM UTC-7, Josh Aas wrote:
> This is the discussion thread for Mozilla's July 2014 Lossy Compressed Image
> Formats Study and the Mozilla Research blog post entitled "Mozilla Advances
> JPEG Encoding with mozjpeg 2.0".
Would be nice if you guys just implemented J
Hello Josh,
thank you and all involved for your efforts to make the web faster.
Are there any plans to integrate into other tools, specifically imagemagick?
Or would you leave that up to others?
With all the options available for image processing one can end up with
building quite a complex chai
Study is here:
http://people.mozilla.org/~josh/lossy_compressed_image_study_july_2014/
Blog post is here:
https://blog.mozilla.org/research/2014/07/15/mozilla-advances-jpeg-encoding-with-mozjpeg-2-0/
This is the discussion thread for Mozilla's July 2014 Lossy Compressed Image
Formats Study and the Mozilla Research blog post entitled "Mozilla Advances
JPEG Encoding with mozjpeg 2.0".
On Saturday, October 19, 2013 12:14:40 PM UTC-4, stephan...@gmail.com wrote:
> Of course, you can throw a bunch of images at some naive observers with a
> nice web interface, but what about their screen differences? What about
> their lighting condition differences? How do you validate people for
On Feb 23, 2014, at 5:17 PM, evacc...@gmail.com wrote:
> On Monday, October 21, 2013 8:54:24 AM UTC-6, tric...@accusoft.com wrote:
>>> - I suppose that the final lossless step used for JPEGs was the usual
>>> Huffman encoding and not arithmetic coding, have you considered testing the
>>> latter
Why did you choose JPEG quality as your independent variable? Wouldn't it make
more sense to use the similarity value? When trying to match other formats to
the JPEG's value, you can get close but can't exactly match it. This creates an
inherent bias.
So for one thing, the data should have incl
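The mismatch described above can be seen with a toy model: quality is a discrete knob, so matching a target file size exactly is rarely possible, and every comparison inherits the residual. A sketch using an assumed linear size model rather than a real encoder:

```python
def closest_quality(target_size, size_of, qualities=range(1, 101)):
    """Pick the quality whose output size is closest to target_size.

    Because quality is a discrete knob, the match is almost never exact;
    the second return value is the leftover size mismatch (the bias).
    """
    best = min(qualities, key=lambda q: abs(size_of(q) - target_size))
    return best, size_of(best) - target_size

# Toy monotone size model standing in for a real encoder (an assumption,
# not measured data): size in bytes grows linearly with quality.
mock_size = lambda q: 1000 + 37 * q

q, residual = closest_quality(2500, mock_size)  # nearest quality overshoots by 17 bytes
```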
On Monday, October 21, 2013 8:54:24 AM UTC-6, tric...@accusoft.com wrote:
> > - I suppose that the final lossless step used for JPEGs was the usual
> > Huffman encoding and not arithmetic coding, have you considered testing the
> > latter one independently?
>
> Uninteresting since nobody us
About the methodology of using identical colorspace conversion for all formats,
the study asserts
> and manual visual spot checking did not suggest the conversion
> had a large effect on perceptual quality
I think this claim should be examined more carefully.
Take this image, for example: https:
On Thursday, October 17, 2013 7:48:16 AM UTC-7, Josh Aas wrote:
> This is the discussion thread for the Mozilla Research blog post entitled
> "Studying Lossy Image Compression Efficiency", and the related study.
A few queries regarding the study's methodology:
1.) The com
On Tuesday, October 22, 2013 11:12:08 AM UTC+4, Yoav Weiss wrote:
>
> Last time I checked, about 60% of all PNG image traffic (so roughly 9% of all
> Web traffic, according to HTTPArchive.org) is PNGs of color type 6, i.e. 24-bit
> lossless images with an alpha channel. A large part of these PNGs a
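For readers who want to reproduce the color-type-6 measurement on their own corpus: the color type sits at a fixed offset in the PNG IHDR chunk, which the spec requires to come first. A minimal stdlib-only sketch (error handling deliberately thin):

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_color_type(data: bytes) -> int:
    """Return the color type from a PNG's IHDR chunk.

    Type 6 = truecolor with alpha, i.e. the 24-bit + alpha images
    discussed above.
    """
    if data[:8] != PNG_SIG:
        raise ValueError("not a PNG")
    length, ctype = struct.unpack(">I4s", data[8:16])
    if ctype != b"IHDR":
        raise ValueError("IHDR must be the first chunk")
    # IHDR data: width(4) height(4) bit depth(1) color type(1) ...
    width, height, bit_depth, color_type = struct.unpack(">IIBB", data[16:26])
    return color_type
```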
On Tuesday, October 22, 2013 at 10:15 AM, pornel...@gmail.com wrote:
> On Tuesday, 22 October 2013 08:12:08 UTC+1, Yoav Weiss wrote:
>
> > This is a part of Web traffic that would make enormous gains from an
> > alpha-channel capable format, such as WebP or JPEG-XR (Don't know if
> > HEVC-MS
On Tuesday, 22 October 2013 08:12:08 UTC+1, Yoav Weiss wrote:
> This is a part of Web traffic that would make enormous gains from an
> alpha-channel capable format, such as WebP or JPEG-XR (Don't know if HEVC-MSP
> has an alpha channel ATM), yet this is completely left out of the research. I
I have a couple of points which IMO are missing from the discussion.
# JPEG's missing features & alpha channel capabilities in particular
Arguably, one of the biggest gains from WebP/JPEG-XR support is the ability to
send real life photos with an alpha channel.
Last time I checked, about 60% of
On Monday, October 21, 2013 4:05:36 PM UTC+1, tric...@accusoft.com wrote:
> There is probably a good study by the EPFL from, IIRC, 2011, published at the
> SPIE, Applications of Digital Image Processing, and many many others.
>
> Outcome is more or less that JPEG 2000 and JPEG XR are on par for a
> I think it would be worthwhile to do two experiments with real people
> evaluating the images:
> 1) For a given file size with artifacts visible, which format
> produces the least terrible artifacts?
> 2) Which format gives the smallest file size with a level of
> artifacts that
> Are there now JPEG 2000 encoders that make images such that if you
> want to decode an image in quarter of the full-size in terms of number
> of pixels (both dimensions halved), it is sufficient to use the first
> quarter of the file length?
Yes, certainly. Just a matter of the progres
There are probably a couple of issues here:
> - Why didn't you include JPEG 2000?
This is the first one. However, I would also include various settings of the
codecs involved. There is quite a bit one can do. For example, the overlap
settings for XR or visual weighting for JPEG 2000, or subsamp
It's not as simple as reading n% of the bit-stream – the image needs
to be encoded using tiles so a tile-aware decoder can simply read only
the necessary levels. This is very popular in the library community
because it allows a site like e.g. http://chroniclingamerica.loc.gov/
to serve tiles for a
On Fri, Oct 18, 2013 at 5:16 PM, wrote:
> I think JP2 support could potentially be very interesting because it would
> make responsive images almost trivial without requiring separate files (i.e.
> srcset could simply specify a byte-range for each size image) but the
> toolchain support needs
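The back-of-the-envelope behind the byte-range/srcset idea: in a resolution-progressive stream, each halving of both dimensions quarters the pixel count, so the prefix needed shrinks by roughly 4x per level. A sketch of that arithmetic only; a real JPEG 2000 reader would have to honor packet and tile boundaries rather than cut at an arbitrary byte:

```python
def prefix_bytes_for_scale(file_length: int, halvings: int) -> int:
    """Rough prefix length needed to decode a resolution-progressive
    stream at 1/2**halvings of the full dimensions.

    Assumes coded size scales with pixel count, i.e. each halving of
    both dimensions needs about a quarter of the bytes. Illustration
    only, not a JPEG 2000 parser.
    """
    return max(1, file_length // (4 ** halvings))

full_length = 1_000_000  # hypothetical full-resolution file size in bytes
for halvings in range(3):
    scale = 1 / 2 ** halvings
    print(scale, prefix_bytes_for_scale(full_length, halvings))
```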
On Fri, Oct 18, 2013 at 1:08 AM, wrote:
> Which leads to think that doing some blinded experiment (real people
> evaluating the images) to compare compressed images has still some value.
I think it would be worthwhile to do two experiments with real people
evaluating the images:
1) For a given
> I have a couple of fundamental issues with how you're calculating 3 of the 4
> metrics (all but RGB-SSIM, which I didn't think too much about)
You are right; the methodology is not clear on this point.
> First, am I correct in my reading of your methodology that for all metrics,
> you en
I have a couple of fundamental issues with how you're calculating 3 of the 4
metrics (all but RGB-SSIM, which I didn't think too much about)
First, am I correct in my reading of your methodology that for all metrics, you
encode a color image (4:2:0) and use that encoded filesize? If so, then all
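A rough accounting of why luma-only metrics and 4:2:0 file sizes don't line up: under 4:2:0 the two chroma planes are subsampled 2x in both dimensions, so only about two thirds of the raw samples are luma, yet all the chroma bits still count toward the file size being compared. A small sketch of the sample bookkeeping:

```python
# Raw samples per pixel under common chroma subsampling schemes: the
# luma plane is full resolution, the two chroma planes are subsampled.
SAMPLES_PER_PIXEL = {
    "4:4:4": 1 + 2 * 1.0,   # chroma at full resolution
    "4:2:2": 1 + 2 * 0.5,   # chroma halved horizontally
    "4:2:0": 1 + 2 * 0.25,  # chroma halved in both dimensions
}

def luma_share(scheme: str) -> float:
    """Fraction of raw samples that are luma under a given scheme."""
    return 1 / SAMPLES_PER_PIXEL[scheme]
```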
On Saturday, October 19, 2013 1:12:14 AM UTC+2, Ralph Giles wrote:
> On 2013-10-18 1:57 AM, Yoav Weiss wrote:
>
> > Would you consider a large sample of lossless Web images (real-life images
> > served as PNG24, even though it'd be wiser to serve them as JPEGs) to be
> > unbiased enough to
I'll just talk about the quality evaluation aspects of this study, as it is a
field I know quite well (my PhD was on the topic, albeit in video specifically).
> I think the most important kind of comparison to do is a subjective blind
> test with real people. This of course produces less accurate
On Saturday, October 19, 2013 12:30:15 PM UTC+1, Jeff Muizelaar wrote:
> - Original Message -
> > On Saturday, October 19, 2013 12:12:14 AM UTC+1, Ralph Giles wrote:
> > > On 2013-10-18 1:57 AM, Yoav Weiss wrote:
> > Do you have such a sample?
> >
> > For what it's worth h
- Original Message -
> On Saturday, October 19, 2013 12:12:14 AM UTC+1, Ralph Giles wrote:
> > On 2013-10-18 1:57 AM, Yoav Weiss wrote:
> > Do you have such a sample?
>
> For what it's worth, here's an image I made quite a while ago showing the
> results of my own blind subjective comparis
On Saturday, October 19, 2013 12:12:14 AM UTC+1, Ralph Giles wrote:
> On 2013-10-18 1:57 AM, Yoav Weiss wrote:
> Do you have such a sample?
For what it's worth, here's an image I made quite a while ago showing the results
of my own blind subjective comparison between codecs:
http://www.filedropper
On 2013-10-18 1:57 AM, Yoav Weiss wrote:
> Would you consider a large sample of lossless Web images (real-life images
> served as PNG24, even though it'd be wiser to serve them as JPEGs) to be
> unbiased enough to run this research against? I believe such a sample would
> better represent Web i
I think you are attacking this from the wrong angle. Being responsible in an
enterprise for quite a few sites, most issues I have are cases where all current
formats fail miserably. To make the point, see the two following images, where I
have to live with huge PNG-24 files, due to a) alpha-transpara
On Thursday, October 17, 2013 1:50:12 PM UTC-4, cry...@free.fr wrote:
> Thank you for publishing this study, here are my first questions:
>
> - Why didn't you include JPEG 2000?
You might find https://bugzilla.mozilla.org/show_bug.cgi?id=36351#c120
interesting: it discusses what it would take to
Very interesting study. I’m shocked to see WebP and JPEG-XR perform so poorly
on so many of the tests. Do they really perform *that* much *worse* than JPEG?
It seems hard to imagine. I've done my own tests on JPEG, WebP and JPEG-XR by
blindly comparing files of the same size and deciding subjec
On Thursday, October 17, 2013 4:48:16 PM UTC+2, Josh Aas wrote:
> This is the discussion thread for the Mozilla Research blog post entitled
> "Studying Lossy Image Compression Efficiency", and the related study.
Thank you for publishing this research!
While I like the methodolo
HDR-VDP-2 is a relatively recent metric that produces predictions for difference
visibility and quality degradation.
http://sourceforge.net/apps/mediawiki/hdrvdp/index.php?title=Main_Page
It could be interesting to add this metric in future studies.
Rafał Mantiuk (the guy behind HDR-VDP-2) also w
On Thursday, October 17, 2013 12:50:12 PM UTC-5, cry...@free.fr wrote:
> Thank you for publishing this study, here are my first questions:
>
> - Why didn't you include JPEG 2000?
We couldn't test everything; we picked a small set of the formats that we hear
the most about and that seem interesti
On 10/17/2013 9:48 AM, Josh Aas wrote:
This is the discussion thread for the Mozilla Research blog post entitled "Studying
Lossy Image Compression Efficiency", and the related study.
HEVC-MSP did really well. It's unfortunate that Mozilla could not use it
in any capacity since i
Thank you for publishing this study, here are my first questions:
- Why didn't you include JPEG 2000?
- Correct me if I'm wrong but JPEG-XR native color space is not Y'CbCr this
means that this format had to perform an extra (possibly lossy) color space
conversion.
- I suppose that the final lo
On Thursday, 17 October 2013 10:48:16 UTC-4, Josh Aas wrote:
> This is the discussion thread for the Mozilla Research blog post entitled
> "Studying Lossy Image Compression Efficiency", and the related study.
Would be interesting if you could post your conclusions
Blog post is here:
https://blog.mozilla.org/research/2013/10/17/studying-lossy-image-compression-efficiency/
Study is here:
http://people.mozilla.org/~josh/lossy_compressed_image_study_october_2013/
This is the discussion thread for the Mozilla Research blog post entitled
"Studying Lossy Image Compression Efficiency", and the related study.