JPEG Quality (compression)
The quality slider allows you to control the amount of JPEG compression. A value of 100 means minimum compression, while a value of 1 means maximum compression.
The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality.
JPEG typically achieves 10 to 1 compression with little perceivable loss in image quality.
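As a rough illustration of what that ratio means in practice (10:1 is a typical figure, not a guarantee, and the image dimensions here are just an example), the savings can be estimated from the uncompressed size:

```python
# Rough estimate of JPEG savings for a 1920x1080 24-bit RGB image,
# assuming the typical 10:1 compression ratio mentioned above.
width, height = 1920, 1080
bytes_per_pixel = 3  # 8 bits each for R, G, B

uncompressed = width * height * bytes_per_pixel  # 6,220,800 bytes (~5.9 MB)
jpeg_estimate = uncompressed // 10               # ~622,080 bytes (~607 KB)

print(uncompressed, jpeg_estimate)
```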
In my experience, quality values between 65 and 95 are acceptable; lower values can work in certain situations.
Experiment with this slider to find the best tradeoff between quality loss and file size.
To minimize the loss of color information, you should also adjust the chroma sub-sampling setting.
Chroma sub-sampling overview
The JPEG image format can produce significant reductions in file size through lossy compression. The techniques used to achieve these levels of JPEG compression take advantage of the limitations of the human eye. The compression algorithm saves space by discarding additional image information / detail that may not be as noticeable to the human observer.
The human eye is much more sensitive to changes in luminance (brightness) than to changes in chrominance (color information).
Chroma subsampling exploits this by storing the color information at a lower resolution than the luminance information.
Chroma sub-sampling in RIOT
In RIOT you will find four settings: none, low, medium and high. Their technical names are shown as well. The color information is fully kept with the none setting and drastically reduced with the high setting.
You will find that JPEGs with text or certain details need lower chroma sub-sampling settings.
Technical details on the various levels of subsampling:
To facilitate the different compression requirements of the two “channels” of image information, the JPEG file format translates 8-bit RGB data (Red, Green, Blue) into 8-bit YCbCr data (Luminance, Chroma Blue, Chroma Red). With the brightness separated into its own data channel, it becomes much easier to compress one channel more aggressively than the others.
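This color-space translation can be written out directly. The sketch below uses the standard JFIF conversion formulas (the general JPEG/JFIF definition, not code taken from RIOT itself):

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one 8-bit RGB pixel to YCbCr using the JFIF formulas.
    Y carries brightness; Cb and Cr carry color, centered on 128."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return round(y), round(cb), round(cr)

# Pure white: full brightness, neutral chroma.
print(rgb_to_ycbcr(255, 255, 255))  # (255, 128, 128)
# Pure black: zero brightness, still neutral chroma.
print(rgb_to_ycbcr(0, 0, 0))        # (0, 128, 128)
```

Note that any gray pixel ends up with Cb = Cr = 128: all of its information lives in the Y channel, which is exactly why subsampling Cb and Cr costs so little perceptually.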
- 4:4:4 – none – The resolution of chrominance information (Cb & Cr) is preserved at the same rate as the luminance (Y) information. Also known as 1×1 (or subsampling disabled).
- 4:2:2 – low – Half of the horizontal resolution in the chrominance is dropped (Cb & Cr), while the full resolution is retained in the vertical direction, with respect to the luminance. This is also known as 2×1 chroma subsampling, and is quite common for digital cameras.
- 4:2:0 – medium – With respect to the information in the luminance channel (Y), the chrominance resolution in both the horizontal and vertical directions is cut in half. This form is also known as 2×2 chroma subsampling.
- 4:1:1 – high – Only a quarter of the chrominance information is preserved in the horizontal direction with respect to the luminance information.
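The storage cost of each scheme can be counted over a reference block of 4×2 pixels. This is a back-of-the-envelope sketch of the J:a:b notation, not RIOT code: a is the number of chroma samples per row of 4 pixels in the first row, b in the second row.

```python
def samples_per_block(a, b):
    """Total samples stored for a 4x2 pixel block under J:a:b subsampling."""
    luma = 4 * 2          # Y is never subsampled: one sample per pixel
    chroma = (a + b) * 2  # Cb and Cr each store a + b samples
    return luma + chroma

full = samples_per_block(4, 4)         # 4:4:4 -> 24 samples (no reduction)
print(samples_per_block(2, 2) / full)  # 4:2:2 -> 16/24, about 2/3 the data
print(samples_per_block(2, 0) / full)  # 4:2:0 -> 12/24, half the data
print(samples_per_block(1, 1) / full)  # 4:1:1 -> 12/24, half the data
```

Note that 4:2:0 and 4:1:1 store the same amount of chroma data; they differ in its shape (2×2 blocks versus 4×1 strips), which is why they degrade different kinds of detail.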
* Some explanations are taken from impulseadventure.com
To optimize the way a grayscale image is stored as JPEG, select Grayscale from the JPEG settings.
An 8-bit grayscale image will be created.
Always select this option for grayscale images.
RIOT offers two encoding modes:
- standard optimized encoding
- progressive encoding
The standard optimized encoding produces basically a baseline (standard) JPEG that includes optimized Huffman tables. These tables are created after statistical analysis of the image’s unique content.
This encoding mode offers roughly twice the speed of progressive mode for similar compression (usually lower, but higher with certain image types – see recommendations).
The resulting image will be rendered from top to bottom.
Technical details on Huffman coding
Huffman coding is a method that takes symbols (e.g. bytes, DCT coefficients, etc.) and encodes them with variable-length codes assigned according to statistical probabilities. A frequently used symbol is encoded with a code that takes up only a couple of bits, while symbols that are rarely used are represented by codes that take more bits to encode.
A JPEG file contains up to 4 Huffman tables that define the mapping between these variable-length codes (which take between 1 and 16 bits) and the code values (which are 8-bit bytes). Creating these tables generally involves counting how frequently each symbol (DCT code word) appears in an image, and allocating the bit strings accordingly. A standard JPEG encoder simply uses the Huffman tables presented in the JPEG standard. RIOT allows you to optimize these tables, which means that an optimal binary tree is created, yielding a more efficient Huffman table.
* Some explanations are taken from impulseadventure http://www.impulseadventure.com/photo/jpeg-huffman-coding.html
Progressive encoding usually produces smaller images.
A progressively encoded image comes into focus while it is being displayed. Instead of rendering the image a line at a time from top to bottom, the whole image is first displayed at very low quality and fuzzy, becoming sharper as successive scans fill in. This gives the illusion that the page has downloaded faster, even though it takes the same time to reach the final sharpness. A progressive JPEG is created in multiple passes (scans) over the image, similar to interlacing in GIF and PNG.
- when your JPEG image is under 10 KB, it is better saved as a standard optimized JPEG (an estimated 75% chance it will be smaller)
- for files over 10 KB, progressive JPEG will give you better compression (in 94% of cases)
So if your aim is to squeeze every byte (and consistency is not an issue), the best thing to do is to try both standard optimized and progressive and pick the smaller one.
The other option is to save all images smaller than 10 KB as baseline and the rest as progressive. Or simply use baseline for thumbnails and progressive for everything else.
* taken from a study by Stoyan Stefanov http://www.yuiblog.com/blog/2008/12/05/imageopt-4/
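These rules of thumb are easy to automate. The sketch below is a hypothetical helper, not part of RIOT; the 10 KB threshold comes from Stefanov's study, and the encoded images are assumed to already exist as byte strings:

```python
def choose_mode(file_size, threshold=10 * 1024):
    """Pick an encoding mode by the 10 KB rule of thumb above."""
    return "baseline" if file_size < threshold else "progressive"

def pick_smaller(baseline, progressive):
    """When consistency does not matter, encode both ways and keep
    whichever JPEG came out smaller -- the byte-squeezing strategy."""
    return baseline if len(baseline) <= len(progressive) else progressive

print(choose_mode(4_000))    # baseline  (e.g. a thumbnail)
print(choose_mode(50_000))   # progressive
```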
Technical details on progressive encoding
Progressive encoding divides the file into a series of scans. The first scan shows the image at the equivalent of a very low quality setting, and therefore it takes very little space. Following scans gradually improve the quality. Each scan adds to the data already provided, so that the total storage requirement is roughly the same as for a baseline JPEG image of the same quality as the final scan. (Basically, progressive JPEG is just a rearrangement of the same data into a more complicated order.)
The advantage of progressive JPEG is that if an image is being viewed on-the-fly as it is transmitted, one can see an approximation to the whole image very quickly, with gradual improvement of quality as one waits longer; this is much nicer than a slow top-to-bottom display of the image. The disadvantage is that each scan takes about the same amount of computation to display as a whole baseline JPEG file would. So progressive JPEG only makes sense if one has a decoder that’s fast compared to the communication link. (If the data arrives quickly, a progressive-JPEG decoder can adapt by skipping some display passes. Hence, those of you fortunate enough to have T1 or faster net links may not see any difference between progressive and regular JPEG; but on a modem-speed link, progressive JPEG is great.)
Up until recently, there weren’t many applications in which progressive JPEG looked attractive, so it hasn’t been widely implemented. But with the popularity of World Wide Web browsers running over slow modem links, and with the ever-increasing horsepower of personal computers, progressive JPEG has become a win for WWW use. It is possible to convert between baseline and progressive representations of an image without any quality loss. (But specialized software is needed to do this; conversion by decompressing and recompressing is *not* lossless, due to roundoff errors.)
A progressive JPEG file is not readable at all by a baseline-only JPEG decoder, but these programs are very old and rare.
* adapted from JPEG image compression FAQ http://www.faqs.org/faqs/jpeg-faq/part1/section-11.html