1. JPEG Optimization #

JPEG Quality (compression)

The quality slider allows you to control the amount of JPEG compression. A value of 100 means minimum compression, while a value of 1 means maximum compression.

quality slider

The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality.

JPEG typically achieves 10 to 1 compression with little perceivable loss in image quality.


In my experience, quality values between 65 and 95 are acceptable. Lower values can be used in certain situations.

The user needs to experiment with this slider in order to find the best tradeoff between quality loss and file size.

To minimize color information loss, you should also adjust the chroma sub-sampling setting.

Chroma sub-sampling
Chroma sub-sampling overview

The JPEG image format can produce significant reductions in file size through lossy compression. The techniques used to achieve these levels of JPEG compression take advantage of the limitations of the human eye. The compression algorithm saves space by discarding additional image information / detail that may not be as noticeable to the human observer.

The human eye is much more sensitive to changes in luminance (brightness) than to changes in chrominance (color information).
Chroma subsampling is a method that stores the color information at a lower resolution than the luminance.

Chroma sub-sampling in RIOT

In RIOT you will find 4 settings: none, low, medium and high. Their technical names are shown as well. The color information is fully kept with the none setting and drastically reduced with the high setting.


You will find that JPEGs with text or certain details need lower chroma sub-sampling settings.

Technical details on the various levels of subsampling:

To facilitate the different compression requirements of the two “channels” of image information, the JPEG file format translates 8-bit RGB data (Red, Green, Blue) into 8-bit YCbCr data (Luminance, Chroma Blue, Chroma Red). With the brightness separated into its own data channel, it is much easier to change the compression applied to one channel versus the others.

  • 4:4:4 – none – The resolution of chrominance information (Cb & Cr) is preserved at the same rate as the luminance (Y) information. Also known as 1×1 (or subsampling disabled).
  • 4:2:2 – low – Half of the horizontal resolution of the chrominance (Cb & Cr) is dropped, while the full resolution is retained in the vertical direction, with respect to the luminance. This is also known as 2×1 chroma subsampling, and is quite common for digital cameras.
  • 4:2:0 – medium – With respect to the information in the luminance channel (Y), the chrominance resolution in both the horizontal and vertical directions is cut in half. This form is also known as 2×2 chroma subsampling.
  • 4:1:1 – high – Only a quarter of the chrominance information is preserved in the horizontal direction with respect to the luminance information.

* Some explanations are taken from
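As an illustration of what the medium (4:2:0) setting does to the chroma planes, here is a minimal Python sketch (not RIOT's actual code) that averages each 2×2 block of a chroma plane, assuming even dimensions:

```python
def subsample_420(plane):
    """4:2:0 subsampling sketch: average each 2x2 block of a chroma
    plane, halving its resolution both horizontally and vertically.
    Assumes the plane has even width and height."""
    h, w = len(plane), len(plane[0])
    return [[(plane[y][x] + plane[y][x + 1] +
              plane[y + 1][x] + plane[y + 1][x + 1]) // 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

cb = [[100, 104, 200, 204],
      [102, 106, 202, 206],
      [ 50,  50,  10,  10],
      [ 50,  50,  10,  10]]
small = subsample_420(cb)
# Each 2x2 block collapses to its average: [[103, 203], [50, 10]]
```

For every 2×2 block of pixels this keeps 4 luma samples but only 1 Cb and 1 Cr sample (6 instead of 12), which is where the space savings come from.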

Grayscale JPEG

To optimize the way a grayscale image is being stored as JPEG, you can select Grayscale from the JPEG settings.

selecting grayscale
click on Image adjustments, then select grayscale

An 8-bit grayscale image will be created.


Always select this option for grayscale images.

JPEG encoding

RIOT offers two encoding modes:

select encoding mode
  • standard optimized encoding
  • progressive encoding

The standard optimized encoding produces basically a baseline (standard) JPEG that includes optimized Huffman tables. These tables are created after statistical analysis of the image’s unique content.

This encoding mode offers half the speed of progressive mode for similar compression (usually lower, but higher with certain image types – see recommendations).

The resulting image will be rendered from top to bottom.

Technical details on Huffman coding

Huffman coding is a method that takes symbols (e.g. bytes, DCT coefficients, etc.) and encodes them with variable-length codes assigned according to statistical probabilities. A frequently used symbol is encoded with a code that takes up only a few bits, while rarely used symbols are represented by codes that take more bits to encode.

A JPEG file contains up to 4 Huffman tables that define the mapping between these variable-length codes (which take between 1 and 16 bits) and the code values (which are 8-bit bytes). Creating these tables generally involves counting how frequently each symbol (DCT code word) appears in an image and allocating the bit strings accordingly. A standard JPEG encoder simply uses the Huffman tables presented in the JPEG standard. RIOT optimizes these tables: an optimal binary tree is created, which allows a more efficient Huffman table to be generated.

* Some explanations are taken from impulseadventure
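The idea of building an optimal code from symbol frequencies can be sketched in a few lines of Python (an illustration of the general technique, not RIOT's actual encoder):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build optimal variable-length codes from symbol frequencies.
    A tree node is either a symbol or a [left, right] pair."""
    freq = Counter(data)
    # Heap entries: (frequency, tiebreak, tree); the integer tiebreak
    # keeps tuple comparison from ever reaching the tree itself.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        # Repeatedly merge the two least frequent subtrees.
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        n += 1
        heapq.heappush(heap, (f1 + f2, n, [t1, t2]))
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, list):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("aaaabbc")
# 'a' (the most frequent symbol) receives the shortest code
```

An "optimized" JPEG does exactly this kind of frequency counting over the image's own DCT code words instead of using the fixed example tables from the standard.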

Progressive encoding usually produces smaller images.

A progressive encoded image comes into focus while it is being displayed. Instead of rendering the image a line at a time from top to bottom, the whole image is displayed as very low-quality and fuzzy, which becomes sharper as the lines fill in. It gives the illusion that the page has downloaded faster even though it takes the same time to achieve the final sharpness. The progressive JPEG is created in multiple passes (scans) of the image and is similar to interlacing in GIF and PNG.

  • when your JPEG image is under 10K, it’s better saved as a standard optimized JPEG (estimated 75% chance it will be smaller)
  • for files over 10K, progressive JPEG will give you better compression (in 94% of the cases)

So if your aim is to squeeze every byte (and consistency is not an issue), the best thing to do is to try both standard optimized and progressive and pick the smaller one.

The other option is to have all images smaller than 10K saved as baseline and the rest as progressive. Or simply use baseline for thumbnails and progressive for everything else.

* taken from a study of Stoyan Stefanov
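The rule of thumb above is easy to encode; a sketch (the helper name is hypothetical, and the 10K threshold comes from the study cited above):

```python
def suggested_jpeg_mode(size_bytes, threshold=10 * 1024):
    """Rule of thumb: files under ~10K are usually smaller as baseline
    (standard optimized), larger files as progressive. Trying both and
    keeping the smaller output is still the safest approach."""
    return "baseline" if size_bytes < threshold else "progressive"

print(suggested_jpeg_mode(4_000))   # baseline
print(suggested_jpeg_mode(80_000))  # progressive
```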

Technical details on progressive encoding

Progressive encoding divides the file into a series of scans.  The first scan shows the image at the equivalent of a very low quality setting, and therefore it takes very little space.  Following scans gradually improve the quality.  Each scan adds to the data already provided, so that the total storage requirement is roughly the same as for a baseline JPEG image of the same quality as the final scan.  (Basically, progressive JPEG is just a rearrangement of the same data into a more complicated order.)

The advantage of progressive JPEG is that if an image is being viewed on-the-fly as it is transmitted, one can see an approximation to the whole image very quickly, with gradual improvement of quality as one waits longer; this is much nicer than a slow top-to-bottom display of the image. The disadvantage is that each scan takes about the same amount of computation to display as a whole baseline JPEG file would. So progressive JPEG only makes sense if one has a decoder that’s fast compared to the communication link. (If the data arrives quickly, a progressive-JPEG decoder can adapt by skipping some display passes.  Hence, those of you fortunate enough to have T1 or faster net links may not see any difference between progressive and regular JPEG; but on a modem-speed link, progressive JPEG is great.)

Up until recently, there weren’t many applications in which progressive JPEG looked attractive, so it hasn’t been widely implemented.  But with the popularity of World Wide Web browsers running over slow modem links, and with the ever-increasing horsepower of personal computers, progressive JPEG has become a win for WWW use. It is possible to convert between baseline and progressive representations of an image without any quality loss.  (But specialized software is needed to do this; conversion by decompressing and recompressing is *not* lossless, due to roundoff errors.)

A progressive JPEG file is not readable at all by a baseline-only JPEG decoder, but these programs are very old and rare.

* adapted from JPEG image compression FAQ

2. PNG Optimization #


To optimize a PNG image you can use these strategies:

1. Color and bit depth reduction

1.1. Automatic (lossless)

1.2. Manual (lossy, using color quantization)

2. Compression

2.1. internal

2.2. external optimizers

1. Color and bit depth reduction

Color reduction is one of the most important optimization tasks for PNG and GIF.

The color reduction is made in RIOT in two ways:

  1. automatic reductions (such as bit depth reductions or when using the automatic mode)
  2. manual reductions (when using color quantization)
1.1 Automatic (lossless)

The bit depth translates to the possible number of colors that can be stored in an image.

Some images will be stored in True Color. This means 24 bits are used for each pixel if the image does not contain transparency, or 32 bits per pixel if the image is transparent.

24 bits (2^24) gives 16,777,216 color variations. This is a lot. But if your image actually contains 7 unique colors, this is a waste of space, because each pixel allows for a much wider range.

The fact that it contains only 7 unique colors doesn’t mean pixels with the same color won’t occupy extra space. Imagine representing a number from 1 to 7: instead of using 10 fingers (with 3 of them unused), you are forced to keep millions of fingers available. Those fingers need to be there, even if they are never used.

RIOT will analyze the source image and it will try to use the smallest number of unique colors and the optimal bit depth.

This is possible for 256 colors or fewer.

For 7 unique colors, this means RIOT can decrease the bit depth to 4 (2^4 = 16 colors). So instead of 24 bits per pixel it will only need 4 bits per pixel and some extra space for a palette.

Internally, a so-called “palette” will be built from the actual unique colors. The palette contains the colors, and each pixel only stores the index at which its color is found in the palette.

In a less spectacular way, but still an optimization: if the source is an already palettized 8-bit or 4-bit image, RIOT can decrease the bit depth to the optimal value for the actual number of colors, removing the unused palette entries.

Things are actually more complicated when alpha transparency (semi-transparent areas) is involved, but you get the idea.
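To get a feel for the numbers, here is a small Python sketch (raw, pre-compression sizes only; the real PNG size also depends on filtering and DEFLATE compression):

```python
import math

def palette_bit_depth(n_colors):
    """Smallest palette bit depth (1, 2, 4 or 8) able to index
    n_colors unique colors; None if a palette is impossible."""
    for depth in (1, 2, 4, 8):
        if n_colors <= 2 ** depth:
            return depth
    return None  # more than 256 unique colors

def raw_pixel_bytes(width, height, n_colors):
    """Uncompressed pixel-data size: 24-bit truecolor vs. paletted
    (the palette itself costs 3 bytes per color).
    Assumes n_colors <= 256 and no transparency."""
    depth = palette_bit_depth(n_colors)
    truecolor = width * height * 3
    # Rows are packed to whole bytes at the chosen bit depth.
    paletted = math.ceil(width * depth / 8) * height + 3 * n_colors
    return truecolor, paletted

print(raw_pixel_bytes(100, 100, 7))   # (30000, 5021)
```

For a 100×100 image with 7 unique colors, dropping from 24 bits per pixel to a 4-bit palette shrinks the raw pixel data roughly sixfold before the compressor even runs.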

1.2 Manual color reduction (lossy)

Going further, you could have an image with a higher number of unique colors, like 4000, stored in 24 bits.

Lossless reduction won’t work, because we need to change some pixels’ colors to be able to use only 256 colors or fewer.

This is where color quantization comes to the rescue.

Color quantization is a process that reduces the number of distinct colors used in an image, usually with the intention that the new image should be as visually similar as possible to the original image.

So, from 4000 colors you can keep the 256 most used ones, adjusting the remaining colors to the closest available color.

From 256 colors you can get naughty and squeeze some extra bytes out by pushing the max colors slider even lower.
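As an illustration only (a deliberately naive sketch, not the Xiaolin Wu or NeuQuant algorithms RIOT actually uses), quantization boils down to "choose a palette, then snap each pixel to its nearest entry":

```python
from collections import Counter

def quantize(pixels, max_colors):
    """Naive quantization sketch: keep the most-used colors as the
    palette and map every other pixel to the nearest kept color by
    squared RGB distance. Real quantizers choose the palette far
    more cleverly."""
    palette = [c for c, _ in Counter(pixels).most_common(max_colors)]
    def nearest(p):
        return min(palette,
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(p, c)))
    return [nearest(p) for p in pixels], palette

pixels = [(255, 0, 0)] * 5 + [(250, 5, 0)] * 2 + [(0, 0, 255)] * 4
out, pal = quantize(pixels, 2)
# The (250, 5, 0) pixels snap to the nearby dominant red
```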

Color quantization can be used in RIOT only with the following color reduction presets:

  • Optimal x colors Palette
  • Grayscale 256 colors Palette

RIOT implements the following algorithms:

  • Xiaolin Wu
  • NeuQuant neural-net

Both algorithms have advantages and disadvantages.

Xiaolin Wu’s algorithm produces accurate colors and is best suited for images with few distinct colors, when you need to keep the exact colors.

Xiaolin Wu's own description should probably make things clearer (or not :)):

Greedy orthogonal bipartition of RGB space for variance minimization aided by inclusion-exclusion tricks.

NeuQuant is superior in image quality when used with a high color image and a high number of quantized colors.

The high-quality but slow NeuQuant algorithm reduces images to 256 colors by training a Kohonen neural network “which self-organises through learning to match the distribution of colours in an input image. Taking the position in RGB-space of each neuron gives a high-quality colour map in which adjacent colours are similar.” It is particularly advantageous for images with gradients.
This algorithm should be used for higher color counts.

Optimization Recommendations
  • Choose Xiaolin Wu for images with few distinct colors, especially when you need fewer than 64 colors
  • Choose NeuQuant for images with many distinct colors, but don’t use it to output fewer than 64 colors, because it was not designed for this
  • Reduce the number of unique colors using the slider until the image starts to look different than the original
Design Recommendations

Use as few colors as possible

Prefer solid colors over gradients when designing PNG graphics


2. Compression

Compression can be done internally or using external optimizers.

Internal compression

There is not much to say about internal compression: simply select a compression level from the drop-down.

The minimum level is the fastest but compresses the least, while the maximum level is the slowest but compresses the most.

External optimizers (compression)
external optimizers

When using an external optimizer, you can pick one of the three offered by RIOT or add your own.

To use your own optimizer, click Add PNG optimization tool.

Add button

A pop-up window will appear. Locate the tool using the browse button and finally click OK.

Now RIOT is ready to work its magic. All that’s missing is a click on the Play button.

Play button

3. GIF Optimization #

Color reduction

See PNG Color reduction



4. Batch operations #


You can use the Batch optimizer to process multiple images at once.

The same settings used for a single image are used in Batch mode, including format settings, Metadata, Mask and Image Adjustments.

Basic workflow
  • A common scenario when optimizing several pictures is to load one image, play with the settings inside RIOT until you are happy with the results, then open the Batch optimizer to apply the same settings to all images.
  • Load all images using the drop-down menu options from “Add images”. You can select single images there (keep the Ctrl key pressed to select more than one) or even add whole folders and sub-folders.
  • If you want to use additional tasks like Resize, Rotate, Flip or Compress to size, you can select them from the drop-down menu labeled “Additional tasks”. Note: as long as an additional task’s settings pane is visible, RIOT considers that you want to apply that task to all images. Make sure you select only the needed tasks.
  • Choose the output folder (that is where all optimized images will go!)
  • Adjust the Batch settings by pressing on the Settings button on top and then confirm them with “Apply”
  • Press the Start button, sit back and relax until all images are processed.
As the batch optimizer cannot work while an image is open in RIOT, the image must be unloaded before using the Batch feature. This is done automatically.
Batch settings
Batch settings

Currently RIOT lets you choose the following options:

  • Overwrite existing files (default)

When checked, any file in the destination folder with the same name as the resulting image will be overwritten. Uncheck to skip saving if a file with that name already exists in the destination folder.

  • Delete original files when complete

If checked, the source files will be deleted as long as they are not the same as the target. Use with care, as this will destroy originals.

  • Keep original date/time

When checked, the target files will have the same date/time as the original files.

  • Report file size changes

When checked, after a successful execution of the batch job, the status column will contain the percentage of file decrease/increase next to each item (file in the list).

5. Basic tools #

Resize (Resample)

A very common task when optimizing images is decreasing the image size in pixels.

Resizing is done using so-called “resample filters” or “interpolation filters”, which decide which pixels should be added when enlarging or which pixels should be removed when shrinking.

There is no point in using a bigger image if it is rendered on screen at a smaller size. It is better to resize it yourself, to ensure the display quality does not suffer.

When the source image is bigger than the rendered size, the program or device performs its own resize (usually a plain resize or, if you are lucky, a fast interpolation filter like Bilinear, which is much inferior to the slower, higher-quality filters present in RIOT). So to ensure that the program (a browser, a viewer, etc.) or the device (such as a digital photo frame or phone) will not resize the image using its own filter, you should resample the image yourself using one of RIOT’s good filters.

You will then benefit from:

  • smaller filesize
  • higher image quality
  • faster response time from programs or devices

A common mistake amateur web designers make is to use a big image and set a smaller size via HTML or CSS. This is wrong, for the reasons explained earlier.

To resample an image using RIOT, first open the image, then use any of these methods:

  • menu-> Edit->Resample
  • shortcut: Ctrl + R
  • bottom toolbar: the resample button located to the left of “Compress to size”

The resample dialog will pop up like this:

Resampling filters present in RIOT:

1.  Box filter (also known as Nearest Neighbor)

This method is the simplest and the fastest: it computes new pixels as the value of the nearest pixel in the source image. This produces a blocky result when upsampling and a grainy effect when downsampling.

‘Box’ filtering is only suitable for ‘binning’ images, that is, reducing images by integer multiples so that every pixel in the result is an average of the same number of neighbouring pixels (the ‘bin’). The resulting image will thus remain clean looking.

The box filter works best with illustrations containing non-anti-aliased edges like rectangular shapes.
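As a sketch of the idea (assuming a grayscale image stored as a list of rows), nearest-neighbor resampling can be written as:

```python
def resize_nearest(img, new_w, new_h):
    """Nearest-neighbor (box) resampling sketch: each output pixel
    simply copies the closest source pixel."""
    h, w = len(img), len(img[0])
    return [[img[y * h // new_h][x * w // new_w]
             for x in range(new_w)]
            for y in range(new_h)]

img = [[1, 2],
       [3, 4]]
# Upsampling 2x2 -> 4x4 turns each pixel into a 2x2 block,
# which is exactly the "blocky" look described above.
big = resize_nearest(img, 4, 4)
```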

2.  Bilinear (also known as Triangle)

This is a fast filter, with smooth results.
Bilinear interpolation considers the closest 2×2 neighborhood of known pixel values surrounding the unknown pixel. It then takes a weighted average of these 4 pixels to arrive at its final interpolated value. This results in much smoother looking images than nearest neighbor.

Bilinear filtering is rather accurate until the scaling of the image gets below half or above double the original size of the texture – that is, if the image was 256 pixels in each direction, scaling it to below 128 or above 512 pixels can make the image look bad, because of missing pixels or too much smoothness.

This algorithm reduces some of the visual distortion caused by resizing an image to a non-integral zoom factor, as opposed to nearest neighbor interpolation, which will make some pixels appear larger than others in the resized image.

The Bilinear filter is recommended as a fast alternative to the other filters when resizing artificial images such as textures or gradients.
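The 2×2 weighted average can be sketched in Python (a minimal illustration for a grayscale image, not RIOT's implementation):

```python
def bilinear_sample(img, fx, fy):
    """Bilinear sketch: weighted average of the 2x2 neighborhood
    around the fractional coordinate (fx, fy)."""
    h, w = len(img), len(img[0])
    x0, y0 = int(fx), int(fy)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    dx, dy = fx - x0, fy - y0
    # Interpolate horizontally on the top and bottom rows,
    # then vertically between the two results.
    top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx
    bot = img[y1][x0] * (1 - dx) + img[y1][x1] * dx
    return top * (1 - dy) + bot * dy

img = [[0, 100],
       [100, 200]]
# Halfway between all four pixels: (0 + 100 + 100 + 200) / 4 = 100.0
print(bilinear_sample(img, 0.5, 0.5))
```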

3.  Cubic family filters

Bicubic goes one step beyond bilinear by considering the closest 4×4 neighborhood of known pixels — for a total of 16 pixels. Since these are at various distances from the unknown pixel, closer pixels are given a higher weighting in the calculation. Bicubic produces noticeably sharper images than the previous two methods, and is perhaps the ideal combination of processing time and output quality. For this reason it is a standard in many image editing programs (including Adobe Photoshop), printer drivers and in-camera interpolation.

RIOT includes the following cubic filters:

  • Mitchell–Netravali bicubic

Mitchell and Netravali’s bicubic filter is an advanced parameterized scaling filter. It produces very smooth output while maintaining dynamic range and sharpness. Bicubic scaling takes approximately twice the processing time as Bilinear.

  • Catmull–Rom bicubic

This is a variation of the bicubic filter where the “B” parameter controlling blurring is set to 0, producing sharper images.

The Catmull–Rom filter is generally accepted as the best cubic interpolation filter. Some people prefer it over Lanczos3 because it has reduced ringing effects and a slightly smoother output.

Interestingly, this filter’s output is almost identical to that of Lanczos2.

  • B-Spline 4th order (cubic)

The B-spline filter produces the smoothest output, but tends to smooth over fine details. This function requires the same processing time as Mitchell and Netravali’s Bicubic filter.

The B-spline filter is recommended for applications where the smoothest output is required. It has the advantage of dampening noise and JPEG artifacts.
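All three cubic filters above are members of the Mitchell–Netravali (B, C) kernel family; a Python sketch of the shared formula (an illustration, not RIOT's code):

```python
def cubic_kernel(x, b, c):
    """Mitchell-Netravali cubic family. The (b, c) pair selects the
    member: (1/3, 1/3) Mitchell, (0, 0.5) Catmull-Rom, (1, 0) B-spline."""
    x = abs(x)
    if x < 1:
        return ((12 - 9 * b - 6 * c) * x ** 3
                + (-18 + 12 * b + 6 * c) * x ** 2
                + (6 - 2 * b)) / 6
    if x < 2:
        return ((-b - 6 * c) * x ** 3 + (6 * b + 30 * c) * x ** 2
                + (-12 * b - 48 * c) * x + (8 * b + 24 * c)) / 6
    return 0.0

# Catmull-Rom (b=0, c=0.5) is interpolating: 1 at x=0, 0 at other
# integers, so it reproduces the original pixels exactly.
print(cubic_kernel(0, 1 / 3, 1 / 3))   # Mitchell peak: (6 - 2/3) / 6
```

The B parameter trades sharpness for smoothness, which is why Catmull–Rom (B = 0) is the sharpest and B-spline (B = 1) the smoothest of the three.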

4.  Lanczos3

This is the most theoretically correct filter and produces the best output for photographic images that do not have sharp transitions in them. However, Lanczos will produce ripple artefacts especially for block text, due to aliasing. Lanczos also requires three times the processing time of Bilinear.

This is a slow method but it usually produces the sharpest images. Under certain conditions, it may introduce some ringing patterns and emphasize JPEG artifacts.

Lanczos uses a filter based on the sinc function.
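A sketch of that windowed-sinc kernel in Python (a = 3 gives the Lanczos3 filter described above):

```python
import math

def lanczos_kernel(x, a=3):
    """Lanczos kernel sketch: sinc(x) * sinc(x / a) for |x| < a,
    zero outside the window. a = 3 gives Lanczos3."""
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

print(lanczos_kernel(0))    # 1.0 at the center
print(lanczos_kernel(1.0))  # ~0 at nonzero integers (interpolating)
```

The negative lobes of this kernel are what sharpen edges, and they are also the source of the ringing artifacts mentioned above.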

Common interpolation errors: