About a year ago, in my post “smoothing images”, I briefly showed that averaging can be important for getting good images without pixel noise. For my kaleidoscope app (see http://geometricolor.ch/sphericalKaleidoscopeApp.html) I have improved on these ideas, and that’s what I want to present now.

The kaleidoscope app uses three steps to get the colour of an output image pixel from pixels of an input image. First we have a linear transformation from column and row indexes (i,j) of an output pixel to a geometrical space with coordinates (x,y). It translates and scales:

x = s_{out} * i + a and y = s_{out} * j + b,

where (a,b) is the translation vector and s_{out} is a scale factor. Changing the translation and the scaling allows us to zoom in on interesting parts of the image. Then comes a nonlinear mapping in geometrical space that creates the symmetries of the image. It does not need to be a kaleidoscope; we could instead have a fractal or a quasi-periodic tiling. In general, we write:

x = u(x,y) and y = v(x,y)

for its result, where u and v are some arbitrary functions. Finally, we have a linear transformation from geometrical space to indexes (h,k) of input image pixels. It is made of a translation, a rotation and a scaling:

h = s_{in} * cos(α) * x - s_{in} * sin(α) * y + c

and k = s_{in} * sin(α) * x + s_{in} * cos(α) * y + d,

where we can change the scale s_{in}, the rotation angle α and the translation vector (c,d) to choose a suitable sampling region of the input image. The resulting image can vary dramatically. I presented this method somewhat differently in my interactive slide for the Bridges conference 2018; you can see it at http://geometricolor.ch/setup.html.
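The three steps can be sketched in a few lines of Python. This is only an illustration, not the app's actual code: u and v stand for whatever symmetry-creating mapping is used, and all names are my own.

```python
import math

def map_pixel(i, j, u, v, s_out, a, b, s_in, alpha, c, d):
    """Map output-pixel indexes (i, j) to input-image pixel
    coordinates (h, k) via the three steps described above."""
    # step 1: output indexes -> geometrical space (translate and scale)
    x = s_out * i + a
    y = s_out * j + b
    # step 2: nonlinear symmetry-creating mapping
    x, y = u(x, y), v(x, y)
    # step 3: geometrical space -> input-pixel coordinates
    #         (scale, rotate by alpha, translate)
    h = s_in * (math.cos(alpha) * x - math.sin(alpha) * y) + c
    k = s_in * (math.sin(alpha) * x + math.cos(alpha) * y) + d
    return h, k
```

With the identity for u and v, unit scales and no rotation, the indexes come back unchanged, which is a quick sanity check.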

Usually, the index vector (h,k) does not have integer components, but we can round them to the nearest integers and read out the colour of that pixel. Using this colour for the output pixel often gives bad results. Here is an example:

It is a hyperbolic image. At the border of the Poincaré disk we get random pixel garbage instead of repetitions of smaller and smaller copies of the central part. Around the center you see strong aliasing: the rods of a railing do not appear as continuous lines. This image cannot be used.

For better results we have to realize that pixels are not points. They are actually small squares that cover the space, and the colour of a pixel is the average colour of its small square. The three mappings from output image pixels to input image pixels can strongly magnify the square belonging to an output pixel, which then covers many input image pixels. Reading the colour of only one of these pixels can give aliasing or even random results, as in the image shown above. The colour of the output pixel should rather be an average over input image pixels. This gives good results:

The small repeating structure at the border of the disc is now clearly visible and the railing shows no more aliasing. You can see a related interactive slide at http://geometricolor.ch/kaleidoscopeLens.html. Here I will go into more detail.

In the space of output image pixels (i,j), each pixel trivially occupies a square of side length 1. After the transformation to geometrical space these squares have side lengths equal to the scale factor s_{out}. We now determine how the second mapping changes the size of such a square. To simplify the calculation, I assume that the square's lower left corner lies at the position of the pixel. This supposes that the second mapping in geometrical space is rather smooth; but if it weren't, we wouldn't get nice images anyway.

Before the mapping the lower left corner thus lies at

(x,y)=(s_{out} * i + a, s_{out} * j + b)

and after the mapping at

(x,y)=(u(s_{out} * i + a, s_{out} * j + b), v(s_{out} * i + a, s_{out} * j + b)).

Starting from this corner, two sides connect it with the points corresponding to indexes (i+1,j) and (i,j+1). The first one is a vector A with components

A_{x}=u(s_{out} * (i+1) + a, s_{out} * j + b)-u(s_{out} * i + a, s_{out} * j + b) and

A_{y}=v(s_{out} * (i+1) + a, s_{out} * j + b)-v(s_{out} * i + a, s_{out} * j + b).

Similarly, for the second side we have a vector B with

B_{x}=u(s_{out} * i + a, s_{out} * (j+1) + b)-u(s_{out} * i + a, s_{out} * j + b) and

B_{y}=v(s_{out} * i + a, s_{out} * (j+1) + b)-v(s_{out} * i + a, s_{out} * j + b).

These vectors are easy to calculate because we already evaluate u and v at these points when mapping the pixel positions. So there is not much extra work needed to get this mapping of the sides of the pixel squares.

The nonlinear mapping translates, rotates and scales the square. In addition, it can shear it and scale it anisotropically. But we are not interested in these details. We merely want to know how many input image pixels cover an output image pixel. Thus we calculate the surface S of the parallelogram spanned by the vectors A and B, given by their cross product:

S = A_{x} B_{y} - A_{y} B_{x}.

The last transformation, to the space of input pixels, scales all surfaces by the factor s_{in}², and the surface of an input image pixel is equal to one. Thus the number N of input image pixels covered by an output image pixel is

N = s_{in}² S.
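Putting the side vectors and the cross product together, the estimate of N could look like the sketch below. The abs is my own addition for orientation-reversing maps, where the signed cross product becomes negative; the function names are illustrative, not taken from the app.

```python
def covered_pixels(i, j, u, v, s_out, a, b, s_in):
    """Number N of input image pixels covered by the output pixel (i, j),
    estimated from the side vectors A and B of the mapped pixel square."""
    # the three relevant corners of the pixel square in geometrical space
    x0, y0 = s_out * i + a, s_out * j + b      # corner for (i, j)
    x1, y1 = s_out * (i + 1) + a, y0           # corner for (i+1, j)
    x2, y2 = x0, s_out * (j + 1) + b           # corner for (i, j+1)
    # side vectors A and B after the nonlinear mapping
    Ax, Ay = u(x1, y1) - u(x0, y0), v(x1, y1) - v(x0, y0)
    Bx, By = u(x2, y2) - u(x0, y0), v(x2, y2) - v(x0, y0)
    S = Ax * By - Ay * Bx      # parallelogram surface (cross product), signed
    return s_in ** 2 * abs(S)  # abs: orientation-reversing maps give S < 0
```

For the identity mapping with unit scales this gives N = 1; a mapping that doubles all lengths gives S = 4, which the final scaling multiplies by s_{in}².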

If N is about equal to one, we can simply read out a single input image pixel and do not need anti-aliasing. For small N we have to do pixel interpolation, but for large N we have to average the colours of many input pixels.

For an efficient calculation we simply average over an axis-aligned square of surface N centred at the mapped position of the pixel, using summed area tables for the red, green and blue components of the input image. This is fast and easy to program. An alternative would be mipmaps on the GPU.
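The summed-area-table idea can be sketched as follows, here in Python with NumPy rather than the app's own code; the clamping at the image borders and the minimum square width of one pixel are my own choices. One table per colour component is built once, and each average then costs only four table lookups.

```python
import numpy as np

def summed_area_table(channel):
    """Table with a leading zero row/column so that sat[r, c] is the sum
    of channel[:r, :c]; one table per colour component."""
    sat = np.zeros((channel.shape[0] + 1, channel.shape[1] + 1))
    sat[1:, 1:] = channel.cumsum(axis=0).cumsum(axis=1)
    return sat

def average_square(sat, h, k, n):
    """Average over an axis-aligned square of surface n centred at
    column h, row k, clamped to the image borders."""
    rows, cols = sat.shape[0] - 1, sat.shape[1] - 1
    half = max(np.sqrt(n) / 2.0, 0.5)        # at least one pixel wide
    r0 = min(max(int(round(k - half)), 0), rows - 1)
    r1 = min(max(int(round(k + half)), r0 + 1), rows)
    c0 = min(max(int(round(h - half)), 0), cols - 1)
    c1 = min(max(int(round(h + half)), c0 + 1), cols)
    # sum over the rectangle from four table lookups, then normalize
    total = sat[r1, c1] - sat[r0, c1] - sat[r1, c0] + sat[r0, c0]
    return total / ((r1 - r0) * (c1 - c0))
```

Building the tables is a single pass over the input image, so the extra cost per output pixel is constant, independent of how large N gets.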