How to generate rosettes

A rosette is an image with rotational symmetry. For p-fold symmetry we can use the methods of the last post “Anamorphosis and symmetries” with a simple power as a mapping function between output and input images:

Z = z^p = r^p*exp(i*p*φ)

Here z=x+iy corresponds to the position (x,y) of a pixel of the output image. We then sample the color of the input image at the position (X,Y) given by Z=X+iY; this becomes the color of the pixel. With polar coordinates, where r²=x²+y² and tan(φ)=y/x, we easily see that we have p-fold rotational symmetry: a rotation of (x,y) by 2π/p changes φ to φ+2π/p and gives the same Z. Thus the pixel at the rotated position gets the same color.

With p=5 I got this result:


The input image is “Bee on lavender blossom” from my photography blog “looking closer”. I used my program “rosette”, which is an interactive web page. You can download its HTML and JavaScript files to make your own experiments. To get images of different symmetry you simply have to change the code of the last function, mapping(x,y); see later. Here I chose the “nearest” input image pixel interpolation option, which does not smooth pixels. Thus you can see that some pixels are strongly magnified and distorted at the center of the image because of the large power of r in the mapping function. At the border you see a frieze of five somewhat distorted bees following each other.

We can get more interesting images if we use several combinations of z together with its complex conjugate, as proposed by Farris:

Z = Σ a_(m,n)*z^m*conj(z)^n

where (m-n) is a multiple of p for p-fold rotational symmetry. For the power of r we can use any integer, resulting in a more detailed center of the rosette. I find it convenient to rearrange these terms and to use real coordinates. Thus

X = Σ r^k*(a*cos(l*p*φ) + b*sin(l*p*φ)) and

Y = Σ r^k*(c*cos(l*p*φ) + d*sin(l*p*φ)),

which is convenient for computation and discussing symmetries. Programming becomes easy, see the end of the rosette.js file. The function mapping(x,y) has first a call to the function imageZero(x,y) for initialization. Then we make calls to the function imageAdd(a,b,c,d,rPower,phiPower), adding terms of the above equations with rPower=k and phiPower=l*p.
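To make this concrete, here is a minimal JavaScript sketch of such a mapping. It is a hypothetical re-implementation following the description above, not the actual code of rosette.js, and the coefficients and powers are arbitrary example values:

```javascript
// Hypothetical sketch, not the actual code of rosette.js.
// Accumulators for the image point (X,Y) and polar coordinates:
let X = 0, Y = 0;
let r = 0, phi = 0;

function imageZero(x, y) {          // initialization: polar coordinates, reset sums
  r = Math.sqrt(x * x + y * y);
  phi = Math.atan2(y, x);
  X = 0;
  Y = 0;
}

// add one term of the sums above, with rPower = k and phiPower = l*p
function imageAdd(a, b, c, d, rPower, phiPower) {
  const rp = Math.pow(r, rPower);
  X += rp * (a * Math.cos(phiPower * phi) + b * Math.sin(phiPower * phi));
  Y += rp * (c * Math.cos(phiPower * phi) + d * Math.sin(phiPower * phi));
}

// example with 5-fold symmetry: phiPower is always a multiple of p = 5
function mapping(x, y) {
  imageZero(x, y);
  imageAdd(1, 0, 0, 1, 2, 5);       // k = 2, l = 1
  imageAdd(0.5, 0, 0, 0.5, 3, 10);  // k = 3, l = 2
  return [X, Y];
}
```

Because both phiPower values are multiples of p=5, rotating (x,y) by 2π/5 gives the same image point, which is exactly the p-fold symmetry discussed above.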

We can get more varied images with more detail, such as this one:


Again, I did not use smoothing of pixels.

One last thing: Fight for reason and support the March for Science.

Posted in Anamorphosis, Kaleidoscopes, programming

Anamorphosis and symmetries

As proposed by Farris in “Creating Symmetry” we can use anamorphosis to make images of any symmetry from some other input image. Here I briefly discuss how I am doing it and what you will find in my next program.

Each pixel of the output image has integer indices (h,k) for its row and column. After an offset and a scaling we get a point (x,y) in 2-dimensional space:

x=outputScale*(h-outputOffsetX) and

y=outputScale*(k-outputOffsetY).

We can choose the offsets and the scale interactively. By changing the offsets we move the center of symmetry around in the image, and we can zoom in or out by varying the scale.

Then a mapping transforms the point (x,y) to an image point (X,Y). This mapping defines the symmetry. For theoretical work we can use complex coordinates z=x+i*y and Z=X+i*Y. But in the end we have to use real numbers x and y for our programming.

The image point (X,Y) lies in the input image and shows where to sample its color, which becomes the color of the output image pixel at (h,k). For versatility we rotate, scale and shift the coordinates to get row and column values:

H=inputScale*(X*cos(α)-Y*sin(α))+inputOffsetX and

K=inputScale*(X*sin(α)+Y*cos(α))+inputOffsetY.

To find the best image we choose the angle α, the input scale and the input offsets interactively. Obviously, H and K are not integers and we need some pixel interpolation. For fast exploration we simply take the color of the nearest pixel. To get good image quality we can change to linear or cubic interpolation.
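Putting the pieces together, the whole pixel pipeline might be sketched like this. The variable names follow the text; the numerical values, and the identity mapping, are just placeholders:

```javascript
// Sketch of the output-to-input pixel pipeline; all parameter values
// are arbitrary examples, the identity mapping is a placeholder.
const outputScale = 0.01, outputOffsetX = 256, outputOffsetY = 256;
const inputScale = 100, inputOffsetX = 300, inputOffsetY = 200;
const alpha = 0.3;                       // rotation angle α

function mapping(x, y) {                 // replace this to get a symmetry
  return [x, y];
}

// output pixel (h,k) -> nearest input pixel (H,K)
function samplePosition(h, k) {
  const x = outputScale * (h - outputOffsetX);
  const y = outputScale * (k - outputOffsetY);
  const [X, Y] = mapping(x, y);
  const H = inputScale * (X * Math.cos(alpha) - Y * Math.sin(alpha)) + inputOffsetX;
  const K = inputScale * (X * Math.sin(alpha) + Y * Math.cos(alpha)) + inputOffsetY;
  return [Math.round(H), Math.round(K)]; // nearest pixel interpolation
}
```

With these values, the output pixel at the symmetry center (256,256) maps to the input pixel at the input offsets.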

And now for something completely different: Fight for your rights and support the ACLU.

Posted in Anamorphosis, Kaleidoscopes, programming

Approximating the logarithm function

I still want fast approximations of the logarithm and the inverse tangent function for my work. I don’t know if they are really needed, but they are nice pillow problems to keep you from ruminating on those stupid things happening now.

If x is in a limited range, 0<x_min<x<x_max, then you can approximate log(x) with a simple table. Because of the singularity at x=0 and other properties of the logarithm you should use a table size of about 1000*(x_max/x_min) to get a good approximation. For more details see

Results are similar to those for the exponential function: Firefox is about 4 times faster than Chrome, and the approximation of log(x) is about 4 times faster too.
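As an illustration of the simple table method, here is a minimal sketch; the range, the table size and the names are examples, not taken from the actual program:

```javascript
// Table-based log(x) for a limited range [xMin, xMax]; example values.
const xMin = 0.1, xMax = 10;
const nTable = Math.round(1000 * (xMax / xMin)); // size as suggested above
const dx = (xMax - xMin) / nTable;
const logTable = new Float64Array(nTable + 1);
for (let i = 0; i <= nTable; i++) {
  logTable[i] = Math.log(xMin + i * dx);
}

function fastLog(x) {           // nearest-point approximation
  return logTable[Math.round((x - xMin) / dx)];
}
```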

I have put the timing loop in a separate function. This lowered the measured times considerably. I suppose that the JavaScript engine made better optimizations.




Posted in programming

Approximating the exponential function

The garden has frozen over and I have caught a cold. It is hard to do difficult work. Thus I continue to find fast approximations of transcendental functions. This is more fun than solving crossword puzzles.

The exponential function is not periodic like the sine and cosine, but we can use a similar method. For any x we have exp(x)=exp(n)*exp(x-n). If n is the largest integer not larger than x, then 0≤x-n<1 and we need two tables. One is for exp(n) for integers n where exp(n) lies between the smallest and the largest number JavaScript can handle. This makes a table of about 1500 elements. The second table is for exp(x-n), with 0≤x-n<1. Its size depends on the desired precision. For more details see the code of
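In JavaScript the two-table scheme might be sketched as follows; the table sizes and limits are illustrative, not taken from the actual code:

```javascript
// Sketch of exp(x) = exp(n) * exp(x - n) with two tables; example sizes.
const intMin = -700, intMax = 700;       // exp(n) stays within double range
const expInt = new Float64Array(intMax - intMin + 1);
for (let n = intMin; n <= intMax; n++) {
  expInt[n - intMin] = Math.exp(n);
}
const nFrac = 1000;                      // table size sets the precision
const expFrac = new Float64Array(nFrac + 1);
for (let i = 0; i <= nFrac; i++) {
  expFrac[i] = Math.exp(i / nFrac);
}

function fastExp(x) {
  const n = Math.floor(x);               // largest integer not above x
  return expInt[n - intMin] * expFrac[Math.round((x - n) * nFrac)];
}
```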

What is the accuracy? The exponential function varies strongly and it is more reasonable to look at its relative error. Using a table of 1000 elements for exp(x-n), we get a relative error of 0.00025 for the simple nearest point approximation. The linear interpolation is better, with an error of only 6e-8. That’s good enough for most calculations. Because d²exp(x)/dx²=exp(x) we can again improve the linear interpolation to get a quadratic approximation. Its error is only 4e-12.

The timing for 100 million function evaluations is again very interesting. Chrome seems to be a bit faster than Chromium. It takes about 1 second for the empty loop. Evaluating the original Math.exp() function takes an additional 10 seconds. Our approximations are about four times faster: the simple approximation takes 2.1 seconds, the linear interpolation 2.4 seconds and the “quadratic” approximation 2.6 seconds.

Again, Firefox is faster. Doing the empty loop takes only 0.3 seconds. The Math.exp() needs only 2.5 seconds more and is four times faster than in Chrome! The approximations are five times faster. The simple approximation takes 0.3 seconds, the linear interpolation 0.5 seconds and the “quadratic” approximation 0.7 seconds.

It seems best to use Firefox and the linear approximation.

Posted in programming

Fast approximations of the sine and cosine functions

I made a web page to check out the different approximations of the sine and cosine functions. Load it and make your own tests. You can use the “save” function of your browser to download the code, then change and use it for your own projects.

Because cos(x)=sin(x+π/2) we only need a table for the sine function. It is periodic over 2π and we use a table for this range. We could save memory with the symmetries sin(x+π)=-sin(x) and sin(π/2-x)=sin(π/2+x), but this considerably slows down the computation. The length of the table should be a power of 2, plus one extra entry for linear interpolation. Then we can use the bitwise-and (&) operation to impose periodicity instead of the much slower modulus (%) operation.
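A minimal sketch of such a table, assuming a length of 4096 plus one entry; the actual program may differ:

```javascript
// Sine table over one period 2π; length is a power of 2 (plus one entry),
// so bitwise-and with (nSin - 1) imposes periodicity.
const nSin = 4096;
const sinTable = new Float32Array(nSin + 1);
const factor = nSin / (2 * Math.PI);
for (let i = 0; i <= nSin; i++) {
  sinTable[i] = Math.sin(i / factor);
}

function fastSin(x) {                     // nearest-point lookup
  return sinTable[Math.round(x * factor) & (nSin - 1)];
}

function fastCos(x) {                     // cos(x) = sin(x + π/2)
  return fastSin(x + Math.PI / 2);
}
```

Note that the bitwise-and also works for negative arguments, because JavaScript applies it to the 32-bit two's-complement representation.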

For checking the accuracy we note that for all derivatives of the sine function the largest value is equal to 1. Using a table of 4097 elements we get an error of the nearest value approximation of 0.0007. This agrees with the results of the last post. This error might be too big, depending on your application.

The linear interpolation is better with an error of less than 3e-7. This is good enough for all calculations. To get a similar accuracy with the simple nearest value approximation we would need a table of 8 million entries.

We can estimate the second derivative with d²sin(x)/dx² ≅ -0.5*(sin(x_i)+sin(x_(i+1))) and improve on the linear approximation. This gives rather accurate results with an error of less than 3e-11. I conclude that we can get very fast and accurate approximations of the sine and cosine functions with only small tables of saved results.

But how fast are these approximations? I used 100 million function evaluations to get reliable results. My computer is an average laptop with an Intel i7-3610QM CPU at 2.3 GHz and 8 GB memory, running Linux Mint 64-bit.

All browsers take about 11 seconds for the standard Math.sin(x) function. But then I got a surprise: Firefox seems to be about 5 times faster than Chrome, Chromium or Opera. Doing an empty while-loop or an empty for-loop 100 million times takes about 0.3 seconds in Firefox and 1.4 seconds in the other browsers. Where does this difference come from? I cannot explain it and I am confused.

Chrome, Chromium and Opera take about 3.5 seconds to do the while loop with the function approximations. Subtracting the loop overhead, this leaves about 2 seconds for 100 million function evaluations and an acceleration by a factor of 5. There is no significant difference between the different approximations. Why?

Firefox is much faster. The loop with the simple nearest point lookup takes only 0.35 seconds. Without the loop overhead that makes 0.05 seconds per 100 million function evaluations, 200 times faster than Math.sin! The linear interpolation takes 0.48 seconds in total, making 0.18 seconds for the function evaluations and a 60-fold acceleration. That is a great improvement! Similarly, the improved “linear” interpolation takes 0.3 seconds for the function evaluations, a 30-fold acceleration. Note that the speed of these approximations decreases as expected.

In conclusion, we get a big acceleration of sine and cosine functions using Firefox and linear interpolation.



Posted in programming

Accelerating functions with tables

To get fast function evaluations we use tables of function values f(x_i) at equidistant points x_i=i*Δ. Taking for any x the nearest point x_i with |x_i-x|≤Δ/2 we can approximate f(x)≅f(x_i). This is the fastest but least accurate approximation. What is its error?

Assuming that the first derivative (df/dx) is a constant, we get that the error of this approximation is

|f(x)-f(x_i)|≅0.5Δ*(df/dx) at x-x_i=Δ/2.

This means that the error is inversely proportional to the size of the table and that we need huge tables to get reasonably accurate approximations. Can we do better?

Linear interpolation gives a far better approximation. We use the table entries below and above x, with x_i<x<x_(i+1), and an interpolation parameter δ=x-x_i. The approximation is then

f(x) ≅ f(x_i) + (δ/Δ)*(f(x_(i+1))-f(x_i)),

which is also very fast. How accurate is this approximation?

This linear approximation has roughly the same first derivative as the exact function, and thus the error now depends on the second derivative (d²f/dx²). Assuming that it is a constant we get

error(δ) ≅ 0.5*δ*(Δ-δ)*(d²f/dx²),

which has a maximum at δ=0.5Δ. Its value is

(Δ²/8)*(d²f/dx²).

The error of the linear approximation is roughly the square of the small error of the nearest point approximation and thus becomes much smaller. It takes about three times the computation time and should be preferred.

If we have a simple approximation of the second derivative, then we can make an even better approximation with

f(x) ≅ f(x_i) + (δ/Δ)*(f(x_(i+1))-f(x_i)) - 0.5*δ*(Δ-δ)*(d²f/dx²).

Its error is proportional to the third derivative and only about Δ³*(d³f/dx³).
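The three approximations can be compared in a small JavaScript sketch, here using the sine function as an example; the table size and the names are arbitrary:

```javascript
// Nearest-point, linear and "quadratic" table approximations of Math.sin.
const n = 256;
const delta = 2 * Math.PI / n;          // the spacing Δ
const table = new Float64Array(n + 2);
for (let i = 0; i < n + 2; i++) table[i] = Math.sin(i * delta);

function nearest(x) {                   // fastest, least accurate
  return table[Math.round(x / delta)];
}

function linear(x) {                    // error about Δ²/8 * f''
  const i = Math.floor(x / delta);
  const d = x / delta - i;              // δ/Δ in [0,1)
  return table[i] + d * (table[i + 1] - table[i]);
}

function quadratic(x) {                 // estimates d²sin/dx² from the table
  const i = Math.floor(x / delta);
  const d = x / delta - i;
  const second = -0.5 * (table[i] + table[i + 1]);
  return linear(x) - 0.5 * d * (1 - d) * delta * delta * second;
}
```

With this small table the error shrinks by several orders of magnitude at each step, as the formulas above predict.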

So we have three different approximations and a trade-off between computation time, accuracy and memory requirements. The ideal choices for the method and the table size depend on the hardware, the programming language and the compiler or interpreter. You will see a surprising example in the next post.

Posted in programming

Numerical performance

Curves do not need many calculations and are easy to generate. Rosettes, friezes and kaleidoscopes are different: they need many calculations for each pixel, often with several evaluations of trigonometric and exponential functions. Fortunately, our PCs are fast. A simple timing experiment shows that they can do approximately 10 million evaluations of the sine function per second. Is this fast enough?
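Such a timing experiment is easy to reproduce; a minimal sketch:

```javascript
// minimal timing of 10 million sine evaluations
const t0 = Date.now();
let sum = 0;                       // accumulate so the loop is not optimized away
for (let i = 0; i < 10000000; i++) {
  sum += Math.sin(i);
}
const seconds = (Date.now() - t0) / 1000;
console.log("10 million sine evaluations took", seconds, "seconds");
```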

Not really; it depends on the results you want. Typical low-resolution images for the web have 512*512 pixels, a quarter of a million pixels. They need some tenths of a second to generate, which is tolerable. But if you want high-resolution images similar to those of digital cameras, then you get around ten million pixels. This would make you wait for tens of seconds. Sure, that’s boring and we want to do better.

We can use tables to make much faster approximations of functions, as I will discuss in forthcoming posts. And we will see that there is a large difference in the speed of different browsers. This is important too for other image creation methods such as generative design and evolutionary creation.

Posted in programming