To satisfactorily explain the flying carpet, we have to go back to the beginning: the logistic map (although you can click on the images above if you want the pretty animations right away). Imagine a family of perfectly spherical frogs living in a pond. The water is clean, food plentiful, and there is so much room that the frogs multiply to their tiny hearts' content. In other words, each couple produces many (more than 2 on average) offspring, making the number grow geometrically. If \(x\) is the number of frogs in one generation, there will be \(kx\) in the next generation, with \(k=1\) meaning that each frog is replaced by just one tadpole, and the total population does not change; \(k=2\) would mean each couple is replaced by 4 tadpoles and so on. We can write it as a function rule: \(x\mapsto kx\).
This model is not very realistic (except for small populations in humongous ponds), and we have to include some mechanism that reflects the finite space and resources at least. So let's say that the more frogs there are, the harder it is for them to get food, the dirtier the water gets, and the less space there is to stretch their legs. We are not going to build a full, intricate model, just take into account some natural capacity of the pond. If there is some maximal number of frogs that the pond can comfortably contain, we can express our \(x\) as a fraction of that number, so that \(x\in[0,1]\). We can then say that the rate of growth is not constant, but that it diminishes as \(x\) grows. The easiest choice is to make it diminish linearly, so that we change \(k\) to \(k(1-x)\), meaning that when \(x\) is close to 1, the growth is almost 0, while for very small \(x\), the growth is about \(k\). Our new improved equation can now be written as $$x \mapsto kx(1-x)\quad\text{or}\quad x_{t+1} = kx_t(1-x_t).$$ The second form labels the population with a discrete time \(t\), which increases by 1 for each generation. It will be convenient to define \(f(x):=kx(1-x)\).
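The iteration itself is a couple of lines of Python; a minimal sketch (the names `logistic` and `orbit` are mine, not part of the model):

```python
def logistic(x, k):
    """One generation of the logistic map: x -> k*x*(1-x)."""
    return k * x * (1 - x)

def orbit(x0, k, n):
    """The populations x_0, x_1, ..., x_n as a list."""
    xs = [x0]
    for _ in range(n):
        xs.append(logistic(xs[-1], k))
    return xs

# e.g. a small population with k = 2 grows toward an equilibrium
print(orbit(0.05, 2.0, 10))
```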
A lot of interesting mathematics happens even for this simple equation, and for now I will only cover some very basic facts. First and foremost, the behaviour of the solution depends on the value of \(k\), and the deceptively simple polynomial hides a plethora of scenarios. Here are four examples of what could happen as we increase \(k\).
The first two show an inevitable evolution towards a stable point, where the population never changes. This point is given by the equation \(x = kx(1-x)\), whose one obvious solution is \(x=0\) (a rather sad, empty pond), and the other, more interesting solution is \(x_c=1-1/k\). The above scenarios differ in that the first shows a monotonic increase, which, strictly speaking, never reaches \(x_c\), except as a limit for \(t\rightarrow\infty\). The second, on the other hand, oscillates around \(x_c\), getting ever closer. The values of \(k\) are 2.1 and 2.9 respectively, and that's why the limits are different in the graphs.
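The fixed-point formula is easy to check numerically; a sketch in Python (the helper `settle` and its cutoffs are my own choices):

```python
def f(x, k):
    """The logistic map step."""
    return k * x * (1 - x)

def settle(k, x0=0.1, n=500):
    """Iterate long enough for the transient to die out."""
    x = x0
    for _ in range(n):
        x = f(x, k)
    return x

# for 1 < k < 3 the orbit converges to x_c = 1 - 1/k
for k in (2.1, 2.9):
    print(k, settle(k), 1 - 1 / k)
```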
These two show solutions that have more interesting limits. The first is winding onto a limit cycle: its “amplitude” increases, but not without bounds, as it gets closer and closer to switching between two values (again, without ever reaching them); the second is a periodic solution itself: it endlessly goes through 4 distinct values of \(x\). Of course, the limit cycle itself is a periodic solution, but it is instructive to see not the periodic behaviour itself, but rather that the trajectory can, in a manner of speaking, emanate from the critical point \(x_c\) instead of collapsing onto it, as was the case before. The specific values of \(k\) here were 3.1 and 4.
But these are just images, obtained with floating point calculation to make things worse, so can we be sure that this is not some kind of artifact? In fact, it is quite easy to find the exact periodic orbits. Recall that the fixed point \(x_c\) is a solution of \(x=f(x)\), which is a quadratic equation. We can likewise ask for a solution of \(x = f(f(x))\), i.e. a point that returns to its origin after two transformations. And yes, \(x_c\) trivially satisfies the second equation too, so we have to look among all solutions. Fortunately \(f(f(x))\) is a quartic polynomial, so we can hope for as many as 4 solutions between 0 and 1 (depending on the value of \(k\)). For the right image, a solution of \(x=f(f(f(f(x))))\) had to be found numerically.
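For the period-2 orbit the quartic even factorizes: dividing out the two fixed points leaves a quadratic with roots \(x_\pm = \frac{k+1\pm\sqrt{(k+1)(k-3)}}{2k}\), real for \(k \ge 3\). A quick Python check that \(f\) really swaps these two values:

```python
import math

def f(x, k):
    return k * x * (1 - x)

k = 3.1
d = math.sqrt((k + 1) * (k - 3))
x_hi = (k + 1 + d) / (2 * k)
x_lo = (k + 1 - d) / (2 * k)

# f maps each point of the 2-cycle onto the other
print(f(x_hi, k), x_lo)
print(f(x_lo, k), x_hi)
```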
Now the spiders have a very useful perspective on this, observing from a distance. There is only one function (transformation rule) used repeatedly \(f(x)=kx(1-x)\), and it's straightforward to plot it. But can the iterated process be included on such a graph, where the current value of \(f(x_t)=x_{t+1}\) must become the argument in the next step \(x_{t+2}=f(x_{t+1})\)? It turns out it can all fit into one cobweb diagram.
Let's take it apart bit by bit. We start with the initial population of \(x_0\) which should turn into \(x_1=f(x_0)\). We depict it by marking \(x_0\) on the \(x\) axis, and find the value of \(f\) through its graph (the blue parabola), following the arrow. The population is now \(x_1\), but that point we just found is \((x_0, x_1)\), so it's the ordinate that we want as our next argument, and we are no longer interested in the abscissa of \(x_0\). To “move” \(x_1\) back to the \(x\) axis, we use the auxiliary orange line, which is simply the graph of \(y=x\). Passing along the horizontal arrow we land at the point with coordinates \((x_1,x_1)\), whose abscissa (indicated by the vertical dashed line) is \(x_1\) like we wanted. This means we can apply \(f\) again by following the vertical arrow until it crosses the parabola -- the new point will have \(x_1\) as the abscissa and \(x_2=f(x_1)\) as its ordinate. From there the process continues by repetition.
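The construction translates directly into code: each step contributes one vertical move (to the parabola) and one horizontal move (to the diagonal \(y=x\)). A Python sketch (the name `cobweb` is mine) producing the vertices of the path, ready to be fed to any plotting library:

```python
def f(x, k):
    return k * x * (1 - x)

def cobweb(x0, k, n):
    """Vertices of the cobweb path, alternating between
    the parabola y = f(x) and the diagonal y = x."""
    pts = [(x0, 0.0)]
    x = x0
    for _ in range(n):
        y = f(x, k)
        pts.append((x, y))  # vertical move to the parabola
        pts.append((y, y))  # horizontal move to the diagonal
        x = y
    return pts
```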
Why is it called a cobweb diagram? Let's review the previous four scenarios from the spiders' perspective:
The intersection of the blue and orange lines is the critical point, and we clearly see the path tending to it via a staircase or spiraling onto it (if spirals can be square).
Here it's even more spider-like, winding out of the critical point, ever closer to the limit cycle, which looks like a rectangle. And a purely periodic trajectory is just a self-intersecting polygon — it has 4 vertices on the parabola, hence the period of 4 (remember that the orange line is auxiliary and so are the vertices on it).
But a really neat cobweb appears when we start in a generic (i.e. not periodic) point, which is almost everywhere on the \((0,1)\) segment — periodic orbits are special after all. Just look what happens when we take a point slightly off the above periodic solution, and keep on iterating (click to reset):
What a mess! Indeed \(k=4\) is well above the threshold for the system to be chaotic. This means that two (arbitrarily) close points will diverge exponentially (at least at first), and that almost all initial points will explore the whole available range of \([0,1]\). There will also be infinitely many periodic orbits of any length.
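The exponential divergence is easy to witness numerically; a Python sketch (the starting points and step count are arbitrary choices of mine):

```python
def f(x):
    return 4 * x * (1 - x)

a, b = 0.3, 0.3 + 1e-10   # two almost identical initial populations
gaps = []
for _ in range(60):
    a, b = f(a), f(b)
    gaps.append(abs(a - b))

# the separation grows by roughly a factor of 2 per step,
# until it saturates at the size of the whole interval
print(gaps[0], max(gaps))
```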
One more thing that will reappear in the 2-dimensional case is the question of the asymptotic distribution of points. That is, as the spider keeps spinning its web, there appear darker and lighter areas. Perhaps in its chaotic journey the point stays at some locations more often than at others? In other words, is there some probability distribution associated with the map? Let's perform a numerical experiment: divide the interval \([0,1]\) into 100 bins and count how many times the point visits each bin. After 100 thousand iterations we get this distribution:
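The experiment is a few lines of Python (a sketch; the bin count and starting point are arbitrary):

```python
def f(x):
    return 4 * x * (1 - x)

bins = [0] * 100
x = 0.123
for _ in range(100_000):
    x = f(x)
    bins[min(int(x * 100), 99)] += 1

# the bins near 0 and 1 are visited far more often than the middle ones
print(bins[0], bins[50], bins[99])
```

The U-shape is no accident: for \(k=4\) the invariant density is known in closed form, \(\rho(x) = \frac{1}{\pi\sqrt{x(1-x)}}\), which blows up at both endpoints.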
Let's plot them all at once! Because we need one graph dimension to run over the values of \(k\), and another to denote the bins of \(x\), the actual height of the bins will be marked with color (if we want a 2D drawing) — like a topographic map. Thus, the above graph would just be a very thin strip at \(k=4\), black at its edges where the bins are high, and almost white in the middle, where they are low. Taking about 2000 bins for \(x\), and 40000 values of \(k\) between 3 and 4, the “map” looks like this (\(k\) is the abscissa):
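Each vertical strip of that map comes from the same recipe; a Python sketch (the helper `attractor_values` and its cutoffs are my own choices) that counts how many distinct values an orbit settles onto:

```python
def f(x, k):
    return k * x * (1 - x)

def attractor_values(k, x0=0.5, warmup=1000, keep=200):
    """Discard the transient, then record where the orbit lands."""
    x = x0
    for _ in range(warmup):
        x = f(x, k)
    seen = set()
    for _ in range(keep):
        x = f(x, k)
        seen.add(round(x, 6))
    return sorted(seen)

# period doubling: 1 value, then 2, then 4, ... as k grows
print(len(attractor_values(2.9)),
      len(attractor_values(3.2)),
      len(attractor_values(3.5)))
```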
A whole discussion of the bifurcation diagram, period doubling, and the Feigenbaum constants could begin here, but we would never get to the two-dimensional case then. So have a look at the Wikipedia article on the logistic map (and the dreaded references therein). Among the interesting facts are:
- The function \(f\) does not need to be a smooth parabola. It can be just two straight lines shaped like \(\land\), i.e. the tent map, for which the cobweb is as intricate (warning: page in Japanese).
- When \(k=4\), the solution can be given explicitly: \(x_t = \sin^2\left(2^t\theta_0\right)\), where the initial angle is determined through \(\theta_0=\arcsin\left(\sqrt{x_0}\right)\). In no way does this alleviate the chaotic features of the system: try calculating the (naive) error propagation for \(t=100\).
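The closed-form solution for \(k=4\) can be checked against direct iteration, at least for small \(t\); a Python sketch:

```python
import math

def f(x):
    return 4 * x * (1 - x)

x0 = 0.2
theta0 = math.asin(math.sqrt(x0))

x = x0
for t in range(1, 11):
    x = f(x)
    closed = math.sin(2 ** t * theta0) ** 2
    # the two columns agree while t is small; for large t the factor 2^t
    # amplifies any rounding error in theta0 and the agreement is lost
    print(t, x, closed)
```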
It is now time to address the tiny elephant in the room, or, should I say, the fly in the pond. Frogs have to eat, and it just so happens that the food is an organism that deserves its own equation! Not to overcomplicate things, let's approach it from a theoretical point of view. We want a set of equations that has the population of flies, \(y\), alongside the population of frogs, \(x\), and they need to depend on each other. So the frog-function \(f\) will now have two arguments: \(f(x,y)\), and likewise for the flies with some \(g(x,y)\).
Keeping things simple, let's assume both functions are polynomials of degree 2 (as before), and that they keep the factor responsible for geometric growth. That is to say, \(f\) is proportional to \(x\) and \(g\) is proportional to \(y\), when other factors are negligible. This leads to a pretty specific form: $$\begin{aligned} x_{t+1} &= x_t(S - \alpha x_t - \beta y_t),\\ y_{t+1} &= y_t(K - \gamma y_t - \delta x_t), \end{aligned} $$ where \(S\), \(K\), \(\alpha\), \(\beta\), \(\gamma\), and \(\delta\) are parameters that should, in principle, be determined by biological considerations. For example, taking \(S=\alpha=1\), and \(\beta=0\), decouples the frog subsystem, yielding the simple logistic map for \(x\). More realistically, we want the parameters to reflect dependencies such as: if there are too many frogs, most flies get eaten, and the next generation is tiny (so \(\delta\) should be relatively large). Or that the pond can maintain many more flies than frogs, so \(K > S\), etc.
I can now divulge the secret formula behind the flying carpet itself: $$\begin{aligned} x_{t+1} &= 3.737x_t(1 - 1.17 x_t - 0.15 y_t),\\ y_{t+1} &= 3.737y_t(1 - 1.09 y_t), \end{aligned}$$ though it is not enough by itself to produce the images above, so let's try to replicate some of the steps taken when studying the logistic map. The first obvious question is how to even visualize the transformation \((x,y)\mapsto (f(x,y),g(x,y))\). Each of the functions alone can be shown in a “3D” plot, but a true graph would have to be 4-dimensional: transforming a (2D) plane into a (2D) plane. We can't fit that on a screen, and no simple analogue of the blue parabola of the previous sections exists.
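Note that the fly equation above stands on its own, and it is the logistic map in disguise: substituting \(u = 1.09\,y\) turns \(y \mapsto 3.737\,y(1-1.09\,y)\) into \(u \mapsto 3.737\,u(1-u)\). A Python sketch checking this (the function names are mine):

```python
def step(x, y):
    """One generation of the frog-fly system with the article's parameters."""
    return (3.737 * x * (1 - 1.17 * x - 0.15 * y),
            3.737 * y * (1 - 1.09 * y))

def logistic(u, k=3.737):
    return k * u * (1 - u)

# run the fly subsystem and the rescaled logistic map side by side
y = 0.3
u = 1.09 * y
for _ in range(20):
    _, y = step(0.0, y)   # the y-equation does not depend on x
    u = logistic(u)
print(1.09 * y, u)  # the two agree to high accuracy
```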
What we can try to do is show how (a patch of) a plane is deformed. First, place a piece of graph-ruled paper over the \([0,1]\times[0,1]\) square in the \((x,y)\) plane. Next, transform all the points in the square, carrying the grid along. Because time is discrete in the model, it should be a jump between two states, but to better see which points are going where, we can pretend there's a gradual transition. The simplest choice is a linear parametrization \((1-s)P_0 + sP_1\), which starts at the point \(P_0\) for \(s=0\) and finishes at \(P_1\) for \(s=1\), but also makes sense for all intermediate values of \(s\) — unlike the basic system with only integer \(t\). When this is done for all points we wish to follow (the grid), we get a frame for each value of \(s\). But before collecting them into an animation, one more decomposition can be made.
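The interpolation itself is one line of Python; a minimal sketch (the name `lerp` and the example points are mine):

```python
def lerp(p0, p1, s):
    """The intermediate state (1-s)*P0 + s*P1 for s in [0, 1]."""
    return tuple((1 - s) * a + s * b for a, b in zip(p0, p1))

# eleven frames carrying a point from P0 to P1
P0, P1 = (0.2, 0.8), (0.6, 0.4)
frames = [lerp(P0, P1, s / 10) for s in range(11)]
```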
Because \(g(x,y)\) does not depend on \(x\) in this particular case, the transformation of \(x\) can be performed first, followed by that of \(y\). The details are not that important, but thanks to separating those stages, you can clearly notice two phases of folding in the resulting animation:
Although we can sort of make out the stretching, folding and overlaying of the initial grid, the final image is a bit cluttered, so we can work a bit harder to spread it into 3 dimensions. It is a fiction in the sense that the actual system lives on the 2-dimensional plane, but we can imagine a piece of paper (or better yet of cloth) that is folded smoothly with all the layers separated. This is what it would look like immediately before being squashed back down:
Sadly, this can only be carried so far, and we soon run into the same difficulty as when trying to fold a newspaper sheet 50 times — its thickness exceeds the distance to the Moon (see also below, for folding a circle). We're stuck with observing the system projected onto one wall of the box (with thick axes above), which is its phase space anyway. And as the plane is much larger than a line, following just one point won't be enough to quickly see what's going on. But rather than using the grid from before, we can use something in between: many points scattered randomly (but uniformly) over the whole plane.
There is (at least) one problem with that. As the grid experiment shows, some of the points end up outside the unit square. And we insist on the square, remembering that \(x\) and \(y\) were supposed to be percentage populations. Another reason is that the points that fall outside the square start jumping further and further from the origin, and eventually escape to infinity. This in itself is very interesting: it leads to the Crab Fractal, which you have probably seen on the main page. It will have its own article; here let's stick to the points that stay inside the square.
When the points start in a particular subset (called the basin of attraction), they will tend to the attractor, if the system has one. The attractor is such that it is folded onto itself, with all the features of chaos: stretching and mixing. Yet the points again follow a distribution, just as in the 1-dimensional case. This time, however, we are forced from the beginning to “look down” onto the state space, and use color to code for frequency (bin counts). The darker the pixel, the more frequently points visit the spot. This is exactly what the three images at the very top show.
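A minimal sketch of the accumulation in Python (the helper names and cutoffs are mine; a starting point is simply retried if its orbit escapes the square):

```python
import random

def step(x, y):
    """One generation of the frog-fly map (the article's parameters)."""
    return (3.737 * x * (1 - 1.17 * x - 0.15 * y),
            3.737 * y * (1 - 1.09 * y))

def density(x, y, n_bins=100, n_iter=20000, transient=200):
    """Histogram of the orbit of (x, y); None if the orbit escapes."""
    counts = [[0] * n_bins for _ in range(n_bins)]
    for t in range(n_iter):
        x, y = step(x, y)
        if not (0.0 <= x <= 1.0 and 0.0 <= y <= 1.0):
            return None  # this start belongs to the escaping set
        if t >= transient:
            i = min(int(y * n_bins), n_bins - 1)
            j = min(int(x * n_bins), n_bins - 1)
            counts[i][j] += 1
    return counts

# keep drawing random starts until one stays inside the square
random.seed(2)
counts = None
while counts is None:
    counts = density(random.random(), random.random())
```

Rendering `counts` as a grayscale image (darker for higher counts) gives pictures like the three at the top.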
And I really think it's on the black background that the attractors become truly beautiful. So that's how the animations are presented. Let me explain what the buttons do. All of their functions correspond to what we have seen in the logistic case. You can take a single STEP, to just transform all the current points instantly. But you can also FOLD them, showing a continuous transformation between steps. These are fictitious, but show clearly which point goes where. If you don't want to keep clicking, you can RUN the animation, which will then display just the discrete steps; or you can make the points TUMBLE, to display the continuous folding in-between. But the attractor itself requires accumulating the bin counts, without showing the points themselves. You can sort of make out the attractor when the usual animation is running, but it is too grainy — hence the CUMMULATE button.
Here are some final, miscellaneous topics and animations.
One could ask not just about points, but complete curves being transformed. In particular, what happens to a circle upon subsequent transformation. Why a circle? Because it is the simplest curve that could give a sense of direction of travel, a sort of flow on the attractor.
A strange attractor is not a 1-dimensional curve, but for the purpose of visualization we are dealing with concrete approximations. We accept that after 100 or 1000 iterations any set of starting points ends up on the attractor — be they random or arranged on a circle. The latter has the advantage of being continuously parametrized, and since the whole transformation is continuous, so is the result. After 100 iterations it will be given by some nasty expression, several pages long, but still a continuous curve, so we will be able to show how a point, initially moving around the circle, moves along the resulting curve, indistinguishable from the attractor. Let's see first how the circle gets folded (not to scale).
Now let's place some points on the circle, connect them with an arc, and let them travel around. This is what it looks like on the attractor:
An even more general, and difficult question is the dependence of the attractor on the parameters. Or, indeed, its existence in the first place. This deserves its own article, so for now let's just see what happens when the parameters vary continuously. For each set of values we can draw the attractor, and then collect all drawings into an animation. The animation's “time”, call it \(s\), is just one variable, so it's not like we can vary all six of the model's parameters independently to see all possible changes at once. We have to choose some path in this six-dimensional space, but we can at least make it interesting: some sort of loop that departs in one direction, changes several parameters along the way, and finally returns to its origin from a different direction. The animation will then repeat, without being a simple to-and-fro.
For example, take the following circle in the \((\alpha, \gamma)\) plane $$ \alpha(s) = \alpha_0 + r\cos(s), \quad \gamma(s) = \gamma_0 + r\sin(s),$$ and similarly for \((\beta,\delta)\). Some trial and error is needed to find “nice” values for \(\alpha_0\), \(\gamma_0\) and \(r\) — sometimes the attractor breaks along the way, or degenerates into single points. But the effort pays off; here are two results (best viewed full-screen):
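A sketch of such a loop in Python (the base values and radius here are illustrative guesses, not the ones used for the animations):

```python
import math

def loop_params(s, alpha0=1.17, gamma0=1.09, r=0.05):
    """A closed loop in the (alpha, gamma) parameter plane,
    traversed as s runs from 0 to 2*pi."""
    return (alpha0 + r * math.cos(s), gamma0 + r * math.sin(s))

# s = 0 and s = 2*pi give the same parameters, so the animation loops seamlessly
a0, g0 = loop_params(0.0)
a1, g1 = loop_params(2 * math.pi)
```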