All posts by @aschinchon

The Chaos Game: an experiment about fractals, recursivity and creative coding

Mathematics, rightly viewed, possesses not only truth, but supreme beauty (Bertrand Russell)

You have a pentagon defined by its five vertices. Now, follow these steps:

  • Step 0: take a point inside the pentagon (it can be its center if you want to keep it simple). Keep this point in a safe place.
  • Step 1: choose a vertex at random and take the midpoint between the vertex and the original point. Keep this new point as well. Repeat Step 1 one more time.
  • Step 2: compare the last two vertices you have chosen. If they are the same, choose another vertex with this condition: if it is not a neighbor of the last vertex you chose, keep it; if it is a neighbor, keep choosing vertices at random until you get one that is not a neighbor. Then take the midpoint between the last point you obtained and this new vertex. Keep this new point as well.
  • Step 3: Repeat Step 2 a number of times and, after that, plot the set of points you obtained (a plain-R sketch of these steps follows this list).
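
The post generates the points with Rcpp (more on that below), but just to make the procedure concrete, here is my own plain-R sketch of the steps above. It is slow, so it uses far fewer points than the final image:

library(tidyverse)

n_points <- 50000                 # far fewer than the 10 million of the final image
vertices <- tibble(x = cos(2 * pi * (0:4) / 5), y = sin(2 * pi * (0:4) / 5))

# Two vertices of the pentagon are neighbors if their indices differ by 1 (mod 5)
is_neighbor <- function(i, j) min((i - j) %% 5, (j - i) %% 5) == 1

x <- y <- numeric(n_points)
x[1] <- 0; y[1] <- 0              # Step 0: start at the center
last <- c(NA, NA)                 # the two most recently chosen vertices

for (k in 2:n_points) {
  v <- sample(5, 1)
  # Step 2: if the two previous vertices coincide, the new one cannot be
  # a neighbor of the last vertex chosen
  if (!anyNA(last) && last[1] == last[2]) {
    while (is_neighbor(v, last[2])) v <- sample(5, 1)
  }
  x[k] <- (x[k - 1] + vertices$x[v]) / 2   # midpoint towards the chosen vertex
  y[k] <- (y[k - 1] + vertices$y[v]) / 2
  last <- c(last[2], v)
}

ggplot(tibble(x, y), aes(x, y)) +
  geom_point(size = 0.1, alpha = 0.3) +
  coord_equal() +
  theme_void()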

If you repeat these steps 10 million times, you will obtain this stunning image:

I love the incredible ability of maths to create beauty. More concretely, I love how repeating extremely simple operations can take you to unexpected places. Would you expect the image created by this naive algorithm to look like that? I wouldn't. Even knowing the result, I cannot imagine how those simple steps can produce it.

The image generated by all the points repeats itself at different scales. This characteristic, called self-similarity, is a property of fractals and makes them extremely attractive. Step 2 is the key to defining the shape of the image. Apart from comparing the two previous vertices as defined in the algorithm above, I implemented two other versions:

  • one version where the currently chosen vertex cannot be the same as the previously chosen vertex.
  • another one where the currently chosen vertex cannot neighbor the previously chosen vertex if the three previously chosen vertices are the same (note that this implementation is the same as the original but comparing the three previous vertices instead of two).

These images are the result of applying the three versions of the algorithm to a square, a pentagon, a hexagon and a heptagon (a row for each polygon and a column for each algorithm):

From a technical point of view, I used Rcpp to generate the set of points. Since each iteration depends on the previous one, the loop cannot easily be vectorised, and C++ is a perfect option to avoid the bottleneck that other ways of iterating would create. In this case, instead of writing the C++ directly inside the R file with cppFunction(), I wrote the C++ code in a stand-alone file called chaos_funcs.cpp, which I load into R using sourceCpp().
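
If you have never used this workflow, it boils down to something like the following (the name and signature of the exported function below are hypothetical, not the actual ones in chaos_funcs.cpp):

library(Rcpp)

# Compile the stand-alone C++ file and expose its exported functions to the R session
sourceCpp("chaos_funcs.cpp")

# Call a hypothetical exported function that returns the simulated points
points <- createPoints(10000000)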

Some days ago, I gave a tutorial at the coding club of the University Carlos III in Madrid, where we worked with the integration of C++ and R to create beautiful images of strange attractors. The tutorial and the code we developed are here. You can also find the code of this experiment here. Enjoy!

Colonizing Franky

Y otra vez me arranco despacito, al sentir que nada necesito (Locura transitoria, Extremoduro)

One of my favorite sites on the Internet is Algorithmic Botany. It's always a source of inspiration for me. I recently discovered there the space colonization algorithm, concretely in this paper. Originally, the algorithm was developed to simulate leaf venation patterns as well as the branching structure of trees, and it works by simulating the competition for space between growing veins (or branches). Given an initial set of attractor points (3,000 points in my case) and an initial node (also a point, located randomly inside the picture), the algorithm performs the following steps iteratively (a simplified plain-R sketch comes after the list):

  • measure the distances between attractors and nodes
  • assign the closest node to each attractor
  • keep just those (node, attractor) pairs whose distance is between a minimum and a maximum
  • normalize the vectors and obtain the average unit vector for each node
  • create a new set of nodes using the previous vectors and a predefined length
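
This is not the post's code, but a much simplified sketch that makes the five steps concrete. Everything lives in the unit square, and all parameter names and values are illustrative:

library(tidyverse)

n_attractors <- 3000
d_min  <- 0.02    # attractors closer than this to their node are considered reached
d_max  <- 0.30    # pairs farther apart than this are ignored
len    <- 0.015   # length of each new branch segment
n_iter <- 150

set.seed(1)
attractors <- tibble(x = runif(n_attractors), y = runif(n_attractors))
nodes <- tibble(x = 0.5, y = 0.5, parent = NA_integer_)

for (i in seq_len(n_iter)) {
  if (nrow(attractors) == 0) break
  # Step 1: distances between every attractor (rows) and every node (columns)
  dists <- sqrt(outer(attractors$x, nodes$x, "-")^2 +
                outer(attractors$y, nodes$y, "-")^2)
  # Step 2: assign the closest node to each attractor
  closest   <- apply(dists, 1, which.min)
  d_closest <- dists[cbind(seq_along(closest), closest)]
  # Step 3: keep only the pairs whose distance lies between d_min and d_max
  active <- d_closest >= d_min & d_closest <= d_max
  if (!any(active)) break
  # Steps 4 and 5: average the unit vectors pointing from each node towards
  # its active attractors and grow a new node at distance len
  growth <- tibble(node = closest[active],
                   dx = (attractors$x[active] - nodes$x[closest[active]]) / d_closest[active],
                   dy = (attractors$y[active] - nodes$y[closest[active]]) / d_closest[active]) %>%
    group_by(node) %>%
    summarise(dx = mean(dx), dy = mean(dy), .groups = "drop") %>%
    mutate(norm = sqrt(dx^2 + dy^2),
           x = nodes$x[node] + len * dx / norm,
           y = nodes$y[node] + len * dy / norm,
           parent = node)
  nodes <- bind_rows(nodes, select(growth, x, y, parent))
  # Remove attractors that have already been reached (a detail not listed above)
  attractors <- attractors[d_closest >= d_min, ]
}

# Draw every node connected to its parent
nodes %>%
  filter(!is.na(parent)) %>%
  mutate(xend = nodes$x[parent], yend = nodes$y[parent]) %>%
  ggplot() +
  geom_segment(aes(x = x, y = y, xend = xend, yend = yend)) +
  coord_equal() +
  theme_void()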

Once again, I used this image of Boris Karloff as Frankenstein's monster to experiment with the algorithm. I have to say that I coded my own version of the algorithm, which may not be the canonical one; it is inspired by the previous paper as well as by this coding challenge tutorial. This is the resulting drawing using my preferred parametrization. As you will see in the code, I superimpose several layers of colonizations. I love the resulting images:

Play with the parameters to create your own images. If you improve my code, do not hesitate to open a pull request on GitHub.

Clustering Frankenstein

Necesito para estar sentado, un arbolito en este descampado (Desarraigo, Extremoduro)

From time to time I come back to experiment with this stunning photograph of Boris Karloff as Frankenstein's monster. I have done several experiments with it previously: from decomposing it into Voronoi regions to drawing it as a single-line portrait using an algorithm that solves the travelling salesman problem. I also used this last technique to do a pencil portrait of the image. Today I will use a machine learning algorithm to reinterpret the monster once again. Concretely, I will use hierarchical clustering to produce drawings like this one:

The idea is simple: once the photograph is loaded, the first step is to binarize it into a black-and-white image using the threshold function of the imager package. After that, a random sample of black points is taken. Then comes the clustering algorithm, which starts by measuring the euclidean distance between each pair of points. A hierarchical clustering is then performed so I can reproduce how points are gathered by walking through the resulting dendrogram, starting from the maximum number of clusters (each cluster is an individual point) and ending with the minimum one (a single cluster containing the whole sample). The next image shows an example of this process for a sample of 25 points: the left plot shows the population of points and the right one shows how the points are connected once the dendrogram is traversed following the steps described before:
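
The following is a simplified sketch of that pipeline, not the post's code: the file name is hypothetical, I binarize by simply keeping dark pixels instead of calling imager's threshold function, and I connect the centroids of the two clusters joined at each merge of the dendrogram (the post's connection rule may differ):

library(imager)
library(tidyverse)

img <- load.image("frankenstein.jpg") %>% grayscale()

points <- as.data.frame(img) %>%
  filter(value < 0.5) %>%       # keep dark pixels
  sample_n(2000) %>%
  transmute(x, y = -y)          # flip y so the portrait is upright

# Hierarchical clustering on euclidean distances
hc <- hclust(dist(points))

# Walk the dendrogram from the bottom up: at each merge, connect the
# centroids of the two clusters being joined
n <- nrow(points)
members <- c(as.list(seq_len(n)), vector("list", n - 1))
curves <- list()
for (i in seq_len(n - 1)) {
  get_members <- function(k) if (k < 0) -k else members[[n + k]]
  m1 <- get_members(hc$merge[i, 1])
  m2 <- get_members(hc$merge[i, 2])
  members[[n + i]] <- c(m1, m2)
  c1 <- colMeans(points[m1, , drop = FALSE])
  c2 <- colMeans(points[m2, , drop = FALSE])
  curves[[i]] <- tibble(x = c1[["x"]], y = c1[["y"]], xend = c2[["x"]], yend = c2[["y"]])
}

ggplot(bind_rows(curves)) +
  geom_curve(aes(x = x, y = y, xend = xend, yend = yend),
             curvature = 0.3, alpha = 0.4) +
  coord_equal() +
  theme_void()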

Applying this technique to a large number of points (between 2,000 and 5,000) results in very interesting drawings. To make the process faster, I used the map function from the purrr package. To render the graph I use the ggplot function with geom_curve. This geometry draws a curve between two points, named (x, y) and (xend, yend) respectively. Among others, there are two important parameters that control its shape: curvature (negative values produce left-hand curves, positive values produce right-hand curves and zero produces a straight line) and angle (values less than 90 skew the curve towards the start point and values greater than 90 skew it towards the end point). Playing with these parameters, as well as with the sample size, you can generate a wide variety of drawings (note that only the curves appear here, since I removed the points at their ends):

You can find the code of this experiment here. If you do something interesting with it, please let me know. Thanks a lot for reading my post.

Rhombuses

For a lonely soul, you’re having such a nice time (Nothing in my way, Keane)

In my previous post, I created the P2 Penrose tessellation according to the instructions of this post. Now it’s time to create the P3 tessellation following the same technique I described already. This is the image of the P3 tessellation:

Note that all tiles are rhombuses. I admit that I like the P3 more than the P2, although I don't really know why. What about you? Here you have the code to play with it if you want.

Kites and Darts: the Penrose Tiling

Agarrada a mis costillas le cuelgan las piernas (Godzilla, Leiva)

Penrose tilings are amazing. Apart from the inherent beauty of tessellations, they have two interesting properties: they are non-periodic (they lack any translational symmetry) and self-similar (any finite region appears an infinite number of times in the tiling). Both characteristics make them mathematical objects that are chaotic and ordered at the same time, which makes them really appealing.

In this experiment I create Penrose tilings; concretely, the P2 tiling, following this article from Simon Tatham, which provides a perfect explanation of how these tessellations can be constructed. The code is available here, and you can use it to create Penrose tilings like this one:

I will not explain in depth how to build the P2 tiling, since the article I mentioned before does it perfectly. Instead of that, I will give some highlights of the process together with a brief explanation of the code involved in it.

Everything has to do with triangles. Concretely, everything has to do with two types of triangles. To differentiate them I label their sides with numbers: the first triangle has sides labelled 1, 2 and 3 and the other one has sides labelled 1, 2 and 4. Two triangles of type 123 form a kite like this:

On the other hand, two triangles of type 124 form a dart like this:

Actually, kites and darts don't contain their inner segments, so both of them are four-sided polygons. Building a Penrose tiling is an iterative process that begins with 5 kites (i.e. 10 triangles of type 123) gathered like this:

You can start with many other patterns, but this one results in a round-shaped tiling and I like it. I build the tiling by subdividing triangles as Simon describes in his article. A triangle of type 123 is subdivided into three triangles: two of type 123 and one of type 124. The following image shows a 123 triangle (left) and the result after its division (right):

A triangle of type 124 is subdivided into two triangles: one of type 123 and one of type 124. The following image shows a 124 triangle (left) and the result after its division (right):

In each iteration, all triangles are subdivided according to their type. After 5 iterations, the resulting pattern looks like this:

To make calculations easier I arranged the data frame following a segment structure, in which the sides of the triangles are defined by two coordinates: (x, y) and (xend, yend). The downside is that rounding problems appear after a few iterations: points that should be identical differ slightly because they come from different triangles. I fix this with hierarchical clustering, substituting points by their centroids after cutting the dendrogram at a very low threshold. Once this problem is solved, I can remove the inner segments of all kites and darts, which are the segments of type 3 or 4. Apart from removing them, I join the triangles to form four-sided polygons. All these tasks are done by the function Arrange_df (remember that the code is here). This is the result:
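
As an aside, a minimal sketch of the snapping step described above might look like this, assuming a data frame of points with x and y columns (in the real code it has to be applied to both ends of every segment); the function name and tolerance are illustrative:

library(dplyr)

# Replace nearly-identical points by the centroid of their cluster
snap_points <- function(points, tol = 1e-3) {
  hc <- hclust(dist(points[, c("x", "y")]))
  points %>%
    mutate(cluster = cutree(hc, h = tol)) %>%
    group_by(cluster) %>%
    mutate(x = mean(x), y = mean(y)) %>%
    ungroup() %>%
    select(-cluster)
}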

This pattern is quite similar to the previous one, but now the data frame is ready to be arranged as polygons using the function Create_Polygon. Finally, I calculate the area of each polygon with the Shoelace formula to create a column called area, which I use to fill the polygons with two nice colors.
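
For reference, the Shoelace formula for a polygon stored as a data frame of ordered vertices fits in a couple of lines; this is a generic sketch, not necessarily the code of the post:

library(dplyr)

# Half the absolute value of the sum of cross products of consecutive
# vertices (the vertices must be ordered along the polygon)
shoelace_area <- function(p) {
  0.5 * abs(sum(p$x * lead(p$y, default = first(p$y)) -
                lead(p$x, default = first(p$x)) * p$y))
}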

I hope these explanations help you to understand and improve the code, and invite you to create your own Penrose tilings.

Drrrawing with purrr

La luna es un pozo chico
las flores no valen nada
lo que valen son tus brazos
cuando de noche me abrazan
(Zorongo Gitano, Carmen Linares)

When I publish a post showing my drawings, I usually place some outputs, give some highlights about the techniques involved and link to the R code I wrote to generate them. That's my typical generative-art post (here you have an example). I think my audience knows how to program in R and is curious enough to run and modify the code to generate their own outputs. Today I will try to be more educational and will explain step by step how you can obtain drawings like these:

There are two reasons for this decision:

  • It can illustrate quite well my mental journey from a simple idea to what I think is an interesting enough experiment to publish.
  • I think that this experiment is a good example of the use of accumulate, a very useful function from the life-changing purrr package.

Here we go: there are many ways of drawing a pentagon in R. Below you will find a piece of code that does it using the accumulate function from the purrr package. I will use only two libraries for this experiment, ggplot2 and purrr, so I will just load the tidyverse (both libraries are part of it):

library(tidyverse)

pentagon <- tibble(
  x    = accumulate(1:4, ~.x+cos(.y*2*pi/5), .init = 0),
  y    = accumulate(1:4, ~.x+sin(.y*2*pi/5), .init = 0),
  xend = accumulate(2:5, ~.x+cos(.y*2*pi/5), .init = cos(2*pi/5)),
  yend = accumulate(2:5, ~.x+sin(.y*2*pi/5), .init = sin(2*pi/5)))

ggplot(pentagon)+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

The accumulate function applies some function sequentially a number of times, storing all the intermediate results. When I say sequentially I mean that the input of any step is the output of the previous one. The accumulate function uses internally two important arguments called .x and .y: my own way to understand them is that .x is the previous value of the output and .y is the current element of the vector that controls the iteration. Let's see an example: imagine that I want to create a vector with the first 10 natural numbers. This is an option:

> accumulate(1:10, ~.y)
 [1]  1  2  3  4  5  6  7  8  9 10

The vector which controls the iteration in this case is 1:10, and .y takes its values, so I just have to define a function which returns those values, which is as simple as ~.y. The first element of the output is the first element of that vector. This is another way to do it:

> accumulate(1:10, ~.x+1)
 [1]  1  2  3  4  5  6  7  8  9 10

To replicate the same output with .x I have to change the function a bit to ~.x+1 because otherwise it would always return 1. Remember that .x is the previous output of the function and it is initialized with 1 (the first value of the vector 1:10). Instead of initializing .x with the first value of the vector passed as the first argument of accumulate, you can set its first value explicitly using .init:

accumulate(2:10,  ~.y, .init = 1)
accumulate(1:9, ~.x+1, .init = 1)

Note that when using .init I have to change the vector to reproduce the same output as before. I hope you now understand how I generated the starting and ending points of the previous pentagon. Some hints in case you don't:

  • I generate a tibble with 5 rows, each of which defines a different segment of the pentagon
  • The first segment starts at (0,0)
  • The rotating angle is equal to 2*pi/5
  • The ending point of each segment becomes the starting point of the following one

The next step is to encapsulate this into a function to draw regular polygons with any given number of edges. I only have to generalize the number of steps and the rotating angle of accumulate:

polygon <- function(n) {
  tibble(
    x    = accumulate(1:(n-1), ~.x+cos(.y*2*pi/n), .init = 0),
    y    = accumulate(1:(n-1), ~.x+sin(.y*2*pi/n), .init = 0),
    xend = accumulate(2:n,     ~.x+cos(.y*2*pi/n), .init = cos(2*pi/n)),
    yend = accumulate(2:n,     ~.x+sin(.y*2*pi/n), .init = sin(2*pi/n)))
}

ggplot(polygon(6))+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

ggplot(polygon(7))+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

ggplot(polygon(8))+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

ggplot(polygon(9))+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

Now, let's place another segment in the middle of each edge, perpendicular to it and pointing towards the centre. To do it I mutate the data frame to add those segments using simple trigonometry: I just have to add pi/2 to the angle formed by the edge, obtained with the atan2 function:

polygon(5) -> df1
df1 %>% mutate(angle = atan2(yend-y, xend-x)+pi/2,
               x = 0.5*x+0.5*xend,
               y = 0.5*y+0.5*yend,
               xend = x+0.2*cos(angle),
               yend = y+0.2*sin(angle)) %>% 
  select(x, y, xend, yend) -> df2
df1 %>% bind_rows(df2) -> df
ggplot(df)+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

These new segments have a length of 0.2, smaller than the original edges of the pentagon. Now, let's connect the ending points of these perpendicular segments. It is easy using the mutate, lead and first functions. Another, smaller pentagon appears:

polygon(5) -> df1
df1 %>% mutate(angle = atan2(yend-y, xend-x)+pi/2,
               x = 0.5*x+0.5*xend,
               y = 0.5*y+0.5*yend,
               xend = x+0.2*cos(angle),
               yend = y+0.2*sin(angle)) %>% 
  select(x, y, xend, yend) -> df2
df2 %>% mutate(
  x=xend,
  y=yend,
  xend=lead(x, default=first(x)),
  yend=lead(y, default=first(y))) %>% 
  select(x, y, xend, yend) -> df3
df1 %>% bind_rows(df2) %>% bind_rows(df3) -> df
ggplot(df)+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

Since we are going to repeat these steps many times, I will write two functions: one to generate segments perpendicular to the edges, called mid_points, and another one to connect their ending points, called con_points. The next code creates both functions and uses them to add another level to our previous drawing:

mid_points <- function(d) {
  d %>% mutate(
    angle=atan2(yend-y, xend-x) + pi/2,
    x=0.5*x+0.5*xend,
    y=0.5*y+0.5*yend,
    xend=x+0.2*cos(angle),
    yend=y+0.2*sin(angle)) %>% 
    select(x, y, xend, yend)
}
con_points <- function(d) {
  d %>% mutate(
    x=xend,
    y=yend,
    xend=lead(x, default=first(x)),
    yend=lead(y, default=first(y))) %>% 
    select(x, y, xend, yend)
}
polygon(5) -> df1
df2 <- mid_points(df1)
df3 <- con_points(df2)
df4 <- mid_points(df3)
df5 <- con_points(df4)
df1 %>% 
  bind_rows(df2) %>% 
  bind_rows(df3) %>% 
  bind_rows(df4) %>% 
  bind_rows(df5) -> df
ggplot(df)+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

This pattern is called a Sutcliffe pentagon. In the previous step I did the iterations manually; the accumulate function can do them automatically. This code reproduces exactly the previous plot:

edges <- 5
niter <- 4
polygon(edges) -> df1
accumulate(.f = function(old, y) {
  if (y%%2!=0) mid_points(old) else con_points(old)
  },
  1:niter,
  .init=df1) %>% 
  bind_rows() -> df
ggplot(df)+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

Setting edges to 7 and niter to 6 in the first two rows of the previous code generates a different pattern, in this case a heptagonal one:

Let's start to play with the parameters to change the appearance of the drawings. What if we do not start the perpendicular segments from the midpoints of the edges? It's easy: we just need to add a parameter, which we will name p, to the mid_points function (p=0.5 means starting from the middle). This is our heptagon pattern when p is equal to 0.3:

mid_points <- function(d, p) {
  d %>% mutate(
    angle=atan2(yend-y, xend-x) + pi/2,
    x=p*x+(1-p)*xend,
    y=p*y+(1-p)*yend,
    xend=x+0.2*cos(angle),
    yend=y+0.2*sin(angle)) %>% 
    select(x, y, xend, yend)
}
edges <- 7
niter <- 6
polygon(edges) -> df1
accumulate(.f = function(old, y) {
  if (y%%2!=0) mid_points(old, 0.3) else con_points(old)
},
1:niter,
.init=df1) %>% 
  bind_rows() -> df
ggplot(df)+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

Another simple modification is to allow any angle between the edges and the next-iteration segments (perpendicular until now), so let's add another parameter, called a, to the mid_points function:

mid_points <- function(d, p, a) {
  d %>% mutate(
    angle=atan2(yend-y, xend-x) + a,
    x=p*x+(1-p)*xend,
    y=p*y+(1-p)*yend,
    xend=x+0.2*cos(angle),
    yend=y+0.2*sin(angle)) %>% 
    select(x, y, xend, yend)
}
edges <- 7
niter <- 18
polygon(edges) -> df1
accumulate(.f = function(old, y) {
  if (y%%2!=0) mid_points(old, 0.3, pi/5) else con_points(old)
},
1:niter,
.init=df1) %>% 
  bind_rows() -> df
ggplot(df)+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

That's nice! It looks like a shutter. Now it's time to change the length of the segments starting from the edges (the perpendicular ones in our first drawings). Right now all of them measure 0.2. I will take advantage of the y parameter of accumulate and apply a user-defined function to modify that length at each iteration. This example uses the identity function (FUN = function(x) x) to increase the length step by step:

mid_points <- function(d, p, a, i, FUN = function(x) x) {
  d %>% mutate(
    angle=atan2(yend-y, xend-x) + a,
    radius=FUN(i),
    x=p*x+(1-p)*xend,
    y=p*y+(1-p)*yend,
    xend=x+radius*cos(angle),
    yend=y+radius*sin(angle)) %>% 
    select(x, y, xend, yend)
}

edges <- 7
niter <- 18
polygon(edges) -> df1
accumulate(.f = function(old, y) {
  if (y%%2!=0) mid_points(old, 0.3, pi/5, y) else con_points(old)
},
1:niter,
.init=df1) %>% 
  bind_rows() -> df
ggplot(df)+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

What if we increase niter from 18 to 250?

edges <- 7
niter <- 250
step <- 2
polygon(edges) -> df1
accumulate(.f = function(old, y) {
  if (y%%step!=0) mid_points(old, 0.3, pi/5, y) else con_points(old)
},
1:niter,
.init=df1) %>% 
  bind_rows() -> df
ggplot(df)+
  geom_curve(aes(x=x, y=y, xend=xend, yend=yend),
               curvature = 0,
               color="black",
               alpha=0.1)+
  coord_equal()+
  theme(legend.position  = "none",
        panel.background = element_rect(fill="white"),
        plot.background  = element_rect(fill="white"),
        axis.ticks       = element_blank(),
        panel.grid       = element_blank(),
        axis.title       = element_blank(),
        axis.text        = element_blank())

Not bad, but we can do better. First of all, note that apart from adding transparency with the alpha parameter inside geom_curve, I changed the geometry of the plot from geom_segment to geom_curve. Setting curvature = 0, as I did, generates straight lines, so the result is the same as with geom_segment, but it gives us an additional degree of freedom for our plots. I also replaced theme_void with an explicit customization of some elements of the plot; concretely, I want to be able to change the background color. This is the definitive code, commented:

library(tidyverse)

# This function creates the segments of the original polygon
polygon <- function(n) {
  tibble(
    x    = accumulate(1:(n-1), ~.x+cos(.y*2*pi/n), .init = 0),
    y    = accumulate(1:(n-1), ~.x+sin(.y*2*pi/n), .init = 0),
    xend = accumulate(2:n,     ~.x+cos(.y*2*pi/n), .init = cos(2*pi/n)),
    yend = accumulate(2:n,     ~.x+sin(.y*2*pi/n), .init = sin(2*pi/n)))
}

# This function creates segments from some mid-point of the edges
mid_points <- function(d, p, a, i, FUN = ratio_f) {
  d %>% mutate(
    angle=atan2(yend-y, xend-x) + a,
    radius=FUN(i),
    x=p*x+(1-p)*xend,
    y=p*y+(1-p)*yend,
    xend=x+radius*cos(angle),
    yend=y+radius*sin(angle)) %>% 
    select(x, y, xend, yend)
}

# This function connect the ending points of mid-segments
con_points <- function(d) {
  d %>% mutate(
    x=xend,
    y=yend,
    xend=lead(x, default=first(x)),
    yend=lead(y, default=first(y))) %>% 
    select(x, y, xend, yend)
}

edges <- 3   # Number of edges of the original polygon
niter <- 250 # Number of iterations
pond <- 0.24  # Weight to calculate the starting point on each edge
step  <- 13  # Number of times to draw mid-segments before connecting ending points
alph  <- 0.25 # transparency of curves in geom_curve
angle <- 0.6 # angle of mid-segment with the edge
curv <- 0.1   # Curvature of curves
line_color <- "black" # Color of curves in geom_curve
back_color <- "white" # Background of the ggplot
ratio_f <- function(x) {sin(x)} # To calculate the length of mid-segments

# Generation on the fly of the dataset
accumulate(.f = function(old, y) {
  if (y%%step!=0) mid_points(old, pond, angle, y) else con_points(old)
  }, 1:niter,
.init=polygon(edges)) %>% bind_rows() -> df

# Plot
ggplot(df)+
  geom_curve(aes(x=x, y=y, xend=xend, yend=yend),
             curvature = curv,
             color=line_color,
             alpha=alph)+
  coord_equal()+
  theme(legend.position  = "none",
        panel.background = element_rect(fill=back_color),
        plot.background  = element_rect(fill=back_color),
        axis.ticks       = element_blank(),
        panel.grid       = element_blank(),
        axis.title       = element_blank(),
        axis.text        = element_blank())

The next table shows the parameters of each of the previous drawings (from left to right and top to bottom):

  edges niter pond step alph angle curv line_color back_color ratio_f
1     4   200 0.92    9 0.50  6.12  0.0      black      white function (x) { x }
2     5   150 0.72   13 0.35  2.96  0.0      black      white function (x) { sqrt(x) }
3    15   250 0.67    9 0.15  1.27  1.0      black      white function (x) { sin(x) }
4     9   150 0.89   14 0.35  3.23  0.0      black      white function (x) { sin(x) }
5     5   150 0.27   17 0.35  4.62  0.0      black      white function (x) { log(x + 1) }
6    14   100 0.87   14 0.15  0.57 -2.0      black      white function (x) { 1 - cos(x)^2 }
7     7   150 0.19    6 0.45  3.59  0.0      black      white function (x) { 1 - cos(x)^2 }
8     4   150 0.22   10 0.45  4.78  0.0      black      white function (x) { 1/x }
9     3   250 0.24   13 0.25  0.60  0.1      black      white function (x) { sin(x) }

You can also play with colors. By the way: this document will help you to choose them by their name. Some examples:

I will not unveil the parameters of the previous drawings. Maybe that will encourage you to try by yourself and find your own patterns. If you do, I would love to see them. I hope you enjoyed the read. The code is also available here.

Allusions to parents in autobiographies (or reading 118 books in a few seconds)

If I keep holding out, will the light shine through? (Come Back, Pearl Jam)

Imagine that you are writing the story of your life. You will almost surely make allusions to your parents, but will both of them have the same prominence in your biography, or will you spend more words on one of them? In that case, which one will be more prominent: your father or your mother?

This experiment analyses 118 autobiographies from Project Gutenberg and counts how many times authors make allusions to their fathers and mothers. This is what I've done (a rough sketch in R follows the list):

  • Download all works from the Gutenberg Project containing the word autobiography in their title (there are 118 in total).
  • Count how many times the bigrams my father and my mother appear in each text. This is what I call allusions to father and mother respectively.
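
A rough sketch of these two steps could look like the code below; the title filter and the final reshaping are my own choices and not necessarily how the published dataset was built:

library(tidyverse)
library(gutenbergr)
library(tidytext)

# Step 1: works whose title contains the word "autobiography"
autobiographies <- gutenberg_works(str_detect(str_to_lower(title), "autobiography"))

# Step 2: download each work and count the bigrams "my father" and "my mother"
allusions <- map_dfr(autobiographies$gutenberg_id, function(id) {
  gutenberg_download(id) %>%
    unnest_tokens(bigram, text, token = "ngrams", n = 2) %>%
    filter(bigram %in% c("my father", "my mother")) %>%
    count(bigram) %>%
    mutate(gutenberg_id = id)
}) %>%
  pivot_wider(names_from = bigram, values_from = n, values_fill = 0)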

The number of allusions that I measure is a lower bound of the exact amount, since the calculation has some limitations:

  • Maybe the author refers to them by their names.
  • After referring to them as my father or my mother, subsequent sentences may refer to them as he or she.

Anyway, I think these constraints do not introduce any bias into the calculation, since they should affect fathers and mothers equally. Here you can find the dataset I created after downloading all the autobiographies and measuring the number of allusions to each parent.

Some results:

  • 64% of autobiographies have more allusions to the father than the mother.
  • 24% of autobiographies have more allusions to the mother than the father.
  • 12% allude to them equally.

Most of the works make more allusions to the father than to the mother. As a visual proof of this fact, the next plot is a histogram of the difference between the number of allusions to the father and to the mother across the 118 works (# allusions to father - # allusions to mother):

The distribution is clearly right-skewed, which supports our previous results. Another way to see this fact is this last plot, which places each autobiography in a scatter plot where the X-axis is the number of allusions to the father and the Y-axis the number of allusions to the mother. It is interactive, so you can navigate through it to see the details of each point (work):

Most of the points (works) are below the diagonal, which means that they contain more allusions to the father than to the mother. Here you can find a full version of the previous plot.

I don't have any explanation for this fact, just some simple hypotheses:

  • Fathers and mothers influence their children differently.
  • Fathers star in more anecdotes than mothers.
  • This is the effect of patriarchy (72% of the authors were born in the 19th century)

Whatever the explanation is, this experiment shows how easy it is to do text mining with R. Special mention to purrr (to iterate efficiently over the set of work IDs), tidytext (to count the appearances of bigrams), highcharter (to create the interactive plot) and gutenbergr (to download the books). You can find the code here.

Mandalaxies

One cannot escape the feeling that these mathematical formulas have an independent existence and an intelligence of their own, that they are wiser than we are, wiser even than their discoverers (Heinrich Hertz)

I love spending my time doing mathematics: transforming formulas into drawings, experimenting with paradoxes, learning new techniques … and R is a perfect tool for doing it. Maths are for me the best way of escape and evasion from reality. At the very least, doing maths is a stylish way of wasting my time.

When I read something interesting, I often feel the desire to try it by myself. That's what happened to me when I discovered this fabulous book by Julien C. Sprott. I cannot stop making images with the formulas it contains. Today I present a mix of mandalas and galaxies that I call Mandalaxies:

This time, the system of equations that drives these drawings is this one:

x_{n+1} = 10a_1 + \left(x_n + a_2\sin(a_3 y_n + a_4)\right)\cos(\alpha) + y_n\sin(\alpha)
y_{n+1} = 10a_5 - \left(x_n + a_2\sin(a_3 y_n + a_4)\right)\sin(\alpha) + y_n\cos(\alpha)
where \alpha = 2\pi/(13 + 10a_6)

The system depends on six parameters (a1 to a6). Randomly searching for values between -1.2 and 1.3 for each of them, you can generate an infinite number of beautiful images:
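
If you want to see the system in action without Rcpp, here is a slow plain-R sketch that simply transcribes the equations above; the seed, the number of points and the plotting options are mine:

library(tidyverse)

set.seed(123)
a <- runif(6, -1.2, 1.3)            # a1 to a6 drawn at random, as described above
alpha <- 2 * pi / (13 + 10 * a[6])

n <- 50000                          # far fewer than the 4 million points of the real plots
x <- y <- numeric(n)
for (i in 2:n) {
  x[i] <- 10 * a[1] + (x[i - 1] + a[2] * sin(a[3] * y[i - 1] + a[4])) * cos(alpha) + y[i - 1] * sin(alpha)
  y[i] <- 10 * a[5] - (x[i - 1] + a[2] * sin(a[3] * y[i - 1] + a[4])) * sin(alpha) + y[i - 1] * cos(alpha)
}

ggplot(tibble(x, y), aes(x, y)) +
  geom_point(size = 0.1, alpha = 0.1) +
  coord_equal() +
  theme_void()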

Here you can find the code to create your own images. Once again, Rcpp is key to generating the set of points to plot quickly, since each of the previous plots contains 4 million points.

Rcpp, Camarón de la Isla and the Beauty of Maths

Desde que te estoy queriendo
yo no sé lo que me pasa
cualquier vereda que tomo
siempre me lleva a tu casa
(Y mira que mira y mira, Camarón de la Isla)

The verses that head this post are taken from a song by Camarón de la Isla and illustrate very well what a strange attractor is in real life. For non-Spanish speakers, a translation is: since I'm loving you, I don't know what happens to me; any path I take always ends at your house. If you don't know who Camarón de la Isla is, listen to his immense and immortal music.

I will not try to give a formal definition of a strange attractor here. Instead, I will try to describe them in my own words. A strange attractor can be defined with a system of equations (I don't know if all strange attractors can be defined like this). These equations determine the trajectory of some initial point along a number of steps. The location of the point at step i depends on its location at step i-1, so the trajectory is calculated sequentially. These are the equations that define the attractor of this experiment:

x_{n+1} = a_1 + a_2 x_n + a_3 y_n + a_4 |x_n|^{a_5} + a_6 |y_n|^{a_7}
y_{n+1} = a_8 + a_9 x_n + a_{10} y_n + a_{11} |x_n|^{a_{12}} + a_{13} |y_n|^{a_{14}}

As you can see there are two equations, one for each coordinate of the point (so it lives in a two-dimensional space). These equations cannot be solved analytically; in other words, you cannot know where the point will be after some iterations directly from its initial location. The name attractor comes from the fact that the trajectory of the point tends to be the same regardless of its initial location.
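
As a toy illustration of how such a trajectory is computed (the post does this in C++ via Rcpp because 10 million points are involved), a plain-R transcription of the two equations could look like this. Note that a random set of parameters will often diverge, which is exactly why the parameters need to be filtered, as discussed below:

library(tidyverse)

set.seed(1)
a <- runif(14, -4, 4)     # a1 to a14; most random sets diverge or look dull

n <- 200000               # a small fraction of the 10 million points of the post
x <- y <- numeric(n)
x[1] <- y[1] <- 0.5       # arbitrary non-zero starting point
for (i in 2:n) {
  x[i] <- a[1] + a[2] * x[i - 1] + a[3] * y[i - 1] +
          a[4] * abs(x[i - 1])^a[5] + a[6] * abs(y[i - 1])^a[7]
  y[i] <- a[8] + a[9] * x[i - 1] + a[10] * y[i - 1] +
          a[11] * abs(x[i - 1])^a[12] + a[13] * abs(y[i - 1])^a[14]
}

df <- tibble(x, y)
ggplot(df, aes(x, y)) +
  geom_point(size = 0.1, alpha = 0.1) +
  coord_equal() +
  theme_void()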

Here you have more examples: folds, waterfalls, sand, smoke … images are really appealing:

The code of this experiment is here. There you will find a set of parameters that produces a nice example image. Some comments:

  • Each point depends on the previous one, so iteration is mandatory; since each plot involves 10 million points, a very good option to do it efficiently is to use Rcpp, which allows you to iterate directly in C++.
  • Some points are quite isolated and far from the crowd. This is why I compute some breakpoints with quantile to remove the tails; otherwise, the plot may be reduced to a big point (see the small sketch after this list).
  • The key to obtaining a nice plot is to find a good set of parameters (a1 to a14). I have my own method, which involves the following steps: generate a random value between -4 and 4 for each parameter, simulate a mini attractor of only 2,000 points and keep it if it doesn't diverge (i.e. the points don't go to infinity), if x and y are not correlated at all and if its kurtosis is bigger than a certain threshold. If the mini attractor passes these filters, I keep its parameters and generate the big version with 10 million points.
  • I could have published this method together with the code, but I didn't. Why? Because this may push you to develop your own, since mine is not ideal. If you are interested in mine, let me know and I will give you more details. If you develop a good method yourself and don't mind sharing it with me, please let me know as well.
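
Continuing with the df data frame of the sketch above, the tail trimming mentioned in the second bullet is roughly this (the quantile levels are illustrative):

library(dplyr)

# Keep only the points inside the central quantile box, so that a few
# extreme values do not squash the whole plot into a dot
df_trimmed <- df %>%
  filter(between(x, quantile(x, 0.01, na.rm = TRUE), quantile(x, 0.99, na.rm = TRUE)),
         between(y, quantile(y, 0.01, na.rm = TRUE), quantile(y, 0.99, na.rm = TRUE)))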

This post is inspired by this beautiful book by Julien Clinton Sprott. I would love to see your images.

Spinning Pins

Condenado a estar toda la vida, preparando alguna despedida (Desarraigo, Extremoduro)

I live just a few minutes from the Spanish National Museum of Science and Technology (MUNCYT), where I go from time to time with my family. The museum is full of interesting artifacts, from a portrait of Albert Einstein made with thousands of small dice to a wind machine for experimenting with the Bernoulli effect. There is also a device which creates waves. It is formed by a number of sticks arranged vertically, each one ending with a ball (so it forms a sort of pin). When you push the start button, all the pins start to move, describing circles. Since each pair of consecutive pins is separated by the same angle, the resulting movement imitates a wave. In this experiment I created some other machines. The first one imitates exactly the one in the museum:

If you look carefully at just one pin, you will see that it describes a circle. Each one starts at a different angle and all of them move at the same speed. Although an individual pin is pretty boring, together they create a nice pattern. The museum's machine is formed by just one row of 20 pins; I created a 20×20 grid of pins to make the result more appealing.

Playing with the angle between pins you can create other nice patterns like these:

The code is incredibly simple and can be used as a starting point to create much more complicated patterns, changing the speed depending on time or on the location of the pins. Play with the colors or shapes of the points, the number of pins, or their separation and speed. The magic of movement is created with the gganimate package. You can find the code here.
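
For a quick feel of the idea without opening the repository, here is a minimal sketch, not the original code: a grid of pins, each describing a circle, with a phase that depends on its position. Grid size, radius, phase rule and number of frames are all illustrative:

library(tidyverse)
library(gganimate)

n      <- 20
frames <- seq(0, 2 * pi, length.out = 60)

pins <- expand_grid(i = 1:n, j = 1:n, t = frames) %>%
  mutate(phase = (i + j) * pi / 10,        # angle offset between neighboring pins
         x = i + 0.35 * cos(t + phase),
         y = j + 0.35 * sin(t + phase))

anim <- ggplot(pins, aes(x, y)) +
  geom_point(size = 2) +
  coord_equal() +
  theme_void() +
  transition_time(t)

animate(anim, fps = 20)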

Merry Christmas and Happy New Year. Thanks a lot for reading my posts.