Tag Archives: ggplot2

Colonizing Franky

And once again I set off slowly, feeling that I need nothing (Locura transitoria, Extremoduro)

One of my favorite sites on the Internet is Algorithmic Botany. It is always a source of inspiration for me. I recently discovered there the space colonization algorithm, specifically in this paper. Originally, the algorithm was developed to simulate leaf venation patterns as well as the branching structure of trees, and it works by simulating the competition for space between growing veins (or branches). Given an initial set of attractor points (3,000 points in my case) and an initial node (also a point, located randomly inside the picture), the algorithm performs the following steps iteratively (a rough R sketch of one iteration follows the list):

  • measure the distances between attractors and nodes
  • assign the closest node to each attractor
  • keep just those (node, attractor) pairs whose distance is between a minimum and a maximum
  • normalize and obtain the unit average vector for each node
  • create a new set of nodes using the previous vectors and a predefined segment length
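The following is a minimal sketch of one such iteration, written from the description above rather than taken from my actual script; the function name colonize_step and the parameters min_dist, max_dist and seg_length are just illustrative:

library(tidyverse)

colonize_step <- function(nodes, attractors, min_dist = 0.5, max_dist = 5, seg_length = 1) {
  # distance from every attractor (rows) to every node (columns)
  d <- as.matrix(dist(rbind(attractors, nodes)))
  d <- d[1:nrow(attractors), nrow(attractors) + 1:nrow(nodes), drop = FALSE]
  closest <- apply(d, 1, which.min)             # closest node for each attractor
  dmin    <- apply(d, 1, min)
  keep    <- dmin > min_dist & dmin < max_dist  # keep pairs within the distance band
  if (!any(keep)) return(NULL)
  tibble(node = closest[keep], ax = attractors$x[keep], ay = attractors$y[keep]) %>%
    mutate(vx = ax - nodes$x[node], vy = ay - nodes$y[node],
           norm = sqrt(vx^2 + vy^2), vx = vx/norm, vy = vy/norm) %>%  # unit vectors
    group_by(node) %>%
    summarise(vx = mean(vx), vy = mean(vy), .groups = "drop") %>%     # average per node
    mutate(norm = sqrt(vx^2 + vy^2),
           x = nodes$x[node] + seg_length * vx/norm,                  # new node at a fixed step length
           y = nodes$y[node] + seg_length * vy/norm) %>%
    select(x, y)
}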

Once again, I used this image of Boris Karloff as Frankenstein's monster to experiment with the algorithm. I have to say that I coded my own version of the algorithm, which may not be the canonical one, inspired by the previous paper as well as by this coding challenge tutorial. This is the resulting drawing using my preferred parametrization. As you will see in the code, I superimpose several layers of colonizations. I love the resulting images:

Play with the parameters to create your own images. If you improve my code, do not hesitate to submit a pull request on GitHub.

Clustering Frankenstein

To be able to sit down, I need a little tree in this wasteland (Desarraigo, Extremoduro)

From time to time I come back to experiment with this stunning photograph of Boris Karloff as Frankenstein's monster. I have done several experiments with it before: from decomposing it into Voronoi regions to drawing it as a single-line portrait using an algorithm that solves the travelling salesman problem. I also used this last technique to do a pencil portrait of the image. Today I will use a machine learning algorithm to reinterpret the monster once again. Specifically, I will use hierarchical clustering to do drawings like this one:

The idea is simple: once the photograph is loaded, the first step is to binarize it into a black and white image using the threshold function of the imager package. After that, a random sample of black points is taken. Then comes the clustering algorithm, which starts by measuring the Euclidean distance between each pair of points. A hierarchical clustering is then performed, so I can reproduce how points are gathered by walking through the resulting dendrogram, starting from the maximum number of clusters (each cluster is an individual point) and ending with the minimum one (a single cluster containing the whole sample). The next image shows an example of this process for a sample of 25 points. The left plot shows the population of points and the right one shows the way the points are connected once the dendrogram is analyzed following the steps described above:
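As a rough, self-contained sketch of this idea (my own simplification rather than the exact code linked below; the file name frankenstein.jpg and the rule of joining cluster centroids at each merge are assumptions):

library(imager)
library(tidyverse)

img <- load.image("frankenstein.jpg") %>% grayscale()
pts <- as.data.frame(img) %>%      # columns x, y, value
  filter(value < 0.5) %>%          # crude binarization: keep dark pixels
  sample_n(2000) %>%
  select(x, y)

hc <- hclust(dist(pts))            # hierarchical clustering on Euclidean distances

# walk the merge table: each row joins two clusters; draw a segment between
# the centroids of the two clusters being merged
members <- as.list(seq_len(nrow(pts)))
segments <- map_dfr(seq_len(nrow(hc$merge)), function(i) {
  get_ids <- function(k) if (k < 0) -k else members[[nrow(pts) + k]]
  a <- get_ids(hc$merge[i, 1])
  b <- get_ids(hc$merge[i, 2])
  members[[nrow(pts) + i]] <<- c(a, b)
  ca <- colMeans(pts[a, ]); cb <- colMeans(pts[b, ])
  tibble(x = ca[1], y = ca[2], xend = cb[1], yend = cb[2])
})

ggplot(segments) +
  geom_segment(aes(x = x, y = y, xend = xend, yend = yend), alpha = 0.5) +
  scale_y_reverse() +              # image coordinates grow downwards
  coord_equal() +
  theme_void()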

Applying this technique to a large number of points (between 2,000 and 5,000) results in very interesting drawings. To make the process faster, I used the map function from the purrr package. To render the graph I use the ggplot function with geom_curve. This geometry draws a curve between two points, named (x, y) and (xend, yend) respectively. Among others, there are two important parameters to control its shape: curvature (negative values produce left-hand curves, positive values produce right-hand curves, and zero produces a straight line) and angle (values less than 90 skew the curve towards the start point and values greater than 90 skew the curve towards the end point). Playing with these parameters, as well as with the sample size, you can generate a wide variety of drawings (note that only the segments appear here, since I removed the points at their endpoints):
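A tiny illustration of the curvature parameter, using an arbitrary segment (not taken from the experiment itself):

library(tidyverse)

df <- tibble(x = 0, y = 0, xend = 1, yend = 1)
ggplot(df) +
  geom_curve(aes(x = x, y = y, xend = xend, yend = yend), curvature = -0.5) + # left-hand curve
  geom_curve(aes(x = x, y = y, xend = xend, yend = yend), curvature =  0)   + # straight line
  geom_curve(aes(x = x, y = y, xend = xend, yend = yend), curvature =  0.5) + # right-hand curve
  coord_equal() +
  theme_void()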

You can find the code of this experiment here. If you do something interesting with it, please let me know. Thanks a lot for reading my post.

Rhombuses

For a lonely soul, you’re having such a nice time (Nothing in my way, Keane)

In my previous post, I created the P2 Penrose tessellation according to the instructions of this post. Now it's time to create the P3 tessellation following the same technique I already described. This is the image of the P3 tessellation:

Note that all tiles are rhombuses. I admit that I like the P3 more than the P2, though I do not really know why. What about you? Here you have the code to play with if you want.

Kites and Darts: the Penrose Tiling

Clinging to my ribs, her legs hang down (Godzilla, Leiva)

Penrose tilings are amazing. Apart from the inherent beauty of tessellations, they have two interesting properties: they are non-periodic (they lack any translational symmetry) and self-similar (any finite region appears an infinite number of times in the tiling). Both characteristics make them chaotic as well as ordered mathematical objects, which makes them really appealing.

In this experiment I create Penrose tilings; specifically, the P2 tiling, following this article by Simon Tatham, which provides a perfect explanation of how these tessellations can be constructed. The code is available here, and you can use it to create Penrose tilings like this one:

I will not explain in depth how to build the P2 tiling, since the article I mentioned before does it perfectly. Instead, I will give some highlights of the process together with a brief explanation of the code involved.

Everything has to do with triangles; specifically, with two types of triangles. To differentiate them I label their sides with numbers. The first triangle has labels 1, 2 and 3 and the other one has labels 1, 2 and 4. Two triangles of type 123 form a kite like this:

On the other hand, two triangles of type 124 form a dart like this:

Actually, kites and darts do not contain their inner segments, so both are four-sided polygons. Building a Penrose tiling is an iterative process that begins with 5 kites (i.e. 10 triangles of type 123) arranged like this:

You can start with many other patterns, but this one results in a round-shaped tiling and I like it. I build the tiling by subdividing triangles as Simon describes in his article. A triangle of type 123 is subdivided into three triangles: two of type 123 and one of type 124. The following image shows a 123 triangle (left) and the result after its division (right):

A triangle of type 124 is subdivided into two triangles: one of type 123 and one of type 124. The following image shows a 124 triangle (left) and the result after its division (right):

In each iteration, all triangles are subdivided according to their type. After 5 iterations, the resulting pattern looks like this:

To make calculations easier I arranged the data frame following a segment structure, in which the sides of the triangles are defined by two coordinates: (x, y) and (xend, yend). The downside is that I run into rounding problems after a few iterations: points that should be identical differ slightly because they come from different triangles. I fix this using hierarchical clustering, substituting points by their centroids after cutting the dendrogram at a very low threshold (a minimal sketch of this trick follows). Once this problem is solved I can remove the inner segments of all kites and darts, which are segments of type 3 or 4. Apart from removing them, I join the triangles to form four-sided polygons. All these tasks are done with the function Arrange_df (remember that the code is here). This is the result:
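The deduplication trick can be sketched as follows (my own illustrative re-creation; it assumes pts is a data frame of x and y coordinates and uses an arbitrary tolerance, so the actual Arrange_df function may differ):

library(tidyverse)

dedupe_points <- function(pts, tol = 1e-6) {
  hc  <- hclust(dist(pts))               # cluster nearly identical points
  grp <- cutree(hc, h = tol)             # cut the dendrogram at a very low threshold
  pts %>%
    mutate(grp = grp) %>%
    group_by(grp) %>%
    mutate(x = mean(x), y = mean(y)) %>%  # replace each point by its group centroid
    ungroup() %>%
    select(-grp)
}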

This pattern is quite similar to the previous one, but now the data frame is ready to be arranged as polygons using the function Create_Polygon. Finally, I calculate the area of each polygon with the Shoelace formula to create a column called area, which I use to fill the polygons with two nice colors.
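For reference, this is a small sketch of the Shoelace formula, assuming the polygon vertices x and y are given in order along the boundary (the helper name shoelace_area is just illustrative):

library(tidyverse)

shoelace_area <- function(x, y) {
  0.5 * abs(sum(x * lead(y, default = first(y)) - lead(x, default = first(x)) * y))
}

shoelace_area(c(0, 1, 1, 0), c(0, 0, 1, 1))  # a unit square has area 1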

I hope these explanations help you to understand and improve the code, and invite you to create your own Penrose tilings.

Drrrawing with purrr

The moon is a small well,
the flowers are worth nothing;
what is worth something are your arms
when they hold me at night
(Zorongo Gitano, Carmen Linares)

When I publish a post showing my drawings, I usually include some outputs, give some highlights about the techniques involved, and link to the R code I wrote to generate them. That's my typical generative-art post (here you have an example). I think my audience knows how to program in R and is curious enough to run and modify the code themselves to generate their own outputs. Today I will try to be more educational and explain step by step how you can obtain drawings like these:

There are two reasons for this decision:

  • It illustrates quite well my mental journey from a simple idea to what I think is an interesting enough experiment to publish.
  • I think that this experiment is a good example of the use of accumulate, a very useful function from the life-changing purrr package.

Here we go: there are many ways of drawing a pentagon in R. Below you will find a piece of code that does it using the accumulate function from the purrr package. I will use only two libraries for this experiment, ggplot2 and purrr, so I will just load the tidyverse (both libraries are part of it):

library(tidyverse)

pentagon <- tibble(
  x    = accumulate(1:4, ~.x+cos(.y*2*pi/5), .init = 0),
  y    = accumulate(1:4, ~.x+sin(.y*2*pi/5), .init = 0),
  xend = accumulate(2:5, ~.x+cos(.y*2*pi/5), .init = cos(2*pi/5)),
  yend = accumulate(2:5, ~.x+sin(.y*2*pi/5), .init = sin(2*pi/5)))

ggplot(pentagon)+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

The accumulate function applies some function sequentially a number of times, storing all the intermediate results. When I say sequentially I mean that the input of any step is the output of the previous one. The accumulate function uses internally two important arguments called .x and .y: my own way of understanding them is that .x is the accumulated output from the previous step and .y is the current value of the vector which controls the iteration. Let's see an example: imagine that I want to create a vector with the first 10 natural numbers. This is one option:

> accumulate(1:10, ~.y)
 [1]  1  2  3  4  5  6  7  8  9 10

The vector which controls the iteration in this case is 1:10 and .y takes its values, so I just have to define a function which returns those values, which is as simple as ~.y. The first element of the output is simply the first element of that vector. This is another way to do it:

> accumulate(1:10, ~.x+1)
 [1]  1  2  3  4  5  6  7  8  9 10

To replicate the same output with .x I have to change the function a bit to ~.x+1 because, if not, it will always return 1. Remember that .x is the previous output of the function and it is initialized with 1 (the first value of the vector 1:10). Instead of initializing .x with the first value of the vector passed as the first argument of accumulate, you can set its first value explicitly using .init:

accumulate(2:10,  ~.y, .init = 1)
accumulate(1:9, ~.x+1, .init = 1)

Note that when using .init I have to change the vector to reproduce the same output as before. I hope you now understand how I generated the starting and ending points of the previous pentagon. Some points to help you if not:

  • I generate a tibble with 5 rows, each of which defines a different segment of the pentagon
  • The first segment starts at (0,0)
  • The rotating angle is equal to 2*pi/5
  • The ending point of each segment becomes the starting point of the following one

The next step is to encapsulate this into a function to draw regular polygons with any given number of edges. I only have to generalize the number of steps and the rotating angle of accumulate:

polygon <- function(n) {
  tibble(
    x    = accumulate(1:(n-1), ~.x+cos(.y*2*pi/n), .init = 0),
    y    = accumulate(1:(n-1), ~.x+sin(.y*2*pi/n), .init = 0),
    xend = accumulate(2:n,     ~.x+cos(.y*2*pi/n), .init = cos(2*pi/n)),
    yend = accumulate(2:n,     ~.x+sin(.y*2*pi/n), .init = sin(2*pi/n)))
}

ggplot(polygon(6))+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

ggplot(polygon(7))+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

ggplot(polygon(8))+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

ggplot(polygon(9))+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

Now, let’s place another segment in the middle of each edge, perpendicular to it towards its centre. To do it I mutate de data frame to add those segments using simple trigonometry: I just have to add pi/2 to the angle wich forms the edge, obtained with atan2 function:

polygon(5) -> df1
df1 %>% mutate(angle = atan2(yend-y, xend-x)+pi/2,
               x = 0.5*x+0.5*xend,
               y = 0.5*y+0.5*yend,
               xend = x+0.2*cos(angle),
               yend = y+0.2*sin(angle)) %>% 
  select(x, y, xend, yend) -> df2
df1 %>% bind_rows(df2) -> df
ggplot(df)+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

These new segments have a length of 0.2, smaller than the original edges of the pentagon. Now, let's connect the ending points of these perpendicular segments. It is easy using the mutate, lead and first functions. Another, smaller pentagon appears:

polygon(5) -> df1
df1 %>% mutate(angle = atan2(yend-y, xend-x)+pi/2,
               x = 0.5*x+0.5*xend,
               y = 0.5*y+0.5*yend,
               xend = x+0.2*cos(angle),
               yend = y+0.2*sin(angle)) %>% 
  select(x, y, xend, yend) -> df2
df2 %>% mutate(
  x=xend,
  y=yend,
  xend=lead(x, default=first(x)),
  yend=lead(y, default=first(y))) %>% 
  select(x, y, xend, yend) -> df3
df1 %>% bind_rows(df2) %>% bind_rows(df3) -> df
ggplot(df)+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

Since we are repeating these steps many times, I will write two functions: one to generate the segments perpendicular to the edges, called mid_points, and another one to connect their ending points, called con_points. The next code creates both functions and uses them to add another level to our previous drawing:

mid_points <- function(d) {
  d %>% mutate(
    angle=atan2(yend-y, xend-x) + pi/2,
    x=0.5*x+0.5*xend,
    y=0.5*y+0.5*yend,
    xend=x+0.2*cos(angle),
    yend=y+0.2*sin(angle)) %>% 
    select(x, y, xend, yend)
}
con_points <- function(d) {
  d %>% mutate(
    x=xend,
    y=yend,
    xend=lead(x, default=first(x)),
    yend=lead(y, default=first(y))) %>% 
    select(x, y, xend, yend)
}
polygon(5) -> df1
df2 <- mid_points(df1)
df3 <- con_points(df2)
df4 <- mid_points(df3)
df5 <- con_points(df4)
df1 %>% 
  bind_rows(df2) %>% 
  bind_rows(df3) %>% 
  bind_rows(df4) %>% 
  bind_rows(df5) -> df
ggplot(df)+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

This pattern is called a Sutcliffe pentagon. In the previous step, I did the iterations manually. The accumulate function can help us do it automatically. This code reproduces exactly the previous plot:

edges <- 5
niter <- 4
polygon(edges) -> df1
accumulate(.f = function(old, y) {
  if (y%%2!=0) mid_points(old) else con_points(old)
  },
  1:niter,
  .init=df1) %>% 
  bind_rows() -> df
ggplot(df)+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

Substituting 7 for edges and 6 for niter in the first two rows of the previous code generates a different pattern, in this case a heptagonal one:

Let’s start to play with the parameters to change the appearance of the drawings. What if we do not start the perpendicular segments from the midpoints of the edges? It’s easy: we just need to add a parameter that will name p to the function mid_points (p=0.5 means starting from the middle). This is our heptagon pattern when p is equal to 0.3:

mid_points <- function(d, p) {
  d %>% mutate(
    angle=atan2(yend-y, xend-x) + pi/2,
    x=p*x+(1-p)*xend,
    y=p*y+(1-p)*yend,
    xend=x+0.2*cos(angle),
    yend=y+0.2*sin(angle)) %>% 
    select(x, y, xend, yend)
}
edges <- 7
niter <- 6
polygon(edges) -> df1
accumulate(.f = function(old, y) {
  if (y%%2==0) mid_points(old, 0.3) else con_points(old)
},
1:niter,
.init=df1) %>% 
  bind_rows() -> df
ggplot(df)+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

Another simple modification is to allow any angle between the edges and the next iteration's segments (perpendicular until now), so let's add another parameter, called a, to the mid_points function:

mid_points <- function(d, p, a) {
  d %>% mutate(
    angle=atan2(yend-y, xend-x) + a,
    x=p*x+(1-p)*xend,
    y=p*y+(1-p)*yend,
    xend=x+0.2*cos(angle),
    yend=y+0.2*sin(angle)) %>% 
    select(x, y, xend, yend)
}
edges <- 7
niter <- 18
polygon(edges) -> df1
accumulate(.f = function(old, y) {
  if (y%%2!=0) mid_points(old, 0.3, pi/5) else con_points(old)
},
1:niter,
.init=df1) %>% 
  bind_rows() -> df
ggplot(df)+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

That’s nice! It looks like a shutter. Now it’s time to change the longitude of the segments starting from the edges (those perpendicular in our first drawings). Now all them measure 0.2. I will take advantage of the parameter y of accumulate and apply a user defined function to modify that longitude each iteration. This example uses the identity function (FUN = function(x) x) to increase longitude step by step:

mid_points <- function(d, p, a, i, FUN = function(x) x) {
  d %>% mutate(
    angle=atan2(yend-y, xend-x) + a,
    radius=FUN(i),
    x=p*x+(1-p)*xend,
    y=p*y+(1-p)*yend,
    xend=x+radius*cos(angle),
    yend=y+radius*sin(angle)) %>% 
    select(x, y, xend, yend)
}

edges <- 7
niter <- 18
polygon(edges) -> df1
accumulate(.f = function(old, y) {
  if (y%%2!=0) mid_points(old, 0.3, pi/5, y) else con_points(old)
},
1:niter,
.init=df1) %>% 
  bind_rows() -> df
ggplot(df)+
  geom_segment(aes(x=x, y=y, xend=xend, yend=yend))+
  coord_equal()+
  theme_void()

What if we increase niter from 18 to 250?

edges <- 7
niter <- 250
step <- 2
polygon(edges) -> df1
accumulate(.f = function(old, y) {
  if (y%%step!=0) mid_points(old, 0.3, pi/5, y) else con_points(old)
},
1:niter,
.init=df1) %>% 
  bind_rows() -> df
ggplot(df)+
  geom_curve(aes(x=x, y=y, xend=xend, yend=yend),
               curvature = 0,
               color="black",
               alpha=0.1)+
  coord_equal()+
  theme(legend.position  = "none",
        panel.background = element_rect(fill="white"),
        plot.background  = element_rect(fill="white"),
        axis.ticks       = element_blank(),
        panel.grid       = element_blank(),
        axis.title       = element_blank(),
        axis.text        = element_blank())

Not bad, but we can do better. First of all, note that apart from adding transparency with the alpha parameter inside the geom_curve call, I changed the geometry of the plot from geom_segment to geom_curve. Setting curvature = 0 as I did generates straight lines, so the result is the same as with geom_segment, but it gives us an additional degree of freedom for our plots. I also replaced theme_void with an explicit customization of some elements of the plot; specifically, I want to be able to change the background color. This is the definitive code, explained:

library(tidyverse)

# This function creates the segments of the original polygon
polygon <- function(n) {
  tibble(
    x    = accumulate(1:(n-1), ~.x+cos(.y*2*pi/n), .init = 0),
    y    = accumulate(1:(n-1), ~.x+sin(.y*2*pi/n), .init = 0),
    xend = accumulate(2:n,     ~.x+cos(.y*2*pi/n), .init = cos(2*pi/n)),
    yend = accumulate(2:n,     ~.x+sin(.y*2*pi/n), .init = sin(2*pi/n)))
}

# This function creates segments from some mid-point of the edges
mid_points <- function(d, p, a, i, FUN = ratio_f) {
  d %>% mutate(
    angle=atan2(yend-y, xend-x) + a,
    radius=FUN(i),
    x=p*x+(1-p)*xend,
    y=p*y+(1-p)*yend,
    xend=x+radius*cos(angle),
    yend=y+radius*sin(angle)) %>% 
    select(x, y, xend, yend)
}

# This function connects the ending points of mid-segments
con_points <- function(d) {
  d %>% mutate(
    x=xend,
    y=yend,
    xend=lead(x, default=first(x)),
    yend=lead(y, default=first(y))) %>% 
    select(x, y, xend, yend)
}

edges <- 3    # Number of edges of the original polygon
niter <- 250  # Number of iterations
pond  <- 0.24 # Weight to locate the point on each edge from which mid-segments start
step  <- 13   # Number of times to draw mid-segments before connecting ending points
alph  <- 0.25 # Transparency of curves in geom_curve
angle <- 0.6  # Angle of mid-segments with the edges
curv  <- 0.1  # Curvature of curves
line_color <- "black" # Color of curves in geom_curve
back_color <- "white" # Background of the ggplot
ratio_f <- function(x) {sin(x)} # To calculate the length of mid-segments

# Generation on the fly of the dataset
accumulate(.f = function(old, y) {
  if (y%%step!=0) mid_points(old, pond, angle, y) else con_points(old)
  }, 1:niter,
.init=polygon(edges)) %>% bind_rows() -> df

# Plot
ggplot(df)+
  geom_curve(aes(x=x, y=y, xend=xend, yend=yend),
             curvature = curv,
             color=line_color,
             alpha=alph)+
  coord_equal()+
  theme(legend.position  = "none",
        panel.background = element_rect(fill=back_color),
        plot.background  = element_rect(fill=back_color),
        axis.ticks       = element_blank(),
        panel.grid       = element_blank(),
        axis.title       = element_blank(),
        axis.text        = element_blank())

The next table shows the parameters of each of the previous drawings (from left to right and top to bottom):

#  edges  niter  pond  step  alph  angle  curv  line_color  back_color  ratio_f
1      4    200  0.92     9  0.50   6.12   0.0  black       white       function (x) { x }
2      5    150  0.72    13  0.35   2.96   0.0  black       white       function (x) { sqrt(x) }
3     15    250  0.67     9  0.15   1.27   1.0  black       white       function (x) { sin(x) }
4      9    150  0.89    14  0.35   3.23   0.0  black       white       function (x) { sin(x) }
5      5    150  0.27    17  0.35   4.62   0.0  black       white       function (x) { log(x + 1) }
6     14    100  0.87    14  0.15   0.57  -2.0  black       white       function (x) { 1 - cos(x)^2 }
7      7    150  0.19     6  0.45   3.59   0.0  black       white       function (x) { 1 - cos(x)^2 }
8      4    150  0.22    10  0.45   4.78   0.0  black       white       function (x) { 1/x }
9      3    250  0.24    13  0.25   0.60   0.1  black       white       function (x) { sin(x) }

You can also play with colors. By the way: this document will help you to choose them by their name. Some examples:

I will not unveil the parameters of the previous drawings; maybe that will encourage you to try by yourself and find your own patterns. If you do, I would love to see them. I hope you enjoyed this reading. The code is also available here.

Mandalaxies

One cannot escape the feeling that these mathematical formulas have an independent existence and an intelligence of their own, that they are wiser than we are, wiser even than their discoverers (Heinrich Hertz)

I love spending my time doing mathematics: transforming formulas into drawings, experimenting with paradoxes, learning new techniques … and R is a perfect tool for doing it. Maths are for me the best way of escaping from reality. At the very least, doing maths is a stylish way of wasting my time.

When I read something interesting, I often feel the desire to try it by myself. That's what happened to me when I discovered this fabulous book by Julien C. Sprott. I cannot stop making images with the formulas it contains. Today I present a mix of mandalas and galaxies that I call Mandalaxies:

This time, the equation that drives these drawings is this one:

x_{n+1} = 10a_1 + \left(x_n + a_2\sin(a_3 y_n + a_4)\right)\cos(\alpha) + y_n\sin(\alpha)\\
y_{n+1} = 10a_5 - \left(x_n + a_2\sin(a_3 y_n + a_4)\right)\sin(\alpha) + y_n\cos(\alpha)

where \alpha = 2\pi/(13+10a_6)

The equation depends on six parameters (from a1 to a6). Searching randomly for values between -1.2 and 1.3 for each of them, you can generate an infinite number of beautiful images:

Here you can find the code to make your own images. Once again, Rcpp is key to generating the set of points quickly, since each of the previous plots contains 4 million points.
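For illustration, this is a rough pure-R sketch of the iteration (much slower than the Rcpp version linked above, so it uses far fewer points; the function name mandalaxy and the sample size are arbitrary):

library(tidyverse)

mandalaxy <- function(a, n = 200000, x0 = 0, y0 = 0) {
  alpha <- 2 * pi / (13 + 10 * a[6])
  x <- numeric(n); y <- numeric(n)
  x[1] <- x0; y[1] <- y0
  for (i in 2:n) {
    s <- x[i - 1] + a[2] * sin(a[3] * y[i - 1] + a[4])
    x[i] <- 10 * a[1] + s * cos(alpha) + y[i - 1] * sin(alpha)
    y[i] <- 10 * a[5] - s * sin(alpha) + y[i - 1] * cos(alpha)
  }
  tibble(x, y)
}

set.seed(1234)
df <- mandalaxy(runif(6, -1.2, 1.3))   # six random parameters in [-1.2, 1.3]
ggplot(df) +
  geom_point(aes(x, y), shape = 46, alpha = 0.1) +
  coord_equal() +
  theme_void()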

Tweetable Mathematical Art With R

Without that weight there is no gravity anymore
Without gravity there is no hook anymore
(Mira cómo vuelo, Miss Caffeina)

I love messing around with R to generate mathematical patterns. It always surprises me and gives me a lot of satisfaction. I also learn a lot of things doing it: not only about R, but also about mathematics. It is one of my favourite hobbies. Some time ago, I published this post showing some drawings, each of them generated with less than 280 characters of code so they could be shared on Twitter. That post ended up on Hacker News, which caused an incredible peak in visits to my blog. Some of the comments in the Hacker News entry are very interesting.

This summer I delved into this concept of Tweetable Art, publishing several drawings together with the R code to generate them. In this post I will show some of them.

Vertiginous Spiral

I came up with this image inspired by this nice pattern. It is a turtle-graphics-inspired pattern, but instead of drawing lines I use geom_polygon to colour the resulting image in black and white:

Code:

library(tidyverse)
df <- data.frame(x=0, y=0)
for (i in 2:500){
  df[i,1] <- df[i-1,1]+((0.98)^i)*cos(i)
  df[i,2] <- df[i-1,2]+((0.98)^i)*sin(i)   
}
ggplot(df, aes(x,y)) + 
  geom_polygon()+
  theme_void()

Slight modifications of the code can generate appealing patterns like this:

Marine Creature

A combination of sines and cosines. It reminds me of a jellyfish:

Code:

library(tidyverse)
seq(from=-10, to=10, by = 0.05) %>%
  expand.grid(x=., y=.) %>%
  ggplot(aes(x=(x^2+pi*cos(y)^2), y=(y+pi*sin(x)))) +
  geom_point(alpha=.1, shape=20, size=1, color="black")+
  theme_void()+coord_fixed()

Summoning Cthulhu

The name is inspired by an answer from Mara Averick to this tweet. It is a modification of the marine creature, in polar coordinates:

Code:

library(tidyverse)
seq(-3,3,by=.01) %>%
  expand.grid(x=., y=.) %>%
  ggplot(aes(x=(x^3-sin(y^2)), y=(y^3-cos(x^2)))) +
  geom_point(alpha=.1, shape=20, size=0, color="white")+
  theme_void()+
  coord_fixed()+
  theme(panel.background = element_rect(fill="black"))+
  coord_polar()

Naive Sunflower

Sunflowers arrange their seeds according to a mathematical pattern called phyllotaxis, which inspires this image. If you want to create your own flowers, you can do this DataCamp project. It's free and will introduce you to the amazing world of ggplot2, my favourite package for creating images:

Code:

library(ggplot2)
a=pi*(3-sqrt(5))
n=500
ggplot(data.frame(r=sqrt(1:n),t=(1:n)*a),
       aes(x=r*cos(t),y=r*sin(t)))+
  geom_point(aes(x=0,y=0),
             size=190,
             colour="violetred")+
  geom_point(aes(size=(n-r)),
             shape=21,fill="gold",
             colour="gray90")+
  theme_void()+theme(legend.position="none")

Silk Knitting

It is inspired by this other pattern. A lot of almost transparent white points undulate according to sines and cosines on a dark coloured background:

Code:

library(tidyverse)
seq(-10, 10, by = .05) %>%
  expand.grid(x=., y=.) %>%
  ggplot(aes(x=(x+sin(y)), y=(y+cos(x)))) +
  geom_point(alpha=.1, shape=20, size=0, color="white")+
  theme_void()+
  coord_fixed()+
  theme(panel.background = element_rect(fill="violetred4"))

Try to modify them and generate your own patterns: it is a very fun way to learn R.

Note: in order to make them more readable, some of the pieces of code above may have more than 280 characters, but by removing unnecessary characters (blanks or carriage returns) you can shorten them enough to make them tweetable.

How Do We Draw a Line?

She dreams in colour, she dreams in red, can’t find a better man (Better Man, Pearl Jam)

Today I bring another experiment based on the Quick, Draw! data from Google, one of my most fortunate discoveries of recent times. Quick, Draw! is a web game developed by Google, playable on a computer, tablet or mobile phone, in which you are asked to draw something (for example, a bird). You then have just 20 seconds to do it. You win if a machine, trained with a neural network, deduces what you are drawing. The best way to understand how it works is to play it here. Google published data from about 50 million drawings across 345 categories, contributed by players of the game from all over the world. The datasets are in ndjson format (newline-delimited JSON). In my previous post I analyzed one of these datasets and showed a way to parse and represent the drawings with ggplot.

On this occasion I analyze the simplest drawing that Google can ask you for: a line. The dataset, called lines.ndjson, can be found here and contains more than 143,000 lines drawn by people from about 170 countries. Most of these drawings come from the United States (45.4%), the United Kingdom (7.5%), Canada (3.6%), Germany (3.5%) and the Russian Federation (2.3%).

Let’s try to understand how humans draw lines. Concretely, in which direction do we draw them: horizontally? toward right o left? vertically? toward up or down? This analysis is inspired in two great articles I read recently:

There are some technical details around this experiment that I would like to highlight (a rough sketch of the whole pipeline follows the list):

  • I parse the dataset using the fromJSON function from the rjson package.
  • I use the purrr package to apply a linear regression to the points defining the line of each drawing.
  • I easily convert the summary of the linear regression into a data frame using the tidy function from the broom package.
  • I use the slope of the regression to obtain the angle which describes the line (depending on where the line starts, I add pi to the arctangent of the slope).
  • I represent the frequency of angles using polar coordinates, dividing the circle into sections of 30 degrees in the following way: 345°-15°, 15°-45°, 45°-75°, 75°-105°, …, 315°-345°; so, for example, horizontal lines drawn from left to right fall into the 345°-15° category.
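This is a rough sketch of that pipeline, written from the description above rather than copied from the linked code; the sample size, the right-to-left test and the binning arithmetic are my own simplifications:

library(tidyverse)
library(rjson)
library(broom)

# read and parse a sample of drawings from the ndjson file
drawings <- map(readLines("lines.ndjson", n = 1000), fromJSON)

angles <- map_dbl(drawings, function(d) {
  stroke <- d$drawing[[1]]                            # first stroke: a list of x and y vectors
  df <- tibble(x = stroke[[1]], y = -stroke[[2]])     # flip y: screen coordinates grow downwards
  slope <- tidy(lm(y ~ x, data = df))$estimate[2]
  angle <- atan(slope)
  if (last(df$x) < first(df$x)) angle <- angle + pi   # the line was drawn right to left
  angle %% (2 * pi)
})

# bin angles into 30-degree sectors centred on 0, 30, 60, ... degrees
sector <- floor(((angles * 180 / pi + 15) %% 360) / 30) * 30
table(sector)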

This is how we draw lines, analyzing the entire dataset without making any distinction by country:

The fact seems clear: an average human playing the Quick, Draw! game draws a line horizontally from left to right with a probability of 59%. I have to admit that I expected a majority of horizontal left-to-right lines, but not as crushingly as the plot shows. Maybe my prior is far from reality because I am left-handed and would draw it another way. Remember as well that this average human will most likely come from the United States.

Are there differences by country? Yes, and they are very interesting. I removed all countries with fewer than 150 drawings. Taking this into account, these are the four countries where the most people draw vertical bottom-up lines:

And these are the ones where the most people draw horizontal right-to-left lines:

We’ve seen that on average, 59% of lines are drawn from left to right. This figure reaches more than 75% in the following countries:

And where do people draw more oblique lines? Here:

Surprisingly, a very small number of lines are drawn downwards, which I find quite intriguing.

Some thoughts (let me know yours):

  • Humans prefer drawing horizontal lines from left to right everywhere
  • When drawing vertical lines, we clearly prefer a bottom-up movement rather than the opposite; maybe the device configuration or the layout of the application motivates this behaviour.
  • Arabic and Hebrew are written from right to left: this fact seems to have a significant influence on the way people draw lines.

You can find the code of this experiment here.

Exploring The Quick, Draw! Dataset With R: The Mona Lisa

All that noise, and all that sound, all those places I have found (Speed of Sound, Coldplay)

Some days ago, my friend Jorge showed me one of the coolest datasets I've ever seen: the Google Quick, Draw! dataset. On its GitHub website you can see a detailed description of the data. Briefly, it contains around 50 million drawings made by people around the world, in .ndjson format. In this experiment, I used the simplified version of the drawings, where strokes are simplified and resampled with a 1-pixel spacing. Drawings are also aligned to the top-left corner and scaled to have a maximum value of 255. All these things make the data easier to manage and to represent in a plot.

Since .ndjson files may be very large, I used the LaF package to access random lines of the file rather than reading it completely. I wrote a script to explore The Mona Lisa.ndjson file, which contains more than 120,000 drawings that Google's TensorFlow engine recognized as being the Mona Lisa. It is quite fun to see them. With this script you can:

  • Reproduce a random single drawing
  • Create a 9×9 mosaic of random drawings
  • Create an animation simulating the way the drawing was created

I use the ggplot2 package to render the drawings and David Robinson's gganimate package to create the animations.
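As a minimal sketch of the single-drawing case (assuming the file The Mona Lisa.ndjson is in the working directory and that each record stores its strokes as lists of x and y vectors, as in the simplified Quick, Draw! format):

library(LaF)
library(rjson)
library(tidyverse)

line <- sample_lines("The Mona Lisa.ndjson", n = 1)     # one random line of the file
drawing <- fromJSON(line)

strokes <- imap_dfr(drawing$drawing, function(s, i) {
  tibble(x = s[[1]], y = s[[2]], stroke = i)
})

ggplot(strokes, aes(x = x, y = -y, group = stroke)) +   # flip y: image coordinates grow downwards
  geom_path() +
  coord_equal() +
  theme_void()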

This is an example of a single drawing:

This is an example of a 3×3 mosaic:

This is an example of animation:

If you want to try by yourself, you can find the code here.

Note: to make gganimate work, I downloaded the portable version and pointed to it with the Sys.setenv command, as explained here.

Coloring Sudokus

Someday you will find me
caught beneath the landslide
(Champagne Supernova, Oasis)

I recently read a book called Snowflake Seashell Star: Colouring Adventures in Numberland by Alex Bellos and Edmund Harris, which is full of mathematical patterns to be coloured. All the images are truly appealing and attract anyone who looks at them, regardless of their age, gender, education or political orientation. This book demonstrates how maths can be an astonishing way to reach beauty.

One of my favourite patterns is the tridoku, a sophisticated colored version of the sudoku. Coloring a sudoku is simple: once it is solved, it is enough to assign a color to each number (from 1 to 9). If you superimpose three colored sudokus with no cells at the same position sharing the same color, again using nine colors, the resulting image is a tridoku:

There is something attractive in a tridoku due to the balance of colors, but they also seem quite messy: they are charmingly unbalanced. I wrote a script to generalize the concept to n-dokus. The idea is the same: superimpose n sudokus without cells sharing color and position (I call them disjoint sudokus), using just nine different colors. I didn't prove it, but I think the maximum number of sudokus that can be superimposed with these constraints is 9. This is a complete series from 1-doku to 9-doku (click on any image to enlarge):

I am a big fan of the colourlovers package. These tridokus are colored with some of my favourite palettes from there:

Just two technical things to highlight (a minimal sketch of the second one follows the list):

  • There is a package called sudoku that generates sudokus (of course!). I use it to obtain the first solved sudoku, which forms the base.
  • Subsequent sudokus are obtained from this one by doing two operations: first interchanging groups of columns (there are three groups: columns 1 to 3, 4 to 6 and 7 to 9), and then interchanging columns within each group.
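This is a small sketch of that permutation step, assuming base is an already solved 9x9 sudoku matrix; checking that the result is disjoint from the previously generated sudokus would be done separately:

permute_sudoku <- function(base) {
  groups <- sample(0:2)                                            # shuffle the three groups of columns
  cols <- unlist(lapply(groups, function(g) 3 * g + sample(1:3)))  # shuffle columns within each group
  base[, cols]                                                     # still a valid solved sudoku
}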

You can find the code here: make your own colored n-dokus!