# Sunflowers for COLOURlovers

As for walking, I always walked on top of the clouds (Del tiempo perdido, Robe)

If colours matter to you, you may already know COLOURlovers. As their website puts it, COLOURlovers is a creative community where people from around the world create and share colors, palettes and patterns, discuss the latest trends and explore colorful articles… All in the spirit of love.

There is an R package called colourlovers which provides access to the COLOURlovers API and makes it very easy to choose nice colours for your graphics. I used the clpalettes function to retrieve the top palettes of the website. Their names are pretty suggestive as well: Giant Goldfish, Thought Provoking, Adrift in Dreams, let them eat cake… Inspired by this post, I have built a Shiny app to create colored flowers using those palettes. Seeds are arranged according to the golden angle. One example:

Some others:

You can play with the app here.
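Before diving into the app, the golden-angle arrangement it relies on can be sketched in a few lines of base R (the number of seeds and the point styling here are arbitrary choices of mine):

```r
# Phyllotaxis sketch: seed k sits at radius sqrt(k) and at angle k times the
# golden angle, pi*(3-sqrt(5)) radians (about 137.5 degrees)
n   <- 400
ang <- pi * (3 - sqrt(5))
t   <- (1:n) * ang
x   <- sqrt(1:n) * cos(t)
y   <- sqrt(1:n) * sin(t)
plot(x, y, pch = 16, asp = 1, axes = FALSE, xlab = "", ylab = "")
```

The sqrt radius keeps the seed density roughly constant, which is why the result looks like a sunflower head rather than a spiral arm.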

If you want to grow your own sunflowers, here is the code. This is the `ui.R` file:

```r
library(colourlovers)
library(rlist)
top=clpalettes('top')
sapply(1:length(top), function(x) list.extract(top, x)$title)->titles

fluidPage(
titlePanel("Sunflowers for COLOURlovers"),
fluidRow(
column(3,
wellPanel(
selectInput("pal", label = "Palette:", choices = titles),
sliderInput("nob", label = "Number of points:", min = 200, max = 500, value = 400, step = 50)
)
),
mainPanel(
plotOutput("Flower")
)
)
)
```

And this is the `server.R` one:

```r
library(shiny)
library(ggplot2)
library(colourlovers)
library(rlist)
library(dplyr)

top=clpalettes('top')
sapply(1:length(top), function(x) list.extract(top, x)$title)->titles

CreatePlot = function (ang=pi*(3-sqrt(5)), nob=150, siz=15, sha=21, pal="LoversInJapan") {

list.extract(top, which(titles==pal))$colors %>%
unlist %>%
as.vector() %>%
paste0("#", .) -> all_colors

colors=data.frame(hex=all_colors, darkness=colSums(col2rgb(all_colors)))
colors %>% arrange(-darkness)->colors

background=colors[1,"hex"] %>% as.character

colors %>% filter(hex!=background) %>% .[,1] %>% as.vector()->colors

ggplot(data.frame(r=sqrt(1:nob), t=(1:nob)*ang*pi/180), aes(x=r*cos(t), y=r*sin(t)))+
geom_point(colour=sample(colors, nob, replace=TRUE, prob=exp(1:length(colors))), aes(size=(nob-r)), shape=16)+
scale_x_continuous(expand=c(0,0), limits=c(-sqrt(nob)*1.4, sqrt(nob)*1.4))+
scale_y_continuous(expand=c(0,0), limits=c(-sqrt(nob)*1.4, sqrt(nob)*1.4))+
theme(legend.position="none",
panel.background = element_rect(fill=background),
panel.grid=element_blank(),
axis.ticks=element_blank(),
axis.title=element_blank(),
axis.text=element_blank())}

function(input, output) {
output$Flower=renderPlot({
CreatePlot(ang=180*(3-sqrt(5)), nob=input$nob, pal=input$pal)
}, height = 550, width = 550 )}
```

# Genetic Music: From Schoenberg to Bach

Bach, the epitome of a musician who strove all life long and finally acquired the ‘Habit of Perfection’, was a thoroughly imperfect human being (John Eliot Gardiner, Bach: Music in the Castle of Heaven)

Sometimes I dream awake and imagine I am a famous musician. I fantasize about being Paco de Lucía playing Mi niño Curro alone on the stage, Thom Yorke singing Fake plastic trees at Glastonbury or Noel Gallagher singing Don’t look back in anger for a devoted crowd.

My parents gave me the opportunity to learn music, and it is one of the best gifts I have ever received. I played the cello intensively until I had children, but I still have enough skill to play some pieces. One of them is the Prelude from Bach's Cello Suite No. 1. It is very close to the limit of my abilities, but I love it. It is timeless, thrilling, provocative and elegant: an absolute masterpiece. I often imagine myself playing it as well as my admired Yo-Yo Ma does.

The aim of this experiment is to reproduce the first 4 beats of the prelude using a genetic algorithm. First of all, let’s listen to our goal melody, created with the tuneR package (sorry for the sound, Mr. Bach):

The frequency range of a cello goes from 65.41 Hz to 987.77 Hz. Using the basic formula for the frequency of notes, this means a cello can produce 48 different notes. I generated the following codification for them:
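The codification below follows from the equal-temperament formula, where each note sits n semitones away from A4 (440 Hz); a quick check in R:

```r
# Equal-temperament frequency of the note n semitones away from A4 (440 Hz)
freq <- function(n) 440 * 2^(n / 12)
# C2 is 33 semitones below A4 and B5 is 14 above: 48 notes in total
round(freq(-33), 2)   # 65.41
round(freq(14), 2)    # 987.77
length(-33:14)        # 48
```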

| frequency (Hz) | note | code |
|---:|---|---|
| 65.41 | C2 | a |
| 69.30 | C#2/Db2 | b |
| 73.42 | D2 | c |
| 77.78 | D#2/Eb2 | d |
| 82.41 | E2 | e |
| 87.31 | F2 | f |
| 92.50 | F#2/Gb2 | g |
| 98.00 | G2 | h |
| 103.83 | G#2/Ab2 | i |
| 110.00 | A2 | j |
| 116.54 | A#2/Bb2 | k |
| 123.47 | B2 | l |
| 130.81 | C3 | m |
| 138.59 | C#3/Db3 | n |
| 146.83 | D3 | o |
| 155.56 | D#3/Eb3 | p |
| 164.81 | E3 | q |
| 174.61 | F3 | r |
| 185.00 | F#3/Gb3 | s |
| 196.00 | G3 | t |
| 207.65 | G#3/Ab3 | u |
| 220.00 | A3 | v |
| 233.08 | A#3/Bb3 | w |
| 246.94 | B3 | x |
| 261.63 | C4 | y |
| 277.18 | C#4/Db4 | z |
| 293.66 | D4 | A |
| 311.13 | D#4/Eb4 | B |
| 329.63 | E4 | C |
| 349.23 | F4 | D |
| 369.99 | F#4/Gb4 | E |
| 392.00 | G4 | F |
| 415.30 | G#4/Ab4 | G |
| 440.00 | A4 | H |
| 466.16 | A#4/Bb4 | I |
| 493.88 | B4 | J |
| 523.25 | C5 | K |
| 554.37 | C#5/Db5 | L |
| 587.33 | D5 | M |
| 622.25 | D#5/Eb5 | N |
| 659.26 | E5 | O |
| 698.46 | F5 | P |
| 739.99 | F#5/Gb5 | Q |
| 783.99 | G5 | R |
| 830.61 | G#5/Ab5 | S |
| 880.00 | A5 | T |
| 932.33 | A#5/Bb5 | U |
| 987.77 | B5 | V |

So our goal melody is codified like this:

tAJHJAJAtAJHJAJAtCKJKCKCtCKJKCKCtEKJKEKEtEKJKEKEtFJHJFJFtFJHJFJF

I start with a population of 500 random melodies, all of them 64 notes long, the same length as the goal melody. Given a melody, the algorithm compares it with the goal melody to calculate its fitness, with the following formula:

$fitness = 2^{\text{number of correct notes}}$

For example, a melody with 5 correct notes has a fitness of 32, where correct means the right note in the right place. After measuring the fitness of all melodies, I select 250 couples of individuals depending on their fitness (the higher the fitness, the higher the probability of being selected). Each couple generates two children for the next generation, but the crossover operation is only applied with a certain probability, called the crossover rate. When it is applied, a random crossover point is chosen and, at that point, the genetic material from the left side of one parent is spliced to the material from the right side of the other parent. The next figure illustrates the idea:

So two parents give birth to two children for the next generation. The last step is to mutate the children. Once again, mutation is not always applied, since it depends on a rate which is usually small. Mutation introduces some new notes (new genetic material) into the next population; it increases convergence speed and reduces the probability of getting stuck in a local optimum.
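The crossover and mutation steps just described can be sketched on two toy 8-note melodies (the parent strings and the rates below are made up for illustration):

```r
set.seed(1)
genes <- c(letters, LETTERS)[1:48]   # the 48-note alphabet
p1 <- "tAJHJAJA"; p2 <- "xKeMvQrs"   # two toy parents
# One-point crossover: splice the left part of one parent to the right
# part of the other at a random cut point
cut <- sample(1:7, 1)
child1 <- paste0(substr(p1, 1, cut), substr(p2, cut + 1, 8))
child2 <- paste0(substr(p2, 1, cut), substr(p1, cut + 1, 8))
# Mutation: each position of a child is replaced with a small probability
mutrate <- 0.01
for (j in 1:8) if (runif(1) < mutrate) substr(child1, j, j) <- sample(genes, 1)
```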

How many 32-length melodies can be written with 48 notes? The answer is 48^32, which is this extremely big number:

630,550,095,814,788,844,406,620,626,462,420,008,802,064,662,402,084,486

To understand how enormous this is, let’s suppose we could use Sunway TaihuLight, nowadays the fastest supercomputer in the world. This monster can do 93,000,000,000,000,000 floating-point operations per second, so it would take more than 214,995,831,974,513,789,322,026,202,008 years to evaluate the fitness of all possible melodies: brute force is not an option.
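These figures can be sanity-checked with a quick (floating-point, hence approximate) calculation:

```r
# Brute-force estimate: evaluating one melody per floating-point operation
melodies <- 48^32            # ~6.3e53 candidate melodies
flops    <- 93e15            # Sunway TaihuLight: ~93 petaflops
years    <- melodies / flops / (60 * 60 * 24 * 365)
years                        # ~2.15e29 years
```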

A genetic algorithm does the job in just a few iterations. The best melodies introduce innovations which increase the average fitness of the whole population as well as its maximum fitness. The next table shows the evolution of one execution of the algorithm with a crossover rate of 75% and a mutation rate of 1% (not exhaustive):

| iteration | best melody | correct notes |
|---:|---|---:|
| 1 | OStxSTSbHwdsJAfTcRpoiNTRtRUxKhuRuKMcVNcBjRJNhENrVeFsPiegUpJHvRHw | 7 |
| 5 | tdbxSTSbHwdsJAfTcRpoiNTRtRITopoCPORzDdiFkEKrhEKtMHytiffzttJHvRHw | 12 |
| 20 | tAGHwdtUHzdMJATVACjJKVnetRQxKCKCtBKjqwiFkEKKhEKEMHyQiFfztUJHlRHF | 25 |
| 50 | tAJHwAJGjAJHJAJAtUCJKCkCtRUxKCKCtEKJKwKEtEKyhEKEMHyHrFfFtUJHJFHF | 45 |
| 65 | tAJHJAJGjAJHJAJAtUKJKCLCtCKxKCKCtEKJKwKEtEKyhEKEMHJHNFJFtFJHJFOF | 52 |
| 80 | tAJHJAJmtAJHJAJAtUKJKCLCtCKJKCKCtEKJKEKEtEKyMEKEMHJHJFJFtFJHJFOF | 56 |
| 95 | tAJHJAJjtAJHJAJAtUKJKCLCtCKJKCKCtEKJKEKEtEKJhEKEtFJHJFJFtFJHJFRF | 59 |
| 110 | tAJHJAJktAJHJAJAtUKJKCvCtCKJKCKCtEKJKEKEtEKJKEKEtFJHJFJFtFJHJFJF | 61 |
| 125 | tAJHJAJAtAJHJAJAtCKJKCKCtCKJKCKCtEKJKEKEtEKJKEKEtFJHJFJFtFJHJFJF | 64 |

The optimum is reached in just 125 iterations. It is fun to merge the best melodies of several iterations. This sample blends four of them: the first comes from the initial random population (the Schoenberg-flavored one) and the last is our goal melody; the other two were randomly picked from the remaining iterations. It is nice to hear how the genetic algorithm turns randomness into Bach’s wonderful melody:

This experiment was inspired by The Computational Beauty of Nature, a splendid book by Gary William Flake which I strongly recommend.

This is the code of the experiment:

```r
library(tuneR)
library(stringdist)
library(dplyr)
#Function to calculate frequency
freq=function(n) 440*(2^(1/12))^n
#cello notes
notes=c("C2",
"C#2/Db2",
"D2",
"D#2/Eb2",
"E2",
"F2",
"F#2/Gb2",
"G2",
"G#2/Ab2",
"A2",
"A#2/Bb2",
"B2",
"C3",
"C#3/Db3",
"D3",
"D#3/Eb3",
"E3",
"F3",
"F#3/Gb3",
"G3",
"G#3/Ab3",
"A3",
"A#3/Bb3",
"B3",
"C4",
"C#4/Db4",
"D4",
"D#4/Eb4",
"E4",
"F4",
"F#4/Gb4",
"G4",
"G#4/Ab4",
"A4",
"A#4/Bb4",
"B4",
"C5",
"C#5/Db5",
"D5",
"D#5/Eb5",
"E5",
"F5",
"F#5/Gb5",
"G5",
"G#5/Ab5",
"A5",
"A#5/Bb5",
"B5")
#Table of frequencies
frequencies=data.frame(n=-33:14) %>%
mutate(frequency=round(freq(n),4),
note=notes,
code=c(letters, toupper(letters))[1:48])
#Codification of the goal melody
prelude="tAJHJAJAtAJHJAJAtCKJKCKCtCKJKCKCtEKJKEKEtEKJKEKEtFJHJFJFtFJHJFJF"
#Sample wav
if (exists("all_wave")) rm(all_wave)
frequencies %>%
filter(code==substr(prelude,1,1)) %>%
select(frequency) %>%
as.numeric %>%
sine(duration = 10000)->all_wave
for (i in 2:nchar(prelude))
frequencies %>%
filter(code==substr(prelude,i,i)) %>%
select(frequency) %>%
as.numeric %>%
sine(duration = 10000) %>% bind(all_wave, .)->all_wave
play(all_wave)
writeWave(all_wave, 'PreludeSample.wav')

popsize=500 #Population size
length=nchar(prelude)
genes=frequencies$code
maxfitness=2^(1-(stringdist(prelude, prelude, method="hamming")-length))
maxiter=200 #Max number of iterations
iter=1
mutrate=0.01 #Mutation rate
#Initial population
replicate(popsize, sample(genes, length, replace = TRUE)) %>%
apply(2, function(x) paste(x,collapse="")) -> population
#Fitness evaluation
fitness=sapply(population, function(x) 2^(1-(stringdist(x, prelude, method="hamming")-length)), USE.NAMES=FALSE)
#Maximum fitness
maxfitness_iter=max(fitness)
#Best melody
which(fitness==max(fitness)) %>% min %>% population[.] -> bestfit
results=data.frame(iteration=iter, best_melody=bestfit, correct_notes=log(maxfitness_iter, base = 2)-1)
#Execution of the algorithm
while(maxfitness_iter<maxfitness & iter<maxiter)
{
population2=c()
for (i in 1:(popsize/2))
{
#Parents are selected with probability proportional to their fitness
parents=sample(1:popsize, size=2, prob=fitness/sum(fitness), replace=FALSE)
mix=sample(1:(length-1), 1)
#One-point crossover, applied with a 75% rate
if (runif(1)>.25)
{
p1=paste0(substr(population[parents[1]],1,mix), substr(population[parents[2]],mix+1,length))
p2=paste0(substr(population[parents[2]],1,mix), substr(population[parents[1]],mix+1,length))
}
else
{
p1=population[parents[1]]
p2=population[parents[2]]
}
#Mutation of the children
for (j in 1:length) if(runif(1)<mutrate) substr(p1,j,j)=sample(genes,1)
for (j in 1:length) if(runif(1)<mutrate) substr(p2,j,j)=sample(genes,1)
c(p1, p2) %>% c(population2) -> population2
}
#New population
population=population2
fitness=sapply(population, function(x) 2^(1-(stringdist(x, prelude, method="hamming")-length)), USE.NAMES=FALSE)
which(fitness==max(fitness)) %>% min %>% population[.] -> bestfit
print(paste0("Iteration ",iter, ": ", bestfit))
maxfitness_iter=max(fitness)
iter=iter+1
data.frame(iteration=iter, best_melody=bestfit, correct_notes=log(maxfitness_iter, base = 2)-1) %>% rbind(results) -> results
}
```

# Frankenstein

Remember me, remember me, but ah! forget my fate (Dido’s Lament, Henry Purcell)

A Voronoi diagram divides a plane based on a set of original points. Each polygon, or Voronoi cell, contains an original point and all the points of the plane that are closer to it than to any other.

This is a nice example of a Voronoi tessellation. You can find good explanations of Voronoi diagrams and Delaunay triangulations here (in English) or here (in Spanish).
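As a tiny self-contained illustration, deldir (the package used later in this post) can tessellate a handful of random points in a couple of lines:

```r
library(deldir)
set.seed(1)
# Ten random points on the unit square
x <- runif(10); y <- runif(10)
tess <- deldir(x, y, rw = c(0, 1, 0, 1))
# dirsgs contains the segments of the Voronoi (Dirichlet) cells
head(tess$dirsgs[, c("x1", "y1", "x2", "y2")])
plot(tess, wlines = "tess")
```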

A grayscale image is simply a matrix where the darkness of the pixel located at coordinates (i, j) is represented by the value of the corresponding element of the matrix: a grayscale image is a dataset. This is a Voronoi diagram of Frankenstein:

To do it I followed these steps:

1. Read the original image
2. Convert it to grayscale
3. Turn it into a pure black and white image
4. Obtain a random sample of black pixels (the previous image corresponds to a sample of 6,000 points)
5. Compute the Voronoi tessellation

Steps 1 to 3 were done with imager, a very appealing package to process and analyse images. Step 5 was done with deldir, a convenient package which computes Delaunay triangulations and Dirichlet (Voronoi) tessellations.
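The image-as-dataset idea from above is easy to verify with imager's built-in sample picture, so no external file is needed:

```r
library(imager)
# "boats" ships with imager; convert it to grayscale and inspect it
img <- grayscale(boats)
dim(img)             # width, height, depth (frames) and colour channels
# One row per pixel: its x and y coordinates and its gray value in [0, 1]
df <- as.data.frame(img)
head(df)
```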

The next grid shows tessellations for sample sizes from 500 to 12,000 points in steps of 500:

I gathered all the previous images into this gif, created with magick, another amazing R package I discovered recently:

This is the code:

```r
library(imager)
library(dplyr)
library(deldir)
library(ggplot2)
library(scales)

# Read and convert to grayscale
# (the original file-reading line was lost; the path below is a placeholder)
x=load.image("frankenstein.jpg") %>% grayscale

# This is just to define frame limits
x %>%
as.data.frame() %>%
summarize(xmin=min(x), xmax=max(x), ymin=min(y), ymax=max(y)) %>%
as.vector()->rw

# Filter image to convert it to bw
x %>%
threshold("45%") %>%
as.cimg() %>%
as.data.frame() -> df

# Function to compute and plot Voronoi tesselation depending on sample size
doPlot = function(n)
{
#Voronoi tesselation
df %>%
sample_n(n, weight=(1-value)) %>%
select(x,y) %>%
deldir(rw=rw, sort=TRUE) %>%
.$dirsgs -> data

# This is just to add some alpha to lines depending on their length
data %>%
mutate(long=sqrt((x1-x2)^2+(y1-y2)^2),
alpha=findInterval(long, quantile(long, probs = seq(0, 1, length.out = 20)))/21)-> data

# A little bit of ggplot to plot results
data %>%
ggplot(aes(alpha=(1-alpha))) +
geom_segment(aes(x = x1, y = y1, xend = x2, yend = y2), color="black", lwd=1) +
scale_x_continuous(expand=c(0,0))+
scale_y_continuous(expand=c(0,0), trans=reverse_trans())+
theme(legend.position  = "none",
panel.background = element_rect(fill="white"),
axis.ticks       = element_blank(),
panel.grid       = element_blank(),
axis.title       = element_blank(),
axis.text        = element_blank())->plot

return(plot)
}

# I call the previous function and store resulting plot in jpeg format
i=5000
name=paste0("frankie",i,".jpeg")
jpeg(name, width = 600, height = 800, units = "px", quality = 100)
doPlot(i)
dev.off()

# Once all images are stored I can create gif
library(magick)
frames=c()
images=list.files(pattern="jpeg")

for (i in length(images):1)
{
x=image_read(images[i]) # read each stored frame (this line was missing)
x=image_scale(x, "300")
c(x, frames) -> frames
}
animation=image_animate(frames, fps = 2)
image_write(animation, "Frankenstein.gif")
```

# Who is Alan Turing?

This government is committed to introducing posthumous pardons for people with certain historical sexual offence convictions who would be innocent of any crime now (British Government Spokesperson, September 2016)

Last September, the British government announced its intention to pursue what has become known as the Alan Turing law, offering exoneration to the tens of thousands of gay men convicted of historic charges.  The law was finally unveiled on 20 October 2016.

This plot shows the daily views of Alan Turing’s Wikipedia page during the last 365 days:

There are three huge peaks, on May 27th, July 30th and October 29th, that can be easily detected using the AnomalyDetection package:

After replacing these anomalies with a simple linear imputation, it is clear that the time series has experienced a significant shift since the last days of September:

To estimate the number of incremental views since September 28th (the date I chose as the starting point), I used the CausalImpact package:

The last plot shows the accumulated effect. After 141 days, there have been around 1 million incremental views of Alan Turing’s Wikipedia page (more than 7,000 per day), and the effect does not seem ephemeral.

Alan Turing has won another battle, this time posthumously. And thanks to it, many people have discovered his amazing legacy: long live Alan Turing.

This is the code I wrote to do the experiment:

```r
library(httr)
library(jsonlite)
library(stringr)
library(xts)
library(highcharter)
library(AnomalyDetection)
library(imputeTS)
library(CausalImpact)
library(dplyr)

# Views last 365 days
(Sys.Date()-365) %>% str_replace_all("[[:punct:]]", "") %>% substr(1,8) -> date_ini
Sys.time()       %>% str_replace_all("[[:punct:]]", "") %>% substr(1,8) -> date_fin
url="https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/en.wikipedia/all-access/all-agents/Alan%20Turing/daily"
paste(url, date_ini, date_fin, sep="/") %>%
GET  %>%
content("text") %>%
fromJSON %>%
.[[1]] -> wikistats

# To prepare dataset for highcharter
wikistats %>%
mutate(day=str_sub(timestamp, start = 1, end = 8)) %>%
mutate(day=as.POSIXct(day, format="%Y%m%d", tz="UTC")) -> wikistats

# Highcharts viz
rownames(wikistats)=wikistats$day
wikistats %>% select(views) %>% as.xts  %>% hchart

# Anomaly detection
wikistats %>% select(day, views) -> tsdf
tsdf %>%
AnomalyDetectionTs(max_anoms=0.01, direction='both', plot=TRUE)->res
res$plot

# Imputation of anomalies
tsdf[tsdf$day %in% as.POSIXct(res$anoms$timestamp, format="%Y-%m-%d", tz="UTC"),"views"]<-NA
ts(tsdf\$views, frequency = 365) %>%
na.interpolation() %>%
xts(order.by=wikistats\$day) -> tscleaned
tscleaned %>% hchart

# Causal Impact from September 28th
x=sum(index(tscleaned)<"2016-09-28 UTC")
impact <- CausalImpact(data = tscleaned %>% as.numeric,
pre.period = c(1,x),
post.period = c(x+1,length(tscleaned)),
model.args = list(niter = 5000, nseasons = 7),
alpha = 0.05)
plot(impact)
```

# How to Find Equidistant Coordinates Between Two Locations on Earth

Here’s to the ones who dream
foolish, as they may seem
(The Fools Who Dream, ‘La La Land’ OST)

One of the key points of The Meeting Point Locator is to obtain a great circle orthogonal to the bearing defined by any two given locations on Earth. A great circle is the intersection of the sphere with a plane that passes through its center point. In other words, a great circle is a false meridian. The great circle orthogonal to the direction defined by two given locations is the one which passes through all the points equidistant from both of them (at least, this is what I call the orthogonal great circle). This was my first approach to obtain it:

• Get the midpoint between the initial locations; let’s call it p1
• Calculate the direction (bearing angle) between the initial locations; let’s call it α
• Obtain a point very close to p1 (only 1 meter away) with bearing α+90; let’s call it p2
• Calculate the great circle which passes through p1 and p2

This is the code I used in this first approach:

```r
library(dplyr)
library(ggmap)
library(geosphere)
library(leaflet)
library(ggplot2)
library(scales)
library(extrafont)
windowsFonts(Garamond=windowsFont("Garamond"))

#Starting places (the place1 line was lost; Madrid is assumed from the text below)
place1="Madrid, Spain"
place2="Toledo, Spain"

# Call to Google Maps API to obtain coordinates of Starting places
p1=geocode(place1, output = "latlon")
p2=geocode(place2, output = "latlon")

#Midpoint of p1 and p2
mid=midPoint(p1, p2)

#Direction between p1 and p2
bea=bearingRhumb(p1, p2)

# Great circle between midpoint and 1-meter separated point with bearing bea+90
points=greatCircle(destPoint(p=mid, b=bea+90, d=1), mid, n=100)

# Arrange the points depending on the distance to the input locations
data.frame(dist2p1=apply(points, 1, function (x) distGeo(p1, x)),
dist2p2=apply(points, 1, function (x) distGeo(p2, x))) %>%
cbind(points) -> points

opts=theme(
panel.background = element_rect(fill="gray90"),
panel.border = element_rect(colour="black", fill=NA),
axis.line = element_line(size = 0.5, colour = "black"),
axis.ticks = element_line(colour="black"),
panel.grid.major = element_line(colour="white", linetype = 2),
panel.grid.minor = element_blank(),
axis.text = element_text(colour="gray25", size=6, family = "Garamond"),
axis.title = element_text(size=10, colour="gray10", family = "Garamond"),
legend.key = element_blank(),
legend.position = "none",
legend.background = element_blank(),
plot.title = element_text(size = 14, colour="gray10", family = "Garamond"),
plot.subtitle = element_text(size = 10, colour="gray20", family = "Garamond"))

ggplot(points, aes(x=dist2p1, y=dist2p2), guide=FALSE)+
geom_abline(intercept = 0, slope = 1, colour = "red", alpha=.25)+
geom_point(colour="blue", fill="blue", shape=21, alpha=.8, size=1)+
scale_x_continuous(label=scientific_format())+
scale_y_continuous(label=scientific_format())+
labs(title=paste(place1,"and" ,place2, sep=" "),
subtitle="Equidistant points (1st approach)",
x=paste("Distance to" ,place1, "(Km)", sep=" "),
y=paste("Distance to" ,place2, "(Km)", sep=" "))+opts

#Map (the marker-layer calls were lost; addCircleMarkers is assumed here)
points %>%
leaflet() %>%
addCircleMarkers(
lng=points$lon, lat=points$lat,
color = "blue",
stroke = FALSE, fillOpacity = 0.5) %>%
addCircleMarkers(
lng=c(p1$lon, p2$lon), lat=c(p1$lat, p2$lat),
color = "red",
stroke = FALSE, fillOpacity = 0.5)
```

I was pretty sure that all the points of this great circle must be equidistant from the initial locations, but I was wrong. When the starting points are close enough, everything goes well. This is an example with Madrid and Toledo (separated by only 67 kilometers) as starting points. The following plot shows the distance to Madrid and to Toledo of 100 points on the great circle obtained as described before:

This map also shows these 100 points (in blue) as well as the starting ones (in red):

Quite convincing. But this is what happens when I choose Tokyo and New York (10,873 km away) as the starting points:

And the map:

To be honest, I do not know why this happens but, given the success obtained with close starting points, the final solution was simple: bring the starting points closer while preserving the original midpoint. This was my second (and definitive) try:

And the map:

Mission accomplished. The final code:

```r
library(dplyr)
library(ggmap)
library(geosphere)
library(leaflet)
library(ggplot2)
library(scales)
library(extrafont)
windowsFonts(Garamond=windowsFont("Garamond"))

# Starting places
place1="Tokyo, Japan"
place2="New York, USA"

# Call to Google Maps API to obtain coordinates of Starting places
p1=geocode(place1, output = "latlon")
p2=geocode(place2, output = "latlon")

# Midpoint of p1 and p2
mid=midPoint(p1, p2)
# Distance between p1 and p2
dist=distGeo(p1, p2)
# A simple piece of code to bring the starting points closer preserving the original midpoint
x=p1
y=p2
while(dist>1000000)
{
x=midPoint(mid, x)
y=midPoint(mid, y)
dist=distGeo(x, y)
}
# Direction between resulting (close) points
bea=bearingRhumb(x, y)
# Great circle between midpoint and 1-meter separated point with bearing bea+90
points=greatCircle(destPoint(p=mid, b=bea+90, d=1), mid, n=100)

# Arrange the points depending on the distance to the input locations
data.frame(dist2p1=apply(points, 1, function (x) distGeo(p1, x)),
dist2p2=apply(points, 1, function (x) distGeo(p2, x))) %>%
cbind(points) -> points

opts=theme(
panel.background = element_rect(fill="gray90"),
panel.border = element_rect(colour="black", fill=NA),
axis.line = element_line(size = 0.5, colour = "black"),
axis.ticks = element_line(colour="black"),
panel.grid.major = element_line(colour="white", linetype = 2),
panel.grid.minor = element_blank(),
axis.text = element_text(colour="gray25", size=6, family = "Garamond"),
axis.title = element_text(size=10, colour="gray10", family = "Garamond"),
legend.key = element_blank(),
legend.position = "none",
legend.background = element_blank(),
plot.title = element_text(size = 14, colour="gray10", family = "Garamond"),
plot.subtitle = element_text(size = 10, colour="gray20", family = "Garamond"))

ggplot(points, aes(x=dist2p1, y=dist2p2), guide=FALSE)+
geom_abline(intercept = 0, slope = 1, colour = "red", alpha=.25)+
geom_point(colour="blue", fill="blue", shape=21, alpha=.8, size=1)+
scale_x_continuous(label=scientific_format())+
scale_y_continuous(label=scientific_format())+
labs(title=paste(place1,"and" ,place2, sep=" "),
subtitle="Equidistant points (2nd approach)",
x=paste("Distance to" ,place1, "(Km)", sep=" "),
y=paste("Distance to" ,place2, "(Km)", sep=" "))+opts

# Map (the marker-layer calls were lost; addCircleMarkers is assumed here)
points %>%
leaflet() %>%
addCircleMarkers(
lng=points$lon, lat=points$lat,
color = "blue",
stroke = FALSE, fillOpacity = 0.5) %>%
addCircleMarkers(
lng=c(p1$lon, p2$lon), lat=c(p1$lat, p2$lat),
color = "red",
stroke = FALSE, fillOpacity = 0.5)
```

# The Ex Libris Generator

Clap your hands if that’s really what you came here for
(Heaven, The Milk Carton Kids)

Inspired by the curves created by the harmonograph, I have made a Shiny app to generate random images that you can personalize and use as an ex libris. You can try the app here. In my opinion, an ex libris (also known as a bookplate) can be a nice, original and useful present for book lovers. This is an example:

More examples:

I always put the code at the end of my posts. Since I always have doubts about how many people are interested in what I do, today will be different: I will share the code with those who ask me for it in any of the following ways:

• Sending me a direct message on Twitter
• Dropping me an email

Cheers!

# The Meeting Point Locator

Hi Hillary, It’s Donald, would you like to have a beer with me in La Cabra Brewing, in Berwyn, Pennsylvania? (Hypothetical utilization of The Meeting Point Locator)

Finding a place to have a drink with someone can become a difficult task: it is quite common that neither wants to move into the other’s territory. I am sure you have faced this situation many times. With The Meeting Point Locator this will no longer be an issue, since it will give you a list of bars and cafés equidistant from any two given locations. Let’s see an example.

I do not know if Hillary Clinton and Donald Trump have met each other after the recent elections in the United States, but they probably will. Let’s suppose Hillary does not want to go to The White House and that Donald prefers any place other than Hillary’s home. No problem at all. According to this, Hillary lives in Chappaqua, New York, and Donald will live in The White House, Washington (although he supposedly won’t do so full time, as he announced recently). These two locations are the only input The Meeting Point Locator needs to propose equidistant places to have a drink. This is how it works:

• Generate a number of coordinates on the great circle which passes through the midpoint of the original locations and is orthogonal to the rhumb line defined by them; the number of points depends on the distance between the original locations.
• Arrange these coordinates according to their distance to the original locations, from the nearest to the most distant.
• Depending also on the distance between the original locations, define a radius to search around each point generated on the great circle (once calculated, this radius is constant for all searches).
• Starting from the nearest point, look for a number of places (20 by default) to have a drink within the previously calculated radius. To do this, the tool calls the Google Places API. Once the number of locations is reached, the process stops.

This map shows the places proposed for Hillary and Donald (blue points) as well as the original locations (red ones). You can zoom in for details:

These are the 20 closest places to both of them:

Some other examples of the utility of The Meeting Point Locator:

• Pau Gasol (who lives in San Antonio, Texas) and Marc Gasol (in Memphis, Tennessee) can meet in The Draft Sports Bar, in Leesville (Louisiana), to have a beer while watching an NBA match. It is 537 kilometers away from both of them.
• Bob Dylan (who lives in Malibu, California) and The Swedish Academy (located in Stockholm, Sweden) can smooth things over drinking a caipirinha in Bar São João, in Tremedal (Brazil), only 9,810 kilometers away from both of them.
• Spiderman (placed in New York City) and Doraemon (in Tokyo, Japan) can meet in Andreyevskaya, in Stroitel (Russia), to have a hot drink. Since they are superheroes, they will cover the 9,810 kilometers of separation in no time at all.

I faced two challenges in this experiment: how to generate the orthogonal great circle from two given locations, and how to define the radius and the number of points over this circle for the searches. I will try to explain both in depth in a future post.

You will find the code below. To make it work, do not forget to get your own key for the Google Places API Web Service here. I hope this tool will be helpful for someone; if so, do not hesitate to let me know.

```r
library(httr)
library(jsonlite)
library(dplyr)
library(ggmap)
library(geosphere)
library(DT)
library(leaflet)

# Write both addresses here (input)
place1="Chappaqua, New York, United States of America"
place2="The White House, Washington DC, United States of America"

# Call to Google Maps API to obtain coordinates of previous addresses
p1=geocode(place1, output = "latlon")
p2=geocode(place2, output = "latlon")

# To do the searches I need a radius; the first branch of this nested ifelse
# was lost, so the 5000-meter value for very distant locations is an assumption
radius=ifelse(distGeo(p1, p2)>1000000, 5000,
ifelse(distGeo(p1, p2)>100000, 2500, 1000))

# And a number of points
npoints=ifelse(distGeo(p1, p2)>1000000, 2002,
ifelse(distGeo(p1, p2)>100000, 7991, 19744))

# Place here the Google Places API Key
key="PLACE_YOUR_OWN_KEY_HERE"

# Build the url to look for bars and cafes with the previous key
# (the base-url definition was lost; this reconstruction uses the standard
# Google Places nearby-search endpoint, with "lat" and "lon" as placeholders)
url=paste0("https://maps.googleapis.com/maps/api/place/nearbysearch/json?location=lat,lon&radius=", radius, "&types=cafe|bar&key=", key)

# This is to obtain the great circle orthogonal to the rhumb defined by input locations
# and which passes over the midpoint. I will explain this step in the future
mid=midPoint(p1, p2)
dist=distGeo(p1, p2)
x=p1
y=p2
while(dist>1000000)
{
x=midPoint(mid, x)
y=midPoint(mid, y)
dist=distGeo(x, y)
}

bea=bearingRhumb(x, y)
points=greatCircle(destPoint(p=mid, b=bea+90, d=1), mid, n=npoints)

# Arrange the points depending on the distance to the input locations
data.frame(dist2p1=apply(points, 1, function (x) distGeo(p1, x)),
dist2p2=apply(points, 1, function (x) distGeo(p2, x))) %>%
mutate(order=apply(., 1, function(x) {max(x)})) %>%
cbind(points) %>%
arrange(order) -> points

# Start searchings
nlocs=0 # locations counter (by default stops when 20 is reached)
niter=1 # iterations counter (if greater than number of points on the great circle, stops)
results=data.frame()
while(!(nlocs>=20 | niter>npoints)) {
print(niter)
url %>%
gsub("lat", points[niter, 'lat'], .) %>%
gsub("lon", points[niter, 'lon'], .) %>%
GET %>%
content("text") %>%
fromJSON -> retrieve

df=data.frame(lat=retrieve$results$geometry$location$lat,
lng=retrieve$results$geometry$location$lng,
name=retrieve$results$name)
results %>% rbind(df)->results

nlocs=nlocs+nrow(df)
niter=niter+1
}

# I prepare results to do a Data Table
data.frame(dist2p1=apply(results, 1, function (x) round(distGeo(p1, c(as.numeric(x[2]), as.numeric(x[1])))/1000, digits=1)),
dist2p2=apply(results, 1, function (x) round(distGeo(p2, c(as.numeric(x[2]), as.numeric(x[1])))/1000, digits=1))) %>%
mutate(mx=apply(., 1, function(x) {max(x)})) %>%
cbind(results) %>%
arrange(mx) %>%
mutate(rank=row_number()) %>%
select(-mx)-> resultsDT

# This is the Data table
datatable(resultsDT,
class = 'cell-border stripe',
rownames = FALSE,
options = list(pageLength = 5),
colnames = c('Distance to A (Km)',
'Distance to B (Km)',
'Latitude',
'Longitude',
'Name',
'Rank'))

# Map with the locations using leaflet
# (the marker-layer calls were lost; addCircleMarkers and the popup contents
# are assumptions)
resultsDT %>%
leaflet() %>%
addCircleMarkers(
lng=resultsDT$lng, lat=resultsDT$lat,
color = "blue",
stroke = FALSE, fillOpacity = 0.5,
popup=paste0("<b>", resultsDT$name, "</b>")
) %>%
addCircleMarkers(
lng=p1$lon, lat=p1$lat,
color = "red",
stroke = FALSE, fillOpacity = 0.5,
popup=paste("<b>Place 1</b>", place1, sep="<br/>")
)%>%
addCircleMarkers(
lng=p2$lon, lat=p2$lat,
color = "red",
stroke = FALSE, fillOpacity = 0.5,
popup=paste("<b>Place 2</b>", place2, sep="<br/>")
)
```

# Three Shiny Apps to Celebrate the Beauty of Maths

Mathematics knows no races or geographic boundaries; for mathematics, the cultural world is one country (David Hilbert)

One of the best decisions I made this year regarding this blog was to move it to my own self-hosted domain using WordPress.org. It allows me, for example, to embed dynamic JavaScript visualizations like this one. Another thing I can do now is upload my Shiny Apps to share them with my readers. In this post I have gathered three Apps I made some time ago; you can play with them as well as get the code I wrote for each one:

• The Harmonograph: This App simulates harmonograph drawings. A harmonograph is a mechanism which draws trajectories by means of two pendulums: one moves a pencil and the other moves a platform with a piece of paper on it. Click here to try it.
• Shiny Wool Skeins: This App, inspired by this post, creates a plot consisting of chords inside a circle. You can change the colors as well as the number and quality of the chords. Click here to try it.
• The Coaster Maker: With this App you can create your own coasters using hypocycloids. Click here to try it.
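For the curious, a harmonograph trajectory is typically modeled as a sum of damped sinusoids on each axis, x(t) = A·sin(f·t + φ)·e^(−d·t). The sketch below is not the App's code; all frequencies, phases and damping values are made up for illustration:

```r
library(ggplot2)

# Time grid and two damped sinusoids per axis (arbitrary illustrative parameters)
t <- seq(0, 100, by = 0.001)
x <- sin(3    * t + pi/2) * exp(-0.004 * t) + sin(2 * t)        * exp(-0.008 * t)
y <- sin(3.01 * t)        * exp(-0.004 * t) + sin(2 * t + pi/4) * exp(-0.008 * t)

# Draw the trajectory as a thin continuous path
ggplot(data.frame(x, y), aes(x, y)) +
  geom_path(linewidth = 0.1) +
  theme_void()
```

The slight detuning between the frequencies (3 vs 3.01) is what produces the slowly precessing, hand-drawn look.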

I want to thank my friend Jorge, without whom I would not have been able to make Shiny work on my server.

# The Somnambulist and Pi

How wary we are of something warm and unborn. Something calmly by zero will divide (Unbegotten, The Somnambulist)

Some time ago, I took on the mission of drawing a plot for the cover of the new album of The Somnambulist, a music band from Berlin. They wanted a circlization of Pi, which is a graphic where numbers are represented in a circular layout. The idea is to connect each digit of Pi to its successive digit with links to the positions of the numerically corresponding external sectors. I used a color palette composed of 10 nuances of the visible spectrum as a tribute to Planck, as Marco (the vocalist) requested. After a number of attempts:
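The digit-linking idea can be sketched with the circlize package. This is not the code behind the cover (that is linked at the end of the post); it only illustrates the layout, using the first 50 decimal digits of π hard-coded below:

```r
library(circlize)

# First 50 decimal digits of pi
digits <- as.numeric(strsplit(
  "14159265358979323846264338327950288419716939937510", "")[[1]])

# Count transitions between each digit and its successor
trans <- table(head(digits, -1), tail(digits, -1))

# One external sector per digit, one chord per successive-digit pair
chordDiagram(as.matrix(trans))
```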

The album is named Unbegotten, a German word which means archaic. As Marco told me, in theology it also means something like eternal, never born and so never dying. I like how π is integrated into the title, substituting for the string “tt” in the middle. Pi is also eternal, so the association is genuine.

The music of The Somnambulist is intense, dark and powerful, and it is waiting for you here to listen to it. My favorite song is the one that gives its name to the album.

If you want to know more about circlizing numbers, you can visit this post, where you can also see the code I used as a starting point for this plot.

# Bayesian Blood

The fourth, the fifth, the minor fall and the major lift (Hallelujah, Leonard Cohen)

The next problem is taken from MacKay’s Information Theory, Inference and Learning Algorithms:

Two people have left traces of their own blood at the scene of a crime. A suspect, Oliver, is tested and found to have type ‘O’ blood. The blood groups of the two traces are found to be of type ‘O’ (a common type in the local population, having frequency 60%) and of type ‘AB’ (a rare type, with frequency 1%). Do these data give evidence in favor of the proposition that Oliver was one of the people who left blood at the scene?

To answer the question, let’s first remember the probability form of Bayes theorem:

$p(H|D)=\dfrac{p(H)p(D|H)}{p(D)}$

where:

• p(H) is the probability of the hypothesis H before we see the data, called the prior
• p(H|D) is the probability of the hypothesis after we see the data, called the posterior
• p(D|H) is the probability of the data under the hypothesis, called the likelihood
• p(D) is the probability of the data under any hypothesis, called the normalizing constant

If we have two hypotheses, A and B, we can write the ratio of posterior probabilities like this:

$\dfrac{p(A|D)}{p(B|D)}=\dfrac{p(A)p(D|A)}{p(B)p(D|B)}$

If p(A)=1-p(B) (which means that A and B are mutually exclusive and collectively exhaustive), then we can rewrite the ratio of the priors and the ratio of the posteriors as odds. Writing o(A) for the odds in favor of A, we get the odds form of Bayes theorem:

$o(A|D)=o(A)\dfrac{p(D|A)}{p(D|B)}$

Dividing through by o(A) we have:

$\dfrac{o(A|D)}{o(A)}=\dfrac{p(D|A)}{p(D|B)}$

The term on the left is the ratio of the posterior and prior odds. The term on the right is the likelihood ratio, also called the Bayes factor. If it is greater than 1, the data were more likely under A than under B; and since the odds ratio is then also greater than 1, the odds are greater, in light of the data, than they were before. If the Bayes factor is less than 1, the data were less likely under A than under B, so the odds in favor of A go down.

Let’s go back to our initial problem. If Oliver left his blood at the crime scene, the probability of the data is just the probability that a random member of the population has type ‘AB’ blood, which is 1%. If Oliver did not leave blood at the scene, what is the chance of finding two people, one with type ‘O’ and one with type ‘AB’? There are two ways it might happen: the first person we choose might have type ‘O’ and the second ‘AB’, or the other way around. So the probability in this case is 2(0.6)(0.01)=1.2%. Dividing the probabilities of both scenarios we obtain a Bayes factor of 0.83, and we conclude that the blood data is evidence against Oliver’s guilt.
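The calculation is easy to reproduce in R:

```r
# Likelihood of the data if Oliver left the type 'O' trace:
# the 'AB' trace must then come from one random 'AB' individual
p_data_given_oliver <- 0.01

# Likelihood if Oliver did not: one 'O' person and one 'AB' person, in either order
p_data_given_not_oliver <- 2 * 0.6 * 0.01

bayes_factor <- p_data_given_oliver / p_data_given_not_oliver
round(bayes_factor, 2)
# 0.83
```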

Once I read this example, I decided to replicate it using real data on blood type distribution by country from here. After cleaning the data, I have this nice data set to work with:

For each country, I take the most common blood type (the one the suspect has) and the least common one and replicate the previous calculations. For example, in the case of Spain, the most common type is ‘O+’ with 36% and the least common is ‘AB-’ with 0.5%. The Bayes factor is 0.005/(2(0.36)(0.005))=1.39, so the data support the hypothesis of guilt in this case. The next chart shows the Bayes factor across countries:

Some conclusions:

• Sometimes data consistent with a hypothesis are not necessarily evidence in favor of it
• How different the distribution of blood types is across countries!
• If you are an Estonian ‘A+’ murderer, choose your accomplice carefully
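One detail worth noticing before the full script: since the suspect is assumed to carry the most common type, the least common frequency cancels out, and the per-country Bayes factor reduces to 1/(2·max frequency). For Spain:

```r
p_most  <- 0.36   # most common type in Spain ('O+')
p_least <- 0.005  # least common type in Spain ('AB-')

# min/(2*min*max) simplifies to 1/(2*max): p_least cancels out
bf <- p_least / (2 * p_least * p_most)
round(bf, 2)
# 1.39
```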

This is the code of the experiment:

```library(rvest)
library(dplyr)
library(stringr)
library(DT)
library(highcharter)

# Webscapring of the table with the distribution of blood types
url <- "http://www.rhesusnegative.net/themission/bloodtypefrequencies/"
blood <- url %>%
  read_html() %>%
  html_node(xpath='/html/body/center/table') %>%
  html_table(fill=TRUE)

# Some data cleansing
blood %>% slice(-c(66:68)) -> blood

blood[,-c(1:2)] %>%
sapply(gsub, pattern=",", replacement=".") %>%
as.data.frame %>%
sapply(gsub, pattern=".79.2", replacement=".79") %>%
as.data.frame-> blood[,-c(1:2)]

blood %>%
sapply(gsub, pattern="%|,", replacement="") %>%
as.data.frame -> blood

blood[,-1] = apply(blood[,-1], 2, function(x) as.numeric(as.character(x)))

blood[,-c(1:2)] %>% mutate_all(funs( . / 100)) -> blood[,-c(1:2)]

# And finally, we have a nice data set
datatable(blood,
rownames = FALSE,
options = list(
searching = FALSE,
pageLength = 10)) %>%
formatPercentage(3:10, 2)

# Calculate the Bayes factor
blood %>%
mutate(factor=apply(blood[,-c(1,2)], 1, function(x) {min(x)/(2*min(x)*max(x))})) %>%
arrange(factor)-> blood

# Data Visualization
highchart() %>%
hc_chart(type = "column") %>%
hc_title(text = "Bayesian Blood") %>%
hc_subtitle(text = "An experiment about the Bayes Factor") %>%
hc_xAxis(categories = blood$Country,
         labels = list(rotation=-90, style = list(fontSize = "12px")))  %>%
hc_yAxis(plotBands = list(list(from = 0, to = 1, color = "rgba(255,215,0, 0.8)"))) %>%