Tag Archives: leaflet

Fatal Journeys: Visualizing the Horror

In war, truth is the first casualty (Aeschylus)

I am not a pessimistic person. On the contrary, I always try to look at the bright side of life. I also believe that living conditions are better now than they were years ago, as these plots show. But reducing the complexity of our world to just six graphs is dangerously simplistic. Our world is quite far from being a fair place, and one example is the migration drama.

Last year there were 934 incidents around the world involving people looking for a better life, in which more than 5,300 people lost their lives or went missing, 60% of them in the Mediterranean. Around 8 out of every 100 were children.

The Missing Migrants Project tracks migrants, including refugees and asylum-seekers, who have died or gone missing along mixed migration routes worldwide. You can find a huge amount of figures, plots and information about this scourge on their website. You can also download there a historical dataset on all these fatal journeys, including location, number of dead or missing people and information source, from 2015 until today.

In this experiment I read the dataset and make some plots using highcharter; you can find a link to the R code at the end of the post.

This is the evolution of the number of dead or missing migrants in the process of migration towards an international destination, from January 2015 to December 2017:
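The full code is linked at the end of the post, but a minimal sketch of this first plot could look like the snippet below. It assumes the downloaded file is called MissingMigrants-Global.csv and that it contains the columns Reported.Date (in day/month/year format) and Total.Dead.and.Missing; both the file name and the column names are assumptions to be checked against the actual download.

library(dplyr)
library(lubridate)
library(highcharter)

# Assumed file and column names from the Missing Migrants Project download
fatal=read.csv("MissingMigrants-Global.csv")

# Aggregate dead and missing migrants by month
fatal %>%
  mutate(month=floor_date(dmy(Reported.Date), unit="month")) %>%
  group_by(month) %>%
  summarise(victims=sum(Total.Dead.and.Missing, na.rm=TRUE)) -> monthly

# Monthly evolution as a highcharter column chart
hchart(monthly, "column", hcaes(x=month, y=victims)) %>%
  hc_title(text="Dead and missing migrants by month")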

The Mediterranean is the zone with the most incidents. To see it more clearly, this plot compares the Mediterranean with the rest of the world, grouping all the other zones:
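A sketch of the grouping step, under the additional assumption that the dataset has a Region column identifying the Mediterranean; every other region is collapsed into a single "Rest of the world" series:

# Mediterranean versus all other zones grouped together
fatal %>%
  mutate(zone=ifelse(Region=="Mediterranean", "Mediterranean", "Rest of the world"),
         month=floor_date(dmy(Reported.Date), unit="month")) %>%
  group_by(zone, month) %>%
  summarise(victims=sum(Total.Dead.and.Missing, na.rm=TRUE)) -> monthly_zones

hchart(monthly_zones, "line", hcaes(x=month, y=victims, group=zone))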

Is there any pattern in the time series of Mediterranean incidents? To find out, I have done a LOESS decomposition of the time series:

Good news: the trend has been decreasing over the last 12 months. Regarding the seasonal component, incidents increase in April and May. Why? I don't know.
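The decomposition itself can be sketched with the stl function, which splits a time series into seasonal, trend and remainder components using loess. This reuses the monthly_zones data frame from the previous snippet and assumes the Mediterranean series starts in January 2015 with no missing months:

# Monthly time series of Mediterranean victims
monthly_zones %>%
  filter(zone=="Mediterranean") %>%
  pull(victims) %>%
  ts(start=c(2015, 1), frequency=12) -> med_ts

# LOESS decomposition into seasonal, trend and remainder components
plot(stl(med_ts, s.window="periodic"))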

This is a map with the location of all incidents in 2017. Clicking on the markers you will find information about each incident:
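A minimal leaflet sketch of this map, assuming the coordinates come as a single "lat, lon" string in a Location.Coordinates column and that there is a Reported.Year column (again, names to be checked against the actual file):

library(leaflet)
library(tidyr)

# Split the assumed "lat, lon" string into numeric columns and map the 2017 incidents
fatal %>%
  filter(Reported.Year==2017) %>%
  separate(Location.Coordinates, into=c("lat", "lon"), sep=", ", convert=TRUE) %>%
  leaflet() %>%
  addTiles() %>%
  addCircleMarkers(lng=~lon, lat=~lat,
                   radius=4, color="red", stroke=FALSE, fillOpacity=0.6,
                   popup=~paste0("<b>", Region, "</b><br/>",
                                 "Dead and missing: ", Total.Dead.and.Missing))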

Each of us should try to make our world a better place. I don't really know how to do it, but I will try to make some experiments during this year to show that we have tons of work in front of us. Meanwhile, I hope this experiment helps to give visibility to this humanitarian disaster. If someone wants to use the code, the complete project is available on GitHub.

The Cycling Accident Map of Madrid City

Far away, this ship has taken me far away (Starlight, Muse)

Madrid City has an Open Data platform where around 300 datasets on a number of topics can be found. One of these sets is the one I used for this experiment. It contains information about cycling accidents that happened in the city from January to July 2017. I have made a map to locate where the accidents took place. This experiment shows how easy R makes it to create professional maps with Leaflet (in this case I use Carto basemaps).

To locate the accidents, the dataset only contains the address where they happened, so the first thing I did was to obtain their geographical coordinates using the geocode function from the ggmap package. There were 431 accidents during the first 7 months of 2017 (such a big number!) and I got coordinates for 407 of them, so I can locate 94% of the accidents.

Obviously, the number of accidents in a given place depends on how many cyclists ride there as well as on its infrastructure. Neither of these things can be seen in the map: it only shows the number of accidents.

The categorization of accidents is:

  • Double collision (Colisión doble): Traffic accident between two moving vehicles.
  • Multiple collision (Colisión múltiple): Traffic accident between more than two moving vehicles.
  • Fixed object collision (Choque con objeto fijo): Accident between a moving vehicle with a driver and an immovable object occupying the road or a separated area of it, whether a parked vehicle, tree, street lamp, etc.
  • Pedestrian collision (Atropello): Accident between a vehicle and a pedestrian occupying the road or travelling on sidewalks, refuges, walkways or other areas of the public road not intended for vehicle traffic.
  • Overturn (Vuelco): Accident suffered by a vehicle with more than two wheels which, for some reason, loses contact with the road and ends up resting on one side or on its roof.
  • Motorcycle fall (Caída motocicleta): Accident suffered by a motorcycle which at some moment loses its balance, because of the driver or due to the condition of the road.
  • Moped fall (Caída ciclomotor): Accident suffered by a moped which at some moment loses its balance, because of the driver or due to the condition of the road.
  • Bicycle fall (Caída bicicleta): Accident suffered by a bicycle which at some moment loses its balance, because of the driver or due to the condition of the road.

These categories are redundant (e.g. double and multiple collision), difficult to understand (e.g. overturn), or both at the same time (e.g. motorcycle fall and moped fall). The categorization also ignores the human injuries caused by the accident.

With all this in mind, this is the map:

Here is a full-screen version of the map.

My suggestions to the city council of Madrid are:

  1. Add geographical coordinates to the data (I guess many analyses will need them)
  2. Rethink the categorization to make it clearer and more informative
  3. Add more cycling datasets to the platform (details of bikeways, traffic …) to understand accidents better
  4. Attending just to the number of accidents, put the focus around Parque del Retiro, especially on its western surroundings, from Plaza de Cibeles to Plaza de Carlos V: more warning signs, more (or better) bikeways …

I add the code below to update the map (if someone asks me for it, I can do it regularly myself):

library(dplyr)
library(stringr)
library(ggmap)
library(leaflet)
# First, getting the data
file="300110-0-accidentes-bicicleta.csv"
download.file(paste0("http://datos.madrid.es/egob/catalogo/", file),
              destfile=file)

data=read.csv("300110-0-accidentes-bicicleta.csv", sep=";", skip=1)

# Prepare data for geolocation
data %>% 
  mutate(direccion=paste(str_trim(Lugar), str_trim(Numero), "MADRID, SPAIN", sep=", ") %>% 
           str_replace("NA, ", "") %>% 
           str_replace(" - ", " CON ")) -> data

# Geolocation (takes some time ...)
coords=c()
for (i in 1:nrow(data)) 
{
  coords %>% rbind(geocode(data[i,"direccion"])) -> coords
  Sys.sleep(0.5)
}
  
# Save data, just in case
data %>% cbind(coords) %>% saveRDS(file="bicicletas.RDS")

data=readRDS(file="bicicletas.RDS")

# Remove non-successful geolocations
data %>% 
  filter(!is.na(lon)) %>% 
  droplevels()-> data

# Build the date-time field and the popup contents
data %>% mutate(Fecha=paste0(as.Date(data$Fecha, "%d/%m/%Y"), " ", TRAMO.HORARIO),
                popup=paste0("<b>Dónde:</b>",
                             direccion,
                             "<b>Cuándo:</b>",
                             Fecha,
                             "<b>Qué pasó:</b>",
                             Tipo.Accidente)) -> data

# Do the map
data %>% split(data$Tipo.Accidente) -> data.df

l <- leaflet() %>% addProviderTiles(providers$CartoDB.Positron)

names(data.df) %>%
  purrr::walk( function(df) {
    l <<- l %>%
      addCircleMarkers(data=data.df[[df]],
                 lng=~lon, lat=~lat,
                 popup=~popup,
                 color="red",
                 stroke=FALSE,
                 fillOpacity = 0.8,
                 group = df,
                 clusterOptions = markerClusterOptions(removeOutsideVisibleBounds = F))
  })

l %>%
  addLayersControl(
    overlayGroups = names(data.df),
    options = layersControlOptions(collapsed = FALSE)
  )

How to Find Equidistant Coordinates Between Two Locations on Earth

Here’s to the ones who dream
foolish, as they may seem
(The Fools Who Dream, ‘La La Land’ OST)

One of the key points of The Meeting Point Locator is to obtain the great circle orthogonal to the bearing defined by any two given locations on Earth. A great circle is the intersection of the sphere and a plane that passes through the center of the sphere; meridians, for example, are great circles. The great circle orthogonal to the direction defined by two given locations is the one which passes through all the points equidistant from both of them (at least, this is what I call the orthogonal great circle). This was my first approach to obtain it:

  • Get the midpoint between the initial locations; let's call it p1
  • Calculate the direction (bearing angle) between the initial locations; let's call it α
  • Obtain a point very close to p1 (only 1 meter away) with bearing α+90 degrees; let's call it p2
  • Calculate the great circle which passes through p1 and p2

This is the code I used in this first approach:

library(dplyr)
library(ggmap)
library(geosphere)
library(leaflet)
library(ggplot2)
library(scales)
library(extrafont)
windowsFonts(Garamond=windowsFont("Garamond"))

#Starting places
place1="Madrid, Spain"
place2="Toledo, Spain"

# Call to Google Maps API to obtain coordinates of Starting places
p1=geocode(place1, output = "latlon")
p2=geocode(place2, output = "latlon")

#Midpoint of p1 and p2
mid=midPoint(p1, p2)

#Direction between p1 and p2
bea=bearingRhumb(p1, p2)

# Great circle between midpoint and 1-meter separated point with bearing bea+90
points=greatCircle(destPoint(p=mid, b=bea+90, d=1), mid, n=100)

# Compute the distance from each point to the input locations
data.frame(dist2p1=apply(points, 1, function (x) distGeo(p1, x)),
           dist2p2=apply(points, 1, function (x) distGeo(p2, x))) %>% 
  cbind(points) -> points

opts=theme(
  panel.background = element_rect(fill="gray90"),
  panel.border = element_rect(colour="black", fill=NA),
  axis.line = element_line(size = 0.5, colour = "black"),
  axis.ticks = element_line(colour="black"),
  panel.grid.major = element_line(colour="white", linetype = 2),
  panel.grid.minor = element_blank(),
  axis.text = element_text(colour="gray25", size=6, family = "Garamond"),
  axis.title = element_text(size=10, colour="gray10", family = "Garamond"),
  legend.key = element_blank(),
  legend.position = "none",
  legend.background = element_blank(),
  plot.title = element_text(size = 14, colour="gray10", family = "Garamond"),
  plot.subtitle = element_text(size = 10, colour="gray20", family = "Garamond"))

ggplot(points, aes(x=dist2p1, y=dist2p2), guide=FALSE)+
  geom_abline(intercept = 0, slope = 1, colour = "red", alpha=.25)+
  geom_point(colour="blue", fill="blue", shape=21, alpha=.8, size=1)+
  scale_x_continuous(label=scientific_format())+
  scale_y_continuous(label=scientific_format())+
  labs(title=paste(place1,"and" ,place2, sep=" "),
       subtitle="Equidistant points (2nd approach)",
       x=paste("Distance to" ,place1, "(Km)", sep=" "),
       y=paste("Distance to" ,place2, "(Km)", sep=" "))+opts

#Map
points %>% 
  leaflet() %>% 
  addTiles(urlTemplate = "https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png") %>% 
  addCircleMarkers(
    lng=points$lon, lat=points$lat,
    radius = 6,
    color = "blue",
    stroke = FALSE, fillOpacity = 0.5) %>% 
  addCircleMarkers(
    lng=c(p1$lon, p2$lon), lat=c(p1$lat, p2$lat),
    radius = 6,
    color = "red",
    stroke = FALSE, fillOpacity = 0.5)

I was pretty sure that all the points on this last great circle would be equidistant from the initial locations, but I was wrong. When the starting points are close enough, everything goes well. This is an example with Madrid and Toledo (separated by only 67 kilometers) as starting points. The following plot shows the distances to Madrid and Toledo of 100 points on the great circle obtained as described above:


This map also shows these 100 points (in blue) as well as the starting ones (in red):

Quite convincing. But this is what happens when I choose Tokyo and New York (10,873 km apart) as the starting points:


And the map:

To be honest, I do not know why this happens but, based on the success obtained with close starting points, the final solution was simple: bring the starting points closer while preserving the original midpoint. This was my second (and definitive) try:


And the map:

Mission accomplished. The final code:

library(dplyr)
library(ggmap)
library(geosphere)
library(leaflet)
library(ggplot2)
library(scales)
library(extrafont)
windowsFonts(Garamond=windowsFont("Garamond"))

# Starting places
place1="Tokyo, Japan"
place2="New York, USA"

# Call to Google Maps API to obtain coordinates of Starting places
p1=geocode(place1, output = "latlon")
p2=geocode(place2, output = "latlon")

# Midpoint of p1 and p2
mid=midPoint(p1, p2)
# Distance between p1 and p2
dist=distGeo(p1, p2)
# A simple piece of code to bring the starting points closer while preserving the original midpoint
x=p1
y=p2
while(dist>1000000)
{
  x=midPoint(mid, x)
  y=midPoint(mid, y)
  dist=distGeo(x, y)
}
# Direction between resulting (close) points
bea=bearingRhumb(x, y)
# Great circle between midpoint and 1-meter separated point with bearing bea+90
points=greatCircle(destPoint(p=mid, b=bea+90, d=1), mid, n=100)

# Compute the distance from each point to the input locations
data.frame(dist2p1=apply(points, 1, function (x) distGeo(p1, x)),
           dist2p2=apply(points, 1, function (x) distGeo(p2, x))) %>% 
  cbind(points) -> points

opts=theme(
  panel.background = element_rect(fill="gray90"),
  panel.border = element_rect(colour="black", fill=NA),
  axis.line = element_line(size = 0.5, colour = "black"),
  axis.ticks = element_line(colour="black"),
  panel.grid.major = element_line(colour="white", linetype = 2),
  panel.grid.minor = element_blank(),
  axis.text = element_text(colour="gray25", size=6, family = "Garamond"),
  axis.title = element_text(size=10, colour="gray10", family = "Garamond"),
  legend.key = element_blank(),
  legend.position = "none",
  legend.background = element_blank(),
  plot.title = element_text(size = 14, colour="gray10", family = "Garamond"),
  plot.subtitle = element_text(size = 10, colour="gray20", family = "Garamond"))

ggplot(points, aes(x=dist2p1, y=dist2p2), guide=FALSE)+
  geom_abline(intercept = 0, slope = 1, colour = "red", alpha=.25)+
  geom_point(colour="blue", fill="blue", shape=21, alpha=.8, size=1)+
  scale_x_continuous(label=scientific_format())+
  scale_y_continuous(label=scientific_format())+
  labs(title=paste(place1,"and" ,place2, sep=" "),
       subtitle="Equidistant points (2nd approach)",
       x=paste("Distance to" ,place1, "(Km)", sep=" "),
       y=paste("Distance to" ,place2, "(Km)", sep=" "))+opts

points %>% 
  leaflet() %>% 
  addTiles(urlTemplate = "https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png") %>% 
  addCircleMarkers(
    lng=points$lon, lat=points$lat,
    radius = 6,
    color = "blue",
    stroke = FALSE, fillOpacity = 0.5) %>% 
  addCircleMarkers(
    lng=c(p1$lon, p2$lon), lat=c(p1$lat, p2$lat),
    radius = 6,
    color = "red",
    stroke = FALSE, fillOpacity = 0.5)

The Meeting Point Locator

Hi Hillary, it’s Donald. Would you like to have a beer with me at La Cabra Brewing, in Berwyn, Pennsylvania? (Hypothetical use of The Meeting Point Locator)

Finding a place to have a drink with someone can be a difficult task. It is quite common that one of the parties does not want to move into the other’s territory. I am sure you have faced this situation many times. With The Meeting Point Locator this will no longer be an issue, since it will give you a list of bars and cafés equidistant from any two given locations. Let’s see an example.

I do not know if Hillary Clinton and Donald Trump have met each other since the recent elections in the United States, but they probably will. Let’s suppose Hillary doesn’t want to go to The White House and that Donald prefers some place other than Hillary’s home. No problem at all. According to this, Hillary lives in Chappaqua, New York, and Donald will live in The White House, Washington (although he supposedly won’t do so full time, as he announced recently). These two locations are the only input The Meeting Point Locator needs to propose equidistant places to have a drink. This is how it works:

  • It generates a number of coordinates on the great circle which passes through the midpoint of the original locations and is orthogonal to the rhumb line defined by them; the number of points depends on the distance between the original locations.
  • It arranges these coordinates according to their distance to the original locations, from nearest to farthest.
  • Depending also on the distance between the original locations, it defines a radius to search around each point generated on the great circle (once calculated, this radius is constant for all searches).
  • Starting from the nearest point, it looks for a number of places (20 by default) to have a drink within the radius calculated previously. To do this, it calls the Google Places API. Once the number of locations is reached, the process stops.

This map shows the places proposed for Hillary and Donald (blue points) as well as the original locations (red ones). You can zoom in for details:

These are the 20 closest places to both of them:


Some other examples of the utility of The Meeting Point Locator:

  • Pau Gasol (who lives in San Antonio, Texas) and Marc Gasol (in Memphis, Tennessee) can meet at The Draft Sports Bar, in Leesville (Louisiana), to have a beer while watching an NBA match. It is 537 kilometers away from both of them.
  • Bob Dylan (who lives in Malibu, California) and The Swedish Academy (in Stockholm, Sweden) can smooth things over drinking a caipirinha at Bar São João, in Tremedal (Brazil), only 9,810 kilometers away from both of them.
  • Spiderman (in New York City) and Doraemon (in Tokyo, Japan) can meet at Andreyevskaya, in Stroitel (Russia), to have a hot drink. Since they are superheroes, they will cover the 9,810 kilometers of separation in no time at all.

I faced two challenges in this experiment: how to generate the orthogonal great circle from two given locations, and how to define the radius and the number of points on this circle for the searches. I will try to explain both in depth in a future post.

You will find the code below. To make it work, do not forget to get your own key for the Google Places API Web Service here. I hope this tool will be helpful to someone; if so, do not hesitate to tell me.

library(httr)
library(jsonlite)
library(dplyr)
library(ggmap)
library(geosphere)
library(DT)
library(leaflet)

# Write both addresses here (input)
place1="Chappaqua, New York, United States of America"
place2="The White House, Washington DC, United States of America"

# Call to Google Maps API to obtain coordinates of previous addresses
p1=geocode(place1, output = "latlon")
p2=geocode(place2, output = "latlon")

# To do the searches I need a radius
radius=ifelse(distGeo(p1, p2)>1000000, 10000,
              ifelse(distGeo(p1, p2)>100000, 2500, 1000))

# And a number of points
npoints=ifelse(distGeo(p1, p2)>1000000, 2002,
               ifelse(distGeo(p1, p2)>100000, 7991, 19744))

# Place here the Google Places API Key
key="PLACE_YOUR_OWN_KEY_HERE"

# Build the url to look for bars and cafes with the previous key
url1="https://maps.googleapis.com/maps/api/place/nearbysearch/json?location=lat,lon&radius="
url2="&types=cafe|bar&key="
url=paste0(url1,radius,url2,key)

# This is to obtain the great circle orthogonal to the rhumb defined by input locations
# and which passes over the midpoint. I will explain this step in the future
mid=midPoint(p1, p2)
dist=distGeo(p1, p2)
x=p1
y=p2
while(dist>1000000)
{
  x=midPoint(mid, x)
  y=midPoint(mid, y)
  dist=distGeo(x, y)
}

bea=bearingRhumb(x, y)
points=greatCircle(destPoint(p=mid, b=bea+90, d=1), mid, n=npoints)

# Arrange the points depending on the distance to the input locations
data.frame(dist2p1=apply(points, 1, function (x) distGeo(p1, x)),
           dist2p2=apply(points, 1, function (x) distGeo(p2, x))) %>% 
  mutate(order=apply(., 1, function(x) {max(x)})) %>% 
  cbind(points) %>% 
  arrange(order) -> points

# Start the searches
nlocs=0 # locations counter (by default stops when 20 is reached)
niter=1 # iterations counter (if greater than number of points on the great circle, stops)
results=data.frame()
while(!(nlocs>=20 | niter>npoints)) {
  print(niter)
  url %>% 
    gsub("lat", points[niter, 'lat'], .) %>% 
    gsub("lon", points[niter, 'lon'], .) %>% 
    GET %>% 
    content("text") %>% 
    fromJSON -> retrieve
  
  df=data.frame(lat=retrieve$results$geometry$location$lat,
                lng=retrieve$results$geometry$location$lng,
                name=retrieve$results$name, 
                address=retrieve$results$vicinity)
  results %>% rbind(df)->results
  
  nlocs=nlocs+nrow(df)
  niter=niter+1 
}

# I prepare results to do a Data Table
data.frame(dist2p1=apply(results, 1, function (x) round(distGeo(p1, c(as.numeric(x[2]), as.numeric(x[1])))/1000, digits=1)),
           dist2p2=apply(results, 1, function (x) round(distGeo(p2, c(as.numeric(x[2]), as.numeric(x[1])))/1000, digits=1))) %>% 
  mutate(mx=apply(., 1, function(x) {max(x)})) %>% 
  cbind(results) %>% 
  arrange(mx) %>% 
  mutate(rank=row_number()) %>% 
  select(-mx)-> resultsDT

# This is the Data table
datatable(resultsDT, 
          class = 'cell-border stripe',
          rownames = FALSE,
          options = list(pageLength = 5),
          colnames = c('Distance to A (Km)', 
                       'Distance to B (Km)', 
                       'Latitude', 
                       'Longitude',
                       'Name', 
                       'Address', 
                       'Rank'))

# Map with the locations using leaflet
resultsDT %>% 
  leaflet() %>% 
  addTiles() %>% 
  addCircleMarkers(
    lng=resultsDT$lng, lat=resultsDT$lat,
    radius = 8,
    color = "blue",
    stroke = FALSE, fillOpacity = 0.5,
    popup=paste(paste0("<b>", resultsDT$name, "</b>"), resultsDT$address, sep="<br/>")
  ) %>% 
  addCircleMarkers(
    lng=p1$lon, lat=p1$lat,
    radius = 10,
    color = "red",
    stroke = FALSE, fillOpacity = 0.5,
    popup=paste("<b>Place 1</b>", place1, sep="
")
  )%>% 
  addCircleMarkers(
    lng=p2$lon, lat=p2$lat,
    radius = 10,
    color = "red",
    stroke = FALSE, fillOpacity = 0.5,
    popup=paste("<b>Place 2</b>", place2, sep="
")
  )

Climatic Change At A Glance

Mmm. Lost a planet, Master Obi-Wan has. How embarrassing (Yoda, Attack Of The Clones)

Some time ago I published this post on KDnuggets, in which I analyzed historical temperatures to show how we are gradually heading toward a warmer planet. Simple data science to reach a simple (and increasingly accepted) conclusion: global warming is real. Although I was criticized for it, I still believe what I said then: you don’t have to be a climatologist to empirically confirm global warming.

This experiment is another example of that. It is even simpler, since it is only based on visual perception, but I think it is also quite conclusive. In this case, I represent U.S. temperature outliers from 1964 to 2013, with one map per year. The dataset contains station ID, name, min/max temperature, as well as the coordinates (in degrees) of the recorded weather. The original weather data was collected by NOAA, and the anomaly analysis was done by Enigma. You can download the data here.

Anomalies are divided into four categories: Strong Hot, Weak Hot, Weak Cold and Strong Cold. For each station, I represent the difference between the number of Cold and Hot anomalies (regardless of their strength), so blue bubbles mark stations where the total number of Cold anomalies during the year is greater than the total number of Hot ones, and red bubbles mark the opposite. The size of each bubble is proportional to the total number of anomalies registered at the station. As an example, here is the map for 1975:

It seems 1975 was hot on the right side of the country and cold on the left one. Concretely, the TONOPAH station (Nevada) registered 30 anomalies, most of them (26) due to cold temperatures. This is why its bubble is blue. This GIF shows the evolution of all these maps from 1964 to 2013:


Maybe it is just my personal feeling, but don’t you see how the red bubbles are gradually winning out over the blue ones? Maybe I am a demagogue.

This code generates a dynamic map for each year in html format:

library(data.table)
library(stringr)
library(leaflet)
library(RColorBrewer)
library(classInt)
library(dplyr)
library(htmlwidgets)
anomalies = fread("enigma-enigma.weather.anomalies.outliers-1964-2013-05ce047dbf3e67f83db9ae841545a333.csv")
anomalies %>%
  mutate(year=substr(date_str, 1, 4)) %>%
  group_by(year, longitude, latitude, id, station_name) %>%
  summarise(
    Strong_Hot=sum(str_count(type,"Strong Hot")),
    Weak_Hot=sum(str_count(type,"Weak Hot")),
    Weak_Cold=sum(str_count(type,"Weak Cold")),
    Strong_Cold=sum(str_count(type,"Strong Cold")),
    total=n()) %>%
  mutate(score=sign(-Strong_Hot-Weak_Hot+Weak_Cold+Strong_Cold)) %>%
  mutate(color=ifelse(score==1, "Blue",ifelse(score==0, "White", "Red"))) -> anomalies2
for (i in unique(anomalies2$year))
{
  anomalies2 %>%
    filter(year==i) %>%
    leaflet() %>%
    fitBounds(-124, 34, -62, 40) %>%
    addProviderTiles("Stamen.TonerLite") %>%
    addCircleMarkers(lng = ~longitude,
                     lat = ~latitude,
                     radius = ~ifelse(total < 20, 2, ifelse(total < 27, 4, 8)),
                     color= ~color,
                     stroke=FALSE,
                     fillOpacity = 0.5,
                     popup = ~paste(sep = "<br/>",
                                    paste0("<b>", station_name, "</b>"),
                                    paste0("Strong Hot: ", Strong_Hot),
                                    paste0("Weak Hot: ", Weak_Hot),
                                    paste0("Weak Cold: ", Weak_Cold),
                                    paste0("Strong Cold: ", Strong_Cold))) -> m
    saveWidget(m, file=paste0("m", i, ".html"))
}

A Segmentation Of The World According To Migration Flows ft. Leaflet

Up in the sky you just feel fine, there is no running out of time and you never cross a line (Up In The Sky, 77 Bombay Street)

In this post I analyze two datasets from Enigma:

  • Migration flows: every 10 years since 1960, the World Bank has estimated migrations worldwide (267,960 rows)
  • World population: values and percentages of population for each nation, beginning in 1960, from the World Bank’s Health, Nutrition and Population project (4,168,185 rows)

Since the second dataset is very large, I load it into R using the fread function of the data.table package, which is extremely fast. To filter the datasets, I also use dplyr and the pipes of the magrittr package (my life changed since I discovered them).

To build a comparable indicator across countries, I divide the migration flows (from and to each country) by the mean population of the country in each decade. I do this because migration flows have been aggregated by decade since 1960. For example, during the first decade of the 21st century, Argentina received 1,537,850 immigrants, which represents 3.99% of the mean population of the country in that decade. In the same period, immigration to Burundi represented only 0.67% of its mean population.

What happened in the whole world in that decade? Around 166 million people moved to other countries, which represents 2.58% of the mean population of the world. I use this figure to divide countries into four groups:

  • Isolated: countries with both % of immigrants and % of emigrants under 2.58%
  • Emitter: countries with % of immigrants under 2.58% and % of emigrants over 2.58%
  • Receiver: countries with % of immigrants over 2.58% and % of emigrants under 2.58%
  • Transit: countries with both % of immigrants and % of emigrants over 2.58%

To create the map I use the leaflet package, as I did in my previous post. The shapefile of the world can be downloaded here. This is how the world looks according to this segmentation:


Some conclusions:

  • There are just sixteen receiver countries: United Arab Emirates, Argentina, Australia, Bhutan, Botswana, Costa Rica, Djibouti, Spain, Gabon, The Gambia, Libya, Qatar, Rwanda, Saudi Arabia, United States and Venezuela
  • China and India (the two most populous countries in the world) are isolated
  • Transit countries are concentrated in the northern hemisphere and most of them are located in cold latitudes
  • There are six emitter countries with more than 30% of emigrants between 2000 and 2009: Guyana, Tonga, Tuvalu, Jamaica, Bosnia and Herzegovina, and Albania

This is the code you need to reproduce the map:

library(data.table)
library(dplyr) 
library(leaflet)
library(rgdal)
library(RColorBrewer)
setwd("YOU WORKING DIRECTORY HERE")
populflows = read.csv(file="enigma-org.worldbank.migration-remittances.migrants.migration-flow-c57405e33412118c8757b1052e8a1490.csv", stringsAsFactors=FALSE)
population = fread("enigma-org.worldbank.hnp.data-eaa31d1a34fadb52da9d809cc3bec954.csv")
# Population
population %>% 
  filter(indicator_name=="Population, total") %>% 
  as.data.frame %>% 
  mutate(decade=(year-year%%10)) %>% 
  group_by(country_name, country_code, decade) %>% 
  summarise(avg_pop=mean(value)) -> population2
# Immigrants by country
populflows %>% filter(!is.na(total_migrants)) %>% 
  group_by(migration_year, destination_country) %>% 
  summarise(inmigrants = sum(total_migrants))  %>% 
  merge(population2, by.x = c("destination_country", "migration_year"), by.y = c("country_name", "decade"))  %>% 
  mutate(p_inmigrants=inmigrants/avg_pop) -> inmigrants
# Emigrants by country
populflows %>% filter(!is.na(total_migrants)) %>% 
  group_by(migration_year, country_of_origin) %>% 
  summarise(migrants = sum(total_migrants)) %>%  
  merge(population2, by.x = c("country_of_origin", "migration_year"), by.y = c("country_name", "decade"))  %>%
  mutate(p_migrants=migrants/avg_pop) -> migrants
# Join of data sets
migrants %>% 
  merge(inmigrants, by.x = c("country_code", "migration_year"), by.y = c("country_code", "migration_year")) %>%
  filter(migration_year==2000) %>% 
  select(country_of_origin, country_code, avg_pop.x, migrants, p_migrants, inmigrants, p_inmigrants) %>% 
  plyr::rename(., c("country_of_origin"="Country", 
                    "country_code"="Country.code", 
                    "avg_pop.x"="Population.mean",
                    "migrants"="Total.migrants",
                    "p_migrants"="p.of.migrants",
                    "inmigrants"="Total.inmigrants",
                    "p_inmigrants"="p.of.inmigrants")) -> populflows2000
# Threshold to create groups
populflows2000 %>% 
  summarise(x=sum(Total.migrants), y=sum(Total.inmigrants), z=sum(Population.mean)) %>% 
  mutate(m=y/z) %>% 
  select(m)  %>% 
  as.numeric -> avg
# Segmentation
populflows2000$Group="Receiver"
populflows2000[populflows2000$p.of.migrants>avg & populflows2000$p.of.inmigrants>avg, "Group"]="Transit"
populflows2000[populflows2000$p.of.migrants<avg & populflows2000$p.of.inmigrants<avg, "Group"]="Isolated"
populflows2000[populflows2000$p.of.migrants>avg & populflows2000$p.of.inmigrants<avg, "Group"]="Emitter"
#Loading shapefile from http://data.okfn.org/data/datasets/geo-boundaries-world-110m 
countries=readOGR("json/countries.geojson", "OGRGeoJSON") 
# Join shapefile and enigma information 
joined=merge(countries, populflows2000, by.x="wb_a3", by.y="Country.code", all=FALSE, sort = FALSE) 
joined$Group=as.factor(joined$Group) 
# To define one color by segment 
factpal=colorFactor(brewer.pal(4, "Dark2"), joined$Group) 
leaflet(joined) %>%
  addPolygons(stroke = TRUE, color="white", weight=1, smoothFactor = 0.2, fillOpacity = .8, fillColor = ~factpal(Group)) %>%
  addTiles() %>%
  addLegend(pal = factpal, values=c("Emitter", "Isolated", "Receiver", "Transit"))

A Simple Interactive Map Of US Prisons With Leaflet

The love of one’s country is a splendid thing. But why should love stop at the border? (Pablo Casals, Spanish cellist)

Some time ago, I discovered Enigma, an amazing open platform that unifies billions of records from thousands of government sources to make the world of public data universally accessible and useful. This is the first experiment I have done using data from Enigma. This is what I did:

  1. Create a free account, then search and download the data. Save the csv file in your working directory. The file contains information about all prison facilities in the United States (private and state-run) as recorded by the Department of Corrections in each state: facility types, names, addresses (or lat/long coordinates), ownership names and details. In sum, there is information about 1,248 prison facilities.
  2. Since most of the prisons in the file do not have geographical coordinates, I obtain latitude and longitude using the geocode function from the ggmap package. This step takes some time. I also remove closed facilities. In the end, I obtain a dataset with complete information for 953 prison facilities.
  3. After cleaning and filling out the data, generating the map is very easy with the leaflet package for R. I create a column named popup_info, pasting together name and address, to be shown in the popup. Instead of the default OpenStreetMap basemap, I use a CartoDB one.

In my opinion, the resulting map is very appealing for such a minimal effort:

This map could also be a good example of visual correlation, since the distribution of prisons roughly follows the distribution of population. Here you have the code:

library(dplyr)
library(ggmap)
library(leaflet)
setwd("YOUR WORKING DIRECTORY HERE")
prisons = read.csv(file="enigma-enigma.prisons.all-facilities-bd6a927c4024c16d8ba9e21d52292b0f.csv", stringsAsFactors=FALSE)
prisons %>% 
  mutate(address=paste(facility_address1, city, state, zip, "EEUU", sep=", ")) %>%
  select(address) %>% 
  lapply(function(x){geocode(x, output="latlon")})  %>% 
  as.data.frame %>% 
  cbind(prisons) -> prisons
prisons %>%  
  mutate(popup_info=paste(sep = "<br/>", paste0("<b>", facility_name, "</b>"), facility_address1, city, state, zip)) %>% 
  mutate(lon=ifelse(is.na(longitude), address.lon, longitude),
         lat=ifelse(is.na(latitude),  address.lat, latitude)) %>%
  filter(!is.na(lon) & !grepl("CLOSED", facility_name)) -> prisons
leaflet(prisons) %>%
  addProviderTiles("CartoDB.Positron") %>%
  addCircleMarkers(lng = ~lon, 
                   lat = ~lat, 
                   radius = 3, 
                   color = "red",
                   stroke=FALSE,
                   fillOpacity = 0.5,
                   popup = ~popup_info)