By how much will Clinton win?

American politics is great for statistics: a huge number of polls are conducted every week, some positions are up for re-election every other year, and there are really only two parties. Moreover, the complicated nature of the whole election process (which, for the presidential election, involves the electoral college) makes it more interesting than elections in most other democracies. It's for all these reasons that an incredible website like FiveThirtyEight is possible.

Read more

Oscar 2016 predictions

For the past three years, I have tried to predict the winners in all categories at the Academy Awards. But last year, I was able to combine my passion for both movies and statistics: as part of my Data Analysis course at McGill University, we had to come up with a prediction model for four categories: Best Picture, Best Director, Best Actor, and Best Actress. My model performed quite well: it was the only one to correctly predict all four winners.

This year, I decided to repeat the experiment, especially since the Best Picture category is more competitive than it was last year. I have shared my predictions below for all categories; however, I used a statistical model only for the four categories mentioned above. All other categories are based on my own judgement (and the readings I have done). My predictions are in bold font.

After the Academy Awards, I will update this post and point out the winners (I will indicate them in italics). I may also write a post on my prediction model.

Update (2016/02/28): Well, I didn’t do as well as I would have liked: 14/24.

Read more

Makefile and Beamer presentations

I had been wondering about Makefiles for some time, and I recently got around to learning them so that I could use make to regenerate all the different versions of a manuscript I'm working on. I thought I would take the opportunity to explain how they can be useful for Beamer presentations.
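As a taste of what this looks like, here is a minimal sketch of a Makefile for a Beamer presentation; the file names are hypothetical, so adjust them to your own project:

```make
# Rebuild the PDF whenever the .tex source changes.
slides.pdf: slides.tex
	pdflatex slides.tex
	pdflatex slides.tex   # second pass resolves cross-references

.PHONY: clean
clean:
	rm -f slides.pdf *.aux *.log *.nav *.out *.snm *.toc
```

Running `make` then recompiles the slides only when `slides.tex` is newer than `slides.pdf`, and `make clean` removes the generated files.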

Read more

Tutorial: Optimising R code

The R language is very good for statistical computing, thanks to its strong functional capabilities, its open-source philosophy, and its extensive package ecosystem. However, it can also be quite slow, because of some of its design choices (e.g. lazy evaluation and extreme dynamic typing).

This tutorial is mainly based on Hadley Wickham's book Advanced R.

Before optimising…

First of all, before we start optimising our R code, we need to ask ourselves a few questions:

  1. Is my code doing what I want it to do?

  2. Do I really need to make my code faster?

  3. Is a considerable speed-up even possible?
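To answer the last two questions, measure before optimising. A minimal sketch using base R's `system.time` and `Rprof` (the `slow_sum` function here is purely illustrative):

```r
# Illustrative example: a hand-written loop vs. a vectorised built-in.
slow_sum <- function(x) {
  total <- 0
  for (xi in x) total <- total + xi
  total
}

x <- runif(1e6)

system.time(slow_sum(x))  # loop version
system.time(sum(x))       # vectorised built-in, much faster

# For a function-by-function picture, profile with Rprof:
Rprof("profile.out")
invisible(replicate(20, slow_sum(x)))
Rprof(NULL)
summaryRprof("profile.out")$by.self
```

If the profile shows that most time is spent in code you cannot change (e.g. a compiled library call), a considerable speed-up may simply not be possible, and you can stop there.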

Read more

Test case: Optimising PCEV

I will give an example of code optimisation in R, using Noam Ross's proftable function and Luke Tierney's proftools package, both of which I discuss in my tutorial on optimisation. The code we will optimise comes from the main function of our PCEV package. A few months ago, while testing the method through simulations, I had to speed up my code because it was way too slow; the result of this optimisation is given below.
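For readers unfamiliar with proftools, the basic profiling workflow looks roughly like this; `pcev_fit` is a placeholder standing in for the package's main function, not its real name:

```r
# Profile a call to the (placeholder) main function and inspect hot spots.
library(proftools)  # Luke Tierney's profiling tools

Rprof("pcev.out", line.profiling = TRUE)
result <- pcev_fit(Y, X)   # placeholder for the package's main function
Rprof(NULL)

pd <- readProfileData("pcev.out")
flatProfile(pd)   # time spent in each function, flattened
hotPaths(pd)      # the most expensive call paths
```

The output of `flatProfile` and `hotPaths` is what tells us which parts of the function are worth rewriting.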

For background, recall that PCEV is a dimension-reduction technique, akin to PCA, but where the components are obtained by maximising the proportion of variance explained by a set of covariates. For more information, see this blog post.

Read more