
Building A Table Tennis Ranking Model

by Brian Keng | July 4, 2017

At Rubikloud, our wonderful Operations team regularly plans fun activities that the whole company can participate in, such as movie nights, ceramic painting, and curling, to name a few. However, my favourite activity by far is visible as soon as you get off the elevator.

Championship match at our annual ping pong tournament

Many of our Rubikrew are big fans of table tennis; in fact, we’ve held an annual table tennis tournament for all employees for three years running (and I’m the reigning champion). It’s an incredibly fun event where everyone in the company gets involved, from the tournament participants to the spectators who provide lively play-by-play commentary.

Unfortunately, not everyone gets to participate, either because of travel and scheduling conflicts or because they miss the tournament period entirely, as is the case for our interns and co-op students. Another downside is that the event is a single-elimination tournament: while it produces a clear winner, it doesn’t produce a clear ranking of the participants.

Being a data scientist, I identified this as a thorny issue for our Rubikrew table tennis players. So, I did what any data scientist would do and I built a model.

Note: If you’re not too interested in the math, just skip over the next section to see how we implemented it.

Bradley-Terry Model

The Bradley-Terry model [1] is a popular statistical model for predicting the outcome of anything that involves paired comparisons. For example, a variation of this model underlies the popular Elo chess rating system.

In the case of our table tennis ranking, we wanted to build a model that could give the probability that player \(i\) beats player \(j\) in a single game\(^1\), which we denote by \(P(i > j)\). If each player records their wins and losses against every other player, the game records from each pair of players serve as the observations to which we can fit a model.

In more detail, the Bradley-Terry model\(^2\) defines a parameter \(p_i\) for each player and models the probability of player \(i\) winning a game against player \(j\) as:

P(i > j) = \frac{p_i}{p_i + p_j} \tag{1}

If we can estimate each \(p_i\), then we immediately have a ranking system. Moreover, the ratio of \(p_i\) to \(p_j\) defines the odds of \(i\) beating \(j\) in a single game.
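As a concrete sketch, here is Equation 1 in Python; the player names and ratings below are made-up numbers for illustration:

```python
# Hypothetical ratings; the p_i values here are assumptions for illustration.
ratings = {"alice": 4.0, "bob": 1.0}

def win_prob(p_i, p_j):
    """Equation 1: probability that the player rated p_i beats the player rated p_j."""
    return p_i / (p_i + p_j)

print(win_prob(ratings["alice"], ratings["bob"]))  # 0.8
print(ratings["alice"] / ratings["bob"])           # odds of alice beating bob: 4.0
```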

Fitting the Bradley-Terry Model via the MM Algorithm

We can fit this model by defining its likelihood function and finding the \(p_i\) parameters that maximize it. If we have observed that player \(i\) beat player \(j\) \(w_{i,j}\) times, the probability of this event is:

\big[\frac{p_i}{p_i + p_j}\big]^{w_{i,j}} \tag{2}

The likelihood is the product of this expression over all pairs of players, and taking the log gives the log-likelihood:

L({\bf p}) = P({\bf W} | {\bf p}) = \prod_{i=1}^{N} \prod_{j=1}^{N} \big[\frac{p_i}{p_i + p_j}\big]^{w_{i,j}} \\
\mathcal{L}({\bf p}) = \log L({\bf p}) = \sum_{i=1}^{N} \sum_{j=1}^{N} w_{i,j}\log(p_i) - w_{i,j}\log(p_i+p_j) \tag{3}

where \(W\) is our observations of the pair-wise wins, \(N\) is the total number of players, \(w_{i,i}\) is defined to be \(0\) and \(\mathcal{L}\) is the log-likelihood function.
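The log-likelihood in Equation 3 is straightforward to compute directly; here is a minimal sketch, with a made-up win matrix:

```python
import math

# Toy win matrix: w[i][j] = number of games player i won against player j.
# The counts are made up for illustration; w[i][i] = 0 by definition.
w = [
    [0, 3, 1],
    [1, 0, 2],
    [2, 1, 0],
]

def log_likelihood(p, w):
    """Equation 3: log-likelihood of ratings p given the win counts w."""
    n = len(p)
    return sum(
        w[i][j] * (math.log(p[i]) - math.log(p[i] + p[j]))
        for i in range(n) for j in range(n) if w[i][j] > 0
    )

# With all ratings equal, every game is a coin flip, so the log-likelihood
# is (total number of games) * log(1/2).
print(log_likelihood([1.0, 1.0, 1.0], w))
```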

Unfortunately, optimizing this function directly gets a bit messy. However, there is a clever little trick: we can iteratively optimize a surrogate objective function that converges to the same optimum as Equation 3. Notice that for positive \(x\) and \(y\):

-\log(x) \geq 1 – \log(y) – (x / y) \tag{4}

with equality if and only if \(x=y\). Applying Equation 4 with \(x = p_i + p_j\) and \(y = p_i^{(k)} + p_j^{(k)}\), where \(p_i^{(k)}, p_j^{(k)}\) are our estimates from the previous iteration, we can replace each \(-\log(p_i + p_j)\) term with the right-hand side of Equation 4 to define \(Q_k({\bf p})\), an iterative lower bound on the log-likelihood:

Q_k({\bf p}) = \sum_{i=1}^{N} \sum_{j=1}^{N} w_{i,j}\big[\log(p_i) - \frac{p_i + p_j}{p_i^{(k)}+p_j^{(k)}} - \log(p_i^{(k)}+p_j^{(k)}) + 1 \big] \leq \mathcal{L}({\bf p}) \tag{5}

A function that satisfies Equation 5, with equality at \({\bf p} = {\bf p^{(k)}}\), is said to minorize \(\mathcal{L}\) [2]. Minorizing functions have the following property:

Q_k({\bf p}) \geq Q_k({\bf p^{(k)}}) \text{ implies } \mathcal{L}({\bf p}) \geq \mathcal{L}({\bf p^{(k)}}) \tag{6}

This condition suggests an iterative algorithm to compute \({\bf p}\):

  1. Select some arbitrary initial value for \({\bf p^{(0)}}\)
  2. Normalize \({\bf p^{(k)}}\) such that \(\sum_{i=1}^{N} p_i^{(k)} = 1\)
  3. Maximize \({\bf p^{(k+1)}} = \underset{{\bf p}}{\mathrm{argmax}} \text{ } Q_k({\bf p})\)
  4. Repeat steps 2 and 3 until convergence

In fact, this is an instance of a more general class of algorithms called minorization-maximization, or MM, algorithms.

Fortunately, Equation 5 is much easier to maximize because \(p_i + p_j\) no longer appears inside a \(\log\). Taking the partial derivative with respect to each \(p_i\) and setting it to zero yields a closed-form update that maximizes Equation 5:

p_i^{(k+1)} = W_i\big(\sum_{j\neq i} \frac{N_{i,j}}{p_i^{(k)} + p_j^{(k)}}\big)^{-1} \tag{7}

where \(W_i\) is the total number of wins for player \(i\) and \(N_{i,j}\) is the total number of games played between player \(i\) and player \(j\).
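Putting steps 1 to 4 together with the update in Equation 7 gives a compact fitting routine. A sketch (the win matrix in the example is made up, and the code assumes every player has at least one win and one game against each opponent it is compared with; the dummy-player prior described later in the post guarantees this):

```python
def mm_update(p, w):
    """One MM step: the update of Equation 7, followed by normalization (step 2)."""
    n = len(p)
    new_p = [
        sum(w[i]) /                                  # W_i: total wins for player i
        sum((w[i][j] + w[j][i]) / (p[i] + p[j])      # N_ij / (p_i^(k) + p_j^(k))
            for j in range(n) if j != i)
        for i in range(n)
    ]
    total = sum(new_p)
    return [x / total for x in new_p]

def fit(w, iters=200):
    """Iterate the MM update from a uniform starting point."""
    p = [1.0 / len(w)] * len(w)
    for _ in range(iters):
        p = mm_update(p, w)
    return p

# Made-up example: player 0 beat player 1 three times and lost once.
# The two-player MLE matches the empirical win rate, so this converges
# to ratings of [0.75, 0.25].
print(fit([[0, 3], [1, 0]]))
```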

As it turns out, the log-likelihood is concave (under the reparameterization in footnote 2), so the MM algorithm will eventually converge to the globally optimal solution (when it exists). I would advise you to check out [2] if you’re interested in the details.

Adding a Prior to the Bradley-Terry Model

One of the drawbacks of the vanilla Bradley-Terry model is that it is ill-defined under some common conditions. Namely, for the estimates to be well-defined, there must be a chain of wins from every player \(i\) to every other player \(j\). This condition is especially problematic when starting out, when there simply aren’t enough games for everyone to be linked.

Furthermore, if any player has won all of their games, or lost all of them, the vanilla Bradley-Terry model falls apart: that player’s estimate runs off to infinity or zero. As usual in these situations, we can handle the ill-defined cases by adding a Bayesian prior (or, if you like, regularization) [3].

Formulating this as a Bayesian problem, we can write the posterior distribution for \({\bf p}\):

f({\bf p} | {\bf W}) = \frac{P({\bf W} | {\bf p}) f({\bf p})}{P({\bf W})} \tag{8}

where we write \(f\) because \({\bf p}\) is continuous. Here’s where it can get messy if we choose a difficult-to-use prior. Fortunately, we can sidestep this issue by using a prior on \(p_i\) that centers it on the probability of winning against an “average” player.

That is, if we hypothetically introduce a dummy player that has exactly \(C\) wins and \(C\) losses against each player, we get a prior that pulls the estimate of each \(p_i\) towards the “average” player. This is also very simple to implement: just create a dummy player and set its win/loss record to 1-1 against every player. This is the approach I used in our model to handle the ill-defined cases.
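The dummy-player prior amounts to padding the win matrix before fitting. A minimal sketch, where the matrix layout and counts are illustrative assumptions:

```python
def add_dummy_player(w, c=1):
    """Append a dummy 'average' player with a c-c record against every real player.
    This regularizes the fit: every player now has at least c wins and c losses,
    and every pair of players is linked through the dummy."""
    n = len(w)
    augmented = [row[:] + [c] for row in w]  # each real player beats the dummy c times
    augmented.append([c] * n + [0])          # the dummy beats each real player c times
    return augmented

# Player 1 has never won a game, which breaks the vanilla model; with the
# dummy player added, the estimates are well-defined again.
w = [[0, 2], [0, 0]]
print(add_dummy_player(w))  # [[0, 2, 1], [0, 0, 1], [1, 1, 0]]
```

After fitting on the augmented matrix, the dummy player’s rating is simply discarded.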

The Official Rubikloud Table Tennis Rankings

While it’s fun to work out the math, it’s even more fun to implement it. First, I set up a Google spreadsheet to record our game history and then shared it with the entire company:

Game Data Spreadsheet

Each person was responsible for putting their game records in the sheet. We used a very simple record format: date, player A, player B, wins A, wins B. Next, I wrote a little script in Python using gspread and the Google Drive API that reads in the game records, computes the rankings as described above (normalizing them to between 1 and 1000), and writes them out to another tab in the spreadsheet. The code is up on my GitHub account.
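The post doesn’t spell out the exact rescaling used for display; a linear min-max rescale to the 1-1000 range is one plausible sketch:

```python
def scale_ratings(p, lo=1.0, hi=1000.0):
    """Linearly rescale fitted ratings to the [lo, hi] range for display.
    (An assumed scaling; the original script's exact formula isn't shown.)"""
    p_min, p_max = min(p), max(p)
    if p_max == p_min:
        return [hi] * len(p)  # everyone tied
    return [lo + (x - p_min) * (hi - lo) / (p_max - p_min) for x in p]

print(scale_ratings([0.1, 0.3, 0.6]))  # lowest rating maps to 1, highest to 1000
```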

You can see it in action here:

Official Rubikloud Table Tennis Ranking

The script runs on a cron job hourly so that players can get the most up-to-date results. So far, the reception has been positive (with respect to table tennis, maybe not productivity :P), and our Rubikrew players are now playing a wider variety of opponents more often in order to move up the rankings.

Rubikloud is a great place to work: we tackle interesting problems, from recommender systems to table tennis rankings. If you enjoy table tennis and get excited about building probabilistic models, Rubikloud is always looking for ambitious and curious data scientists.


[1] Bradley-Terry Model (Wikipedia)
[2] “MM Algorithms For Generalized Bradley–Terry Models”, David R. Hunter, The Annals of Statistics 2004, Vol. 32, No. 1, 384–406
[3] “Using Bayesian statistics to rank sports teams”, John T. Whelan, [presentation]


  1. We used games instead of matches because we don’t have a standardized match format.
  2. Some texts reparameterize Equation 1 by setting \(p_i = e^{\beta_i}\), which conveniently transforms it into a logistic regression: \(\text{logit}(P(i>j)) = \beta_i - \beta_j\).