Analysis – Stocky vs Reality: Did your team outperform? (Pt I)

The Stocky is the main forecasting tool driving the analysis on this site. It’s a Monte Carlo simulator of the season ahead, built on Elo ratings, that gives insight into the future performance of each club. My main interest has been the number of wins, as wins determine ladder positions, which in turn have a big impact on the finals. The Stocky might not be able to tell you which games a team will win, but it is good at telling you how many wins are ahead.
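The guts of the approach can be sketched in a few lines. This isn’t the Stocky itself, just a minimal Python illustration with invented ratings and fixtures: win probabilities come from the standard Elo formula, and the projection is the average number of wins over many simulated seasons.

```python
import random

def win_prob(rating_a, rating_b):
    """Standard Elo expected score for the first team."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def simulate_season(ratings, fixtures, n_sims=10_000, seed=1):
    """Monte Carlo projection: average wins per team over many simulated seasons."""
    rng = random.Random(seed)
    totals = {team: 0 for team in ratings}
    for _ in range(n_sims):
        for home, away in fixtures:
            if rng.random() < win_prob(ratings[home], ratings[away]):
                totals[home] += 1
            else:
                totals[away] += 1
    return {team: wins / n_sims for team, wins in totals.items()}

# Invented ratings and a tiny fixture list, purely for illustration
ratings = {"Storm": 1600, "Cowboys": 1550, "Knights": 1400}
fixtures = [("Storm", "Cowboys"), ("Cowboys", "Knights"), ("Knights", "Storm")]
projected = simulate_season(ratings, fixtures)
```

A fuller simulator would also update each team’s rating after every simulated result and allow for home ground advantage; this sketch holds ratings fixed for brevity.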

But how does a computer simulation (in reality, a very large spreadsheet) compare to reality? To test it, I’ve put together a graph of each team’s performance against what the Stocky projected for them. Each graph shows:

  • The Stocky’s projection for total wins (blue)
  • That projection converted to a “pace” for that point in the season (red)
  • The actual number of wins (yellow)

It will never be exactly right, particularly as you can only ever win whole numbers of games and the Stocky loves a decimal point, but as we’ll see, the Stocky is not too bad at tracking form and projecting that forward.
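For what it’s worth, the “pace” conversion is nothing fancy. My assumption, which may not be exactly how the Stocky does it, is a straight-line pro-rating of the projected total by games played:

```python
def pace(projected_total_wins, rounds_played, total_rounds=24):
    """Pro-rate a full-season win projection to a given point in the season.

    A straight-line share of the projected total -- my guess at the
    conversion, not necessarily the Stocky's exact method.  The default
    of 24 games per team is also an assumption.
    """
    return projected_total_wins * rounds_played / total_rounds
```

So a 14.4-win projection at the halfway mark of a 24-game season implies a pace of 7.2 wins, decimal point and all.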

This week is Part I, from Brisbane to Newcastle. Part II, from North Queensland to Wests Tigers, will be next week.

Continue reading “Analysis – Stocky vs Reality: Did your team outperform? (Pt I)”

Analysis – The more competitive the season, the more bums on seats

Most rugby league commentators wouldn’t know what a linear regression is or how to do one. I’m no different, but I do like to compare two variables and see if they’re correlated. A scatter plot with a linear trendline and an R-squared – remember, R-squared runs from 0 (no correlation) to 1 (perfect correlation); I usually need at least 0.2 to raise an eyebrow – is all I need to keep me entertained for hours on end.
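If you want to play along at home, the R-squared of a simple linear fit is just the squared Pearson correlation, which you can compute with nothing but the standard library. A sketch (any stats package will do this for you):

```python
def r_squared(xs, ys):
    """R-squared of a simple linear fit: the squared Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov ** 2 / (var_x * var_y)
```

Perfectly linear data scores 1 regardless of slope direction; a cloud with no trend sits near 0.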

Last week, we looked at the concept of competitiveness and how to measure it. This week, I want to see whether competitiveness impacts other aspects of the game. Using my preferred ratings gap as a proxy for how competitive a season is, this post looks at a few variables to see if they’re correlated.

If you want a specific variable looked at, give me a yell.


[Chart: draws vs gap]

Surprisingly, there’s no link between the number of draws and how competitive the season is – basically no correlation, with an R-squared of 0.03. I suspect draws are more about the specific teams in question, and golden point may play a role, but overall season competitiveness doesn’t matter.

As an aside, it appears that the number of draws is increasing over time. I’ve looked into it and, basically, that’s not a thing. Yet. 2016 was a particularly bad year, and if 2017 continues the trend, then we may be able to establish that something is happening, but we’ll have to wait and see.

Points scored

[Chart: points vs gap]

I had to adjust for points scored per regular season game across the season, but there’s no correlation between competitiveness and points scored. They’re independent of each other.


[Chart: attendance vs gap]

Look, I know I’ve gone on about attendance this year, but this is interesting. Promise. The more competitive the season, the more people turn up. This relationship holds even if you remove an outlier year like 1998, which had really poor attendance due to an excess of games and the immediate aftermath of the Super League shemozzle.

It folds in nicely with some work done by Tony Corke for the AFL. People prefer to attend games where there’s a bit of uncertainty. If a lopsided flogging is expected, people stay home; when there’s a chance of it going either way – that is, when it’s competitive – they turn up.

If the NRL wants more bums on seats, all it has to do is tighten up the competition. Introducing a spending cap on performance would go a long way to evening it up.

Analysis – Is the NRL getting more competitive?

The short answer is yes and no. Yes, the NRL is more competitive now than when it started but no, it doesn’t seem to be a thing that improves consistently year-on-year.

Let me explain. On this blog, we use Elo ratings to measure teams’ performances and assess each team’s probability of winning a game in advance. Surely we can use our ratings system to assess the competitiveness of each NRL season.

Philosophically, what is a high level of competitiveness? It has to be a situation where the teams are fairly close in performance, resulting in hard-to-predict outcomes. Here are two ways of measuring that closeness of performance, each with pros and cons.

  1. You could look at the spread of teams’ ratings. Pro – makes an assessment based on all teams. Con – if most teams are pretty average but one is excellent and another awful, there isn’t a big spread of talent. This would imply a highly competitive season even though there is really only one potential premiership contender.
  2. You could look at the difference between the top and bottom ratings. Pro – simple. Con – if one team is truly terrible and three or four are pretty good, then the season is pretty competitive, but the difference between top and bottom is exaggerated by the crappiness of the bottom team. This would imply a not particularly competitive season despite there being multiple potential champions.

Which is better: measuring the spread, or measuring the difference from top to bottom? Neither is obviously the better method, so let’s look in more detail.
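To make the two options concrete, here’s a quick sketch with invented ratings. Nothing below is a real season; the numbers are chosen purely so the two measures disagree with each other.

```python
from statistics import pstdev

def rating_spread(ratings):
    """Option 1: standard deviation of all team ratings."""
    return pstdev(ratings)

def top_bottom_gap(ratings):
    """Option 2: difference between the best and worst rated teams."""
    return max(ratings) - min(ratings)

# Hypothetical league A: one star, one dud, everyone else average
league_a = [1700, 1500, 1500, 1500, 1500, 1500, 1500, 1500, 1500, 1300]
# Hypothetical league B: teams evenly spaced from strong to weak
league_b = [1650, 1617, 1583, 1550, 1517, 1483, 1450, 1417, 1383, 1350]
```

The spread rates league B as less competitive than league A, even though league A has only one genuine contender; the top-to-bottom gap says the opposite. Hence the dilemma.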

Continue reading “Analysis – Is the NRL getting more competitive?”

Analysis – Your team and their finals appearances

There was a remark on Twitter a while back that Mitchell Moses had left Wests because he wanted to play finals footy, but had chosen the only club that had gone without a finals appearance longer than the Tigers. That didn’t seem right, but I looked into it and it was true.

That got me thinking. How often do teams turn up to the finals? Some, like the Storm and Cowboys, seem to be regular fixtures but how do the rest fare?

Warning: this is going to be one of those “Well, yeah, I knew that” type posts. This is not about showing off some fancy analysis, just a bit of curiosity.

Double warning: Pie charts ahead.

Continue reading “Analysis – Your team and their finals appearances”

Analysis – Golden Point is (close to) a coin toss

In 2003, the Golden Point system was introduced to decide games that were drawn at full time. Prior to that, instead of the two points awarded to the outright winner, the drawn teams would split the points and take home one competition point each.

Golden Point continues play for a further ten minutes after full time, in two five-minute halves, until a team scores at least one point and wins the game. A field goal is the most common way these games are decided. If no points are scored in the ten minutes, the game remains drawn and the teams share the competition points.

I don’t love it and I think it’s a crapshoot.

(Crap-shoot or craps-hoot? One sounds more fun than the other)

Look! Here’s some evidence to suggest it is a craps-hoot.
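As a taste of the sort of evidence involved: if golden point really is a coin toss, then (for example) the pre-game favourite should win extra time about half the time, and a simple binomial test tells you how surprising any split is. The counts below are hypothetical, purely to show the method; the real numbers are in the full post.

```python
from math import comb

def binom_cdf(k, n, p=0.5):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

# Hypothetical: suppose favourites won 33 of 60 golden point games.
# Under a fair coin, how likely is a split at least that lopsided?
n, wins = 60, 33
p_value = 2 * (1 - binom_cdf(wins - 1, n))  # two-sided, doubling the upper tail
```

With these made-up numbers the p-value lands around 0.5 – nowhere near low enough to reject the coin toss.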

Continue reading “Analysis – Golden Point is (close to) a coin toss”

Opinion – Sydney, relocations are better than mergers

Before Origin ruined everything, I was looking at expansion opportunities using Cap (parts 1, 2, 3, & 4), a metric that measures the ratio of a city’s population to the number of people who turn up to games. Using a bit of data from the AFL and NRL, opportunities emerged in Perth and Brisbane, with some less likely chances in Adelaide and Central Queensland.

Until now, I’ve ignored the obvious elephant in the room, although the media recently had a field day with it (no links because it was almost all garbage). Sydney has 8.5 clubs: Easts, Souths, Manly, Cronulla, Penrith, Parramatta, Canterbury, Wests and half of St George Illawarra. The long run Cap in Sydney from 1988 to 2011 was 3.5, which is great, but given these clubs regularly lose money, something needs to change. The obvious, but not the only, solution would be to reduce the number of clubs, so there are fewer teams chasing the same pool of dollars.

There are four options for pursuing that goal:

Continue reading “Opinion – Sydney, relocations are better than mergers”

Opinion – More Cap test cases for NRL expansion

Last time, we went off-piste and moved from rigorous analysis to speculative analysis. Let’s go further off-piste and apply Cap in ways it was never meant to be applied (that’s why we’ve switched from “Analysis” to “Opinion”; even I have my limits).


Because of the capacity constraints discussed a few weeks ago, it’s a bit hard to judge just how popular NRL expansion would be in Brisbane. Using some Cap metrics, we can estimate what an “optimal” number of clubs might look like.

If we only care about maintaining a 15,000 average attendance, then:

  • 6 teams would give a Cap of 2.2
  • 5 teams = 2.6
  • 4 teams = 3.3
  • 3 teams = 4.4
  • 2 teams = 6.5

On the other hand, if we use the relationship between teams per capita and Cap developed a few weeks ago:

Continue reading “Opinion – More Cap test cases for NRL expansion”