Welcome to Evan Murray's DCI Forecast for the 2018 season! You can read more at
and check out the code
. In short, the model uses each corps' pace of improvement and current rank to simulate DCI's
Finals Week shows. You can see the results here.
tab is the model itself. You can run it for any day in the season, and
the model uses all scores up to and including that day. If you choose a day in the future, the
model will rank all the corps using its best guess for their scores on that day. Choosing a later
day in the season therefore weights the pace-of-improvement piece of the model more heavily than
the rank piece. Try it out!
tab allows you to see all the scores for a corps in more detail. If they have
enough data, you can also see the exponential fit to their data, aka their pace of improvement.
The plot includes the approximate 95% confidence interval for each caption, which gives you a sense
for the model's confidence. To see how much the uncertainty can vary, compare Open Class
error bars with Bluecoats'.
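The exponential fit and its confidence band can be sketched roughly as follows. This is a minimal illustration, not the model's actual code: it assumes a saturating form score(t) = a - b*exp(-c*t) (scores climbing toward a ceiling) and approximates the 95% band from the residual spread. The function names and the grid for c are mine, not the model's.

```python
import numpy as np

def fit_improvement_curve(days, scores):
    """Fit score(t) = a - b*exp(-c*t) by grid-searching c and solving
    a linear least squares for a and b at each candidate c."""
    days = np.asarray(days, dtype=float)
    scores = np.asarray(scores, dtype=float)
    best = None
    for c in np.linspace(0.001, 0.2, 200):
        X = np.column_stack([np.ones_like(days), -np.exp(-c * days)])
        coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
        sse = float(np.sum((X @ coef - scores) ** 2))
        if best is None or sse < best[0]:
            best = (sse, coef[0], coef[1], c)
    sse, a, b, c = best
    sigma = np.sqrt(sse / max(len(days) - 3, 1))  # residual spread
    return a, b, c, sigma

def forecast(day, a, b, c, sigma):
    """Point forecast plus a rough ~95% band (mean ± 1.96 residual sd)."""
    mean = a - b * np.exp(-c * day)
    return mean, mean - 1.96 * sigma, mean + 1.96 * sigma
```

A corps with noisier scores gets a larger sigma, and so wider error bars - which is exactly the Open Class vs. Bluecoats contrast mentioned above.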
tab compares two corps head-to-head. You can see how their scores compare now
and during Finals Week (specifically Prelims, because all corps perform that day). The comparison
will tell you which corps wins the head-to-head and compare them caption by caption. There is also an
Open Class specific comparison, which compares Open Class corps on the day of Open Class Prelims.
tab compares up to 4 corps on the odds that they succeed in something - from
making Semifinals to winning it all. This is another good way to compare corps - for example, it's
pretty interesting to watch how the odds of winning Gold have shifted for the top 3 or 4 corps as
the season has progressed.
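One common way to turn forecasts into odds like these is Monte Carlo simulation. The sketch below is an assumption about the approach, not the model's actual implementation: it draws each corps' show score from an independent normal around its forecast and counts how often each corps lands in the top k. The corps names and spreads are made up for illustration.

```python
import numpy as np

def event_odds(forecasts, sigmas, top_k=1, n_sims=20000, seed=0):
    """Estimate each corps' odds of finishing in the top_k at a show
    by simulating scores as independent normals around their forecasts."""
    rng = np.random.default_rng(seed)
    names = list(forecasts)
    means = np.array([forecasts[n] for n in names])
    sds = np.array([sigmas[n] for n in names])
    sims = rng.normal(means, sds, size=(n_sims, len(names)))
    # rank each simulated show: 0 = highest score
    ranks = (-sims).argsort(axis=1).argsort(axis=1)
    return {n: float((ranks[:, i] < top_k).mean()) for i, n in enumerate(names)}

# e.g. three hypothetical forecasts, all with the same uncertainty:
odds = event_odds({"Blue Devils": 94.0, "Santa Clara Vanguard": 93.5, "Bluecoats": 93.0},
                  {"Blue Devils": 0.8, "Santa Clara Vanguard": 0.8, "Bluecoats": 0.8})
```

With top_k=1 the odds across all corps sum to one, since exactly one corps wins each simulated show.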
If there's anything else you'd like to see, let me know! The forecast will be updated every day or
two for the rest of the season, so check back in from time to time.
For a corps to be included in the model, they need to have performed at least 6 times and the model
needs to be able to fit an exponential to their scores for each caption. Corps will be added to the
forecast as soon as they meet these two conditions, but in the meantime, the model proceeds as though the
corps don't exist.
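The inclusion rule above amounts to a simple check per corps. A minimal sketch, assuming the captions are General Effect, Visual, and Music (the caption names and function signature are mine):

```python
CAPTIONS = ("General Effect", "Visual", "Music")  # assumed caption names

def eligible(num_performances, caption_fits):
    """A corps enters the forecast once it has at least 6 scored
    performances and an exponential fit exists for every caption.
    caption_fits maps caption name -> fit parameters, or None if
    the fit failed."""
    return (num_performances >= 6
            and all(caption_fits.get(c) is not None for c in CAPTIONS))
```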
Some early-mid season shows have incomplete judging panels. When that happens, sometimes one of the
Visual or Music captions is worth 40 points and the other 20, as opposed to DCI's standard 40-30-30.
In those cases, the model adjusts the scores to the standard 40-30-30 to keep things consistent.
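The adjustment is a linear rescale of each caption to its standard weight. A small sketch under that assumption (caption names and the helper are mine, not the model's):

```python
STANDARD = {"General Effect": 40, "Visual": 30, "Music": 30}  # DCI's 40-30-30

def rescale(raw_scores, panel_weights):
    """Linearly rescale each caption from the panel's weighting
    (e.g. a 40-40-20 show) back to the standard 40-30-30."""
    return {cap: raw_scores[cap] * STANDARD[cap] / panel_weights[cap]
            for cap in raw_scores}
```

For example, a Visual caption scored 32.0 out of 40 at an incomplete-panel show rescales to 24.0 out of the standard 30.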
However, it's entirely possible there was an error in the data collection. If you think
you've found an incorrect score, send me an email at firstname.lastname@example.org.
Open Class scores tend to get inflated once their tour breaks away from World Class in late July, which
means the model will likely overrate Open Class corps somewhat in early August and going into Finals Week.
Early in the season, World Class scores on the East Coast were also inflated, and that effect could remain
until early August. Until then, Bluecoats, Carolina Crown, Boston Crusaders, and Spirit of Atlanta may be
overrated in the model compared to their peers.
The model uses two things to forecast Finals Week - each corps' pace of improvement and their current
rank. Choosing the day on the
tab sets the day the corps are ranked - the
model uses its best guess for scores on that day. Choosing a day in the future makes pace of
improvement more important than current scores. The farther you go into the future, the less significant current
rankings are. If you choose a day in the past, the model ignores all scores that came after that day, so
you can see what the model thought a week ago (for example) versus today.
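One way to picture the future-day behavior is a blend whose weight shifts toward the exponential fit as the horizon grows. This is purely illustrative - the blending form and the `scale` constant are hypothetical, not the model's actual mechanics:

```python
import math

def blended_score(current_score, fit_extrapolation, days_ahead, scale=10.0):
    """Blend today's score with the exponential fit's extrapolation.
    The weight on the fit grows with the forecast horizon, so current
    rank matters less the farther into the future you look.
    `scale` is a hypothetical tuning constant."""
    w = 1.0 - math.exp(-max(days_ahead, 0) / scale)
    return (1.0 - w) * current_score + w * fit_extrapolation
```

At days_ahead=0 this returns the current score exactly; as days_ahead grows, the result approaches the fit's extrapolation.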