With the NASCAR Sprint Cup 2013 season in the books, I’ve gone back through all of my final race rankings from this year and evaluated how they performed against the actual race results. Here are the three areas I’ve looked at first:
1. Race Winners
In 2013, I picked only four winners out of the 30 non-plate and non-road course races. That’s a lowly 13% win rate, but the good news is that 30 races is too small a sample to be statistically meaningful, because NASCAR Sprint Cup racing features so much variance.
To increase the sample size, I’ve added the data from my 2010, 2011 and 2012 picks, which gives me 115 non-plate/road-course races. (Sharp-eyed readers will notice four years of non-plate/road-course picks should equal 120 races, not 115; it’s 115 because I didn’t handicap two races in 2010 and three in 2011.)
The results: Over 115 races, I’ve picked 23 of 115 winners. That’s a 20% win rate, which works out to 4-1 odds for break-even betting.
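The arithmetic behind that break-even figure can be sketched quickly. A bet at X-to-1 odds breaks even when the expected value of the wager is zero, which pins X to the win rate:

```python
# Sketch: converting a pick win rate into break-even betting odds.
# At X-to-1 odds, expected value per $1 bet is p*X - (1 - p).
# Setting that to zero gives X = (1 - p) / p.
def break_even_odds(win_rate: float) -> float:
    """Return the X in X-to-1 odds at which a given win rate breaks even."""
    return (1 - win_rate) / win_rate

win_rate = 23 / 115  # 23 winners picked across 115 races = 20%
print(f"{break_even_odds(win_rate):.0f}-1")  # prints "4-1"
```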
2. Top 5 Percentage
This year I participated in Fantasy Racing Cheatsheet’s (FRC) Experts Picks section. In this section, FRC lists the top five picks for each NASCAR race from six NASCAR handicapping and fantasy experts (including me) and the aggregate picks of its readers. It also lists how successful each expert was picking the top five drivers all season.
The experts’ 2013 top-five pick rates ranged from 32%–41%. I came in at 36%, right about mid-pack.
I see two interesting things in the Expert Picks end-of-year results. First, the top-five pick rates this year are, overall, about 2 percentage points (roughly 5.5% in relative terms) lower than last year’s, when they ranged from 34%–43%. This dovetails with the increased standard deviation (SD) I saw this year in my weekly rankings (see below).
Second, the readers scored well! I think this speaks to the wisdom-of-crowds notion; i.e., collective wisdom tends to beat individual opinions, even expert opinions. I plan to dive into this topic in a future post.
3. Standard Deviation
For each race this year, I ranked all 43 drivers; that is, I predicted where each driver would finish in each Sprint Cup race. I’ve since gone back and calculated the difference between each prediction and the driver’s actual race result. Example: At Homestead, I predicted Jimmie Johnson would finish 1st, but he actually finished 9th, so the difference was 8.
I’ve also calculated the overall SD for all my 2013 predictions. Calculating SD involves squaring each difference, averaging the squares, and then taking the square root of that mean (a root-mean-square calculation). The result: 10.88.
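The calculation described above can be sketched in a few lines. The data here is illustrative, not the real 2013 prediction set:

```python
import math

# Sketch of the SD calculation described above: square each difference
# between predicted and actual finishing position, average the squares,
# then take the square root of that mean (a root-mean-square error).
def prediction_sd(predicted: list[int], actual: list[int]) -> float:
    diffs = [p - a for p, a in zip(predicted, actual)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Example: Johnson at Homestead was predicted 1st but finished 9th
# (difference of 8); the other two drivers here are made up.
predicted = [1, 2, 3]
actual = [9, 2, 5]
print(round(prediction_sd(predicted, actual), 2))  # prints 4.76
```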
This number is 13.3% worse than my 2012 overall SD, which came in at 9.60. That’s quite a bummer, but I do have two theories why my accuracy fell off.
Theory 1—the Plate Races
My rankings always perform relatively poorly in the plate races, simply because those races feature so much more variance than non-plate races. But this season, the SD at the first three plate races was off-the-charts bad. Check out the 2012 plate SD compared to the 2013 plate SD:
2012 Plate Race SD
- Daytona I: 15.74
- Talladega I: 13.62
- Daytona II: 12.06
- Talladega II: 11.50
2013 Plate Race SD
- Daytona I: 17.73
- Talladega I: 17.51
- Daytona II: 15.97
- Talladega II: 12.86
Why was the plate SD so much worse this year? I theorize it’s primarily due to the return of pack racing. With the design of the new Gen6 car, drivers can no longer do 2×2, tandem racing at the plate tracks. That means they’re back to running in giant packs, and I think that’s cranked the variance back up through the roof.
Of course, I haven’t been able to do some sort of scientific study to test my theory, so it remains only that—a theory. But regardless, those inflated SD numbers this year definitely helped push my overall SD higher than last year’s.
Theory 2—the New Car
My driver handicapping formula relies heavily on past performance. In a typical race week, I weight the formula 71% toward the driver loop data, practice data and qualifying data from previous races at that week’s track and similar tracks, and only 29% toward the practice and qualifying data from the current week (here’s a discussion on why).
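As a rough illustration of that 71/29 weighting, the blend might look like the sketch below. The function and score names are hypothetical; the real formula combines loop, practice and qualifying data per driver rather than two pre-rolled scores:

```python
# A minimal sketch of the 71/29 weighting described above.
# The inputs are hypothetical composite scores, not the actual formula.
HISTORICAL_WEIGHT = 0.71  # past races at this track and similar tracks
CURRENT_WEIGHT = 0.29     # this week's practice and qualifying data

def blended_score(historical_score: float, current_score: float) -> float:
    """Blend past-performance and current-week scores for one driver."""
    return HISTORICAL_WEIGHT * historical_score + CURRENT_WEIGHT * current_score

# e.g., a driver strong historically but middling this week:
print(round(blended_score(90.0, 60.0), 1))  # prints 81.3
```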
I sometimes must reach back as much as two years when gathering past performance data, and therein lies the rub. As noted above, this year marked the debut of NASCAR’s Gen6 car, which featured quite a departure from the Gen5 car (also known as the COT car, or Car of Tomorrow). How much predictive value does two-year-old loop, practice and qualifying data for the Gen5 car offer in races featuring the Gen6 car? I don’t know for sure, but I’ll bet it’s significantly reduced.
I’ve noticed a trend that seems to support this theory. As the 2013 season progressed, my rankings’ SD mostly improved, especially at the 11 tracks the Sprint Cup series visits twice a year. My SD improved at nine of those 11 tracks in the second race, often by a great deal:
2013 Standard Deviation from Race 1 to Race 2
| Track | Race 1 SD | Race 2 SD |
| --- | --- | --- |
That makes sense to me. As the year progressed, new Gen6 data slowly populated the formula and pushed old Gen5 data out. The Gen6 data offered more predictive value, and therefore SD improved. But again, it’s just a theory.
That’s it for now. In the coming weeks, I plan to examine other interesting tidbits in my 2013 results data and blog about them. If you have anything in particular you want me to look up and post, write a request in the Comments section below.