With the NASCAR Sprint Cup 2013 season in the books, I’ve gone through all my final race rankings from this year and evaluated how they compared with the actual race results. Here are the three areas I’ve looked at first:

**1. Race Winners**

In 2013, I picked only four winners out of the 30 non-plate, non-road-course races. That’s a lowly 13% win rate, but the good news is that 30 races is too small a sample to be statistically meaningful, because NASCAR Sprint Cup racing features so much variance.

To increase the sample size, I’ve added the data from my 2010, 2011 and 2012 picks, which gives me 115 non-plate/road-course races. (Sharp-eyed readers will notice four years of non-plate/road-course picks should equal 120 races, not 115; it’s 115 because I didn’t handicap two races in 2010 and three in 2011.)

*The results:* Over those 115 races, I’ve picked 23 winners. That’s a 20% win rate, which works out to break-even betting odds of 4-1.
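For readers who want to check the odds arithmetic: at win probability p, the break-even payout is (1 − p)/p to 1. A minimal sketch (the function name is mine, just for illustration):

```python
def break_even_odds(wins: int, races: int) -> float:
    """Return the X in 'X-to-1' odds at which a bettor breaks even,
    given a historical win rate of wins/races."""
    p = wins / races
    return (1 - p) / p

# 23 winners in 115 races is a 20% win rate, i.e. roughly 4-1 odds
print(round(break_even_odds(23, 115), 2))
```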

**2. Top 5 Percentage**

This year I participated in Fantasy Racing Cheatsheet’s (FRC) Experts Picks section, where FRC lists the top five picks for each NASCAR race from six NASCAR handicapping and fantasy experts (including me), along with the aggregate picks of its readers. It also tracks how successfully each expert picked the top five drivers all season.

The experts’ 2013 top-five pick rates ranged from 32%–41%. I came in at 36%, right about mid-pack.

I see two interesting things in the Expert Picks end-of-year results. First, the top-five pick rates this year are, overall, roughly 5.5% lower than last year’s, when they ranged from 34%–43%. This dovetails with the increased standard deviation (SD) I saw this year in my weekly rankings (see below).

Second, the readers scored well! I think this speaks to the wisdom-of-crowds notion; i.e., collective wisdom tends to beat individual opinions, even expert opinions. I plan to dive into this topic in a future post.

**3. Standard Deviation**

For each race this year, I ranked all 43 drivers; that is, I predicted where each driver would finish in each Sprint Cup race. I’ve since gone back and calculated the difference between each prediction and the driver’s actual race result. *Example:* At Homestead, I predicted Jimmie Johnson would finish 1st, but he actually finished 9th, so the difference was 8.

I’ve also calculated the overall SD for all my 2013 predictions. Calculating SD involves squaring each difference, averaging the squares, and then taking the square root of that mean. *The result:* 10.88.
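That calculation is a root-mean-square of the prediction errors. A minimal sketch in Python, using made-up finishing positions rather than real 2013 data:

```python
import math

def prediction_sd(predicted, actual):
    """Root of the mean squared difference between predicted and
    actual finishing positions (the 'SD' metric used in this post)."""
    diffs = [p - a for p, a in zip(predicted, actual)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Toy example with three drivers (hypothetical numbers):
# errors are -8, 0, -4, so SD = sqrt((64 + 0 + 16) / 3)
print(round(prediction_sd([1, 2, 3], [9, 2, 7]), 2))
```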

This number is 13.3% worse than my 2012 overall SD, which came in at 9.60. That’s quite a bummer, but I do have two theories why my accuracy fell off.

**Theory 1—the Plate Races**

My rankings always perform relatively poorly in the plate races, simply because those races feature so much more variance than non-plate races. But this season, the SD at the first three plate races was off-the-charts bad. Check out the 2012 plate SD compared to the 2013 plate SD:

*2012 Plate Race SD*

- Daytona I: 15.74
- Talladega I: 13.62
- Daytona II: 12.06
- Talladega II: 11.5

*2013 Plate Race SD*

- Daytona I: 17.73
- Talladega I: 17.51
- Daytona II: 15.97
- Talladega II: 12.86

Why was the plate SD so much worse this year? I theorize it’s primarily due to the return of pack racing. With the new Gen6 car’s design, drivers can no longer run 2×2 tandem style at the plate tracks. That means they’re back to running in giant packs, and I think that’s cranked the variance back up through the roof.

Of course, I haven’t been able to do some sort of scientific study to test my theory, so it remains only that—a theory. But regardless, those inflated SD numbers this year definitely helped push my overall SD higher than last year’s.

**Theory 2—the New Car**

My driver handicapping formula relies heavily on past performance. In a typical race week, I weight the formula 71% toward the driver loop data, practice data and qualifying data from previous races at that week’s track and similar tracks, and only 29% toward the practice and qualifying data from the current week (here’s a discussion on why).
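As a rough sketch of that 71/29 weighting (the rating inputs, their scale and the function name are hypothetical; the actual formula isn’t published here):

```python
# Weights described in the post: 71% historical data, 29% current week.
HISTORY_WEIGHT = 0.71   # loop, practice, qualifying data from past races
CURRENT_WEIGHT = 0.29   # this week's practice and qualifying data

def driver_score(history_rating: float, current_rating: float) -> float:
    """Blend a driver's historical rating with the current week's rating.
    Both inputs are assumed to be on the same arbitrary 0-100 scale."""
    return HISTORY_WEIGHT * history_rating + CURRENT_WEIGHT * current_rating

# A driver who rates 90 on history but only 80 this week:
print(round(driver_score(90.0, 80.0), 1))
```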

I sometimes must reach back as much as two years when gathering past performance data, and therein lies the rub. As noted above, this year marked the debut of NASCAR’s Gen6 car, which featured quite a departure from the Gen5 car (also known as the COT car, or Car of Tomorrow). How much predictive value does two-year-old loop, practice and qualifying data for the Gen5 car offer in races featuring the Gen6 car? I don’t know for sure, but I’ll bet it’s significantly reduced.

I’ve noticed a trend that seems to support this theory. As the 2013 season progressed, my rankings’ SD mostly improved, especially at the 11 tracks the Sprint Cup series visits twice a year. My SD improved at nine of those 11 tracks in the second race, often by a great deal:

**2013 Standard Deviation from Race 1 to Race 2**

| Track | Race 1 SD | Race 2 SD |
| --- | --- | --- |
| Phoenix | 9.98 | 7.19 |
| Bristol | 11.36 | 13.44 |
| Martinsville | 9.17 | 8.81 |
| Texas | 9.61 | 9.51 |
| Kansas | 10.24 | 9.60 |
| Richmond | 11.47 | 10.10 |
| Charlotte | 13.25 | 6.26 |
| Dover | 12.86 | 8.10 |
| Pocono | 8.86 | 9.57 |
| Michigan | 13.22 | 11.29 |
| New Hampshire | 10.01 | 8.68 |

That makes sense to me. As the year progressed, new Gen6 data slowly populated the formula and pushed old Gen5 data out. The Gen6 data offered more predictive value, and therefore SD improved. But again, it’s just a theory.

That’s it for now. In the coming weeks, I plan to examine other interesting tidbits in my 2013 results data and blog about them. If you have anything in particular you want me to look up and post, write a request in the Comments section below.

**Erik Allen:** Thanks for writing this up, Jed, and good comments. I am fully on board with the Gen6 car leading to increased uncertainty (i.e., theory #2). My impression is that practice also mattered way more this year. Kyle Wiseman (who ended up at 41% of top-five picks) had the best rate of top-five picks, and he appeared to incorporate practice data heavily into his picks.

**Karl Althage:** Very interesting. I went way heavier on practice and also qualifying data this year than in other years and did very well in all games except, of course, the money-winning ones! I ended up 99

**Karl Althage:** Oops… 99th percentile in Yahoo, and I even won a Dave Despain bobblehead in Super 7 (LOL)!

I think the injuries to Hamlin, Stewart and Vickers made for some interesting options. I calculated the differential between my picks and the actual finishes of the top 35 drivers. I had a low of 4.9 at Charlotte and a ridiculous high of 20 at Daytona 2, with an average of just over 9.

**Jed Henson (post author):** Nice work, Karl! Yes, the plate races were crazy last year, particularly the first two.


**Party Barge Capt.:** Just wanted you to know that the rankings you provide every week are generally very successful for me. I’m not a fantasy player, but I’m in a couple of NASCAR pools where I just have to pick the top five finishers in a race, in no particular order. There are two 18-week seasons: a first and second half.

A first-place pick awards 40 points, a second-place pick awards 30 points, then 20-10-5, for a possible total of 105 points per week. In the first half of 2013, when I first found your site, your predictions weren’t great. However, I came in first place the second half of 2013!

So far this year, I came in second the first half of 2014, and I’m in third for the second half.

Thanks to your rankings, I’m very successful. Thanks for your help!

**Jed Henson (post author):** Thanks for the feedback, Captain! Yes, the early 2013 rankings kinda stunk. I think it was primarily because that’s when Sprint Cup first began running the Gen6 car, which greatly reduced the predictive value of all the previous loop, qualifying and practice data generated by the Gen5 car.

Also, in general I’ve found my rankings improve over the course of a season even without a total car change, probably due primarily to the rule and team changes that occur each off-season.