Before the 2016-17 men’s college basketball season, ESPN’s Sports Analytics Team unveiled an upgraded Basketball Power Index (BPI) -- the first time BPI offered rankings before the season began. Along with Preseason BPI, we introduced several new metrics -- including strength of record (SOR) -- that help evaluate college basketball teams. These metrics can be found at a redesigned ESPN.com/BPI. For a review of how BPI and SOR are calculated, read this explainer.
Before we review how the Basketball Power Index performed this past season, it is important to remember that BPI doesn’t predict what will happen; rather, it identifies the most likely outcomes and the chances that they occur. For that reason, it is hard to say BPI got a particular team’s performance “wrong” or “right,” since BPI simply assigns chances based on the relevant hard data available.
Below we review BPI’s performance this past season and highlight a few teams that exceeded BPI’s expectations and a few that did worse than expected.
If a participant in ESPN’s Tournament Challenge had, on Selection Sunday, entered only the teams BPI deemed most likely to advance to each round, that bracket would have produced 1,120 points, good for the 90.5th percentile among participants. Although this is certainly impressive, it is not the best way to judge the success of a mathematical system such as BPI. There is considerable randomness in the NCAA tournament, and Tournament Challenge point totals can change drastically when a single unlikely event occurs.
To assess the overall accuracy of BPI’s tournament projections, we compare it to both KenPom.com and FiveThirtyEight.com’s Selection Sunday projections. There are two ways we do this: absolute error and squared error.
If BPI gave a team a 15 percent chance to reach the Final Four and it did, the absolute error is 0.85. If the team did not reach the Final Four, the absolute error is 0.15. The squared error for each team is the absolute error squared. Accuracy can be compared across the systems by seeing which had the smallest average absolute and squared error.
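The error measures above can be sketched in a few lines of code. This is a minimal illustration using made-up example probabilities, not actual BPI, KenPom or FiveThirtyEight numbers; the function name `mean_errors` is ours, not part of any of those systems.

```python
def mean_errors(predictions):
    """predictions: list of (probability_of_event, event_happened) pairs.
    Returns (mean absolute error, mean squared error)."""
    # Absolute error: distance between the stated probability and the
    # outcome, coded as 1 if the event happened and 0 if it did not.
    abs_errors = [abs((1.0 if happened else 0.0) - p)
                  for p, happened in predictions]
    # Squared error is simply each absolute error squared.
    sq_errors = [e ** 2 for e in abs_errors]
    n = len(abs_errors)
    return sum(abs_errors) / n, sum(sq_errors) / n

# Example: one team given a 15 percent Final Four chance that made it
# (error 0.85), and one given a 60 percent chance that did not (error 0.60).
mae, mse = mean_errors([(0.15, True), (0.60, False)])
print(round(mae, 3), round(mse, 3))  # 0.725 0.541
```

The mean squared error here is the Brier score, a standard way to grade probabilistic forecasts: it rewards confident predictions that come true and punishes confident ones that do not.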
Compared with those two prominent projection systems, BPI was more optimistic about North Carolina on Selection Sunday, and that comparison would have held even if North Carolina had lost during the final weekend.
Villanova, Louisville, Duke and Saint Mary’s all were eliminated earlier than BPI expected, but in a single-elimination tournament, there will always be some teams that lose earlier than expected and others, such as South Carolina, that make it much further than expected.
North Carolina, which was third in BPI on Selection Sunday, is the eighth NCAA champion in the first 10 seasons of BPI to have been in the top four of BPI on Selection Sunday (Connecticut in 2011 and 2014 are the sole exceptions).
When rebuilding BPI, we had to make some decisions. One of them was whether to minimize the error in games with respect to “chance to win” or with respect to pace-adjusted point differential. Since BPI’s main goal is to give the chance of future events occurring, we decided to focus on making it as accurate as possible for a team’s chance to win a game.
BPI also produced predicted point differentials this season, but since that is not BPI’s main goal, its accuracy on point spreads will always compare less favorably than its accuracy on chance-to-win percentages. Below is an analysis of both.
According to thepredictiontracker.com, BPI was 51.1 percent against the spread over the course of the season. In mean absolute and mean squared error of point spreads, BPI was respectable and roughly equal to Massey ratings but worse than both Sagarin and TeamRankings.com.
In accuracy of the percent chance to win, BPI had a mean absolute error of 0.328. Teams given a greater than 50 percent chance to win ended up winning 74.9 percent of those games. BPI projected more games with lopsided probabilities than evenly matched games, and in the end the expected number of favorites won -- in both lopsided projections and games predicted to be close.
Preseason rankings and projections
BPI preseason rankings are meant to predict what a team’s BPI will be at the end of the season. That said, we will compare preseason BPI to the preseason Associated Press poll and the final BPI rankings of the season to reflect on which teams exceeded, met and failed to meet BPI’s preseason expectations.
Duke was the clear-cut No. 1 team going into the season with a BPI rating of plus-18.0. After injuries derailed many of their players (and coach Mike Krzyzewski), the Blue Devils lost in the round of 32 in the NCAA tournament and finished No. 7 in BPI. Although that might seem far from preseason expectations, their final BPI rating was plus-17.7, just short of their preseason number.
Teams that did not live up to BPI’s preseason expectations included NC State, Ohio State and Georgetown. None was in the preseason AP Top 25, but all three were in the BPI preseason top 25 and had disappointing seasons.
BPI had much lower expectations for Texas, Connecticut and Maryland than the preseason AP poll did, and those teams performed below even BPI’s average projection for them. Michigan was also unranked in the preseason AP poll, but the Wolverines were 16th in preseason BPI -- and after their run in the NCAA tournament, that is right where they finished in BPI at season’s end.