College Baseball Tweaked The NCAA Tournament Selection Process. But Will It Change Anything?
Image credit: Jay Biggerstaff/Getty Images
At the American Baseball Coaches Convention in Dallas last week, NCAA Division I baseball committee chairman Matt Hogue unveiled to the coaches a few changes to the NCAA Tournament selection procedure.
While the committee kept the RPI formula unchanged, it adjusted the selection process and how data is presented to the committee. The first change brings the quadrant system to baseball. This tweak, applied to the basketball process a few years ago, helps account for where a game was played rather than focusing solely on the opponent.
The selection committee will also this year be given a new rating system, KPI (Kevin Pauga Index), to compare teams. KPI is not replacing RPI, and it specifically will not be a selection criterion for the tournament. Several other sports use a similar setup for their selections.
There also has been a change to the makeup of the regional advisory committees, which previously have been made up of current head coaches. Those coaches will continue to serve on the committee, but now the baseball administrator from each Division I conference will also sit on those committees, in an effort to provide more hard data and ease the awkwardness of a coach having to advocate for his own team or a rival.
While none of those changes individually is monumental, taken together they represent the biggest change to the selection process since baseball last tweaked the RPI formula a decade ago.
“We want to give ourselves the best tools to work with,” Hogue said. “I’m pleased and I think the committee is pleased with the effort to move forward.”
Hogue did acknowledge that some people will undoubtedly believe the committee hasn’t gone far enough with its reforms. In the aftermath of last year’s selections, then-committee chairman John Cohen called for RPI reform during the selection show on ESPN2.
Hogue said the committee wanted to capitalize on the momentum from last year’s selection process to make progress. A larger change will take more time, but the committee was able to at least make some immediate reforms.
“As we exited last year, there was a lot of momentum, a strong sentiment with our committee, the previous members and then moving into our summer meetings with the new folks, there was a strong sentiment that we needed to make progress in terms of team evaluation, particularly with regard to RPI,” Hogue said. “We spent a good deal of time trying to focus on running some models, how much of an impact do road games, do neutral games have. With the data that was presented, this was the step that was decided to move forward. Let’s make sure that those types of contests are properly highlighted, particularly in the condensed timeline of selections.
“First and foremost, there was a sentiment that we needed to move the ball forward. We didn’t know how far, but we needed to do something.”
Now that the committee has rolled out those changes, the obvious question is how they will impact Selection Monday in May. To answer that and other questions about the reforms, here’s a Q&A.
How did the quadrants change?
Previously, the team sheets the selection committee is given displayed a team’s record in games against teams ranked 1-25 in RPI, 26-50, 51-100 and 101+. Now, the location of a game will factor into which quadrant it counts toward. Quad 1 is home games against teams 1-25 in RPI, neutral games against 1-40 and road games against 1-60. Quad 2 is home games against 26-50, neutral games against 41-80 and road games against 61-120. Quad 3 is home games against 51-100, neutral games against 81-160 and road games against 121-240. Quad 4 is home games against 101+, neutral games against 161+ and road games against 241+.
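For readers who think in code, the new quadrant rules above reduce to a simple lookup. This is an illustrative sketch, not anything the NCAA publishes; the function name and structure are my own, but the RPI-rank cutoffs are the ones listed above.

```python
def quadrant(opponent_rpi_rank: int, location: str) -> int:
    """Return the quadrant (1-4) for a game, given the opponent's
    RPI rank and where the game was played (illustrative sketch)."""
    # Upper RPI-rank bounds for Quads 1-3 at each location;
    # anything beyond the Quad 3 bound falls into Quad 4.
    bounds = {
        "home":    (25, 50, 100),
        "neutral": (40, 80, 160),
        "road":    (60, 120, 240),
    }
    q1, q2, q3 = bounds[location]
    if opponent_rpi_rank <= q1:
        return 1
    if opponent_rpi_rank <= q2:
        return 2
    if opponent_rpi_rank <= q3:
        return 3
    return 4
```

The location sensitivity is the whole point of the change: the same opponent, say No. 55 in RPI, is a Quad 1 game on the road but only a Quad 3 game at home.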
So, what does that actually mean?
Frankly, very little. I looked at eight examples of teams that in 2023 fell just on either side of the bubble for the tournament or hosting. In most cases, literally nothing changed in aggregate. Sure, some games move up or down a quadrant, but those changes evened out. Five of the eight teams (Arizona, Auburn, Boston College, Campbell and Southern California) had the exact same record in all four quadrants. The others didn’t see much change. Indiana State went from 9-1 in Q2 and 16-4 in Q3 to 10-2 in Q2 and 15-3 in Q3. Oklahoma went from 8-7 in Q2 and 7-4 in Q3 to 8-6 in Q2 and 7-5 in Q3. UC Irvine went from 16-11 in Q3 and 12-2 in Q4 to 13-11 in Q3 and 15-2 in Q4. None of the eight teams I studied had any change in their Q1 records.
None of that is moving the needle. The eight teams I chose were not random, but they were the kinds of teams for which a change in their quad records might have had an impact on Selection Monday. I didn’t see one. Now, the committee itself was shown more data than I collected and perhaps there’s some more nuanced change than I’m seeing. But this is not a sea change.
What about the addition of KPI? What is it?
Kevin Pauga first created KPI as a basketball rating system. Pauga, who is now an associate athletic director at Michigan State, has since adapted KPI for sports from baseball to field hockey to volleyball. It rates every win and loss on a positive-to-negative scale, where the best possible win is about 1.0 and the worst loss is about -1.0. The scores are then averaged across the entire season to create the team’s rating.
KPI assesses a variety of factors when it’s scoring a win or loss, including the strength of the opponent, the game’s location and the score (that is one significant factor that differentiates KPI from RPI, which treats a one-run win the same as a 20-run win). During the 2023 season, the best win according to KPI was Alabama’s 12-1 victory at Arkansas on March 31, which was rated 1.36. The worst loss by an at-large NCAA Tournament team was Tennessee’s 12-5 home loss to Tennessee Tech on April 18, which was rated -0.744.
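KPI’s exact per-game scoring formula is not public, but the season-level structure described above is straightforward: each win or loss gets a score on roughly a -1.0 to +1.0 scale, and the season rating is the average of those scores. A minimal sketch, assuming the per-game scores have already been computed:

```python
# Illustrative sketch only: KPI's actual game-scoring formula
# (which weighs opponent strength, location and score) is not public.
# We assume a list of per-game scores already on the roughly
# -1.0 (worst loss) to +1.0 (best win) scale described above.

def season_rating(game_scores: list[float]) -> float:
    """Average per-game KPI-style scores across the season."""
    return sum(game_scores) / len(game_scores)

# Example: two quality wins and one bad home loss.
print(round(season_rating([0.9, 0.6, -0.744]), 3))
```

One consequence of averaging rather than summing: a single bad loss drags the rating down no matter how long the season is, which is consistent with how a home loss like Tennessee’s can stand out in the metric.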
What does it mean that KPI will be a resource but not a “selection criterion”?
That all sounds a bit murky, but it boils down to this: the committee members will be given the information but will not be required to consider it. RPI, not KPI, continues to be the base for all other metrics—strength of schedule, the quadrants, etc. It’s not yet clear if or how KPI will appear on the team sheets for every team in discussion. However, in basketball, the team’s KPI ranking (and other ranking systems) appears on the team sheet, right next to the NET (the basketball version of RPI). That can help inform a committee member’s thoughts, either by showing that multiple metrics align in their ranking or by highlighting that RPI and KPI disagree.
How closely do RPI and KPI align?
Looking at the 2023 season, you’ll see a lot of similarities. On Selection Monday, Wake Forest was No. 1 in both metrics. Things veer slightly from there. The top five by RPI was Wake, Kentucky, Arkansas, Florida and LSU. For KPI, it was Wake, LSU, Kentucky, Clemson and Florida. The biggest difference was with teams from outside the Power Five conferences. In KPI, only Coastal Carolina (20) and Southern Miss (25) cracked the top 25. In RPI, Indiana State (9), Campbell (13), Coastal (14), Dallas Baptist (16), Southern Miss (21) and Connecticut (22) all made the top 25. Indiana State, one of last year’s RPI darlings and a host, rated No. 32 in KPI, while UConn dropped all the way to No. 40 in the metric. DBU (28) and Campbell (29) were not hit nearly as hard.
Last year was not an outlier. Mid-majors generally fare worse at the top of KPI than in RPI. In 2022, the top-ranked team from outside the Power Five according to KPI was Southern Miss at No. 23. The Golden Eagles were No. 17 in RPI and were behind East Carolina (8) and Georgia Southern (11). We’ll skip past 2021 because of the weirdness of that schedule and rewind the clock to 2019. ECU (14) and DBU (24) were the only non-Power Five teams ranked in the top 30 of KPI. Seven such teams appeared in the top 30 of RPI, led by ECU at No. 5. And while I keep writing Power Five, KPI really sees it as Power Four. It hasn’t rated teams from the Big Ten any better than it has Sun Belt or AAC teams.
Undoubtedly, some college baseball fans would welcome the KPI interpretation. For fans of mid-major programs or those who like seeing a DBU or Georgia Southern pop up on the host line or a mid-major team squeeze into the Field of 64, however, that interpretation may be cause for concern. Until we hear that the committee is emphasizing KPI over RPI—which isn’t supposed to happen—I wouldn’t worry too much. A committee member might hold a bad KPI rating against a team or use a good KPI rating to bolster a team’s case, but it shouldn’t be making or breaking anyone.
What about the regional advisory committees?
The RACs have long been a part of the selection process. We never get to see the rankings they create, which makes it hard to know how much influence they have, but Hogue did note that they give the committee important context about injuries and the relative strengths of teams. They, in effect, provide the eye test. The upcoming change adds conference baseball administrators to the existing committees, expanding their depth of knowledge and relieving individual head coaches of the pressure of advocating for teams in their conference, whether rivals or their own. This change probably won’t make a material difference in who gets in and who doesn’t, but it should help get better information into the selection committee’s hands, which is always a good thing.
Is all this really going to change anything?
That’s impossible to know until the committee gets in a room this May, but probably not. The committee saw more data than I have on how the tweaks affect things, but from what I’ve looked at, we’re talking about very minor changes. Honestly, the biggest change is going to be the makeup of the committee. Five of the 10 members of the committee will be new this year. Some of that turnover is normal, as terms are staggered to ensure some change happens every year. But there has been additional attrition this year due to job changes for some of the members. Because of the subjective nature of the selection committee’s work, having five new people looking at the data and bringing their own opinions to the mix will naturally lead to different outcomes. But the actual data they’ll be looking at? That’s not changing much.