Another way of looking at MVPs: Most Valuable Player Caliber Seasons 1.0

January 5th, 2013


People like shiny things and nice ceremonies. We should also like seeing the rightful player get the award. (PHOTO: AP/Wilfredo Lee)


By Jay Ramos

Ask someone where a great player ranks in history, and the conversation can quickly spiral into complicated answers. Because people like shiny things.

When we look back at a player’s legacy, everything from statistics, awards, individual accomplishments, team titles, records and impact becomes part of the conversation. Subjectivity slides into our perspective depending on how much we value everything outside of the empirical evidence that exists for a player, because of shiny things.

It’s a lot less subjective to simply compare which player was better if we can focus on a player’s individual merits.

The legacy conversation is not one that needs to be abolished; I just think the approach to it is almost always devoid of enough context. We can make more accurate assertions if we remove ourselves from fancy things like jewelry and trophies.

One of those things is awards, such as the Most Valuable Player award. It’s a tricky beast that rears its head in discussions about a player’s greatness, and it gets used as currency on a resume.

When we have conversations about legacy, can you hear that awful noise of people counting awards? Well, not the counting itself, but the awful noise of counting them superficially. When one player receives an award and another doesn’t, it doesn’t necessarily mean the right call was made.

Steve Nash has two MVPs on his mantle, but should he? (PHOTO: Lisa Blumenfeld/Getty Images)

Sometimes there are multiple players who deserved the MVP nearly equally. The 2008-09 season is one example, one in which LeBron James and Dwyane Wade finished 1st and 3rd in the voting, respectively, and both finished with a Player Efficiency Rating over 30. Kobe Bryant finished second in voting that season, and shouldn’t have, which brings me to another point.

Awards are often slanted by team results. Bryant won the MVP the season before, even though James played him to at least a standstill. The only substantial difference a year later was that James’ Cavaliers won a franchise-record 66 games in 2008-09, partly because the team was climbing the ladder as a group that was gaining continuity and partly because of the swift offseason acquisition of Mo Williams.

But team results will cloud these things. So will narrative. When Bryant won the MVP in 2007-08, Tim Duncan had an identical season in terms of production, but finished 7th in voting. This award was forced, with some voters, I think, trying to give Bryant a legacy award as opposed to truly rewarding him for an MVP season. On the other end, Bryant had arguably three seasons better than his 2007-08 campaign at that point, but he didn’t receive an MVP for any of them because Luke Walton and Kwame Brown played too many minutes. Isn’t it absurd that these awards are rewarding circumstances?

My point here is that human error mars awards like the MVP, and a blind resume comparison isn’t fair without context. Circumstances and the perception of a player can slant voting in one direction, and the result doesn’t necessarily reflect which player actually produced the most on the court.

This got me thinking about abandoning MVPs altogether as part of debating legacy.

Why do it? Why not eliminate this subjective measure, but still account for seasons in which a player played like an MVP?

So I decided to craft the first version of a measure called MVP Caliber Seasons, or MVCS. It allows us to see how many seasons a player played like an MVP, without fixating on the actual voting result, which can be slanted.

This is a subjective exercise, and surely the first version of it. I’m sure I will add to the criteria to make it more detailed and evolve MVCS in the future.

To reach my conclusions, I want to take into account the best all-in-one statistics we have available.

They aren’t perfect, but they are respected and objective.

Player Efficiency Rating (PER) will be the first one. Devised by John Hollinger, PER creates a per-minute picture of a player’s production on the floor. It has its weaknesses, such as overrating volume scoring and not accounting for defense, but it is nonetheless a respectable measure if we keep those weaknesses in context, and it gauges a player’s overall production fairly accurately.

But we want to take more into account than just an encapsulation of the box score. If MVP really means that a player is the most valuable, then certainly his performance has to translate to wins. There are several measures that look to account for this.

Bill James originally coined ‘Win Shares’ in his 2002 book, and Basketball Reference has since adapted the statistic for basketball. Hollinger created Value Added and Estimated Wins Added, and Kevin Pelton of Basketball Prospectus made a metric called Wins Above Replacement Player. I’m also intrigued by Dave Berri’s Wins Produced stat, described here.

I’m not going to debate the merits of all of them, but all of those guys are very smart people. I do find, however, that Win Shares is arguably the best of the group. There is a historical database readily available for it, and it has at least been shown to suffer from less average error than EWA.

Calculating defensive value beyond how much we value counting numbers like blocks and steals still remains a challenge, so that is the one red flag to point out about this measure. We’re going to eliminate Defensive Win Shares and focus on Offensive Win Shares, because the defensive shares are calculated solely from team defensive production and can therefore misrepresent an individual’s value. Nonetheless, the average error with Win Shares is very small, and a team’s total Win Shares usually add up to roughly its actual win total. It works.
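To make that sanity check concrete, here is a minimal sketch in Python. The table, column names, teams and numbers are all invented for illustration; with real Basketball Reference data you would have a full roster per team.

```python
import pandas as pd

# Toy data: one row per player season, with the team's actual win
# total attached. Names and numbers are made up for illustration;
# a real roster would have many more rows per team.
player_seasons = pd.DataFrame({
    "team":      ["AAA", "AAA", "AAA", "BBB", "BBB", "BBB"],
    "player":    ["A1", "A2", "A3", "B1", "B2", "B3"],
    "ws":        [15.0, 9.5, 7.0, 11.0, 10.0, 6.5],
    "team_wins": [33, 33, 33, 28, 28, 28],
})

# Sum each roster's Win Shares and compare against the real win total;
# on actual historical data the error is typically a few wins at most.
check = (player_seasons
         .groupby("team")
         .agg(total_ws=("ws", "sum"), wins=("team_wins", "first")))
check["error"] = check["total_ws"] - check["wins"]
print(check)
```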

DEFINING MVCS

Hollinger defines a player with a 25 PER as a weak MVP candidate, a 27.5 PER as a strong MVP candidate, and anything over 30 as a runaway winner. We’re going to take that baseline 25 PER as a minimum requirement to qualify for an MVCS, and complement it with an admittedly arbitrary minimum of 8 Offensive Win Shares. Win Shares is a counting measure, so a player who misses time is naturally penalized, but he can still play far less than another player and contribute to more wins on the season. To guard against that, we will still develop a baseline: in order to qualify, a player must play in at least 75 percent of his team’s games (62 games in an 82-game season).

The reason for the games baseline is what Win Shares doesn’t take into account: if a player misses time and is replaced by an ‘average’ or ‘replacement’ player, the drop-off is significant, and that missed time hurts his team in a way the counting stat doesn’t capture.
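As a minimal sketch of the criteria above, here is how the MVCS 1.0 filter could be applied to a season-level table in Python. The column names (per, ows, gp, team_gp) and the toy rows are my assumptions, not a real dataset:

```python
import pandas as pd

def mvcs_qualifiers(seasons: pd.DataFrame) -> pd.DataFrame:
    """Return player seasons meeting the MVCS 1.0 criteria:
    PER >= 25, Offensive Win Shares >= 8, and appearances in
    at least 75 percent of the team's games."""
    meets_per = seasons["per"] >= 25.0
    meets_ows = seasons["ows"] >= 8.0
    meets_games = seasons["gp"] >= 0.75 * seasons["team_gp"]
    return seasons[meets_per & meets_ows & meets_games]

# Illustrative toy rows; names and values are invented.
seasons = pd.DataFrame({
    "player":  ["Player A", "Player B", "Player C"],
    "season":  ["2008-09", "2008-09", "2008-09"],
    "per":     [31.7, 24.2, 28.0],
    "ows":     [13.7, 9.0, 7.5],
    "gp":      [81, 82, 60],
    "team_gp": [82, 82, 82],
})

# Only Player A clears all three cuts: Player B falls short on PER,
# and Player C falls short on both OWS and the games baseline.
print(mvcs_qualifiers(seasons))
```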

For our initial study, we’re going to take every player in the modern era into account, beginning with 1979-80, because that is when 3-pointers were officially added to the box score.

Check out the results of the sample here. They indicate that since 1979-80, 87 players have had MVP caliber seasons.

The measure does a good job of encapsulating absolutely outstanding seasons, and if we can agree that offensive efficiency is the factor that most leads to winning basketball games, it’s very fair.

The rationale has two holes, however. One minor hole is that it doesn’t adjust precisely for how much value was lost by a player’s missed time in a particular season, relative to being replaced by an average NBA player. Instead we simply create a minimum games requirement, which shores up the potential for error, but the error nonetheless exists.

Secondly, we just aren’t measuring defense here. Tim Duncan has only two MVP Caliber Seasons in his career, but if we take into account that he was a terrific defensive player, that makes up the gap in a few other seasons where he just misses the cut for this measure.

But either way, when people are talking about MVPs, we’re not precisely taking defense into account anyway. All we have is Defensive Win Shares to encapsulate a player’s total defensive production, and we already debated its merits. Unless we were to sit down, break down the splits of every player’s opponent field goal percentage, and use film to account for good defense that doesn’t show up statistically, we cannot precisely measure it. Plus, did defensive shortcomings stop a herd of sheep from wrongly heaping MVPs onto Steve Nash and Derrick Rose?

What this measure is saying, however, is that these players had outstanding offensive seasons whose value is not debatable. Some of them played better defense than others, but even the worst defensive player on this list still played like an MVP.

I will probably add to this in the future, but it should begin to serve as a way to talk about which players had deserving MVP-type seasons and how many, which serves us better than counting who actually won the award.
