I’ve been absent again. I know, unacceptable. This is the last time though, I promise… Okay, maybe not, but I have a legitimate excuse (but who doesn’t, right?). Spring training season is right around the corner, which means it’s time to gear up for performance testing. You know, the “where were you at the end of last year’s training season, and where are you now” kind of testing. I fully support testing to establish baselines and give players an idea of where they need to improve and where they should expect to be at the end of the off-season. However, I am conflicted about whether those baseline measurements are good indicators of where a player stands on the performance continuum. All players want to progressively improve their game from year to year, so comparing their end-of-off-season scores with their beginning-of-off-season scores seems like an unfair way of measuring that. Why is that?
Players enter the off-season detrained, meaning their testing scores will be below those of the previous fall. If we base their needs and gains on those initial testing scores, then we can only expect players to get back to where they were at the start of the season, which does them no favors. Instead, we should evaluate players mid-off-season and compare those scores both to the same testing date one year earlier and to the scores from the end of the previous year’s off-season training. This way, we can truly see whether a player is developing and improving from year to year. The initial baseline is better compared against the previous end-of-off-season scores as a measure of performance decrement: it shows how much performance erodes over the course of a season and helps us formulate a plan to minimize that erosion the following year. With this strategy, a strength and conditioning coach can better track yearly performance and aim for a continual upward trend. Comparing each testing session to the same session one year earlier gives us an improvement rating for the player; a positive improvement rating tells us the player is genuinely improving.
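To make the two comparisons concrete, here is a minimal sketch of the arithmetic. The function names, the vertical-jump test, and all numbers are illustrative assumptions on my part, not data from any real player:

```python
# Illustrative sketch: year-over-year improvement rating vs. in-season
# performance decrement. All names and numbers are hypothetical.

def improvement_rating(current, same_point_last_year):
    """Percent change vs. the same testing point one year earlier."""
    return (current - same_point_last_year) / same_point_last_year * 100


def in_season_decrement(end_of_offseason, next_baseline):
    """Percent drop from the end-of-off-season peak to the next
    pre-season baseline, i.e. what was lost over the playing season."""
    return (end_of_offseason - next_baseline) / end_of_offseason * 100


# Hypothetical vertical-jump scores (cm): year 1 baseline 60, end of
# off-season 68; year 2 baseline comes back at 63.
rating = improvement_rating(current=63, same_point_last_year=60)
decrement = in_season_decrement(end_of_offseason=68, next_baseline=63)

print(f"year-over-year improvement: {rating:.1f}%")   # +5.0% vs. last year
print(f"in-season decrement: {decrement:.1f}%")       # lost over the season
```

Note that comparing the year-2 baseline (63) only to the year-1 baseline (60) would look like progress on its own, while the decrement figure shows how much of the off-season gain was given back during the season, which is the trend the coach actually wants to shrink.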
Another portion of testing I do not agree with is combine-style testing. Knowing a player’s strength through whatever-RM you prefer is a valuable training tool, but most of what combine testing entails is ridiculous and does not indicate performance. Take the 40-yard dash at the NFL combine. Players spend countless hours training specifically for a skill that does not reflect on-field performance for 90% of players. For quarterbacks, slower 40 times have even been associated with more success in the NFL. And who cares if a running back can run a quick 40? If they cannot dynamically change direction, they are of minimal use. Most combine testing fails to measure the one thing it is supposed to measure, and is therefore not of much use. Combine tests should be far more specific to the sport, with players performing tests that replicate the dynamics found in a game. In the NHL, for example, performance is better predicted by aerobic capacity than by the Wingate test (a heavily weighted test in the NHL combine). The time spent on useless tests would be better spent on tests that actually measure performance in the sport.
That being said, testing practices are hard to change. Having players perform sport-specific tests means little if those tests are not adopted at the elite level. If the NFL combine keeps the 40, and the NHL combine keeps the Wingate test, an athlete who is not prepared for them will most likely suffer. Change needs to come, but it may have to come from the top down: more research needs to be conducted and presented to those with the power to make that change.
Take this with a grain of salt. There is no harm in adopting tests that better measure performance. If a test works as a good measuring tool for you as the coach and gives players a result in which they can see their improvement, why not include it?