Tuesday, August 2, 2011

The problem with applying science to training.

John Hobbs, MEd
Senior Consultant

Many of our athletes have a history of marginally following haphazard training plans or blindly trusting current and old training fads, which has effectively prevented them from realizing much of their athletic potential.  As we have learned from our clients, equally detrimental is the blanket application of scientific literature to training.  Studies are frequently referenced in online articles and chat rooms, and are sometimes discussed to some extent during group workouts.  Discussion does empower the more knowledgeable athlete and allows them to effectively debunk obsolete methods and “back in my day” training regimens.  However, the intricacies of research are often missed, and that leads to inappropriate application of study findings.

In reality, scientific literature can be about as dry as the warning labels that come pasted to equipment.  Most people don’t have the desire to stay current on the literature and really have no need to—that’s what the consultant is for.  But when somebody has a topic of interest or sees a reference to a journal article, it is important to interpret what is presented correctly and to understand its limits.  Below are just a few of the common caveats of studies that require attention before training methods are implemented or abandoned.

Given the nature of exercise physiology, a plethora of research designs exist, ranging from molecular-level comparisons of rat muscles, to kangaroos on treadmills, to average speed during a time trial.  The articles most people will never see are the ones involving direct measures of performance in the given sport.  When considering whether the information applies to your training, several issues should be addressed.

First, who are the researchers using as their guinea pigs?  Often, untrained individuals are used because they are readily available.  Most athletes don’t want to risk their training to become part of a study that may be detrimental or that uses a placebo.  This may or may not affect the data, depending on what is being analyzed.  In many cases, however, a stronger argument about performance can be made if trained individuals are used, because untrained individuals will usually show an improvement simply by becoming trained.  On the flip side, if an intervention can’t produce an improvement in untrained individuals, it’s safe to question its efficacy in trained athletes until proven otherwise.

A library of literature exists showing that “interval x,” “routine y,” and “exercise z” improved performance.  But the context of those studies has to be questioned.  As already noted, getting competitive athletes into a training study is difficult at best, let alone during the race season after many hours dedicated to intensity.  So a great time to herd some road cyclists into a lab is the late fall or winter.  By then, however, athletes have usually detrained a bit, and the typically short duration of a study can skew the data.  As a result, the training improvements may be magnified, or may simply reflect the fact that Joe Racer is doing hard intervals again.

With the periodization model currently accepted as the most effective training design, it can be difficult to decide where a change in training based on the literature belongs.  Will the athlete get the most benefit from early-season work?  Would it be more productive when they are stronger later in the season?  Or will it even be effective once intense structured training begins?

Study Design
With a limited number of trained volunteers and a short period in which to follow them, how do researchers design a study to ensure that interval x will make you faster?  The honest answer is that they don’t.  Numerous comparisons can be made around just one intervention.  In studies looking at diet, questions arise about the placebo group, the size of the benefit from the change, whether the subjects are simply eating more, whether a percent change in another part of the diet affects the results, changes in calories burned versus calories consumed, and so on.  With limited resources, separate studies have to be done to chip away at the different possibilities.  This is one reason some research seems redundant: each study is analyzing a different aspect.  So when an improvement is shown to occur, it is important to look at the comparisons being made.

So, to illustrate these points, let’s say we’re looking at body-weight squats and a study that showed they increase time trial performance.  We’ll say two groups of cyclists are used: one group completes its normal training, while the other adds a set of squats twice a week for two months.  It’s hard to tell whether the gains are due to the squats specifically or to the fact that Joe Racer is training more and has periods of higher intensity.  Plus, with the study done in the off season, it can’t be determined whether the benefits will still be there in six months at the peak event.  To further complicate things, will the squats provide a benefit when replacing or supplementing high-intensity training in a trained athlete during the meat of the training program?

One role of exercise physiology is to connect data in the lab to application in the real world.  The catchphrase “well, that works in the lab, but not in the real world” is a cop-out that avoids delving deeper into how something may or may not affect athletes.  The role of science in providing sound training strategies is invaluable.  It keeps us from adhering to silly things, such as using cigarettes to clear our lungs before each workout.  And while an article may seem to provide an avenue to revolutionize the way we see training and performance, the fact is that such discoveries are few and far between; rather, each one is a piece of a larger project giving us direction.

Asking about various training ideas is vital.  It keeps us on our toes and gives us new ideas.  And when an athlete brings ideas to us, especially ones backed by a reliable literature source, it’s important that they understand why we may be apprehensive about immediately implementing them, and also why we might say, “I don’t see the harm; why don’t we give it a shot?”