The following is the abstract of the article discussed in the subsequent letter:

Burgomaster, Kirsten A., Scott C. Hughes, George J. F. Heigenhauser, Suzanne N. Bradwell, and Martin J. Gibala. Six sessions of sprint interval training increases muscle oxidative potential and cycle endurance capacity in humans. J Appl Physiol 98: 1985–1990, 2005. First published February 10, 2005; doi:10.1152/japplphysiol.01095.2004.— Parra et al. (Acta Physiol. Scand. 169: 157–165, 2000) showed that 2 wk of daily sprint interval training (SIT) increased citrate synthase (CS) maximal activity but did not change “anaerobic” work capacity, possibly because of chronic fatigue induced by daily training. The effect of fewer SIT sessions on muscle oxidative potential is unknown, and aside from changes in peak oxygen uptake (V̇o2 peak), no study has examined the effect of SIT on “aerobic” exercise capacity. We tested the hypothesis that six sessions of SIT, performed over 2 wk with 1–2 days rest between sessions to promote recovery, would increase CS maximal activity and endurance capacity during cycling at ∼80% V̇o2 peak. Eight recreationally active subjects [age = 22 ± 1 yr; V̇o2 peak = 45 ± 3 ml·kg−1·min−1 (mean ± SE)] were studied before and 3 days after SIT. Each training session consisted of four to seven “all-out” 30-s Wingate tests with 4 min of recovery. After SIT, CS maximal activity increased by 38% (5.5 ± 1.0 vs. 4.0 ± 0.7 mmol·kg protein−1·h−1) and resting muscle glycogen content increased by 26% (614 ± 39 vs. 489 ± 57 mmol/kg dry wt) (both P < 0.05). Most strikingly, cycle endurance capacity increased by 100% after SIT (51 ± 11 vs. 26 ± 5 min; P < 0.05), despite no change in V̇o2 peak. The coefficient of variation for the cycle test was 12.0%, and a control group (n = 8) showed no change in performance when tested ∼2 wk apart without SIT. We conclude that short sprint interval training (∼15 min of intense exercise over 2 wk) increased muscle oxidative potential and doubled endurance capacity during intense aerobic cycling in recreationally active individuals.
To the Editor: I read with great interest the paper by Burgomaster et al. (1) and the accompanying editorial by Coyle (2) in the Journal. The article indicated that six sessions of very brief high-intensity interval training (four to seven 30-s bouts per session) over a 2-wk period produced remarkable physical conditioning effects, including a doubling of endurance capacity. However, one significant aspect, unaddressed in either the paper or the editorial and typically ignored in human physiology publications, concerns behavioral variables that may confound results such as those of Burgomaster et al. In line with the overwhelming majority of human physiology publications, Burgomaster and colleagues do not consider this major caveat in their paper, an omission that could conceivably lead to false inferences regarding their study's major findings. With respect to this article, at least three issues may compromise the findings, none of which are considered as possible limitations by the authors:
1) First and foremost, there was no control mentioned for level of activity outside the laboratory, nor could I find any report that subjects were instructed to maintain their normal levels of daily activity during the exercise period. Human physiological research has its unique difficulties, at least partially because human subjects, fortunately, cannot be controlled like laboratory animals. Nevertheless, one might incorporate a diary of daily activity into such a protocol (or, better yet, ambulatory accelerometry or ventilation data) to address this issue. Without such information, it is not difficult to imagine that daily nonlaboratory exercise regimens, general activity levels, or other factors might have been influenced by the laboratory training and exerted their own effects on the results.
2) No details are given about who the student subjects were or how they were selected. Did they derive from a pool of interested exercise physiology students? Did they have any knowledge of the hypotheses of the study during the training period that could have influenced their exercise performance during or between the laboratory training sessions? Were subjects randomly assigned to control and experimental groups, or possibly selected, explicitly or implicitly, according to criteria that could optimize positive findings (e.g., the experimental subjects may have had more prior experience with exercise studies)? These are just a few aspects of subject recruitment that could have a significant impact on the study findings.
3) Related to the above point, awareness of the core hypotheses of the study among subjects or laboratory research staff could affect those exercise measures that are modifiable by means of variations in motivation (effort-dependent variables, e.g., cycle endurance time to fatigue). Motivational factors are rarely directly considered in the human physiological literature, but they can play a role in experimental effects.
My comments are by no means intended as a harsh rebuttal of this or other studies reported in the Journal. Nevertheless, my own research at the interface of physiological and behavioral sciences has sensitized me to the significance of these kinds of concern. My experience strongly suggests that ignoring potential behavioral confounds can lead to substantial error when drawing conclusions from laboratory findings. Some major categories of pitfall, in my opinion, include the following:
1) The assumption that repeated laboratory interventions (e.g., a specific exercise regimen) are necessarily the cause of any long- or short-term physiological changes associated with the intervention period. A particular intervention can generate various behavioral alterations in daily life that are not causally related to the intervention itself, for example, changes in one's normal exercise program, diet, scheduling of activities, or sleeping habits. These and other factors can, in turn, have consequences for outcome measures that are only indirectly related to a particular training protocol. It is, therefore, important to recognize the extent of uncontrolled influences in the natural environment that can potentially confound the effects of a laboratory training intervention, and this concern is all the more critical when the research aim is to establish direct causal links to the laboratory interventions.
2) The assumption that physiological differences between groups in the laboratory necessarily reflect constitutional variations in physiological functioning. Laboratories, with their unfamiliarity, potentially threatening measurement devices, and experimental demand characteristics, are likely to influence the emotional and cognitive functioning of different people in different ways. Moreover, it is well known that the expectations of subjects, whether intentionally or unintentionally induced in the context of a research study (e.g., the placebo and Hawthorne effects) or as a consequence of earlier personal experience, can influence both behavior and physiological functioning. Under the best of circumstances, these factors can introduce a hidden bias, for example, when comparing the cardiovascular responses of recent myocardial infarction patients with carefully matched controls: the very exposure to a cardiovascular laboratory environment and measurement equipment is bound to have a very different meaning for myocardial infarction patients than for healthy controls, and it seems reasonable to expect that divergent patterns of behavioral and functional physiological response may also be elicited in clinical vs. control groups. Given that physiological responses are often, at least to some degree, situationally dependent, differences found between patients and controls may reflect variations in the physiological concomitants of emotional arousal more than they do fundamental cardiovascular differences.
Under the worst of circumstances (and something that quite commonly occurs in modern research centers), those same myocardial infarction patients may be compared, not with healthy controls who are naive to the experimental environment to the same degree as the patients, but, instead, with a group of paid “professional” subjects who have previously participated many times in the same laboratory, with the same experimenters and the same equipment. This frequently expedient practice (many university centers have difficulty in finding sufficient numbers of healthy controls who match age, gender, and race requirements of patient groups) compounds the above-mentioned problems by comparing highly familiarized and habituated, paid healthy volunteers with unpaid patients who potentially experience the laboratory setting as stressful because of their distinct history of physical disorder; the meaning of the experience is drastically different between groups, and there is every reason to believe this may affect physiology by means of central nervous system pathways related to emotion activation. This example is, in fact, also related to the next class of caveat with respect to behavioral factors.
3) Lack of awareness that experimenter bias can influence the results of a physiological investigation. Besides biased selection of subject groups, there are many other ways in which experimenters can prejudice the outcome of a study. Differences in expectations can be explicitly or implicitly communicated to individual groups. When effort-dependent measures are employed (e.g., spirometric evaluations), slight differences in instruction or expectation may lead to significant effects. Subjects may be made aware of the major hypotheses of the study, and this could influence their behavior inside or outside the laboratory. Unstandardized procedures in the experimental setting may also occur, for example, haphazard and varying instruction and behavior of experimenters, flurries of extraneous experimenter activity and disturbance while carrying out a protocol, or varying numbers of experimenters and/or observers during measurements. Such factors may create extra “noise,” or error variance, if they vary unsystematically, or clearly bias results toward the experimental hypotheses if they vary systematically.
4) An implicit assumption that variations in physiological functioning are physical events that cannot be significantly influenced by psychological or behavioral factors, that psychology is, in fact, some soft discipline with little bearing on physiology, either human or animal. This belief may be traced back to Cartesian mind-body dualism or beyond, but it has plainly proven to be erroneous. The entire field of “psychophysiology” has demonstrated beyond reasonable doubt that psychological and behavioral factors can alter physiological processes, sometimes dramatically. Additionally, past and contemporary physiological research clearly attests to the dynamic responsiveness of physiological processes to myriad behavioral adjustments, which are inevitably accompanied by psychological changes.
In conclusion, physiology does not always reflect inherently stable biological properties, unperturbed by contextual psychological influences. Behavioral and psychological effects on physiological functioning may, in fact, be a serious and neglected source of bias in human physiology research. Attention to these potentially confounding influences can promote the development of improved research designs, which would include physiological and diary monitoring of daily activity in longitudinal investigations, careful and balanced recruitment of subject samples, consideration of how clinical and control samples may vary in psychologically mediated physiological responses to specific experimental settings, standardized instructions and experimental procedures, and blinding of participants and (if at all possible) research personnel.
Certainly, many contemporary physiological researchers already consider many of these issues when they conduct investigations. In fact, Burgomaster et al. (1) may indeed have incorporated a number of the above-mentioned precautions into their own research without having mentioned them in their paper. Nevertheless, such concerns represent important criteria for evaluating empirical studies and require written clarification. An open discussion would, furthermore, serve to enhance general awareness of the significance of behavioral factors for the validity of physiological research.
- Copyright © 2005 the American Physiological Society
To the Editor: We appreciate the opportunity to respond to Dr. Grossman's eloquent reminder that behavioral variables can influence physiological functioning. With respect to the three specific concerns that he raised regarding our study (4), and related comments:
1) We instructed our subjects to maintain their normal levels of daily activity throughout the study. As in all of our investigations, subjects were given detailed physical activity and dietary instructions before the study, and we asked them to record any deviations from the prescribed guidelines or their habitual pattern of living (e.g., changes in sleep habits). While no significant deviations in diet or activity pattern were reported, we agree that it would be useful to include written clarification of these self-reported data. With respect to ambulatory accelerometry or ventilation measurements, we are concerned that collecting field data of this sort might inadvertently induce changes in habitual activity, something that Dr. Grossman cautions against (i.e., the Hawthorne effect). Moreover, the accuracy and precision of various measurement techniques for this purpose are equivocal, and a recent study concluded that “accelerometer regression models provided different predictions of time spent in different activity intensities, and large individual errors were apparent” (5).
2) We are keenly aware of the potential for experimenter bias and/or sloppy procedures to create error variance and/or prejudice study outcomes. Indeed, we judiciously guard against the various pitfalls in subject recruitment and other parameters that Dr. Grossman describes. Among other safeguards described in our manuscript, our experimental protocol included the following: 1) extensive familiarization trials before the main experiment (there were 5 such laboratory visits in total); 2) standardized physical activity and dietary controls before each experimental trial; 3) no temporal, verbal, or physiological feedback to the subjects during any performance test; 4) reproducibility determinations for all performance and biochemical measurements; 5) inclusion of a control group for the performance test that was drawn from the same subject population; and 6) application of appropriate and rigorous statistical analyses. With respect to “hidden bias creeping in” to subject selection, Dr. Grossman's “best” case example of myocardial infarction patients being compared with healthy controls bears no relevance to our study, because all subjects in both groups were young, healthy individuals who were accustomed to exercise. Similarly, his “worst” case example of naive individuals being compared with “paid professional” control subjects does not apply to our laboratory. We have no trouble recruiting sufficient numbers of naive subjects for even our most demanding investigations, and we limit the number of studies in which an individual is allowed to participate. Moreover, the hourly remuneration for a typical study is just above minimum wage, which discourages the sort of practice that Dr. Grossman describes.
3) In accordance with our institutional Research Ethics Board guidelines, the study purpose was disclosed to all subjects before their participation. As with any study that involves effort-dependent variables, it is likely that motivational factors affected cycle endurance performance over the course of the study. Obviously, we could not blind subjects in the experimental group to the fact that they performed an exercise training intervention; however, we backed up the impressive performance improvements detected in this group with direct biochemical measurements. The muscle biopsy samples from the trained group showed large increases in muscle oxidative potential, comparable to those reported after several weeks of traditional endurance training and consistent with classically proposed mechanisms of respiratory control. While this represented one potential explanation for the improved performance, we recognized that other factors were likely involved and noted: “We can only speculate on potential mechanisms responsible for the dramatic improvement in cycle endurance capacity” (p. 1989 in Ref. 4). Being physiologists, we proposed a number of different physiological processes that might have contributed to the adaptive response. Subsequent studies from our laboratory that have appeared in abstract form have confirmed our initial findings (3) and verified that the muscle adaptive response to sprint interval training includes changes in specific factors that we previously speculated on (1, 2).
We did not specifically address potential behavioral or psychological factors in our manuscript; however, this does not mean that we “ignored” these issues, nor does it “compromise” our findings. Indeed, our experimental design included safeguards against the potential confounding influences that Dr. Grossman describes. Just as we called for additional research to clarify the physiological adaptations induced by short sprint training (p. 1989 in Ref. 4), perhaps Dr. Grossman's letter will encourage researchers with appropriate expertise in other fields to do the same.