Abstract
Optokinetic nystagmus (OKN) is a reflexive eye movement with target-following slow phases (SP) alternating with oppositely directed fast phases (FP). We measured the following from OKN in three humans: FP beginning and ending positions, amplitudes, and intervals and SP amplitudes and velocities. We sought to predict future values of each parameter on the basis of past values, using state-space representation of the sequence (time-delay embedding) and local second-order approximation of trajectories. Predictability is an indication of determinism: this approach allows us to investigate the relative contributions of random and deterministic dynamics in OKN. FP beginning and ending positions showed good predictability, but SP velocity was less predictable. FP and SP amplitudes and FP intervals had little or no predictability. FP beginnings and endings were as predictable as randomized versions that retain linear autocorrelation; this is typical of random walks. Predictability of FP intervals did not change under random rearrangement, which is characteristic of a random process. Only linear determinism was demonstrated; nonlinear interactions may exist that would not be detected by our present approach.
Keywords: vestibulo-ocular reflex; prediction; fractal scaling
Optokinetic nystagmus (OKN) is a reflexive eye movement that can be elicited by a large visual surround that moves slowly and coherently around a subject (17). OKN is characterized by alternating phases of low eye velocity (slow phases), in which eye velocity nearly matches target velocity, and fast phases, in which the position offset accumulated during a slow phase is reset. Our study addresses horizontal OKN, in which the visual scene rotates about an earth-vertical axis and the resulting eye movements are horizontal. OKN in this case may contain some “contamination” by the smooth pursuit eye movement system in addition to the desired (reflexive) optokinetic response (17). Smooth pursuit is an eye movement that is used to follow a single target as it moves in space; it is under more volitional control than is OKN. We attempt to reduce the influence of pursuit by instructing subjects to stare straight ahead as the visual scene moves (“stare nystagmus”) rather than to actively follow the scene.
An alternative way to provide the visual stimulus for OKN is by rotating the subject within a stationary visual field. Initially, eye movements will be driven by the vestibular system (the vestibulo-ocular reflex, VOR). After several minutes of constant-velocity rotation, neural signals from the vestibular system that signal head rotation decay (the vestibular rotation sensors have poor response at very low frequencies and will eventually stop indicating rotation if it is prolonged and constant), and only the relative motion of the visual scene remains to stimulate OKN.
Consider as an elementary unit of OKN a slow phase and the following fast phase. This unit can be characterized by the eye positions at the beginning and at the end of the fast phase, the velocity during the slow phase (which we will for simplicity assume to be constant during a given slow phase), the amount the eyes travel during the slow phase and during the fast phase, and the time interval between the present fast phase and the preceding one. The temporal evolution of these quantities during a long segment of OKN shows seemingly random fluctuations. The aim of this study is to uncover possible nonrandom (deterministic) patterns in this temporal evolution that might shed light on the purpose and organization of the OKN reflex. We appreciate that much is known about the physiology of the systems that generate fast and slow phases of nystagmus, whereas relatively little is known about the processes that determine, for example, such aspects as the timing of fast-phase generation. Even the best models in this area (e.g., Refs. 2, 3, 7) have a significant probabilistic component. The intent of our study is to see whether these properties are inherently random or whether there may be an underlying deterministic law that guides them. If the latter, then even though this study may not provide the details of that law, we will at least know whether such a rule is there to be sought.
The question of randomness vs. determinism in a physiological system is a fundamental one. The presence or absence of deterministic elements in OKN is of particular interest if a mathematical model of the underlying neural circuitry is sought. Various correlations between the parameters characterizing OKN can be studied. For VOR, a high correlation between slow-phase amplitude and the following fast-phase amplitude and a lower correlation between fast-phase amplitude and the following slow-phase amplitude have been reported (7). These correlations may be statistical in nature, or they may be the consequence of a nonlinear deterministic rule with important implications for the modeling process.
A new class of powerful tools for the analysis of seemingly random signals has emerged from the field of nonlinear dynamics (for an overview, see Refs. 1, 28). The development of these tools followed the observation that systems in which temporal evolution is described by simple deterministic equations can produce extremely complicated, seemingly random, time series (18). This class of systems has been termed “chaotic.” One of the first measures derived to describe the complexity of these systems was the correlation dimension, which can characterize low-dimensional chaotic systems (as an approximation of the fractal dimension). Random systems have a high (theoretically infinite) dimension (9). It has proven difficult, however, to distinguish between signals from random systems that have undergone a filtering process and signals from deterministic systems on the basis of the correlation dimension alone (19). Another approach to decide between the deterministic or the random origin of a signal is to compare measures derived from the original sequence with those same measures derived from a randomized version of the sequence (a “surrogate”; Refs. 30–32). If the measure in question (dimension, predictability) does not change when the signal is randomized, then this is good evidence that the original signal had significant randomness to begin with. Different null hypotheses on the data structure lead to different methods to randomize the sequence; the null hypothesis can be rejected if a measure characterizing a sequence changes significantly under randomization.
As a second tool to identify deterministic systems, nonlinear prediction was introduced by Farmer and Sidorowich (8). Whereas the evolution of a random system cannot be predicted, the behavior of a deterministic system can be predicted, at least for short time spans. One way to obtain a prediction at a given state of the system is to look through the past development of the system for states similar to the present one. The evolution from these past states can then serve as a model that predicts the development under consideration.
Comparisons of the predictability of various subsets of data (OKN under various pathologies, for example) may give insight into the “relative predictability” and thus the relative determinism/randomness of the OKN mechanism under different circumstances. One example of the insights that might be gained from the analysis of predictability can be found in a study of monthly data on epidemics (30), in which the dynamics of different epidemics were related to underlying mechanisms of disease transmission and reporting. In a study more along the lines of ours, Lefebvre et al. (16) analyzed predictability of heart beat intervals and found relations to cardiac pathology.
Shelhamer (27) used the correlation dimension to address the question of determinacy in OKN. In his study, some surrogate procedures produced time series that did not differ in correlation dimension from the original signal, and some surrogates showed higher correlation dimension than the original signal. Thus arguments for both random and deterministic structure in OKN were found. Nonlinear forecasting was then used to help settle the issue. As a preliminary result (29), a high predictability for fast-phase beginning and end positions could be demonstrated, along with little or no predictability for fast-phase amplitudes and fast-phase intervals. The present study extends that investigation by examining the predictability of surrogate data sets and analyzing the scaling behavior (i.e., the decay of predictability as a function of the number of prediction steps) of the OKN parameters under study. The predictability of first differences is also used to help distinguish between random and deterministic behavior.
METHODS
Data collection.
Horizontal OKN was elicited in three subjects who sat inside a cylinder of diameter 156 cm that was covered with a random black-and-white pattern. The drum was rotated with a constant angular velocity of 10, 30, or 60°/s around the subject (OKN condition), and in separate trials the subject was rotated at these speeds within the illuminated stationary drum (OKN invoked during head rotation; OKN-HR). Rotation lasted between 115 and 210 s. In one subject, we also collected 300 s of data at 20, 40, and 60°/s in the OKN condition. Eye movements were measured with the magnetic search coil technique (21) and were sampled at 500 Hz.
Fast-phase beginning and ending positions were identified by use of an interactive computer program. Between 857 and 998 fast phases were recorded for the longer (300-s) files, and the shorter files contained 231–638 fast phases. Slow-phase gains (slow-phase velocity/stimulus velocity) ranged from 0.9 for low stimulation velocities to 0.2 for the fastest stimulation condition (the latter value is low but still within the broad distribution of normal values; see for example Ref. 6). The following parameters were extracted from each data set (Fig. 1): eye positions at the beginning and end of each fast phase, fast-phase amplitude, interval between adjacent fast phases, mean velocity of each slow phase, and position traversed during each slow phase.
Prediction procedure.
Successive values of each OKN parameter form a data sequence {p_1, … , p_n} (Fig. 2A). In the engineering analysis of many types of systems, it is common to examine the behavior of the system in a state space, in which various state variables (e.g., position, velocity, etc.) are plotted with respect to each other. In general, we may not know the state variables of the system, may not have access to them, and may not know how many state variables are needed (i.e., the dimension of the space) to fully characterize the behavior. To avoid these issues, time-delay embedding can be used to reconstruct the state space, using the time series of a single measured quantity. Consequently, as the first step in nonlinear prediction of each sequence, a time-delay embedding (24) was performed (Fig. 2B). Through time-delay embedding in an embedding dimension of M, the state of the system at the nth point in the sequence is characterized by a point in an M-dimensional space, with coordinates of that point given by the parameter under consideration for fast phases {n, n − 1, … , n − (M − 1)} (i.e., the present value and the M − 1 previous values of the parameter). The resulting topological object on which these trajectories exist is usually termed an “attractor.” (A true attractor in fact only exists if trajectories return to the same region of state space after perturbations of the system. Certainly, our subjects made occasional voluntary eye movements, closed their eyes, and in other ways interrupted the ongoing OKN, and the trajectories in state space continued to overlay one another, as demonstrated empirically in Ref. 27. Therefore, although not verified rigorously, we use the term attractor to describe the trajectories in the state space.)
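The embedding step can be sketched in a few lines. This is our own minimal implementation (the function name and the example sequence are ours), assuming the convention stated above that each state vector holds the present value followed by the M − 1 previous values:

```python
import numpy as np

def delay_embed(p, M):
    """Time-delay embedding of a scalar sequence {p_1, ..., p_n}.

    Row k of the result is the state vector
    (p[k], p[k-1], ..., p[k-(M-1)]): the present value and the
    M - 1 previous values of the parameter.
    """
    p = np.asarray(p, dtype=float)
    n = len(p)
    # Each state needs M consecutive values, so n - M + 1 states exist.
    return np.column_stack([p[M - 1 - j : n - j] for j in range(M)])

# Example: embed a short sequence in dimension M = 3.
states = delay_embed([1, 2, 3, 4, 5, 6], M=3)
# First state vector is (p_3, p_2, p_1) = (3, 2, 1).
```

Each row of `states` is one point on the reconstructed attractor; successive rows trace the trajectory.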
To understand the necessity for high-dimensional embedding, consider the situation of points (describing the state of a system) lying on a helix in a three-dimensional state space. The state of the system progresses along the helix from one point to the next. The location on this helix has a physiological interpretation that will not be discovered if the third dimension is ignored, because information on the progression of the states in time would be lost. In our study, embedding dimensions from 2 through 7 were used; no improvement of prediction results was seen with larger embedding dimensions.
In neural systems (real and modeled) driven by chaotic and by random stimuli, it has been shown that the determinism in the driving signal is retained (as assessed by predictability) after passing through simple neural dynamics and an embedding procedure (20, 23). Thus it has been demonstrated that important dynamic information is conserved in a state-space reconstruction of discrete-time data.
We selected a point at fast phase n from which to predict d steps ahead. For prediction of the OKN parameter at fast phase (n + d), a number of points preceding the present one are examined (this segment of earlier points is termed the template), and a number of states similar to the present one (i.e., nearby in state space) are selected (Fig. 2C). The prediction is then derived from the known development of these nearby states as they propagate d steps ahead. More formally, the M + 1 points at {n_1, … , n_{M+1}} that are nearest in M-space to point n were chosen from within a template of values. The template for fast phase n was the points at {n − 50, … , n − 1} (i.e., the immediately preceding 50 values). For the longer files, a template length of 100 points was used. A second-order polynomial fit was made to the coordinates of the points at {n_1 + d, … , n_{M+1} + d} as a function of the coordinates of the points at {n_1, … , n_{M+1}}. From this model polynomial, estimates of the coordinates of the point at (n + d) (the prediction) were made by plugging in the coordinates of point n. Note that a new polynomial is derived for each point in the series from which predictions are desired, forming a local topological fit.
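The following sketch illustrates the local prediction idea. Two simplifications relative to the text are ours: the full M-dimensional second-order fit is collapsed to a quadratic in the scalar parameter value, and more than M + 1 neighbors are used so that the least-squares fit is overdetermined:

```python
import numpy as np

def predict_ahead(p, n, d, M=3, template=50, k=8):
    """Local prediction of p[n + d] from states similar to state n.

    Neighbors are drawn only from the template of earlier states,
    and a new quadratic model is fit for every prediction (a local fit).
    """
    p = np.asarray(p, dtype=float)
    state = lambda i: p[i - M + 1 : i + 1][::-1]   # delay vector at i
    # Candidate earlier states: the template window, restricted to
    # indices whose d-step future is already known.
    lo = max(M - 1, n - template)
    cand = [i for i in range(lo, n) if i + d < n]
    # Sort candidates by distance to the present state; keep the k nearest.
    cand.sort(key=lambda i: np.linalg.norm(state(i) - state(n)))
    nb = np.array(cand[:k])
    # Fit how neighbor values map to their values d steps later, then
    # apply that local model to the present value.
    coeffs = np.polyfit(p[nb], p[nb + d], deg=2)
    return np.polyval(coeffs, p[n])

# Sanity check on a deterministic series (logistic map): for d = 1 the
# true map is itself quadratic, so the local fit recovers it closely.
x = [0.3]
for _ in range(199):
    x.append(4.0 * x[-1] * (1.0 - x[-1]))
pred = predict_ahead(x, n=150, d=1)
```

For a deterministic series such as this, `pred` falls very close to the actual value `x[151]`; for a truly random sequence the same procedure yields predictions uncorrelated with the future values.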
To verify the appropriateness of our template size (i.e., the number of earlier points from which the nearest neighbors were selected), we computed all possible interpoint distances in the entire reconstructed attractor. This represents the extreme case of a template the size of the entire data set. Then we compared these distances to the median of the distances actually used for each set of predictions, with template sizes of 50 and 100 points. In the case of a template of 50 points, only 9.4% of the distances taken from the entire attractor were less than the median of the distances actually used. With a template of 100 points, only 5.6% of all distances were less than the median of the distances among the points used. From this we conclude that, even if nearest neighbors were drawn from the entire attractor, only a small proportion of the resulting distances would be smaller than the distances that we in fact used on the basis of our actual templates.
After calculating predictions starting from every fast phase of the data set, we assessed the prediction quality by calculating the coefficient of correlation r between the predicted values and the actual future values (Fig. 2, D and E). Prediction was carried out d = 1 to d = 10 steps ahead. Following Tsonis and Elsner (33), log(1 − r) was plotted as a function of prediction step d and also as a function of log(d). Prediction error for a deterministic system will scale approximately as a decaying exponential with time step, so that log(1 − r) vs. d will be a straight line. Prediction error for a random system [more specifically, fractional Brownian motion (FBM)] will scale as a synthesis of power laws, such that log(1 − r) vs. log(d) will appear as a straight line.
Surrogate data sets.
Predictions of the OKN parameters were compared with predictions of randomized, surrogate data sets. A shuffled surrogate (S-surrogate) was obtained by random rearrangement of the sequence {p_1, … , p_n} itself. An S-surrogate tests the null hypothesis that the data sequence can be modeled as a sequence of independent identically distributed random variates. A phase-shuffled surrogate (PS-surrogate) was also obtained, as follows. In the Fourier transform of the parameter sequence under study, the phases were randomly rearranged (respecting the symmetry required for a real-valued signal), and the inverse Fourier transform was applied to the resulting sequence. With a PS-surrogate we can test whether the parameter sequence can be modeled as linearly correlated Gaussian noise. Finally, an amplitude-adjusted phase-shuffled surrogate (AAPS-surrogate, see Ref. 32) was generated by arranging a sequence of Gaussian random numbers such that the rank order of their amplitudes in the sequence matched that of the original parameter sequence, obtaining a PS-surrogate from the result, and rearranging the original parameter sequence according to the rank of this PS-surrogate. With the AAPS-surrogate, the hypothesis tested is that the parameter sequence is linearly correlated Gaussian noise with a static monotonic nonlinear scaling.
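The three surrogate types can be sketched as follows. This is our own implementation under two stated assumptions: fresh uniform phases are drawn rather than the original phases being permuted, and the DC and Nyquist phases are pinned so that the amplitude spectrum is preserved exactly:

```python
import numpy as np

rng = np.random.default_rng(1)

def s_surrogate(p):
    """S-surrogate: random rearrangement of the sequence itself."""
    return rng.permutation(np.asarray(p, dtype=float))

def ps_surrogate(p):
    """PS-surrogate: randomize Fourier phases, keep amplitudes.

    rfft/irfft enforce the symmetry required for a real-valued signal.
    """
    p = np.asarray(p, dtype=float)
    amp = np.abs(np.fft.rfft(p))
    phi = rng.uniform(0.0, 2.0 * np.pi, len(amp))
    phi[0] = 0.0    # mean stays real
    phi[-1] = 0.0   # Nyquist bin stays real (even-length case)
    return np.fft.irfft(amp * np.exp(1j * phi), n=len(p))

def aaps_surrogate(p):
    """AAPS-surrogate: a Gaussian series rank-matched to p is
    phase-shuffled; p is then reordered by the ranks of that result."""
    p = np.asarray(p, dtype=float)
    ranks = np.argsort(np.argsort(p))
    g = np.sort(rng.standard_normal(len(p)))[ranks]   # Gaussianized p
    new_ranks = np.argsort(np.argsort(ps_surrogate(g)))
    return np.sort(p)[new_ranks]
```

The S- and AAPS-surrogates preserve the exact amplitude distribution of the original sequence; the PS-surrogate preserves its power spectrum (and hence its linear autocorrelation).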
For each parameter, 10 surrogate data sets of each type were obtained. The correlation coefficient for prediction of the original parameter sequence must be outside of 99% of the distribution of correlation coefficients from the surrogates to reject the null hypothesis that the surrogates are identical to the original (i.e., that the original sequence has a dominant random component). This statistical determination was made as follows. The correlation coefficients (prediction qualities) were analyzed by taking the Fisher z-transforms of the coefficients and finding the mean and standard deviation of the result. Confidence intervals were obtained as the inverse z-transform of the mean of the transformed coefficients ±2.576 × standard deviation of the transformed coefficients (22). This corresponds to a cutoff of p = 0.01 for significance in comparing predictions of a parameter with its surrogates. The z-transformation adjusts for the fact that, as r approaches 1, small changes in the correlation coefficient correspond to large changes in prediction quality.
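This confidence band can be computed in a few lines; the function name is ours, and the use of the sample standard deviation (ddof = 1) is our assumption:

```python
import numpy as np

def surrogate_band(r_values, z_crit=2.576):
    """99% band for surrogate prediction quality via the Fisher z-transform.

    The correlation coefficients are z-transformed (arctanh), a mean
    +/- 2.576 SD interval is formed, and its endpoints are mapped back
    with the inverse transform (tanh).
    """
    z = np.arctanh(np.asarray(r_values, dtype=float))
    m, s = z.mean(), z.std(ddof=1)
    return np.tanh(m - z_crit * s), np.tanh(m + z_crit * s)

# Prediction qualities of 10 surrogates (illustrative values):
lo, hi = surrogate_band([0.78, 0.80, 0.82, 0.79, 0.81,
                         0.77, 0.83, 0.80, 0.79, 0.81])
```

The original sequence is judged to differ from its surrogates only when its own r falls outside the (lo, hi) band.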
Supplementary analyses.
First differences {(p_2 − p_1), … , (p_n − p_{n−1})} of fast-phase beginning and ending positions were also examined for predictability, as described above. For a nonlinear deterministic signal, the first differences should be as predictable as the original sequence (13). If the signal is a random walk (i.e., p_{n+1} = p_n + x_n with {x_n} an uncorrelated random sequence), then the first differences should exhibit no predictability.
For the fast-phase ending and beginning positions in the longer data files, we also calculated the correlation coefficient between p_n and p_{n+d} (d = 1, … , 10). These correlation coefficients reflect the quality of a prediction algorithm that simply assumes that the signal on average does not change from p_n to p_{n+d}. This would be the best prediction that could be obtained for a random walk.
To determine the variance-scaling behavior of fast-phase beginning and ending positions, the variance of each of these parameters was determined for segments of the original parameter sequence and plotted as a function of segment length. The Hurst exponent H (10, 12) was then found by determining the slope of the resulting regression line on a log-log plot, which is equal to 2H. The initial segment of the line, corresponding to up to 50 fast phases, was used.
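A minimal sketch of this estimator follows. Details such as the range of segment lengths and the use of nonoverlapping segments are our assumptions; the function name is ours:

```python
import numpy as np

def hurst_variance_scaling(p, max_len=50):
    """Hurst exponent from the variance-vs.-segment-length scaling.

    The variance of the parameter within nonoverlapping segments of
    length L is averaged over segments; for FBM it grows as L**(2H),
    so half the slope of the log-log regression estimates H.
    """
    p = np.asarray(p, dtype=float)
    lengths = np.arange(4, max_len + 1)
    v = []
    for L in lengths:
        segs = [p[i:i + L] for i in range(0, len(p) - L + 1, L)]
        v.append(np.mean([np.var(s) for s in segs]))
    slope = np.polyfit(np.log(lengths), np.log(v), 1)[0]
    return slope / 2.0

rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(20000))   # Brownian motion, H = 0.5
noise = rng.standard_normal(20000)             # uncorrelated sequence, H ~ 0
```

For the simulated walk the estimate falls near 0.5, and for the uncorrelated sequence near 0, matching the two limiting cases discussed below.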
In addition, the behavior of the physiological system (OKN, OKN-HR) was compared with that of simulated random fractal behavior. Simulated FBM was obtained with a Fourier construction (14). In this method, a sequence {a_n} of Gaussian random values with mean 0 and unit variance is generated. Phases {φ_n} are chosen as random numbers in [0, 2π). To produce an FBM with Hurst scaling exponent of H, for 0 < H < 1, the amplitude of each component a_n is further scaled by the factor n^(−0.5−H) to form a series with a power spectrum of 1/f^(2H+1). The FBM sequence {p_n} is then calculated as the inverse Fourier transform of {a_n exp(iφ_n)}, ignoring the imaginary part of the result.
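The Fourier construction translates directly into code; this is our own sketch of the method just described (the function name and seed handling are ours):

```python
import numpy as np

def fbm_fourier(n, H, seed=0):
    """Simulated FBM via the Fourier construction described in the text.

    Gaussian amplitudes a_k are scaled by k**(-0.5 - H), giving a
    1/f**(2H + 1) power spectrum; random phases in [0, 2*pi) are
    attached, and the real part of the inverse transform is kept.
    """
    rng = np.random.default_rng(seed)
    k = np.arange(1, n + 1, dtype=float)
    a = rng.standard_normal(n) * k ** (-0.5 - H)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    return np.real(np.fft.ifft(a * np.exp(1j * phi)))

x = fbm_fourier(1024, H=0.5)   # ordinary Brownian motion
```

H < 0.5 yields an antipersistent series, H > 0.5 a persistent one.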
RESULTS
We will present our results separately for fast-phase beginning and ending positions (group 1 parameters) and all other parameters (group 2 parameters).
Nonlinear prediction of fastphase beginning and ending positions.
The predictabilities (correlation coefficients) for fast-phase beginning and ending positions were relatively high (see Fig. 3). Notably, these predictabilities did not change significantly under the AAPS-surrogate procedure (no significant change for any prediction step d in 11 of 21 files, no change for d = 1 in 19 of 21 files for both parameters; Fig. 4). There was an unexpected increase of predictability under the PS procedure relative to the original OKN. [This increase in predictability is counterintuitive, but Schiff and Chang (25) found a similarly unexpected decrease in the correlation dimension of (averaged) H-reflex data under phase randomization.] In a log-log plot, (1 − r) was a linear function of prediction step d in most cases (indicative of a random sequence), whereas the graph was curved in the log-linear plot (Fig. 4). A comparison of the predictability of fast-phase beginning and ending positions in each file with a paired t-test showed no significant difference (P = 0.39).
For fast-phase beginning and ending positions, the correlation of p_n with p_{n+d} was slightly higher than the correlation for the second-order local predictor. Thus the best assumption for the future values of fast-phase beginning and ending positions is that they do not change.
First differences of both fast-phase beginning and ending positions produced low predictability that in most cases dropped significantly under (plain) shuffling for d = 1 (9 of 21 files for fast-phase beginning, 15 of 21 files for fast-phase ending) but not for d > 1 (e.g., 3 of 21 files for both parameters for d = 2).
Nonlinear prediction of other OKN parameters.
The group 2 parameters showed low correlation coefficients (fast- and slow-phase amplitude and fast-phase interval) or correlation coefficients comparable to those for the group 1 parameters (slow-phase velocity). In general, there was only a slight reduction in predictability under phase shuffling or amplitude-adjusted phase shuffling; this was most evident at low values of d. The original parameter sequences of the slow-phase and fast-phase amplitudes and slow-phase velocities were significantly more predictable than surrogates generated via (plain) shuffling, whereas the predictability of the fast-phase intervals often did not change under this procedure (no change for d = 1 in 13 of 21 files; see Fig. 5).
Comparison of parameters, stimulation conditions.
We performed a two-way repeated-measures ANOVA on the z-transformed r values for d = 1, with stimulation condition (OKN-HR vs. OKN) and parameter type (fast-phase beginning position, fast-phase ending position, fast-phase interval, fast-phase amplitude, slow-phase amplitude, slow-phase velocity) as factors. Stimulus velocity was included as the between-subjects variable. The influence of stimulation condition was not significant (F_{1,8} = 0.338): there is no difference in predictability between the OKN and the OKN-HR conditions. There was no significant interaction of parameter type and stimulation condition (F_{5,8} = 0.497). Parameter type was significant (F_{5,8} = 21.78, P < 0.01): as already noted, there is a difference in predictability of the different parameters.
Variance scaling, Hurst exponent.
The variance of fast-phase beginning and ending positions, plotted as a function of the length of the time interval considered, showed a power-law scaling (indicated by a straight line in the log-log plot, Fig. 6). Hurst exponents were between 0.05 and 0.30 for beginning and ending positions and below 0.10 for slow-phase and fast-phase amplitudes and fast-phase intervals. Slow-phase velocity exhibited exponents up to 0.20. For the specific values, see Fig. 7. No statistically significant difference between the exponents for fast-phase beginning and ending positions or between the stimulation conditions could be demonstrated.
There are various, and sometimes conflicting, definitions (and measurement procedures) in the literature for Hurst exponents and power-law scaling exponents in general. Often a first step in the analysis is to integrate the time series. This was not done here, and so in some sense we are finding the scaling law for “increments” of each process. We verified our procedures by finding the Hurst exponent (as defined in methods) of various artificial FBM processes, intentionally generated with different scaling laws.
It should be noted that, in general, Hurst exponents are determined from time series that have been sampled at a constant rate. In our data, we instead determined the variance increase per unit nystagmus segment. Because, however, the variance of up to 50 segments entered the calculation, effects due to statistical fluctuations in the length of the fast-phase intervals should cancel. Even so, we can interpret our variance scaling in terms of a time unit equal to nystagmus segment length, which, although variable, is perhaps a more meaningful measure of “physiological time.”
FBM.
For FBMs with lengths comparable to the fast-phase parameter sequences, high predictabilities were obtained with nonlinear prediction. The log-log plot of (1 − r) as a function of the prediction step d exhibited a straight line (as expected for a random process); this behavior was not changed by phase shuffling. First differences of FBM showed only a small amount of predictability and only for low and high Hurst exponents (approaching 0 and 1, respectively). Significant predictability occurred only for d = 1.
A full comparison of FBM data with the original OKN parameter sequences was not carried out because, unlike the case with the surrogate data sets, there is no direct method to create an FBM sequence that is comparable to a given OKN sequence. It is, in fact, simple to create an FBM sequence that has the same statistical properties as any given data sequence, because one can independently specify the parameters (mean, variance, H) that generate an FBM sequence; but such a match is superficial. Thus the only meaningful comparison of FBM to OKN data is the scaling of prediction quality with number of prediction steps. Put another way, for a given prediction method (embedding dimension, etc.), the decay of predictability with prediction step is essentially the same (the slope of the decay depends on H but is always an exponential decay) for any FBM sequence, regardless of how it was generated. Therefore any attempt to “match” an FBM sequence to a given OKN sequence is physiologically not meaningful, and at any rate the information we desire from this analysis does not require this matching.
DISCUSSION
Predictability of different parameters.
The interpretation of our findings has to account for the fact that there seem to be two groups of parameters, one group being somewhat predictable (fast-phase beginning and ending positions) and one group being less predictable (slow- and fast-phase amplitudes, fast-phase intervals, slow-phase velocity).
As one can see from the example of FBM, random processes can often give rise to significant predictability (high correlation coefficients) with a nonlinear prediction approach. Brownian motion (i.e., FBM with Hurst exponent H = 0.5) is a model for the one-dimensional diffusion of a particle in a solution; the dynamics of such a system are described by position increments within a time interval. These increments are independent of previous positions. This can be expressed by conditional expectations as E{x[(n + 1)τ] − x(nτ) | x(τ), … , x(nτ)} = E{x[(n + 1)τ] − x(nτ)}, where x(nτ) are the random variates of a Brownian process at discrete times nτ (n = 1, 2, …) and E is the expectation of the random variate. Therefore, the past history of the diffusion process provides no information on future increments (5), and thus increments of a Brownian motion are not predictable. However, this is not the case for future positions: E{x[(n + 1)τ] | x(nτ)} = x(nτ). Therefore, future positions can be predicted, and the best prediction at each time step into the future is simply the position at the immediately preceding time. This is exactly reflected in the prediction results for artificially generated FBM with H = 0.5, which showed high predictability of the sequence itself and no predictability of the first differences (in which the position information is destroyed and only changes are preserved). With H ≠ 0.5, the situation is slightly more complicated, because this implies long-range correlations between future and past first differences in the sense of persistence (past increase implies future increase, H > 0.5) and antipersistence (past increase implies future decrease, H < 0.5). Regardless of the value of H, FBM shows a power-law scaling of (1 − r) as a function of the prediction step, which is manifest as a straight line in the log-log plot of (1 − r) vs. d (33).
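The position/increment asymmetry described above can be illustrated numerically (our own sketch, using simple lag-1 correlations as a stand-in for prediction quality):

```python
import numpy as np

rng = np.random.default_rng(2)
p = np.cumsum(rng.standard_normal(5000))    # Brownian walk, H = 0.5

# Positions: the present position is an excellent "predictor" of the
# next one, so the lag-1 correlation of the positions is near 1.
r_pos = np.corrcoef(p[:-1], p[1:])[0, 1]

# Increments: the past carries no information about the next step, so
# the lag-1 correlation of the first differences is near 0.
dp = np.diff(p)
r_inc = np.corrcoef(dp[:-1], dp[1:])[0, 1]
```

High r_pos with vanishing r_inc is the signature pattern that the fast-phase beginning and ending positions share with Brownian motion.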
Finally, correlation coefficients obtained in nonlinear prediction of FBM are identical to those from their PS-surrogate; this is to be expected because the phases of the Fourier transform of FBM are random by definition.
Fast-phase beginning and ending positions share properties with FBM; future values in each sequence are best predicted by the immediately preceding values, and their first differences form (mostly) unpredictable random sequences. The latter has been used as a criterion to distinguish nonlinear chaotic systems from random systems (13). Furthermore, beginning and ending positions have the appropriate scaling law for the correlation coefficient of nonlinear prediction, which differs from the result expected for a chaotic (deterministic) system (34). Note, however, that in contrast to FBM, the appropriate surrogate for beginning and ending positions is the AAPS-surrogate, because the distribution of fast-phase beginning and ending positions is not Gaussian. In fact, application of the PS procedure results in a significant increase in predictability in data sets that deviate from a Gaussian distribution. This could lead to false-negative results, because a significant decrease in true predictability could be masked by an artifactual increase due to an incorrect surrogate.
The variance scaling behavior of FBM is described by its Hurst exponent. Hurst exponents of up to 0.3 for fast-phase beginning and ending positions place them between classical Brownian motion, which corresponds to H = 0.5, and a random sequence. The deviation from 0.5 causes a small amount of antipersistence, which implies that a leftward drift is on average followed by a rightward drift. Thus the correlation between adjacent drifts is negative. However, the correlations thereby induced are only on the order of 2^(2H−1) − 1 ≈ −0.25, for (increments of) fast-phase beginning and ending positions (4). This correlation accounts for the predictability of the first differences for d = 1.
The second group of variables showed markedly smaller predictabilities (correlation coefficients). Of these, the sequence of fast-phase intervals is easily identified as a sequence of independent random variates, because its predictability often does not change (does not decrease, in particular) even under random rearrangement of the parameter sequence. This also demonstrates that the fast-phase interval sequences are stationary.
The other three parameters in this group (fast-phase amplitude, slow-phase amplitude, and slow-phase velocity) show a change in predictability under the plain shuffling procedure but not under PS shuffling and AAPS shuffling. This suggests that they are FBM sequences, because preserving the power spectrum (which is f^(−β) for FBM, with β = 2H + 1 and frequency f) does not change the predictability. Hurst exponents of the sequences of fast- and slow-phase amplitude were near zero, however. The variance scaling behavior defining FBM (i.e., increase ∼t^(2H)) approaches the variance scaling of a random sequence (i.e., constant variance) as H approaches 0. At the same time, the correlation coefficient between (x_{n+1} − x_n) and (x_n − x_{n−1}) [equal to 2^(2H−1) − 1; see Ref. 4] approaches −0.5, which is to be expected if x_n is a sequence of independent Gaussian random variates. Thus, for H approaching 0, the FBM approaches a random sequence. The low degree of predictability that is revealed by the change under shuffling is therefore due only to nonstationarity in the data sequence, that is, to long-term fluctuations and not to putative short-term interdependencies that reflect the dynamics of the system. We conclude therefore that slow- and fast-phase amplitudes also form random sequences.
The sequences of slow-phase velocities have a Hurst exponent of ∼0.2. This corresponds to antipersistent behavior that allows a certain degree of prediction. Slow-phase velocity is the only parameter that is not a random walk or a random sequence, but is somewhere between these extremes.
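For reference, a common way to estimate a Hurst exponent from such a sequence is the variance scaling of its increments, Var[x(n+τ) − x(n)] ∝ τ^{2H}. The sketch below (our own, not necessarily the authors' estimator) recovers H ≈ 0.5 for an ordinary random walk:

```python
import numpy as np

def hurst_from_increments(x, lags=(1, 2, 4, 8, 16)):
    """Estimate H from the scaling Var[x(n+tau) - x(n)] ~ tau**(2H):
    the log-log slope of increment variance vs. lag equals 2H."""
    lags = np.asarray(lags)
    v = [np.var(x[lag:] - x[:-lag]) for lag in lags]
    slope = np.polyfit(np.log(lags), np.log(v), 1)[0]
    return slope / 2.0

walk = np.cumsum(np.random.default_rng(0).standard_normal(20000))
h = hurst_from_increments(walk)
print(round(h, 2))   # ≈ 0.5 for classical Brownian motion
```

Applied to a sequence such as the slow-phase velocities, an estimate well below 0.5 would indicate the antipersistence described above.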
Comparison with previous studies of OKN correlation dimension and predictability.
Shelhamer (27) looked for deterministic components in OKN with the correlation dimension. A comparison with the present study is limited by the fact that embedding the complete time series, as in the dimension study, mixes fast and slow phases. In particular, this holds true for PS or AAPS shuffling. A finding that can be compared with the results presented here, however, is that the correlation dimension did not change under shuffling of fast- or slow-phase segments. This is compatible with the assumption that fast- and slow-phase amplitudes form a random sequence and is in accord with our findings, if one keeps in mind that the dimension was estimated in shorter 10-s segments that could be considered stationary.
There is an apparent discrepancy between the results of our earlier study of OKN prediction (29) and this one. The previous study claimed to show evidence for determinism in OKN fast-phase starting and ending positions, on the basis of improved prediction using a local nonlinear approximation (as here) vs. a global linear ARMA (autoregressive moving average) predictor. Surrogate-data techniques were not used in that study. In the present study, there is a slight reduction in predictability for surrogates formed from these sequences, but the reduction is small, indicating that the predictability is due in large part to random correlations, as for FBM. One explanation for the discrepancy between that result and the ones presented here may be that the dynamic behavior of OKN on a global scale (i.e., over the duration of our data records) is nonstationary, with a parameter or parameters that vary slowly and in a random fashion, whereas the short-term behavior is more (but not completely) deterministic. This can also be seen in the correlation dimension results (27), which show that the dimension of 10-s OKN segments decreases over the course of 2 min. By using data over a long time span, the ARMA predictor unavoidably includes the apparently random nonstationarity or drift, leading to poor predictions. The nonlinear predictor, on the other hand, makes a local topological fit. This is also effectively a local time fit if the attractor is drifting in state space because of some slowly time-varying parameter(s). Thus the localized nonlinear predictor is better able to characterize any short-term determinism (either linear or nonlinear) in the data series. An obvious point for further study would be the development of a local linear (ARMA) predictor and time-localized surrogates to help resolve this issue further, as well as an exploration of prediction based on OKN parameters that may be coupled in some nonlinear fashion.
The predictability seen in the previous study, although found with a local nonlinear predictor, may in fact be linear predictability, the result of a stochastic process with significant short-term autocorrelation. We might also interpret the long-range correlations present in an FBM sequence (as the sequences of fast-phase beginning and ending positions seem to be) as a form of “determinism,” in that these correlations impose a structure on the data; this structure may have been detected as predictability in the previous study.
Nature of the prediction method.
Predictions in this study were carried out for a given OKN parameter sequence by using only previous values of the same sequence. For example, predictions of future fast-phase amplitudes were based solely on previous fast-phase amplitudes. There is a relationship (the well-known “main sequence” for saccadic eye movements) between these amplitudes and fast-phase velocity and duration (although the latter two values were not studied here), and in that sense the fast-phase amplitudes are not entirely random. Although our analysis does not explicitly consider relationships between variables, this is implicit in the state-space approach, because all parameters are presumably coupled through the dynamics of OKN, and the time-delay embedding and reconstruction, if done properly, will preserve these interrelations. It is certainly true that the properties of a given fast phase are somewhat constrained by the preceding slow phase, but this does not help to predict subsequent fast phases.
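The prediction scheme can be illustrated with a simplified sketch: embed the sequence with time delays, find the nearest neighbors of the current embedded point, and average their successors. This is a local zeroth-order variant, our own simplification of the local second-order approximation used in the study; a deterministic series is highly predictable in this sense, whereas an i.i.d. sequence is not.

```python
import numpy as np

def local_predict(x, d=3, k=5, n_test=200):
    """Nonlinear prediction via time-delay embedding: for each test point,
    average the successors of its k nearest embedded neighbors, and return
    the correlation between predicted and actual values."""
    emb = np.array([x[i:i + d] for i in range(len(x) - d)])
    target = x[d:]                      # successor of each embedded vector
    split = len(emb) - n_test
    preds = []
    for i in range(split, len(emb)):
        dist = np.linalg.norm(emb[:split] - emb[i], axis=1)
        nn = np.argsort(dist)[:k]       # k nearest neighbors in the past
        preds.append(target[nn].mean())
    return float(np.corrcoef(preds, target[split:])[0, 1])

# deterministic test series: chaotic logistic map
det = np.empty(2000)
det[0] = 0.3
for i in range(1999):
    det[i + 1] = 3.9 * det[i] * (1.0 - det[i])
noise = np.random.default_rng(0).standard_normal(2000)

r_det, r_noise = local_predict(det), local_predict(noise)
print(r_det, r_noise)   # high for the deterministic series, near 0 for noise
```

The correlation coefficient between predicted and actual values is the predictability measure referred to throughout the discussion.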
Physiological implications.
The most unambiguous findings in our study are the stationarity and the random character of fast-phase intervals. This justifies random approaches for both describing and modeling this parameter. A random parameter is adequately described by its distribution. Stationarity implies in particular that the distribution of fast-phase intervals did not change during a trial.
Anastasio (2), in a model of goldfish OKN, hypothesized a quantity that drifts under random influences toward a fixed threshold, which then triggers a fast phase, and thereby obtained an inverse Gaussian distribution (26) for the fast-phase intervals, which was compatible with his experimental findings. Balaban and Ariel (3), modeling turtle OKN, assumed that the ratio of adjacent fast-phase intervals could be produced by a random number and obtained a lognormal distribution of fast-phase intervals.
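The drift-to-threshold idea is easy to simulate. A minimal sketch (our own; the parameter values are arbitrary) generates first-passage times whose mean is close to threshold/drift, as expected for an inverse Gaussian (Wald) distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def first_passage(drift=0.1, sigma=1.0, threshold=10.0):
    """Steps taken by a noisy drift process to first reach a fixed
    threshold; such first-passage times follow an inverse Gaussian
    (Wald) distribution with mean ~ threshold / drift."""
    pos, t = 0.0, 0
    while pos < threshold:
        pos += drift + sigma * rng.standard_normal()
        t += 1
    return t

intervals = np.array([first_passage() for _ in range(2000)])
print(intervals.mean())   # near threshold / drift = 100 (plus small overshoot)
```

A random, stationary interval sequence of this kind is fully characterized by its distribution, consistent with the conclusion drawn above for the fast-phase intervals.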
The random walk nature of fast-phase beginning and ending positions implies that, at least within the range that the eyes moved in our study, all orbital positions are equivalent. (We stress that this is within the range of eye movements actually attained, as it is well known that average eye position during nystagmus is deviated in the direction of the fast phases, the so-called “beating field.” Within this range, there is no apparent preference of one position over another.) The eyes tended to reach any arbitrary position (in this range) after completion of a slow phase and the following fast phase. Even the small amount of antipersistence intrinsic to these parameters reflects a reaction to the previous shift in eye position rather than to the previous position itself. This is the meaning of the correlations between adjacent fast-phase and slow-phase amplitudes in cat VOR noted by Chun and Robinson (7). They reported a relatively high correlation between the amplitude of each fast phase and the amplitude of the preceding slow phase. Therefore, a fast phase corrects appropriately for the eye-position offset accumulated during a slow phase; thus the ending positions of the fast phases might be expected to exhibit little noise (randomness). Because the correlation of fast-phase amplitude with the amplitude of the subsequent slow phase was not as high, the ending positions of the slow phases (i.e., the fast-phase beginning positions) should show a larger variation. Therefore, the predictability of the fast-phase beginning positions should be higher than the predictability of the fast-phase ending positions. This is not reflected in our OKN and OKN-HR data, however, given that we could not demonstrate a significant difference in the predictability of these two parameter sequences.
It is noteworthy that we could not demonstrate a difference in the predictability of any parameter in the comparison of OKN vs. OKN-HR stimulation. After several VOR time constants, no input from the semicircular canals is to be expected, and therefore both stimulation conditions are used interchangeably in routine clinical testing of the vestibular-optokinetic system (11). In the OKN-HR condition, however, there is some otolith stimulation due to centrifugal acceleration: it mimics rightward tilt of the right utricle and leftward tilt of the left utricle, and therefore no net effect is expected. A modification of the fast-phase-generating algorithm is theoretically possible, but it is ruled out, at least by the measures we applied in our study.
Whereas the random walk character of the eye positions at the beginning and end of fast phases is not surprising if they arise by the summation of random shifts due to imprecise matching of fast- and slow-phase amplitudes, it is noteworthy that slow-phase velocity exhibited a dynamic behavior between random walk and random sequence. This implies that future slow-phase velocities depend on the past values of this parameter, but in a linear and stochastic manner, not in a nonlinear deterministic manner. This could be an interesting starting point for future modeling efforts in this field.
It is tempting to speculate about the implications of our findings for the function of the neural mechanisms (vestibular and oculomotor) involved in the optokinetic response. There is undoubtedly a gross level of deterministic behavior in OKN; for example, fast-phase starting and ending positions are almost completely contained in the beating field (see above). Yet within this range there is apparently significant random behavior, with a small amount of predictability. The random behavior has overlaid on it a form of long-term correlation in the form of antipersistence. This mixture of dynamics is intriguing and provides a challenge for mathematical modeling efforts. The physiological meaning of these dynamics is open to conjecture. One reasonable interpretation is simply that this system is inherently noisy (random), with only enough imposed determinism to accomplish its desired function effectively: if it is not detrimental to performance, no energy (metabolic or evolutionary) is expended in reining in any random aspect of the behavior. This does not rule out the notion that some amount of variability may indeed enhance the performance of the system, perhaps by aiding phase transitions (15), preventing mode-locking periodicities, or pushing the system to probe more of the environment through a form of random search. It seems more likely, in our view, that this variability is inherent and allowed to remain, rather than being deliberately created.
Acknowledgments
This work was supported by the Deutsche Forschungsgemeinschaft (Grant Tr 449/1-1), by the National Science Foundation (Grant DBI-9630733), and by the Whitaker Foundation.
Footnotes

Address for reprint requests and other correspondence: P. Trillenberg, Medizinische Universität zu Lübeck, Klinik für Neurologie, Ratzeburger Allee 160, D-23538 Lübeck, Germany (Email: trillenberg_p@neuro.mu-luebeck.de).

The costs of publication of this article were defrayed in part by the payment of page charges. The article must therefore be hereby marked “advertisement” in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.
 Copyright © 2001 the American Physiological Society