Abstract
Sammon, Michel, and Frederick Curley. Nonlinear systems identification: autocorrelation vs. autoskewness. J. Appl. Physiol. 83(3): 975–993, 1997.—Autocorrelation function (C^1) or autoregressive model parameters are often estimated for temporal analysis of physiological measurements. However, statistical approximations truncated at linear terms are unlikely to be of sufficient accuracy for patients whose homeostatic control systems cannot be presumed to be stable local to a single equilibrium. Thus a quadratic variant of C^1 [autoskewness function (C^2)] is introduced to detect nonlinearities in an output signal as a function of time delays. By use of simulations of nonlinear autoregressive models, C^2 is shown to identify only those nonlinearities that “break” the symmetry of a system, altering the mean and skewness of its outputs. Case studies of patients with cardiopulmonary dysfunction demonstrate a range of ventilatory patterns seen in the clinical environment; whereas testing of C^1 reveals their breath-by-breath minute ventilation to be significantly autocorrelated, the C^2 test concludes that the correlation is nonlinear and asymmetrically distributed. Higher-order functionals [e.g., autokurtosis (C^3)] are necessary for global analysis of metastable systems that continuously “switch” between multiple equilibrium states and unstable systems exhibiting nonequilibrium dynamics.
structural stability; metastability; symmetry breaking; respiratory failure; congestive heart failure; nonlinear dynamics
Recent reviews by Bruce and Daubenspeck (2, 3) on time-series analysis of respiratory measurements emphasized the issue of “temporal scaling of respiratory pattern variability.” This observation is indicative of the complexity and state dependence of the respiratory control system: metabolism, distribution of brain stem neurotransmitters, blood gas tensions, ventilatory dead space, sensations of dyspnea, cardiac output, pulmonary mechanics, and strength of respiratory muscles can vary with different time constants (4, 5, 7, 9, 10, 13–15, 17, 23, 24, 26–28, 30–32, 35–37). Under conditions of respiratory distress, highly complex ventilatory dynamics may unfold that would not be observable in the unstressed system. Thus comparative analyses of the ventilatory patterns of healthy adults and patients with cardiopulmonary disease are critical to understanding mechanisms of respiratory failure.
However, such analyses are complicated by the fact that various assumptions of classical linear statistics may not be valid when the control system is under duress and not operating locally to a single equilibrium state (a ventilatory “set point”) (19, 22, 29). For example, many investigators have found that variables such as depth or rate of breathing exhibit a positive breath-to-breath autocorrelation (1, 13, 23, 26, 36), a result that has been interpreted as evidence of “memory” within the central pattern generator (1) or “noise” within the peripheral feedback control loop (23). Regardless of physiological interpretations, nonzero autocorrelation implies that a fraction of the signal variance has a nonrandom component (see appendix). Some underlying assumptions of linear, Gaussian systems analysis are as follows: 1) mean output of the controller and the system equilibrium are equivalent, 2) first- and second-order moments (e.g., mean and variance) are independent, 3) autocorrelation and frequency responses are symmetrically distributed about the mean, and 4) system response to perturbation is independent of the amplitude and direction of the disturbance (i.e., memory is independent of noise). By definition, however, autocorrelation in nonlinear dynamical systems is more positive in some directions than in others; when the distribution is asymmetric, the mean output becomes skewed away from the system equilibrium toward those domains where autocorrelation is more positive. Furthermore, when the deviation from linearity is continuous, changes in the signal-to-noise ratio (SNR) alter not only the mean and variance but also the skewness and correlated fraction of the system output.
Therefore, statistical analyses of the output from a nonlinear control system must take into consideration that such parameters (mean, variance, autocorrelation, and skewness) can be interdependent quantities, particularly as the system becomes less stable. Here, dynamic interactions between the first- and second-order moments are referred to as autoskewness and quantified using a quadratic variant of the autocorrelation function (C^1), which will be referred to as the autoskewness function (C^2). Reviews of nonlinear systems analysis using functional series expansion can be found elsewhere (16, 19, 20, 29); our analysis adopts a pulse amplitude modulation that restricts estimates of higher-order statistical cumulants (29) or Wiener kernels (39) to a single time-delay axis. This greatly simplifies visualization and statistical testing of nonlinearities in an output signal. Probabilistic (8, 25, 29) and geometric (6, 11, 21, 34) analyses of various nonlinear systems are relied on to outline the distinctions of linear vs. nonlinear autocorrelation and local vs. global stability. Inasmuch as deviations from linearity can be specified for these systems, they will be used to generate test signals to demonstrate the utility of C^2 analysis in quantifying the differential responses of a system to weak vs. strong perturbations, thus identifying “symmetry-breaking” nonlinearities (21, 31). These analytic techniques will be shown to have diagnostic value for distinguishing the types of nonlinearities typically seen in the ventilatory patterns of patients with cardiorespiratory dysfunction. Higher-order functionals [e.g., autokurtosis (C^3)] are necessary for analysis of systems with structural instabilities (6, 34), e.g., where dynamics vary bimodally relative to a threshold nonlinearity or intermittently “switch” between multiple equilibria.
METHODS
Ventilatory patterns for the seven subjects of Figs. 7, 9, and 12 were acquired by inductance plethysmography (Respitrace Plus, Non-Invasive Monitoring, W. Palm Beach, FL), with tidal volume calibrated via spirometry. These cases were selected for the range of variability and system memory in their respiratory patterns to demonstrate issues related to the statistical methods. Ventilatory records were collected during an afternoon nap from patients A and B (57- and 74-yr-old men) with mild upper airway obstructions and audible snoring. Five patients were monitored with Respitrace on days of successful “T-piece” weaning from mechanical ventilation in the University of Massachusetts Medical Center Intensive Care Unit (ICU): patients C, D, and E (67-, 74-, and 82-yr-old men) had congestive heart failure (CHF) with left ventricular ejection fractions <50% and marked Cheyne-Stokes respiration (CSR); patients F and G (a 73-yr-old man and a 57-yr-old woman) were recovering from pneumonia. Approval was obtained from our institutional review board for human subjects for off-line analysis of these measurements, which were recorded as part of standard clinical protocols. The algorithms listed in the appendixes are written in the pseudo-C language of the software Matlab (MathWorks, Natick, MA). Inasmuch as the “autofunctionals” are estimated by standardizing each record according to its own probability density function, the analytic results are independent of the absolute volume calibration of the Respitrace signal. Those signals were also not subjected to any filtering before analysis.
Statistical Method and Definition of Terms
Autonomous systems.
An autoregressive system is one in which the current state (X_t) varies with respect to previous states plus a random perturbation: X_t = f(t, X_{t−1}, X_{t−2}, …) + ε_t. By definition, any system equilibria exist along the diagonal of the phase space (6), i.e., where X_t = X_{t−τ}.
Autocorrelation and autoskewness functions.
A standard null hypothesis in time-series analysis is that present deviations in a measurement from its expected value are independent of past “residuals” (25, 29); i.e., the initial model for X_t is simply a point estimate {C^0 = mean = E[X_t]}. The functional analysis is not restricted to a constant C^0; e.g., any predicted trajectory can serve as the zero-order reference.
The autocorrelation function (C^1) quantifies the linear correlation between present and past residues of X_t with respect to C^0, which are standardized so that −1 ≤ C^1 ≤ +1.
For the case where X is a random uncorrelated variable, the distribution of Q^1 is asymptotically normal (29); thus the null hypothesis of a “memoryless” system is rejected if a Student’s t-test finds a single value of ‖C^1‖ > C*. To minimize the risk of type I errors, a high sample size (degrees of freedom > 150) and level of significance (P < 0.01) are recommended. The null hypothesis of time invariance can be evaluated by polynomial regression of Q^1 with respect to time; a significant trend would result in a conclusion that X is not second-order stationary (appendix). However, the nonlinear analytic question is: Are the temporal variations in Q^1 correlated with those of Z_t? A null hypothesis that the signal dynamics are symmetrically distributed and that any memory present is strictly linear presumes that residues of X are independent of the present and past C^1 residuals; their cross-correlation matrix is R^2. The null hypothesis of linearity is rejected if a single value of ‖C^2‖ exceeds C* (Eq. 3). This linear deconvolution effectively corrects for the case where X is asymmetrically distributed; it is also geometrically necessary because, for an asymmetrically distributed X, the squared residues are themselves linearly correlated with the residues of X_t vs. X_{t−τ}.
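The testing scheme above can be sketched in code. Two caveats: the exact forms of Eqs. 1–4 are not reproduced here, so `autofunctionals` below is an illustrative estimator (standardize the record; take C^1(τ) as the lag-τ correlation of residues, and C^2(τ) as the correlation of present residues with squared past residues after removing their linear component, standing in for the “linear deconvolution”); and `autocorr_threshold` uses the standard large-sample normal approximation for C* rather than the paper’s Eq. 3.

```python
import numpy as np
from statistics import NormalDist

def autocorr_threshold(n, p=0.01):
    """Two-sided threshold C*: under the white-noise null each lag of
    C^1 is ~ Normal(0, 1/n), so reject when |C^1| > z_{1-p/2}/sqrt(n)."""
    return NormalDist().inv_cdf(1.0 - p / 2.0) / n ** 0.5

def autofunctionals(x, max_lag):
    """Illustrative estimates of C^1 (autocorrelation) and C^2
    (autoskewness) for a standardized record."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    n = len(z)
    c1, c2 = np.empty(max_lag + 1), np.empty(max_lag + 1)
    for tau in range(max_lag + 1):
        past, present = z[: n - tau], z[tau:]
        c1[tau] = np.mean(past * present)
        q = past ** 2                 # quadratic regressor
        q = q - q.mean()
        # remove the linear component ("linear deconvolution")
        q = q - (np.mean(q * past) / np.mean(past * past)) * past
        sd = q.std()
        c2[tau] = np.mean(q * present) / sd if sd > 0 else 0.0
    return c1, c2
```

Under this approximation, a value of ‖C^1(τ)‖ or ‖C^2(τ)‖ exceeding `autocorr_threshold(len(x))` at some τ > 0 would reject the memoryless and linearity null hypotheses, respectively.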
Local vs. global stability analysis.
In this article, the autofunctionals C^n are interpreted with respect to local and global stability analysis. Toward this end, C^n is itself viewed as an N-dimensional dynamical system having initial conditions = [1,0,0,...,0]; systems will be differentiated by the evolution of their C^n functions for τ > 0. In some cases, these dynamics are viewed with respect to the time delay (C^n vs. τ); in others, a “parameter space” representation is preferred (e.g., C^1 vs. C^2). Similarly, it will be useful to alternate between representations of the system output over time (X_t vs. t) and within the phase space (X_t vs. X_{t−τ}).
As discussed below, local stability analysis adopts the view that a system is operating within a “neighborhood” of a stable equilibrium point. Perturbations of small amplitude are generally applied to identify the linear components of a system’s impulse response, but larger perturbations are necessary to identify any nonlinear components. By an impulse, a “singularity” is always meant; inasmuch as C^{1,2} are statistical vectors, the source of their impulses at lag zero is the statistical scalars found in the denominators of Eqs. 1 and 4. Nonzero variance in X with respect to C^0 perturbs C^1; autocorrelation is undefined for all cases where X_t − C^0 is identically zero. Similarly, nonzero variance in X^2 perturbs the autoskewness function; C^2 is singular for all binary processes, e.g., X_t = [−1,1,−1,1,−1,−1,1,...], because X_t^2 is then constant.
Systems that are stationary and locally stable will have C^{1,2} functions that decay rapidly to [0,0], with C^2 generally leading C^1. Another source of impulses to autoskewness in Eq. 4 exists when C^1 decays very slowly or does not converge at all.
The linear memory of a system will be denoted M^1, i.e., the number of values of ‖C^1‖ > C* (excluding τ = 0).
LINEAR VS. NONLINEAR AUTOCORRELATION
A statistical method for detecting signal nonlinearities is effective only if it quantifies these phenomena independently of the system equations (presumed unknown) that actually generated the signal (20). Consequently, two generalized first-order, nonlinear autoregressive systems are evaluated, for which the deviations from linearity can be specified using a single control parameter (b). These maps will first be used to familiarize readers with some statistical properties of nonlinear dynamical systems and then as signal generators to evaluate the C^2 test for linearity. Accurate interpretations of C^{1,2} necessitate a comparison of systems that have some properties in common but also differ in other aspects.
Nonlinear Autoregressive Systems (Maps)
System 1 is one of the simplest nonlinear systems to study (piecewise linear).
Parameter Space (a and b)
As shown in Fig. 1, the local stability of systems 1 and 2 is determined by a and b, thus dividing the parameter space into four domains (I–IV), depending on the slope and curvature of the map (6): I reflects a map with positive slope and curvature, II a negative slope with positive curvature, etc. The equilibrium for system 2 is locally stable for ‖a‖ < 1; i.e., X_t eventually returns to the equilibrium following a perturbation.
For parameter values outside its stable domain, system 1 grows unbounded to ±∞ (domain A of Fig. 1). However, for ‖a‖ > 1, system 2 has regions where periodic and chaotic oscillations occur (domain B); elsewhere, X_t grows unbounded or converges to an “apnea” equilibrium located at the origin (a > 1, b < a); this equilibrium is important for global analysis of system 2.
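The equation for system 2 is not reproduced in this excerpt, but the qualitative taxonomy of domains (a locally stable equilibrium giving way to periodic and then chaotic oscillations as the map steepens) can be illustrated with the canonical quadratic map, the logistic map, used here purely as a stand-in:

```python
def iterate(r, x0=0.3, n_transient=1000, keep=8):
    """Iterate the logistic map x -> r*x*(1-x), discard the transient,
    and return the next `keep` values."""
    x = x0
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        tail.append(x)
    return tail

stable = iterate(2.8)    # settles onto the fixed point 1 - 1/r
period2 = iterate(3.2)   # settles onto a period-2 oscillation
chaos = iterate(3.9)     # remains bounded but aperiodic
```

As the control parameter r increases, the slope of the map through the fixed point steepens past −1, and the equilibrium gives way first to a bounded oscillation and then to chaos, the analog of crossing from the locally stable region of Fig. 1 into domain B.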
Our present focus is on systems that are stable locally to a single equilibrium; within a neighborhood of (a,b) = (0,0), the qualitative dynamics depend on where the control parameters lie (I–IV). These quadrants are not unique to systems 1 and 2; to a degree of approximation, local stability of any nonlinear control system can be categorized similarly. With regard to system identification, estimates within the C^{1,2} plane will correspond directly with the (a,b) plane. In the parameter space the null hypothesis of a memoryless system corresponds with a single point (a,b) = (0,0), whereas the null hypothesis of linearity is a restriction to the line (a,0). For local analysis, the statistical question is: How far must the systems be perturbed from the point/line in order for the C^{1,2} tests to reject the null hypothesis? The flip-side question [Is the system sufficiently close to (a,b) = (0,0) that C^{3,4,...} can be disregarded?] is deferred until sections on global analysis.
Perturbation Analysis
Figure 2 shows responses of systems 1 and 2 to impulse perturbations (ε_t) of various amplitudes and directions. Both systems are scaled to a common equilibrium. In Fig. 2, left, the systems are configured such that the slope of the map at the equilibrium is the same for both, so that differences in their responses reflect the nonlinear terms.
Stochastic Analysis
What impact does nonlinear autocorrelation have on the statistical properties of the system output, e.g., the mean, variance, or skewness? To address this issue in Fig. 3, ε is configured as zero-mean Gaussian-distributed white noise. Because C^{1,2} of an uncorrelated process is an impulse function, system responses to this input are comparable (on average) to the impulse responses of Fig. 2. The SNR is the system equilibrium divided by the standard deviation of the input (σ_ε).
Because the parameter space for system 1 is “skewed” (Fig. 1), deviation between its mean and equilibrium depends on linear and nonlinear coefficients. The orthogonality of a and b for system 2 results in the difference depending only on b. However, the mean and equilibrium are equivalent only in the linear case for both systems (b = 0).
The coefficient of determination relates the percent increase in variance between input and output signals due to the memory of the system: r^2 = 1 − (σ_ε^2/σ_X^2).
In response to a symmetrically distributed input, the output of a linear system is also symmetrical about its mean (κ = 0). However, skewness of the output of a nonlinear system varies in proportion to its deviation from linearity and in the direction where autocorrelation is more positive: hypoventilatory nonlinearities perturb skewness toward more negative values, whereas hyperventilatory nonlinearities make κ more positive.
In Fig. 3, deviations between the equilibrium and mean output due to nonlinear autocorrelation depend on the SNR of the system; i.e., increasing stochastic variability amplifies the deviation for systems 1 and 2. Changes in skewness and coefficient of determination also vary with respect to the SNR, but only in systems with nonzero second derivatives (system 2). Note that an asymmetric P_X (‖κ‖ > 0) does not imply that the dynamics of X are nonlinearly autocorrelated any more than nonzero variance implies that X is linearly autocorrelated. These “lag-zero” statistics have no diagnostic value with respect to the nature of the system memory but are essential for scaling the C^{1,2} functions so that M^{1,2} can be quantified (appendix).
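The SNR dependence described above can be sketched with a hypothetical first-order quadratic map in deviation form, z_t = a·z_{t−1} + b·z_{t−1}² + ε_t (an assumed stand-in for system 2, with equilibrium z = 0): for b ≠ 0 the mean is displaced from the equilibrium and the output becomes skewed, and both effects grow as the noise level rises, whereas for b = 0 they do not.

```python
import numpy as np

def simulate(a, b, sigma, n=200_000, seed=1):
    """Mean and skewness of z_t = a*z[t-1] + b*z[t-1]**2 + eps_t,
    whose equilibrium (for the deviation variable z) is z = 0."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, n)
    z = np.empty(n)
    z[0] = 0.0
    for t in range(1, n):
        z[t] = a * z[t - 1] + b * z[t - 1] ** 2 + eps[t]
    d = z - z.mean()
    return z.mean(), np.mean(d ** 3) / np.std(z) ** 3

# decreasing SNR: the nonlinear (b != 0) mean drifts away from 0
for sigma in (0.05, 0.10, 0.15):
    print(simulate(0.5, 0.0, sigma)[0], simulate(0.5, 0.2, sigma)[0])
```

The parameter values are illustrative only; the point is the qualitative pattern, a noise-dependent displacement of the mean that exists only when the quadratic term is present.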
Sensitivity of C^{1,2} to SNR
Autocorrelation and autoskewness functions are shown in Fig. 4 for the overdamped, hypoventilatory cases of systems 1 and 2; for these simulations, the SNR was decreased from 20 to 10 to 6.7. As with r^2 and κ in Fig. 3, C^{1,2} of system 1 are invariant to changes in the SNR. On the other hand, when the noise level is low, system 2 remains near equilibrium and its dynamics are determined only by the linear coefficient.
Relation Between C^{1,2} and (a,b) Planes
Plots of C^1 vs. C^2 for simulations of systems 1 and 2 are shown in Fig. 5 (σ_ε = 0.10). Estimates at the first time lag vary in proportion to the linear and nonlinear parameters (a and b), thus dividing the C^{1,2} plane into four quadrants (I–IV). Linear systems correspond with the horizontal axes of the C^{1,2} and (a,b) planes; memoryless systems correspond with the origin of both planes. For nonlinear overdamped systems (a = +0.5), C^2 tends to stay within a single quadrant (I or IV), whereas underdamped systems (a = −0.5) oscillate between quadrants, e.g., II and IV for b > 0. These C^2 dynamics correspond with the manner in which the systems deviate from linearity in their temporal impulse responses (Fig. 2).
System Nonlinearities at Multiple Time Delays
Interpretation of C^{1,2} for systems with nonlinearities at multiple time delays is consistent with that already discussed. To demonstrate, system 2 is expanded to D dimensions, with the equilibria scaled as before. With regard to system identification, a minimum of five parameter estimates must be specified to (empirically) characterize a nonlinear oscillation: the slope and curvature at the smallest time lag, the slope and curvature at the feedback delay, and the delay itself.
Uniform vs. Nonuniform Sampling
Analyses of the previous sections were conducted with uniform sampling over time; i.e., intervals between measurements of X were presumed to be constant. However, most physiological processes measured as discrete events (e.g., heartbeats and breaths) do not occur at constant time intervals, leading to the question: Should analysis of such signals be conducted in discrete physiological time (τ = breaths) or continuous “real time” (τ = seconds) (3)? The algorithm listed in the appendix covers both cases.
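A minimal sketch of the two estimation schemes, using hypothetical per-breath data: the discrete estimate treats breaths as equally spaced samples, while the continuous estimate first interpolates minute ventilation onto a uniform grid of seconds. The interpolation scheme here (linear, at breath onsets) is an illustrative assumption, not the paper’s appendix algorithm.

```python
import numpy as np

def autocorr(x, max_lag):
    """Sample autocorrelation of a standardized sequence."""
    z = (x - x.mean()) / x.std()
    n = len(z)
    return np.array([np.mean(z[: n - k] * z[k:]) for k in range(max_lag + 1)])

def breath_autocorr(vi, tt, max_lag_breaths=10, dt=1.0):
    """vi: per-breath minute ventilation; tt: per-breath duration (s).

    Returns (discrete, continuous) estimates: the first in units of
    breaths, the second on a uniform grid of dt seconds after linear
    interpolation at the breath onset times."""
    vi, tt = np.asarray(vi, float), np.asarray(tt, float)
    onsets = np.concatenate(([0.0], np.cumsum(tt)[:-1]))
    vi_uniform = np.interp(np.arange(0.0, onsets[-1], dt), onsets, vi)
    lag_samples = int(round(max_lag_breaths * tt.mean() / dt))
    return autocorr(vi, max_lag_breaths), autocorr(vi_uniform, lag_samples)
```

When breath durations are nearly constant (coefficient of variation well below 1/4), the discrete estimate at lag k closely matches the continuous estimate at lag k·mean(Tt), consistent with the behavior described for Fig. 7A.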
Fortunately, the debate is often unnecessary, because analytic results using the two estimation methods are generally not contradictory, as in Fig. 7, where 30-min records of breath-by-breath minute ventilation (V˙i) and breath duration (Tt) are shown for four patients who exhibited oscillations in their ventilatory pattern. The discrete and continuous estimates of C^{1,2} for V˙i are shown for each patient. When the coefficient of variation of Tt is <1/4 (Fig. 7A), the two estimates of C^{1,2} are nearly identical, because the breath-by-breath intervals are relatively constant over time. When Tt is more variable (coefficient of variation > 1/4), greater differences between estimates are seen; C^2 dynamics tend to dampen out more quickly for discrete estimates.
Which algorithm should be used for breath-by-breath data? Although more computationally demanding, the continuous version is preferred, particularly for patients in whom Tt is highly variable. If comparisons are to be made between patients with different respiratory rates, it is important that C^{1,2} be estimated on equivalent time scales (e.g., seconds). Furthermore, results using this algorithm may be more representative of dynamics of arterial blood gases, which vary continuously over time.
However, breath-by-breath rhythms intrinsic to respiratory control may also exist that are asynchronous with respect to absolute time (31, 32). Inasmuch as some feedback to the control system is discrete by nature (e.g., breath-by-breath neuromechanical feedback), algorithm 1 may be preferred, particularly when high-frequency oscillations in a variable are seen (i.e., C^1 < 0 at small time lags). Furthermore, if C^{1,2} is to be used for building a nonlinear autoregressive model for analyzing breath-by-breath dynamics, it is advantageous to use the discrete estimate.
Relation of C^{1,2} to Return Maps
Similar to the top of Fig. 6, C^1 and C^2 tend to be negatively correlated when ventilatory oscillations develop, with autoskewness leading autocorrelation (counterclockwise motion in the C^{1,2} plane). The overdamped hypoventilatory phase (C^1 > 0, C^2 < 0) reflects the system’s tendency toward apnea, whereas the underdamped hyperventilatory phase (C^1 < 0, C^2 > 0) reflects negative, delayed feedback, which varies inversely with V˙i of previous breaths. In Fig. 8, return maps for the patients with CHF in Fig. 7 cross the diagonal with negative slope and positive curvature. Patient C has a longer feedback delay (45 vs. 38 s) and a more negative slope and positive curvature at this delay than patient D.
Local vs. Global Stability of Ventilatory Control
Ventilatory patterns analyzed in Figs. 7 and 8 were sufficiently stationary over a 30-min interval that any time variations in the statistics could be ignored (appendix). Issues related to C^{1,2} analysis of signals with significant trends in the mean, variance, or skewness are discussed in Fig. 9 for CHF patient E with persistent CSR in the ICU (4). Four 25-min subintervals were selected from the ventilatory record for “local” estimation of C^{1,2}. This patient exhibited intermittent switching between a typical CSR pattern having a negatively correlated C^{1,2} (pattern 1) and a pattern with a positive C^{1,2} slope (pattern 3); however, transitions between CSR patterns 1 and 3 are interceded by patterns with zero C^{1,2} slope, corresponding with linear dynamics local to an equilibrium (pattern 2). During a 13-h overnight recording, this patient exhibited similar transitions four times, with pattern 1 being the most persistent state (9.1 h); each transition between patterns 1 and 3 was interceded by pattern 2; i.e., the C^{1,2} slope did not change sign without becoming zero.
For apparent reasons, many analysts would categorize the ventilatory record of Fig. 9 as nonstationary; certainly, an estimate of C^{1,2} over the 200-min interval would be statistically unstable (appendix). Still, the analytic question remains: To what degree are the observed variations autonomous? Measurements of other physiological variables would suggest a high degree of determinism: patterns 2 and 3 coincide with reduced heart rate and oximetric arterial saturation, as well as lower mean ventilation (local C^0). Cycle times of the CSR oscillations increase during intervals 2 and 3, as might be expected with lower cardiac output. Whereas cardiorespiratory interactions are clearly evident, interdependence of these systems with mechanisms controlling sleep state is likely involved as well. Such physiological couplings would result in a synthesized system having a potentially very complex geometry and a high number of independent degrees of freedom.
AUTOREGRESSIVE SYSTEMS WITH CORRELATED, NON-GAUSSIAN INPUTS
Simulations of previous sections employed autoregressive models with fixed parameters and uncorrelated, Gaussian-distributed inputs. For the simulations of Fig. 10, a linear first-order moving-average variable was first generated: ε_t = 0.5(u_t + u_{t−1}).
Linear, Time-Varying Approach for Evaluating Autoskewness
In presuming stationarity, the analyses of Fig. 10 construe the mean (C^0) as constant over a sample interval and C^{1,2} as varying only with respect to a finite number of time delays. However, the “moving” equilibrium/slope simulations raise the following question: What if C^2 is presumed to be zero and C^{0,1} to vary in a piecewise constant fashion over the sample interval? One strategy for linear analysis of a time series suspected to be nonstationary is to divide the sequence along subintervals and estimate the mean and autocorrelation locally with respect to time (as in Fig. 9; see Ref. 29 regarding the uncertainty principle with respect to this method). In Fig. 10, 64 intervals (of 32 samples each) were used. The Gaussian/linear assumption (local variations in C^0 and C^1 are mutually independent) is then evaluated using linear regression analysis. Statistical testing for the linear cases (A and B) would conclude that C^0 and C^1 vary independently.
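The piecewise-stationary strategy can be sketched as follows: split the series into consecutive windows (64 windows of 32 samples, as above), estimate the local mean (C^0) and lag-1 autocorrelation (C^1) in each, and regress local C^1 on local C^0; a significant slope argues against the Gaussian/linear assumption that the two vary independently. The estimators here are generic, not the paper’s.

```python
import numpy as np

def local_moments(x, n_windows=64, width=32):
    """Local mean (C^0) and lag-1 autocorrelation (C^1) per window."""
    c0, c1 = [], []
    for i in range(n_windows):
        w = x[i * width : (i + 1) * width]
        z = w - w.mean()
        c0.append(w.mean())
        c1.append(np.sum(z[:-1] * z[1:]) / np.sum(z * z))
    return np.array(c0), np.array(c1)

def dependence_slope(c0, c1):
    """Least-squares slope of local C^1 on local C^0; near zero when
    the Gaussian/linear assumption (mutual independence) holds."""
    slope, _ = np.polyfit(c0, c1, 1)
    return slope

# linear (white-noise) case: the regression slope is statistically
# indistinguishable from zero
rng = np.random.default_rng(2)
c0, c1 = local_moments(rng.normal(0.0, 1.0, 64 * 32))
```

Note the uncertainty principle mentioned in the text: shorter windows track C^0 more finely but make each local C^1 estimate noisier.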
If parameter estimates for patient E in Fig. 9 are examined from a similar perspective, it should be apparent that localized estimates of C^{1,2} varied with level of V˙i (C^0). Higher-order functionals (C^n, n > 2) are necessary to approximate the geometries of systems that produce such interdependencies.
METASTABLE SYSTEMS WITH THRESHOLD NONLINEARITIES
Quadratic approximations of systems are not globally stable, in that there exists a perturbation of sufficient magnitude to break the intersection of the estimated map with the diagonal, resulting in a divergence toward ±∞; for cases C, D, and E, the divergence was transient, because ε_t was not held beyond the threshold level indefinitely (arrows). Dynamical systems become structurally unstable as the slope of their map along the diagonal approaches unity (6); this section demonstrates how the presence of cubic nonlinearities bounds such systems from diverging to infinity. Their analysis requires the next highest-order functional (C^3), which is referred to as the autokurtosis function, inasmuch as it quantifies the degree to which dynamics perturb system density (P_X) toward a bimodal distribution. Estimation and statistical testing of C^{3,4,...} are consistent with that of C^2 (appendix).
Simulations of previous sections presumed that system dynamics took place locally to a single equilibrium. For linear and weakly nonlinear control systems, this is a valid assumption; a straight line with slope <1 can intersect the diagonal only at a single point.
Symmetrical Case (b = 0)
Polynomial maps for which the even terms disappear possess an odd symmetry with respect to the equilibrium.
Symmetry Breaking (b ≠ 0)
The simulations of Fig. 11B show how the quadratic term breaks the symmetry of system 3. For b > 0, the mean and skewness of the output are shifted toward more positive values.
These simulations demonstrate that the autoskewness function discriminates only those nonlinearities that break system symmetry and alter the mean and skewness of its output. Thus a finding of ‖C^2‖ > 0 is a sufficient, but not necessary, condition for concluding that a (stationary) signal is nonlinearly autocorrelated; the same can be said for C^3. It should be evident from Fig. 11 why a positive cubic nonlinearity would result in a system that oscillates to ±∞ in response to small perturbations; because of their reliance on bilateral negative feedback, C^3 tends to be negative for homeostatic control systems. A finding of ‖C^3‖ > 0 does not necessarily imply that the system is “metastable” (although the converse is true), only that the curvature is asymmetric with respect to C^0. If cubic memory is significant (M^3 ≫ 0), it does imply that a model possessing a threshold nonlinearity is the best representation of the system at this order of approximation.
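The symmetry argument can be sketched with a hypothetical cubic map in deviation form, z_t = a·z_{t−1} + b·z_{t−1}² − c·z_{t−1}³ + ε_t (an assumed stand-in for system 3): with b = 0 the map is odd-symmetric about its equilibrium and the output skewness remains near zero even though the dynamics are strongly nonlinear, whereas b ≠ 0 breaks the symmetry and skews the output, which is exactly what C^2 detects.

```python
import numpy as np

def output_skewness(a, b, c, sigma=0.2, n=200_000, seed=3):
    """Skewness of z_t = a*z + b*z**2 - c*z**3 + eps (equilibrium z = 0);
    the -c*z**3 term supplies bilateral negative feedback."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, n)
    z = np.empty(n)
    z[0] = 0.0
    for t in range(1, n):
        zp = z[t - 1]
        z[t] = a * zp + b * zp ** 2 - c * zp ** 3 + eps[t]
    d = z - z.mean()
    return np.mean(d ** 3) / np.std(z) ** 3

sym = output_skewness(0.8, 0.0, 0.5)     # odd-symmetric map: skew ~ 0
broken = output_skewness(0.8, 0.4, 0.5)  # quadratic term breaks symmetry
```

This is consistent with the sufficiency-but-not-necessity point above: the symmetric cubic case is nonlinearly autocorrelated yet produces no autoskewness signature.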
Structural Stability of Ventilatory Dynamics
Whereas ventilatory control of healthy adults is fairly robust, it would be dangerous to presume that respiration is structurally stable in patients with severe cardiopulmonary dysfunction. For example, some patients may intermittently increase their alveolar ventilation at the cost of an elevated work of breathing while later tolerating hypercapnia and dyspnea to rest fatigued respiratory muscles. Weber et al. (38) described similar phenomena in isometrically loaded biceps muscles, where “steady states of [electromyogram] activity can be achieved with light loads, but with heavy loads the dynamic system experiences continuous status transitions that result in task failure” (38). Figure 12 shows ventilatory patterns for two patients with pneumonia who had recently been weaned from mechanical ventilation in the ICU. Ventilatory dynamics of each exhibit highly positive C^1 and negative C^3; along with the nature of the rapid transitions between respiratory patterns, these statistics suggest that their ventilatory control was structurally unstable in a manner similar to that of Fig. 11. If two distinct ventilatory set points were present, it is clear that the higher one was the more persistent.
The metastable geometry of Fig. 11 consisted of two locally stable equilibria (zero-dimensional attractors) with diametric curvature separated by an unstable equilibrium (a threshold).
PERIODIC AND CHAOTIC OSCILLATIONS
Simulations of previous sections relied on the presence of a time-varying input to perturb the systems from equilibrium to identify components of their impulse response; C^n would become singular for ε_t = 0, as the deterministic component of the density function (P_X) was a zero-dimensional fixed point.
System 2 is useful for generating a broad range of attractors having very complex distributions. The system of Fig. 13 has an elliptical cycle for a_1 = +0.5 and a parabolic density function; however, as X_t narrows toward the diagonal at high values, P_X is negatively autoskewed. As a_1 → +1, X_t approaches an orbit that intersects the diagonal. For a_1 > 0, trajectories follow a continuous path in the phase plane (X_t vs. X_{t−τ}).
What type of attractors result in higher-order memories, i.e., M^{3,4,...} > 0? Recall that C^1 is undefined for X_t = constant, and C^2 is singular for all binary processes. The proximity with which an attractor approaches such states determines how strongly C^{3,4} are perturbed; C^{3,4} deviate farthest from 0 where X_t approaches a saddle point (a_1 = −0.96) and a saddle cycle (a_1 = −0.6) in Fig. 13. Higher-order nonlinearities are generally associated with very-low-frequency components that switch X_t between states over time (appendix). The order of system memory (highest n where M^n > 0) is related to the number and symmetry of deterministic modes that can be deconvolved from P_X (e.g., peaks for equilibria, parabolas for cycles). In general, odd functionals discriminate the modes; even C^n quantify their relative symmetry (appendix).
DISCUSSION
Nonlinear variants (C^n) of the autocorrelation function C^1 were introduced as analytic tools for identifying nonlinearities within the output of a control system. With primary focus on autoskewness (C^2), the analysis was tested using simulations of several nonlinear systems driven by Gaussian- and non-Gaussian-distributed noise. Analysis of ventilatory patterns of patients with various degrees of cardiorespiratory dysfunction demonstrated the necessity of quantifying both the linear (n = 1) and nonlinear components when analyzing such signals. Where statistical testing indicated the presence of significant autocorrelation, the hypothesis of linearity was rejected at the 99% confidence level, indicating the presence of significant nonlinearities at various time delays. Observations that greater persistence in C^1 coincides with proportionally greater C^2 memory are consistent with the notion that, as the set point becomes less stable in the local sense (elevated M^1), quadratic nonlinearities have greater influence on the ventilatory dynamics (elevated M^2). In general, elevated M^n indicates greater persistence in C^n.
Stochastic vs. Deterministic Components of Density Functions
As discussed in the appendix, the functionals up to order N are estimated using basis vectors that have been “standardized” according to the first N + 1 moments of the probability density function (P_X). These vectors are important for scaling the initial conditions.
Any C^{n} test that concludes that a time series is linearly/nonlinearly autocorrelated can be checked using a surrogate time series.
Similarly, any C^{n} test (n > 1) that detects the presence of nonlinear autocorrelation can be verified by comparing results found for the original X_{t} with those for surrogate data generated by phase randomization (12). Briefly, this algorithm consists of 1) calculating the fast Fourier transform (FFT) of X_{t}, 2) randomizing its phase components while holding its magnitude constant, 3) making the FFT self-adjoint, and 4) calculating the inverse transform back to the time domain. The result is a real-valued time series with the same magnitude spectrum, and therefore the same C^{1}, as the original.

If C^{n} (n > 1) is nonzero for X_{t} but not for its surrogates, the nonlinear autocorrelation can be attributed to the original signal rather than to estimation artifact.
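The four steps above can be sketched as a generic NumPy implementation (not the authors' code); using the real FFT enforces the self-adjoint (conjugate-symmetric) spectrum automatically, so the inverse transform is real valued:

```python
import numpy as np

def phase_randomized_surrogate(x, seed=0):
    """Surrogate series with the same magnitude spectrum (hence the
    same C^1) as x, but with randomized Fourier phases, destroying
    any nonlinear temporal structure."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = x.size
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, spec.size)
    surrogate = np.abs(spec) * np.exp(1j * phases)
    surrogate[0] = spec[0]          # keep the DC bin (preserves the mean)
    if n % 2 == 0:
        surrogate[-1] = spec[-1]    # keep the real-valued Nyquist bin
    # irfft assumes a conjugate-symmetric ("self-adjoint") spectrum,
    # so the inverse transform is real valued by construction
    return np.fft.irfft(surrogate, n=n)
```

Because all spectral magnitudes are preserved, the surrogate has exactly the same mean, variance, and linear autocorrelation as the original.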
Dynamic Changes in Mean, Skewness, and Coefficient of Determination
For stationary X_{t}, a conclusion that C^{2} > 0 implies that mean output is elevated relative to the system equilibrium; this type of nonlinearity also skews the density function of X_{t} toward more positive values. Similarly, C^{2} < 0 implies a mean less than the set point and a negative autoskewness. When C^{2} oscillates between positive and negative values, the asymmetries tend to cancel each other somewhat with respect to changes in mean and skewness.
From linear statistics, the concept of variance having random and nonrandom components is familiar to most systems analysts, with the coefficient of determination (r^{2}) reflecting the correlated fraction of the signal variance (25, 29). For nonlinear systems with nonzero second derivatives, r^{2} depends also on the SNR, with system memory generally increasing with level of noise (23). Less familiar, however, is the concept that the mean and skewness of the output signal can also be viewed in terms of deterministic and random components. Whereas the system equilibrium is the primary determinant of the mean output, deviations in the mean from that equilibrium reflect the interaction of random inputs with symmetry-breaking nonlinearities.
Thus, for patients with strong symmetry-breaking nonlinearities in their ventilatory control, an injection of noise within the system can induce relative changes in mean V̇i (as well as in r^{2} and κ). Although the presence of such asymmetries can be detected using autoskewness analysis, we are unaware of a way to reliably quantify the magnitude of dynamic hypo- or hyperventilation (in l/min) using estimates of C^{1,2}.
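The noise-induced shift in mean and skewness can be illustrated with a toy first-order system; the map f and its coefficients below are hypothetical (not a fitted ventilatory model), chosen only so that the noise-free equilibrium sits at 0 while a quadratic-like term breaks symmetry:

```python
import numpy as np

def f(x):
    """Hypothetical feedback with a symmetry-breaking term: near x = 0
    it behaves like 0.5*x + 0.3*x**2, and x = 0 is the (stable)
    noise-free equilibrium (set point)."""
    return 0.5 * x + 0.3 * np.tanh(x) ** 2

rng = np.random.default_rng(2)
n = 200_000
e = rng.normal(size=n)          # symmetric Gaussian "noise injection"
x = np.zeros(n)
for t in range(1, n):
    x[t] = f(x[t - 1]) + e[t]

z = (x - x.mean()) / x.std()
mean_shift = x.mean()           # elevated above the noise-free equilibrium (0)
skewness = np.mean(z ** 3)      # positive: density skewed toward larger values
```

Even though the driving noise is symmetric and the set point is 0, the asymmetric feedback rectifies the fluctuations, raising the mean output and skewing its density positive.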
Relation of C^{n} to Return Maps
For local analysis, C^{1,2} quantify the slope and curvature of return maps along the diagonal (X_{t + τ} vs. X_{t}).
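One simple way to visualize this relation is a quadratic least-squares fit to the lag-1 return map (X_{t+1} vs. X_{t}); for a purely linear AR(1) process the fitted curvature should vanish while the slope recovers a_{1}. This fit is only an illustration, not the paper's C^{1,2} estimator:

```python
import numpy as np

# Simulate a linear AR(1) process (coefficient chosen for illustration)
rng = np.random.default_rng(3)
n = 100_000
a1 = 0.7
e = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = a1 * x[t - 1] + e[t]

# Quadratic fit to the lag-1 return map X_{t+1} vs X_t:
# coeffs = [curvature, slope, intercept] in descending powers.
coeffs = np.polyfit(x[:-1], x[1:], 2)
```

For a nonlinear system, the quadratic coefficient would instead pick up the curvature of the map near the equilibrium.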
“Can Periodic Breathing Have Advantages for Oxygenation?”
Levine et al. (17) posed this question recently; on the basis of simulations of their model for ventilatory control, they speculated that asymmetric oscillations can yield higher mean ventilation over time than the controller set point. However, it should be recognized that different types of asymmetries are possible once ventilatory oscillations develop; in Fig. 9, higher arterial saturations were observed when the C^{1,2} slope was negative. Thus improvements in oxygenation may not be seen with asymmetric oscillations per se but only in those cases where the delayed feedback has a hyperventilatory nonlinearity, a curvature that is consistent with any feasible model of peripheral chemosensitivity (4, 17, 27). However, the question itself may also be ill-posed with regard to causality: the global stability of ventilatory control in CHF patients may be such that its set point cannot be raised without also undergoing a bifurcation to an oscillatory ventilatory pattern. If the objectives of minimizing variability in breathing and maximizing mean alveolar ventilation become somewhat competitive, then correlated variations between the two over time would be classified as a structural instability.
Structural Instabilities vs. Time-Varying Dynamics
Structurally unstable nonlinear systems have coordinates along which dynamics have very long time constants (a “slow” manifold) (6). In control terms, linearization of the system will produce a positive “pole” that is very near the unit circle, resulting in C^{1} functions that decay very slowly. Furthermore, small perturbations are capable of moving system poles between the exterior and interior of the unit circle, resulting in rapid transitions in system dynamics as one state becomes unstable (repelling) and another becomes stable (attracting). It is because of these transitions that measurements of outputs from structurally unstable systems may be regarded as time varying by some analysts (appendix). A question of whether the irregular dynamics are “due” to a high system gain or to a time-varying input is somewhat ill-posed; unless comparable perturbations or physiological noise result in wildly varying breathing patterns in healthy adults under similar conditions, observations of such dynamics in ICU patients must be attributed to their pathologically nonlinear ventilatory control. As a clinical analogy, an encounter with an allergen may be responsible for the timing of an asthma exacerbation; however, it is ultimately the patient’s high sensitivity to the external input that distinguishes him from a healthy person.
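The slow decay of C^{1} for a pole near the unit circle is easy to reproduce with two AR(1) simulations (coefficients chosen arbitrarily for illustration):

```python
import numpy as np

def acf(x, lag):
    """Sample autocorrelation (C^1) at a single lag."""
    z = x - x.mean()
    return float(np.dot(z[:-lag], z[lag:]) / np.dot(z, z))

def ar1(a, n=100_000, seed=4):
    """Simulate X_t = a*X_{t-1} + e_t with unit-variance Gaussian noise."""
    rng = np.random.default_rng(seed)
    e = rng.normal(size=n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + e[t]
    return x

slow = ar1(0.99)  # pole very near the unit circle (a "slow" manifold)
fast = ar1(0.50)  # well-damped pole
# acf(slow, 20) stays large (theory: 0.99**20, about 0.82), while
# acf(fast, 20) is already negligible (0.50**20, about 1e-6)
```

A small change in the coefficient (e.g., 0.99 to 1.01) would move the pole outside the unit circle and make the same recursion diverge, which is the sense in which small perturbations can flip a state from attracting to repelling.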
Pulse-Amplitude vs. Pulse-Width Modulation for Kernel Estimates
C^{1} can be viewed as quantifying the system’s averaged response to a perturbation at t = 0 of first-order magnitude, i.e., the expectation of products of first-order residues, E[(X_{t} − μ)(X_{t + τ} − μ)], normalized by the signal variance.
Differential Response to Small vs. LargeImpulse Perturbations
Because of positive autocorrelation, a breath that is slightly above average may be followed by a sequence of breaths that are also above the mean (overdamped dynamics due to positive slope); however, a substantially deeper breath may be followed by a sequence of breaths that are significantly below the mean (hypoventilatory response due to negative curvature). Examples of this dynamic were shown in Figs. 2B and 7B; the nonlinear response might be mistaken as evidence of underdamped stability by analysts relying strictly on linear methods. Other investigators have had various perspectives on the differential response of the ventilatory controller to small vs. large perturbations. Two groups isolated the dynamics after a spontaneous deep breath to quantify chemoreflex gain (14, 30). Others have considered deep breaths to be statistical outliers to be deleted from the time series or “clamped” at a fixed value (28, 36). We advocate using all the measured data and quantifying both the averaged response (C^{1}) and the deviations from C^{1} that appear when residues of nth-order magnitude within the time series are amplified (C^{n}).
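A hypothetical one-step map with positive slope and negative curvature (coefficients invented for illustration, not fitted to ventilatory data) reproduces the asymmetry described above: a small impulse decays while remaining above the mean, whereas a large impulse is immediately followed by a below-mean response.

```python
import numpy as np

def step(x, a=0.6, c=-0.2):
    """One-step return map with positive slope a and negative
    curvature c (illustrative values only)."""
    return a * x + c * x ** 2

def response(x0, nsteps=10):
    """Deterministic (noise-free) trajectory after an impulse of size x0."""
    traj = np.empty(nsteps + 1)
    traj[0] = x0
    for k in range(nsteps):
        traj[k + 1] = step(traj[k])
    return traj

small = response(1.0)  # slightly-above-average breath: decays, stays positive
large = response(4.0)  # deep breath: undershoots below the mean at once
```

A purely linear analysis would average these two regimes together and could misread the large-impulse undershoot as underdamped (oscillatory) linear dynamics.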
Other Nonlinear Analyses
Recently, a range of analytic methods have been proposed for quantifying such nonlinear phenomena as fractal dimension, entropy, and “self-similarity” in physiological signals, results from which some inferences of chaotic dynamics have been drawn (2, 3, 10, 12, 31, 32, 38). Analytic perspectives seem to have gone from one extreme (all systems are linear) to the other (all are chaotic) without regard for the interim domains in which a majority of systems are likely to exist, e.g., I–IV of Fig. 1. Furthermore, many of the techniques from chaos theory are computationally expensive relative to their usefulness in system identification and, in some cases, may be performed by investigators with inadequate training in linear analysis. Because C^{2,3,...} relate the time-delay and geometric properties of nonlinearities when they are present within a time series, we believe estimation of these autofunctionals to be a more relevant approach for analyzing and modeling nonlinear systems. It seems logical that the presence of nonlinearities within a signal should be submitted to tests using C^{n} (which are easy to estimate and interpret) before other analyses are undertaken.
Summary
Clinicians and physiologists have historically reported their observations with regard to the mean values of system outputs or to the correlation between independent measurements. Largely because of the recent availability of fast, inexpensive computers and an influx of engineers into physiology, techniques from linear systems analysis have been employed to characterize the dynamical (i.e., autocorrelated) properties of system variables. However, clinical application of such techniques leads to something of an analytic paradox: the statistical methods that are most broadly understood (linear) presume that the physiological control system is operating close to an equilibrium state (locally stable) and thus become less accurate for the very cases where interest in applying them is highest, i.e., patients with unstable physiology. Indeed, no physical system is perfectly linear or Gaussian (or stationary); approximating models or functionals truncated at linear terms are unbiased only when higher-order terms are small enough to neglect. From this perspective, parameter estimates that quantify physiological deviations from linearity, Gaussianity, or stationarity may provide the greatest insight into pathological control systems. The simple graphical technique of plotting autocorrelation vs. autoskewness functions of system outputs and testing for statistical significance provides a reliable means of classifying the symmetry and stability of ventilatory control.
Acknowledgments
This research was funded by The Whitaker Foundation.
Footnotes
Address for reprint requests: M. Sammon, Div. of Pulmonary and Critical Care Medicine, UMass Medical Center, 55 Lake Ave., N., Worcester, MA 01655.
 Copyright © 1997 the American Physiological Society
Appendix
Algorithm for estimating C^{n}.
Notationally, the method of innovations (29) or partial correlation analysis (25) is easier to review using variables x and y. The regression model is
The n “zeros” of the polynomial Z_{n} serve to partition the density function of X_{t} relative to its moments: Z_{1} partitions P_{X} relative to its mean; for the parabola, Z_{2} = 0 at ±1 standard deviation of X_{t}. For symmetrical P_{X}, the zeros for Z_{3} are at the mean and at mean ± 1 standard deviation; for asymmetric P_{X}, the zeros vary with respect to the skewness coefficient. In every case, the basis vector (Z_{n}) for approximating C^{n} is scaled by P_{X}; consequently, the deterministic modes of P_{X} can be deconvolved without a priori assumptions on the initial distribution.
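The placement of the zeros of Z_{2} can be checked with a textbook Gram-Schmidt construction under the empirical distribution; note this is only one plausible reading of the standardization (the paper's own algorithm on p. 991 defines Z_{3} and higher):

```python
import numpy as np

def z2_coeffs(x):
    """Coefficients (descending powers) of a second standardized basis
    polynomial in z = (x - mean)/sd, built by Gram-Schmidt under the
    empirical distribution: Z_2(z) = z^2 - g*z - 1, with g the sample
    skewness, so that E[Z_2] = 0 and E[z * Z_2] = 0 by construction."""
    z = (x - x.mean()) / x.std()
    g = np.mean(z ** 3)
    return np.array([1.0, -g, -1.0])

rng = np.random.default_rng(5)
gauss = rng.normal(size=100_000)
roots = np.sort(np.roots(z2_coeffs(gauss)))
# For (near-)Gaussian data g is close to 0, so Z_2 vanishes near z = -1
# and z = +1, i.e., at mean +/- 1 standard deviation, as stated above.
```

For asymmetric data g is nonzero and the two zeros shift to (g ± sqrt(g² + 4))/2, illustrating how the partition varies with the skewness coefficient.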
Nonetheless, for some time series it may be advantageous to apply an orientation-preserving, nonlinear transformation to improve the symmetry of a signal before estimating C^{n}. The logarithmic and square-root transformations are commonly used (for X_{t} > 0) but do not guarantee that the resulting P_{X} has zero skewness. The second algorithm below uses nth-order residues of X_{t} from its median value to apply a piecewise-linear transformation so that E[X^{n}] = 0 (for odd n). This monotonically increasing transform does not alter the sequence of X_{t}, but by reducing the influence of samples in the tail of P_{X} (“outliers”), it improves the statistical stability of C^{n} for n > 1 (appendix). However, inasmuch as samples in the tail can reflect the presence of asymmetrically distributed positive autocorrelation, the transformation also reduces their influence on estimates of C^{n} for n > 1.
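A minimal sketch of the median-based idea, zeroing only the third moment with a single piecewise-linear rescaling (the published algorithm, listed on p. 991 and not reproduced in this chunk, handles higher odd orders as well):

```python
import numpy as np

def symmetrize(x):
    """Monotone, piecewise-linear transform of the residues about the
    median, rescaling the upper half so the third moment vanishes.
    A sketch of the idea only; details may differ from the paper."""
    r = x - np.median(x)
    lower = np.sum(r[r < 0] ** 3)          # negative by construction
    upper = np.sum(r[r > 0] ** 3)          # positive by construction
    s = (-lower / upper) ** (1.0 / 3.0)    # gain applied above the median
    return np.where(r > 0, s * r, r)

rng = np.random.default_rng(6)
x = rng.exponential(size=50_000)           # strongly right-skewed input
y = symmetrize(x)
```

Because the same positive gain is applied to every sample above the median, the transform preserves the ordering (and hence the sequence) of the data while shrinking the heavy tail's influence.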
The two pseudo-C programs listed on p. 991 are Matlab functions, with “%” denoting nonexecutable comment lines and “;” marking the end of an executable statement.
Appendix
Statistical stability and linear decompositions of X_{t}.
In general, statistical stability refers to the sensitivity of a parameter estimate to sample size (T) or individual samples (e.g., outliers). Stability of estimates of
Where very-low-frequency trends are present in the original signal, some analysts employ differencing; i.e., their predicted value is a vector