tan Δ*φ* = <*a*₂> / <*a*₁>, <*a*ᵣ> > 0,

where "<>" refers to a time average. But the time average isn't as straightforward as it might first appear. Consider the case where the rider pedals a bit, then the bike sits for several minutes. The accelerometers are accumulating data associated with the pull of gravity in a particular direction for an extended period of time. This isn't what's wanted. For the average of the transverse component of acceleration to go to zero (the component associated with the pedal's motion around the circumference of its circle), we want to average over the angles of the circle, not really over time. A time average is a convenience, since accelerometers sample the acceleration at specific intervals in time. But if the pedals are sitting idle, we don't want to keep accumulating, sample after sample, the acceleration associated with that one angle.

Ideally, then, to get an average not over time, but rather over angle, we'd take the time average weighted by the rate of change of the angle. I'll use brackets with a subscript *φ* to refer to the average with respect to angle:

<*a*ₓ>_*φ* = <*a*ₓ (∂*φ*/∂*t*)> / <∂*φ*/∂*t*>.
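As a sketch of what that weighted average looks like in code (the names `a_x` and `dphi_dt` are my own, hypothetical: a list of accelerometer samples and the corresponding estimates of the crank's angular rate):

```python
def angle_weighted_average(a_x, dphi_dt):
    """Time average of samples a_x weighted by the crank angular
    rate dphi/dt, so samples taken while the pedals sit idle
    (dphi/dt ~ 0) contribute nothing to the average."""
    num = sum(a * w for a, w in zip(a_x, dphi_dt))
    den = sum(dphi_dt)
    return num / den
```

Samples logged while the cranks are stationary get zero weight, which is exactly the behavior the angle average is meant to capture.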

So we need a way, even a crude way, of estimating how fast the cranks are turning. Well, the obvious approach is to use that oscillating gravity signal: it oscillates once per pedal revolution. It's probably not a bad approximation to assume the pedal is moving at a constant rate during each full oscillation of the gravity signal.
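Under that constant-rate assumption, the angular-rate estimate is just one full revolution (2π radians) per oscillation period; a trivial helper (hypothetical name):

```python
import math

def crank_rate_from_period(period_s):
    """Assume the crank turns at a constant rate during one full
    oscillation of the gravity signal: one revolution per
    oscillation, so dphi/dt = 2 * pi / T."""
    return 2.0 * math.pi / period_s
```

At 90 rpm the period is 2/3 sec, giving roughly 9.4 radians/sec.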

To isolate the gravity signal, we want to filter the data first. At the high-frequency end, there are all sorts of noise sources which get in the way. We can assume the rider will only be pedaling up to a cadence of, for example, 220 rpm ≈ 23 radians/sec. This is pretty much the top-end spin of even the best track rider. So the time constant shouldn't be much larger than 1/(23 radians/sec) ≈ 40 msec. Anything varying faster than this is likely from vibrations in which we have no interest. I'll recommend the smoothing time be set to double this: 80 msec. This will still preserve those 220 rpm oscillations, although the magnitude will be reduced by around 50%. For the more common cadence range of 60 to 130 rpm, however, it will do a better job than a shorter time constant.
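As a sanity check on that 50% figure: a single-pole low-pass with time constant τ attenuates a sinusoid at angular frequency ω by a factor 1/√(1 + (ωτ)²). A quick check (assuming the exponential filter behaves as a single pole):

```python
import math

def one_pole_gain(omega, tau):
    """Magnitude response of a single-pole low-pass filter with
    time constant tau at angular frequency omega (rad/sec)."""
    return 1.0 / math.sqrt(1.0 + (omega * tau) ** 2)

# 220 rpm ~ 23 rad/sec with an 80 msec time constant:
# gain ~ 0.48, i.e. roughly a 50% reduction in magnitude
```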

A simple "causal" low-pass filter ("causal" means referring only to past data, not future data) is to convolve the data with an exponential decay function with the desired time constant. In this case, we want to use a time constant of 80 msec. I'll admit right here I think another filter would work better. Decaying exponentials have a hard edge which preserves a bit too much noise. Gaussians are the best, but in their pure form are non-causal. A good compromise might be a gated sine wave (i.e., one half-period of a sine wave, using positive values only). I've been a bit lazy about implementing one of these, however. So for now I'll stick with the exponential.
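One common discrete realization of that exponential convolution is a one-pole recursion; here's a minimal sketch, assuming uniformly spaced samples with spacing `dt` seconds (the function name is my own):

```python
def exp_smooth(samples, dt, tau=0.080):
    """Causal low-pass: equivalent to convolving with a decaying
    exponential of time constant tau (80 msec here), implemented
    as a one-pole recursion y += alpha * (x - y)."""
    alpha = dt / (tau + dt)  # per-sample blending factor
    out = []
    y = samples[0]  # seed with the first sample to avoid startup transient
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out
```

Each output sample only depends on past inputs, so the filter is causal, as required for real-time processing on the bike.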

But we also want to filter the data at the other end. Things which vary slowly over time need to be filtered, as well. For example, the bike's acceleration, the rate of change of cadence, and the centripetal acceleration proportional to the square of the cadence likely all vary more slowly than the gravity component. So a "high-pass" filter which filters out low-frequency components is also wanted.

A natural high-pass filter is differentiation: take the rate of change of the accelerometer readings with respect to time. In a discrete sense, differentiation corresponds to the ratio of the change in the accelerometer reading to the difference in time between the accelerometer readings.
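In code, that discrete derivative is just the ratio of successive differences (sample times need not be uniform):

```python
def discrete_derivative(samples, times):
    """High-pass via differentiation: the change in each
    accelerometer reading divided by the change in time
    between consecutive samples."""
    return [(samples[i + 1] - samples[i]) / (times[i + 1] - times[i])
            for i in range(len(samples) - 1)]
```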

So this is the resulting approach:

- Smooth the data with an exponential smoothing function with time constant 80 msec to filter out high-frequency noise.
- Take the discrete-time derivative of the smoothed data to filter out smoothly-varying contributions to the accelerometer readings.
- Measure a pedal stroke as the time between two zero crossings of an accelerometer signal.
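The three steps above can be sketched as follows; the `min_gap` debounce parameter is my own addition, one way of rejecting zero crossings that come too close together (all names here are hypothetical):

```python
def zero_crossing_cadence(signal, times, min_gap=0.2):
    """Estimate cadence from upward zero crossings of the smoothed,
    differentiated signal: one crossing per pedal revolution.
    Crossings closer together than min_gap seconds are rejected
    as noise rather than counted as pedal strokes."""
    crossings = []
    for i in range(1, len(signal)):
        if signal[i - 1] < 0.0 <= signal[i]:
            t = times[i]
            if not crossings or t - crossings[-1] >= min_gap:
                crossings.append(t)
    # cadence in rpm from the interval between successive crossings
    return [60.0 / (t2 - t1) for t1, t2 in zip(crossings, crossings[1:])]
```

In practice the signal fed in would be the output of the smoothing and differentiation steps; here any zero-mean oscillation with one cycle per revolution works.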

Here are the plots:

The top plot shows the data before and after smoothing. You can see the effect: the data is obviously smoother, but also, since these data are at the upper end of the targeted cadence range, the signal is somewhat attenuated. Smoothing also introduces a small delay, approximately equal to the smoothing time constant, which isn't important to this analysis.

The bottom plot shows the result of the differentiation step, applied either with or without the prior smoothing step. The differentiation has eliminated the offset from zero mean evident in the top plot. Furthermore, here the advantage of the smoothing is even clearer: the unsmoothed curve is essentially useless. The smoothed curve gives us a signal from which we can reasonably hope to extract cadence via zero crossings, provided we take care not to accept multiple crossings that fall too close together.

I'd like to repeat these plots with a better low-pass filter. When I get a chance.... certainly after I'm done scoring tomorrow's Low-Key hillclimb, the final for the 2009 series!

To be continued...
