Tuesday, August 20, 2013

Power meter cadence comparison (analysis of DCRainmaker data)

The cleverest way I've seen to check cadence data is due to Robert Chung. He uses cadence and speed in conjunction with an assumed wheel rolling circumference to calculate the gear ratio for each data point. If cadence and speed were measured perfectly, I'd be able to see exactly what gear the rider was in at every point where he was pedaling (not coasting). On the other hand, if the cadence or speed are measured sloppily, then the gear calculation would also be sloppy. The key insight is that gears are discrete: there's a countable number of choices. So if I can extract the gear, I should see only a discrete set of results: plotting gear over time should show steps, with transitions between the steps corresponding to shifting. Deviations from the steps should occur only when the rider is coasting with the cranks stationary or, hopefully not often, spinning the cranks while coasting.
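A minimal sketch of this gear-extraction trick, assuming speed in m/s, cadence in rpm, and a typical 700c rolling circumference of 2.096 m (the function name and the numbers are mine, purely illustrative):

```python
def gear_ratio(speed_mps, cadence_rpm, circumference_m=2.096):
    """Wheel revolutions per crank revolution: with perfect data this
    equals chainring teeth / cog teeth, so it should take discrete values."""
    crank_rev_per_s = cadence_rpm / 60.0
    wheel_rev_per_s = speed_mps / circumference_m
    return wheel_rev_per_s / crank_rev_per_s

# e.g. 10 m/s at 90 rpm yields a ratio of about 3.18
print(gear_ratio(10.0, 90.0))
```

Plotting this quantity over time for clean data should show flat steps at the available chainring/cog ratios; noisy cadence or speed smears those steps out.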

The issue with this approach is that it depends on both speed and cadence being of equal quality. In the case of the DCRainmaker test, he had different speed sensors associated with different power meters. So to judge cadence values alone, you need to establish a uniform standard for speed data. He has this available, since synchronized data were captured by his WASP ANT+ Sport hub, but working with that would require a bit of effort.

I will instead assume that the sample-to-sample cadence variance consists of a component from actual cadence variance and a component due to error. Since the data for each unit were taken from the same ride, I conclude the variance of the actual cadence was the same for all units. That leaves only the variance from the error. So if I rank the various units by their total sample-to-sample cadence variance, I get a ranking of their cadence error.
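The metric described above can be sketched as follows. The cadence list and the function name are my own placeholders; the 30 rpm cutoff matches the one used below:

```python
import math

def rms_cadence_change(cadence, min_rpm=30):
    """RMS of sample-to-sample cadence changes, keeping only pairs
    where both samples are at or above min_rpm (i.e. not coasting)."""
    diffs = [b - a
             for a, b in zip(cadence, cadence[1:])
             if a >= min_rpm and b >= min_rpm]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# e.g. a cadence trace of [90, 92, 90, 88] gives an RMS change of 2.0 rpm
print(rms_cadence_change([90, 92, 90, 88]))
```

Since the true cadence variance is common to all units, differences in this number between units should reflect differences in measurement error.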

One sort of cadence error this would not catch is a systematic error. For example, suppose the reported cadence is a deterministic function of actual cadence, say 99% of actual. Or perhaps the error varies slowly during the ride, drifting from -1 rpm at the beginning to +1 rpm at the end. Either would add almost nothing to the sample-to-sample variance, so this metric would miss it. Instead, I assume errors in individual samples are uncorrelated with each other: if a unit makes a -2 rpm error this second, next second the dice are tossed freshly and without bias, so the error will have the same probabilities as if the error this second had been +2 rpm.

So enough... here's the ranking of the RMS change in cadence from the different units used by DC Rainmaker. I limited the analysis to cadence values of at least 30 rpm, since cadences below this are of trivial interest and likely encountered only when coasting.

Edge 800 + Quarq: 3.804 rpm
Edge 800 + Stages: 4.563 rpm
Edge 810 + Powertap: 5.535 rpm
Edge 810 + Vector: 5.904 rpm

But I learned early on that it's a mistake to look at just derived numbers without looking at a plot showing more detail. Here's a histogram of the rpm changes, comparing Vector (the most total variability) to the Quarq (the least).
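One way such a histogram of cadence changes could be tallied, using 1-rpm bins (the data and function name here are illustrative, not DC Rainmaker's actual numbers):

```python
from collections import Counter

def cadence_change_histogram(cadence, min_rpm=30):
    """Count sample-to-sample cadence changes, rounded to 1-rpm bins,
    keeping only pairs where both samples are at or above min_rpm."""
    changes = [round(b - a)
               for a, b in zip(cadence, cadence[1:])
               if a >= min_rpm and b >= min_rpm]
    return Counter(changes)

# e.g. [90, 92, 90] produces one +2 rpm change and one -2 rpm change
print(cadence_change_histogram([90, 92, 90]))
```

Comparing two units' histograms bin by bin shows where the extra variability lives, rather than collapsing it into a single RMS number.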

It's fairly unambiguous: the Quarq has consistently fewer counts in almost every bin for which the magnitude of the cadence change is more than 1 rpm.

So why is the Quarq producing tighter cadence numbers? I'm not sure. But if the reason is that Vector is more prone to cadence error, the positive thing is that cadence is handled by its pods, so upgrading cadence accuracy in the future would be relatively cheap and easy even if it required a hardware change. More likely, perhaps, it could be fixed with a firmware update.

As an aside, it is curious that the Edge 800 units each rank ahead of the Edge 810 units. I'll need to check the WASP data for verification of these results.

added: I did check the WASP data and the results are very different, with the Vector cadence doing quite well. Curious.