Monday, September 30, 2013

Chris Horner's blood values compared with those of Lance Armstrong, 2009

Chris Horner published his blood values on his web site. I didn't have much interest in this. Indeed, I was much more interested in the America's Cup than in the Vuelta. It's not because I'm not interested in bike racing. Rather, it was because it all seemed so unreal, so "not normal", that I just wasn't interested. Horner won. Curious.

The data were published in raster form, and would require transcription (or OCR) to do anything interesting with. I didn't have the interest + time to do this. But then I saw this blog post which links to a spreadsheet with the transcribed data.

Back in 2009 I made the following plot, which I posted to my blog the following year, showing the reticulocyte percentage in Lance Armstrong's blood, which he published online, plotted versus his hematocrit. It's generally considered a sign of transfusion when the hematocrit increases while the reticulocyte percentage stays low. Reticulocytes are immature red blood cells: if you're producing your own, you'll typically have a mix of young and old cells. Lance's data fell into two clear groups, one with a much lower reticulocyte percentage than the other for the same hematocrit. The low-reticulocyte group happened to coincide with the Tour de France. This was flagged by Michael Ashenden and others as highly suspicious. That the UCI failed to flag the data as suspicious was viewed as a sign it wasn't serious about catching high-profile dopers. Lance later said on Oprah he was clean during the 2009 Tour de France, and I'm not in a position to call him a liar. But then he also previously claimed he'd been clean his whole career.
Lance 2009

So fast-forward to September 2013. Chris Horner posted his blood values, as I noted, and the obvious thing was to compare them to Lance's. Lance's data covered just half a year, however, while Horner's span the full history of his testing under the biological passport. So Horner's data are a mix of high-profile races and out-of-competition testing.

Here's the plot:
Horner vs Lance

I split out three races from Horner's data: the 2010 Tour, the 2012 Tour, and the 2013 Vuelta. He rode excellently during all three, winning the Vuelta, albeit against lesser competition than he faced in the Tour. So the question is: is there a clear signature of these races which suggests something's amiss? Certainly the data from this year's Vuelta and the 2010 Tour are right in the same zone as Lance's suspicious data from 2009, although the hematocrit doesn't extend quite as high. But the data from the 2012 Tour have a higher reticulocyte percentage.

Biological passport numbers are unfortunately sparse: it's hard to interpret differences between two big races. Certainly the story from Horner's data is not as compelling as was the story told by Lance's. But I certainly didn't see anything here which reduced the tepidity of my response to the race.

Anyone want to discuss the America's Cup?

Friday, September 27, 2013

California 3-foot passing law at last

Governor Brown, after vetoing two previous attempts, finally signed a 3-foot passing law for the state of California: AB1371. Hats off to Jim Brown and the California Bike Coalition for their dedication and persistence on this issue. It was very much on my mind this year as I experienced several close passes by heavy vehicles. With the status quo of "no blood, no foul", these passes were effectively legal, since what constitutes a "safe pass" is so vague.

Here's the text of the bill-now-law, which I'd like to review here. Note I'm an engineer, not a lawyer, so I'm just interpreting the language, without any insight into lawyer-specific knowledge:

SECTION 1. Section 21750 of the Vehicle Code is amended to read: 21750. (a) The driver of a vehicle overtaking another vehicle or a bicycle proceeding in the same direction shall pass to the left at a safe distance without interfering with the safe operation of the overtaken vehicle or bicycle, subject to the limitations and exceptions set forth in this article. (b) This section shall become inoperative on September 16, 2014, and, as of January 1, 2015, is repealed, unless a later enacted statute, that becomes operative on or before January 1, 2015, deletes or extends the dates on which it becomes inoperative and is repealed.

This seemed an awful lot like the existing code, so I checked that:

21750. The driver of a vehicle overtaking another vehicle or a bicycle proceeding in the same direction shall pass to the left at a safe distance without interfering with the safe operation of the overtaken vehicle or bicycle, subject to the limitations and exceptions hereinafter stated.

So it's exactly as it was before until September 16, 2014. So good luck everyone. I hope you survive the next year.

Then things change for the better.

First, 21750 is fixed so it refers only to vehicles: the word "bicycle" is removed (bicycles are not vehicles in California). This is clearly because the "safe distance" language is no longer sufficient for passing bicycles. "Safe" implies "no blood, no foul", which is realistically the present standard. Indeed, even a collision doesn't imply a pass wasn't "safe" under current enforcement.

SEC. 2. Section 21750 is added to the Vehicle Code, to read: 21750. (a) The driver of a vehicle overtaking another vehicle proceeding in the same direction shall pass to the left at a safe distance without interfering with the safe operation of the overtaken vehicle, subject to the limitations and exceptions set forth in this article. (b) This section shall become operative on September 16, 2014.

In the present code, there are nine sections which follow 21750, providing exceptions: 21751 through 21759. These remain as-is. 21760 is added, as follows:

SEC. 3. Section 21760 is added to the Vehicle Code, to read: 21760. (a) This section shall be known and may be cited as the Three Feet for Safety Act. (b) The driver of a motor vehicle overtaking and passing a bicycle that is proceeding in the same direction on a highway shall pass in compliance with the requirements of this article applicable to overtaking and passing a vehicle, and shall do so at a safe distance that does not interfere with the safe operation of the overtaken bicycle, having due regard for the size and speed of the motor vehicle and the bicycle, traffic conditions, weather, visibility, and the surface and width of the highway.

So far it really doesn't change much. The "due regard" language should be obvious to anyone who's ridden a bike as an adult. Unfortunately too many police officers, traffic clerks, district attorneys, etc., seem to have no clue about cyclists' rights to the road, or the realities of riding a bike on the road. This language makes it explicit that the enumerated factors must be considered in establishing a safe passing distance, and therefore a driver can be found at fault for passing with what under ideal circumstances might have been a safe margin.

(c) A driver of a motor vehicle shall not overtake or pass a bicycle proceeding in the same direction on a highway at a distance of less than three feet between any part of the motor vehicle and any part of the bicycle or its operator.

This is good. Note the "3-foot margin" applies to any part of the cyclist or bike: not just the bike, and not just the center of mass of the rider or bike. So this is a good margin. There is the question of what would happen if a driver were passing a cyclist with a 3-foot gap, and the cyclist were to reach out with a hand and reduce the gap below the 3-foot limit. Would that result in the driver being in violation of the law? I think the answer is the driver should leave a buffer of more than 3 feet to prevent that from occurring.

In the bill from 2011, there was a provision which removed the 3-foot requirement when the car was below a certain speed. This was essentially necessary due to the situation near intersections, where cars and cyclists are typically in close proximity. Brown vetoed that version due to what was clearly confusion over this quantitative speed threshold (arguing drivers would slam on their brakes, in order to be allowed to pass a cyclist with less than a 3-foot margin, and be rear-ended). So instead of a quantitative exemption, there's a fuzzy one:

(d) If the driver of a motor vehicle is unable to comply with subdivision (c), due to traffic or roadway conditions, the driver shall slow to a speed that is reasonable and prudent, and may pass only when doing so would not endanger the safety of the operator of the bicycle, taking into account the size and speed of the motor vehicle and bicycle, traffic conditions, weather, visibility, and surface and width of the highway.

I was initially opposed to fuzzy language, since it takes us back to the "no blood no foul" status quo. Any sub-3-foot pass could be justified on the basis that the "driver was unable to comply". But at least it doesn't provide a simple exemption: it further requires the driver "slow to a speed that is reasonable and prudent", which strongly suggests a typical high-speed buzz cut wouldn't pass muster. This is followed by some fairly strong language about not endangering the safety of the rider, taking into account the previous stuff, including the "width of the highway". So perhaps this language doesn't render the bill impotent, after all.

But a point of weakness of this law versus the previous bills is the removal of the provision formalizing what virtually every driver does when passing cyclists on a 2-lane road with a double yellow line: cross the line. It's obviously safe. Without this allowance, which is justified by the fact that double yellows are put in place based on the much more dangerous and time-consuming maneuver of passing a full-size motor vehicle, a driver could still close-brush a rider, arguing that it would have been illegal to cross the double yellow, even with a clear line of sight ahead and no oncoming traffic. Fortunately drivers are sufficiently scofflaw that they typically ignore the inane double-yellow law, placing safety first.

Here's where it gets really tragic:

(e) (1) A violation of subdivision (b), (c), or (d) is an infraction punishable by a fine of thirty-five dollars ($35). (2) If a collision occurs between a motor vehicle and a bicycle causing bodily injury to the operator of the bicycle, and the driver of the motor vehicle is found to be in violation of subdivision (b), (c), or (d), a two-hundred-twenty-dollar ($220) fine shall be imposed on that driver.

$35 for endangering human life? That's beyond a joke. Even if the driver causes "bodily injury" the fine is only $220, which could be contrasted to the $1000 fine for littering. Stuff like this makes me wish California would split into two. I want nothing to do with the Southern California motorheads responsible for these low fines, a result of negotiations during the 2011 bill.

Then there's that date again: nothing for a year:

(f) This section shall become operative on September 16, 2014.

Then the money thing:

SEC. 4. No reimbursement is required by this act pursuant to Section 6 of Article XIII B of the California Constitution because the only costs that may be incurred by a local agency or school district will be incurred because this act creates a new crime or infraction, eliminates a crime or infraction, or changes the penalty for a crime or infraction, within the meaning of Section 17556 of the Government Code, or changes the definition of a crime within the meaning of Section 6 of Article XIII B of the California Constitution.

So that's it. If I had to rate it 0-5, I'd give it a 3. I'd really really have preferred it with the double-line-crossing language. And I also think allowing drivers to pass cyclists with less than a 3-foot buffer without an explicit speed limit for that sort of pass is insane. But CalBike tried, not once but twice, and the Governor rejected both times. I'll take what I can get. It is a lot better than status quo. I just need to make it through the next year until the 3-foot limit goes into effect.

comment on America's Cup in San Francisco

I live in San Francisco and was at the race. I was glad for every race Oracle won, up until the final one, because the racing was absolutely incredible and I wanted it to continue. But it's clear New Zealand came into the finals better prepared, and racing should be about the best competitor winning, not about opening the spending floodgates to gain an advantage. So good job to Jimmy Spithill and crew for sailing an excellent race in the end, but I feel for the New Zealand fans who saw the advantage of their superior preparation squandered. I hope the rules are streamlined before the next iteration to reduce the influence of big money.

I was amazed at the number of Kiwis who made the flight to watch in person. It was a difficult series for that, since the schedule with the strict wind limits was so indeterminate. If fan support could directly contribute to boat speed, Emirates Team New Zealand would have won the race.

Wednesday, September 18, 2013

improved analysis of 9-bidder silent auction: friend's bid on San Francisco condo

After my last post, I commented that my analysis was obviously flawed, because it would lead to the conclusion the winning bidder had a 100% chance of winning. There were two obvious points of weakness in that analysis.

The first is I assumed the expectation value of the number of bidders was the number of bidders, not counting my friend. I don't count my friend because his presence wasn't random (I specifically picked this auction because he was there; I didn't pick it at random). However, that assumption is obviously just a guess. But lacking other information, it is the best I can do.

The next approximation I made was that the chance a random bidder was better than my friend's bid equaled the fraction of bidders who ranked better than my friend. This was 25%. This seemed a reasonable assumption, until I applied it to the case where my friend wins the auction. In that case the result would be 0%. That's obviously wrong.

So what I assume is that all bids can be mapped onto a uniform distribution from 0 to 1 (this isn't dollars, but rather a function of dollars). Then I generate a series of 9 random bids uniformly distributed from 0 to 1, and take the 3rd-ranked bid. The chance any given bid is better than this one is 1 minus the value of the third-ranked bid. That value might be 0.75, but it might be more or less.

Then using this probability I calculate the chance the friend would win the auction. If the value of the bid was 0.75, I get my previous estimate, which is 13.5%. But if the value were higher, for example 0.8, I'd get a larger chance, around 20%.

I run this simulation 10 thousand times. In each trial I pick a random set of 9 bids, take the third-ranked one, figure out the chance that bid would beat any random bid, and then calculate the chance it wins the auction, considering all possible bidder turn-outs, using Poisson statistics for the probability of each turn-out with the assumption that the expectation value of the number of other bidders is 8.
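
For concreteness, here's a minimal sketch of that calculation in Python (not the script I actually ran, just an illustration of the method: bids uniform on [0, 1], λ = 8, and the Poisson-weighted sum collapsed to its closed form exp(-λ(1-α))):

import math
import random

LAM = 8.0          # expectation value for the number of *other* bidders
N_BIDDERS = 9      # bidders in the observed auction
TRIALS = 10_000

def win_probability(alpha, lam=LAM):
    # Poisson-weighted sum over turnouts: sum_N (lam^N / N!) exp(-lam) alpha^N,
    # which collapses to the closed form below
    return math.exp(-lam * (1.0 - alpha))

def chance_for_rank(rank):
    total = 0.0
    for _ in range(TRIALS):
        bids = sorted((random.random() for _ in range(N_BIDDERS)), reverse=True)
        alpha = bids[rank - 1]   # chance a fresh random bid falls below this one
        total += win_probability(alpha)
    return total / TRIALS

for rank in range(1, N_BIDDERS + 1):
    print(rank, chance_for_rank(rank))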

Since I went through all this trouble, I did the calculation not just for the 3rd place bid out of 9, but for all possible places out of 9 from 1 to 9. Here's the result:

rank p
1    0.538398
2    0.286256
3    0.14854
4    0.075185
5    0.0370196
6    0.0174019
7    0.00752983
8    0.00308723
9    0.0011212

So the third place bidder had a 14.9% chance rather than 13.5% as I calculated last time. And the first-place bidder had a 53.8% chance, not 100% as I calculated last time.

A curious aspect of this is the sum is 111%. This is okay: it simply says that no matter what the place was on the current bid, just the fact the bidder participated gave him a chance, since there's always the chance the turnout would have been very small. There's no reason for the probabilities to sum to 1.

Of course, this ignores any information about how big the gaps were between bids. That would add information, but also complexity: I'd then need additional assumptions, for example about the probability distribution of bid amounts. I make no such assumptions here. This analysis is based only on bid rank and the assumption that there are no ties. Even in the case of dollar ties, there can be tie-breakers. I additionally assume there's no second round of bidding.

Here's a plot of those results, on a logarithmic scale:

results

I then tried running for larger initial auctions. For the top bidders at these auctions, the winning chance becomes essentially independent of the number of people who were at the auction. The top bidder had around a 50% chance to win, the second bidder around 25%, the third 12.5%, the fourth 6.25%, etc.: each successive bidder in the ranking had approximately half the chance of the bidder who ranked ahead, and twice the chance of the bidder who ranked behind.
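
That pattern is easy to spot-check with the same machinery. Here's a rough sketch (Python, uniform bids, a 30-bidder auction, λ set to the 29 other bidders):

import math
import random

N_BIDDERS = 30
LAM = float(N_BIDDERS - 1)   # expectation value of the number of other bidders
TRIALS = 10_000

for rank in range(1, 7):
    total = 0.0
    for _ in range(TRIALS):
        bids = sorted((random.random() for _ in range(N_BIDDERS)), reverse=True)
        # closed form of the Poisson-weighted win probability for this bid
        total += math.exp(-LAM * (1.0 - bids[rank - 1]))
    # expect roughly 0.5, 0.25, 0.125, ... going down the ranking
    print(rank, total / TRIALS)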

My friend was of the view that he'd lost: that he'd made a poor bid. But the whole process is stochastic. You've got to play the probabilities to maximize expected gain. And that analysis needs to recognize that the optimal probability of winning a given auction is not 100%, assuming price matters.

Monday, September 16, 2013

probability games: simulating a friend's chance to win a condo bid

A friend of mine bid on a condo and lost, ending up 3rd of 9 bidders. The friend was convinced the bid had been too low. I wasn't so convinced: after all, what if the two people who'd bid higher hadn't bid, and no other higher bidders had shown up? It's an inherently probabilistic process. If the goal was to win at all costs, you'd just bid the most you could possibly spend every single time.

So estimating the chance that bid would have won requires estimating two probability distributions: the probability distribution of the number of bids, and the probability distribution of the values of those bids. Without any information available on these, it's necessary to guess.

The first approximation is that bids are uncorrelated: whether someone bids, and how much, is independent of how many others do so.

The next approximation is that the bids are statistically representative of the probability distributions. So not counting the friend, there were 8 other bids. If the process were repeated 1000 times, there'd be around 8000 total other bids. Within each iteration, the number of bids would have a probability distribution well approximated by Poisson statistics.

Next, I won't try to estimate a probability distribution of bid values. I don't need to. Bids are binary: either they're better or they're worse than the friend's bid. Based on this outcome, there is a 25% (2/8) chance a given bid would be better than the friend's bid.

So I need to calculate, for all possible number of bidders, the probability that the friend's bid would win given a 75% chance of being better than any competitor, then sum up these results weighted by the probability of that number of bidders.

Given N other bidders, the probability that the friend's bid would have won would be α^N, where α is the probability the friend's bid is better than a given other bid. In this case we estimate that at 75%.

Then there's the probability that there were N other bidders. Poisson statistics say that probability is approximated by (λ^N / N!) exp(-λ), where λ is the expectation value for the number of other bidders. In this case I'm assuming that to be 8. N! is N factorial.

So the probability the bid would have won is represented by the infinite sum:

sum N from 0 to infinity [ (λ^N / N!) exp(-λ) α^N ] =

sum N from 0 to infinity [ ((λα)^N / N!) exp(-λ) ] = exp(λα) exp(-λ) = exp(-λ (1 - α))

Plugging in the numbers, λ = 8 and α = 0.75, yields exp(-2), or about 13.5%.

If I had been lazy and assumed the number of bids was fixed at 8, the estimate would have been α^8 = 10%. It's not surprising this is a bit lower: there are two ways to get lucky. One is that the competitors happen to bid lower. The other is that fewer competitors show up. The 10% excludes the latter possibility.
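
A two-line check of those numbers (Python, assuming the same λ = 8 and α = 0.75):

import math

LAM, ALPHA = 8.0, 0.75

# Poisson-weighted over all possible turnouts: exp(-lam * (1 - alpha)) ~ 13.5%
print(math.exp(-LAM * (1.0 - ALPHA)))

# number of other bidders fixed at 8: alpha^8 ~ 10%
print(ALPHA ** 8)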

So 13.5% may not be so bad a probability. If the friend bids on 8 places, there's a good chance with that record he gets one of them, and at a lower price than if he raised his bid high enough to get a higher probability of success. But then again there are a lot of unjustified assumptions here, and those assumptions have only a chance of being accurate.

An example of the limitation of this approach is it would lead to the conclusion the "winner" of the bidding had a 100% chance to do so, which is of course incorrect. A more sophisticated approach would need to be adopted to do a better job with his chances. However, for the 3rd place bid I suspect the approach here does considerably better.

Friday, September 6, 2013

simulation of power error from constant cadence approximation and eccentric chainrings

Last time, I considered the constant cadence approximation as it applies to circular chainrings. Recall the principal issue is that the constant cadence approximation makes the following assumption for each pedal stroke:

<τ × ω> = <τ> × <ω>

where ω is the angular velocity of the pedals, τ is the propulsive torque, and brackets signify a time-average.

The error from this approximation is obviously:

<τ> × <ω> − <τ × ω>

which is fairly trivially shown to equal:

− <(τ - <τ>) × (ω − <ω>)>

which is the negative of the covariance of torque and angular velocity, and therefore proportional to their correlation coefficient: a positive correlation results in an underestimation of power.

Angular frequency is proportional to what I refer to as "instantaneous cadence": the rate at which the crank arms are revolving.

So the issue comes down to whether the instantaneous cadence is correlated with applied torque, or similarly, if it's correlated with applied power (assuming applied power fluctuations are due more to torque changes than cadence changes). When plotting one value versus the other, circles represent linearly uncorrelated relationships. Diagonal lines represent linearly correlated relationships. With round chainrings the plots of power versus rpm were roughly circular as long as the rider wasn't rapidly accelerating, so the constant cadence approximation worked fairly well.

But here I'll look at non-round chainrings. Non-round chainrings are typically designed to correlate torque with pedal rotation rate, so they become immediately suspect for the constant cadence approximation.

Consider a rider pedaling in a high-inertia condition. The rear hub is turning at a certain rate, the chain is moving at a certain rate, and thus the front crank is rotating at a rate inversely proportional to the radius at which the chain contacts the front chainring at the top (the top run is the one that matters, although typically the top and bottom contact radii would be essentially the same). Modern eccentric rings are typically designed to slow crank rotation where the pedal stroke has maximal power, and speed it up where the pedal stroke has minimal power (Biopace was a bit different). This increases the time the rider spends with the cranks at a more favorable angle and decreases the time spent near the dead spots, relative to a round chainring at the same average cadence.

To model an eccentric chainring, I need to pick a shape. For simplicity, I assume the radius has a cosine dependence on the angle, with two peaks and two valleys around the chainring circle. The ratio of the amplitude of this cosine to the mean radius of the chainring I define as the "eccentricity". An eccentricity of 0 defines a circular chainring, while an eccentricity of 10% defines a ring where the effective number of teeth varies from 10% below to 10% above the number of teeth in a ring with the same mean radius. So a round ring with 50 teeth would be roughly comparable to a 10%-eccentricity ring where the instantaneous gear ratio varied from an effective 45-tooth front to a 55-tooth front.

Here's a comparison of chainring shapes with eccentricities of 0%, 5%, 10%, 15%, and 20%. 20% is fairly extreme, clearly: the front derailleur would face a substantial challenge with such an eccentric ring.

chainring shapes
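
Here's a short sketch (Python with numpy and matplotlib) of how this family of shapes can be generated under that cosine-radius assumption, r(θ) = r0 × (1 + e × cos 2θ):

import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0.0, 2.0 * np.pi, 361)
mean_teeth = 50                     # round-ring equivalent, for the labels

for ecc in (0.00, 0.05, 0.10, 0.15, 0.20):
    # two peaks and two valleys per revolution: r = r0 * (1 + e * cos(2 theta))
    r = 1.0 + ecc * np.cos(2.0 * theta)
    label = f"{ecc:.0%} ({mean_teeth * (1 - ecc):.0f}-{mean_teeth * (1 + ecc):.0f} T effective)"
    plt.plot(r * np.cos(theta), r * np.sin(theta), label=label)

plt.gca().set_aspect("equal")
plt.legend()
plt.title("chainring shape versus eccentricity")
plt.show()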

In contrast, here's Tour de France winner Chris Froome's rings, which are clearly eccentric, but not as extreme as my 20% case. According to Osymmetric, their commercially available big-ring corresponds to an eccentricity of 7%. I think Froome's rings are custom, however.

Chris Froome's rings (CyclingNews)

I'll look at the power-versus-rpm contours for the pedaling simulation, taking only the ninth full pedal stroke. Here's the result for a flat road with a rider going 80 rpm:

flat road @ 80 rpm

With round rings, the cadence is fairly constant: the inertia of the bike carries the feet through the dead spots. But the effect of the eccentric rings is profound. With the largest radius roughly aligned with the positions of maximum power application, the cadence drops to a minimum when power is at a maximum. The rings are working as designed. But it's clear there's now a strong correlation between cadence and power. There's also a correlation between cadence and torque (not plotted).

Now I consider a second case: where the rider is grinding up a 20% grade at 50 rpm. Here's that result:

20% grade @ 50 rpm

There's now considerable cadence variation through the pedal stroke even with the round rings, but for each power there's a high and a low cadence value with roughly the same average, so the linear correlation is still relatively small. With the eccentric rings, the shapes become more diagonal: the correlation is much stronger.

This negative linear correlation results in an overestimation of power, as was shown in the equations. Here's an example, pedaling on level ground accelerating from 40 rpm to 120 rpm with a 20% eccentric chainring. Note the rider only gets to 97 rpm during these pedal strokes.

nrot secs dt    avgrpm avgwatts avgtorque avgwattsCC err  ferr
0    1.26 1.26  47.5   453      10.1      482        28.5 0.0629
1    2.28 1.02  59     419      7.68      453        34.3 0.082
2    3.18 0.901 66.6   411      6.74      448        37.5 0.0913
3    4    0.822 72.7   458      6.85      498        40   0.0874
4    4.77 0.775 77.4   422      5.93      459        37.7 0.0895
5    5.51 0.737 81.3   413      5.56      452        39.7 0.0963
6    6.21 0.703 85     458      5.87      499        41.1 0.0896
7    6.9  0.682 88     422      5.23      461        38.5 0.0913
8    7.56 0.662 90.6   413      5.01      454        40.7 0.0984
9    8.2  0.642 93.2   458      5.37      500        41.4 0.0902
10   8.83 0.63  95.3   422      4.84      461        38.7 0.0917
11   9.45 0.617 97.1   414      4.68      455        41.2 0.0997

Note torque is in nonstandard units here (power divided by rpm). The power is consistently overestimated, by only 6.3% on the first pedal stroke but increasing to 10% by the last of the 12.

Here's the root mean square error versus eccentricity for this case and two others. In each case the power is generally overestimated (although root mean square doesn't indicate if the error is positive or negative). The more eccentric the chainring, the worse the error:

error vs eccentricity

The conclusion here is the constant cadence approximation simply doesn't work for eccentric chainrings. It's possible to make some other, chainring-specific approximation which would allow for better results, but then if the user switched back to round rings the power would be underestimated. Better is to sample cadence in detail around the pedal stroke; then chainring shape shouldn't matter. Power2Max claims to do this. Perhaps Rotor does as well, since one hopes they work with their own eccentric chainrings, although they may simply have a different calibration setting for their rings. I'm not sure about any others.

Tuesday, September 3, 2013

Numerical Simulation of Constant Cadence Approximation

A big issue with power meter accuracy is getting not only force versus time right, but also cadence versus time. Power is the instantaneous product of propulsive force and pedal velocity, with pedal velocity proportional to cadence multiplied by crank length, so errors in cadence translate directly into errors in power.

It is typical in the power meter business that cadence is approximated as constant over a full or perhaps half pedal stroke. Indeed, Garmin has announced they are using this approximation on the Vector. This is of course technically incorrect: cadence varies over a given pedal stroke just as it varies from one pedal stroke to the next. Ideally cadence would be sampled at a sufficient rate to get multiple points within a half-pedal-stroke, so the variation in pedal speed between the strong and weak portions of the pedal stroke would be captured.

I have previously looked at this issue and I concluded the power error would be proportional to the correlation coefficient between pedal force and pedal speed. If the resistance was very strong, which is the limit of a zero-inertia condition, then the two would be perfectly correlated. On the other hand, if inertia is dominant, the rate of change of the pedal speed rather than the pedal speed itself will be proportional to the force, and this will result in a low correlation coefficient. So the question is which of these two conditions applies in real-life cycling.

For completeness, I'll put the relationship between correlation and error here:

The constant cadence approximation makes the following assumption for each pedal stroke:

<τ × ω> = <τ> × <ω>

where ω is the angular velocity of the pedals, τ is the propulsive torque, and brackets signify a time-average.

The error from this approximation is obviously:

<τ> × <ω> − <τ × ω>

which is fairly trivially shown to equal:

− <(τ - <τ>) × (ω − <ω>)>

which is the negative of the covariance of torque and angular velocity, and therefore proportional to their correlation coefficient: a positive correlation results in an underestimation of power.
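
Here's a toy numerical check of that identity (Python; synthetic data, not real pedal measurements: sinusoidal torque with an in-phase cadence ripple, the sort of correlation you'd get in a low-inertia case):

import numpy as np

t = np.linspace(0.0, 1.0, 1000, endpoint=False)     # one crank revolution
tau   = 20.0 + 15.0 * np.sin(4.0 * np.pi * t)        # torque, two power peaks
omega = 9.4  +  0.5 * np.sin(4.0 * np.pi * t)        # angular velocity, in phase

exact  = np.mean(tau * omega)                         # <tau * omega>
approx = np.mean(tau) * np.mean(omega)                # <tau> * <omega>

# the approximation error equals minus the covariance of tau and omega,
# so a positive correlation makes the constant-cadence power read low
print(approx - exact)
print(-np.mean((tau - tau.mean()) * (omega - omega.mean())))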

Metrigear published instantaneous cadence data measured with their Speedplay Vector prototypes. These data showed cadence varied substantially around the pedal stroke. Indeed those data suggested the correlation was substantial, and therefore the power error associated with a constant cadence approximation would be significant relative to a 2% power accuracy target. However, the accuracy of these cadence variations was uncertain. Even 1% accuracy on cadence is insufficient if the goal is to resolve the variation of cadence, where the variation is on the order of 1% of the mean. So determining the correlation between cadence and force from those data would be risky. Additionally, the data applied to a particular riding condition, for example an indoor trainer, which is not generally representative. I wanted to be able to analyze a variety of conditions.

So I recently decided to try something different. I took the Metrigear power-versus-time data for the left and right foot. I then used their cadence data to convert time to a pedal angle. This produced a nice periodic, regular pedal stroke for each foot, as long as I increased their cadence numbers by 1.5%. I therefore assumed that the pedal power versus angle was fixed, while pedal speed could be determined from physics. Given a rider mass, a gear development, a road grade, a coefficient of rolling resistance, a wind resistance coefficient, a wind speed, and an inertial mass ratio, I could calculate the cadence as a function of time from this power as a function of angle. This is a self-consistent calculation (to calculate watts I need angle, to determine angle I need speed, to know speed I need watts), so I iterated the solution for each time-step until it was self-consistent. With this approach I could test the constant cadence approximation for different road grades, different cadences, during periods of acceleration versus steady-state, and even compare one-legged versus two-legged pedaling.

To pad out the data from Metrigear (the plot showed only three complete pedal strokes), I replicated them to create a repeating series of three pedal strokes. Each repeat of the sequence would differ slightly due to inertia.

For inertia, I assumed 102% of total mass. It's not exactly equal to total mass due to the rotating mass effect of the wheels and tires. I made various "reasonable" assumptions about rolling resistance, wind resistance, transmission losses, rider and bike mass, etc.
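
Here's a much-simplified sketch of the idea in Python (explicit small time-steps instead of the per-step self-consistent iteration I actually used, a toy power-versus-angle profile instead of the Metrigear data, and purely illustrative parameter values):

import math

MASS = 80.0                  # rider + bike, kg (illustrative)
M_EFF = 1.02 * MASS          # effective inertial mass, wheels included
CRR = 0.004                  # rolling resistance coefficient
HALF_RHO_CDA = 0.4           # 0.5 * rho * CdA, kg/m
GRADE = 0.0
DEVELOPMENT = 7.0            # metres travelled per crank revolution
G = 9.8

def pedal_power(angle):
    # toy power-versus-crank-angle profile (stand-in for the measured data)
    return 300.0 + 250.0 * math.sin(2.0 * angle)

def simulate(v0=3.0, dt=0.001, t_end=10.0):
    v, angle, t = v0, 0.0, 0.0
    while t < t_end:
        omega = 2.0 * math.pi * v / DEVELOPMENT           # crank angular velocity
        power = pedal_power(angle)
        resist = v * MASS * G * (CRR + GRADE) + HALF_RHO_CDA * v ** 3
        v += (power - resist) / (M_EFF * v) * dt           # net power -> kinetic energy
        angle = (angle + omega * dt) % (2.0 * math.pi)
        t += dt
        yield t, angle, v, 60.0 * v / DEVELOPMENT, power   # rpm = 60 * rev/s

# averaging rpm, power, and torque over each full crank revolution of this output
# is then enough to test the constant cadence approximation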

With this approach, I can test the error associated with the constant cadence approximation for different pedal strokes under different conditions. For example, here was the result for pedaling on flat ground with no wind, starting from 40 rpm but ramping up toward 90 rpm:

nrot dt       avgrpm  avgwatts avgtorque avgwattsCC err       ferr
0    1.35538  44.1263 422.339  9.40279   414.91     -7.42958  -0.0175915
1    1.17845  50.9276 388      7.55999   385.012    -2.98736  -0.00769938
2    1.07454  55.8079 378.433  6.74782   376.582    -1.8515   -0.00489255
3    0.997814 59.9389 424.597  7.06713   423.596    -1.00106  -0.00235767
4    0.950991 63.1089 389.941  6.1571    388.568    -1.37369  -0.00352281
5    0.912108 65.7464 379.463  5.75707   378.506    -0.956906 -0.00252173
6    0.876475 68.2356 425.084  6.22075   424.477    -0.607937 -0.00143016
7    0.854664 70.2218 390.54   5.54978   389.715    -0.82464  -0.00211154
8    0.833606 71.9376 379.85   5.27197   379.253    -0.597296 -0.00157245
9    0.812025 73.6507 425.294  5.76878   424.875    -0.419305 -0.000985918
10   0.799999 75.0202 390.827  5.20225   390.274    -0.553386 -0.00141594
11   0.786445 76.2202 379.958  4.97967   379.552    -0.406289 -0.0010693

The columns are: rotation number, seconds for that rotation, average rpm during that pedal stroke, average watts for the pedal stroke, average torque (nonstandard units: power divided by rpm), and the average watts calculated per the constant cadence approximation. Then I show the error in watts, and the fractional error, associated with the constant cadence approximation.

During the first pedal stroke, the error is 1.8%. This isn't good, but the bike is strongly accelerating here. As the speed becomes more constant, the error of the constant cadence approximation is reduced. It drops to a small fraction of a percent in the later pedal strokes.

The first question is what happens when the pedal stroke is less uniform, for example pedaling right leg only? For this, I took the right leg data only, truncating negative powers to zero (to avoid the risk of the cadence dropping to zero). I automatically adjust the rider's gear to yield the same initial and target cadence. Here's that result:

nrot dt       avgrpm  avgwatts avgtorque avgwattsCC err       ferr
0    1.41507  42.2648 224.381  5.0463    213.281    -11.1     -0.0494694
1    1.23773  48.4886 236.113  4.76911   231.247    -4.86513  -0.0206051
2    1.12117  53.4869 223.193  4.12155   220.449    -2.74437  -0.0122959
3    1.04627  57.1625 228.249  3.95441   226.044    -2.20493  -0.00966019
4    0.997117 60.1895 238.918  3.9306    236.581    -2.33734  -0.009783
5    0.951472 63.0263 224.556  3.53946   223.079    -1.47637  -0.00657462
6    0.916343 65.2666 229.201  3.4919    227.905    -1.29648  -0.00565653
7    0.892921 67.2132 239.836  3.54645   238.369    -1.46727  -0.00611783
8    0.867219 69.1493 225.095  3.24135   224.137    -0.957759 -0.00425491
9    0.845919 70.6995 229.626  3.23548   228.747    -0.87909  -0.00382836
10   0.832684 72.0755 240.286  3.31948   239.253    -1.03282  -0.00429831
11   0.815539 73.5022 225.203  3.05467   224.525    -0.677269 -0.00300738

It can be seen that during the first three pedal strokes, starting from 40 rpm, the error exceeds 1%, reaching as much as 5%. However, the error during the later pedal strokes is on the order of 0.5%. This is nontrivial, but not terribly bad, despite the abnormally rough pedal stroke.

Then I considered the effect of road grade. I'll assume a lower target cadence, 50 rpm, with a grade of 20%. This is extreme hillclimbing. I'll start at 50 rpm:

nrot dt    avgrpm avgwatts avgtorque avgwattsCC err     ferr
0    1.207 49.54  422.3    8.389     415.6      -6.709  -0.01589
1    1.218 49.27  389.4    7.882     388.4      -1.009  -0.00259
2    1.225 48.95  378.8    7.725     378.2      -0.5963 -0.001574
3    1.171 51.07  423.1    8.249     421.3      -1.784  -0.004218
4    1.205 49.82  389.8    7.81      389.1      -0.6728 -0.001726
5    1.22  49.14  378.9    7.7       378.4      -0.4862 -0.001283
6    1.17  51.13  423.1    8.24      421.3      -1.761  -0.004163
7    1.204 49.85  389.8    7.807     389.1      -0.6588 -0.00169
8    1.22  49.15  378.9    7.699     378.4      -0.4813 -0.00127
9    1.17  51.14  423.1    8.239     421.3      -1.76   -0.004161
10   1.204 49.85  389.8    7.807     389.1      -0.6583 -0.001689
11   1.22  49.15  378.8    7.698     378.4      -0.4779 -0.001261

Next I'll try a headwind, assuming riding at a 70 rpm target into a 5 m/s headwind:

nrot dt     avgrpm avgwatts avgtorque avgwattsCC err       ferr
0  0.8549 69.96 425.6 6.018 421   -4.587    -0.01078
1  0.8583 69.92 391.3 5.596 391.3 -0.04421  -0.000113
2  0.8585 69.85 380.4 5.446 380.4 -0.01177  -3.094e-05
3  0.854  70.04 425.6 6.075 425.4 -0.1448   -0.0003401
4  0.8575 69.99 391.3 5.591 391.3 -0.04024  -0.0001028
5  0.8577 69.91 380.4 5.442 380.4 -0.00877  -2.305e-05
6  0.8533 70.09 425.6 6.07  425.4 -0.1429   -0.0003358
7  0.8569 70.03 391.3 5.587 391.3 -0.03759  -9.606e-05
8  0.8573 69.95 380.5 5.439 380.4 -0.006768 -1.779e-05
9  0.8529 70.12 425.6 6.067 425.4 -0.1417   -0.0003329
10 0.8566 70.06 391.4 5.585 391.3 -0.03583  -9.154e-05
11 0.8566 69.98 380.4 5.435 380.4 -0.005154 -1.355e-05

So far, only during rapid acceleration is the error worth worrying too much about. On the other hand, if I suspend inertia, things are very different. Here's what I get if I assume speed (and rpm) responds instantly to power fluctuations:

nrot dt     avgrpm avgwatts avgtorque avgwattsCC err    ferr
0    0.9424 63.47  362.4    5.037     319.7      -42.75 -0.118
1    0.997  60.2   324      4.69      282.3      -41.64 -0.1285
2    1.039  57.73  302.5    4.432     255.9      -46.66 -0.1542
3    0.9415 63.52  362.2    5.081     322.8      -39.43 -0.1089
4    0.997  60.2   324      4.69      282.3      -41.64 -0.1285
5    1.039  57.73  302.5    4.432     255.9      -46.66 -0.1542
6    0.9415 63.52  362.2    5.081     322.8      -39.43 -0.1089
7    0.997  60.2   324      4.69      282.3      -41.64 -0.1285
8    1.039  57.73  302.5    4.432     255.9      -46.66 -0.1542
9    0.9415 63.52  362.2    5.081     322.8      -39.43 -0.1089
10   0.997  60.2   324      4.69      282.3      -41.64 -0.1285
11   1.038  57.73  302.4    4.431     255.8      -46.65 -0.1543

The error is enormous, but this condition would be approximated only when riding a trainer with a low-mass flywheel.

These results can be understood by looking at how power and rpm are related. I plot watts on the y-axis and rpm on the x-axis for the final three pedal strokes of my 12-pedal-stroke simulation (recall there are only three unique pedal strokes, which I replicate). Here are three cases.

First, riding at 90 rpm on flat ground:

case 1

Then, the same condition except riding right-foot-only:

case 1

Then at 40 rpm on a 20% grade:

case 1

In each of these conditions inertia is sufficient that rpm and watts over a given pedal stroke are poorly correlated, so errors associated with assuming cadence is constant over a pedal stroke generally cancel: for the same power, there are cadence values distributed relatively equally above and below the average, so assuming the average is a decent approximation.

However, when I assume inertia is zero I get the following:

case 1

Here power and cadence are perfectly correlated. Above-average powers tend to have higher-than-average cadence, while below-average powers tend to have below-average cadence. The constant cadence approximation is a poor one.

So the result of all of this, I think, is that for round chainrings in real-life riding conditions, inertia has a strong influence over a given pedal stroke and the constant cadence approximation isn't too terrible an error. On a trainer, inertia will be relatively small without a heavy flywheel (or computer control), and the constant cadence approximation may not do as well.

Eccentric chainrings, however, are another matter. Their whole purpose is to correlate power and cadence. With them, the errors will be larger than for round chainrings.

Garmin is well aware of this issue and has stated that using the constant cadence approximation with round chainrings doesn't violate their claimed power accuracy. Indeed, except for unusual circumstances, like rapid accelerations from low cadence or exceptionally low-inertia conditions, it appears that statement is justified.