Thursday, January 31, 2013

low-order analysis of effect of crosswind on riding speed

Back in July 2009 I did a series of first-order calculations on the effect of various parameters on riding speed. Calculating speed from power is difficult to do explicitly, but to first order the calculations become straightforward. First-order analysis is where much of the intuition is, anyway. One of these calculations was the effect of wind resistance on riding speed. Then in November 2009 I extended that analysis to the effect of wind speed on riding speed.

To my dismay, I found an error in that result. I'd even rationalized the wrong result with incorrect arguments. I had to track the consequences of that error through the following two blog posts. I think I fixed everything. The corrected result was:

d s / d sw = 2f / [ 2f + 1 ‒ sw / s ]

where s is the rider speed, sw is the tailwind speed, and f is the initial fraction of retarding force due to wind resistance. There appears to be a singularity issue for strong tail-winds (see the denominator) but then if s = sw, f = 0, so further analysis is needed in this case.

However, this only covers a wind in the same direction as the rider. The wind may also have a cross component.

First, a digression... in a cross-wind it's tempting to think that there should be no effect on speed. After all, if the rider is going north, and wind molecules are striking him from the east, then they are imparting on him a momentum component to the west, which induces a force to the west. But his wheels keep him moving north, and instead of the wind inducing an acceleration to the west, he leans his center-of-mass to the east, using gravity to balance the wind force. I've found myself falling into the trap of this reasoning.

The error is that the force of the wind isn't proportional just to the momentum each air molecule carries relative to the rider (and the perpendicular component of that momentum is unaffected by rider speed) but also to the rate at which air molecules strike the rider. And that rate is what is affected by rider speed in a cross-wind.

Consider a stationary rider with cross-section A (including his bike) being hit by a wind from the side at speed sw. The volume of air striking the rider per unit time is sw A. Now the rider starts riding forward with speed s. The relative speed of the wind is now sqrt(s² + sw²) > sw. The result is he is struck by a greater volume of air per unit time than when he was stationary, assuming the cross-sectional area is direction-independent (not an excellent approximation, but without it the analysis gets too complicated for me to avoid making errors, something I'm likely to do anyway, as demonstrated already here). Relative to the rider, the wind has a speed in the headwind direction of −s, and a speed in the perpendicular direction of sw. The perpendicular momentum imparted goes into bike lean, and is thus canceled out without power penalty. But the momentum in the riding direction must be overcome with leg muscles, so it yields a drag force. The drag force is thus proportional to s sqrt(s² + sw²). This yields a power component proportional to s² sqrt(s² + sw²).

So if I define fw as the coefficient relating the square of relative wind speed to force magnitude, typically approximated by:

fw = ½ ρ CD A,

where ρ is the air mass-density, CD is the coefficient of wind drag, and A is the effective cross-sectional area of the rider with his bike (assumed angle-independent), then the component of power due to wind resistance pw is:
pw = fw s² sqrt(s² + sw²).   (cross-wind)

This is fairly close to the analysis for a wind along the direction of rider motion, which is, using the convention that a positive wind is in the direction of rider motion (tail-wind):

pw = fw s (s − sw)².   (tail-wind)
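To get a feel for the relative magnitudes, here's a minimal numeric comparison of the two expressions. The coefficient and speeds are assumptions chosen purely for illustration, not values fit to anything:

```python
# compare the wind-resistance power expressions above for a pure cross-wind,
# a pure tail-wind, a pure head-wind, and still air; fw, s, and sw are
# assumed values for illustration only
import math

fw = 0.25          # assumed 1/2 rho CD A, N/(m/s)^2
s  = 10.0          # rider speed, m/s
sw = 3.0           # wind speed, m/s

p_cross = fw * s**2 * math.sqrt(s**2 + sw**2)   # pure cross-wind
p_tail  = fw * s * (s - sw)**2                  # pure tail-wind
p_head  = fw * s * (s + sw)**2                  # pure head-wind
p_still = fw * s**3                             # no wind

print(p_still, p_cross, p_tail, p_head)
# the cross-wind value sits a bit above still air, but far less than the
# head-wind penalty: the cross-wind effect is second-order
```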

Back to the cross-wind: what I'm mostly interested in isn't the effect of wind on power, but more the effect of wind on speed. This is most readily considered under the assumption of constant power: dp = 0. For that, you need to add in the power due to climbing and rolling resistance, which is typically modeled as a weight-proportional force independent of speed. There are also drive-train losses, but the common (inaccurate) assumption is these are proportional to power, and thus unchanged under a constant power assumption.

First, there's wind resistance power. The differential of that is:

d pw = fw d [ s² sqrt(s² + sw²) ].   (cross-wind)

Initially I made the mistake of assuming this differential was zero. In other words, I assumed constant wind resistance power. But if the wind changes rider speed, then rolling power (which is mass-proportional) also changes. It's total power, not just wind resistance power, which is fixed. The differential of rolling power is:

d pm = fm d s

where fm is a coefficient for rolling power (which includes rolling resistance and road grade).

To solve for ds versus dsw, I set:

d p = d pw + d pm = 0

I'll spare the details, but the end result (which could be recast in terms of f, the fraction of propulsive power going into wind resistance after removing drivetrain losses but not rolling resistance) is:

d s / d sw = −fw s² sw / [ fw s (3 s² + 2 sw²) + fm sqrt(s² + sw²) ] .

For small cross-wind speeds the rate of loss of speed with respect to cross-wind speed is proportional to the cross-wind speed, which implies to lowest order the speed loss from cross-winds is proportional to the square of the cross-wind.
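The result can be checked numerically by holding total power fixed, solving for the speed at slightly perturbed cross-wind speeds, and comparing the finite-difference derivative to the expression above. A sketch, with fw, fm, and the operating point chosen arbitrarily:

```python
# numerical check of d s / d sw at constant total power; the coefficients
# and operating point below are arbitrary assumptions
import math
from scipy.optimize import brentq

fw = 0.25   # wind resistance coefficient, N/(m/s)^2 (assumed)
fm = 4.0    # rolling resistance + grade force, N (assumed)

def total_power(s, sw):
    return fw * s**2 * math.sqrt(s**2 + sw**2) + fm * s

def speed_at_power(p, sw):
    # solve total_power(s, sw) = p for s
    return brentq(lambda s: total_power(s, sw) - p, 0.1, 50.0)

s0, sw0 = 10.0, 3.0
p0 = total_power(s0, sw0)

h = 1e-4
numeric = (speed_at_power(p0, sw0 + h) - speed_at_power(p0, sw0 - h)) / (2 * h)
analytic = -fw * s0**2 * sw0 / (fw * s0 * (3*s0**2 + 2*sw0**2)
                                + fm * math.sqrt(s0**2 + sw0**2))

print(numeric, analytic)   # these should agree to several digits
```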

The important point is that while there is a first-order dependence of speed on head/tail winds, for cross-winds the dependence is second-order. It is therefore tempting to break the wind down into components: a head/tail-wind component and a cross-wind component, then, assuming each component is small, calculate a net dependence of rider speed on wind speed in each direction and sum. The problem with this is that second-order terms for the head/tail-wind were already discarded in the first-order analysis there, so the net second-order wind dependence will be incorrect if just the cross-wind component is included: a proper first-order analysis is that the cross-wind is irrelevant. So to capture the cross-wind dependence, the tail/headwind analysis should be extended to second order. But if one doesn't care about second-order errors unless the wind is close to a pure cross, then this approach might still be justified.

Monday, January 28, 2013

Calibrating Heuristic Bike Speed Model to 2011 Terrible Two winner

For the Terrible Two, I used data from Adam Bickett, who finished first (PDF results). A fast rider is good for model validation because he paced himself well and minimized rest time. The model doesn't include consideration of rest or recovery.

Rather than do any formal fitting, I fit "by eye". To do this I examined VAM (rate of vertical ascent), speed, and time relative to schedule, all versus distance.

The resulting parameters were the following. I won't attempt to assign error bars:

vmax: 17 m/s
v0: 9.5 m/s
VAMmax: 0.52 m/s
rfatigue: 2%/hr

I didn't try to fit the cornering penalty, which I kept at 40 meters / radian, and which seemed to work fairly well. Note the VAM number is comparable to Contador on Spanish beef, but this is climbing in the infinite-sine-angle limit (mathematically impossible) and actual VAMs produced by the model are substantially less.

delta t vs distance

First I plot how the time to a given point on the course compares between the ride data and the model (red curve, in seconds). In the plot, positive values imply the rider was slower to that point than the model predicted, while negative values imply the rider was faster. The black curve with green fill is the course profile, in meters. Click on the plot to expand it.

The start is neutral and the rider started out a bit slow, losing around 3 minutes in the first 14 km. Then the model did a good job matching the data through the climb and descent of Trinity Grade. On the flats pacelines can really fly, and the rider gained close to 4 minutes versus the model through 85 km. The rider stopped for 2 minutes then rode to km 98 where he stopped for 6.5 minutes, rode at close to model pace for 5 km, then stopped again, this time for 2 minutes. He then stayed within 2 minutes of modeled pace to the lunch stop at 174 km.

His stop here was extraordinary, just 1.5 minutes, then he tackled Skaggs Springs Road. The model somewhat overpredicts his climbing rate here, since he lost time on the climbs, around 6 minutes to the next rest stop, where he stopped for only 1 minute. He then lost another 1.5 minutes off the modeled pace out to the coast. But the ride down the coast usually has a tailwind and it appears 2011 was no different: he flew down the coast well faster than schedule, gaining back 4.5 minutes. Fort Ross went well: he lost only a minute, but the descent from Fort Ross is on rough roads and he descended these slower than the model predicts (it knows nothing of road conditions). He recovered a bit by the final rest stop, where he was remarkably brief (around 50 seconds) before jamming it ahead of model pace the rest of the way to the finish.

Overall he finished 10 minutes slower than the calibrated model. He spent close to 20 minutes total stopped, so his overall ride pace was 10 minutes faster than the model, all of that attributable to his speed down the coast and from the final rest stop to the finish.

Of course this isn't validation of the predictive aspect of the model since I crudely fit the coefficients to match these data. But the model, with reasonable coefficients set to rough precision, does seem to capture the rider's behavior over the course of the ride and on the different terrain fairly well.

Next, to gain further insight, I turned to rate of vertical ascent (VAM). For most analysis I use meters/second for speed, but for VAM I more often defer to the more traditional meters/hour. The model without fatigue (and, to a lesser extent, without the turning penalty) would yield a fixed VAM at a given grade, but with fatigue, climbs along different portions of the course yield different VAMs (later = slower). After smoothing, I decimate the data to 10-second time spacing to avoid clutter.

Here's the VAM versus grade.

VAM vs grade

In the background of the plot I put color contours corresponding to various estimated ride powers: less than 3.0 W/kg (blue) to more than 4.5 W/kg (red). It's hard to see, but these contours track the actual data worse than the heuristic model.

To investigate this further, I plot power versus road grade. The rider had a power meter, so power is measured directly. I smooth power in the same way I smooth altitude (and implicitly road grade) to maintain a proper correspondence. I also calculated power from the heuristic speed model. This required certain assumptions. I assumed CRR = 0.3% (roughly fit to data), CDA = 0.45 m², rider mass = 77 kg (fit to data), bike mass = 8 kg (a bit heavy, but I assume a pump and tools and partially filled water bottles), and extra mass = 2 kg (clothing, shoes, helmet). All that matters is total mass, so maybe rider mass is a few kg high but bike mass is a few kg low. I neglected inertial power since the heuristic model can predict artificially high accelerations coming out of corners (it's designed more for speed than accurate accelerations). I model air density exponentially decaying with altitude but that's insignificant on this route. I assumed still air for now.
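For reference, here's a sketch of the power-from-speed calculation just described. It's the standard force balance (rolling resistance + gravity + aerodynamic drag) using the parameter values quoted above, with the wind reduced to a simple along-the-road component and inertial and drivetrain terms neglected, so it's an approximation of the calculation rather than my exact code:

```python
# power required to hold speed v at a given grade (sine convention), with
# the parameter values quoted in the text; inertia and drivetrain losses
# are neglected, and only the along-the-road wind component is handled
import math

CRR  = 0.003        # rolling resistance coefficient
CDA  = 0.45         # m^2
MASS = 77 + 8 + 2   # rider + bike + extras, kg
G    = 9.81         # m/s^2
RHO0 = 1.225        # sea-level air density, kg/m^3

def air_density(altitude_m):
    # exponential decay with altitude (scale height ~8400 m)
    return RHO0 * math.exp(-altitude_m / 8400.0)

def power_from_speed(v, grade, altitude_m=0.0, headwind=0.0):
    rho = air_density(altitude_m)
    f_roll  = CRR * MASS * G                 # rolling resistance
    f_climb = grade * MASS * G               # gravity along the road
    v_air   = v + headwind                   # air speed seen by the rider
    f_wind  = 0.5 * rho * CDA * v_air * abs(v_air)
    return (f_roll + f_climb + f_wind) * v

print(power_from_speed(9.5, 0.0))    # flat road, still air
print(power_from_speed(4.0, 0.08))   # a steady climb
```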

power vs grade (no wind)

First, it's clear the explanation for the relative failure of the constant power assumption is the rider isn't riding constant power. His data generally increase with grade. For negative grades, he typically coasts (near-zero power) while for grades over 10% his power is solidly high. The heuristic model combined with the analytic bike power-speed model captures this behavior very well. It's important to remember that the heuristic model has nothing to do with the bike power-speed model. It simply tries to match typical behavior in speed. It's more common to model speed based on assumed power, but here I model speed directly, and from that derive power.

Note, however, that at near-zero grades and on descents down to -5% the model yields higher powers than the rider data. Since I intentionally matched the flat-road speed to the rider's flat-road speed, this isn't a matter of the heuristic model making different assumptions about rider behavior but rather of the power-speed model not accurately describing the riding conditions. The most obvious source of this inaccuracy is the zero-wind assumption. It was already noted the rider went quickly south along the coast, so I simply modeled this with an assumption of a north wind at a relatively modest 3 m/sec. Here's the result:

power vs grade (3 mps N wind)

Now there's a surplus of points near zero grade with higher powers than were measured. However, the trend is much better. The obvious simplification is the assumption of constant wind over the entire course as the rider traversed it. But I'm not going to try and model time-and-position-dependent wind.

This takes care of climbing, but what about descending? My a priori assumption on descending was that riders tend to ride up to a certain safe speed but no faster, until for grades steeper than -10% they slow. This is in contrast to the "terminal velocity" model where a coasting rider's speed will increase proportional to the square root of the sine of the road angle (which is close to road grade). I need to check that this assumption is consistent with the ride data. Here's a plot of speed versus grade, highlighting the speed at negative grades:

speed vs grade

You can see that indeed the rider behaved much like the heuristic model assumes. At grades much steeper than -10%, the speed drops.

All of this has validated that my relatively simple heuristic model is reasonably capturing the behavior of an exemplary cyclist on a hilly course. It seems like a fantastic stroke of luck since the formula I used has no basis in physics. But it's not luck: the reason is the model makes reasonable assumptions about hills (approaching constant VAM), descents (approaching constant speed), steep roads (decreasing speed when the road gets super-steep), then connects these extremes with an analytically smooth function. I put in an additional fitting parameter to match the speed on flat roads. Then I add a fatigue term and a small correction for curvy descents and that's basically it. The primary issue is I don't capture the effect of different conditions on different portions of the course. But the goal is simplicity and universality at the expense of precisely matching data for a particular rider on a particular course on a particular day.

If I were going to extend the model I'd probably add wind effects and road condition effects.

Wind would affect v0 only: gusting winds can affect the safe cornering speed, for example, but steep climbs tend to be shielded from the wind, and descending at the safety limit isn't systematically affected by steady wind. It's flat roads and gradual grades which are primarily affected, and v0 takes care of that. So v0 would need to be made a function of rider heading relative to the wind at a particular location on the course. For example, for the Terrible Two, I might model a wind based on proximity to the coast, which can be handled geometrically.

Road conditions would affect all three: vmax, v0, and VAMmax. For example, a dirt or heavily graveled road both increases the power for a given flat or climbing speed and affects the safe descending rate.

These enhancements would improve the fit to the Terrible Two data. But for most purposes, I think the present model strikes a good balance between simplicity and accuracy.

Saturday, January 26, 2013

customization of cycling heuristic speed formula

Recently I've been playing with a heuristic power-speed formula for cyclists on mixed terrain. I originally developed this for time-domain smoothing of hill profiles for rating the climbs: converting from altitude and distance to time requires a model for speed versus grade. It was designed based on two philosophies: that on descents a rider approaches a maximum safe speed, going no faster, and on climbs the rider approaches a maximum sustainable rate of climbing. Between these extremes I wanted it to be analytic (continuous in all derivatives). Here was the formula:

v = vmax / [ 1 + ln | 1 + exp( 50 g ) | ] .

Here g is the sine of the road inclination angle and vmax is the maximum safe descending speed. It worked great for this purpose. However, my interest in the model deepened when I wanted to apply it to predictions for riding times on the Portola Valley Hills course of the 2013 Low-Key Hillclimbs to establish checkpoint cut-off times. So for that I added two enhancements: a time penalty for turning at near the maximum safe descending speed, and a speed reduction term for climbing or descending at extremely steep grades.

For the steep hill portion, I added an exponential term which kicks in extremely strongly at steep grades:

v = vmax exp[ −(3 g)⁴ ] / [ 1 + ln | 1 + exp( 50 g ) | ] .

This steep hill penalty is realistic only if the grade data is realistic. Best would be to use data from an iBike or similar, which has an inclinometer and thus measures grade directly. But lacking that, it's important to smooth altitude data with respect to distance first: this reduces the problem of a 1 meter error flip in measured altitude causing a huge spike in extracted grade for two closely-spaced points. I used a biexponential smoothing function (application of an exponential filter in each direction, forward and reverse: more details here) with reference length 50 meters. This is a good distance anyway because it's characteristic of inertia, and I don't model inertia here (I could, but I'm lazy). The smoothing function does a decent job of it, though.
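For the curious, here's one way the biexponential smoothing can be implemented: an exponential filter run forward over the data, then run in reverse over the result. The exact convolution I use differs in detail, but the behavior is similar:

```python
# biexponential smoothing sketch: forward then reverse exponential filter
# versus distance, with the 50 meter reference length from the text
import numpy as np

def biexponential_smooth(values, distance, length=50.0):
    y = np.asarray(values, dtype=float)
    d = np.asarray(distance, dtype=float)
    fwd = y.copy()
    for i in range(1, len(y)):                    # forward pass
        a = np.exp(-(d[i] - d[i - 1]) / length)
        fwd[i] = a * fwd[i - 1] + (1 - a) * y[i]
    out = fwd.copy()
    for i in range(len(y) - 2, -1, -1):           # reverse pass
        a = np.exp(-(d[i + 1] - d[i]) / length)
        out[i] = a * out[i + 1] + (1 - a) * fwd[i]
    return out
```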

For the turning part, I added time, split evenly between the interval preceding and the interval following each point, equal to a time per radian (Δtradian) of turning multiplied by the heading difference in radians. I'll make a slight change to that here: instead of a fixed time per radian I'll assume a fixed distance per radian. With this correction faster and slower riders will each be fractionally affected the same by the same corner:

Δtturn = (Δsradian / vmax) ( v / vmax )² |Δθ| ,

where θ is the heading. Some care must be taken to avoid the heading flipping by 2π: for |Δθ| > π I take 2π − |Δθ| instead.

I used Δsradian = 40 meters. This means a rider going at his maximum safe speed of 20 meters/second (72 kph) approaching a one-radian (57 degree) corner is delayed 2 seconds, half decelerating into the corner, half accelerating out.

This equation requires some justification. First, it's linear in Δθ, independent of the tightness of the turn. This is over-simplistic, but with typical GPS precision and sampling rates I lack adequate precision on the details of the corner to be more detailed. Second, it's in terms of time rather than velocity. That's also because of variability in sampling rates: the same rider going through the same corner should take the same net time whether it's with Garmin 1-second sampling, Strava-app 3-second sampling, or Garmin "smart sampling" which could be up to 8 seconds. So I focus on time. Third, it is scaled proportional to the square of the ratio of speed to vmax. This is because slowing in corners is inspired by safety, and only if a rider is going close to the limit of safety will a maximal deceleration be necessary: a speed-squared dependence seemed about right to me. Then there's the distance cost per radian of turning. This is obviously rider-specific: some riders turn faster than others, but I assume generally the time cost is inversely proportional to vmax, so specifying a distance cost is more general than a time cost per radian.
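In code, the turn penalty looks something like this; the 40 meter distance cost and the heading-wrap rule are the ones described above, and the vmax default is just an example value:

```python
# turn penalty sketch: extra time charged for a heading change dheading
# (radians) taken at speed v; ds_radian = 40 m as in the text
import math

def turn_delay(v, dheading, vmax=15.0, ds_radian=40.0):
    dtheta = abs(dheading) % (2.0 * math.pi)
    if dtheta > math.pi:                  # avoid the 2*pi heading flip
        dtheta = 2.0 * math.pi - dtheta
    return (ds_radian / vmax) * (v / vmax)**2 * dtheta

# the example from the text: a one-radian corner at a maximum speed of 20 m/s
print(turn_delay(20.0, 1.0, vmax=20.0))   # 2.0 seconds
```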

This was fine but it had too few fitting parameters to fit actual ride data. The 50 in the exponent is based on the ad hoc assumption that the theoretical peak rate of altitude gain on climbs is 2% of the peak safe speed on straight descents. Additionally there is the implicit assumption that the speed on flat, straight roads = vmax / (1 + ln 2) ≈ 59% of vmax. Each of these assumptions is plausible but there's no reason to believe they're universal or even representative of any particular cyclist subpopulation.

So a more general model would recognize that rider speed versus terrain is characterized by several parameters which describe a typical rider's ability and behavior in the context of a particular ride. First, there's the maximum sustainable speed on straight descents, which I already included. Second, there's the maximum theoretical climbing rate (in the absence of any rolling resistance and with a tail wind matching speed). Third, there's the sustainable speed on flat, straight roads. There are also the obvious parameters of the time cost of turning, the exponent of how that varies with speed, and the coefficient on the exponential term for super-steep roads. I'll resist adding those, though, to limit the number of fitting terms.

Here's the more generalized model, then, considering the straight-road speed to which the turning correction must still be applied:

v = vmax exp[ −(3 g)⁴ ] / [ 1 + ln | 1 + Kv0 exp( KVAM g ) | ] ,

where KVAM = vmax / VAMmax and Kv0 = exp(vmax / v0 − 1) − 1, with VAMmax the maximum sustainable rate of vertical ascent, v0 the sustainable speed on flat, straight roads, and vmax the maximum sustainable speed for descending.
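As a sketch, the generalized formula (including the steep-grade term) looks like this in code. The parameter values are the ones fit to the 2011 Terrible Two data in the next post, used here purely as an example:

```python
# generalized heuristic speed versus sine-convention grade g; parameter
# values are the rough Terrible Two fit (vmax, v0 in m/s, VAMmax in m/s)
import math

def speed(g, vmax=17.0, v0=9.5, vam_max=0.52):
    k_vam = vmax / vam_max
    k_v0  = math.exp(vmax / v0 - 1.0) - 1.0
    steep = math.exp(-(3.0 * g)**4)       # steep-grade penalty
    return vmax * steep / (1.0 + math.log(1.0 + k_v0 * math.exp(k_vam * g)))

print(speed(0.0))     # returns v0 on flat ground, by construction
print(speed(0.08))    # a steady climb
print(speed(-0.06))   # a moderate descent
```

The fatigue correction described next would simply scale v0 and VAMmax with elapsed time before evaluating this.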

There's an additional problem: it assumes riders always ride at the same relative pace. Low-Key running legend and accomplished masters ultrarunner Gary Gellin has shown nicely that top ultra runners generally run "positive splits" on multi-lap courses, running slower in a second lap than a first lap, even with improving conditions. Analysis of Race Across America results shows the winner virtually always slows as the race progresses. These are just a few examples. The reality is riders have a tendency to slow as a long ride goes on: I know this applies to me. The primary exception is probably tactical races. So if the goal is to create realistic rider schedules, a fatigue factor needs to be considered.

The simplest approach to this is to assume power-related speeds decay exponentially with time. But if power fades at a certain rate, climbing speed will fall relatively faster than flat-land speed, since wind resistance increases rapidly with speed while climbing power increases proportional to speed. I'll assume flat speed decays at half the rate of climbing speed. The maximum safe descending speed should remain unchanged as long as the ride remains in daylight. The rate of decay is another fitting parameter. So, for example:

v0 = v0,init exp(−rfatigue t / 2)
VAMmax = VAMmax,init exp(−rfatigue t)

rfatigue is approximately the rate of power reduction per unit time of riding. For example, in the Terrible Two example I'll describe next time, 2%/hour seemed to work fairly well, which is a drop in power to 82% over 10 hours.

I'll look further at calibrating this model to Terrible Two data next.

Tuesday, January 22, 2013

Caltrain commute: by foot or by bike?

Until last fall, I would ride my bike to the Caltrain station, take the train to Mountain View, and ride from there to work. Since then, however, I've been leaving the bike at home.

This is because in August I started training for early December's California International Marathon, then after that redirected my focus to the Napa Marathon. Napa is looking extremely grim due to extended training lost from post-holiday-travel sickness, but I prefer to not dwell on that. I've continued to go by foot to the train.

It's interesting to compare the relative times, using the 314 train which departs San Francisco @ 7:14 am.

Caltrain
Another day on the Caltrain bike car (StreetsBlog)

Train:

  1. Leave home at 7:02 with bike.
  2. Ride to 4th and King station. 22nd Street station is closer (0.8 miles versus 1.3 miles), but if I go there I risk the bike spaces filling up before I arrive.
  3. It takes me around 7 minutes to ride to the 4th and King. However, if I leave any later than 7:02, I risk the bike spots filling before I arrive.
  4. The train is scheduled for Mountain View at 7:58 am. I must close my laptop 4 minutes earlier, since I need to unstack my bike from the 4-deep racks.
  5. The ride from there to work is 2.4 miles and takes me approximately 15 minutes, getting me to work at approximately 8:13 am. This is the worst part of the trip, since the ride crosses several major freeways and freeway-like roads.

Foot:

  1. Leave home at 7:08.
  2. It takes me approximately 1 minute to leave, and 7.5 minutes to run to the train station @ 22nd Street, getting me to the train by 7:17 am.
  3. The train is scheduled for 22nd Street @ 7:19 am, but there is no issue with available seats without a bicycle.
  4. The train is scheduled for Mountain View at 7:58 am. I close my laptop when the train arrives.
  5. At Mountain View, I get on the VTA Light Rail train waiting at the adjacent platform. I log into wireless there.
  6. The light rail leaves at 8:03 am.
  7. The light rail deposits me at my nearby stop at approximately 8:08 am.
  8. I walk 0.5 miles to work. This is the worst part of the trip, as the roads here are pedestrian-unfriendly, and I spend my time waiting for long-period, low-duty-cycle pedestrian walk signals, and dodging second-hand-smoke from fellow commuters. It takes approximately 8 minutes.
  9. I arrive at work at 8:16 am.

So for the morning commute, I estimate walking is three minutes faster door-to-door (I leave six minutes later and arrive only three minutes later) and includes an extra 4 minutes of productivity on the train. Additionally I trade time riding for time logged into the light rail wireless. I enjoy this time, since the rail is comfortable, and while riding is always fun, crossing freeway ramps isn't my favorite.

The return is somewhat similar. However, there are two primary differences. One is the ride back home from 22nd Street Station involves a 26% grade, and is painful. The other is that the light rail is poorly synchronized with the Caltrain in that direction and I often need to wait up to 15 minutes at the Caltrain platform. Fortunately if I wait on the light rail platform instead I get wireless. I prefer walking home from the 22nd Street station to riding it, although I lose the flexibility of extending the trip to stop at stores not directly on the route. Bus options (MUNI) are very limited and slow.

The clincher is I don't need to deal with storing my bike in my cube at work. I could store it in a locker outside, but that would add more time to my bike commute.

So walking/running gets the win, unless I plan to do a ride at lunch, or unless I need to go somewhere else either during my morning or evening commute.

Monday, January 21, 2013

Applying speed model to Terrible Two and testing cosine-squared grade distribution

In the previous post I applied a heuristic power-speed model to a simple distribution of road grades. Instead of assuming grades are normally distributed, I assumed they are distributed proportional to cosine-squared. The cosine-squared distribution goes truly to zero. Since the normal distribution is characteristic of random processes, per the central limit theorem, it's widely applicable, but roads are designed, not random, so it's reasonable to expect a deviation from normal.

With this distribution, each set of similar roads is described by a parameter, gmax, which is the maximum grade encountered. I decided to apply this to Terrible Two to see what I get.

I use 2011 Strava data from Adam Bickett, who rode a blazing fast T2 with minimal rest stops. Data from fast riders is best, because they are least likely to have traversed back and forth on steep climbs, which would reduce the apparent grade. They additionally are least likely to have spent time in rest stops, and moving back and forth in rest stops provides further misleading data.

I smoothed the data with a 50 meter biexponential smoothing function (50 meters of road distance). This reduces the effect of altitude errors yet retains any hills of meaningful duration (anything shorter than 50 meters can be handled with inertia). I then resampled at 50 meter intervals and generated a histogram with 1% bins. Here's the result:

histogram

You can see the cosine-squared distribution doesn't fit the data globally, but two such distributions superposed do a very good job by qualitative assessment. 52% of the distance is characterized by gmax = 4%, and 48% by gmax = 18%.

My fitting here was quick and crude and "by eye". In retrospect I think the steep portion is slightly underweighted and the flat section slightly overweighted. But this is just a rough analysis.

This bimodality is consistent with perception on the Terrible Two course: there's a lot of relatively flat and a lot of brutally steep. Combined, they yield the overall statistics Strava reports: 313.1 km with 5,476 m of climbing. Note this is a bit less than 200 miles, which is well documented: the course was shortened relative to the original. 313 km is quite enough, I assure you.

Applying my formula to these distributions is likely optimistic since the descents on Terrible Two are relatively slow due to the frequent corners and exceptionally rough Sonoma County asphalt (and in places no asphalt). On the other hand, the section on the coast has typically a considerable tailwind not balanced by any strong headwind sections, so this partially balances that error.

On the steep section, I get δ = 0.491 (the fractional time increase versus a flat road). For the flatter section, I get 0.036. A weighted average (48% steep, 52% flat) yields 0.25. So Terrible Two should take around 25% longer than a flat route at the same general pace. But taking 25% longer tends to increase fatigue. Assuming 5% slower per 2× increase in time, this increases to 27%. So the model predicts Terrible Two should be around 27% slower than a flat course, neglecting the asphalt, turns, and winds. This seems about right.
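As a sanity check, the same numbers fall out of the fitted δ(γ) expression from the January 19 post, taking γ = 0.149 gmax for each sub-distribution (this assumes those fitted coefficients carry over unchanged):

```python
# check of the Terrible Two delay estimate using the fitted delta(gamma)
# expression from the January 19 post
import math

def delta(gamma):
    return 3585.0 * gamma**2.19 * math.exp(-46.7 * gamma + 393.4 * gamma**2)

d_steep = delta(0.149 * 0.18)            # gmax = 18% portion
d_flat  = delta(0.149 * 0.04)            # gmax = 4% portion
d_net   = 0.48 * d_steep + 0.52 * d_flat
print(d_steep, d_flat, d_net)            # roughly 0.49, 0.036, 0.25

# add ~5% per doubling of ride time for fatigue
print((1.0 + d_net) * 1.05**math.log2(1.0 + d_net) - 1.0)   # roughly 0.27
```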

Sunday, January 20, 2013

narrow handlebars trending: Adam Hansen @ Tour Down Under

I've long been frustrated at the lack of bars in my preferred size, 40 cm o-o or 38 cm c-c. My shoulders are relatively narrow, and when I go to wider bars, I feel like I'm in a less stable position, with arms spread out. If I'm holding the plank position, I want my hands directly under my shoulders, and on the bike it's the same. This is the position of greatest mechanical strength.

On top of this, narrower bars are more aerodynamic and have less of a cross-section for contact with adjacent riders. It's win-win.

Yet for some reason wide bars have been popular. People feel they can get more leverage on wide bars. But power on the bike comes from the legs, not the arms (tests have confirmed this). To maximize this, you want to have as smooth and fluid a pedal stroke as possible. That implies staying balanced on the bike, not wildly thrashing about. So I think any benefits of wide bars except in violent sprints are over-rated.

But what about violent sprints? Andre Greipel is perhaps the most powerful sprinter in European cycling and he won the prelude race to the Tour Down Under in Australia yesterday. A key leadout man is Adam Hansen. Here's a photo of Adam at the race:

Adam

Adam is running 38 cm bars this year, the same size I prefer. And Adam's significantly larger than I am. I copied the image of the bars to just in front of his shoulders to show the bars are clearly narrower than his shoulders:

bars on shoulders

Saturday, January 19, 2013

modeling average speed versus hilliness

Recently I described a heuristic bike-speed formula which I thought came close to describing the behavior of a typical cyclist riding on hilly terrain. On climbs the rider tends to go relatively hard, approaching a constant rate of vertical ascent (VAM) until the road becomes too steep for gearing and balance, then the VAM drops. On descents, the rider's speed increases until it reaches a perceived safe limit, with additional delays for curves.

It's interesting to take this model and predict how hills affect average speed. To first order, in other words for hills of near-zero grade, a rider goes a bit faster uphill, a bit slower downhill, and on average the speed is the same. But for any significant grades this is no longer the case.

First, I'll run the model in terms of a "modified grade", which is the climbing per unit distance of travel, the sine of the angle, rather than conventional grade, which is the climbing per horizontal distance, the tangent of the angle. I do this because it avoids extra math, and it's more closely tied to the power-speed equations.

I start the analysis by assuming there is a particular statistical distribution of road grades on any course. By grade, I don't mean for 1 cm, but rather for distances beyond which inertia can extend. It's tempting to assume a normal distribution for grades per the Central Limit Theorem, but road grades don't happen by chance; they happen by design. A course over mixed roads will encounter a range of grades, but at some point there's a hard cut-off. This is important because my speed model includes a term proportional to an exponential of grade to the fourth power for the time taken to ride a section of road, so if the statistical probability of grades only falls off proportional to an exponential of grade to the second power, pathologically large grades will dominate the result. Since these grades don't exist on bikeable routes, a statistical distribution which truly goes to zero is needed.

So I used the square of a cosine, a nicely analytic function which goes to zero. The probability function of a given grade is then:

P(g) = cos²(π g / (2 gmax)) / gmax

This can be simplified:

P(g) = (1 + cos(π g / gmax)) / (2 gmax)

I can then integrate it:

∫ P(g) dg = [ g / gmax + 1/π sin(π g / gmax) ] / 2

So then if I want to calculate the delay associated with grades from (g − Δg /2) to (g + Δg /2) I can take the definite integral of the probability distribution over the interval and multiply it by the inverse-speed associated with the speed at grade g. That provides an approximation to the amount of time spent in that grade range divided by the total course distance.
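Numerically this is just a short loop. Here's a sketch using the heuristic speed formula from the earlier posts; the choice of vmax cancels out of the fractional delay, and the turning penalty and fatigue are ignored:

```python
# fractional time delay versus a flat road for the cosine-squared grade
# distribution, using the heuristic speed formula (steep-grade term included)
import math

def speed(g, vmax=15.0):
    return vmax * math.exp(-(3.0 * g)**4) / (1.0 + math.log(1.0 + math.exp(50.0 * g)))

def fractional_delay(gmax, vmax=15.0, n=2000):
    dg = 2.0 * gmax / n
    inv_v = 0.0
    for i in range(n):
        g = -gmax + (i + 0.5) * dg
        p = (1.0 + math.cos(math.pi * g / gmax)) / (2.0 * gmax)   # P(g)
        inv_v += p / speed(g, vmax) * dg
    return inv_v * speed(0.0, vmax) - 1.0

print(fractional_delay(0.10))   # delay for gmax = 10%
```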

If I want to go from this statistical distribution of grades to total climbing for the course, total climbing is the weighted sum of positive grades (since I'm using an effective grade equal to the sine of the road angle) multiplied by total course distance. I'll call that γ, a grade-like quantity representing the total climbing divided by the total course distance. For example, a threshold which was recommended to me by Bill Bushnell as a metric for an exceptionally steep century ride is 100 feet of climbing per mile (10 thousand feet per 100 miles). This corresponds to γ = 1.89%.

I can calculate γ as follows, where the integral is from 0 to gmax (ignoring descents):

∫ P(g) g dg =
∫ (1 + cos(π g / gmax)) ( g / (2 gmax) ) dg =
gmax [ z² / 2 + z sin(π z) / π + cos(π z) / π² ] / 2,

where I've substituted z = g / gmax. I evaluated this using The Wolfram Integrator, since my analytic integration skills are lame (it smells like integration by parts).

After evaluating this at the end-points, which are z = 0 and 1, the sine term goes away:

γ = gmax [ 1/2 − 2/π² ] / 2

Or evaluating this numerically yields:

γ = 0.149 gmax

Or conversely:

gmax = 6.72 γ

So this suggests if I know the total climbing per unit distance, and I assume the grades approximate a cosine-squared probability function, then the maximum grade is around 6.72 times the total climbing per unit distance. For 100 feet per mile of total climbing, this corresponds to 12.74%.
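A quick numeric check of the coefficient and of the 100 feet per mile example:

```python
# gamma = gmax (1/2 - 2/pi^2) / 2, so gmax = gamma / that coefficient
import math

coef = (0.5 - 2.0 / math.pi**2) / 2.0
print(coef)              # ~0.149
print(1.0 / coef)        # ~6.72
gamma = 100.0 / 5280.0   # 100 feet of climbing per mile
print(gamma / coef)      # ~0.127: a maximum grade near 12.7%
```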

Realistically this is probably low, because it neglects the observation that roads often have extended flat portions and isolated hills. It applies more to rolling hills where the road is constantly going either up or down. So it's just a plausible example, not necessarily typical.

With this assumption, the following fractional time increase versus climbing density results:

plot

So where the y-axis value is 0.2, a course would take 6 hours instead of 5 had it been flat, neglecting fatigue. I'll define δ to be this fractional delay.

I played around with various equations to model this and found an excellent fit with the following differential equation:

dδ/δ = (dγ/γ) [ 2.19 − 46.7 γ + 786.8 γ² ]
d ln δ = 2.19 d ln γ − 46.7 dγ + 786.8 γ dγ

Integrating yields

ln δ = 2.19 ln γ − 46.7 γ + 393.4 γ²
δ = K γ^2.19 exp(−46.7 γ + 393.4 γ²)

I did a one-parameter fit to determine K:

δ = 3585 γ^2.19 exp(−46.7 γ + 393.4 γ²)

The fit is shown on the prior plot.

This model assumed the course was uniformly hilly. However, typical courses aren't: they have flat sections and hilly sections. If I assume a fraction f of a course is hilly and a fraction 1 − f is flat with negligible climbing, then the climbing density of the hilly section is γ / f. From this, I get a net δ:

δ = 3585 f (γ / f)^2.19 exp(−46.7 (γ / f) + 393.4 (γ / f)²)

This, however, is also a gross simplification because even flat sections have finite climbing density. But it gives me an extra knob to turn: one for total climbing, one for how steep the climbs and descents are.
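Putting the fit and the flat-fraction knob together, a minimal sketch:

```python
# fitted fractional delay versus climbing density gamma, with a fraction f
# of the course hilly and the rest assumed perfectly flat
import math

def delay(gamma, f=1.0):
    gh = gamma / f     # climbing density of the hilly fraction
    return f * 3585.0 * gh**2.19 * math.exp(-46.7 * gh + 393.4 * gh**2)

# e.g. a 100 feet/mile course, uniformly rolling versus half flat:
print(delay(0.0189))
print(delay(0.0189, f=0.5))
```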

So overall, the calculation yields an interesting and plausible result for the effect of rolling terrain on a typical cyclist's net pace.

Friday, January 18, 2013

Racing Weight (Matt Fitzgerald): Preview Review

Lance

Racing Weight

I just finished the Kindle preview chapters of Racing Weight by Matt Fitzgerald and I already have some critiques. The book also has a website. I'm sure more comments come later if I buy the book.

Note this is the first edition of the book, published 2009. The second edition, published in December 2012, appears to be not yet available on Kindle.

First, an obvious point. He uses as an example of the advantage of weight loss on performance the now infamous case of Lance Armstrong. Lance lost weight from cancer and went from getting passed in time trials to dominating them. We now know there's a lot more to that than losing his "linebacker's build" from cancer. Of course the full story came out after the book was published in 2009, but this was after David Walsh's book, so there was certainly plenty of evidence out there that Armstrong had "confounding factors", and I'd think he could have used a more scientifically sound example.

The section of main interest here is "The Right Body For the Job". He goes through various endurance sports and describes the optimal body type for each.

Cross-Country Skiers

Regarding cross-country skiers he writes:

The average height of an olympic cross-country skier is 5 foot 10 inches, and that of their female counterpart is 5 foot 7 inches. Height provides a mechanical advantage for poling, which is important for the generation of forward thrust for a cross-country skier. However, with height comes mass, and mass is the enemy of performance in cross-country skiing because it increases gravitational and frictional resistance. That's why you don't see as many 6-foot-5 athletes on the competitive ski trails as you do, say, on the volleyball court.

This is promising but not very insightful. The male height is close to the population average for cultures with ample access to protein-rich foods, the female height slightly taller. Why would the mechanical advantage dominate up to the average US male height, with the weight penalty becoming dominant only beyond that? It's quite vague.

Cyclists

On cyclists, he argues:

Whereas power-to-weight ratio is the critical variable in climbing, in time trialing it is raw, sustainable power output which matters most.

This would be true if weight wasn't correlated with cross-sectional area, except it is. Consider a simple model of constant BMI. Weight is proportional to width multiplied by height multiplied by depth. If I assume height is proportional to the square root of weight, and depth is proportional to width, then width and depth are each proportional to the 4th root of weight. If I assume area is proportional to width times height, neglecting depth, then area is proportional to the 3/4 power of weight across a population.

But for a given rider, height is fixed, so in this case cross-section will increase proportional to the square root of weight. If I further assume that the bike is responsible for approximately 30% of total wind resistance, that leaves 70% to the rider, and then a 1% increase in weight will result in a 0.35% increase in wind resistance, assuming the body has a fixed coefficient of drag. So if you are training for a big flat time trial and you gain 3% of mass but increase power 1%, feeling good about it because you read in this book that power was all that mattered, the analysis suggests you're no better off from the perspective of wind resistance, and adding in rolling resistance likely leaves you in the red.
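A quick check of that arithmetic, with the square-root area scaling and the 70% rider share as the stated assumptions:

```python
# 3% mass gain vs 1% power gain for a flat time trial
mass_ratio  = 1.03
power_ratio = 1.01
# at fixed height, frontal area scales roughly as sqrt(mass); the rider is
# assumed to account for ~70% of total CdA
cda_ratio = 1.0 + 0.7 * (mass_ratio**0.5 - 1.0)
print(power_ratio / cda_ratio)   # essentially 1 (actually just below it):
                                 # no net aerodynamic benefit
```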

So size, while not as detrimental as it is to climbing, is nevertheless still very important for time trialing, even on flat roads.

He goes on to say, comparing cyclists to runners who have smaller legs on average:

Cyclists have greater leg muscularity because legs do essentially all the work in cycling whereas running is a full-body activity.

I propose an alternate explanation: when cyclists ride, their legs spin around, coupled together by the pedals. When one leg drops, the other rises, the falling leg pushing up the rising leg. There is no fundamental energy dissipation with this motion, although in practice there will be energy losses associated with spinning the legs in small circles.

Runners also experience energy dissipation from spinning limbs around, which they do at roughly the same rate as cyclists, but the difference is the legs aren't coupled together, so muscular work must be done to lift each leg separately. There is thus an additional energy cost to raising each leg which isn't present for cyclists. And there's not much pay-back for lifting the legs: when the foot crashes back to the ground, much of the energy is lost (shoes can store and return only around 10 joules, a small fraction of the peak potential energy). Whereas with a cyclist, as the leg drops it does continuous work on the chain, propelling the bike.

The result is a cyclist gets power from leg muscles at less cost than the runner. The runner pays a greater penalty for leg mass. Indeed, leg and foot mass, which must be raised and lowered each step as well as accelerated forward, is the most costly mass for a runner. If you had to carry a pack when running, you'd put it on a relatively inert position like the back or waist rather than on your foot. I'd never, for example, use one of those shoe lace-attached key packs when running a race if I had an option.

Therefore people with powerful, muscled legs are disadvantaged when running more than when cycling. The best runners are those with slimmer legs. Mechanically the slimmer legs are more efficient: power may be less, but required power is less still, disproportionate to the difference in total body mass.

Rowers

On rowing:

Of course, more muscle means more power in every endurance sport, but unlike in other endurance sports, that mass comes at no cost in rowing, because there is no gravitational resistance to overcome and the extra weight has very little effect on frictional resistance between the boat and the water.

When I showed up at college as a Freshman, I was immediately recruited to be a coxswain on the rowing team because I was small and light. I instead joined the sailing team, to which I was also well suited. And there was good reason for this: because boat drag is strongly dependent on weight. In equilibrium a boat displaces its weight (with the weight of its contents) in water. More weight = more water displaced. And displacing water takes energy. So you want a boat to be as light as possible (which is why they're made of carbon fiber) and you want the crew as light as possible. Of course, increased power may offset increased weight, which is why good rowers are muscular, but they're also exceptionally lean, because only weight directly contributing to propulsive force is justified.

But I further disagree that "more muscle means more power". I suppose it depends on how you consider power, but I consider power to be useful power available for propulsion. If I have beefy arms as a cyclist, it seems plausible the metabolic load of supporting this additional mass, since muscle requires blood flow, will rob my legs of blood flow which they could better use to help me move the pedals. Rowers happen to use a wider range of muscles than cyclists, but they still want muscle development optimized for rowing, no more.

Swimmers

The interesting discussion here is that swimmers have higher body fat than other endurance athletes. The question becomes whether this is simply because body fat doesn't matter as much for swimmers or because it actively helps?

The proposed hypothesis is that the buoyancy of fat is an advantage. For open water swimming, insulation would be an additional advantage, but this probably isn't the case for thermoregulated pools. I don't think you can assume any characteristic feature of world-record-class swimmers is accidental or due to neglect.

Runners

I've one issue with his discussion of runners. He notes that elite endurance runners are relatively small due to the advantage of being light. I completely agree. But I don't think this is obvious.

It might seem intuitive that taller runners would be faster. There's a guy at work I've run with who's much taller than I am: he's maybe 185 cm, while I'm 169 cm. When we run together, I feel like I'm struggling to keep my legs turning over fast enough while he takes advantage of a longer stride to just glide along.

Having made an amateurish attempt at working out running power-speed equations, I'll take a stab at explaining why our difference in height alone isn't the reason he's faster. Here's the formula I derived:

P = M g² (1 − 2 C Lstep / v)² / 16 C +
  [ 2 ( Mfoot g hfoot − Eshoe ) + Mk,foot v²/ (1 − C Lstep / v)² ] C.

I define terms in the original post, but in summary: M (total mass), Mfoot (leg mass lumped into an effective foot mass), C (running cadence: the rate at which each foot lands per unit time), v (run speed), Lstep (how far the body moves while each foot is planted), g (gravitational acceleration), and Eshoe (energy stored and returned in the sole of each shoe, around 10 joules). Note both C and Lstep appear in multiple places, either adding to or subtracting from total power, suggesting optimum values for each.

One way to increase stride is to keep your planted foot on the ground for the same amount of center of mass motion and just launch your body into a higher trajectory, landing further. This comes with a high energy cost, because you've got to jump higher off the ground with each foot-stroke, and much of the energy is lost when you land, so it needs to be re-supplied next step. The other way is to keep your foot planted for more center-of-mass motion progress. This reduces the ballistic time, which reduces the amount of potential energy you need to supply, but it also increases the amount you've got to swing that foot and leg forward as well as reducing the time available to swing it forward before it lands ahead of your body. You thus need to supply additional kinetic energy for this to happen. And of course there's flexibility constraints on how long you can leave your grounded foot planted.

Legs get thicker as they get longer, so the mass increases disproportionate to the length, and energy increases proportional to mass, so longer, thicker legs with a longer stride aren't necessarily more energy efficient. In sprints, energy efficiency isn't as important as muscular strength, but in endurance events, the heart & lungs are limiting, so energy efficiency is more critical. The result is shorter, skinnier legs can beat out longer, thicker legs over long distances as long as flexibility is sufficient to cover the required angle of motion. At some point, though, the running cadence needed to keep up becomes too great given the limits height imposes on the center-of-mass motion over which the ground foot can stay planted, and so midgets don't win marathons.

This running discussion is probably too much detail for what is essentially an introduction, so I don't blame Fitzgerald for omitting a detailed discussion of running kinetics. However, even his short descriptions of the issues in the other sports border on misleading. It's a disappointing start to a popular book. But his strength is in nutrition, so perhaps that aspect is better.

Thursday, January 17, 2013

Low-Key Hillclimbs: 2013 challenges and Portola Valley Hills

2013 Low-Key Hillclimbs

I spent a lot of time recently working on the web pages for the 2013 Low-Key Hillclimbs. Every year this series takes an enormous amount of time and every year I wonder if I want to scale it back or maybe even not do it again but the enthusiasm of participants always keeps me going, and every year I tweak things a bit to keep it fresh. For example, in 2012 I blew out my previous constraints on scoring code complexity, but the result was a much fairer balancing of scores week-to-week. Additionally 2012 was the first year using GPS timing, which worked extremely well for week 7 up Montara Mountain.

This year brought two big changes. The first is "coordinator's choice", where coordinators could pick their own climbs (rather than attempting to pick a balanced selection myself); this yielded an excellent schedule. But the big one is week 4: Portola Valley Hills, a GPS timing week much more complex than last year's dirt climb.

The way to think about the timing this week is as a series of checkpoints defined by timing gates. The timing gates extend from left to right, substantially beyond the width of the road, to accommodate GPS position error. The riders must cross the timing gates in the correct direction and in sequence. Crossing a gate in the wrong direction is ignored. If a rider re-crosses an earlier gate, the rider must continue the course from that point. If the rider completes the entire course multiple times, the fastest time is retained.

I do two things with these "gates": (1) timing for the full course, (2) split times along the course.

For timing, I designate certain checkpoints (gates) as mandatory, others as optional. All of the checkpoints in Portola Valley Hills are mandatory. If any are skipped or otherwise not crossed properly, the course is considered not completed. The route between checkpoints isn't checked, so I need to make sure I have enough checkpoints that the shortest and quickest route between them is the official course.

Adjacent pairs of mandatory checkpoints define course segments. Course segments may have an optional time budget. If the segment is traveled in less than or equal to the time budget, that segment doesn't contribute to the total time. If more than the time budget is taken, then the time budget is subtracted from the time taken for contribution to the total time.

For Portola Valley Hills, I use time budgets to allow riders ample time to get from the top of one hill to the base of the next. I don't want them to feel rushed here, but I want the ride to be essentially continuous. I don't want riders stretching the day out to optimize recovery between timed climbs: that would be unfair to those with reasonable time constraints.
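As a sketch, once the gate-crossing times have been extracted, the scoring reduces to something like this (all numbers hypothetical):

```python
# total course time from segment times between mandatory checkpoints;
# a segment with a time budget contributes only its excess over the budget
def course_time(segment_times, budgets):
    total = 0.0
    for t, budget in zip(segment_times, budgets):
        if budget is None:
            total += t                      # fully timed segment
        else:
            total += max(0.0, t - budget)   # only the excess counts
    return total

# e.g. three timed climbs separated by two transfers with 15-minute budgets
print(course_time([600, 1100, 540, 800, 660],
                  [None,  900, None, 900, None]))
```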

The other role of checkpoints is for split timings. Split times are defined as a pair of checkpoints. Split times are the one application of optional checkpoints: if the rider, due to GPS error for example, doesn't cross an optional checkpoint then he isn't ranked for splits including that checkpoint but he's still considered for overall time for the course.

Placing checkpoints is a bit of an art. You want them where there's little ambiguity about the rider's trajectory. For example, you wouldn't want one at the apex of a switchback: GPS errors would wreak havoc with that. Better in a long, straight section of road well separate from other portions of the road.

course profile

One challenging aspect of the route is navigation. The route includes a lot of turns, a lot of opportunity for error. In many cases, particularly Peak Road and adjacent Golden Oak, the road is traveled multiple times in the same or opposite direction. I recommend riders download a course from the Garmin Connect entry I've prepared. But Garmin Edge navigation can have problems with these re-entrant routes. It tries to allow for the scenario where a rider's GPS is turned on during the ride, so doesn't assume the ride began at the route start. I'm worried it will give incorrect directions in this case.

I may need to take extreme measures, like preparing an old-school paper route sheet, or arranging for the course to be marked with chalk. But the best thing is for riders to memorize the hill sequence, then ride in groups to avoid individual errors. There's plenty of time for riders of similar speed to regroup immediately after climbs and for the first to the top to still get to the base of the next climb within the time budget.

Hopefully this goes well. I'm sure there will be some snafus, but the orienteering challenge is just part of the fun.

Wednesday, January 16, 2013

heuristic bike speed formula

Recently I wanted to estimate how long a "typical" cyclist would take to ride a certain course.

You could assume the rider would ride at a constant power, but that's not realistic: we tend to ride at higher power uphill, a bit less on the flats, and low, zero, or even negative power on descents. And on the uphill, there's an optimal grade for power output: if the road gets too steep for a rider's gears or balance, power tends to drop.

So rather than a physically based model, I chose a heuristic one: one which has the correct behavior under the different conditions, and connects them analytically.

I've described the following model here before, where I used it in a formula for rating climbs:

v = vmax / [1 + ln | 1 + exp(50 g) |],
where g is a modified road grade (the sine, rather than the tangent, of the road angle) and vmax is the maximum speed the rider is willing to go on descents. For example, for a brisk rider, I chose vmax = 15 meters/second. For a rider going a relaxed pace, I chose 10 meters/second.

This formula has the behavior that for negative grades (descents) it approaches an asymptotic speed vmax, while for positive grades it approaches an asymptotic rate of vertical ascent vmax / 50. However, this fails at extreme grades: climbing roads of order 30% grade is typically slow due to the difficulties of riding a typical bike up a road that steep, while descending a super-steep road is difficult due to the safety hazard. Actually, I have difficulty controlling my speed at all on descents of 25%, but if my brakes worked better, I'd certainly want to go relatively slow on steeper roads.

So to handle this situation, I want to modify the equation with a factor which rapidly reduces speed further at large positive or negative grades, but which has little effect over the broad range of grades in between. The following modification seemed to work nicely:

v = vmax exp[ −(3 g)⁴ ] / [ 1 + ln | 1 + exp( 50 g ) | ] ,

An issue with this term is it requires good altitude numbers. If there are large point-to-point altitude errors, for example due to using altitudes taken from a map in hilly terrain where small position errors can yield large altitude differences, extremely large grades can result and this formula will have the rider come to a molasses-like stop. So for data where the altitude isn't reliable, the previous version is safer.

I compare these formulas here for different values of vmax: 10 m/sec corresponds to relaxed riding, 15 m/sec a brisk non-racing pace, and 20 m/sec someone who's riding hard, especially up the hills. To adhere to convention, I've converted "effective grade", the sine of the angle which is easier for calculations, to grade, the tangent of the angle which is better suited for surveying.

speed vs grade

The plot shows the formulas with and without the steep grade term. The effect of the steep grade term is clearly evident for descending grades, less than zero. For climbing, the effect is clearest when looking at VAM, which I plot here using the conventional meters/hour, for positive grades only:

VAM vs grade

With this heuristic formula the maximum VAM is at a 14.1% grade. This could be tuned with the various coefficients used, but seems plausible for a rider with wide-range gears (compact crank) which are the best overall choice in known-steep terrain. When descending the maximum speed is at a -10.0% grade. This also seems reasonable: much steeper and I worry about my ability to control the bike.
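These two numbers can be checked with a quick scan of the formula; vmax factors out, so the locations of the extrema don't depend on it:

```python
# locate the grade of maximum descending speed and of maximum VAM for the
# steep-term formula; the scan is over sine-convention grades
import math

def v(g):
    return math.exp(-(3.0 * g)**4) / (1.0 + math.log(1.0 + math.exp(50.0 * g)))

grades = [i / 10000.0 for i in range(-3000, 3001) if i != 0]
g_fastest  = max((g for g in grades if g < 0), key=v)
g_best_vam = max((g for g in grades if g > 0), key=lambda g: v(g) * g)
print(g_fastest, g_best_vam)
# roughly -0.100 and 0.140 in sine convention, consistent with the -10.0%
# and 14.1% conventional (tangent) grades quoted above
```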

One other issue I recently encountered is that the rider's pace on descents didn't depend on how twisting the descent was. The original application was rating hills from a profile for which heading wasn't available, but in this case I was using full map data and I wanted to predict how long a typical rider would take to ride the course. To overcome this limitation, I decided to model the effect of corners as a given delay per unit of heading change. So for a given point joined by two connecting segments, I determine the magnitude of the heading change between the two segments, then calculate the following delay:

Δtturn = (2 seconds) |Δheading| (v / vmax)^2.

I assume turning one radian at vmax delays the rider 2 seconds due to braking and then re-accelerating. For a rider going slower, the delay is less; at uphill speeds, for example, turns cause essentially no delay. Since time is calculated across segments rather than at points, I allocate half of this delay to the preceding segment and half to the following one. The result is that a rider doing a quick 360 from maximum speed would be delayed 12.6 seconds (2 seconds per radian × 2π radians) relative to a rider who just blasted straight ahead at full speed. That seems fairly accurate.
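
Here's a minimal sketch of that cornering penalty (the function and parameter names are mine):

import math

def corner_delay(dheading, v, vmax, seconds_per_radian=2.0):
    # delay in seconds for a heading change of |dheading| radians taken at speed v;
    # half is charged to the segment before the corner and half to the segment after
    return seconds_per_radian * abs(dheading) * (v / vmax) ** 2

print(corner_delay(2.0 * math.pi, v=15.0, vmax=15.0))  # a quick 360 at full speed: ~12.6 s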

I really like heuristic-based modeling in many instances. Rather than focus on developing a model based on fundamental underlying physics, focus on matching the observed large-scale behavior and make sure the model interpolates smoothly between simple cases (in this case, steep climbing and steep descending). The resulting coefficients don't necessarily have direct physical meaning but the resulting model can be simple and effective.

Tuesday, January 15, 2013

Aegis super-slack seat tube and pro rider position data

Previously I described data I'd come across on the saddle positions of various pro cyclists. I plotted those results, showing lateral position of the saddle versus saddle height, each relative to the bottom bracket center, as follows:

fit

The fit corresponded to a 66.4 degree seat angle with an intercept 93.5 mm ahead of the bottom bracket. So if the saddle were set with the sitting position aligned over a zero-setback post, then a slack, forward-displaced seat tube would do the best job of fitting this diverse set of positions.

It was pointed out to me on the WeightWeenies forum that such a bike had been sold. Here it is: the Aegis:

Aegis

I can superpose the data on the bike. First I need to flip the axes, then convert saddle height to vertical position above the bottom bracket. Here's the result:
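
For anyone wanting to redo that overlay, here's a rough sketch of the transform, under my assumption that saddle height is the straight-line distance from the bottom bracket center to the saddle position (the original data may define it slightly differently):

import math

def saddle_xy(setback_mm, height_mm):
    # (x, y) of the saddle relative to the bottom bracket, with x positive forward;
    # setback is the horizontal distance of the saddle behind the BB
    x = -setback_mm
    y = math.sqrt(height_mm**2 - setback_mm**2)  # vertical height above the BB
    return x, y

print(saddle_xy(setback_mm=70.0, height_mm=740.0))  # hypothetical example numbers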

comparison

The match to the numbers I derived previously is remarkable.

Evidently the bike didn't sell. The issue is that it solves a problem individual buyers don't have: no one rider needs a single bike that will fit a broad range of rider sizes with the same seatpost.

Monday, January 14, 2013

Vincenzo Nibali's Specialized bike fit session

CyclingNews published a gallery of photos of the Astana team getting tested for bike position by Specialized. Of particular interest were the position changes given to Vincenzo Nibali, their new star for stage races, who rode for the Cannondale-sponsored team last year. Presumably the superior fit provided by the Specialized crew, led by Andy Pruitt, would improve Nibali's time trialing.

I was curious how the position changed, so I superposed the before and after photos from the article. To align the photos, I rotated and scaled them to match the arm pads.

According to the article, "Nibali's bars were raised so he could try to relax his shoulders and drop his head". To me, however, if anything his "before" shot looks more relaxed, with a lower head and a more aerodynamic position. If on top of this you shift his "after" position up (the bars being higher), the advantage of the before shot appears even greater.

Of course, there's always the possibility the new position produces more power. Or there's the very real possibility the new position isn't well represented by the photo, which is just a moment in time.

Wednesday, January 2, 2013

Raceweight: looking at rider mass data leading to San Bruno Hillclimb

Approaching the San Bruno Hillclimb this year, two riders I know decided they needed to reach race weight. They weighed themselves daily, publishing the results via Twitter. I found the data interesting, so decided to do some analysis.

Here's a plot of their recorded mass in kg versus the day, where I've designated race day, 01 Jan 2013, as day zero. Rider C began his calorie restriction earlier than Rider B, who began soon after the Thanksgiving holiday. Here's the data:

mass versus time

I did a linear regression of their progress over the final 35 days (-34 to 0, inclusive). This includes all of Rider B's data up to and including race day, but excludes the day following. It also excludes an upward blip in mass Rider C experienced during the Thanksgiving holiday (glycogen and water, I suspect). You can see the data generally follow the linear trend: both riders were exemplary in sticking to their diets. Rider B lost mass at approximately 66 grams/day, an impressive 1.02 lb/week, while Rider C was losing an incredible 104 grams/day, or 1.60 lb/week. I've never lost weight this quickly.
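
A sketch of that trend fit in Python, with placeholder data standing in for the actual daily weigh-ins (which I'm not reproducing here):

import numpy as np

np.random.seed(0)
days = np.arange(-34, 1)                      # days relative to race day
# placeholder data shaped roughly like Rider B: ~66 g/day of loss plus daily scatter
mass = 62.7 - 0.066 * days + np.random.normal(0.0, 0.45, days.size)

slope, intercept = np.polyfit(days, mass, 1)  # slope in kg/day
loss_g_per_day = -slope * 1000.0
print("loss rate: %.0f g/day = %.2f lb/week" % (loss_g_per_day, loss_g_per_day * 7.0 / 453.6))

residuals = mass - (slope * days + intercept)
print("residual standard deviation: %.0f g" % (1000.0 * residuals.std(ddof=2)))

With the real data, the only change is to replace the placeholder arrays with the recorded days and masses.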

The key thing to notice, however, is the large variation about the trend. To investigate this, I plot the residuals (actual mass minus the linear fit) for each rider over the period from 34 days before race day through race day. Here's that result:

residual mass versus time

There is a lot of day-to-day scatter in mass relative to the trend. I calculated the standard deviation of these residuals and got 462 grams for Rider B and 430 grams for Rider C. For those mired in imperial units, that's an average of 0.98 lb of one-sigma variation in body mass about the trends. For two randomly selected measurements the RMS difference is sqrt(2) sigma, since the variance of a difference of independent measurements is the sum of their variances; in this case that's around 630 grams. That's huge: around 1% of the mass of these riders.

Body weight is by definition whatever the body weighs. In tracking it, however, one is generally interested in changes in the structural components: fat, muscle, bone, skin. The other components include water, glycogen, stomach and intestine contents, and blood, and day-to-day variation is likely dominated by these (excepting blood, perhaps). If you were to carefully control salt intake, carbohydrate intake, exercise, caffeine, water, etc., then perhaps the variation in water and glycogen could be minimized (glycogen binds with water, so it is effectively quite heavy). But few of us are willing to do these things.

The effect of this variation is that single-day weigh-ins are virtually meaningless. Consider Rider B:

day kg
-16 62.7
-4  62.8

If he'd weighed himself 16 days before the race, then again 4 days before the race, a 12-day interval, he'd conclude he hadn't lost anything; indeed, he'd apparently gained 100 grams.

On the other hand, suppose he'd taken the following spot mass checks:

day kg
-19 63.7
-6  62.1

This is a similar interval, spanning 13 days, from 19 days before the race to 6 days before, and here it appears he lost 1.6 kg during that time: more than 120 grams per day.

One can find similar examples with Rider C: pick a relatively low earlier day and a relatively high later day and you could substantially underestimate the rate of mass loss.

The moral? If you're trying to track structural mass loss, you need to weigh yourself often, for example every day, and then track trends, rather than compare individual measurements. Even if your structural mass hasn't changed, two individual measurements will typically differ from each other by around 1% of total body mass.

One obvious question, though, is how well these riders did in the race. They were losing the weight to climb as fast as possible, after all. While both did well in absolute terms, I think each wanted to do a bit better than that. In a climb of this duration, 15-17 minutes, anaerobic power can be up to 10% of the total, assuming the critical power model. So if you go into the race with your anaerobic reserves 10% depleted, that's around 1% of total power squandered, and for these riders 1% of power offsets close to 1 kg of extra mass (since wind resistance also plays a role, power requirements aren't strictly proportional to mass). So while the weight loss helped, I'd have been tempted to end the calorie deficit maybe five days before the race to make sure I was fully loaded and ready to go on race day.
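
To put rough numbers on that last argument, here's a back-of-the-envelope sketch; the critical power, anaerobic work capacity, total mass, and mass-proportional share of power are all assumed values for illustration, not the riders' actual numbers:

cp_watts = 250.0           # assumed critical power
w_prime_joules = 25000.0   # assumed anaerobic work capacity (W')
t_seconds = 16.0 * 60.0    # roughly a 16 minute climb

total_power = cp_watts + w_prime_joules / t_seconds
anaerobic_fraction = (w_prime_joules / t_seconds) / total_power
print("anaerobic share of power: %.1f%%" % (100.0 * anaerobic_fraction))

# starting the race with W' 10% depleted costs 10% of that share
power_penalty = 0.10 * anaerobic_fraction

mass_total_kg = 72.0        # assumed rider + bike + kit
mass_share_of_power = 0.85  # assumed fraction of power that scales with total mass on this climb
equivalent_extra_mass = power_penalty / mass_share_of_power * mass_total_kg
print("equivalent extra mass: %.1f kg" % equivalent_extra_mass)

With these assumptions the penalty works out to a bit under a kilogram, which is in line with the estimate above.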