Customization of cycling heuristic speed formula

Recently I've been playing with a heuristic power-speed formula for cyclists on mixed terrain. I originally developed this for time-domain smoothing of hill profiles for rating the climbs: converting from altitude and distance to time requires a model for speed versus grade. It was designed based on two philosophies: that on descents a rider approaches a maximum safe speed, going no faster, and on climbs the rider approaches a maximum sustainable rate of climbing. Between these extremes I wanted it to be analytic (continuous in all derivatives). Here was the formula:

v = vmax / [ 1 + ln( 1 + exp( 50 g ) ) ] .

Here g is the sine of the road inclination angle and vmax is the maximum safe descending speed. It worked great for this purpose. However, my interest in the model deepened when I wanted to apply it to predictions for riding times on the Portola Valley Hills course of the 2013 Low-Key Hillclimbs to establish checkpoint cut-off times. So for that I added two enhancements: a time penalty for turning at near the maximum safe descending speed, and a speed reduction term for climbing or descending at extremely steep grades.
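As a sketch of how this base formula behaves, here's a small Python version (the 20 m/s value for vmax is an assumed example, not from the post):

```python
import math

def speed(g, vmax=20.0):
    """Heuristic speed (m/s) versus grade g (sine of the road
    inclination angle). vmax is the maximum safe descending speed;
    20 m/s is only an assumed example value."""
    # On descents (g << 0) the log term vanishes and v -> vmax;
    # on climbs (g >> 0) the denominator grows like 50 g, so the
    # climbing rate v * g approaches the constant vmax / 50.
    return vmax / (1.0 + math.log(1.0 + math.exp(50.0 * g)))
```

On flat ground this gives vmax / (1 + ln 2), about 59% of vmax, matching the implicit assumption discussed later in the post.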

For the steep hill portion, I added an exponential term which kicks in extremely strongly at steep grades:

v = vmax exp( −3 g⁴ ) / [ 1 + ln( 1 + exp( 50 g ) ) ] .

This steep hill penalty is realistic only if the grade data are realistic. Best would be to use data from an iBike or similar, which has an inclinometer and thus measures grade directly. But lacking that, it's important to smooth altitude data with respect to distance first: this reduces the problem of a 1-meter flip in measured altitude causing a huge spike in the extracted grade between two closely-spaced points. I used a biexponential smoothing function (application of an exponential filter in each direction, forward and reverse: more details here) with reference length 50 meters. This is a good distance anyway because it's characteristic of inertia, and I don't model inertia here (I could, but I'm lazy). The smoothing function does a decent job of it, though.
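A minimal sketch of that kind of two-pass exponential smoothing, assuming sequential forward and reverse passes over (distance, altitude) samples (the exact discretization the author used may differ):

```python
import math

def biexp_smooth(distance, altitude, ref_length=50.0):
    """Zero-lag smoothing of altitude vs distance: an exponential
    filter applied forward, then again in reverse, with reference
    length ref_length (meters). A sketch; details may differ from
    the original implementation."""
    def exp_pass(d, z):
        out = [z[0]]
        for i in range(1, len(z)):
            # filter weight decays with the spacing between samples
            w = math.exp(-abs(d[i] - d[i - 1]) / ref_length)
            out.append(w * out[-1] + (1.0 - w) * z[i])
        return out

    forward = exp_pass(distance, altitude)
    # reverse pass cancels the distance lag of the forward pass
    return exp_pass(distance[::-1], forward[::-1])[::-1]
```

A 10-meter spike in a flat profile gets spread over roughly the reference length instead of producing two huge grade spikes.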

For the turning part, I added a time penalty, split evenly between the interval preceding and the interval following each point, equal to a time per radian (Δt_radian) of turning multiplied by the heading difference in radians. I'll make a slight change to that here: instead of a fixed time delay I'll assume a fixed distance delay. With this correction faster and slower riders will each be fractionally affected the same by the same corner:

Δt_turn = ( Δs_radian / vmax ) ( v / vmax )² |Δθ| ,

where θ is the heading. Some care must be taken to avoid the heading flipping by 2π: for |Δθ| > π I take 2π − |Δθ| instead.

I used Δs_radian = 40 meters. This means at 20 meters/second (72 kph) a rider going at his maximum safe speed approaching a one-radian (57-degree) corner is delayed 2 seconds, half decelerating into the corner, half accelerating out.
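In code, the turn penalty with the heading-wrap correction might look like this (the 20 m/s value for vmax is an assumed example):

```python
import math

def turn_delay(dtheta, v, vmax=20.0, ds_radian=40.0):
    """Time penalty (seconds) for a heading change of dtheta radians
    taken at speed v (m/s): (ds_radian / vmax) (v / vmax)^2 |dtheta|,
    with the heading difference wrapped into [0, pi]."""
    dtheta = abs(dtheta) % (2.0 * math.pi)
    if dtheta > math.pi:
        # a jump of, e.g., 350 degrees is really a 10-degree turn
        dtheta = 2.0 * math.pi - dtheta
    return (ds_radian / vmax) * (v / vmax) ** 2 * dtheta
```

At half of vmax the same corner costs only a quarter of the time, per the speed-squared scaling.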

This equation requires some justification. First, it's linear in Δθ, independent of the tightness of the turn. This is over-simplistic, but with typical GPS precision and sampling rates I lack adequate resolution on the details of the corner to do better. Second, it's in terms of time rather than velocity. That's also because of variability in sampling rates: the same rider going through the same corner should take the same net time whether it's with Garmin 1-second sampling, Strava-app 3-second sampling, or Garmin "smart sampling," which could be up to 8 seconds. So I focus on time. Third, it's scaled proportional to the square of the ratio of speed to vmax. This is because slowing in corners is motivated by safety, and only if a rider is going close to the limit of safety will a maximal deceleration be necessary: a speed-squared dependence seemed about right to me. Then there's the distance cost per radian of turning. This is obviously rider-specific: some riders corner faster than others, but I assume the time cost is generally inversely proportional to vmax, so specifying a distance cost is more general than a time cost per radian.

This was fine, but it had too few fitting parameters to fit actual ride data. The 50 in the exponent is based on the ad hoc assumption that the theoretical peak rate of altitude gain on climbs is 2% of the peak safe speed on straight descents. Additionally there is the implicit assumption that the speed on flat, straight roads = vmax / (1 + ln 2) ≈ 59% of vmax. Each of these assumptions is plausible, but there's no reason to believe they're universal or even representative of any particular cyclist subpopulation.

So a more general model would recognize that rider speed versus terrain is characterized by several parameters which describe a typical rider's ability and behavior in the context of a particular ride. First, there's the maximum sustainable speed on straight descents, which I already included. Second, there's the maximum theoretical climbing rate (in the absence of any rolling resistance and with a tail wind matching speed). Third, there's the sustainable rate on flat straight roads. There's also the obvious parameters of the time cost of turning, the exponent of how that varies with speed, and the coefficient on the exponential term for super-steep roads. I'll resist adding those, though, to limit the number of fitting terms.

Here's the more generalized model, then, considering the straight-road speed to which the turning correction must still be applied:

v = vmax exp( −3 g⁴ ) / [ 1 + ln( 1 + K_v0 exp( K_VAM g ) ) ] ,

where K_VAM = vmax / VAM_max and K_v0 = exp( vmax / v0 − 1 ) − 1; VAM_max is the maximum sustainable rate of vertical ascent, v0 is the sustainable speed on flat, straight roads, and vmax is, as before, the maximum safe descending speed. Note that at g = 0 the formula reduces to v = v0, as it should.
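Putting the generalized model together (the parameter values here, vmax = 20 m/s, v0 = 11 m/s, VAM_max = 0.45 m/s, are assumed examples for illustration, not fitted values):

```python
import math

def speed_general(g, vmax=20.0, v0=11.0, vam_max=0.45):
    """Generalized speed-vs-grade model. vmax: max safe descending
    speed; v0: sustainable flat-road speed; vam_max: max theoretical
    climbing rate (vertical m/s). All defaults are assumed examples."""
    k_vam = vmax / vam_max
    # chosen so that at g = 0 the formula returns exactly v0
    k_v0 = math.exp(vmax / v0 - 1.0) - 1.0
    return (vmax * math.exp(-3.0 * g ** 4)
            / (1.0 + math.log(1.0 + k_v0 * math.exp(k_vam * g))))
```

On steep climbs the vertical rate v·g approaches, but never exceeds, vam_max, while on descents v approaches vmax until the exp(−3 g⁴) term kicks in.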

There's an additional problem: it assumes riders always ride at the same relative pace. Low-Key running legend and accomplished masters ultrarunner Gary Gellin has shown nicely that top ultrarunners generally run "positive splits" on multi-lap courses, running a second lap slower than the first, even with improving conditions. Analysis of Race Across America results shows the winner virtually always slows as the race progresses. These are just a few examples. The reality is riders have a tendency to slow as a long ride goes on: I know this applies to me. The primary exception is probably tactical races. So if the goal is to create realistic rider schedules, the fatigue factor needs to be considered.

The simplest approach to this is to assume power-related speeds decay exponentially with time. But if power fades at a certain rate, climbing speed will fall relatively faster than flat-land speed, since wind resistance increases rapidly with speed while climbing power increases in proportion to speed. I'll assume flat speed decays at half the rate of climbing speed. The maximum safe descending speed should remain unchanged as long as the ride remains in daylight. The rate of decay is another fitting parameter. So, for example:

v0 = v0,init exp( −r_fatigue t / 2 )
VAM_max = VAM_max,init exp( −r_fatigue t )

r_fatigue is approximately the rate of power reduction per unit time of riding. For example, in the Terrible Two example I'll describe next time, 2%/hour seemed to work fairly well, which corresponds to a drop in power to about 82% over 10 hours.
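As a quick check of that arithmetic, a sketch of the fatigue decay (the initial values v0 = 11 m/s and VAM_max = 0.45 m/s are assumed examples):

```python
import math

def faded_params(t_hours, v0_init=11.0, vam_init=0.45, r_fatigue=0.02):
    """Apply exponential fatigue decay: flat speed v0 fades at half
    the rate of the climbing rate VAM_max; vmax is left unchanged.
    r_fatigue is the fractional power loss per hour (2%/hour here)."""
    v0 = v0_init * math.exp(-r_fatigue * t_hours / 2.0)
    vam_max = vam_init * math.exp(-r_fatigue * t_hours)
    return v0, vam_max
```

At t = 10 hours with 2%/hour, VAM_max falls to exp(−0.2) ≈ 82% of its initial value, while flat speed falls only to about 90%.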

I'll look further at calibrating this model to Terrible Two data next.
