power model summary

introduction

I spent two weeks analyzing and fitting power-duration curves, inspired by a Veloclinic post on an analytic form for an energy source with both rate and capacity limits (I don't know why I found this so enlightening, as you get the same equation from simple resistor-capacitor networks in electrical engineering, but I had never made that bridge). Anyway, Veloclinic has continued to move ahead with feedback models, but he remains committed to a model where the aerobic energy system has fixed capacity. I have stuck with more heuristic approaches where the aerobic system has time-dependent capacity. I still have a limited anaerobic capacity, but that's because attempts to replenish the anaerobic system over time would become hopelessly entangled with the aerobic power component. So simplify.

I used two components with some success: an aerobic component and an anaerobic component. This is all the CP model uses, and it oversimplifies each. Yet despite this it is the most commonly used model. Simplicity yields popularity. I tried adding a third component to better match "sprint" power in the 0-10 second range, but I gave up on this since the additional coefficients for this time range resulted in numerical instabilities. So I'll say the model is limited in its ability to match results in this range.

The key improvement in my approach over what I've seen elsewhere is the use of an envelope fit. The envelope fit says measured points are never the best possible effort at a given duration, but rather represent imperfect efforts, and the goal is to find the power which could be produced with a perfect, or at least near-perfect, effort. So I encourage the fitting algorithm to fit the peaks of the power-duration data rather than to cut through the center, as conventional least-squares fitting does. I did this in GoldenCheetah by searching for two "quality points" to do the CP model fit.

But for more complex models this doesn't work as well. So I worked out an iteratively reweighted least-squares fitting algorithm, where points falling above the model curve were assigned a weight of 10 thousand, while points falling under the model curve were assigned a weight close to 1 (exactly 1 if the spacing to the surrounding points is 2.5% in time). This worked extremely well.
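To make this concrete, here's a minimal sketch of the reweighting rule (the helper name and variables here are illustrative, not from my actual fitting code): points above the current model curve get the large envelope weight, everything else keeps its base weight.

      # envelope reweighting sketch: given array refs of measured powers,
      # modeled powers, and base weights, return the effective weights
      # for the next fitting pass
      sub envelope_weights {
          my ($power, $model, $w0, $envelope) = @_;
          $envelope //= 1e4;    # large weight for points above the curve
          return [ map {
              ($model->[$_] < $power->[$_]) ? $envelope : $w0->[$_]
          } 0 .. $#$power ];
      }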

For the CP model a linear fit can be done on work versus time. However, since I'm interested in fitting power, not work, I need to additionally weight this fit by a factor of 1/time. This more strongly weights points at short duration, which have lower values of work but higher values of power.
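As a sketch of that weighted linear fit (again my own helper, not the original code), with the 1/time weighting folded in:

      # CP-model fit sketch: weighted linear least squares of work
      # W = CP * t + AWC, with each base weight divided by t so the
      # fit is effectively on power rather than work
      sub cp_fit {
          my ($time, $power, $w0) = @_;
          my ($sw, $swt, $swW, $swtt, $swtW) = (0) x 5;
          for my $n (0 .. $#$time) {
              my $t = $time->[$n];
              my $W = $power->[$n] * $t;    # work = power * time
              my $w = $w0->[$n] / $t;       # 1/time weighting
              $sw   += $w;
              $swt  += $w * $t;
              $swW  += $w * $W;
              $swtt += $w * $t * $t;
              $swtW += $w * $t * $W;
          }
          my $cp  = ($sw * $swtW - $swt * $swW)
                  / ($sw * $swtt - $swt * $swt);
          my $awc = ($swW - $cp * $swt) / $sw;
          return ($cp, $awc);    # slope = CP, intercept = AWC
      }

The envelope iteration wraps around this: fit, reweight with envelope_weights, and repeat until the coefficients stop changing.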

For the Veloclinic-based model linear fits won't work, so I used a weighted nonlinear least-squares fit. Here I fit power directly, not work. But nonlinear least-squares fits are done iteratively and require good initial guesses, so I used the CP model to generate them.

fitting procedure

  1. Decimate the original data: go through and prune every point falling closer than 2.5% in time to the previously kept point (see the sketch after this list). This speeds up the numerics enormously with negligible effect on the result.
  2. Assign base weighting values to the points. For point n at time t_n, preceded by point n-1 at time t_(n-1) and followed by point n+1 at time t_(n+1), the weighting factor is ln(t_(n+1) / t_(n-1)) / (2 ln 1.025). For the first point the weighting factor is ln(t_(n+1) / t_n) / ln 1.025; for the last point it is ln(t_n / t_(n-1)) / ln 1.025.
  3. Do a CP fit:
    1. Divide the weighting factor, for the CP fit only, by t_n.
    2. Calculate the mechanical work for each point, W_n = P_n t_n, where P_n is the measured power.
    3. Initialize a list of effective weights to the base weighting factors already described.
    4. Iterate until the result stops changing or until an iteration limit is reached:
      1. Do a weighted least-squares fit of W_n versus t_n; the fitting coefficients are CP (the slope) and AWC (the intercept).
      2. Update the effective weighting factors w_n: the base weighting factor if the prediction is at or above P_n, or some much larger value (for example 10 thousand) if the point falls above the prediction.
  4. From the CP-fitted values derive initial guesses for the nonlinear model:
    • P_2 = CP
    • τ_2 = 25 thousand seconds
    • α_2 = 0.5
    • P_1 = the 1-second power (or the power at the shortest available time)
    • τ_1 = AWC / (2 P_1)
  5. Initialize a list of effective weights to the base weighting factors already described (these have not been divided by t_n, only adjusted for the time interval).
  6. Initialize a set of damping terms δ_n = 1, one per model parameter.
  7. Iterate until the result stops changing or until an iteration limit is reached (for example, 256 iterations):
    1. Construct a Jacobian matrix of derivatives for a function f of the parameters P_1, τ_1, P_2, τ_2, and α_2, where the derivatives are taken with respect to the logarithm of the parameters rather than the parameters directly. Rather than put the formatted equations here, I'll post my Perl code after this summary.
    2. Calculate a list of modeled powers f_n = P_1 (τ_1 / t_n) (1 - exp[-t_n / τ_1]) + P_2 / (1 + t_n / (α_2 τ_2))^α_2.
    3. Construct a list of residuals R_n = P_n - f_n.
    4. Calculate the matrix J^T W J, where J^T is the transpose of the Jacobian matrix, W is the weighting matrix (diagonal, with elements w_n, the effective weighting terms), and J is the Jacobian matrix itself.
    5. Invert this matrix: (J^T W J)^-1.
    6. Multiply by J^T W R to yield the correction vector for the vector of the log of the parameters: Δ = (J^T W J)^-1 J^T W R. (A sketch of this solve-and-update step follows the Jacobian code below.)
    7. If the number of iterations exceeds some number, for example 32, then reduce the damping terms using δ_n → δ_n^η δ_min^(1-η), where η controls the rate of damping adjustment and is for example 0.98, and δ_min is the asymptotic damping factor, for example 0.5.
    8. Create an effective set of damping values δ'_n = δ_n / (1 + |Δ|), where |Δ| is the magnitude of the unitless vector Δ.
    9. Multiply each parameter by exp[δ'_n Δ_n], where n is the index associated with the model parameter, to get revised parameter values.
  8. The fit is done! Be happy and reap the benefits of a well-fitted power-duration curve.
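Here is a sketch of steps 1, 2, and 4 (decimation, base weights, and initial guesses). The helper names are mine and the edge handling is simplified; it assumes at least two points survive decimation.

      use constant SPACING => 1.025;    # 2.5% minimum time spacing

      # step 1: prune any point falling closer than 2.5% in time
      # to the previously kept point
      sub decimate {
          my ($time, $power) = @_;
          my (@t, @p);
          for my $n (0 .. $#$time) {
              next if @t && $time->[$n] < $t[-1] * SPACING;
              push @t, $time->[$n];
              push @p, $power->[$n];
          }
          return (\@t, \@p);
      }

      # step 2: base weights from the log spacing of the surrounding
      # points, normalized so 2.5% spacing on both sides gives weight 1
      sub base_weights {
          my ($time) = @_;
          my $N = $#$time;
          my @w0;
          for my $n (0 .. $N) {
              if ($n == 0) {
                  push @w0, log($time->[1] / $time->[0]) / log(SPACING);
              } elsif ($n == $N) {
                  push @w0, log($time->[$N] / $time->[$N - 1]) / log(SPACING);
              } else {
                  push @w0, log($time->[$n + 1] / $time->[$n - 1])
                          / (2 * log(SPACING));
              }
          }
          return \@w0;
      }

      # step 4: initial guesses for the nonlinear fit from the CP fit,
      # where $p_short is the power at the shortest duration
      sub initial_guesses {
          my ($cp, $awc, $p_short) = @_;
          return (
              $p_short,                 # P1
              $awc / (2 * $p_short),    # tau1
              $cp,                      # P2
              25e3,                     # tau2, seconds
              0.5,                      # alpha2
          );
      }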



Jacobian

The Perl code for calculating a row of the Jacobian matrix is as follows, where $n is the index of the point. A hash contains flags for whether each component should be included, allowing subsets of the complete model to be solved. This code also calculates the weighting factors, the "Veloclinic" power, and the components of that power:

      # calc some common values
      my $t  = $time->[$n];
      my $p  = $pmax->[$n];
      my $r1 = $t / $tau1;
      my $r2 = $t / ($alpha2 * $tau2);
      my $e1 = exp(-$r1);
      my $z1 = (1 - $e1) / $r1;
      my $z2 = 1 / (1 + $r2) ** $alpha2;

      # calculate VC power
      my $pvc1 = $p1 * $z1;
      my $pvc2 = $p2 * $z2;
      my $pvc  = $pvc1 + $pvc2;

      push @pvc1, $pvc1;
      push @pvc2, $pvc2;
      push @pvc, $pvc;

      # weight: points above the model curve get the large envelope
      # weight, pulling the fit up toward the peaks of the data
      push @w, my $w = (($pvc < $p) ? $weightFactor : $w0->[$n]);

      # derivatives:
      # solve for the log of the parameters, rather than
      # the parameters themselves, to avoid negative values
      # d ln x = dx / x => dx = x dln x => d f / d ln x = x df / dx

      my @r =
        (
         # P1 derivative:
         $parameter_hash{p1} ?
         $p1 * $z1 :
         (),

         # tau1 derivative:
         $parameter_hash{tau1} ?
         $p1 * ($z1 - $e1) :
         (),

         # P2 derivative
         $parameter_hash{p2} ?
         $p2 * $z2 :
         (),

         # tau2 derivative
         $parameter_hash{tau2} ?
         $p2 * ($alpha2 * $r2) / (1 + $r2) ** ($alpha2 + 1) :
         (),

         # alpha2 derivative
         $parameter_hash{alpha2} ?
         $alpha2 * $p2 * ( $r2 / (1 + $r2) - log(1 + $r2) ) * $z2 :
         (),
        );

      # weighted row
      my @rw = map { $_ * $w } @r;

      # Jacobian matrix:
      push @J, \@r;

      # weighted Jacobian:
      push @Jw, \@rw;

      # residual
      push @R,
        [
         $pmax->[$n] - $pvc,
        ];
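To close the loop, here is a sketch of the solve-and-update step (my own helper, not the original solver). It consumes the @J, @w, and @R built above (recall the rows of @R are one-element column vectors), forms the normal equations, solves the small system by Gaussian elimination with partial pivoting, and applies the damped correction to the log of the parameters. The gradual reduction of the damping terms over iterations is left to the caller.

      # one Gauss-Newton step in log-parameter space:
      # solve (J^T W J) dx = J^T W R, then multiply each
      # parameter by exp(delta'_k * dx_k)
      sub gauss_newton_step {
          my ($J, $w, $R, $params, $delta) = @_;
          my $m = scalar @$params;
          my (@A, @b);
          for my $i (0 .. $m - 1) {
              $b[$i] = 0;
              $A[$i][$_] = 0 for 0 .. $m - 1;
          }

          # normal equations: A = J^T W J, b = J^T W R
          for my $n (0 .. $#$J) {
              my $row = $J->[$n];
              my $wr  = $w->[$n] * $R->[$n][0];
              for my $i (0 .. $m - 1) {
                  $b[$i] += $row->[$i] * $wr;
                  $A[$i][$_] += $w->[$n] * $row->[$i] * $row->[$_]
                      for 0 .. $m - 1;
              }
          }

          # Gaussian elimination with partial pivoting
          for my $k (0 .. $m - 1) {
              my $piv = $k;
              for my $i ($k + 1 .. $m - 1) {
                  $piv = $i if abs($A[$i][$k]) > abs($A[$piv][$k]);
              }
              @A[$k, $piv] = @A[$piv, $k];
              @b[$k, $piv] = @b[$piv, $k];
              for my $i ($k + 1 .. $m - 1) {
                  my $f = $A[$i][$k] / $A[$k][$k];
                  $b[$i] -= $f * $b[$k];
                  $A[$i][$_] -= $f * $A[$k][$_] for $k .. $m - 1;
              }
          }
          my @dx;
          for my $k (reverse 0 .. $m - 1) {
              my $s = $b[$k];
              $s -= $A[$k][$_] * $dx[$_] for $k + 1 .. $m - 1;
              $dx[$k] = $s / $A[$k][$k];
          }

          # damped update of the log of the parameters
          my $mag = 0;
          $mag += $_ ** 2 for @dx;
          $mag = sqrt($mag);
          for my $k (0 .. $m - 1) {
              my $dk = $delta->[$k] / (1 + $mag);    # effective damping
              $params->[$k] *= exp($dk * $dx[$k]);
          }
          return $params;
      }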

discussion

So why do this fit? Two reasons: one is to interpolate the attainable power at durations for which quality efforts do not yet exist. The other is to have a quantifiable measure of fitness. The latter goal will always be a stretch, but if the data are of sufficient quality (good efforts at a sufficient number of sufficiently spaced durations) the interpretation might be:

parameter   description
P_1         peak anaerobic power
τ_1         sustainability of anaerobic power
P_2         peak aerobic power
τ_2         sustainability of peak aerobic power
α_2         rate of power loss in the endurance limit


where the power at time t is predicted to be:

P(t) = P_1 (τ_1 / t) (1 - exp[-t / τ_1]) + P_2 / (1 + t / (α_2 τ_2))^α_2
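In code, that prediction is a one-liner (a sketch; it is the same expression evaluated in the Jacobian code above):

      # predicted power at time t for parameters (P1, tau1, P2, tau2, alpha2)
      sub model_power {
          my ($t, $p1, $tau1, $p2, $tau2, $alpha2) = @_;
          my $r1 = $t / $tau1;
          my $r2 = $t / ($alpha2 * $tau2);
          return $p1 * (1 - exp(-$r1)) / $r1
               + $p2 / (1 + $r2) ** $alpha2;
      }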

So what next? It's tempting to say implement this in GoldenCheetah, but it seems Damien has that ground already well covered with his model.

