Friday, January 31, 2014

adding dynamic damping to nonlinear least square fitting of modified Veloclinic power-duration model

Previously I used a damping factor δ for the nonlinear fitting of the modified Veloclinic power-duration model. This resulted in me fitting 28 out of 29 test cases when I set δ = 0.25.

But δ = 0.25 is too small: it means at best 25% of the progress toward the solution is covered with each iteration. That's inefficient when homing in on the result.

The real issue is when the solver isn't homing in on the result: when the solution is still substantially off and the solver is attempting to make a big leap toward it. The big leap could easily overshoot the desired solution, or be in a slightly wrong direction in the hyperdimensional parameter space. In such instances, it's better to take smaller steps toward the solution, to home in on the "zone of quadratic convergence" where the targeting becomes easier.

For that I introduced a dynamic damping scheme, multiplying the primary damping factor by a factor dependent on the size of the proposed parameter step. Since I am solving for the logarithm of the parameters rather than the parameters themselves, what constitutes a "large step" is the same for each one: magnitude 1 is a large step, much less than magnitude 1 is a small step, and much larger than 1 is huge.

So what I decided to do was to use a damping scheme which restricts the maximum damped step size to 1:

δ = δ0 / (1 + |net undamped step size|)

where δ0 is the damping factor applied to small steps and the step size is the square root of the sum of the squares of the undamped changes in the natural logarithms of each parameter.
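
To make this concrete, here's a minimal Perl sketch of the dynamic damping calculation; the function and variable names are illustrative, not taken from my actual fitting code.

use strict;
use warnings;
use List::Util qw(sum);

# dynamic damping factor from the undamped step in log-parameter space
# @dln holds the proposed (undamped) changes in ln P1, ln tau1, ln P2, ln tau2
sub dynamic_damping {
  my ($delta0, @dln) = @_;
  my $step = sqrt(sum(map { $_ * $_ } @dln));   # net undamped step size
  return $delta0 / (1 + $step);                 # shrinks the damping for large steps
}

# a modest step keeps delta near delta0; a huge step gets heavily damped
print dynamic_damping(1.0, 0.1, 0.05, -0.02, 0.01), "\n";   # about 0.90
print dynamic_damping(1.0, 3.0, -2.0, 1.0, 4.0), "\n";      # about 0.15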

With the damping set to 0.25, I'd fit 28 of my 29 datasets. With the dynamic scheme and δ0 = 1, I fit all 29, and in general the solution was much quicker.

Another type of damping is to reduce the coefficient (more damping) if the number of iterations exceeds a certain value. For example, after 32 iterations without convergence to a solution, I can reduce the coefficient using the following formula:

δ → δ^η δf^(1−η)

This causes δ to transition gradually from its original value (for example, 1) toward a final value (for example, δf = 0.25) once 32 iterations have passed. I used η = 0.95, which limits the rate at which δ is reduced per iteration. This may only be needed when the data are relatively poor and don't fit the model well. The problem can also be reduced by reducing the weighting factor.
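
Here's a sketch of that reduction step, with the same caveat that the names and the loop around it are just for illustration:

use strict;
use warnings;

# blend delta geometrically toward delta_f once the iteration count passes a threshold
sub reduce_damping {
  my ($delta, $delta_f, $eta) = @_;
  return ($delta ** $eta) * ($delta_f ** (1 - $eta));
}

my $delta = 1.0;
for my $iter (1 .. 64) {
  # ... the normal fitting update would go here ...
  $delta = reduce_damping($delta, 0.25, 0.95) if $iter > 32;
}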

Thursday, January 30, 2014

envelope fits of modified Veloclinic model using nonlinear least squares with iterative weighting

I've described everything needed to implement the fitting algorithm to the Veloclinic model. All that remained was to tune the parameters.

I started with a CP-model fit, yielding parameters CP and AWC. I used these to generate initial guesses for the VC model parameters: P2 = CP, τ2 = 24000 seconds (an arbitrary value), P1 = (1-second power) − CP, and τ1 = AWC / P1. It's important with nonlinear fits to have a good initial guess, and this seems to get close enough for the algorithm to find its way.
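
Here's roughly what that initial-guess logic looks like in Perl; the variable names and the example numbers are mine, purely for illustration, and assume the CP fit has already produced CP, AWC, and the 1-second maximal power.

use strict;
use warnings;

# seed the Veloclinic parameters from a prior CP-model fit
my ($CP, $AWC, $p1sec) = (270, 20000, 900);   # example values: watts, joules, watts

my $P2   = $CP;             # aerobic power plateau starts at CP
my $tau2 = 24000;           # seconds; arbitrary but workable starting value
my $P1   = $p1sec - $CP;    # anaerobic component makes up the rest of the 1-second power
my $tau1 = $AWC / $P1;      # so that P1 * tau1 reproduces the anaerobic work capacity

printf "P1 = %.0f W, tau1 = %.1f s, P2 = %.0f W, tau2 = %.0f s\n",
       $P1, $tau1, $P2, $tau2;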

Then there's the question of the damping factor and the weighting factor. A damping factor of 0.5 works fairly well, but I relaxed it to 0.25, which results in slower convergence but helped in one difficult case. Then there's the weighting factor: 10 thousand yields a nice envelope fit, but I relaxed it to 1000 because this also helped the fit converge.

Here are results generated from 28 datasets I had lying around; in one dataset the algorithm failed to converge. I can improve things by tuning the fitting further, for example by applying adaptive or nonlinear damping. But this gives a good idea of how well it works:

animation

One thing which is clear is the model almost universally overpredicts sprint power. I'm not going to try to explain why that is, but simply observe that the two-component form of the model is very simplistic.

Veloclinic continues to work on his modeling, for example as described here.

Wednesday, January 29, 2014

Parameter choice for optimal model fitting

I implemented the nonlinear least-square fitting scheme I described, and it didn't work. That is, it didn't work until I fixed it.

The initial problem was the parameter choice. This is a classic problem with iterative fitting schemes: parameters which may span a broad range of values, yet which are restricted to being strictly positive (or, analogously, strictly negative), are often best not fit directly. Instead it may be smarter to fit their logarithm, then take the exponential of the result to restore the original parameter. Exponentials are strictly positive, so the logarithm maps the positive numbers onto the full range of real numbers, which makes the fit more robust. Additionally, the logarithm increases the significance of differences between small values, decreasing the significance of the same absolute difference between large values. This is typically what is wanted: the difference between τ = 1 and τ = 2 is probably more significant than the difference between τ = 24000 and τ = 24001.

Fortunately, I don't need to do any complicated calculus to change my parameters to their logarithm. I instead use the following simple transformation:

∂ f / ∂ ln x = x ∂ f / ∂ x

I thus in one easy step change my Jacobian matrix to use the following derivatives, which also happen to be simpler than the originals, which is almost always a sign that I'm on the correct path:

∂ f / ∂ ln P1 = P1 [τ1 / t] [1 − exp(−t / τ1)]

∂ f / ∂ ln P2 = P2 [τ2 / t]^(1/2) [1 − exp(−[t / τ2]^(1/2))]

∂ f / ∂ ln τ1 = P1 [ (τ1 / t) (1 − exp[−t / τ1]) − exp(−t / τ1) ]

∂ f / ∂ ln τ2 = (1/2) P2 [ (τ2 / t)^(1/2) (1 − exp[−(t / τ2)^(1/2)]) − exp(−[t / τ2]^(1/2)) ]

Then, when I get the Δ values from the solution of the linear algebra problem, instead of adding these to the previous values, I do the following:

P1 → P1 exp( Δ ln P1 )
τ1 → τ1 exp( Δ ln τ1 )
P2 → P2 exp( Δ ln P2 )
τ2 → τ2 exp( Δ ln τ2 )

To improve numerical stability further, I can add a damping factor δ:

P1 → P1 exp( δ Δ ln P1 )
τ1 → τ1 exp( δ Δ ln τ1 )
P2 → P2 exp( δ Δ ln P2 )
τ2 → τ2 exp( δ Δ ln τ2 )

Here δ is between 0 (exclusive) and 1 (inclusive), where 1 signifies no damping. The smaller the value, the slower the optimal convergence rate, but the more robust the process. I chose δ = 0.5 initially, but more on that later.
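
In code the damped update is just a multiplicative scaling of each parameter. Here's a minimal Perl sketch, with illustrative names and made-up numbers standing in for the Δ ln values returned by the linear solve:

use strict;
use warnings;

# scale each parameter by exp(delta * d_ln), i.e. a damped step in log space
sub apply_damped_step {
  my ($params, $steps, $delta) = @_;    # hash refs of parameters and of delta-ln values
  $params->{$_} *= exp($delta * $steps->{$_}) for keys %$steps;
}

my %p   = (P1 => 500, tau1 => 25, P2 => 280, tau2 => 24000);
my %dln = (P1 => 0.02, tau1 => -0.05, P2 => 0.01, tau2 => 0.10);
apply_damped_step(\%p, \%dln, 0.5);
printf "P1 = %.1f, tau1 = %.2f, P2 = %.1f, tau2 = %.0f\n", @p{qw(P1 tau1 P2 tau2)};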

With these changes, I have had good success with the method. I'll show that next.

Tuesday, January 28, 2014

fun with Perl Math::Matrix

Wanting to do some matrix math, I installed the Math::Matrix module from CPAN. There are some nice instructions on installing CPAN modules on about.com.

A common tool people use for matrix manipulation is MATLAB. I'm deeply embarrassed I have virtually no skills in MATLAB. However, this module is pretty cool.

What makes it so nice is that it overloads the basic mathematical operators, which makes it easy to add, subtract, and multiply matrices, and it also provides a string conversion for when a matrix is used in a string context. Here's an example:

use Math::Matrix;
use strict;
my $a = new Math::Matrix ([1, 0, -1], [-1, 1, 0], [0, -1, 1]);
warn("a =\n$a\n");
my $at = $a->transpose;
warn("at =\n$at\n");
warn("at * a =\n", $at * $a, "\n");

The result:

a =
   1.00000    0.00000   -1.00000 
  -1.00000    1.00000    0.00000 
   0.00000   -1.00000    1.00000 

at =
   1.00000   -1.00000    0.00000 
   0.00000    1.00000   -1.00000 
  -1.00000    0.00000    1.00000 

at * a =
   2.00000   -1.00000   -1.00000 
  -1.00000    2.00000   -1.00000 
  -1.00000   -1.00000    2.00000 

I can also solve equations. Here's a trivial set of 3 linear equations where the solution vector is listed as a fourth column to a 3×3 matrix of coefficients:

print new Math::Matrix ([1, 0, 0, 2], [0, 1, 0, 3], [0, 0, 1, 4]) -> solve;

The result:

   2.00000 
   3.00000 
   4.00000 

Or I can solve for multiple solution vectors:

print new Math::Matrix ([1, 0, 0, 2, rand], [0, 1, 0, 3, rand], [0, 0, 1, 4, rand]) -> solve;

This yields:


   2.00000    0.76763 
   3.00000    0.38500 
   4.00000    0.92288 

You can also concatenate the solution vector to the coefficient matrix:

my $matrix = Math::Matrix->diagonal(1, 1, 1);
warn "matrix =\n$matrix\n";
my $vector = new Math::Matrix([1],[2],[3]);
warn "vector =\n$vector\n";
warn "solution =\n", $matrix->concat($vector)->solve(), "\n";

This generates:

matrix =
   1.00000    0.00000    0.00000 
   0.00000    1.00000    0.00000 
   0.00000    0.00000    1.00000 

vector =
   1.00000 
   2.00000 
   3.00000 

solution =
   1.00000 
   2.00000 
   3.00000 

Here I'm using "warn" instead of "print". It's my general practice in scripts to use "warn" for user messages, and "print" for data intended for piping into another command.

If I want to set particular components of the matrix, the following approach seems to work:

my @m;
for my $r ( 0 .. 4 ) {
  for my $c ( 0 .. 3 ) {
    $m[$r]->[$c] = 0;
  }
}
my $m = new Math::Matrix @m;

$m->[2]->[1] = 42;
print "m =\n$m\n";

I constructed an array of references to arrays, passed that to the class new method, and it returned a matrix object. I set elements of the matrix using the same notation I would use to set elements of a reference to that original array of array references.

The output:

m =
   0.00000    0.00000    0.00000    0.00000 
   0.00000    0.00000    0.00000    0.00000 
   0.00000   42.00000    0.00000    0.00000 
   0.00000    0.00000    0.00000    0.00000 
   0.00000    0.00000    0.00000    0.00000 

The first index was the row index (2) and the second index was the column index (1).

The big one for solving matrix problems is probably inversion. That's done with invert. For example:

my $m = Math::Matrix->diagonal(1, 2, 3, 4);
my $i = $m->invert;
print "m =\n$m\nm inverse =\n$i";

This inverts, as an example, a diagonal matrix.

So this is all good stuff for my present project.

Monday, January 27, 2014

differentiating the Veloclinic model for nonlinear least-squares fitting

Two posts ago I described fitting the linear CP model with an iteratively weighted least-square fit to approximate an envelope fit. The weights were either 1/t² for points falling under the CP curve or 10k/t² for points falling above the CP curve. This did a decent job in the example I showed of matching the fit using a 2-point method used in Golden Cheetah.

But of greater interest is using this method for my modified Veloclinic model. That model should do much better at fitting the full time spectrum of the power-duration data. The challenge there, however, is the model is nonlinear.

Nonlinear least squares fitting is described by Wolfram. In my last posts, I added weighting to the method described on the Wolfram page. Nonlinear least square fitting is a matter of navigating a hyperdimensional space, looking for the point where all error terms go to zero (where the model perfectly fits the data). There is no such point, so you search for it until improvement slows below some threshold, then declare victory. But the search involves finding the direction to zero, and that requires tracking the slope, and that requires derivatives. No problem -- it's easy to differentiate the Veloclinic model.

The model as I modified it consists of two components. First, the anaerobic component:

P1 (τ1 / t) [1 − exp(−t / τ1)] ,

and then an aerobic component, which I modified as follows:

P2 (τ2 / t)^(1/2) [1 − exp(−[t/τ2]^(1/2))] .

I'll call the total modeled power f, a function of time.

There are four parameters: P1, τ1, P2, and τ2. The model is clearly linear in P1 and P2; it's τ1 and τ2 which enter nonlinearly.

First, the derivatives of f with respect to P1 and P2 are trivial since the dependence is linear:

∂ f / ∂ P1 = [τ1 / t] [1 − exp(−t / τ1)]

∂ f / ∂ P2 = [τ2 / t]^(1/2) [1 − exp(−[t / τ2]^(1/2))]

The derivatives with respect to the τ terms are a bit more complicated:

∂ f / ∂ τ1 = P1 [ (τ1 / t) (1 - exp[-t / τ1]) - exp(-t / τ1) ] / τ1

∂ f / ∂ τ2 = (1/2) P2 [ (τ2 / t)^(1/2) (1 − exp[−(t / τ2)^(1/2)]) − exp(−[t / τ2]^(1/2)) ] / τ2

I checked these numerically and made some corrections versus the originally posted derivatives.
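
To make the numerical check easy to reproduce, here's a sketch of the model and these derivatives as Perl subroutines, with a finite-difference spot check. The function names are my own and this isn't my production fitting code.

use strict;
use warnings;

# modified Veloclinic model: f(t) = anaerobic component + aerobic component
sub f_model {
  my ($t, $P1, $tau1, $P2, $tau2) = @_;
  return $P1 * ($tau1 / $t) * (1 - exp(-$t / $tau1))
       + $P2 * sqrt($tau2 / $t) * (1 - exp(-sqrt($t / $tau2)));
}

# partial derivatives used to build the Jacobian
sub df_dP1   { my ($t, $tau1) = @_;
               ($tau1 / $t) * (1 - exp(-$t / $tau1)) }
sub df_dP2   { my ($t, $tau2) = @_;
               sqrt($tau2 / $t) * (1 - exp(-sqrt($t / $tau2))) }
sub df_dtau1 { my ($t, $P1, $tau1) = @_;
               $P1 * ( ($tau1 / $t) * (1 - exp(-$t / $tau1)) - exp(-$t / $tau1) ) / $tau1 }
sub df_dtau2 { my ($t, $P2, $tau2) = @_;
               0.5 * $P2 * ( sqrt($tau2 / $t) * (1 - exp(-sqrt($t / $tau2)))
                             - exp(-sqrt($t / $tau2)) ) / $tau2 }

# central-difference spot check of df_dtau1 (the aerobic part doesn't depend on tau1)
my ($t, $P1, $tau1, $h) = (300, 500, 25, 1e-4);
my $numeric = ( $P1 * (($tau1 + $h) / $t) * (1 - exp(-$t / ($tau1 + $h)))
              - $P1 * (($tau1 - $h) / $t) * (1 - exp(-$t / ($tau1 - $h))) ) / (2 * $h);
printf "analytic %.6f vs numeric %.6f\n", df_dtau1($t, $P1, $tau1), $numeric;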

Sunday, January 26, 2014

adding weights to nonlinear least squares algorithm

Note: this uses MathML code produced by OpenOffice. It doesn't seem to work on Internet Explorer. It's also been rejected by Chrome. But it works on Safari, Chromium, and Firefox.

Nonlinear least-square fits are done with a Jacobian Matrix, which describes the linearized dependencies of the model, evaluated at each of the points, on each of the model parameters.  In this case I show two parameters, τ1 and τ2.  I like writing these things out, rather than using index notation, because index notation is a bit abstract.  In the following, f is a function of time t representing the model to be fit.

A =
  [ ∂f/∂τ1|t1      ∂f/∂τ2|t1    ]
  [ ∂f/∂τ1|t2      ∂f/∂τ2|t2    ]
  [    ...            ...       ]
  [ ∂f/∂τ1|tN−1    ∂f/∂τ2|tN−1  ]
  [ ∂f/∂τ1|tN      ∂f/∂τ2|tN    ]

With the unweighted nonlinear least-squares fit, the transpose of the Jacobian matrix is then taken:

A^T =
  [ ∂f/∂τ1|t1   ∂f/∂τ1|t2   ...   ∂f/∂τ1|tN−1   ∂f/∂τ1|tN ]
  [ ∂f/∂τ2|t1   ∂f/∂τ2|t2   ...   ∂f/∂τ2|tN−1   ∂f/∂τ2|tN ]

This then creates a linear equation describing the iteration of the solution.  In the following, Pi are the points to be fit, of which there are N.  Point Pi is sampled at time ti.

A^T A (Δτ1, Δτ2)^T = A^T (P1 − f|t1, P2 − f|t2, ..., PN−1 − f|tN−1, PN − f|tN)^T

To weight this, I replace the A^T matrix with the following:

Aw^T =
  [ w1 ∂f/∂τ1|t1   w2 ∂f/∂τ1|t2   ...   wN−1 ∂f/∂τ1|tN−1   wN ∂f/∂τ1|tN ]
  [ w1 ∂f/∂τ2|t1   w2 ∂f/∂τ2|t2   ...   wN−1 ∂f/∂τ2|tN−1   wN ∂f/∂τ2|tN ]

Then I solve the following modified equation for Δτ1 and Δτ2:

Aw^T A (Δτ1, Δτ2)^T = Aw^T (P1 − f|t1, P2 − f|t2, ..., PN−1 − f|tN−1, PN − f|tN)^T

On the right side of this equation, each Pi − f|ti is multiplied by a weighting factor wi, consistent with points being duplicated.  Similarly, on the left side of the equation, the normalization term is weighted, as it must be, since all that matters is the relative weights, not the absolute weights.

It's good to check this with simple cases.  One is to reduce the number of parameters to one: τ.  Then additionally I'll reduce the number of data points to a single point: t1.

Then the above equation becomes:

w1 (∂f/∂τ)²|t1 Δτ = w1 ∂f/∂τ|t1 (P1 − f|t1)

Note the weights cancel, as they must, since there's only one point. It's easy to see from the equation that this is the correct result: it's Newton's method in one dimension.

If I change back to two parameters and two points, I get:

[ w1 (∂f/∂τ1)²|t1 + w2 (∂f/∂τ1)²|t2 ] Δτ1 + [ w1 (∂f/∂τ1)(∂f/∂τ2)|t1 + w2 (∂f/∂τ1)(∂f/∂τ2)|t2 ] Δτ2 = w1 ∂f/∂τ1|t1 (P1 − f|t1) + w2 ∂f/∂τ1|t2 (P2 − f|t2)

and:

[ w1 (∂f/∂τ2)²|t1 + w2 (∂f/∂τ2)²|t2 ] Δτ2 + [ w1 (∂f/∂τ1)(∂f/∂τ2)|t1 + w2 (∂f/∂τ1)(∂f/∂τ2)|t2 ] Δτ1 = w1 ∂f/∂τ2|t1 (P1 − f|t1) + w2 ∂f/∂τ2|t2 (P2 − f|t2)

Every term is multiplied by a single weight, and so only the ratios of weights matter, as expected.

If I set w2 to zero while w1 remains positive, the above two equations collapse into the following simplified equation:

∂f/∂τ1|t1 Δτ1 + ∂f/∂τ2|t1 Δτ2 = P1 − f|t1

This is clearly correct yet it is underconstrained: two unknowns for a single condition. But it shows the weights are working as expected: putting the emphasis on the points with the higher weighting coefficients.

Appendix: the weights can be represented as a diagonal square matrix, with the weights on the diagonal. Then I can map A^T A to A^T W A, where W is the weight matrix, A is the Jacobian matrix described by Wolfram, and A^T is its transpose. Then I simply use A^T W where in the unweighted case I'd used A^T.
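
Here's a minimal Perl sketch of assembling and solving the weighted normal equations with the CPAN Math::Matrix module, using a toy one-parameter exponential model rather than the full power-duration model; the data, weights, and names are all made up for illustration.

use strict;
use warnings;
use Math::Matrix;

# toy model: f(t) = exp(-t / tau), a single parameter tau, so A has one column
my $tau = 10;
my @t = (2, 5, 8, 12, 20);                    # sample times
my @P = (0.85, 0.65, 0.55, 0.35, 0.10);       # "measured" points
my @w = (1, 1, 1000, 1, 1);                   # the third point gets the large weight

my (@A, @AwT, @r);
for my $i (0 .. $#t) {
  my $f  = exp(-$t[$i] / $tau);
  my $df = $f * $t[$i] / ($tau ** 2);         # df/dtau evaluated at t_i
  push @A,   [ $df ];                         # Jacobian row (N x 1)
  push @AwT, $w[$i] * $df;                    # weighted transpose entry (1 x N)
  push @r,   [ $P[$i] - $f ];                 # residual column (N x 1)
}

my $A_m   = Math::Matrix->new(@A);
my $AwT_m = Math::Matrix->new([ @AwT ]);
my $r_m   = Math::Matrix->new(@r);

# normal equations: (Aw^T A) dtau = Aw^T r, solved by concatenating the right side
my $dtau = ($AwT_m * $A_m)->concat($AwT_m * $r_m)->solve;
print "delta tau =\n$dtau\n";
$tau += $dtau->[0]->[0];                      # one (undamped) Gauss-Newton step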

Saturday, January 25, 2014

Fitting Critical Power model with iteratively weighted linear least squares

Now that I've established that envelope fitting can be done using an iteratively weighted least-square fitting scheme, with the weights set larger for points falling above the model than below, I want to apply that approach to power-duration data.

A nice thing about this is I no longer need to partition the points into discrete regions for each model parameter. I can fit the whole curve. But I need to do that smartly.

First, there's point decimation. It's natural to plot power-duration data on a logarithmic time axis. For example, the points falling between 1000 and 2000 seconds should have about the same significance as the points between 100 and 200 seconds. Each of these sets represents a factor of two in duration difference. However, with points derived on a second-by-second basis, there would be 10 times as many points in the 1000 to 2000 range as there were in the 100 to 200 range. When doing a regression, the longer time data would thus exert far greater influence.

This can be accommodated in two ways. One is with weighting: weight each point proportional to 1 / time (in addition to other weights). Another approach is to decimate the data such that they are approximately uniformly spaced on a logarithmic axis. This is arguably the better approach, since it reduces the size of the problem, and that speeds up the calculation. For example, the data can be selected such that points occur at approximately 2.5% intervals for time points of at least 40 seconds. This will result in points less than 40 seconds being relatively short-changed in influence, but since short-time points are more prone to measurement error due to 1-second sampling, this probably isn't so bad.
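
Here's a sketch of one way to do that decimation in Perl, under my reading of the scheme above: keep every 1-second sample below 40 seconds, and beyond that keep a point only when its time is at least 2.5% past the last kept time. The data here are made up.

use strict;
use warnings;

# decimate second-by-second power-duration data to roughly logarithmic spacing
sub decimate_log {
  my ($times, $powers) = @_;      # array refs, 1-second sampling assumed
  my (@t_out, @p_out);
  my $last = 0;
  for my $i (0 .. $#$times) {
    my $t = $times->[$i];
    if ($t < 40 || $t >= $last * 1.025) {
      push @t_out, $t;
      push @p_out, $powers->[$i];
      $last = $t;
    }
  }
  return (\@t_out, \@p_out);
}

# example: 3 hours of 1-second samples collapses to a few hundred points
my @t = (1 .. 10800);
my @p = map { 250 + 1000 / $_ } @t;   # made-up power-duration curve
my ($td, $pd) = decimate_log(\@t, \@p);
printf "%d points reduced to %d\n", scalar(@t), scalar(@$td);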

The other issue is the nonlinear nature of the Veloclinic model equations. I'll worry about that later. First I want to test the approach on the simpler, linear critical power model.

The critical power model can be represented as a linear model relating work to time, as follows:
work = AWC + CP × time

Then from work I can calculate the power.

There's one subtlety with this approach, which is an additional weighting factor. As it is written, work increases approximately linearly with time. That means a given error, measured in joules, for a long time will be less of an error in power than will be the same error in joules for a shorter time. This essentially places a priority on matching long-time points. To correct this error, an additional weighting factor is needed of 1 / time squared. The square is because the weighting factor is applied not to the error but to the square of the error.
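
Here's a minimal Perl sketch of that weighted linear fit, using the CPAN Math::Matrix module and made-up data. Only the 1/t² factor is shown; the iterative envelope weights would simply multiply it.

use strict;
use warnings;
use Math::Matrix;

# weighted linear least-squares fit of work = AWC + CP * t, with weights 1/t^2
my @t    = (60, 120, 300, 600, 1200, 2400, 3600);   # durations (s)
my @pow  = (450, 390, 330, 305, 290, 280, 275);     # maximal power (W), made up
my @work = map { $pow[$_] * $t[$_] } 0 .. $#t;      # work = power * time (J)

my (@X, @XtW, @y);
for my $i (0 .. $#t) {
  my $w = 1 / ($t[$i] ** 2);        # so errors count in power, not in joules
  push @X,   [ 1, $t[$i] ];         # design matrix row: [1, t]
  push @XtW, [ $w, $w * $t[$i] ];   # matching column of X^T W, stored as a row
  push @y,   [ $work[$i] ];
}

my $Xm   = Math::Matrix->new(@X);
my $XtWm = Math::Matrix->new(@XtW)->transpose;      # 2 x N
my $Ym   = Math::Matrix->new(@y);

# solve (X^T W X) [AWC, CP]^T = X^T W y
my $sol = ($XtWm * $Xm)->concat($XtWm * $Ym)->solve;
printf "AWC = %.0f J, CP = %.1f W\n", $sol->[0]->[0], $sol->[1]->[0];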

Here's a result. I show two fits, one with the inverse time squared weighting factor, one without. The one without the weighting factor produces an inferior fit where it matters most: in the range of 10 minutes to maybe 30 minutes, where the CP model should work well. Also added to the curve is the 2-point method I developed for Golden Cheetah. The 2-point method finds two quality points, one in the anaerobic zone, one in the aerobic zone, and runs the CP curve through those.

CP fittings

With the 1/t² weighting, the regression method gives a similar curve to the 2-point method, with a slight shift from CP to AWC.

Next would be to apply this method to the Veloclinic model, which is nonlinear.

Friday, January 24, 2014

envelope fits using iteratively weighted least-squares regression

I've noted an issue with fitting power-duration curves is that there is a lack of quality data, and the challenge is to focus on the data of the highest quality. Only at a few time durations will the data represent the best possible effort available from the rider; at other durations more power was possible. So to fit a model to the data, what I call an "envelope fit" is wanted. The fitted curve should encapsulate the measured points, hugging the data as closely as possible, such that the points generally fall under the curve or on the curve, but not above it.

I showed for the critical power model, with two parameters, that a reasonable fit to data with sufficient quality can be attained by searching for two "quality points" and doing an exact fit through those points. To independently fit the two parameters, subsets of the available time points are chosen, for example 1-6 minutes for anaerobic work capacity and 10-60 minutes for critical power. This risks ignoring points below 1 minute or in the 6-10 minute range, but the scheme is simplistic and is confused by points in the 6-10 minute range. They are dominated neither by anaerobic work nor by critical power.

I tried applying a similar scheme to a 4-parameter model which I call the "modified Veloclinic model", or just the "Veloclinic model". This model in its original form was described in the Veloclinic blog, and I made a phenomenologically based modification to improve the behavior at long times. However, the fitting scheme I derived was prone to large errors in predicted power where the envelope fell well above the measured points. This is because the fitting scheme is insensitive to how far it is above inferior points: it considers them 100% unreliable. It is sensitive only to the curve touching its four identified "quality points", and the selection of those points is a bit arbitrary.

So what's wanted is a scheme where the inferior points aren't ignored but rather de-weighted. They should be considered, but just considered a lot less than the points of higher quality, that is those points which are higher power relative to the fitted model.

A more common approach to curve fitting is a least-square fit. In least-squares, the fitted curve generally passes through the cloud of measured data, approximately half falling above and half falling below. This is, as I've described, unsuitable for fitting maximal power curves.

A variation on the simple least-squares fit is a weighted least-squares fit. With a weighted fit, some points count more than others. This is equivalent to duplicating points (although the weight can be any real number, and duplicating points would restrict you to weights which are rational numbers). For example, suppose I had points at t = 1 and t = 2, but I wanted the t = 1 point to count twice as much. With a weighted fit it would be equivalent to having two equivalent points at t = 1 and a single point at t = 2.

So it's clear to get an "envelope fit" I want the "quality points" to have a high weight and the other points to have a low weight. I need to determine the weights.

A bad way to determine weights would be to weight by power. The problem with this is that while power is higher for quality points and inferior points at similar time, quality points for long times will still have much lower power than quality points for short times.

Instead I want to weight the points based on how the power compares to the model. For example, if a point has higher power than the model, that point should get a high weight: it becomes a priority to reduce that error. On the other hand if a point has lower power than the model, this is a lesser priority. That point is consistent with the model. It would be better if the model came closer to that point than further, but not if that involves other points moving substantially higher than the model.

The problem with this is you need the weights to derive the model, and the model to derive the weights. So the fit needs to be done iteratively. Hopefully the result converges, rather than bouncing around forever.

The weighting scheme I chose was a simple bimodal weight. If a point is less than the model, then the weight is 1. If the point is higher than the model, the weight is large, for example 1000.

To test this, rather than using complicated power-duration models, I did a simple linear fit. I generated 101 data points from 0 to 100. The ideal value for these points is 0, but I have assigned unit normal perturbations to each point. A least-square fit to the data should be y = a x + b, where a = 0 and b = 0. A numerical fit won't return exactly a = 0 and b = 0, but the results should be close to this. Then to these data I can add iterative weights. What I hope to see is that the fitted curve "floats" to the top of the data, going from a simple least-squares fit to an envelope fit.
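
Here's a sketch of that test in Perl, again leaning on the Math::Matrix module for the linear algebra; the noise below is only roughly unit normal, so the exact numbers differ run to run.

use strict;
use warnings;
use Math::Matrix;

# 101 points at x = 0..100; ideal value 0, with crude normal-ish noise added
my @x = (0 .. 100);
my @y = map { rand() + rand() + rand() + rand() + rand() + rand() - 3 } @x;
my @w = map { 1 } @x;                      # start with uniform weights

my ($slope, $icept) = (0, 0);
for my $iter (1 .. 6) {
  # weighted least-squares fit of y = a x + b
  my (@X, @XtW, @Y);
  for my $i (0 .. $#x) {
    push @X,   [ $x[$i], 1 ];
    push @XtW, [ $w[$i] * $x[$i], $w[$i] ];
    push @Y,   [ $y[$i] ];
  }
  my $Xm   = Math::Matrix->new(@X);
  my $XtWm = Math::Matrix->new(@XtW)->transpose;
  my $Ym   = Math::Matrix->new(@Y);
  my $sol  = ($XtWm * $Xm)->concat($XtWm * $Ym)->solve;
  ($slope, $icept) = ($sol->[0]->[0], $sol->[1]->[0]);

  # re-weight: points above the fitted line get 1000, the rest stay at 1
  @w = map { $y[$_] > $slope * $x[$_] + $icept ? 1000 : 1 } 0 .. $#x;
  printf "iteration %d: a = %+.4f, b = %+.4f\n", $iter, $slope, $icept;
}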

Here's the result:

test

The first fit is the red curve. The weight of all points was set to 1. For this fit, as many points fall above as below the curve. But then based on this fit, the weight of the points above was set to 1000, the weights of the points below kept at 1.

This led to the orange curve. The points below the red curve are now substantially de-emphasized. So the orange curve cuts through the points which were above the red curve.

Now weights were reassigned so those above the orange curve were 1000, all others 1. A new fit led to the green curve. The fitted curve keeps floating upward.

Finally, at the sixth fit, an equilibrium is reached. Here the pull from the 3 points above with weight 1000 is matched by the pull from the 98 points below with weight 1. The curve isn't a formal envelope, as some points still fall above, but it comes close. With a higher weight (for example, 10 thousand) it would come closer to a perfect envelope fit.

This was just an illustrative example. For the power-duration model, there's more work to do since that's a nonlinear model.

Thursday, January 23, 2014

automated fitting of the VeloClinic model: fail

I developed an automated fitting algorithm for the Veloclinic power-duration model, or rather the variant that I described in this blog, with the addition of a square root term on the time dependence of the aerobic power. Or at least I tried.

First I tested it against the model itself, creating an "ideal" power-duration curve with the following Perl snippet:

  # $P1, $tau1, $P2, $tau2, and $duration are defined earlier in the script
  my @pmax;
  for my $t ( 1 .. $duration ) {
    push @pmax,
      $P1 * ($tau1 / $t) * (1 - exp(-$t / $tau1)) +
        $P2 * sqrt($tau2 / $t) * (1 - exp(- sqrt($t / $tau2)));
  }

Then I created 3 hours of ride data and ran it through my fitting program. The plot is from my "quick and dirty" plotting program of choice, xgraph. The axes aren't so clear: the "x-axis" is time in seconds on a logarithmic scale, and the "y-axis" is power in watts.

ideal data

Two fits are shown, one for the CP model and the other for the Veloclinic model. The fit to the Veloclinic model is essentially perfect: the curves overlap. This is of course because the data were generated with the same model. But at least the fitting program is working in the easiest case.

The CP model, of course, does a poor job except over a very small range of ride durations. This is because it assumes aerobic power can be sustained forever, and it assumes anaerobic work can be fully utilized instantly. Neither of these assumptions is close to correct.

Here's a comparison of the fitting parameters:

parameter   actual   fit
P1          490      490.005
τ1          24       24.007
P2          280      279.98
τ2          24000    24029

So the original model parameters are reproduced with excellent precision: no worse than 0.12%, and three of the four parameters are much better than this. But this doesn't prove anything about the quality of the envelope fit. All of the points fall on the curve, effectively, so even a least-squares fitting scheme would have done as well.

Next I fit to partial data, again taken from the same model. In this case, instead of every point on the curve being power from the model, I assume the rider did a finite number of rides of different duration, each ride "optimal" for that duration given the model. This is a test of the envelope fit. The goal is that the ride durations fall on the curve, but all other durations fall below the curve due to a lack of optimal efforts.

ideal data, random durations

Here's a comparison of the fitting parameters:

parameter   actual   fit
P1          490      490.003
τ1          24       24.004
P2          280      279.991
τ2          24000    24011

Surprisingly, these are even better than last time. But maybe it isn't surprising: it was forced to pick points with which to fit the model which are relatively spread out. But hindsight is 20:20.

It's looking really good now, right? But wait... I dredged up some old data from my 2009 pre-Strava days. Crash and burn:

real data

The fit is fine for much of the curve, but drastically overestimates maximum power. Indeed, the time constant for anaerobic power from the fit is only 1.24 seconds, with a maximum power for this component of 4.6 kW. The product is 5.7 kJ, a reasonable number for AWC, but the peak power is too large, and clearly the "fit" at short times is poor. Yet if I were to force the algorithm to fit this section better, the results wouldn't be much more satisfactory.

What I do for this fit is to sequentially examine sections of the curve, and use each to optimize a particular parameter which is relatively more important for that portion. I start by fitting the CP curve. This gives me two parameters: CP and AWC. I then set P2 = CP. For P1, I use the one-second power, subtracting off CP. Then I set τ1 = AWC / P1. I assume τ2 = 24000, which seems a reasonable guess based on having done a hand fit to a plot of superposed power-duration curves from my own records.

So I then go through each parameter in turn. I start with P1. I look for points in the range from 5 to 30 seconds, picking the one which gives me the maximum value of P1 needed to match that power point, all other parameters fixed at the previous guess. Then I go to the endurance side. I look at points from 3600 to 5400 seconds, stepping by 5 seconds, looking for the point which gives me the maximum estimate of τ2, all other parameters fixed. Then I move back to the anaerobic portion of the curve, from 60 to 360 seconds, looking for the point which gives the highest value of τ1. Finally I look at the "aerobic" portion, from 600 to 1800 seconds, looking for the point which gives me the highest value of P2. I repeat this sequence until the numbers stop changing to a summed relative difference of 0.01%. For this curve, this took 27 iterations.
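
As an illustration, here's a sketch of just the P1 sweep described above (the other three sweeps follow the same pattern); the function name, toy data, and parameter values are made up for the example.

use strict;
use warnings;

# sweep the 5-30 second points, solving each one for the P1 needed to match it
# with all other parameters held fixed, and keep the maximum
sub sweep_P1 {
  my ($t, $pmax, $tau1, $P2, $tau2) = @_;   # array refs of times/powers, current parameters
  my $best = 0;
  for my $i (0 .. $#$t) {
    next unless $t->[$i] >= 5 && $t->[$i] <= 30;
    my $aerobic = $P2 * sqrt($tau2 / $t->[$i]) * (1 - exp(-sqrt($t->[$i] / $tau2)));
    my $per_P1  = ($tau1 / $t->[$i]) * (1 - exp(-$t->[$i] / $tau1));  # anaerobic term per unit P1
    my $P1      = ($pmax->[$i] - $aerobic) / $per_P1;
    $best = $P1 if $P1 > $best;
  }
  return $best;
}

# toy usage with a few made-up points
my @t    = (5, 10, 22, 30);
my @pmax = (900, 750, 640, 580);
printf "P1 estimate: %.0f W\n", sweep_P1(\@t, \@pmax, 25, 280, 24000);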

The "quality points" which result from this process in this example are at 22, 60, 1076, and 3600 seconds. Four quality points allow setting of the four parameters.

But as you can see it didn't work so well. If you were to see this data fit in Golden Cheetah you'd want your money back. Confidence in the fit is necessary to draw lessons from the results.

One point of interest in the fit is the range from 200 to 1000 seconds. Here the Veloclinic model predicts a higher maximum power than the CP model. More typically, by virtue of providing additional constraints, the Veloclinic model predicts lower power, but in this section, the predicted power is greater. Suppose you feel fresh and ready to go and want to absolutely drill a 10-minute (600 second) effort. Do you pace it off the CP model or the Veloclinic model? If I choose CP, I go out at 290 watts. On the other hand, if I choose the Veloclinic model, I go out at 300 watts. That's potentially a big difference.

The reason for the higher Veloclinic model prediction is the model is assuming a relatively high rate of fatigue on aerobic power: the fitted value of τ2 is only 2961 seconds. That's obviously a lot shorter than I'd fit for an aggregate superposition of power-duration curves from my pre-Strava files. It happened to be the case in this data set I'd done no quality efforts over an hour, and I use efforts over an hour to determine τ2. So it assumed my lack of good numbers wasn't due to lack of effort, but rather due to lack of ability, and therefore my promising aerobic power in the 18-minute range (1076 seconds) simply wasn't up to the task of producing power for 3600 seconds. But this high rate of decay for times over 18 minutes implies a high rate of rise for times less than 18 minutes. So this boosts the prediction for 5 minutes. In reality, I trust CP more.

So good, robust fitting is important, and I don't yet have confidence in the fitting scheme I have, no matter how well it did on toy data.

So what to do? One approach I tried is to add an additional component, P0, which caps the power at very low durations. Then I modified the power from the existing model with the following heuristic equation:

Pnew = [ P0^(−5) + PVC^(−5) ]^(−1/5)

But I'm not going to promote this without justification, especially since it's not integrated with my fitting function, but is applied post-process.

So the question is why the fit is so poor. I think the answer is my fitting scheme is too simple-minded. I fit each parameter in turn, using different segments of the data, the goal of each being to maximize its value. I think I need some sort of global goodness-of-fit metric. For example, since I want an envelope fit, I am much more sensitive to points falling above the model curve than below, but I still care about points falling below. Additionally since it's natural to plot the data with logarithmic axes, I want to weight the points by 1 / (t × P), or weight by 1/P and decimate the points to constant spacing of the logarithm of time (which will save computation time). Least square fits are done with an error function of error-squared, but since I want the envelope, a strongly asymmetric error function would be needed. Anyway, these are just ideas.

Wednesday, January 22, 2014

next trail race: Lake Chabot 30 km

Lake Chabot

The next trail race on my list is the Lake Chabot Trail Run by Inside Trail Racing. This seems to be my year for Inside Trail racing. That just worked out based on the race calendar.

In a moment of weakness I chose to accept the technical T-shirt. I'm down one technical T after leaving one somewhere; I don't recall at the moment where. At my last race, Inside Trail Racing's Pacific Foothills race at Montara Mountain, I ended up wearing my wool undershirt after I accidentally started pinning my number to it and then just decided to go with it. It's January and way too warm for two layers. Not normal. The biggest concern about using the wool shirt is pinning numbers leaves damage, and I really like the shirt. Better to shred my race technical T's.

As an aside, both of my technical race T's fit me like a tent. I think the point of these shirts is to wear them in a race, and in a race, wind resistance slows you down. That means shirts should be form-fitting. Maybe shirt companies should do what Jakroo and other makers of bike clothing do, and offer a second fit axis: "slim", "regular", and "relaxed". I'd gladly pay more for this option, although I don't think it should cost much more.

I was told it was "a good sign" I was talking about wanting to run a 50 km race minutes after finishing the Pacific Foothills half on Saturday. That was a relatively hard race: up and down virtually the whole way. The hardest part is the down: fast downhills pound the quads in a way for which it's hard to prepare without doing more of the same. Fortunately my downhill running was the bright spot of that race for me: my quads were sore but not as bad as they've typically been in the past, and it was the first race I've ever done where I wasn't passed by anyone running down hill. There were faster runners, but I cleverly let them get a big lead on me on the climb so they wouldn't have the opportunity to pass me on the descents.

So downhills are hard, and can create a real issue with post-race recovery, but after outright resting on Sunday, and getting stuck at work for over 12.5 hours (+ 2.5 hours of commuting) on Monday due to a deadline and an unfortunate glitch on some flaky on-line report entry software, on Tuesday I don't feel completely bad.

And as to the result: I finished the same place (11th) in a field over 3 times as large relative to when I did the Coastal Trail Runs version of that race in 2011, so I had to be pleased with that result, and it encouraged me to move on with my plan of ramping up to 50 km.

So I'm looking forward to Lake Chabot. I like the 30 km distance. Half marathons don't push the distance limits nearly as well. In CIM, my legs gave out at 18 and 20 miles. I've had issues as often as not running 30 km races, and I've never had serious problems in halves. Historically, I find there's a huge jump going from 25 km to 30 km, and from 30 km onward. So 30 km is a good challenge.

And the course looks amazing. 30 km without any repeat trails, the only "out-and-back" a short bridge (trail map). Racing is about the epic journey -- the best courses are point-to-point, but those are rare. Next are courses which form single loops, the larger the area subtended, the better. This one comes pretty close to the "big loop". Least attractive are multi-lap races. These also tend to have a high rate of drop-outs, as runners find it too tempting to quit at the start/finish, and too uncompelling to repeat something they've already done. At Pacific Foothills, the courses over the half-marathon each involved repeat loops, and there was a huge drop-off in the number of finishers going from the half to the 30 km. Normally there's a strong turn-out for 30 km races; here the drop was no doubt in part due to a relatively lower rate of signing up for the 30 km.

Add in the attractions that (1) I've never run there, and (2) I can take BART.

Before the race, however, I definitely need to make sure I get out for a few more solid training runs on the trails. I need to reinforce my downhills further. Chabot is not as climb-intensive as Pacific Foothills was, but the longer distance makes up for the reduced descending density.

Tuesday, January 21, 2014

some of my favorite pro cycling things

Favorite pro racing helmet

POC. I really like the design concept: to make a helmet which is race-light but goes beyond the minimal standards of the usual tests. It puts material on the temples and lower back of the head where it will do some good, avoids sharp angles which could twist the head and contribute to concussions, and it comes available in an orange color which flouts fashion in the name of visibility. Plus it's round, which should fit my head, although I have yet to see one, let alone try one on.

Favorite pro racing bike frame

Not counting fit issues, it has to be the Cervelo R5. In particular, the R5-CA, but the R5 is close enough to have some of the credit for that $10k frame to rub off. The Cervelo white paper on the R5-CA design is really extraordinary: a wonderful balancing of aerodynamics, stiffness where it's wanted (not where it's not), and light mass. I'm not sure about the head tubes, but if the bike fits you... BTW, I also like the Cannondale Evo and the Scott Addict, but not with heavy paint.

Favorite power meter

Garmin Vector, both for what it is and what it can become.

Favorite pro team jersey

Garmin-Sharp. Their jersey balances simplicity with interest the best in 2014, with a nice, distinctive color combination and a subtle integration of the traditional argyle, avoiding the herd mentality flocking to black.

Favorite rider to get promoted from the domestic ranks

It has to be Phil Gaimon. The guy can write as well as he rides his bike.

Put these all together....

First day of racing in 2014 road calendar... at Tour de San Luis:

CyclingNews photo
CyclingNews photo

Okay, so he's on Shimano. Nobody's perfect...

Monday, January 20, 2014

selling pedals

When I got fit at 3D Bike Fit, it was recommended I switch from Speedplay to Shimano pedals due to the more stable platform and greater resistance to pedal wear. I gave these a try, and indeed the feeling of connectedness was palpable. I expected to feel claustrophobic, accustomed to the freedom of Speedplay's rotational float over many years, but with Kevin Bailey's expert cleat placement, my foot was where it wanted to be and there was no need for it to be anywhere else. As long as I didn't think about the fact my foot was constrained, I was fine. And the pedals were light (248.0 grams on my scale), especially considered in combination with the cleat hardware, which is much lighter than a Speedplay cleat with adaptor plate (although I have no adaptor plate on my Bont Speedplay-drilled shoes).

The only time I noticed the constrained position was when coasting. I found that when I coasted, I liked to move my foot around a bit to stretch the muscles. This isn't as possible with Shimano's relatively fixed position. But really it wasn't a big deal. I rode the Devil Mountain Double on the Shimano pedals & cleats, and had no issues over the mountainous 200 mile route.

What turned me off on the pedals, however, was the single-sided aspect. I stopped once to indulge my navigational paranoia when the Garmin 500, as it is prone to do, warned me "off course" even though I was exactly on course. And then when I started again, I had trouble finding the correct side of the pedal. It wasn't that big a deal, a few seconds, maybe 5. But those 5 seconds meant I just missed a traffic light at the next intersection. All of a sudden 5 seconds lost became 30 seconds lost, and 30 seconds can make the difference in a placing, even on a route this long. And were I to use the pedals in a Low-Key Hillclimb, 5 seconds trying to find a pedal at the start is a similar time loss to 300-500 grams extra on the bike. If I had a choice of two pedals, one set 300-500 grams heavier, which would I pick? Which would you pick? 5 seconds is a big deal.

Of course it's possible with practice I'd learn to clip in as quickly with the Shimano pedals as I would with the Speedplays. But maybe not. And certainly I'd not be able to clip in quicker. Team Sky's policy is "marginal gains": you chip away at things which matter just a little, and in the end you end up with a net effect which matters a lot. This was one extra factor about which I didn't want to worry.

So after putting the pedals aside for awhile, I'm finally selling them. It seems a good time: road season is approaching, and people might be looking for new equipment now. The pedals certainly are nice, and I regret giving them up, but it's important they find a home where they'll be put to proper use.

Here's the eBay link.

As an aside, Garmin Vectors are based on the Look platform, and they also are single-sided. That's unfortunate, in my view, but I will give those a try as well, mostly because I'm so very interested in the L-R balance.

Saturday, January 18, 2014

Inside Trail Racing Pacific Foothills Half-Marathon

I was shocked when I looked back through this blog and observed that it had been two and a half years since my last trail race, the Golden Gate 30 km by Coastal Trail Runs. I remember that run well: the pain of the wrong turn, breaking down toward the finish. Two and a half years? Where had the time gone? Obviously, I've not been idle, but between cycling events, road running events, and a few unfortunate injuries focus drifts and suddenly you open your eyes and the year has jumped 3 digits.

Clearly I had to fix this.

And so after an extended period of bikeless December travel where running became my only outlet for self-exertion, I decided to continue the momentum through early 2014. Goals: it's important to have goals. With each year it's important to try for something new and challenging, something outside the comfort radius, something which will be hard. And an anomalous addiction to Ultra Running Magazine has me telling myself that a marathon distance is just a psychological limit, one I need to break beyond, even if my two marathons so far have ended in hobbled pain. But life is short. The time for a 50 km goal is now.

The first step toward that goal was a half-marathon. Mid January seemed a good time, as my speed, such as it is, has just been coming back with my solid block of run training. And I've more respect for the need to run trails to prepare for running trails. Run as much as you like, if you don't work the downhills, they'll bite you on race day.

Inside Trail Racing's Pacific Foothills race was a very attractive option. I'd done a race there with Coastal Trail Racing in March 2011, recovering from fitness down-time associated with a new job, and had run in the fog and rain to a time of 2:13:12. I wanted to beat that time this year. No fog, no rain to be seen. It was a crystal clear view of the summit of Montara as we gathered near the park entrance for the start.

The first wave consisted of the 30 km, marathon, and 50 km groups, which went off a few minutes after their scheduled start at 8:30 am. My group, the half marathon, went next. With around two minutes to go, as I was lined up at the front of the pack, I decided to reset my Garmin Forerunner 610, to get true race distance. "Resetting in 3, 2, 1..." it said, then froze there. I didn't think much about this until I went to hit "start" with 45 seconds to go. Unresponsive. I've had this problem before, and fortunately knew the solution: push the power button for 30 seconds, forcing a shut-down, then restart. I barely got this done before the race start, but then I hit "start" before it had acquired GPS. This tells the unit to stop worrying about GPS and just act as a timer. Obviously that wouldn't do. With seconds to the start, I didn't know what to do, so I power cycled it again. But by now we were running under the trees and so it took what seemed at the time to be around 2 km to get GPS lock, but was actually only around 900 meters. I felt lucky -- it could have been a lot worse.

Cara Coburn photo
Start of the half marathon (Cara Coburn photo)

I knew two others in my race, both from cycling: Tim Clark from Low-Key, and Peter Rigano from SF2G. Peter I knew is fast. He was going to be up there contesting the win. Tim I wasn't sure: he's aerobically superb but I wasn't sure about his running endurance. I was coming at this from the opposite side: my speed was questionable but I had confidence in my endurance.

The course isn't subtle: it starts almost straight off climbing Montara Mountain. Despite the name, there's not much "foothills" about it: from the bottom to the top, straight up. Peter and Tim both went off fast in a lead group. I had to let them go, leading a chase group which was lined up behind me. First one runner passed me and disappeared up the trail, then another, and finally a third. But when this third runner's lead stabilized, I felt things were setting into a steady-state equilibrium. Eventually the gap started to drop. I felt this was a good sign for my pacing, since I felt, while strained, I could keep it up for awhile. Not too long after I passed him. Nobody else would pass me all day.

Still on the Montara Mountain Trail, before the fire road, a runner came blasting down the opposite direction. He was the leader of the first wave. It was remarkable, because their lead had been probably significantly less than the allotted 15 minutes due to their slightly late start. But it was a warning to stick to the right of the single-track trail from this point onward.

That wasn't always possible, as we started catching slower members of that first wave ourselves. In trail races, however, people tend to be particularly courteous about letting faster people pass. It's all very friendly.

Finally I reached the fire road, which I knew well from the recent Low-Key Hillclimb here. There are some soberingly steep sections of the fire road here, especially where it consists of exposed rock with a thin layer of sand, and I had to give up any semblance of running here. Typically when I run hills, I go from a true run to more of a power walk (more efficient than running at the same speed) when the grade kicks up to around 10% or so. But there was nothing "power" in my walk here. It was more of a "brisk hike".

But I got through, and eventually the grade leveled out, marking the approach of the summit. From the map, it had appeared we were to run to the north peak. This is where Wendell of Coastal Trail Runs had told us to look for a secret phrase ("long climb") written on a sign at the summit to prove we'd been there. But as I approached the intersection with Middle Peak I saw some runners standing around. A sign marking the turn-around was near the junction, with a white chalk line. And no secret phrase in this race. We had timing chips, but no mat either. I dutifully crossed the chalk line, turned around, and began my descent without delay.

The descent was a relief, shifting the load to different muscles. In the clear sunshine and warming air, this was a very different run than it had been three years ago in clouds. I had little problem with the descent, slowing substantially only for that steep, sand-covered granite where I didn't trust traction. The only real issue was, once on the single-track, my inherent lack of navigational confidence started infecting my brain. But Inside Trail Racing does an excellent job marking the course, marking not only corners, but also putting blue ribbons at the head of trails which shouldn't be taken, and putting plenty of ribbons along the trail between turns to let you know you're still on track.

I was surprised nobody had overtaken me to this point, as my past experience had been I get passed early and often on downhills. So it was some combination of descending faster, and perhaps climbing slower, such that there weren't as many fast runners behind. But toward the bottom of the descent, as I was contemplating my hydration strategy, a runner approached from behind. Fortunately I reached the bottom before he reached me, and I was able to redistance him on the short climb which opened a little loop added to the Inside Trails course relative to the Coastal course.

Coming into the finish, my stomach was complaining about the strawberry Hammer Heed I'd been drinking. I was carrying a Low-Key water bottle which I'd filled with a solution of two scoops of Hammer Heed. This is a fairly strong mix, in excess of the recommended concentration, and my stomach was rebelling slightly. I'd put a second such bottle at the start/finish, serving also as the single aid station on the course, but I decided to go for water instead for the rest of the race. To prepare, I unscrewed the top of my bottle and when I entered the aid station I headed straight for the water jug. I hit the valve and began filling, but rather than monitor the water level directly, I estimated time needed to fill the bottle 2/3 of the way, which was the amount I thought I wanted for the remaining close to 10 km. Not wanting to waste any time, I sprinted off before looking to see how much water I actually had. It was only half this: 1/3 full. This would need to do. Water is mostly a psychological crutch at this point, I told myself.

The second loop, marked with pink ribbons, consists of two climbs and descents separated by a short stretch on smooth dirt road. This is one of the few truly flat sections of the whole course. The first climb went well enough, then the descent, and I felt good on the road. But I'd underestimated that second climb. It went on and on. I'd been passing runners all the way, not able to tell if they were from the earlier wave or from my wave. But if you keep on moving eventually the top arrives, and as I'd remembered from last time, the top on this climb comes as a relative surprise. You're going up, then you're going down.

And it was a relief to reach it, as I'd finished my water before the climb began, and the increasing heat of the day was leading to my shirt (a wool long-sleeve) becoming soaked with sweat, something which is becoming a regular feature of this record-warm, record-dry January. I could feel my resources draining, but once again I was renewed by the transition to downhill.

Two runners were ahead, one wearing white and, ahead, another in yellow. Wait -- that guy in yellow was Tim. I paced behind the first runner until he offered to let me pass and slowed a bit. Then I caught and passed Tim, who wasn't so happy to see me. But this is trail running, all good fun.

Soon after passing Tim I caught Cara, who was merrily hiking along. In the 2011 event, she won her age group in the 10km race, also hiking, despite trying hard to not do so by stopping at her car immediately before the finish. Hiking is a great option at these events.

I felt good from here on, feeling the pull of the finish. I crossed the line with the clock showing 2:17, a disappointment, but when I saw the results soon after (chip timing is great) I realized I'd neglected to subtract the 15 minute difference to group 1. My time was 2:02. Solid.

At the finish I chatted with Tim and with Peter, Peter having finished in 1:51 to win the 20-29 age group and place 4th overall. I was 3rd in the 40-49, a competitive age group, and 11th overall out of a remarkable 131 finishers. I collected my medal and my coffee mug. Cara finished a bit later.

Eventually, wanting an alternative to the Costco food at the finish line, Cara and I left the race scene and went to Guerrero's Taqueria in Pacifica, a friendly place with a nice salsa bar. Then it was back home where I spent the rest of the day trying to recover. As addictive as this race thing is, one thing I don't like is the hours of recovery after. I think if I'd drunk a bit more during the race I might have felt a bit better.




Friday, January 17, 2014

"20 is plenty" in San Francisco?

20 is plenty

The San Francisco Bike Coalition and the City of San Francisco announced a new program, Vision Zero, with a nominal goal of zero tolerance for pedestrian deaths. The idea is to shift the perception that pedestrian deaths are an acceptable cost of doing business, that crossing the street is an activity like skydiving or smoking where the risk of death is implicitly accepted.

But it's mostly hot air. Looking at the article, the term "speed" isn't to be found at all. The #1 risk factor for pedestrians is vehicle speed. Going from 20 mph to 40 mph increases the risk of pedestrian fatalities by approximately 17 times.

Perhaps it's perceived that promoting slower vehicle speeds is politically unrealistic. I don't give a crap. If you're serious about safety, then it's important safety be given a priority.

And what's the cost of lower speed? Consider an example of a driver crossing the city at either 35 mph or 25 mph maximum speed. San Francisco is approximately a 7 by 7 mile square. So take 7 miles as a typical long trip. Assume the driver is at speed for approximately 2/3 of this; for the rest of the distance the driver is acceleration-limited, deceleration-limited, or congestion-limited. The driver will additionally be stopped at traffic signals for much of the trip, but that's a fixed time cost.

The travel times at speed are then 8 minutes for 35 mph versus 11.2 minutes for 25 mph. So the slower speed results in 3.2 minutes of additional time to cross the city.

If that was all there was to it, then that wouldn't be so bad. But there's compensating advantages. For example, see this discussion of a 20 mph speed limit in London.

The key point is when vehicles go slower, everything goes smoother. Consider gas flow. When air flows over an object, if it is moving slowly, you get laminar flow, which is smooth with relatively little resistance. However, when the flow becomes extremely rapid, you get turbulent flow, with more resistance.

Vehicles aren't gas molecules, but they often behave with no greater intelligence. The principle holds.

So Ed Lee should step up and take a bold position here. If he's serious about safety, follow London's example and reduce vehicle speeds in the city. 20 mph would be especially bold, but I'll accept 25 mph. The simplistic model is it adds only 3.2 minutes to a cross-city trip. In reality it likely adds less. And the side effect is traffic flows more smoothly, which means MUNI can stick closer to schedule, which means bicyclists are better integrated with vehicular traffic, which means pedestrians can trust their survival chances more. All of this means less need for people to drive, and less traffic means less congestion, and less congestion just moves things along even faster.

Unfortunately, after I wrote the bulk of this blog post on Wednesday, this article was posted on Streetsblog. There's nothing bold coming from Ed Lee. Once again, San Francisco will need to look elsewhere for leadership, even within the US and not just the world. I am not surprised. I was opposed to Ed Lee in the 2011 mayor's election. There's just nothing there.

There was an effort to pass a 20 mph speed limit in New York. The bill introduced by David Greenfield eventually sputtered. But these things don't happen quickly. At least New York had the dialogue. In San Francisco, addressing vehicle speed is barely, if at all, on the government table.

Thursday, January 16, 2014

Mark Cavendish bike fit: 2011 vs 2014

According to a recent CyclingNews article, Mark Cavendish downsized from a 52 cm to a 49 cm Specialized Venge this year.

The article quotes Specialized Body Geometry fit manager Sean Madsen:

"One thing about Cav is that he likes to periodically change his position around, based entirely on his feeling," Madsen said. "He may change it back in a couple of weeks, when his mood changes!"

So did he change his position? Here's some BikeRadar/CyclingNews reviews of Cavendish bikes:

  1. 2011
  2. 2013
  3. 2014

I took the 2011 and 2014 side shots. Unfortunately these are from relatively different camera positions. To compensate, I did a combination of image scalings, rotations, and perspective transformations on the images to match the bottom bracket positions and the heights of the front and rear tires. This is 3 transformations for 3 constraints. It's not enough to fully match the photos, but it's a lot better than taking them unprocessed. I aligned the front chainrings, since this should align the bottom brackets, the standard of reference for bike position. I then did a superposition GIF animation.
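
I didn't keep the exact processing steps, but the alignment can be sketched roughly as follows, here simplified to a pure affine fit from three matched points (bottom bracket plus the tops of the two tires) rather than the scale/rotate/perspective combination described above. The file names and pixel coordinates are made up for illustration:

  import cv2
  import numpy as np
  from PIL import Image

  # Hypothetical matched points (pixels) in the 2011 and 2014 photos:
  # bottom bracket, top of front tire, top of rear tire.
  pts_2011 = np.float32([[612, 540], [940, 610], [285, 612]])
  pts_2014 = np.float32([[598, 552], [921, 618], [270, 620]])

  img_2011 = cv2.imread("cav_2011.jpg")
  img_2014 = cv2.imread("cav_2014.jpg")

  # Three point pairs determine an affine transform (scale, rotation,
  # shear, translation); warp the 2014 photo into the 2011 photo's frame.
  M = cv2.getAffineTransform(pts_2014, pts_2011)
  h, w = img_2011.shape[:2]
  warped_2014 = cv2.warpAffine(img_2014, M, (w, h))

  # Write a simple two-frame superposition GIF (OpenCV is BGR, Pillow wants RGB).
  frames = [Image.fromarray(cv2.cvtColor(f, cv2.COLOR_BGR2RGB))
            for f in (img_2011, warped_2014)]
  frames[0].save("cav_superposition.gif", save_all=True,
                 append_images=frames[1:], duration=700, loop=0)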

I don't see much of a position change. The 2014 bike has spacers to correct for the difference in head tube length. The front-center of the two frames appears to be fairly similar.

Okay, don't be lazy: I can check to see if this is true. Here's a geometry chart:

Indeed the front-center is identical. The smaller frame has more trail. Maybe Cavendish likes the feel of more steering stability.

Wednesday, January 15, 2014

tracking my running training load

It's been only a bit over two weeks since I tracked my running training load last, but with my first trail race in two and a half years coming up this weekend (how did I let it go so long?) I wanted to update the plot.

It can be hard to assess how it's going from feel. I run a bunch, rest, run a bunch more, rest... weekly miles tell a story, but only crudely. For a better view I borrowed some metrics from Andrew Coggan typically applied to cycling power calculations: ATS and CTS. In lieu of Coggan's effective workload numbers, I use running kilometers. This is crude, but it tells a story.

The metrics are a 7-day exponentially weighted average for ATS, and a 42-day exponentially weighted average for CTS. ATS represents fatigue, CTS represents accumulated fitness.
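
In case anyone wants to reproduce the plot, here's one common discrete form of this sort of exponentially weighted average, applied to daily kilometers (the daily values here are made up, not my actual log):

  # Exponentially weighted training-load averages from daily running km:
  # ATS with a 7-day time constant (fatigue), CTS with 42 days (fitness).
  def ewa(daily_km, tau_days):
      alpha = 1.0 / tau_days   # one common choice of smoothing constant
      value = 0.0
      series = []
      for km in daily_km:
          value += alpha * (km - value)
          series.append(value)
      return series

  daily_km = [0, 15, 19, 0, 0, 0, 0, 0, 0, 20, 0, 12]   # made-up log
  ats = ewa(daily_km, 7.0)
  cts = ewa(daily_km, 42.0)
  print("latest ATS = %.1f km/day, CTS = %.1f km/day" % (ats[-1], cts[-1]))

Some implementations use alpha = 1 - exp(-1/tau) instead of 1/tau; for these time constants the difference is minor.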

Here's the new plot:

Sure enough, while ATS is spiky, CTS keeps slowly ramping upward. The peak spikes of the ATS are similar, but they're getting closer together.

In addition, my runs have been getting a bit faster, although that's not indicated on the plot, which shows distance only.

Hopefully the race on Saturday goes well.

Tuesday, January 14, 2014

Team Low-Key ready for greatness

This whole jersey design thing is addictive. I've really got to move on... but at least Team Low-Key is ready for greatness:

Monday, January 13, 2014

POC Octal helmet

Garmin-Sharp previewed the new POC Octal helmet in the early 2014 races in New Zealand and Australia. Here's Jack Bauer finishing second in the sprint in the New Zealand championships in Christchurch:

CyclingNews

I love what I see in the POC: a round shape that should fit my head, light weight (195 g for a medium), more padding in the temple area and the lower back of the head, and a rounded profile which should reduce the twisting moments that contribute to brain trauma and concussions. It seems to have it all, other than being soberingly expensive ($270 retail).

Well, everything except aerodynamics. I was told it could be a dog in the wind tunnel, something Jack might be regretting in that photo as he gets beaten to the line by Hayden Roulston.

But POC also has an aerodynamic version, the Octal Aero. Here's a comparison:

Octal
Octal Aero

I've read the two are equivalent, except the Aero adds a cover over the Octal's vents. To test this, I superposed the profiles, making the Octal red and the Aero green. Where they overlap is brown:

profiles

There's essentially no red left in the image, and there's green only over the vent holes of the Octal. So it indeed appears the profiles are the same. The penalty is 20 grams, which is significant but not so bad compared to other aero mass-start helmets, like the Specialized.
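
For the curious, a superposition like this takes only a few lines: put one profile in the red channel and the other in the green channel, so areas covered by only one helmet show as pure red or green while the overlap comes out as a mix. A sketch with assumed file names, not my exact processing:

  from PIL import Image, ImageOps

  # Load the two profile shots as grayscale and invert them so the helmet
  # silhouette is bright on a dark background.
  octal = ImageOps.invert(Image.open("octal_profile.png").convert("L"))
  aero = ImageOps.invert(Image.open("octal_aero_profile.png").convert("L"))
  aero = aero.resize(octal.size)   # crude alignment: assumes matching framing

  # Octal -> red channel, Aero -> green channel, blue left empty.
  # Pixels covered by only one profile come out red or green; overlap mixes.
  blank = Image.new("L", octal.size, 0)
  overlay = Image.merge("RGB", (octal, aero, blank))
  overlay.save("octal_vs_aero.png")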

I wonder how well you could do using packing tape on the Octal when you want aero, removing it if you don't. I did this trick with my Specialized Prevail but basically never remove the tape. I never have a need to. The tape is only over the front center vents, not the rear or side ones, as it is in the front center that the wind drag penalty will be greatest, while keeping the side and back open allows for diffusive cooling. Certainly the plastic cover, while heavier, is a tidier solution, as the packing tape tends to attract debris.

Anyway, as I write this they have only one size in one color available. The promised availability in stores is March.

As an aside, there's also a time trial helmet, which is... distinctive. I am a big fan of function over form on helmets, though, and if it works in the wind tunnel, then it's fine with me:

Riding Feels Good photo

Sunday, January 12, 2014

path to 50 km

A new year's resolution for this year was to race a 50 km trail run. Not just finish a 50 km: that can be done by walking all of the uphills, running the downhills, and jogging the flats. I want to run the whole thing (power walking is fine on steep climbs: it's just as fast and more efficient).

I did a 19 km trail run Saturday 4 Jan following a 15 km road run the day before. My next run wasn't for another week, due to fatigue, likely at least in part from allergies. But it's good to occasionally rest. And when I finally felt ready to go again, paranoid that I'd lost all my fitness and was fat and out of shape, I went out for a 20 km run where, once I got the kinks out, I actually felt fairly good. So it's clear I've got a half-marathon in me.

So the next rung in my ladder is my first trail race since my injury: Inside Trail Racing's Montara Mountain half marathon. This will be my first race with Inside Trail Racing, and it will be interesting to see how their promotion goes. I ran the same course in March 2011 in a race promoted by Coastal Trail Runs. That one was in the rain with low visibility, which had its own merit, but I'm hoping for clear weather next week.

My moving time the last time was 2:23 (Strava activity). My goal is to break that time. I should be able to do so: back then I was coming off a long period of detraining due to focusing on what was at the time my new job. As an aside, I think I'm more productive when I make the time to exercise.

It's also interesting going back to that run report, since there I dismissed the idea that run gait efficiency was limiting my descending speed. I'd previously had my gait analyzed at Innersport in Berkeley. But a followup visit in March 2013 showed I was still overstriding, wasting energy with each foot-strike. Improving that will be key to improving my running speed on flats and downhills.

So what next? One nice option is Inside Trail Racing's Lake Chabot race on 22 Feb, where the 30 km race looks like a very nice course. This is 5 weeks after next weekend's race, which is a nice interval. It provides time for recovery, then training, then tapering.

A very nice looking 50 km race is the Woodside Ramble, Inside Trail Racing's race out of Woodside, CA, on 10 April. This is 7 weeks after the Lake Chabot race. That should be enough time to get my endurance from 30 km up to 50 km: about 7.5% per week. My first trail race was in Huddart Park, so I have a bit of a soft spot for it, and it's really a wonderful place to run. This course goes well beyond the boundaries of Huddart Park, however: 50 km without any repeated loops. Really nice. I will definitely need to scale back my pace in the 50 km race, though: treat it as a training run, not a race. The goal is to finish, running the whole way. 30 km races are different: there I want to go fast.
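
That 7.5%-per-week figure is just the compounded weekly increase needed to stretch 30 km to 50 km over 7 weeks; a quick check:

  # Weekly growth factor needed to go from a 30 km race to 50 km in 7 weeks.
  weekly_factor = (50.0 / 30.0) ** (1.0 / 7)
  print("%.1f%% per week" % ((weekly_factor - 1) * 100))   # about 7.6%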

If I can do a 50 km, I'll feel I've broken through a big barrier. It's important to break some new barriers each year. I know I should be able to do it. 50 km is considered a beginner event in the ultrarunning community. It's mostly about taking care of myself.

Saturday, January 11, 2014

2014 Strava-Marc Pro jersey

I love this jersey design: the 2014 Strava-Marc Pro team, based in northern California:

As an aside, the jersey is done by Jakroo, the northern-California-based company which also did the WeightWeenies jersey, and which I plan to use for the Low-Key Hillclimbs jersey.

I really like my Weightweenies jersey: it fits snugly without being tight. It's a perfect race jersey for me. Unfortunately I can't wear it in sanctioned races.


Bill Bushnell photo

On the subject of Strava, I'm a bit disappointed that they dropped me from their Ambassador program this year without any notice (starting 1 Jan, I was back to "premium"). But after they stopped sponsorship of the Low-Key Hillclimbs, of which they were an enthusiastic supporter, stopped sponsorship of my cycling club, and now this, I'm naturally a bit less of a fan. Maybe it's an inevitable consequence of their amazing growth. And it's important to handle disappointment.

Friday, January 10, 2014

Sky switch from SRM to Stages

When you think about power meters, what professional team comes to mind? Only one...

Indeed, there's a web site devoted to photos of Chris Froome looking at stems. Staring at his SRM display, many have claimed. Sucking the humanity out of racing by reducing it to pre-programmed efforts determined by power analysis? Hardly, obviously, but traditionalists always oppose change.

Froome and his SRM

He even does it in Pro Cycling Manager, a video game:

Froome and his SRM

So it's a major change in the world as we know it: it's been leaked that, for 2014, Sky is discarding its trusty SRMs for Stages power meters:

Stages on Sky

Stages measures power from only the left leg and doubles it, assuming the right leg contributes the same. This, not surprisingly, results in errors. I looked at Stages power data in this blog post.

For example, Stages power / 2 is a good match to Vector left-leg power, according to DC Rainmaker's data:

Stages/2 versus Vector left-leg power (DC Rainmaker data)

But compare it to the Vector right leg data, and things get uglier:

Stages/2 versus Vector right-leg power (DC Rainmaker data)

It typically takes two legs to pedal a bicycle (although some manage rather well with just one: Stages would be a great option for half of them).
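
The size of the error is easy to quantify under a simple assumption of a fixed left/right split: doubling the left leg converts any imbalance directly into a total-power error. A small sketch (the splits are illustrative numbers, not measurements):

  # Error from doubling left-leg power when the rider's split isn't 50/50.
  def stages_error_percent(left_fraction):
      # Stages estimate = 2 * left-leg power = 2 * left_fraction * true power
      return (2.0 * left_fraction - 1.0) * 100.0

  for lf in (0.48, 0.50, 0.52, 0.55):
      print("left leg %2.0f%% of total -> Stages reads %+.0f%% vs true power"
            % (lf * 100, stages_error_percent(lf)))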

I'm surprised the team of "marginal gains" would allow itself to get by with anything less than the best. In particular, since asymmetric chainrings are popular with the team, I'd be tempted, were I them, to use Power2Max power meters, which measure detailed cadence and can thus deal with eccentric chainrings. I looked at the error from eccentric chainrings in this post.
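
The eccentric-chainring issue comes down to how power is computed: a meter that assumes the crank turns at a constant rate within each revolution effectively reports (average torque) x (average angular velocity), while true power is the average of the instantaneous product, and with non-round rings the two differ because torque and crank speed vary together around the stroke. A toy simulation of that bias, with made-up torque and speed profiles rather than the numbers from my earlier post:

  import math

  # Toy model of one crank revolution, sampled finely.
  N = 3600
  true_power = 0.0
  mean_torque = 0.0
  mean_omega = 0.0
  for i in range(N):
      theta = 2 * math.pi * i / N
      # Torque peaks twice per revolution (once per leg); arbitrary shape.
      torque = 30.0 + 25.0 * math.cos(2 * theta)         # N*m
      # Non-round ring: crank angular velocity also varies twice per
      # revolution, here assumed in phase with torque (made-up 8% swing).
      omega = 9.0 * (1.0 + 0.08 * math.cos(2 * theta))   # rad/s
      true_power += torque * omega / N
      mean_torque += torque / N
      mean_omega += omega / N

  # What a meter reports if it assumes a constant crank rate:
  assumed_power = mean_torque * mean_omega
  print("true power: %.1f W" % true_power)
  print("constant-cadence assumption: %.1f W" % assumed_power)
  print("bias: %+.1f%%" % (100 * (assumed_power / true_power - 1)))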