Wednesday, April 30, 2014

algorithm for variable-lifetime maximal power curve derivation

The traditional problem with maximal power curves is they may contain rides so old that the power generated in those rides has little relevance to the present. So it's useful to truncate the data: to set an age limit for points used.

But there may be a considerable data set to process, and recalculating the maximal power curve from scratch every time the age limit changes makes little sense.

When calculating a curve from an entire data set, you start at the oldest activity and calculate the maximal powers for each duration for that activity alone. For each additional activity you compare the average power for each duration, first extending the existing maximal power curve to longer durations if needed (assuming constant work), then folding in points from the newer activity (extending that if needed).

But with an adjustable age limit, it gets more complicated. Instead I need to retain, for each duration, all points which are the maximum power for their age or less. These numbers can be saved, for example, as a linked list in order of age.

The durations need not be every duration. In my fitting procedure I use times which are separated from each other by no less than 2%. So 1 sec, 2 sec, 3 sec, etc, up to 50 seconds, but then 52 seconds, 54 seconds, 56 seconds, etc, up to 100 seconds, then 103, 106, etc. Approximately every 50 seconds the time step increases by 1 second.
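For reference, a short Python sketch of one way to generate such a duration grid; the step rule (floor of 2% of the duration, plus one second) is my reconstruction from the sequence above, not code from this project:

```python
def duration_grid(max_duration=3600):
    """Durations (seconds) separated by no less than ~2%: 1..50, 52, 54, ..., 100, 103, 106, ..."""
    durations = []
    t = 1
    while t <= max_duration:
        durations.append(t)
        t += int(0.02 * t) + 1  # step grows by one second roughly every 50 seconds
    return durations
```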

So consider I have a new time point for today's ride, where I got 300 watts for some duration. I have a linked list as follows:

  1. 270 watts, 5 days old
  2. 285 watts, 30 days old
  3. 310 watts, 90 days old
I then step down the linked list, discarding points, until I reach a power which is larger than the present one, and insert the new point as the new head of the list. So, for example, the new list in this example will be:
  1. 300 watts, 0 days old
  2. 310 watts, 90 days old

Had the power been at least 310 watts, the new point would be the only point in the list. On the other hand, had it been under 270 watts, it would have gone in at the head of the list, with all of the existing elements retained below it.
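Here's a minimal Python sketch of that insertion step, using a plain list of (power, age in days) pairs ordered newest-first in place of an actual linked list; the names are mine, not from any real implementation:

```python
def insert_point(points, new_power):
    """points: (power, age_days) pairs, newest first, with power and age strictly
    increasing down the list. In practice you'd store the ride date rather than
    an age, which goes stale; ages here just mirror the example above."""
    while points and points[0][0] <= new_power:
        points.pop(0)              # discard entries the new point dominates
    points.insert(0, (new_power, 0))
    return points

# Example from the text: 300 W today against the list above
points = [(270, 5), (285, 30), (310, 90)]
insert_point(points, 300)          # -> [(300, 0), (310, 90)]
```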

Given the linked list for each duration, if I want to construct a maximal power curve for any time cut-off, I go through the linked list for each duration and take the oldest (and therefore highest-power) point no older than the limit.
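Continuing the sketch above (same hypothetical data structure), the cut-off query might look like this:

```python
def max_power_curve(lists_by_duration, age_limit_days):
    """For each duration, take the highest-power (equivalently, oldest) point
    no older than the cut-off; durations with no young-enough point are omitted."""
    curve = {}
    for duration, points in lists_by_duration.items():
        eligible = [power for power, age in points if age <= age_limit_days]
        if eligible:
            curve[duration] = max(eligible)  # the last eligible entry, since power increases with age
    return curve
```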

Note the linked lists contain points of strictly increasing power and strictly increasing age. While it is pathologically feasible for such a list to include points from every activity, the far more likely scenario is that it contains only a small fraction of them. This is why a linked list, rather than a static array, makes sense here: the linked list has arbitrary length yet costs little when, as is typical, the lists are short.

Tuesday, April 29, 2014

Maximal Power curves: thoughts on deweighting, depreciating, or retiring old data and the "do no harm" principle

Back in January and February I had a series of posts here on fitting maximal power curves using a heuristic model and a weighted nonlinear least-squares fitting procedure to do an envelope fit, the curve passing through a series of "quality" power points rather than passing through the middle of the points. The argument was that we don't produce our best possible power for all durations, but rather typically for a few durations, and so to predict what our maximal power is for a given duration, we need to interpolate or extrapolate based on the durations for which our efforts represented the best we could do. The weighting scheme was to assign a high weight (for example, 10 thousand) to errors of points falling above the modeled curve, and a lower weight (1) to points falling below the curve. This caused the curve, after an appropriate number of iterations, to float to the top of the data point cloud, where it would essentially balance on the points consistent with the highest predicted powers.
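As a minimal sketch of just that weighting step (the names and the use of numpy are mine; the model evaluation and iteration loop are omitted):

```python
import numpy as np

def envelope_weights(observed_power, modeled_power, w_above=1e4, w_below=1.0):
    """Asymmetric weights for the envelope fit: residuals of points sitting above
    the modeled curve are penalized heavily, so repeated weighted least-squares
    iterations float the curve to the top of the point cloud."""
    return np.where(observed_power > modeled_power, w_above, w_below)
```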

The obvious issue with this is that best-efforts from long in the past may no longer represent present fitness.

The way Paul Mach @ Strava dealt with this in implementing a maximal power curve for that website was to put an adjustable expiration date on points, for example a default of 6 weeks, which is the time constant for determining chronic training stress in the Coggan formulation. This makes enormous sense, but it's a bit arbitrary. All of a sudden a power point goes from being 41.9 days old to being 42.1 days old and the entire maximal power curve undergoes a big jump. Obviously there's been no change in my fitness over those 4.8 hours. Yet the maximal power curve might have predicted I was capable of substantially harder efforts at the earlier time than the later time.

The obvious approach is to follow Coggan's example in the CP calculation of depreciating old points exponentially. An exponential weighting scheme with a 42-day time constant, for example, would be consistent with the CTS formula, which uses the same weighting.
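In other words, something like the following, where the 42-day constant is the only parameter (a sketch, not anyone's actual implementation):

```python
import math

def age_weight(age_days, tau_days=42.0):
    """Exponential deweighting with a 42-day time constant: a point 42 days old
    counts exp(-1), roughly 37%, as much as a fresh one."""
    return math.exp(-age_days / tau_days)
```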

But the devil's in the details.

One goal in the fitting of maximal power curves is the "do no harm" principle. That is, if I add additional activities to an existing data set, the resulting maximal power curve should never be unambiguously lower (it may be lower at certain points and higher at other points, however: for example, in the CP model, a strong short-duration effort may increase AWC but decrease CP). This is the justification for an envelope fit. If I go out for an easy recovery spin, with low power numbers, that doesn't imply that my maximal power is lower, it simply fails to prove that it's higher.

Suppose I have a recent ride with some high power numbers for a given duration, contributing to my maximal power curve. Then I add an old ride with slightly higher numbers at the same duration. If these old data replace the newer by virtue of being higher, but are then assigned a low weight due to being old, the result may actually be a lower maximal power curve, due to the recently uploaded old data having little influence. So if you are going to deweight old activities, you need to fit to more than just the highest-value points for each duration: you need to fit to all values for each duration, weighting each by an age term.

But now my easy recovery spin, for which under the existing system all of the points would be ignored, now has finite influence on the maximal power curve, dragging it down some amount. The weighting used in the envelope fit will reduce the influence, but it will still be there.

Another example: if no activities are registered for a long interval, the existing points all become deweighted, but since they deweight together, the maximal power curve stays unchanged. Then a new activity is added, for example the same easy ride as before, and all of a sudden the maximal power curve would be dramatically reduced. The rider has lowered his maximal power curve just by riding easy. This may be an undesired result.

An alternate possibility is to depreciate powers. With this approach, old activities contribute at full weight, but at lower power. Exponential is the simplest approach, because the entire curve can be depreciated before a new activity is added (the amount to incrementally reduce a given value is proportional to the value, independent of age). However, this is rather unsatisfying, as even activities just a few days old are significantly attenuated. Other depreciation rates could be considered, for example cosine-squared, which stays flat and then more rapidly attenuates to zero, although in a fashion smoother than the Strava approach of simply discarding activities.
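A sketch of the two depreciation options, for concreteness; the cosine-squared roll-off span is an assumed parameter, not something specified here:

```python
import math

def depreciated_power(power, age_days, tau_days=42.0, scheme="exponential"):
    """Depreciate the power value itself rather than its weight. The exponential
    form can be applied incrementally, day by day; the cosine-squared form stays
    near full value at first, then rolls off smoothly to zero."""
    if scheme == "exponential":
        return power * math.exp(-age_days / tau_days)
    if scheme == "cos2":
        rolloff_days = 4 * tau_days   # assumed roll-off span, for illustration only
        if age_days >= rolloff_days:
            return 0.0
        return power * math.cos(0.5 * math.pi * age_days / rolloff_days) ** 2
    raise ValueError(scheme)
```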

But basically I'm not seeing that any of these techniques are worth the complexity. The Strava approach of using a fixed time window for activities is transparent, simple, and computationally efficient. That's probably the best approach.

Monday, April 28, 2014

Dan Martin's crash in Liege-Bastogne-Liege

Yesterday, full of self-loathing, I sat addicted to my laptop watching the final 45 km of Liege-Bastogne-Liege. I can't help it. I'm addicted. But the spring classics are done now, right? I'm free, right? I'll be able to resist the daily lure of the Giro, right? Sigh.

But it was an exciting finish. Caruso and Pozzovivo got a gap which looked like it might just hold... then Daniel Martin of Garmin-Sharp, last year's winner, bridged up, passing the weaker Pozzovivo and just about reaching Caruso. One more corner, then 300 meters to the repeat victory...

But amazingly, he crashed in the corner. Jonathan Vaughters, his manager, reacted:

Here's the video on YouTube.

As reported by CyclingNews, Daniel's post-race comments were:

“It’s one thing to make a mistake or know what you’ve done but we figure that there’s a patch of oil or something. I think I had tears in my eyes before I even hit the floor. There aren’t really words for it. To race for seven hours and for that to happen on the last corner…. it’s poetry.”

But was it oil on the road? In the video, there's an audible "click" right as he goes down, his inside Garmin Vector pedal down. Here's the frame from the video where everything went wrong:

image

His inside leg is clearly fully extended, and I don't see any indication either his front or rear wheel is out of line. It looks as if he simply tried to pedal through a corner, leaned over too far, and hit his pedal on the road.

Of course, early on the Vector was developed for Speedplay, but was later switched to Exustar due to the proprietary nature of the Speedplay design. Speedplay claims it has superior cornering clearance. Indeed, with Speedplay, the shoe will hit before the pedal. The Exustar body used on the Garmin Vector is much larger, and if you were to test without shoes, you'd conclude clipping an Exustar is much more likely than a Speedplay. But Speedplay has a lower stack height with 4-hole shoes, and thus the foot sits lower. So it's unclear if the maximum lean angle with Speedplays is actually greater. I could test, but I don't want to remove the Garmin Vectors from my bike right now, due to calibration issues. Speedplays can also be set up with short spindles, moving the shoes inward, which also contributes to lean angle.

The other contributing factors to cornering clearance are bottom bracket drop, crank length, crank "Q-factor", and tire radius. The Cervelo R5 has 68 mm of drop. This is not exceptional. Dan Martin's bike is described here (from last year, but also an R5). The crank arms are photographed, but I can't tell the length. He has 25 mm tires, which help versus 23 mm tires.

So nothing here indicates he has a compromised cornering lean angle. If the pedal strike was the main thing, you wonder if the pedal choice made the difference between crashing and finishing on the podium.

For more info on the race, I recommend Inner Ring's analysis. It's probably the best site right now for pre-race and post-race reading.

Sunday, April 27, 2014

testing speed limit algorithm with real data

In a previous post I proposed an algorithm for automated detection of speed limit violations from GPS data. Then in a later post I tested this on simulated data. Here I apply it to data which was collected by someone else during a short drive in San Francisco.

I've proposed a general 40 kph speed limit within the city (also previously here), except on the interstate highways. This car trip didn't include any interstate highways, so it will be interesting to see how it would do with such a restriction.

Here's the speed versus time. As you can see, it was a short trip, only around 8 minutes:

image

It's mostly slow going, except for one stretch on Bayshore Boulevard, which is faster. That portion is generally 50 to 65 kph. Sure enough, this triggers the algorithm as speeding for a 40 kph limit. I assign a fine in dollars equal to the excess distance ahead of a "pace car" beyond the 50 meters allowed by the algorithm, at a ratio of one dollar per 100 meters. So if the car in question pulls 150 meters ahead of the "pace car" during the trip, that's a $1 fine.
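For concreteness, here's a hedged Python sketch of how such a fine might be computed from a GPS trace. The details (the pace car never passing the real car, the fine accruing on the distance the pace car gets dragged forward beyond the 50-meter allowance) are my reading of the algorithm from the earlier posts, not a definitive implementation:

```python
def speeding_fine(samples, speed_limit_mps, allowance_m=50.0, dollars_per_m=0.01):
    """samples: (dt_seconds, speed_mps) pairs from the GPS trace.
    A virtual pace car travels at the speed limit but never passes the real car;
    whenever the real car gets more than allowance_m ahead, the pace car is
    dragged forward and the dragged distance accrues the fine ($1 per 100 m)."""
    car_pos = pace_pos = dragged = 0.0
    for dt, v in samples:
        car_pos += v * dt
        pace_pos = min(pace_pos + speed_limit_mps * dt, car_pos)  # never pass the car
        gap = car_pos - pace_pos
        if gap > allowance_m:
            dragged += gap - allowance_m
            pace_pos = car_pos - allowance_m
    return dragged * dollars_per_m
```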

Here's how the fine varies with speed limit:

image

At a 40 kph speed limit, the fine would be $1.74. It goes up substantially if the speed limit is lower. By 49 kph, it drops to zero. Note the car actually got up to 63.2 kph, but not for long enough to trigger a speed limit violation for limits between 49 and 63 kph.

At present, a speeding fine is viewed as a big deal. You get pulled over, mailed a citation, given a big fine with additional fees, and possibly waste time in court defending yourself. With this system, there would be none of that. You'd get notified of a fine, and you (the registered owner of the car) would pay it within 30 days or whatever. The fine could be big or small. But acceptance of the responsibility to pay assessed fines would be part of receiving the privilege of driving a car. If you don't pay the fines accrued for your car(s), your driving privileges are suspended until you do.

This was just a short trip, so fines tend to be small. But the speed was quite modest over the majority of the drive. The goal here is to eliminate the uncertainty: if you speed beyond the threshold, you'll pay. Such a system would substantially slow the prevailing traffic speed, both increasing safety and reducing congestion. It would make getting around, by foot, bike, or even car, safer and easier.

I can't simulate the advantages of reduced speed on traffic congestion here, but I can calculate, at the same level of congestion, how much longer the trip would have taken with a 40 kph speed limit cap. That's 24.8 seconds, taking the trip time from 8:36 to 9:00.8. In the spectrum of factors which contribute to delays in getting to a destination, that's a very small number, and it wouldn't take much improvement in traffic smoothness to take it to zero.

Saturday, April 26, 2014

San Francisco anti-smoking law and sidewalks

Waiting in line last night on the sidewalk for the Bicycle Film Festival, people tightly packed along the building waiting for the doors to open, a chatty fellow behind me lit up a cigarette.

Cigarette smoke is something I'm rarely exposed to for more than a few seconds, and despite the steady wind, the smell of the fumes was physically sickening, not to mention a tangible health risk. In fact, just typing this the next morning reminds me of the queasy feeling in my stomach as I tried my best to maximize my distance from him without losing my place in line.

What to do? Politely ask him, as a favor, to stop? Inform him that such behavior isn't acceptable (an option that probably doesn't work so well, in my experience)? I probably should have done the former. But instead I waited it out, resolute that if he lit another, I'd speak up. Fortunately one was enough for him.

San Francisco likes to vigorously pat itself on the back for being anti-smoking, and indeed in indoor areas smoking is highly regulated. And in outdoor parks and outdoor eating areas, smoking is generally restricted, although the compliance with the park part of that can be quite poor. But the city remains under the myth that on crowded public sidewalks, second-hand smoke is magically swept away by the zephyr winds.

On sidewalks, waiting for a traffic signal, waiting in line, or even just walking along, smoking is extremely disrespectful to those around you. Sure, almost a century of film has, with the financial influence of tobacco companies, promoted the notion that it is a strictly personal, largely positive, behavior. But there I was, feeling sick to my stomach, uncomfortable creating a scene with a guy whom I didn't know and whose stability I couldn't trust. Sure, I could have said something, as I noted, but this is California, where the direct approach must be handled with more care than in the Northeast, where I grew up.

The smoking ordinance is here. The only mention of sidewalks, as you can see, is if the sidewalk is within 15 feet of a building, unless it is at a curb, in which case it's okay. Presumably this is to allow pedestrian smokers to continue on their way, unimpeded by the law, by walking near curbs.

But this prioritizes the smoker's perceived needs over the health and well-being of others. Obviously smoking is a voluntary activity, done for self-destructive purpose to the detriment of others nearby. There is simply no reason for the city to preserve a "right" to practice it in crowded public areas, including sidewalks whether near curbs or not.

I experienced this again, the day before, as I went for an evening run and had to pass through the fumigated sphere surrounding each of a substantial number of pedestrian smokers. I'm well practiced in this: observe the characteristic hand-position, spot the white butt, inhale deeply, then 5 full strides on a slow exhale, testing the air tentatively once the breath is gone.

I've written Supervisor Malia Cohen and asked her to support legislation extending the anti-smoking ordinance to ban smoking on public sidewalks. Rationally, it's consistent with the restriction on smoking in parks. And it will make the city a more pleasant, healthier place to be for the majority who don't share this deadly addiction. I got no response. This position is, unfortunately, still viewed as extreme in a city where a surprisingly large fraction of residents across the age and economic spectrum fall victim to big tobacco's lure.

Friday, April 25, 2014

letter to Caltrain on bike capacity with electrification

electrified Caltrain rendering

With the Caltrain electrification process slowly, oh so slowly, moving forward, the organization is making important decisions about infrastructure investment which will affect capacity for decades. It's much easier and cheaper to do things right the first time rather than try to remedy poor decisions later. For example, when the Gallery sets were purchased in the 1980's, the cars were provided with only a single boarding site per train car, creating a choke point for boardings and deboardings which slows the train at every one of the many stops along the line. And when the Bombardier sets were introduced in the 2000's, on board bike capacity available with that car design was substantially restricted, which given the strong demand for bikes on board has made these newer, nicer cars unsuited for the express trains which handle the peak load during commute hours.

Now Caltrain is moving ahead with electrification, and it's important lessons be learned from the past. Caltrain presently, and for the foreseeable future, will have a large fraction of its customers who need to ride a bicycle at both ends of their commute. These riders tend to be Caltrain's most loyal, because many don't own cars and thus rely on the bicycle to provide transportation flexibility at both ends of the Caltrain leg. Any hope that the fraction of passengers bringing bikes on board can be ramped down as total train capacity is increased is delusional. Caltrain needs to provide infrastructure to meet the demand patterns of the region.

With this in mind, I wrote the following letter to electrification@caltrain.org.


As Caltrain seeks to increase capacity to meet growing demand, it is critical that the bikes on board capacity be sustained at least to the present fraction of total capacity. It is not enough to simply continue with the present total bike capacity.

The reason is simple: Caltrain is not going to continue to grow if passengers feel the need to drive to the station. At-station parking is capacity-limited already, including corporate shuttle parking. MUNI and VTA buses are unreliable, at the mercy of local road congestion in addition to driver availability, deterring a large fraction of potential customers from using that, also limited-capacity option. Even "kiss and ride" creates a curbside congestion which cannot be scaled (as is evident at any suburban elementary school twice per day). And Caltrain simply is not going to meet demand relying on pedestrian access.

Indeed, cyclists have been responsible for a disproportionate fraction of Caltrain's growth. To continue that growth sustainably and robustly, it is thus key that the demand for bikes on board be recognized, and that the infrastructure be put in place to maintain bike capacity on a per-total-capacity basis, not simply per-day. Given the poor surrounding public transit grid, bicycles on board are the only way to meet the needs of a broad range of commuters and other transit users.

Of course, bikes on board needs to be supported with additional support for biking to and from stations. Secure parking, not simply racks, is necessary to allow riders to leave bikes at stations with confidence, without fear of them being stripped of components during the day. Dedicated lockers, requiring cumbersome reservation processes, are inefficient as they restrict riders to the choice of a single station (I often board from MtView, California, or Palo Alto on the peninsula, 22nd or 4th in San Francisco, depending on my needs), and additionally are unused far too much of the time. The best approach is the bike station model which works so well in San Francisco and which previously worked in Palo Alto.

Additionally bike share needs to be expanded to provide a useful density of nodes so riders can get to diverse locations at both ends of their ride. Unfortunately, given the sprawling nature of industrial and domestic development on the peninsula, bike share will never meet the needs of a majority of potential cycling commuters there. This is especially true given the present pattern of not installing bike share stations at major corporate-specific sites.

So bikes on board will remain a critical component of Caltrain's continued health and growth. Cyclists going to and from trains are not congestion or traffic limited. They take no ultra-valuable parking spaces. And they tend to be loyal and committed to Caltrain for their transportation. Thus Caltrain should continue to invest in what has proven to be such a beneficial investment these past decades: continued and healthy on-board capacity. Relying on people driving to the station, walking to the station, or being delivered to stations is simply not going to work.

Thursday, April 24, 2014

First Noon Ride of the year: Old La Honda

On Wednesday I did my first Old La Honda since last November.

This was a bit of an immersion lesson in cycling again. I'd been so focused on running in preparation for my trail race 10 days before, I'd only started doing any "training rides" the previous Friday. On that ride, doing my favorite combination of Montebello-Peacock Court, my legs felt like sludge and I pushed my old Trek 1500 to the top of the climb in over 7 minutes, an unimpressive time. I felt fat and slow.

But things got a bit better from there. On Saturday I went to the Headlands for four Hawk Hill repeats. Sunday, mountain biking plans fell through and instead I did a solid 20 km training run. Monday was just basic commuting, but then Tuesday was my first SF2G in months, and my first Skyline route since last year. That went well, but instead of energizing me for the day, it left me depleted. Still, I felt well enough by Wednesday to indulge my urge to do the Noon Ride for the first time since 13 Nov last year, the Wednesday version of which climbs Old La Honda. So at 11:30 I set off from work on my steel Ritchey Breakaway with its Powertap rear wheel.

I was ready to face the sobering reality that my power was poor, my mass was high, and I was going to be lucky to break 20 minutes, maybe 21. But I felt okay riding to the start, certainly not frisky, but fine.

It was a low-key turnout: mostly long-timers. There was one guy from Metromint who looked fast.

Around the loop I was always near the front, and took several long pulls, including one on the run-in to the base of the climb. This normally isn't advised for an optimal climb, but the pace was modest, and I didn't feel as if I was digging myself into too deep a hole.

On the climb, the Metromint rider quickly set a tempo I knew I couldn't hold, and I downshifted from the 36/19 I'd been riding into my 36/21, where I'd stay for essentially the rest of the climb. A rider was on my wheel: I could hear him breathing, but I never looked back. The others were all further back. My legs were not responding well to the effort, and I felt myself struggling to hold the power on the high side of the 200's.

Despite this struggle, eventually the sound of the rider behind me disappeared, and a glance behind showed that he'd dropped. So it was just me, chasing the Metromint guy, and he was gone for good.

Only when I approached the stop sign marking the finish did I indulge in a glance at my time. 19:10 I saw as I approached the intersection. Wow. I didn't expect that.... not bad at all! So this is promising for my further preparation for some upcoming hill climbs, both on the slopes of Mount Diablo.

It's interesting to compare this effort with the one from 13 Nov 2013. That one marked my fitness test before participating in last year's Low-Key Hillclimbs: an injury in June took me out of commission for a long time, and fitness came slowly when I started riding again. But that ride showed me I was where I needed to be to at least have fun in the series events.

First, a running average of power. This can be plotted versus time or distance. With Old La Honda, which has variations in grade, distance is useful because it allows comparison for the same point in the course.

image

It's immediately obvious I went out much harder on that November climb. My power approached 300 watts, but it was straight downhill from there. I managed a little kick at the end which restored some credibility, bringing my average to 263.24 watts for the 19:03 climb.

This time I was still optimistic, but my power peaked later and lower, at only 280 watts. From there it faded, until a final push at the end brought it up to 268.0 watts. At peak fitness I'd like to be 280, 4.5% higher, but obviously my "preparation" for this climb was far from optimal: fatigue + lack of specific training.

Of note is that I was slower than in November despite the higher power. There are two explanations for this. One is a second water bottle plus a tool bag. On the November climb I left my tool bag home, bringing only a spare tube and patch kit. But also I was around 1 kg leaner then.

Next, I look at cadence. I noticed at the Headlands I was riding very low cadences: in the 60's. My legs feel like wood from all the running, no zippiness, so the low cadence isn't a surprise. I did a bit better on the OLH climb, but my cadence is still well below what it was.

image

From cadence and speed, I can get the gear ratio. This plot compares cadence and speed, along with lines calculated for various gear ratios available on the Ritchey. This ride I spent most of my time in the 36/21, while on the November ride I spent a lot of time in the 36/23. I need to work to get my spin back, but that will to some degree happen automatically as my legs feel better.
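The conversion itself is simple; here's a small sketch, with an assumed 2.10 m wheel rollout (roughly a 700c tire):

```python
def gear_ratio(speed_kph, cadence_rpm, wheel_circumference_m=2.10):
    """Infer the gear ratio (chainring teeth / cog teeth) from speed and cadence.
    For example, roughly 17 kph at 79 rpm comes out near 36/21 = 1.71."""
    wheel_rpm = speed_kph * 1000.0 / 60.0 / wheel_circumference_m
    return wheel_rpm / cadence_rpm
```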

image

So overall, a good effort. Maybe I'll try again next week, schedule permitting.

Monday, April 21, 2014

Training metrics and post-Woodside Ramble recovery

Today, as I'm typing this, is the Boston Marathon. If I'd been 40 seconds faster @ CIM 2012, I'd have had a qualifying time and maybe I'd be running the streets of Boston instead. But rather than running Boston, I'm a week past meeting a New Year's resolution: to run my first ultra, the Woodside Ramble 50 km, last Sunday.

I'd had a training plan for the race, and it crashed and burned when, 3.5 weeks out, oral surgery left me fatigued and prone to allergies. My running came to a virtual halt for 12 days, leaving just enough time for a brief test run and 3 decent volume days before I had to taper for the final week before the big day (indulging in a 17-km test run 3 days prior). This completely blew my plan to do big volume up to 2 weeks before, then taper in: maintaining CTS the second-to-last week, then letting it slip slightly that last week to come in with an optimized combination of freshness and fitness.

CTS is a metric of chronic training stress, ATS of acute training stress, and they are typically calculated using 42-day and 7-day exponentially weighted averages of daily training stress. For daily stress I simply use km run or hiked (my hikes all being early in the training cycle).
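For reference, a minimal sketch of that sort of exponentially weighted average (the smoothing-constant convention varies; 1 − exp(−1/τ) is used here, and 1/τ is also common):

```python
import math

def update_training_stress(previous, daily_km, tau_days):
    """One day's update of an exponentially weighted average of daily training
    stress (km run here), as used for CTS (tau = 42 days) and ATS (tau = 7 days)."""
    alpha = 1.0 - math.exp(-1.0 / tau_days)
    return previous + alpha * (daily_km - previous)

# cts = ats = 0.0
# for km in daily_km_log:                # hypothetical list of daily distances
#     cts = update_training_stress(cts, km, 42.0)
#     ats = update_training_stress(ats, km, 7.0)
```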

Here's the plot of my runs to date:

image

Races are indicated with the orange bars, along with the ratio of CTS (/day) to the race distance. The green points are CTS, the red points ATS, and the dashed lines are the trend lines I was targeting.

The CTS curve, which is correlated with fitness, came into the 50 km race at 6.6 km/day, only 10% higher than it had been going into my 30 km 7 weeks before. That hadn't been the plan. ATS went in at 7.2 km per day. This was a lot lower than it had been going into the 30 km race, 8.5 km/day, but it had been my plan to go into the 50 km race with ATS below CTS. The long break from volume due to the oral surgery compromised both my fitness and freshness.

Despite this total failure of metrology, the run went very well for me. If I'd been able to stick to plan, maybe I'd have made up the 7 minutes I'd needed to medal in my age group, which had been a stretch goal. But I met all of my primary goals which were:

  1. Run the whole distance except for climbs, which I would power walk. No death march!
  2. Race the finish.
  3. No acute pain.
  4. Finish in under 5:30.

Along the way, I managed to negative split the course. Not bad for first try!

Notably, recovering well was not among my goals. Recovery simply wasn't a concern during the race. I was going to leave it all on the trail.

Despite this, I've recovered remarkably well. The next day, I was forced to run 500 meters to catch a train from work (not my most productive day at work, I admit: I was pretty wasted). Then Tue I rode my bike to the train (as opposed to Monday's walk/run). This committed me to climbing the 24% grade of 20th Street to get home, which I did in my 38/24 on my old pre-110mm BCD Trek 1500 (I avoid the 38/28 when possible). Then Wed I was ready to start actually running again: 5.8 km in multiple runs on Wed, then a 5.3 km evening run on Thu, then 5.8 km on Friday.

But I have some cycling goals in my near future, so I can't obsess about this chart any more. On 11 May, in particular, is a race up Mount Diablo, while the NCNCA USA Cycling championship hill climb time trial is in June. So I need to focus on converting some of that running fitness into bike fitness.

So Friday also included my first day of bike training in around 2 months. I took the old-school Trek 1500 up the Montebello-Peacock Court climb which I can access from work. I failed to break 7 minutes, Stravaing the segment in 7:02, but my legs felt like sludge, and my work Frog cleats were clicking in my pedals. So I took it. That I was able to get my run in that evening was good.

Saturday, according to my metrics, was a rest day, but physically it was not: I did 4 repeats of Hawk Hill in the Marin Headlands, then followed that with Fillmore Street, the most challenging climb in the old San Francisco Grand Prix professional one-day "classic". That was a good day.

Then Sunday, plans to go mountain biking in Oakland with Cara fell through, so I headed out for a run, and the gorgeous, uncharacteristically warm San Francisco weather induced me to run further than I'd planned: 20.1 km, again including Fillmore (this time on foot). Afterwards I did some more steep climbs on my bike, just part of riding places I needed to go.

My goal for getting my bike fitness in gear:

  1. Do plenty of climbing. Short hill intervals, a few Noon Rides up Old La Honda, at least one visit to Mount Diablo before that May race.
  2. SF2G at least once per week for some base. But no long weekend rides: focus on intensity, not distance.
  3. The Memorial Day Bike Tour. I signed up for this months ago: 4 days from Campbell to Santa Barbara. This is a lot of fun and the fact it provides a nice dose of fitness is a happy side-effect.
  4. Keep running. I don't want to completely squander my running fitness, as I've done too often in the past. I like focusing on one thing at a time, so this will become a substantial challenge as my cycling improves. But I want to do more trail races this year, and that won't happen if I hang up my running shoes until June. This includes running after each day of the Memorial Day tour. Obviously I need to scale back my run schedule to fit in the bike work. But it would be fun to do another 50 km this year. I think that at least is reasonable. A 50 miler might be a stretch.

Hopefully, no more tooth problems!

Wednesday, April 16, 2014

Garmin Forerunner 610: 1 second mode isn't for ultras

In the Woodside Ramble 50 km my Forerunner 610 was powered up and recording for almost exactly 5 hours before it powered down, 4:47 into the race and 32 minutes before my finish. This was, needless to say, a downer.

The watch is rated for 8 hours, not 5. So what went wrong? Battery fatigue? False marketing? Accelerated draw due to a challenging GPS environment? A personal curse that I shall always get sucky batteries?

Well, this last option has proven likely with various cell phone and laptop batteries in my personal history. But in this case, I stumbled across the simpler explanation when reviewing DCRainmaker's "in depth review" of the Forerunner 610.

One-second mode. I had the unit on one-second mode to give better time-resolution on Strava segments. I looked at the Smart Sampling mode for the Forerunner 610 here. Sampling times blew out to as long as 7 seconds in smart sampling mode (a few longer, but those intervals may be due to signal loss), while 1-second sampling is quite simply 1-second sampling. I looked at Strava segment timing reliability here.

DCRainmaker has an outstanding GPS product comparison page. The Forerunner 610 is on the low end of the battery life spectrum with 8 hours nominal. The Forerunner 220 and Forerunner 620, for example, are both a remarkable 30 grams lighter (DCRainmaker weighed his 620 @ 44 grams, and the 220 came in at 41 grams). Note newer 610s replaced the metal backplate with plastic, so are around 10 grams lighter than mine, at some loss in the advantages of a metal backplate. But both the 220 and 620 are rated to 10 hours rather than 8. Ten hours should certainly be enough if I were to do a 50-miler; 8 hours, unlikely.

But it's a mistake to say "my longest race is x hours, so this is how much battery I want." It's really nice to have a buffer. You're not going to risk turning on your GPS right at the start line: too much going on; too easy to forget. And you may well forget to shut it off when unplugging it before heading off to your race (the Forerunner tends to power up, in non-GPS mode, when it comes off the charger, and this mode alone results in significant power drain).

This is why seasoned ultra runners I've spoken with prefer the XT series of triathlon watches. These are heavier and bulkier, but have batteries designed to last the ironman distance. I'm a weight weenie so don't want to be carrying that sort of bulk around on my wrist. But I still have the option of simply carrying my Edge 500 cycling computer in my pocket. I'd lose some GPS accuracy that way (the 610 does better) but 14-hour battery life wouldn't be a problem (I used it most recently for that long in my 13:47 Devil Mountain Double Century last year).

Another ultra option is the Fenix series, the most recent the Fenix2. Here's the DCRainmaker review. It comes in at 85 grams. Battery life has its cost.

For now, I'll go back to smart sampling mode on the Garmin Forerunner. There's some loss in Strava segment timing (at least until they go to interpolated segment timing, such as I use for timing Low-Key Hillclimb "self-ride" events), but the advantage in potentially improved battery life is worth it.

added: There are different opinions about whether 1-second mode should affect battery drain. Indeed, my initial feeling was that it shouldn't. I suppose I'll need to experiment.

added 2: According to Garmin Support (see this post on Garmin Forum), Smart Recording has no benefit to battery life. So my unit is just cooked.

Tuesday, April 15, 2014

Woodside Ramble 50 km report

After months of carefully tracking my training metrics to ramp my volume up to where I thought it needed to be for the Woodside Ramble 50 km race, my first race over marathon distance, my training had been diverted off-course by 12 days of fatigue after a relatively innocuous tooth extraction. This had left me just a week and a half until the race. So after a test run on a Wednesday, I did a solid 3 days on Thu-Fri-Sat, the last of these a 31.5 km run through the Marin Headlands. This was a very important test for me, as it included 1300 meters of descending (and, less importantly, climbing), and my legs survived running all of it. This was 90% of the descending I'd need to do in the 50k, where I'd have the additional advantage of fresher legs.

But that was it: my last chance for training. Instead of a controlled taper over the last two weeks, I had one week to get my legs into something resembling race shape. I did a series of short runs until Thursday, when, feeling fat and out of shape, I couldn't resist doing 17 km on my usual lunch run on Steven's Creek trail in MtView. I figured it was only 1/3 the race distance, so I should be able to handle it fine. Then Friday I did 3 separate short runs totaling around 10 km. Then it was Sunday. Nothing could be done about it. The 50 k had arrived.

The course, to a very crude approximation, is: start in Huddart Park, 5 miles up to Skyline Drive, 5 miles rolling along Skyline Trail, 5 miles down in Wunderlich Park. Then basically repeat (with some different trails) in the opposite direction. So the race is divided into 6 natural segments. I planned to take them one by one.

image
Cara Coburn photo

I lined up around the 3rd row from the start in the mixed 35 km + 50 km field. At go, there was a sprint across a meadow into singletrack, which begins almost immediately with a "bottleneck", a narrow bridge. I had no interest in sprinting from the start, so I lost places, but that was fine. Onto the singletrack, I had to wait a few seconds for congestion at the bridge, then found myself behind a strong-looking woman, who I think was Francesca Conte. My simplification of the course neglected what is a substantial initial descent, and since there's not much room to pass here, there's considerable social pressure to stick to the runner ahead. I followed Francesca reasonably well, though, and when we took the left at the bottom onto a fire road, she was right in front of me.

After a bit we re-entered single track, on the Chapparel trail, and the climbing began, and I took every opportunity to do a brisk walk rather than run. I tried to imagine I was running on ice, and I didn't want to break the surface. Despite the walk, I had no problem keeping up. Francesca faded back and I found myself 2nd in a line of around 7 runners. The guy in front of me eventually faded, and I passed him, gapping the others. The climbing was never steep, and wasn't even continuous, but I persisted in my fast walk and was keeping these guys behind me.

When we got to Richard's Trail, which is wider, the others caught me, then passed. I guess this is where it was "game on" for them. I didn't feel as if I'd faded much, so I looked forward to them being out of sight so I'd lose the temptation of following them. I didn't know what distance they were running (the background color on the numbers tells the story, but on runners' chests, the numbers aren't easily observed), and in any case I could only do what I could do, and I was still on the first of my six segments for the race.

Soon enough, they were gone, and I was on my own again. I rehearsed what I wanted for the first aid station: refill my bottle, get a drink, top it off, get some food for my pocket, gone. And that's pretty much how it went. I'd started the race with some Enduralytes and some "extra-salt" Clif Bloks in my pocket, a single Low-Key water bottle in my right hand. This worked out fairly well. The bottle was empty before I reached the rest stop, but by drinking a bit at the stop, I could make up the deficit. I saw other runners with elaborate, heavy hydration packs, but in my view every gram counts, and if in the end it meant I needed to spend an extra 20 seconds at each rest stop, I thought it was worth it.

One out of six done.

The volunteers @ the "Dunlap" aid station said 1.7 miles to the next stop, but I knew better. They were correct, actually, for the half-marathon distance, where there was a turn-around 0.85 miles away. But no half-marathon fun for me this day.

Instead, it was single-track Skyline Trail to the entrance of Wunderlich Park, across Bear Gulch Road. This turned out to be arguably the hardest part of the run, since I tend to think of the run as two big climbs and two big descents, so the Skyline segments get relegated to glue status.

Early here I was caught by a much bigger guy in a white shirt. I asked him what distance he was doing, fairly convinced it was the 35 km, which had a turn-around at the end of this segment. I was surprised when he said 50. "50 km is my distance," he said, "but occasionally I do 50 miles." I asked him about that distance. "If you can run a strong 50 km, 50 miles is no big deal," he responded. Flash back to the Lake Chabot 30 km, when someone told me "the way you're running now, 50 km should be no problem." These ultra guys, I think, are highly prone to underestimating the challenge of the distances they take for granted. Maybe it's part of the self-deception process, to allow them to forget the pain and thus sign up for the next orgy of self-abuse.

I eventually found myself with a chatty group, which made the time go by quicker. The topic of discussion was, of course, running. One guy was describing how he'd done the New Year's Eve/New Year's Day 24-hour ultra at Crissy Field in San Francisco, the hardest run he's ever done, he said, because it had no variation in terrain. The woman was saying how she'd signed up for a 12-hour race and her goal there was "just 50 miles". The ultra crowd is pretty amazing: if I was ever tempted to feel smug over running 50 km, that temptation was wiped away here.

The next aid station was an important one, because following it was both the descent and the climb of Wunderlich. This was going to be a stress on my single-bottle approach. Despite this, I wasn't able to drink as much as I probably should have: I had around a 1/4 bottle. I topped it off then and set off onto what for me would be new trails: I'd never before hiked here let alone run.

So off I went. Remarkably quickly I saw Ryan Neely and Jacob Singleton running the opposite way. I couldn't believe it: Rickey Gates' course record was 4:02, and here it was just over 2 hours in and the leaders were most of the way done with the climbing. Ryan would go on to finish in 3:31:15, over 30 minutes faster than Gates' record. I was later told that Gates had treated the race as a "fun run", so the record had been far from his best effort. But that takes nothing away from Ryan's accomplishment.

I was taking it fairly easy on the descent because, quite simply, I was getting tired. This was not a good sign, I felt, with less than half the course covered so far. So I plodded along until Dylan Newell, in third, came by, also in the opposite direction. I felt slow, so very slow.

I was spared further misery when the descending route deviated from the climbing route, along the Bear Gulch Trail, which roughly parallels the private section of the heavily gated Bear Gulch Road. I felt a little thrill at seeing this road, which has long tempted me as a cyclist, it being the steepest, most challenging way up the eastern side of the ridge. But cycling wasn't my principal concern.

My focus was to just reach the 25 km point, half-way, per my Garmin Forerunner 610 watch. Before this occurred, however, the climbing began again, and that diverted my attention away from the distance. On the climb, I focused more on the moment, just sticking to my power-walk approach which has always served me well.

My time to the beginning of that climbing was around 2:45 according to my watch. My goal had been 5:15 to 5:30, so this would require an even split on the second half, which would in turn require everything to go well from here on. It still seemed hard to conceive I'd be able to run the full 50 km, since the first 25 km, with all the climbing, had been so challenging. But I thought back to the hard bike rides I've done: the Death Ride, Climb to Kaiser, the Terrible Two, every single sanctioned race. In every one, at some point it seems like I'm not going to be up to the challenge. And sometimes I'm not, but quite often I exceed expectations. Simple fatigue wasn't going to stop me here, either. What would stop me would be the stabbing pains I'd felt in each of my previous two marathons. I simply needed to hope that my training was sufficient to keep those pains at bay.

As I climbed I heard someone approach from behind. It was Yvon Wang, a Stanford student, who blew by with apparently little effort. So was she going fast, or had I slowed? I didn't think I was climbing more slowly, although obviously I was more fatigued than I'd been on the Huddart Park climb. She went on to finish second among the women, in under 5 hours.

Not long after I heard discussion ahead. It seemed Yvon was engaged in discussion with some people. Then I came upon the source: a line of four equestrians with horses. I moved to the extreme side of the trail to provide plenty of room.

"Please walk!" one of them sternly asked. Given the state of my legs, that was an easy sell. So I walked on by the line, doing my best to not appear threatening to the horse brain. The riders ignored me otherwise, one telling the others they should move the horse to the uphill, rather than downhill side of the trail when making room for hikers. This would be my only encounter with equestrians today, but I'd hear more about them later.

Eventually I reached the stop. I was starting to think more about "just finishing" at this point than shaving seconds, so I took just a little bit more time to drink and eat, the watermelon cubes being a most welcome option here, and I ate a few bite-sized boiled potatoes. Then I refilled my bottle, grabbed some salt capsules (my Enduralyte supply gone) and was off again.

Skyline Drive wasn't any shorter on the return than it had been on the way out, but once again company helped me pass the distance. Not long after leaving the stop, I was overtaken by Don Rodrigues, with another runner not too far behind. The one immediately behind me caught me, and not long after passed me on a downhill. I was a bit faster than him on the climbs, but since there was a lot more descending than climbing left, I made no effort to pass him on any of the short climbs on the rolling trail. Once, I attended to nature rather than be forced to slow behind him. I soon recaught him on the rest of that short climb, figuring he'd open a big gap on the descent.

But his earlier speed must have been post-aid-station enthusiasm, because I was able to keep a fairly close gap on the descent, then on the following climb I was right behind him.

We reached the token mud patch on the course. I'd walked this on the outbound leg, but this time I did closer to a trot. It wasn't a problem. He was slightly quicker through it than I was, though, but I closed the gap again.

He offered to let me pass, but I wasn't sure that made sense. Eventually, though, he slowed to a stop on a short upward slope. This forced the matter, and I passed him. The runner who had been following the two of us was by this point further behind. Endurance is a key part of the 50 km distance, and my goal of "keep running" was paying off here.

The rest of the rolling 5+ miles to the final stop I ran solo, but having the motivation of knowing someone was behind me helped spark my energy. I rehearsed what I'd do at the last rest stop. Key: a small amount of caffeinated soda. I don't consume caffeine in daily life, so a little goes a long, long way.

The stop, Scott Dunlop's driveway, appeared across Kings Mountain Road. The trail runs along the road for maybe 100 meters, then it's a quick jog across to the stop. As I approached, I heard the unmistakable sounds of the Gladiator soundtrack. Perfect! What was my pain compared to the pain of battling lions in hand-to-hand combat in the Colosseum?

There, I was super-pleased to see Dixie cups of soda were already poured. I took one, dumped it into my empty water bottle, added some water, and topped it off with sports drink. This was a mistake: cola + sports drink is a foul combination. But this wasn't a tasting event, it was a race, and I was entering the end game. Rather than pause to drink here, I was close enough to the finish I knew this one bottle would get me through. So I shoved some more tasty watermelon into my mouth, grabbed two salt tablets, and took off down the trail, sipping from my bottle.

I'm not sure if it was the music, the caffeine, or the knowledge I was entering the end-game, but for whatever reason, I felt great. I passed one runner, probably a 35 km-er, at a horse barrier, then was soon at Archery Fire Road, where the trail continues straight across for the return, deviating from the upward path. The first time I raced here this had confused me, but I remembered it well this time.

And so I was onto the main descent. This is really an amazing section of trail, switchbacking its way down the hill, crossing bridges, with a nice smooth surface to build speed between the corners. I was running at what I thought was a nice, safe pace, still fearful I'd get those stabbing pains which would put an end to the fun and games.

But I was being a bit too safe, as I was caught by Ben Bellamy, who was obviously descending faster. I'd been super-pleased that in my two trail races this year, unlike what was common in years past, I'd never been passed on a descent. It appeared my streak was to end here (I didn't count the descents which preceded this one, since pacing had been the dominant consideration to that point).

"You're okay" he said as he caught me. Such trust needs to be earned, so I cranked up the volume a bit, making surprisingly good time on the downward trail. Despite a reckless disregard for the distance so far in my legs, they responded fine, however. No pain.

Finally the trail leveled a bit and widened, and Ben passed. That was fine, I decided: now he can pace me. And I was keeping up with him fairly well. One small scare: I turned my ankle at one point, but recovered. That would have been terrible to come so far then DNF with a sprain. I focused more on form at this point, hoping to avoid the problem again. The descent continued.

Then we hit the first of two significant intermediate climbs on the "descent" from Skyline. First he slowed, then he stopped. This was it -- game on! I went by him and tried to dial up the intensity, knowing I needed to put as much of a gap as possible on him before the descending began again.

Suddenly, I was ripped free of my euphoria by a horrible sight. My Garmin Forerunner 610 went from "low battery", the first such message I noticed, to a blank screen. Noooooo! This wasn't my fault! I'd carefully recharged it before the race, then hadn't turned it on until within 20 minutes of the start. Strava or it didn't happen, and now I'd be without GPS the rest of the way, a social media DNF. Not only that, I'd be without distance, so would need to run without a firm estimate of the distance remaining. I tried to snap myself out of my self-pity. Just run, I decided. It's all I could have done anyway.

Inside Trail Running does an amazing job of marking the course. They use probably 3 times as many ribbons per unit distance as Wendell uses for Coastal Trail Runs. Not only are turns marked, which is a given, but additionally they mark along the segments between the turns, and put blue ribbons along obvious wrong turns. For a navigationally super-challenged individual like me, it's great. But on this descent I started to get nervous. Suddenly I realized I'd not seen one of the yellow ribbons I'd been following for a while. Maybe around this turn, I decided, only to find nothing. Then the same for the next turn. Then I came to what appeared to be a side trail and no blue there. I stopped, but then saw a runner approaching from behind, probably Ben again. Did he know the way, or was he just following me like a lemming? No time to think: I just had to keep going and hope for the best.

Several turns later, I came across a yellow ribbon. I was safe. Or was I? I may have accidentally re-entered an upward-only portion of the trail network. But I hadn't; I was fine, and this was later confirmed when I reached a turn, staying on Crystal Springs Trail instead of descending the Chapparel trail we'd taken up the hill, specifically marked as "Yellow: return".

Later, after the race, I heard that some irate equestrians may have removed ribbons from near the end of the course, and even turned a direction sign. Of course I don't know if this is true or not. But the equestrian population feels, I dare say, a certain ownership of the trails in the Woodside area, something to which they perhaps feel entitled given the huge per-capita wealth of residents, and they don't welcome the influx of outsiders running or, the horror, bicycling on them. Bicycling is banned on a large fraction of the single track in the area, but of course running is allowed. That doesn't stop the resentment, however.

But back to the race: this turn marked the start of the second of the two extended climbs on the "descent". Once again, I turned up the intensity, and this took care of any risk of being caught from behind. My legs had no problem with the climbing. When it was done, it was the final descent to the finish.

This arrived sooner than I'd expected. Suddenly, there I was, re-entering the meadow on which we'd begun, from the opposite side. I'd gotten lost here once while trying to outsprint another runner: the route to the finish winds around a playground, a picnic area, and a toilet building before the clean line to the finish is exposed. But this time I navigated fine. I crossed the finish in a sprint.

And so it was over.

The finish was great. I got my finisher's medal, an ultra-finisher pint glass, and a "technical" T-shirt, all very nice, although I was 7 minutes too slow to score top 3 in my age group. No problem: 17th overall, 5:19:15 chip time... all excellent by my standards. 4th in my age group was fine.

I hung out, had 3 cups of lentil soup, got a 10-minute massage from the woman with the table set up there (fantastic: she wasn't at all afraid of causing pain), then eventually found a very interesting ride to the Redwood City Caltrain for the northward trip home.

Monday, April 14, 2014

Paris-Roubaix statistics

Results for Paris-Roubaix, which was yesterday, are available on CyclingNews.

I was generally occupied getting to the Woodside Ramble for my 50 km race, so missed it, catching only the final results before my race began. My hot pick Taylor Phinney flatted out of contention on the Carrefour de l'Arbre with 15 km remaining, so, so much for prognostications. But I find it interesting to compare some team and nation stats that aren't reported by the organizers.

For each team and nation, I calculate the number of starters (8 for teams), finishers, points, and time, using a Perl script to parse the CyclingNews results, which are the most parseable on the web. For points, I use the placing (reported by CyclingNews) of each team's top 3 placers, and sum. For time, I take the sum of the times of the top 3 finishers relative to that of the winner, Niki Terpstra.
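The parsing was done with a Perl script; as an illustration of the aggregation step only, here is a hedged Python sketch, assuming the results have already been reduced to records with a place, a gap to the winner in seconds, and team/nation fields (all names here are hypothetical):

```python
from collections import defaultdict

def aggregate(results, key):
    """results: dicts with 'place', 'gap_s' (seconds behind the winner), and a
    grouping field ('team' or 'nation'). Points are the sum of the top-3
    placings; time is the sum of the top-3 gaps to the winner."""
    groups = defaultdict(list)
    for r in results:
        groups[r[key]].append(r)
    summary = {}
    for name, rows in groups.items():
        top3 = sorted(rows, key=lambda r: r["place"])[:3]
        summary[name] = {
            "finishers": len(rows),
            "points": sum(r["place"] for r in top3),
            "time": sum(r["gap_s"] for r in top3),
        }
    return summary
```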

Some interesting results: while UnitedHealthcare finished last in both points and time, they were tied for first with the most finishers, finishing all 8 of their starting riders. That's impressive for a Pro Continental team.

Not too surprisingly, the time rankings and placing rankings are very close. On the team rankings, Omega Pharma was first, followed by Team Sky, with Belkin in 4th, BMC in 5th, and AG2R La Mondiale a strong 6th.

On the national results, the most starters and finishers were both from Belgium, with France second. The Netherlands was 4th in the number of starters, after Italy, but remarkably finished all 15 of its riders to rank 3rd in number of finishers. The United States finished 6 of its 8 starting riders.

National standings have Belgium first, the remarkable Netherlands second (getting huge value from their 15 riders), and France and Great Britain sharing 3rd and 4th... France third if you go by time, with GB third if you go by placing. Bradley Wiggins' weak sprint in the chase group hurt GB's points total. The United States was 14th out of 15 in both standings, Taylor Phinney's puncture hurting there. The next two riders for the USA were Tyler Farrar (Garmin), who finished over 7 minutes down, then John Murphy of UnitedHealthcare, losing almost 19 minutes.


Teams

finishers


rank  finishers  points  time  team
1     8          29      87    Team Sky
1     8          48      132   Belkin-Pro Cycling Team
1     8          324     2863  UnitedHealthcare Professional Cycling Team
4     7          58      177   BMC Racing Team
4     7          64      177   AG2R La Mondiale
4     7          68      287   Wanty - Groupe Gobert
4     7          110     625   Garmin Sharp
4     7          111     872   Trek Factory Racing
9     6          16      40    Omega Pharma - Quick-Step Cycling Team
9     6          82      472   Team Giant-Shimano
9     6          95      494   Orica GreenEdge
9     6          84      543   Cannondale
9     6          160     1027  Tinkoff-Saxo
9     6          185     1278  Team NetApp - Endura
9     6          216     1447  Topsport Vlaanderen - Baloise
9     6          217     1447  Team Katusha
17    5          103     648   FDJ.fr
17    5          137     1005  Cofidis Solutions Credits
17    5          197     1423  IAM Cycling
17    5          217     1427  Astana Pro Team
17    5          242     1690  Lotto Belisol
22    4          195     1628  Bretagne - Seche Environnement

time (seconds)


rank  finishers  points  time  team
1     6          16      40    Omega Pharma - Quick-Step Cycling Team
2     8          29      87    Team Sky
3     8          48      132   Belkin-Pro Cycling Team
4     7          58      177   BMC Racing Team
4     7          64      177   AG2R La Mondiale
6     7          68      287   Wanty - Groupe Gobert
7     6          82      472   Team Giant-Shimano
8     6          95      494   Orica GreenEdge
9     6          84      543   Cannondale
10    7          110     625   Garmin Sharp
11    5          103     648   FDJ.fr
12    7          111     872   Trek Factory Racing
13    5          137     1005  Cofidis Solutions Credits
14    6          160     1027  Tinkoff-Saxo
15    6          185     1278  Team NetApp - Endura
16    5          197     1423  IAM Cycling
17    5          217     1427  Astana Pro Team
18    6          216     1447  Topsport Vlaanderen - Baloise
18    6          217     1447  Team Katusha
20    4          195     1628  Bretagne - Seche Environnement
21    5          242     1690  Lotto Belisol
22    8          324     2863  UnitedHealthcare Professional Cycling Team

placings


rank  finishers  points  time  team
1     6          16      40    Omega Pharma - Quick-Step Cycling Team
2     8          29      87    Team Sky
3     8          48      132   Belkin-Pro Cycling Team
4     7          58      177   BMC Racing Team
5     7          64      177   AG2R La Mondiale
6     7          68      287   Wanty - Groupe Gobert
7     6          82      472   Team Giant-Shimano
8     6          84      543   Cannondale
9     6          95      494   Orica GreenEdge
10    5          103     648   FDJ.fr
11    7          110     625   Garmin Sharp
12    7          111     872   Trek Factory Racing
13    5          137     1005  Cofidis Solutions Credits
14    6          160     1027  Tinkoff-Saxo
15    6          185     1278  Team NetApp - Endura
16    4          195     1628  Bretagne - Seche Environnement
17    5          197     1423  IAM Cycling
18    6          216     1447  Topsport Vlaanderen - Baloise
19    5          217     1427  Astana Pro Team
19    6          217     1447  Team Katusha
21    5          242     1690  Lotto Belisol
22    8          324     2863  UnitedHealthcare Professional Cycling Team

Nations

finishers


rank  finishers  starters  points  time  nation
1     23         29        25      66    Bel
2     19         30        49      159   Fra
3     15         15        27      67    Ned
4     12         14        68      487   Ger
5     11         16        196     1256  Ita
6     6          6         126     855   Aus
6     6          8         218     1735  USA
8     5          6         47      215   GBr
9     4          4         139     780   Slo
9     4          4         184     1196  Rus
9     4          5         312     2914  Kaz
9     4          6         113     666   Swi
9     4          6         157     1249  Nor
9     4          6         214     1447  Den
9     4          7         197     1447  Spa

time (seconds)


rank  finishers  starters  points  time  nation
1     23         29        25      66    Bel
2     15         15        27      67    Ned
3     19         30        49      159   Fra
4     5          6         47      215   GBr
5     12         14        68      487   Ger
6     4          6         113     666   Swi
7     4          4         139     780   Slo
8     6          6         126     855   Aus
9     4          4         184     1196  Rus
10    4          6         157     1249  Nor
11    11         16        196     1256  Ita
12    4          7         197     1447  Spa
12    4          6         214     1447  Den
14    6          8         218     1735  USA
15    4          5         312     2914  Kaz

placings


rank  finishers  starters  points  time  nation
1     23         29        25      66    Bel
2     15         15        27      67    Ned
3     5          6         47      215   GBr
4     19         30        49      159   Fra
5     12         14        68      487   Ger
6     4          6         113     666   Swi
7     6          6         126     855   Aus
8     4          4         139     780   Slo
9     4          6         157     1249  Nor
10    4          4         184     1196  Rus
11    11         16        196     1256  Ita
12    4          7         197     1447  Spa
13    4          6         214     1447  Den
14    6          8         218     1735  USA
15    4          5         312     2914  Kaz

Saturday, April 12, 2014

Low-Key sticker design: revision

One day until my 50 km race... After my 12 days out of action following my oral surgery, then 4 days of running, then a last-week taper, I feel woefully fat and out of shape, and indeed I'm a solid 2 kg over my cycling "race weight". Some of this is probably leg muscle from running: my legs are looking a bit bigger. But that's not all of it. I definitely need to lose that weight before the Diablo hillclimb on 11 May.

As a distraction, some sticker design revisions. First, I updated the square design to provide two options, one with a smaller cyclist, steeper hill, and squarer aspect. That freed room on the 2.13 inch by 2.75 inch template for a text sticker and, additionally, a cyclist-only sticker.

design 1

Then a design for a circular sticker. The black border is not part of the sticker. The "sky" is transparent: for some reason I'm not able here to use my usual trick of putting a colored table behind the transparent image to change the background.

design 2

Friday, April 11, 2014

Low-Key Hillclimbs sticker design

I'm working on a possible sticker design for Low-Key Hillclimbs.

Here's the design. It has a transparent region, so it's important that it have decent contrast against any color of bike. So I preview some colors here (if the backgrounds don't work, check this link).

sticker previews (several background colors)

Maybe I can get these printed up by Sticker Guy or someone similar. The smallest size StickerGuy sells is 2.75 inches by 2.13 inches. But I could always do 4 stickers on one die. Then I'd have 1.375 inches by 1.065 inches, plus a margin, getting the printed area down to something more suitable for a down-tube.

So here's how that would look:
preview

Thursday, April 10, 2014

Soquel Demo Forest "flow trail" project

A friend of mine is working on this project. Really cool: a "flow trail" in Santa Cruz. Honestly I thought the US was way too litigious for anything like this to come together, and there was too much anti-bike sentiment among California NIMBYs: I thought you had to cross the border up north, to Vancouver maybe.

They're competing for a grant from Bell Helmets. Consider voting for them. Their project page is here.

Rumor is we may be able to ride it in the opposite direction for Low-Key Hillclimbs at some point.

While you're on the Bell site, make sure to also check out the video for the Stafford Lake Bike Park.

Wednesday, April 9, 2014

low-drop pro bike

Cyclismo-Espresso showed this photo of a United Health Care bike with a Pioneer Power meter mounted. Okay, big deal: I've seen Pioneer power meters on pro bikes before. Apparently they work well enough.

UHC bike

What interested me was the remarkably modest handlebar drop. Okay, riders are sometimes limited by the geometry of commercially available frames, but this one has a -6 degree stem and even a spacer under that. This one would get the big reject from SlamThatStem.

So I measured the drop. I do that by the following sequence:

  1. Load the photo into GIMP.
  2. Level the wheels. I used the top of the wheels for this. I had to rotate the photo by 1.0 degrees, according to a measurement with the GIMP measurement tool.
  3. Measure the height of the front wheel. This gives me a conversion between distance and pixels: I know the rolling circumference of the wheel is around 210 cm, so its height (the diameter) is about 210 cm / π ≈ 67 cm, and this height in pixels gives me the conversion (a quick sketch of this arithmetic follows the list).
  4. Put horizontal guides at the "saddle point" of the top of the saddle, and at the top of the handlebars. Note the bars are rotated upwards, raising even further the height of the hoods.
  5. Convert the pixel difference to height.
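
Concretely, the conversion arithmetic is just the following (a minimal sketch; the pixel values are made-up examples standing in for what I measure in GIMP):

    use strict;
    use warnings;
    use constant PI => 3.14159265358979;

    my $circumference_cm = 210;   # assumed rolling circumference
    my $wheel_height_px  = 600;   # front wheel height in pixels (made-up example)
    my $drop_px          = 58;    # saddle-to-bar-top difference in pixels (made-up example)

    # the wheel's height in the leveled photo is its diameter, circumference / pi
    my $cm_per_px = ($circumference_cm / PI) / $wheel_height_px;
    my $drop_cm   = $drop_px * $cm_per_px;
    printf "conversion = %.4f cm/pixel, drop = %.1f cm\n", $cm_per_px, $drop_cm;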

There are some shortcuts here. For example, I could try to correct for shear distortion. But I think the measurement is close enough.

Here's the result:

measurement

6.5 cm. That's quite modest for a successful professional bike racer. It proves yet again (Chris Horner and Mark Cavendish being other examples) that you don't necessarily need a 10-15 cm handlebar drop to be fast, nor to "be pro".

But what about the rest of the bike? I'm not particularly impressed by white carbon fiber frames. They look plastic to me, and needlessly add mass. But this isn't my bike: it's a pro bike. And the point of pro bikes, like pro race cars, is to advertise the team and the sponsors, not to make a good-looking bike. I wouldn't drive a NASCAR-colored car (I don't drive a car period, but that's another matter), and I wouldn't necessarily ride a UCI-team bike.

Tuesday, April 8, 2014

Contador: 2241 VAM @ Vuelta al Pais Vasco stage 1

Contador

Yesterday at stage 1 of the Vuelta al Pais Vasco, Alberto Contador won, finishing 14 seconds ahead of Alejandro Valverde and 34 seconds ahead of Michal Kwiatkowski.

The route included 8 rated climbs, including two ascents of the steep Alto de Gaintza, which gains 290 meters in only 2.3 km (see the stage preview). Four riders have uploaded the stage to Strava; here's Kenny Elissonde's activity. He was 40th, @ 2:42 down on Alberto.

Here's the report on Alberto's time up the last ascent of Alto de Gaintza, the final climb of the race:

If these numbers are good, that works out to a VAM of 2241 meters/hour. Assuming a CdA of 0.32 meters squared (recommended by Vetooo based on extensive comparison of VAM and rider-reported SRM data, and coincidentally measured by Tour magazine in a wind tunnel), an air density of 1.15 kg/m3, a drivetrain loss of 3%, a rolling resistance coefficient of 0.4%, a rider mass of 62 kg (Wikipedia), an equipment mass of 1.5 kg, and a bike mass of 6.9 kg, I get a total power of 464 watts for these 6 min 56 seconds. That's 7.49 watts/kg.
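
For reference, here's roughly how that arithmetic goes. This is a sketch, not the exact calculation: the grade of the timed segment is an assumption here (I use the full climb's roughly 12.6%), so the output only approximates the 464 W figure above.

    use strict;
    use warnings;

    my $vam   = 2241 / 3600;      # climbing rate, m/s
    my $grade = 0.126;            # assumed road grade (290 m / 2.3 km); adjust to the timed segment
    my $mass  = 62 + 6.9 + 1.5;   # rider + bike + equipment, kg
    my $g     = 9.81;             # gravity, m/s^2
    my $CdA   = 0.32;             # m^2
    my $rho   = 1.15;             # air density, kg/m^3
    my $crr   = 0.004;            # rolling resistance coefficient
    my $loss  = 0.03;             # drivetrain loss fraction

    my $v     = $vam / $grade;                  # road speed, m/s (small-angle approximation)
    my $climb = $mass * $g * $vam;              # power against gravity
    my $roll  = $crr * $mass * $g * $v;         # rolling resistance power
    my $aero  = 0.5 * $rho * $CdA * $v**3;      # aerodynamic power, assuming no wind
    my $power = ($climb + $roll + $aero) / (1 - $loss);
    printf "estimated power: %.0f W, %.2f W/kg\n", $power, $power / 62;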

But the effort was short, and this was the first stage of the race, so Contador was fresh and ready to go at the starting line. This is a big difference from the same numbers up an Alpine or Dolomite climb late in a 3-week stage race.

I can convert this to an equivalent critical power number assuming AWC/CP = 90 seconds. Then from power = CP + AWC / t = CP [1 + (AWC/CP) / t], I can estimate an upper bound of CP = power / (1 + 90 seconds / 416 seconds). That's an upper-bound CP estimate of 6.16 W/kg.
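
That calculation, in a few lines (the 90-second AWC/CP ratio is the assumption stated above):

    use strict;
    use warnings;

    my $p_wkg = 7.49;             # W/kg from the climb estimate
    my $t     = 6 * 60 + 56;      # effort duration, seconds
    my $tau   = 90;               # assumed AWC/CP, seconds
    printf "upper-bound CP: %.2f W/kg\n", $p_wkg / (1 + $tau / $t);   # prints 6.16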

Curiously, it's only 1% more than the CP estimate I got from Chris Horner climbing Sierra Road in the 2011 Tour of California, 6.12 W/kg.

And it is a very good number. Contador's altitude training clearly paid off.

Sunday, April 6, 2014

training crash & burn, then running the Marin Headlands

Training for the Woodside Ramble 50 km race, while not without hiccups, was going nicely to plan. I'd plotted a gradual increase in my simplified running form of CTS, ramping 0.43 km/day per week, and I was sticking to that schedule, initially by increasing the length of my runs, but later transitioning to an increase in frequency. I was tired, of course, but a good tired. I was pushing my limits, as one must to get them to shift, but it seemed to be working.

The problem with simplified training stress metrics is they exclude stress from other sources, and in this case the big ugly stress source was oral surgery. A tooth broke and the dentist gave me the sobering news the next day: it should be removed and replaced with an implant. So the tooth came out, a process involving the expenditure of extremely few kilojoules on my part, but which nevertheless left me very, very tired.

Initially I persisted in my plan to run every day. Length didn't matter, speed didn't matter, just go out and get a few kilometers in to see how the legs responded. But then allergies defeated my weakened defenses. Running simply wasn't an option. Commuting to work wasn't even an option. Monday to Thursday I worked from home.

On Friday I finally felt good enough to walk to the train. But it wasn't until the following Wednesday that I was able to go out for a run. It was a short one, just to test the waters. But it went fairly well. I didn't feel as fat and out of shape as I'd expected to.

On Thursday it was 10 days from my race: time for a taper. But you can't taper from nothing, so I chose instead to provide a last impulse of training to stop the slide into the abyss. So 14 km (estimated) on Thursday, 17 km Friday, then to the Marin Headlands for a long one on Saturday.

And what a glorious run that was! The weather was gorgeous, and the trails seemed to guide me around the park. Starting from Bunker Road with Cara, who was mountain biking, I went up and over Miwok at a steady pace, through the stables, up the rugged climb of the northern part of Miwok, then the more gradual, wider climb of Coyote Ridge. Suddenly I passed Green Gulch on the right, and without much thought, I diverted onto this, a trail I'd not run before. Down Green Gulch I entered the Green Gulch Farm Buddhist retreat, where I found some much-needed water to supplement the two bottles I was carrying, one almost empty.

Choices, choices...

water

On I went, moving onto the Coastal Trail not far from Muir Beach after being tempted by the fork to Diaz Ridge Trail in Tamalpais, across Highway 1. I'll take that route another day.

Here the climbing got more difficult. I was still running, albeit slowly, up the steep Coastal Trail climb south from Muir Beach. But before the summit, I reached the turn-off for the trail to Pirates Cove. That had been my target. Since passing it up 3 weeks prior when I'd been here with Cara, I knew I wanted to return soon.

It's a gorgeous route with spectacular ocean views: single track gradually descending, sometimes climbing, to an out-and-back spur to the Cove itself.

image

The spur is too steep to be runnable, so I skipped it, instead turning onto the stairs which climb south. Here the spring in my step failed me, and I was reduced to hiking.

image

Eventually, however, it leveled out, and I was able to run again. Then onto the steep descent, but on smooth fire road, to the Tennessee Valley Trail. My goal was the Coastal Trail climb to Hill 88, which had defeated me the last time I tried it on a long run, in 2011.

By Headlands standards, that trail isn't easy to find, its access essentially unmarked on the southern, lower of the two parallel Tennessee Valley trails.

It starts innocently enough:

image

But then it gets steeper. And once again I was reduced to hiking, just as in 2011, but this time sooner, my runs of the preceding two days having taken their toll.

Toward the top, I recovered a bit. After a short descent, I ran the climb of Wolf Creek to Miwok, but then walked the steeper Miwok climb to Bobcat. But the rest of the way was rolling or downhill, and I ran from then on. Bobcat, Alta, SCA with its amazing views of the Golden Gate, then the Coastal Trail descent back to where I began. Total distance: 31.5 km.

After the run, I got on my bike and rode the 21 km back home, just in time for a chocolate tasting tour of San Francisco I was planning on doing with Cara. If I ever felt justified eating chocolate treats, it was now.

The impact of the 12 days of downtime on my training metrics was, not surprisingly, devastating:

image

So it's time to stop worrying about training metrics and start worrying about recovering and doing a good run in Woodside. If this makes me more cautious, more careful in pacing, nutrition, and hydration, well then it may turn out to be a good thing. I just need to avoid any more oral surgeries in the next week.

Friday, April 4, 2014

simulating vehicle speeding detection algorithm

Last time I proposed an algorithm for detecting speeding vehicles. In summary, the algorithm was to set an actual speed limit 5% over the nominal speed limit, then compare the car's position to a "virtual pace car" going that speed limit, where the pace car slows as needed so it never pulls ahead of the car in question. If the driver got 50 meters ahead of this virtual pace car, he'd be fined in proportion to how far ahead he got. If he kept pulling away, the fine would keep increasing.

I tried various approaches, but I think the best one is relatively simple. I assume a car starts at rest, then accelerates at 3 m/s2 (0.31 g, or 0-60 mph in 8.94 seconds) to some final speed, then holds that speed. There's a posted speed limit of 20 meters/second (72 kph), implying an enforced speed limit of 21 meters/second (75.6 kph), which the driver reaches in 7.00 seconds, having traveled 73.5 meters.

If the peak speed is no more than 21 mps, the driver is never cited. But if the peak speed is in excess of this, he will eventually be cited. The key question is how quickly this happens, given my 50 meters-ahead-of-pace distance criterion.
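
Here's a minimal re-creation of that simulation, not the original script: one-second time steps, a car accelerating at 3 m/s2 to its peak speed, the virtual pacer at 21 mps, the excess distance clamped at zero, and a citation after 3 consecutive samples more than 50 meters ahead of the pacer. With whole-second steps the trigger times come out somewhat different from the plotted values.

    use strict;
    use warnings;

    sub time_to_trigger {
        my ($peak) = @_;
        my ($v, $excess, $over) = (0, 0, 0);
        for my $t (1 .. 600) {                   # give up after 10 minutes
            $v = $peak if ($v += 3) > $peak;     # accelerate, capped at the peak speed
            $excess += $v - 21;                  # distance gained on the pacer this second
            $excess = 0 if $excess < 0;          # no credit for driving under the limit
            $over = ($excess > 50) ? $over + 1 : 0;
            return $t if $over >= 3;             # 3 consecutive seconds over the threshold
        }
        return undef;
    }

    for my $peak (22, 25, 30, 40) {
        my $t = time_to_trigger($peak);
        printf "peak %2d mps: %s\n", $peak,
            defined $t ? "cited after $t seconds" : "never cited";
    }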

Here's a plot of the time taken for the speeding indicator to trigger. For the 22 mps peak speed, only 10% over the nominal speed limit, it takes 53 seconds before the 50 meter threshold is reached. For the highest speeds more of the time is taken by the acceleration phase after the 21 mps threshold is crossed, and the time hits a limit of approximately 8 seconds.

time to speeding

The other statistic of interest is how much distance is covered before the threshold is reached. This is easy to estimate from the time: the "virtual pacer" goes 21 meters each second, so add the 50 meter threshold to that distance and you have the distance covered by the perpetrator. The result isn't exact because of the requirement that three points in a row be over the distance threshold, so the final margin may well be more than 50 meters, increasing the net travel distance.

Here's that plot, with distances taken from the simulation, not the estimated calculation:

distance to speeding

It's around 1180 meters in the case of the 22 mps speed, decreasing to around 250 meters at the highest speeds.

200 meters is typical for a city block, so the algorithm won't catch a violator in a single block for this 20 mps (72 kph) speed limit. But a more typical urban speed limit is 40 kph or even 30 kph. With the "virtual pacer" going slower, it's possible the 50 meter threshold could be reached in a single block. But more likely is to get caught traveling over a longer distance, for example two blocks connected by a green traffic signal.

Here's an example of a randomized speed schedule, where the driver is averaging 20 mps, equal to the assumed speed limit:

random speed

It takes around 20 seconds for the driver to go from the 20 mps speed limit to 25 mps, at which point he triggers the speeding threshold. Then the speed decreases, eventually dropping back below the speed limit, and the "excess distance" decreases, eventually back below the 50 meter limit.

So the criteria I propose are fairly strict, yet only marginally strict enough to catch someone speeding along a single city block. Yet I stand by the strictness. If you don't want to be fined, don't go over the speed limit. The time you do could be the time you kill someone.

The one remaining issue is what happens when the speed limit changes. A way around this is to extend the zone of the higher speed limit somewhat beyond where it is marked on the roadway, giving drivers a safety margin.

Thursday, April 3, 2014

proposed automated speed limit enforcement algorithm

Last time I argued for automated speed limit enforcement using GPS receivers installed in all new vehicles sold. I would be negligent in doing so without at least proposing an algorithm.

So the algorithm is this:

  1. Determine the present speed limit. If GPS is used to monitor speed, the GPS coordinates would need to be mapped to a street map to determine a local speed limit. This seems complicated, but in many urban areas the speed limit is the same on all roads in a local grid, so you'd basically just need to identify whether the driver might be on an expressway or an interstate based on position and direction. If the speed limit varied wildly from one road to the next, this would become more complicated, and only the maximum of the local speed limits could be enforced this way: GPS has only finite position precision.
  2. Set a true speed limit 5% higher than the nominal. This gives some margin for error. GPS doesn't have systematic error nearly this large, but it makes sense to have a small buffer if you're going to be fining people.
  3. Set a distance threshold. If you pop over the speed limit for one second, that shouldn't be fined. GPS has a finite position accuracy, for example 10 meters under the best of conditions, and you want the threshold to be sufficiently above this uncertainty. On the other hand, if the threshold is too large, streets like Potrero Ave, where peak speeds in excess of the speed limit are typically not sustained for long due to frequent traffic lights, will go essentially unenforced. So I'll propose 50 meters for this.
  4. Set an acceleration threshold. GPS can sometimes jump due to reflections off mountains, buildings, etc., yielding an essentially instantaneous displacement of position. These jumps should be ignored: if the position jumps in a way inconsistent with the operation of a realistic motor vehicle, that sample shouldn't count. This can be checked as follows. For any 3 points p1, p2, and p3, with sampling times 0, 1, and 2, if p2 deviates from the average of p1 and p3 by more than 10 meters, then the interval from p1 to p3 is assigned a speed equal to the true speed limit.
  5. Maintain an integral equal to the accumulated distance in excess of that which would be traveled at the true speed limit. So for travel from p1 to p2 over time Δt, add ( p2 − p1 ) − vmax Δt to the excess distance. The excess distance may increase or decrease, but it is clamped at zero if it would be reduced below zero. At the start of a trip, it's zero.
  6. If the car's excess distance exceeds the maximum allowed continuously over some period, for example 3 seconds in one-second samples (to allow time for checking the points as described above), the registered owner of the car is fined in proportion to the peak excess distance sustained over a 3 second period (a sketch of this bookkeeping follows the list).
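
A sketch of that bookkeeping, for one-second position samples (one-dimensional positions for simplicity; the variable names and the print-as-citation are mine, not any real system's):

    use strict;
    use warnings;

    my $V_LIMIT   = 21;    # "true" speed limit: nominal 20 mps plus 5%
    my $THRESHOLD = 50;    # allowed distance ahead of the virtual pacer, meters
    my $JUMP      = 10;    # GPS glitch test, meters

    sub check_trip {
        my @pos = @_;      # one position per second
        my @glitch;
        my ($excess, $over) = (0, 0);
        # step 4: flag samples deviating from their neighbors' average by more than 10 m
        for my $i (1 .. $#pos - 1) {
            $glitch[$i] = abs($pos[$i] - ($pos[$i-1] + $pos[$i+1]) / 2) > $JUMP;
        }
        for my $i (1 .. $#pos) {
            # an interval touching a flagged sample is scored at the speed limit,
            # so it contributes nothing to the excess distance
            my $d = ($glitch[$i-1] || $glitch[$i]) ? $V_LIMIT : $pos[$i] - $pos[$i-1];
            # step 5: integrate distance gained on the virtual pacer, clamped at zero
            $excess += $d - $V_LIMIT;
            $excess = 0 if $excess < 0;
            # step 6: cite only after 3 consecutive seconds over the threshold
            $over = ($excess > $THRESHOLD) ? $over + 1 : 0;
            print "citation at t = $i s, excess distance $excess m\n" if $over == 3;
        }
    }

    # example: a full minute at a steady 23 mps (2 mps over the true limit)
    check_trip(map { 23 * $_ } 0 .. 60);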

Note there's plenty of slop in this approach. First, the true speed limit is set higher than the nominal speed limit. Then the driver must get a certain threshold above a "virtual pace car" going at the true speed limit. Then he must stay beyond this threshold for a certain period of time. Additionally, the "excess distance" is unchanged if the GPS data are suspiciously inconsistent. All in all this would create a challenge for automated enforcement on urban roads.

For urban settings, fixed-point speed monitors would thus still be required. A video camera in conjunction with sonar, for example, can identify vehicles moving faster than the speed limit. These are not presently used for automated speed limit enforcement, but that could change with a change in state law.

This proposal is not in any way a threat to individual liberty. Driving is not a right, but walking is, and there is no greater denial of liberty than to create conditions where someone can't walk across their street safely. By driving in a way which compromises safety you are depriving others of their fundamental right. So this is about preserving rights, not denying them.

I'll show a simulation next time.