Wednesday, February 23, 2011

adjusted VAM for hillclimb performance comparison

Strava reports the VAM, or rate of vertical altitude gain, for riders on different climbs. VAM allows comparison of climb efforts on climbs gaining different altitudes. It's correlated relatively strongly with power-to-mass ratio, and is considered a decent gauge of fitness. For example, I may climb Old La Honda while someone in France may climb Col de la Madone. We obviously can't simply compare times, or even average speeds, but the rider with the better VAM is probably the better climber.

However, one issue with VAM is that it is easier to produce a higher VAM on shorter climbs. It is well understood that riders can produce a higher average power for shorter durations than for longer ones. So comparing raw VAM gives an unfair advantage to those riding shorter climbs.

Well, whenever one brings up the subject of the time-dependence of power, the critical power (CP) model is the first thing to pop into mind. The CP model says:

maximum power = critical power + anaerobic work capacity / duration

or

Pmax = CP + AWC / t

A "typical" number for AWC is 90 seconds × CP, so assuming this, I get:

Pmax = CP ( 1 + 90 seconds / t )

The critical power model is very simplistic. It's based on the assumption there is a fixed reservoir of energy which can be fully converted into work over the duration of the effort. Additionally it assumes the "critical power" can be maintained indefinitely. Neither of these assumptions is particularly good: for times shorter than maybe 5 minutes, it becomes difficult to do a full AWC of work beyond CP times the duration. And for times much over a half hour it becomes difficult to sustain CP. But for climbing durations from 5 minutes to 30 minutes it works fairly well.

If instead of "critical power" (CP) I'm interested in "critical VAM" (cVAM), and I assume VAM is proportional to power (for fixed mass) I can then write:

VAM = cVAM ( 1 + 90 seconds / t )

or since I measure VAM and t :

cVAM = VAM / ( 1 + 90 seconds / t )

This is what I propose you use to rank climb efforts. Instead of "critical VAM" I'll call it "adjusted VAM":

adjusted VAM = VAM / ( 1 + 90 seconds / t )

For example, suppose someone does a climb lasting 6 minutes and climbs at 1800 meters/hour. Then I get

adjusted VAM = VAM / ( 1 + 1/4 ) = 80% VAM = 1440 meters/hour.

Someone else climbed for an hour with a VAM of 1500 meters/hour... his adjusted VAM would be (40 / 41) (1500) meters / hr = 1463 meters / hr.

So the 1500 meters in one hour beats the 1800 meters/hr for 6 minutes.
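The adjustment is trivial to compute. Here's a minimal sketch in Python (the function name is mine, not Strava's), reproducing the two examples above:

```python
def adjusted_vam(vam, t):
    """Adjusted VAM from raw VAM (meters/hour) and duration t (seconds),
    assuming AWC = 90 seconds x CP."""
    return vam / (1.0 + 90.0 / t)

print(adjusted_vam(1800.0, 6 * 60))   # 6 minutes at 1800 m/h -> 1440 m/h
print(adjusted_vam(1500.0, 3600))     # 1 hour at 1500 m/h -> ~1463 m/h
```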

Considering the limitations of the CP model, getting an especially high adjusted VAM from a very short (less than 5 minutes) or very long (more than 30 minutes) climb will be challenging. But it's better to undervalue short efforts than to overvalue them, as you want to encourage riders to make their efforts on substantive climbs, not little bumps which might not even qualify as cat 4 in the Tour. And while the challenge may be tougher on very long climbs, with raw, naked VAM it's harder still.

added: It is true that steeper hills produce higher VAMs than more gradual hills, since on gradual hills more power is lost to rolling resistance and wind resistance. However, I explicitly ignore that here. The point isn't to estimate power, the point is to give credit for climbing, and climbing is about increasing altitude quickly. After all, some people can produce enormous power on a dead-flat road, yet that is not an example of "climbing". So no extra points for pushing the wind aside or deforming rubber. If this means riders of steeper hills get more credit, that's not such a bad thing.

Tuesday, February 22, 2011

breaking news: SRAM released black version of Red group

Yawn...

Seriously: they should have done a pre-release to magazines like VeloNews, Bicycling, etc and implied there were substantial mechanical advantages to what is actually just a paint job. It would have been fun to see how many of the reviewers reported on the "subtly improved shifting", "enhanced brake modulation", or "superior power transfer".

Tuesday, February 15, 2011

setting checkpoint times for the MegaMonster

Almost every year, I've helped Kevin Winterfield put on his "MegaMonster Enduro Ride", which goes back to 1996. It's a rather unique event: a timed 100 mile (actually 102 mile) ride with checkpoints along the way. It's either a very short Paris-Brest-Paris or a very long time trial, depending on how you view it. The goal was to formalize the competitive side of "century" rides, which are typically inappropriate for "racing" since the courses are not well controlled, while allowing riders to keep a healthy emphasis on endurance and success-in-finishing, unlike the "fast-or-total-loser" culture of bike racing, yet without the massive distances of the more serious brevets.

The course is simple: out 51 miles, back 51 miles. There's a checkpoint @ 32 miles on the way out (Bitterwater), another at the turnaround @ 51 miles, then back at Bitterwater @ 70 miles, then of course the finish, at 102 miles.

Since it's not a mass-start event, and we don't allow drafting to prevent it from becoming a road race, we want riders to start at different times. To make things easier on volunteers, we wanted to encourage faster riders to start later, slower riders to start earlier. It's easy to see this results in less of a spread in finish times than if fast riders started before slower riders.

So to economize on checkpoint time, we put the responsibility on riders to start at an appropriate time for their speed. If you want to blitz the course at 25 mph, you need to start as late as possible. On the other hand, if you want as much time as possible to finish, you should be ready to roll when volunteers are ready for the first starter at 8 am.

A slight complication is we also offer a 100 km option: out to Bitterwater and back. The only compromise needed here is that the finish checkpoint needs to open earlier. So I won't discuss the 100 km scheduling here.

First is the start and finish line scheduling. So I pick a target fast and slow time: in this case 4 hours for the fastest scheduled time, and 8 hours for the slowest. 4 hours seems fast, but since we allow teams, a fast team time trial squad can go 25 mph on flat roads without too much wind. These roads aren't flat, and there's rarely too much wind, but since we also allow recumbents, hybrid-electric bikes, and even hybrid-electric recumbent bikes, 25 mph is a real possibility (Bill Bushnell was even faster on his hybrid-electric recumbent this year, but in Bill's case we stretched the limits of the schedule slightly).

This is a spread of 4 hours which must be allocated between start and finish checkpoints. I picked a 1.5 hour window for starting, and so would need at least a 2.5 hour window for the finish line, with the finish closing 8 hours after the earliest start time, at 4 pm. This implies the finish line should open no later than 1:30 pm. Well, here's where the 100 km comes in: someone leaving for the 100 km ride at 9:30 am might finish in 2.5 hours, using that 25 mph "fast" target. So the finish line opens at noon.

So the start goes from 8 am to 9:30 am, and the finish opens from noon to 4 pm. That leaves Bitterwater and the turn-around at CA198. You'd think the limit here would again be the early slow rider and the fast late rider. But you'd be incorrect. Actually, the checkpoints need to open in time for the fastest early starter, and need to close based on a schedule for the slowest late starter. So this means limits need to be set for these riders.

Obviously we don't want to accommodate a 25 mph starter at 8 am: anyone that fast should be leaving later. If an 8 am starter were to just be caught by a really fast 9:30 am starter (4:30), the 8 am starter would finish in 6 hours, so we allow early starters to go at this pace. Anyone faster than 6 hours should realize they're not going to need that full 8 hours to finish and can start later. This puts the Bitterwater opening at 10 am, the CA198 turn-around at 11 am, and the opening for Bitterwater return riders at noon, rounding liberally. The rider would then finish around 2 pm: exactly the time I'd established for the fastest starters leaving at 9:30 am.

But what about slow 9:30 am starters? If they finish just at the closing of the finish at 4 pm, that's 6.5 hours. That puts the limit on closing times. So Bitterwater should close at around 11:50 am, the CA198 turnaround at 1 pm, then Bitterwater on the return at around 2:10 pm.
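The logic above amounts to linear interpolation by distance: open times pace the fastest early starter (8 am, 6 hours for 102 miles), close times pace the slowest late starter (9:30 am, 6.5 hours). A sketch in Python, which is my own reconstruction, not the actual scoring code; the published times round these liberally and add slack:

```python
# Raw checkpoint open/close times by linear interpolation along the course.
def clock(h):
    """Format decimal hours as h:mm."""
    m = round(h * 60)
    return "%d:%02d" % (m // 60, m % 60)

checkpoints = {"BW(out)": 32, "198(turn)": 51, "BW(in)": 70, "finish": 102}

for name, miles in checkpoints.items():
    opens = 8.0 + 6.0 * miles / 102    # fastest allowed early starter
    closes = 9.5 + 6.5 * miles / 102   # slowest allowed late starter
    print(name, clock(opens), clock(closes))
```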

Since 100 km riders might be slower, we simply let them check into Bitterwater any time from 10 am to 2 pm. We then stretch the Bitterwater closing for outgoing riders to noon, which gives a bit of extra time unless an early puncture causes delays to a 9:30 am starter.

So considering these four rides: fast late and slow early setting the start/finish times, then slow late and fast early setting the checkpoint times, I make the following chart:

checkpoint timing

I need to come clean, however. The actual scoring code has the following:
my %cut_off_time = 
  (
   "BW(out)"   => "12:05",
   "198(turn)" => "13:05",
   "BW(in)"    => "14:15",
   "BW(turn)"  => "14:15",
   "finish"    => "16:05",
  );

So I coded in a 5 minute safety buffer. Of course there's no guarantee the checkpoint volunteers will stick around that long.

We've been using this schedule since 2008, the first year on the present course, and it still amazes me how well it works.

Sunday, February 13, 2011

letter to Caltrain

My friend John Murphy is rather negative on the proposed "scaled back" Caltrain schedule, designed to reduce expenses in a time of reduced support from the counties. Caltrain, surprisingly, has no dedicated funding source, but rather must rely on the good-faith contributions from San Francisco, San Mateo, and Santa Clara counties. This is because the Bay area has such a hopelessly fragmented transit system. Counting Marin, 511.org lists 22 bus agencies, 6 rail agencies (not counting Amtrak), and 7 ferries. Really it makes a lot more sense for there to be a "Bay Area Transit" agency, since with so many interregional routes (San Francisco to Marin, Marin to San Francisco, between San Francisco, San Mateo, and Santa Clara counties) there's extensive duplication of service, and seemingly arbitrary rules to avoid competition (like SamTrans can't pick anyone up on northbound trips which enter San Francisco). It's a mess.

With all of the fragmentation, it's natural there would be some attempts to combine resources. For example, the VTA and Caltrain share the same Chairman, Mark Scanlon. Unfortunately, however, Mark sees fit to get paid for both jobs, reducing any efficiency gains, resulting in a salary of over $400 thousand per year. Yes indeedy, in these trying times, public transit is running lean and mean.

Consider as well the trains run overstaffed. Conductors and "fare agents", with staffing levels dating back to the days when tickets were sold on the train and pre-sold tickets were all punched, spend most of their time chatting with each other. Typically once each trip down the line a "fare agent" will walk down the aisle, glance at paper tickets and scan Clipper cards with his electronic reader. They do two round trips per day. I do one, and that's just my commute.

Anyway, back to the schedule. It's obviously a negotiation point, a line in the sand to get the counties to cough up the cash. Poor, broke Caltrain. I fully agree Caltrain needs dedicated funding, however. Although I'd prefer it come from a regional gas tax, to help offset the increased societal and environmental impact of driving.

Here's the letter I'm writing to Caltrain about the schedule:

Suggestions for schedule:

First, I fully support the emphasis on fewer stops. This should be kept in the future upon the restoration of more trains. I don't take local trains generally, so if others are like me, you lose customers during periods when local trains are the only option.

But the schedule has several weaknesses.

First, it's shifted earlier than the present schedule. From SF, there's a limited @ 6:11 am and an express at 6:59. In the proposed schedule, there are 6 expresses by 7 am. That makes no sense in a time of cut-backs.

Similarly, there is no reason to maintain a 15-minute schedule well past the peak commute. Going to 30 minutes at this phase is better.

Second, the "drop-dead" times of 8:30 am and 6:30 pm are too draconian. You need a safety net.

Third, completely eliminating mid-day trains is a mistake. You should have one mid-day train in each direction, as occasionally commuters will have a dental appointment or other work conflict in either the morning or afternoon.

I recognize you have labor issues which suggest compressed schedules are preferred. But it is clear that trains are overstaffed since conductors/"fare agents" have substantial idle time.

So with the total number of trains taken as a constraint, but liberalizing the schedule compression, I propose the following departure times:

6:00 am, 6:30 am, 7:00 am, 7:15 am, 7:30 am, 7:45 am,
8:00 am, 8:15 am, 8:30 am, 9:00 am, 9:30 am, 12:00 pm

3:45 pm, 4:15 pm, 4:45 pm, 5:00 pm, 5:15 pm, 5:30 pm,
5:45 pm, 6:00 pm, 6:15 pm, 6:30 pm, 7:00 pm, 8:00 pm


This schedule actually gets me to and from work better than the present one. And while "bumping", where cyclists are denied boarding due to lack of nominal capacity, is a concern, it's a concern today. The key is to have a "safety net" train which will be there if you get bumped from the last commute train. If I get bumped from the last commuter I still have time to ride 2.3 miles back to my office, leave my bike there, then run back to the station (or take a taxi) for guaranteed boarding on that evening train.

There are presently 14 trains in the morning and evening, the proposed schedule has 12, and my proposed revision has 11 (with one mid-day and one night). The capacity is lower, but with increased fares further widening the cost gap between the train and driving, demand will be less. More people will drive instead.

So not a disaster. What would be a disaster would be converting the weekday schedule to a weekend-like schedule. No station left behind. Then everyone gets to work equitably late. There's a good reason I never take Caltrain on weekends.

Monday, February 7, 2011

Tour test of "aero" mass-start frames

Many years ago, Kestrel Bicycles came out with the "Talon" road frame which was targeted as spanning the bounds between triathlon and mass-start racing. There were no wind tunnel data, no scientific tests: it was simply designed to look aero, and who knew if it was or not? Kestrel, who was among the early groundbreakers in building and selling carbon frames, was sold to a Japanese group. However, the Talon survives, both in a standard model and a lighter Talon SL. It got redesigned for 2009 (see video review). I've always found the bike attractive, in part because of its Santa Cruz roots.

Years later Cervelo, a long time leader in time trial frames, came out with their Soloist aluminum-frame bike. CSC showed the bike to be quite race-worthy at the highest level, notably with Bobby Julich winning the 2005 Paris-Nice. Then they developed the SLC, a carbon version, for that year's Tour de France. An SLC-SL followed in 2006, and for the 2008 Olympics, the S3 was released. It is still sold today.

Cervelo did do wind tunnel testing. They compared their SLC to their R3, the two sharing very similar geometries, and reported in a white paper, Col de la Tipping Point, that the SLC offered an advantage in coefficient of drag of 0.0045 m². This is out of a typical CdA for a rider and bike of around 0.31 m² (see later), so the advantage is only 1.5% in power, and that translates into a speed advantage of no more than 0.5%. See my analysis on the topic here.
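The arithmetic behind that 0.5% is worth making explicit: if wind resistance dominates, power scales roughly as speed cubed, so a fractional power saving buys about a third of that in speed. A quick sketch (the cube-law assumption is mine, roughly valid on flat roads at race speed):

```python
# Translate Cervelo's claimed CdA delta into power and speed advantages.
dCdA = 0.0045   # m^2, claimed SLC advantage over the R3
CdA = 0.31      # m^2, typical rider + bike total

power_advantage = dCdA / CdA            # fraction of wind-resistance power
speed_advantage = power_advantage / 3   # dv/v ~ (1/3) dP/P when P ~ v^3

print("%.2f%% power, %.2f%% speed" %
      (100 * power_advantage, 100 * speed_advantage))
```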

Riders continued to show considerable success on Cervelo aerodynamic road frames. However, they certainly didn't win everything, and riders who switched from Cervelo didn't seem to do particularly worse when they switched to other bikes, typically designed for stiffness and weight but generally flouting aerodynamics. You'd expect that from an only 1.5% difference in wind resistance power. Of course, every bit helps, but you can gain as much benefit from taping the front vents shut on your helmet, keeping your jersey zipped, pinning numbers carefully, etc.

But then the aero bike thing became more popular, at least among bike companies always looking for that extra edge to excite the huddled masses. With more competition, claim inflation began. For example, Felt debuted the AR for the 2008 Tour de France, but it was hard to find any photos of riders actually using it. Felt claimed initially it offered a 2% reduction in power versus a similar yet non-aero frame (one supposes the Felt F1). That's 2% of total power, so is closer to 2.2% of wind resistance power, considering rolling resistance. Felt has since redesigned the frame to stiffen it up, so at least it was getting occasional use last year, the last year Felt sponsored the Slipstream boys.

Also in 2008 Ridley debuted the aerodynamic Noah frame (see CyclingNews article). They claimed in tests that at 40 kph on a test track, Cadel Evans and Robbie McEwen averaged "12-15 watts" less on the Noah than on the control Ridley frame. Assuming a baseline CdA of 0.31 m², that's between 5% and 6% of total power.

So what's up? There's no way I believe the Ridley was three times more aerodynamic than the Cervelo or twice as aerodynamic as the Felt. Were they using the same wheels on the two bikes?

Litespeed, famous for its Ti frames (the light but whippy Ghisallo, the super-smooth but heavier Archon) came out with its Archon C1 last year, a road frame for which they put up some extremely impressive numbers (click to zoom):



In this case it was confirmed via an on-line forum the same wheels were used in each case.

20 watts saved?!? Granted, Litespeed has taken the revolutionary step of designing their frame around an assumed round waterbottle attached to the downtube, so that offers some advantage versus a frame which isn't optimized for the typical water bottle, but these numbers just seem huge.

Finally some independent data: Germany's Tour Magazine, famous for rating bike frames by how much they bend while ignoring how they actually ride, did a really nice wind tunnel study of the following:
  1. Kestrel Talon SL
  2. Cervelo S3
  3. Felt AR1 (nice design this year)
  4. Canyon Aeroad CF
  5. Stevens SLR
  6. Merida Reacto
  7. Kuota Kult
  8. Cannondale Super-Six
This last bike, the Cannondale, is the non-aero "control". Cannondale used to be called "Cannonwhale" because of its fat-tubed frames. Such fat tubes, however, no longer look quite so fat because that design paradigm, started earlier by Gary Klein when he was building Al frames at MIT and copied by Cannondale, has become the norm.

No U.S.-based Litespeed here, but Tour is a German magazine, so prioritizes testing German bikes.

Tour tested the frames with a "dummy" rider in the wind tunnel. I think this is a good approach. Some argue the legs should be spinning as well, but assuming 50 kph speed and 90 rpm cadence, the legs have time to spin only 15 degrees in the time it takes the relative wind to go from the bottom bracket to the rear hub. So it's not as if the legs are churning much during the time it takes the wind to interact with the legs and frame: modeling the rider as stationary is probably just as good. And the advantage of a dummy is his position is perfectly predictable on a given frame. Of course, it's challenging to match position from one bike to another: I'll need to study the German text in the article (Google Translate is my friend) to see how they dealt with this. For example, it's meaningless to say you're going to choose the same "size" of bike from different brands. Even ignoring the fact that bikes are only sold in a few sizes, so a match in any given parameter is unlikely, there's no widespread agreement on how to even measure a bike's size.
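That 15-degree figure is easy to sanity-check; the chainstay length here is my assumption:

```python
# How far do the legs rotate while the relative wind traverses the span
# from bottom bracket to rear hub?
v = 50 / 3.6               # 50 kph in m/s
cadence = 90.0             # rpm
chainstay = 0.41           # meters, bottom bracket to rear hub (assumed)

transit = chainstay / v    # seconds for the wind to cross that span
leg_degrees = cadence / 60.0 * 360.0 * transit
print(round(leg_degrees))  # on the order of 15 degrees
```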

But these concerns aside, here's Tour's front page from their web site:



Hopefully they'll offer the article for on-line purchase: it's hard to find in the US. Anyway, here's a plot of their main result: CdA for each of the tested bikes as a function of yaw angle.... drum roll....



Finally the fog starts to lift.

Look first at zero yaw. This means the relative wind is straight head-on. The Cannondale control bike actually does quite well: CdA = 0.312 for the Cannondale, 0.310 for the Felt, and 0.308 for the Kestrel. All of the other aero frames are actually slower, with the Stevens a total dog at 0.325. Shocking, really: the "aero" frames actually appear to have more drag than the 'whale.

Go to 5 degrees yaw, and the aero frames start to move ahead. The Cannondale has fallen back to be tied with the Merida at 0.325. The Canyon still trails the Cannondale with 0.328, while the Stevens is still the dog of the pack at 0.332. The Kuota now edges past the Cannondale, however, with 0.322. The Felt is a bit faster with 0.321. The Cervelo surges dramatically, actually reducing its value from zero yaw, with 0.317. The leader remains the remarkable Talon, however, with 0.315.

So here we have the Kestrel with 3.1% lower drag than the Cannondale, the Cervelo with 2.5% less drag, and the Felt with 1.1% less drag. These numbers are in proportion and comparable to the early claims on the Cervelo and Felt of 1.5% and 2.2%, respectively (assuming 10% rolling resistance power in the case of Felt's claim). Kuota now has an outside claim to the "aero" table, but none of the other frames can claim an advantage over the Cannondale yet.

Moving to 10 degree yaw, the aero frames start to really shine. The 'dale is now tied with sad Stevens for last @ 0.336. The Felt's surged ahead @ 0.322, then the Kestrel @ 0.324, then the Cervelo @ 0.325. The Merida, Kuota, and Canyon are all stuck in no-man's land between 0.331 and 0.332. Too bad, at 5 degrees the Kuota looked as if it might have been a contender.

The Felt leads with a 4.3% power advantage @ 10 degrees. The ancient Kestrel is surprisingly strong with 3.7%. Cervelo here shows a 3.4% power advantage versus the Cannondale. The rest of the pack is far behind: less than a 2% advantage for the Merida, Stevens, and Canyon, with the Stevens still unable to pass the Cannondale.

Out @ 20 degree yaw, I'd be worrying about the cross-wind handling of the aero frames. But here is where the Felt and Cervelo really show the big power gains: 0.305 versus 0.330 for the 'dale (finishing ahead of 0.331 for the Canyon, amazingly). So if you want to claim big advantages for an aero frame, do it at 20 degrees yaw. But is 20 degree yaw realistic? I'll address this in a bit.

Overall, only the Cervelo, Felt, and Kestrel really score conclusively as "aero" frames. The others are all marginal or full-on faux. Filling in the seat tube and declaring aerodynamic advantage isn't doing much if you're leading off with a big fat head tube and down tube. Well, it may trigger the placebo effect, the rider convinced he's slicing through the wind faster than ever before. And maybe he is: placebo is among the most effective performance enhancers.

The rankings don't surprise me, really. The head tube of the Cervelo is a thing of beauty: slim and contoured. The Felt is also a pretty thing to behold. The Euro frames, though, subscribe to the "fat head tube, fat down tube" school of stiffness-über-alles, and since the head tube is the first thing seen by the wind, if you lose the head tube/down tube battle the war is lost. The seat tube isn't going to save you.

But what about the wind yaw number? Wind yaw was tested out to 20 degrees. A 20 degree wind yaw with a rider going 45 kph corresponds to a cross-wind of 16 kph. If weather stations are reporting winds this strong, handling in the cross-wind becomes a real concern: I'd be more worried about dicey handling from broad frame tubes than about a few percent difference in wind drag.

Even so, meteorological stations are at a standard height of 10 meters above the ground, and we care more about the 0.26 to 0.9 meter altitude where the frame lives. To translate wind speeds higher up to those closer to the ground we need to account for shear, and for that we can use the Hellman formula:

(v / v10) = (h / 10 meters)^α,

where v10 is the wind speed at 10 meters, and v is the wind speed at the desired altitude. This model roughly applies up to a few hundred meters above the ground.

In the model, α describes how rapidly the wind speed drops off approaching the ground, and depends on the nature of the terrain. For "neutral air above human inhabited areas", which I think describes much bike racing, α = 0.34 is recommended. So if I want to evaluate the average of the square of the wind speed from 0.26 meters to 0.9 meters, I integrate (z / 10 meters)^0.68 over z / 10 meters from 0.026 to 0.09, divide by (0.09 ‒ 0.026), then take the square root, yielding 0.38. So the "effective" crosswind at this height is more like 38% of the wind at 10 meters altitude (assuming the wind is partially blocked by ground objects like trees or buildings, etc). This means that 16 kph wind, quite strong, is more like 6.0 kph, yielding a yaw angle of 7.6 degrees.
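That height-averaging is easy to check numerically. A sketch in Python, using the closed-form integral of the power law in place of the integration described above:

```python
import math

alpha = 0.34             # Hellman exponent, neutral air over inhabited areas
p = 2 * alpha            # averaging the *square* of the wind speed

# RMS-average (h / 10 m)^(2*alpha) over frame heights 0.26 m to 0.9 m,
# working in units of 10 meters so the bounds are 0.026 and 0.090.
u1, u2 = 0.026, 0.090
integral = (u2 ** (p + 1) - u1 ** (p + 1)) / (p + 1)
factor = math.sqrt(integral / (u2 - u1))
print(round(factor, 2))  # fraction of the 10-meter wind speed, about 0.38

crosswind = 16.0 * factor                       # kph at frame height
yaw = math.degrees(math.atan2(crosswind, 45.0)) # about 7.6 degrees at 45 kph
print(round(yaw, 1))
```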

Here's a plot showing the yaw angle (x-axis) plotted versus height above the ground (y-axis) where I assume a 20 degree yaw angle at 10 meter elevation. The bike is shown to the same height scale. You can see the yaw angle at the height of the bike frame is well below the 20 degree value calculated from meteorological wind speed, and indeed below the value associated with the rider's body.

yaw angle versus height


Sanity check on this wind speed: if I'm riding @ 45 kph on the flats, and then I suddenly need to deal with this sort of wind, to what will my speed drop at the same power? Well, first the wind resistance to be considered needs to include the human body (before I was focusing on the yaw angle at the bike, which on the road, unlike in a wind tunnel, will vary with height). The body is around 2/3 the total, so the "center of wind resistance" will be higher, around a 1 meter altitude. Here the Hellman formula gives 7.3 km/hr of wind. Neglecting differences in rolling resistance, I solve to get my new speed at the same wind resistance power: v ( v + 7.3 )² = 45³, yielding v = 40.3 kph. So that's a fairly significant breeze, dropping my speed 5 kph, yet is enough to produce only an 8 degree yaw at 45 kph.
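That cubic is simple to solve numerically. A quick bisection sketch of the arithmetic above (the bounds and iteration count are my choices):

```python
# Solve v (v + 7.3)^2 = 45^3 for the new speed at the same wind power.
def f(v):
    return v * (v + 7.3) ** 2 - 45.0 ** 3

lo, hi = 30.0, 45.0        # f(lo) < 0 < f(hi) brackets the root
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

print(round(lo, 1))        # about 40.3 kph
```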

So I think the 20 degree yaw angle is way beyond typical conditions. Even the 10 degree yaw angle is on the high end (like riding next to open fields in a stiff breeze, or along an unobstructed coast road). 5 degree yaw is probably more applicable in a cross-wind, with a zero degree yaw often applicable as well. The important thing is to remember yaw at the bike isn't the same as yaw at the body or yaw up at a 10 meter reference altitude.

That old Cervelo number of 1.5%, or even the original Felt number of 2%, is looking a lot better with these assumptions.

Just yesterday the Tour of Qatar held a prologue time trial. To reduce travel costs, they require the riders to use their mass-start bikes in the race. Garmin-Cervelo was the only top team there with aerodynamic frames: the Cervelo S3. Even better, they had Jack Bobridge, who had just remarkably beaten Chris Boardman's 14+-year-old pursuit record. A super-lock, right? Except Bobridge, the top finisher for his team, could place only thirteenth, ten seconds down over the 2 km course. The race was won by Lars Boom on his very "non-aero" Giant frame.

So the message from the Tour test: it's all about the yaw angle. And in the real world, yaw angles tend to be fairly small. Maybe the Chung-on-a-stick will give us some decent real-world data of what the wind is like from the level of a bike frame.

So while aero bike frames may well make a difference of 1-3% in power, and that's a wonderful advantage to have, these claims of 10% or more of power appear to be unjustified. Well, there are those Litespeed numbers.... we'll need to see what happens with those. Maybe VeloNews, which promises to do more analysis this year, will conduct a similar analysis, testing the LiteSpeed and hopefully the Scott F01 which is in pre-production.

The Tour test also looks at stiffness, in the Tour magazine tradition of immediately dismantling test bikes and bending them. Here the faux-aero bikes do better, in particular the Canyon, which is generally considered to be designed specifically to do well in the Tour tests which are important for German bike companies. The Talon, on the other hand, finishes last in the stiffness tests. Yet reviews generally rave about how smooth the ride is on the Talon. I think this is no coincidence: a bit of compliance, not just vertical, goes a long way to smoothing out the ride. But that's another topic.

So which do I like? Of the Tour test bikes, the Cervelo gets the nod for its combination of aerodynamics and race-proven ride quality. Next probably the Kestrel, although it's at least 100 grams heavier than the Cervelo. Third place to the Felt, which has exceptional aerodynamics and looks really good, but is heavier even than the Kestrel. But looking beyond the test, that Litespeed C1 is intriguing, especially since it's now available in black, which should take a bit of weight off (white paint is heavy). And the Scott F01 is still pre-production, but by using the Kamm tail design approach popularized by Trek, Scott claims to attain aerodynamic efficiency without the elongated cross-sections we've come to associate with aerodynamic frames like the Felt. It claims to come in under 900 grams, and Scott, with its Addict, has a proven record of producing frames in the sub-800 gram range (real weight, not a fairy-tale number like you see from Pinarello, for example). And HighRoad riders used it extensively in the Tour last year.

Neil Pryde also has an intriguing design, which Cycling Plus liked (see PDF) in their Feb 2011 article. However, Cycling Plus didn't evaluate aerodynamics, just perceived ride quality.

My vote, if one is willing to wait, goes with the Scott. Not that I'm in the market for a new bike, my engine presently much more neglected than my chassis. The way for me to get faster is to "ride bike".

Saturday, February 5, 2011

filtering motorized ride segments with power estimation: finally done (for now)

I implemented the biexponential filter, along with power filtering, in my motorized segments detection code. This program, along with my other Garmin Perl codes, can be found here.

I made a few changes from previous descriptions. One is I changed the anaerobic time constant to 120 seconds. This is more the upper end, rather than the median, of typical numbers. This may provide a bit more margin against falsely identifying a segment as motorized.

The other change I made was to separate the altitude-smoothing time constant from the anaerobic time constant. Oversmoothing of altitude can result in the overprediction of power when a rider descends small dips. I set the default to 30 seconds. I also added some speed smoothing, but only 5 seconds. Without any speed smoothing there's too much effect from when the Garmin occasionally spits out a single point with a ridiculously high speed. But with much longer than 5 seconds I lost more of the train segments in Italy where the train was making frequent stops and passing through tunnels:

partial tagging of train travel in Italy


Impressive how the train maintains speed constant to the precision of the Garmin...

With these parameter settings, of all the rides I tested, segments which tagged as motorized with a speed threshold of 14 m/sec also did so with a power threshold of 6.5 W/kg, and vice-versa. This isn't too surprising, as I've not taken motorized transport up any significant climbs with my Garmin running (if I take motorized transport on my own, it's almost always a train, and the train well exceeds 14 m/sec). Were I to drive up a winding road like Old La Honda, the power criterion would make a difference. Even a car is unlikely to average 14 m/sec up the road, but on the other hand even a pro cyclist with unrestricted pharmaceutical access wouldn't be able to exceed 7.4 meters/second.
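The dual-threshold test itself is simple. Here's a hypothetical helper in Python (my sketch, not the actual Perl code; the thresholds are the ones quoted above):

```python
# A point is suspect if either the speed or the estimated specific power
# exceeds what a cyclist can plausibly sustain.
SPEED_MAX = 14.0   # m/sec
POWER_MAX = 6.5    # W/kg, estimated from speed and smoothed altitude

def looks_motorized(speed, power_per_kg):
    return speed > SPEED_MAX or power_per_kg > POWER_MAX

print(looks_motorized(16.0, 3.0))   # fast train on the flats: True
print(looks_motorized(6.0, 8.0))    # car climbing a steep road: True
print(looks_motorized(7.0, 5.5))    # strong cyclist climbing: False
```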

Back to altitude filtering: here are some examples of the biexponential filter I so painfully described. First, that trainer ride:

Garmin Edge 500 data from riding a trainer, with smoothing


Here you can see that the 30-second smoothing doesn't do as good a job as the 90-second smoothing, but it's still not too bad: the early rapid jumps in altitude result in a peak artificial grade of around 0.5%, or an extra 0.5 W/kg. In these data that's a VAM increase of around 120 meters/hour, or around 0.36 W/kg of body mass sustained for less than 30 seconds, given reasonable assumptions about the ratio of bike mass to body mass.
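To spell out the arithmetic behind those numbers (assuming g = 9.81 m/s², a riding speed of 10 m/s for the grade-to-power conversion, and a bike-to-body mass ratio of 10% — the speed and mass ratio are my assumptions for "reasonable"):

```python
g = 9.81                     # m/s^2, gravitational acceleration
climb_speed = 10.0           # m/s, an assumed riding speed (my assumption)
p_grade = g * 0.005 * climb_speed       # ~0.49 W/kg from a 0.5% artificial grade

vam_error = 120.0 / 3600.0   # 120 meters/hour converted to m/s
p_total = g * vam_error      # ~0.33 W per kg of total (rider + bike) mass
bike_to_body = 0.10          # assumed bike-to-body mass ratio
p_body = p_total * (1.0 + bike_to_body)   # ~0.36 W per kg of body mass
```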

And here's another ride, where I show the result of applying different time constants to the resulting VAM:

VAM from an outdoor ride, with different biexponential time constants.


Of course, the resulting power estimation knows nothing about cross-winds, head-winds, rough versus smooth roads, drafting, or mass changes. But the goal here is to simply identify "extraordinarily high" power. For that purpose I think it does fairly well.

P.S. It would be fairly easy to modify the code to add an estimated "Power" field to a FIT file for rides where you want power but didn't have a power meter. However, I'll pass on this one, as I don't think power estimation from speed & altitude is good enough: it would just result in data contamination.

Thursday, February 3, 2011

exponential filter impulse response for altitude data smoothing

Linear filters are characterized by their impulse responses. To test my exponential filter algorithm, I created an input data set with zero-value points randomly spaced in time. At zero I put a finite approximation to a unit impulse:

‒0.01, 0
0, 100
0.01, 0

I then ran this through my exponential filter in various ways.

One way is to run the points in time-forward order. This is the "causal" approach: I'm analyzing data as it comes, and I want a smoothed result as I am receiving data. Also causal is to run the smoothed curve again through the exponential filter. You'd expect this second smoothed result to be even smoother than the first time through, and of course it is.
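A Python sketch of this test — the gap-aware update rule below is a standard reconstruction, not necessarily identical to the actual implementation:

```python
import math
import random

def exp_filter(t, x, tau):
    """Gap-aware exponential smoothing for non-uniformly sampled data."""
    y = [x[0]]
    for i in range(1, len(x)):
        # smoothing weight depends on the time gap between successive samples
        a = 1.0 - math.exp(-(t[i] - t[i - 1]) / tau)
        y.append(y[-1] + a * (x[i] - y[-1]))
    return y

random.seed(0)
# zero-valued points randomly spaced in time, plus the impulse approximation at 0
t = sorted([random.uniform(-5.0, 5.0) for _ in range(200)] + [-0.01, 0.0, 0.01])
x = [100.0 if ti == 0.0 else 0.0 for ti in t]
y = exp_filter(t, x, 1.0)    # forward (causal) pass with tau = 1
```

The causal property shows up directly: the response is exactly zero before the impulse arrives, then decays by a factor exp(‒Δt/τ) across each gap.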

Then there's a single and a double application of the exponential filter in reverse time order. You'd expect the result to be flipped from the forward-time order: there's nothing special about forward versus reverse time which should make the shape different; if there were, there'd be something wrong.

As a third option, I do the filter in forward time and in reverse time, separately, and average the two results. This is very similar to running the filter for forward time and then running that smoothed result in reverse time.
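With uniformly spaced samples these variants are easy to compare side by side (a Python sketch; α is the per-sample smoothing weight, my choice for illustration):

```python
def exp_filter(x, alpha):
    """Exponential smoothing for uniformly spaced samples."""
    y = [x[0]]
    for v in x[1:]:
        y.append(y[-1] + alpha * (v - y[-1]))
    return y

n = 201
x = [0.0] * n
x[n // 2] = 100.0            # impulse approximation at the midpoint
alpha = 0.1                  # per-sample smoothing weight

fwd = exp_filter(x, alpha)                  # causal, forward time
rev = exp_filter(x[::-1], alpha)[::-1]      # reverse time
avg = [(f + r) / 2.0 for f, r in zip(fwd, rev)]   # zero-phase average
```

As expected, the reverse response mirrors the forward one, and the average is symmetric about the impulse: it rises, peaks at the impulse, and decays, with no time lag.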

Here are the results:



The x-axis is plotted versus my unitless normalized time variable u, which is the ratio of time to the time-constant of the exponential smoothing. The y-axis is then the filter response on a logarithmic scale.

Note that despite the random spacing of points, the impulse responses follow smooth lines. For forward time, there is no response until the impulse hits, then the result exponentially decays from a maximum value. When the filter is applied twice in the forward direction, it no longer jumps instantly to the maximum value, but builds rapidly yet continuously, then hits a peak, and fades from there. Reverse time, as expected, has the same shape as forward time but with the u-axis flipped. Given this, the result of averaging the forward and reverse time results is as expected: rising exponentially up, hitting a peak, then decaying exponentially down.

The Fourier transform of the impulse response of a linear filter is the filter's frequency response. If the goal is to smooth a function, you want the frequency response to drop off for high frequencies.

To test this, I created some new data, spaced at 0.01 intervals in u, again with an approximation to an impulse at zero. I ran the exponential filter twice in each case, as this generates the smoothest results without jumps: either twice in forward time, twice in reverse time, or once in each direction. For the last option, I either averaged the forward and reverse results applied separately to the original data, or I ran the forward direction on the original data and then the reverse direction on that smoothed result. I plot only the magnitude of the transform, not the phase:



Start first with the impulse. The frequency response of a perfect impulse is a flat line, and since the transform is a discrete Fourier transform rather than a continuous one, my finite approximation to a perfect impulse also produces a flat result. Impulses have equal contributions from all frequencies.

Look then at the results of the filtered signals. Somewhat surprisingly, or maybe not, the frequency response in each case is almost the same: following a 1 / frequency-squared trend. In the case of applying an exponential filter twice, while the options differ in phase (which I'll show next), the magnitude of the frequency response is the same. The magnitude describes smoothness, while the phase describes delay, and while they have different degrees of delay, the smoothness is the same. Okay, there's some difference at the highest frequencies at the edge of the plot, but that's due to interpolation errors when applying the filter twice. The first exponential smoothing should result in an instant jump from zero, but since I interpolate between samples, there's some "leakage" into negative time for the forward sweep, or positive time for the reverse sweep. The averaged forward-reverse filter is less prone to this interpolation error.
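This 1/frequency-squared trend is easy to reproduce numerically: apply a discrete exponential filter twice to a unit impulse and compare the DFT magnitude against the filter's own analytic response, which rolls off as 1/ω² well below the Nyquist frequency. A numpy sketch (α is a per-sample smoothing weight of my choosing):

```python
import numpy as np

def exp_smooth(x, alpha):
    # first-order IIR: y[i] = (1 - alpha) * y[i-1] + alpha * x[i]
    y = np.empty_like(x)
    acc = 0.0
    for i, v in enumerate(x):
        acc += alpha * (v - acc)
        y[i] = acc
    return y

n = 4096
x = np.zeros(n)
x[0] = 1.0                    # unit impulse
alpha = 0.05

h2 = exp_smooth(exp_smooth(x, alpha), alpha)   # two forward passes
H = np.abs(np.fft.rfft(h2))                    # magnitude of frequency response

# analytic magnitude of one discrete pass, squared for the two passes
w = 2.0 * np.pi * np.arange(H.size) / n
H_ref = (alpha / np.sqrt(1.0 - 2.0 * (1.0 - alpha) * np.cos(w)
                         + (1.0 - alpha) ** 2)) ** 2
```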

Analytically the Fourier transform of an exponential convolution is about as simple as Fourier transforms get... the complex frequency response is:

F(ω) = 1 / [ i ω τ + 1 ]

where i is the unit imaginary number.

For the reverse direction, I flip ω → ‒ω :

F(‒ω) = 1 / [ ‒i ω τ + 1 ]

Each of these has the same magnitude:

|F(ω)| = 1 / sqrt[ω² τ² + 1 ]

So if I apply either twice I get a magnitude (since the magnitude of a product of complex numbers is the product of their magnitudes):

|F(ω)²| = 1 / [ω² τ² + 1 ]
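A quick numeric sanity check of these magnitude identities (Python; τ = 30 seconds and ω = 0.1 rad/s are arbitrary example values):

```python
tau = 30.0                    # seconds; an arbitrary example time constant
omega = 0.1                   # rad/s; an arbitrary test frequency

F = 1.0 / (1j * omega * tau + 1.0)   # single forward pass
mag_single = abs(F)                  # expect 1 / sqrt(omega^2 tau^2 + 1)
mag_double = abs(F * F)              # expect 1 / (omega^2 tau^2 + 1)
```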

The phase, on the other hand, is:

φ[F(ω)] = ‒arctan[ωτ]

where φ specifies the phase, and arctan is the inverse tangent. The sign flips on the phase for the reverse direction. If I apply a forward exponential twice, I therefore get double the phase (and therefore double the delay):

φ[F(ω)²] = ‒2 arctan[ωτ]

On the other hand, if I apply a forward followed by a reverse exponential, the phases cancel:

φ[F(ω)F(‒ω)] = 0

Zero phase is what we want: we want no time lag, just smoothing. So from a smoothing perspective, the double-forward or double-backward smoothing is the same as forward-backward. But from a delay perspective, the last option is the clear preference.

However, instead of doing forward then backward on the result (or vice-versa), I choose the average of the forward and backward. This results in a Fourier transform:

(F(ω) + F(‒ω)) / 2 =
[ 1 / [ i ω τ + 1 ] + 1 / [ ‒i ω τ + 1 ] ] / 2 =
1 / [ω² τ² + 1 ],

which is the same as the forward and reverse applied sequentially. Again the phase is zero (the imaginary part is zero). The advantage is in dealing with the first and the last point of the sequence, where the averaging approach is guaranteed to treat each end the same, but the sequential filter application can treat the end points differently.
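And a numeric check of the phase relations and of the equivalence between averaging and sequential application (Python; again τ and ω are arbitrary example values):

```python
import cmath
import math

tau = 30.0                    # seconds; an arbitrary example time constant
omega = 0.1                   # rad/s; an arbitrary test frequency

F_fwd = 1.0 / (1j * omega * tau + 1.0)    # forward-time pass
F_rev = 1.0 / (-1j * omega * tau + 1.0)   # reverse-time pass

phase_fwd = cmath.phase(F_fwd)            # should be -arctan(omega tau)
phase_seq = cmath.phase(F_fwd * F_rev)    # sequential passes: should be zero
F_avg = (F_fwd + F_rev) / 2.0             # averaged passes
```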

Here's a plot of phases from a discrete Fourier transform applied to my sample data (20 seconds at 100 points per second, with the impulse approximation half-way through):



Things are behaving as expected from the equations out to extremely high frequencies where the "mesh" makes itself evident.

Okay, way too much on this, I suppose. The end result: averaging forward and reverse exponential smoothing yields a nice 1 / [ω² τ² + 1 ] low-pass filter with no phase lag, at the computational efficiency of exponential filtering. So that's what I use for altitude data.