Sunday, February 28, 2010

Dynamic pressure and barometric altimetry: simulation results

In my last post on the subject of barometric altimetry, I discussed how a moving altimeter may report a lower altitude than a stationary one, due to the dynamic pressure of the air piling up in front of the moving cyclist. The magnitude of this altitude error is determined by a coefficient between zero and one relating the effective wind speed at the sensor to the cyclist's speed.

Before that, I described how I'd combine a barometric altitude signal with a GPS signal to get the best of both: the short-term responsiveness and reliability of the barometric altimeter with the general accuracy of the GPS (at least when there's a signal). I'd applied this to randomly generated data, which were derived using a simple pacing model with the bike power-speed equations.

The effect of dynamic pressure on measured altitude is simply derived from Bernoulli's equation:

Δz = ‒½ ( kdp v )² / g,

where Δz is the error in altitude, v is the speed relative to the wind, kdp is the coefficient dependent on the altimeter position and orientation, and g is the acceleration of gravity.
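
As a sanity check on magnitudes, here's that formula in a few lines of Python (a sketch; the function name and the example numbers are mine):

    G = 9.8  # gravitational acceleration, m/s^2

    def altitude_error(v, k_dp):
        """Apparent altitude error (m) from dynamic pressure at speed v (m/s)."""
        return -0.5 * (k_dp * v) ** 2 / G

    # e.g. at 10 m/s (36 km/h) with k_dp = 0.5:
    print(altitude_error(10.0, 0.5))  # about -1.3 m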

Since the smoothed average of the GPS signal is used to correct for errors in the barometric altimeter, how the dynamic pressure affects the derived altitude with my algorithm depends on how speed fluctuates, not just on the speed itself. So to account for this, I changed my assumption about power. Instead of a nice steady power, I assumed power fluctuates between 0% and 200% of the "optimal" value, varying sinusoidally with a period which I varied. Furthermore, just to be pedantic, for each period of power I used each of four phases. I did each of these "simulations" for kdp of 0 (no stagnation pressure effect) and 0.5.
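
For concreteness, the power model for the oscillating runs looks like this (a sketch; the function name and the example wattage and period are mine):

    import math

    def power(t, p_opt, period, phase_deg):
        """Power oscillating sinusoidally between 0% and 200% of p_opt."""
        return p_opt * (1.0 + math.sin(2.0 * math.pi * t / period
                                       + math.radians(phase_deg)))

    # four phases for each period, e.g. a 60 second period at 250 W optimal:
    for phase in (0, 90, 180, 270):
        print(phase, power(10.0, 250.0, 60.0, phase))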

Here's the result, where I only averaged over times starting at 100 seconds, to avoid the details of the initial acceleration having too much effect:

simulated errors in derived altitude

The dotted lines show the result with the smooth power. The oscillating power results are shown by the dots connected by the lines. Note I'm reporting root-mean-squared error, not mean error, so positive and negative errors fail to cancel.

There's a bit of an error in the altitude in every case: around 174 cm. But the effect of the dynamic pressure is very small: just a few cm.

Were the effect larger, I could propose methods to try and correlate speed with altitude, and correct for that. But this error is so small, such an effort would likely do more harm than good. Indeed, on these data, when I tried to apply such a correction, the error increased: there's just not enough data in a typical ride to see such a small effect. So the conclusion is: don't worry about it.

Friday, February 26, 2010

Powertap torque test: more wheels

The torque test I described in my previous post indicated something was wrong: the Powertap was reporting substantially less torque than I was applying. So something was up. It could be one of three things: a problem with my hub, with the rest of the drivetrain, or with my test protocol. An example of a test protocol problem would be an error in determining the weight I'd loaded on the pedal, or a problem orienting the frame, or things flexing in a way which modified the actual torque applied by hanging the weight from the spindle. Lots of possibilities.

So when doing experiments, you always want a "control case". It's better to compare the results of two similar experiments than to compare the result of one experiment with theory.

So I borrowed a wheel, one I suspect works well, and tested that. Here's the result:
test of alternate PowerTap wheel

It's nice having access to a second PowerTap: these things aren't cheap.

You can see from the results that this one tests better in two ways:
  1. the slope is closer to 1
  2. the intercept is substantially closer to zero

But the differences really go beyond this. Before loading the weight on the pedal, I'd zero the torque. The Powertap never measures zero torque: it's designed to read 512 lb-in when unloaded, then reads more than this when torque is applied to the hub. But this "bias" value isn't perfectly controlled, and varies with conditions. Saris considers a normal range for this bias value to be from 500 to 525 lb-in; outside this range, Saris wants the hub back for servicing. You can read this "raw" number in the test mode of the Powertap head unit (the Cervo, or "little yellow computer"). So to know what "zero" is, the hub needs to be told when it's in a zero-torque state. This is normally done automatically by the Cervo when it detects coasting (positive speed with zero cadence). During my measurements, I coast the wheel, but then manually force a zero after the wheel has stopped, before applying the brake in preparation for loading the pedal spindle.
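
In other words, as I understand it (my reading of the behavior, not a Saris spec), the arithmetic behind the readings is just:

    NOMINAL_BIAS = 512         # lb-in reported when unloaded, by design
    NORMAL_RANGE = (500, 525)  # Saris's acceptable range for the bias

    def applied_torque(raw, zero):
        """Torque at the hub (lb-in): the raw reading minus the recorded zero."""
        return raw - zero

    def bias_ok(zero):
        """True if the unloaded reading falls in Saris's normal range."""
        return NORMAL_RANGE[0] <= zero <= NORMAL_RANGE[1]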

With my wheel, I'd zero the torque, apply the brake, load the spindle, find the pedal orientation which maximized the torque reading, remove the weight, and let off on the brake. So the reading should be zero again, right? Not on my wheel: it would vary, but it read as high as 9 lb-in. With the borrowed wheel, it never exceeded 1 lb-in. That's a huge difference between the wheels. This alone would explain the large difference in the intercepts of the regressions done to the data from the two wheels.

But then the question is what one would expect from the test. After all, I'm applying the torque to the pedal, not the hub directly. Maybe not all of the torque transfers. Indeed, when riding, drivetrain losses result in less torque being transmitted to the hub than the idealized equation predicts.

But that's riding. In this test, the rear wheel is prevented from rotating by the brake. There may still be some difference in transmitted torque from the ideal, but many of the effects which might cause power loss in dynamic pedaling aren't present in this case. I admit, it's not obvious, but the consensus is that a static test should come a lot closer to ideal than the fraction of power transmitted to the hub during normal riding.

test of alternate PowerTap wheel

Another plot from this test: here I show the difference between the measured and applied torque versus the rear cog. The deviation seems largest in the 39/12 and 39/13, but other than that, there was a persistent error of around one to two lb-in. Some of this error may have been an inability to nail the optimal orientation of the pedal. The display, after all, only reads to 1 lb-in precision, so errors smaller than this aren't significant. Overall, I'm quite pleased with the results of this test.

Another result: this one reported by John Meyers on BikeTechReview. He did his test a bit differently: he put the bike on a trainer suspended between two desks, used the 39/25 exclusively, and hung precision masses from the pedal. So he varied the weight, while I varied the leverage. In either case we varied applied torque.

test of alternate PowerTap wheel

Nice and linear! His intercept is quite close to what I got with the borrowed wheel, and his slope is amazingly close to one.

So the odd wheel out in this experiment is mine. Something's up. I then did one more test: I put the Cervo into test mode, then alternately monitored the torque for a minute and applied then removed ad-hoc torque to the hub, either by grabbing and rotating the cassette, or by applying force to the pedal.

With my wheel, it started reading 523 lb-in (very close to the upper bound of the normal range, which is from 500-525). Then I applied torque and removed it. Now it was varying from 523 to 525. Again: apply and remove torque. Now it was reading 517.

I tested the wheel a bit more later. The reading would shift when I applied and removed torque: this time, I got numbers which varied from 519 up to 525. It was generally stable when idle; it would shift only when torque was applied then removed. Seems like something's loose in there, huh?

With the borrowed wheel, applying then removing torque didn't have much of an effect. It generally wanted to hang out in the 505 to 507 range. Sure, I'd prefer it be more stable than this: I'd prefer it be rock-solid in the generally constant conditions of the indoor test. But I'll live with that. 517 up to 525, on the other hand, and we're talking serious power errors.

I informed Saris of these results. I'll see what they say. Hopefully I can get the wheel looked over, so I can trust my power numbers again. On the other hand, it's sort of nice not worrying about power numbers, to be honest.

Monday, February 22, 2010

Powertap torque test

I've been depressed at my inability to produce the same power up Old La Honda as I used to. Times were fine, just power was off. A lot can affect times, but power is rock-solid reliable, right?

So I finally tested my Powertap, using the following procedure:
  1. I filled a bucket part way with water, then twice added more, in each case measuring the mass (actually, the weight) with my Ultimate hanging scale before and after the test, and averaging. Mass values: first fill = 9.765 kg; filled further = 12.56 kg; further still = 13.955 kg.
  2. I hung my bike from my Park stand. I shimmed the stand so the frame measured vertical with a plumb bob hung from the top tube; I wanted the string to just brush the down tube. My bike (Ritchey Breakaway) has round steel tubes, so this works.
  3. For each gear combination tested:
    1. shift into gear of interest, spinning the rear wheel
    2. bring rear wheel to a stop
    3. zero torque (hold right button on PT)
    4. hang bucket from pedal while holding down rear brake to keep wheel from rotating.
    5. find a pedal orientation which maximizes torque reading (make sure bucket doesn't rub against chain or chainring). My estimate is I was generally able to come within 1 lb-in of the optimal value.
    6. record torque
    7. remove bucket and check torque reading

It was often the case that the torque read a significant positive value after removing the bucket. In some cases, I remeasured, but did not discard the previous values. Since the PT doesn't report the magnitude of negative readings, it would bias the results to discard points based on large positive offsets only.

My procedure was a bit sloppier on earlier measurements (the two lighter masses) than on the latest measurement (the heaviest mass). However, this didn't significantly affect the results, so I combined the results for analysis.

I calculate the theoretical torque as follows:

torque = M g L Nr / Nf,

where M g = weight of bucket with water, L = crank length, Nf = teeth on the front chainring, and Nr = teeth on the rear cog.
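
In code, with a unit conversion so the result can be compared to the lb-in the head unit displays (a sketch; the 175 mm crank length and the 39/25 gear here are just example values):

    G = 9.81                # m/s^2
    LB_IN_PER_N_M = 8.8507  # lb-in per N*m

    def hub_torque(mass_kg, crank_m, n_front, n_rear):
        """Theoretical hub torque in N*m: torque = M g L Nr / Nf."""
        return mass_kg * G * crank_m * n_rear / n_front

    # e.g. the 12.56 kg fill on 175 mm cranks in a 39/25:
    t = hub_torque(12.56, 0.175, 39, 25)
    print(t, t * LB_IN_PER_N_M)  # about 13.8 N*m, or 122 lb-in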

I plot the theoretical value versus the reported value and fit:

measured versus modeled torque

You can see two things from the plot. First, the Powertap is considerably underreporting the torque. Torque doesn't have much meaning to me, but power does, so on the plot I convert torque to the equivalent Old La Honda power: the Powertap underreports power by around 10 watts. Second, and worse, the scatter in the data is huge.

The scatter is highlighted in a plot of the residuals:

measured minus modeled torque

Points are indicated by the gear ratio used to generate them, color coded by the mass load. No obvious pattern: just scatter.

On the last measurements I did, with the heaviest load, I recorded the torque the Powertap read after I removed the weight. This should have been zero, obviously, as I'd zeroed it out under the same unloaded condition before putting the weight on the pedal spindle. But it was all over the place. To quote Andrew Coggan, perhaps the expert in the field, who responded to my post on Wattage: your PowerTap isn't 100% "healthy". Here's the residual versus this post-measurement torque offset. There is a clear correlation.

measured minus modeled torque


So back to Saris for torque tube recalibration: it appears the strain gauges have fatigued. My guess is I'm seeing hysteresis: the Powertap isn't just reporting the present torque, but some residual of a previous torque measurement. In test mode, it reports 519 lb-in as the internal offset, higher than the nominal 512 lb-in.

So what do I conclude with my power data, going back to 2007? Not much. It's not reliable. Bummer for someone who loves doing data analysis.

The moral of the story: if you're depressed about the numbers your power meter is telling you, it might not be your legs; it could be the meter. So check those things. Don't believe everything you're told, even if it's from computer-controlled strain gauges. And test those PowerTaps!

Future post: I check Cara's Powertap wheel. It seems to behave much better in informal stomp testing. I look forward to seeing how it responds to the water bucket.

Sunday, February 21, 2010

Summerson's Guide to Climbing, California Edition

John Summerson's book, The Complete Guide to Climbing by Bike, is a wonderful reference to some of the best cycling climbs in the United States. But the US is obviously vast, and a national guide, while useful for planning trips, may be less useful than guides with a more regional focus. So he's started producing a regional series, as well.

First was a book on the Southwest. This is really good, but is mostly a subset of the national guide, with a few more climbs added in. Still recommended.

Far closer to home for me is the newly released California edition. Wow! What a resource! If you ride in California, this is a must-buy. Even if you find just one climb here you weren't aware of, one gem of a climb to add to your "done that" list, the cost of the book is more than justified. And I can't imagine any rider not finding some inspiration here.

Okay, now to some details.

First, the route profiles... at first I was taken aback at the relatively low resolution of the profiles in the first book. In the CA book, the resolution has improved, but these are still below the level one can get from many sources, such as Lucas Pereira's gradiometer page, or from Strava. But the profiles work! The reason is that for each segment, he lists the range of grades in that segment. How much information can one process, anyway? Basically you get the feel for the climb from John's representation, whereas from full profiles, it can be hard to grasp all the detail. So the profiles work for me.

Then there are the descriptions. Some of these are fantastic, really conveying John's passion for climbing and his ability to find a unique personality in each individual road. But some of the descriptions do fall a bit short, or are insufficient to navigate the road. For example, Mt Tamalpais in Marin and Lomas Cantadas in the Berkeley Hills both get short treatment. Lomas Cantadas in particular is quite a navigational challenge: we did it in the 2008 Low-Key Hillclimbs. But these are the exception: room for improvement in version 2. Most of the climbs are excellently described given the space limitations.

Then the numeric climb ratings. I love numeric ratings, and John's formula strikes a nice balance between simplicity of calculation and accuracy. Now "accuracy" needs to be put in heavy quotes, as each rider has their own idea about how distance, altitude, and steepness each contribute to how hard a climb is. So any formula, no matter how quantitatively precise, is going to have a heavy dose of the subjective. I'll comment more on the formula in a bit, but first some background.

First there's the concept of "climb difficulty": gaining 1000 feet in 2 miles may be tough, yet it may be less tough overall than gaining 1000 feet in 200 miles, since distance alone adds difficulty. But a climb rating should cover only the "climbing" component. So the 2 mile climb is rated harder than the 200 mile climb, even if finishing the 200 mile ride is far more challenging.

Next a comment on the definition of a "climb". John described a definition in the first book, but liberalized it for the CA edition. And justifiably: Mount Hamilton was considered three separate climbs in the first guide due to the descents, but is included as a single climb in the CA version. I may comment more on this in a later post, but Hamilton is certainly a climb on its own, so I support the liberalized definition. Unfortunately the stricter definition is still listed on page 14 of the CA edition. He just fails to comply with it in the climb selections.

Then on the statistics: John derives elevation data from digital map data using measured GPS coordinates. Unfortunately, small errors in a GPS reading can result in large errors in altitude, so the numbers aren't always good. See, for example, a discussion of Motionbased Gravity, which applies this same technique.

One example is Old La Honda. The canonical number for Old La Honda out of Woodside is 1290 feet, which checks out with Google's "terrain" map. That's Lucas Pereira's number, for example. John has 1344 feet.

Kings Mountain Road is a bigger issue, where Lucas has 1540 feet, which also checks with Google. But John's number, 1691 feet, is clearly too high. John starts the "climb" at Highway 84, rather than at Greer Road or Entrance Way, with the segment between Highway 84 and Greer having negligible grade. So something's up: I'm not sure what. So if you really care, you might want to check altitude numbers against Google Maps.

Then there's the difficulty formula itself. He describes the formula as the square root of grade, multiplied by altitude difference (net, not gross), multiplied by an adjustment for peak altitude, multiplied by an adjustment for the fraction of the road which is unpaved, multiplied by a "bonus" if the road meets a certain criterion for non-uniform grade. All good, although I'd prefer the nonuniformity component be continuous rather than all-or-nothing; I understand the need for simplicity, though. But then in the listed ratings, he uses the grade itself, rather than the square root of the grade. Evidently he decided the square root failed to give enough credit to truly steep climbs. But I feel the square root works better for shallower grades. I may comment on the formula, and propose a compromise, in another post.
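
To make my reading of the formula concrete, here's a sketch; none of the names or default adjustments come from the book, and use_sqrt toggles between the description (square root of grade) and what the listings appear to use (straight grade):

    def climb_rating(net_gain, grade, alt_adj=1.0, unpaved_adj=1.0,
                     nonuniform_bonus=1.0, use_sqrt=True):
        """Summerson-style rating: net gain times a grade factor times adjustments."""
        grade_factor = grade ** 0.5 if use_sqrt else grade
        return net_gain * grade_factor * alt_adj * unpaved_adj * nonuniform_bonus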

But altogether, I love this book. It's a great resource, one which I hope becomes available in every bike shop. Indeed, anyone getting into cycling should buy a copy. And everyone already into cycling should buy a copy, as well.

Saturday, February 20, 2010

Coastal Trail Runs: Rodeo Beach 18 miles: FAIL

Today was the Rodeo Beach 18 mile trail race. Unfortunately, I previewed the course map and I thought I knew where I was going. Sure, the map's crude, but that's no excuse. I have a better map I could have compared it to. Or I could have, err, read the course description. But I thought I knew the route.

But after descending Coastal Trail onto Bunker Road I didn't see any ribbons, which served as course markers. Weird. I ran up the road, where I was fairly sure the course went, and still didn't see any. I scanned to the right, where there was a dirt road paralleling Bunker Road. Nobody there, and no ribbons there either. Finally I saw a ribbon ahead on the left -- cool. As I approached, though, a runner appeared almost out of nowhere. When I got there I saw he'd come off a side-trail onto Bunker Road.

How did he get there? I thought back to the Coastal Trail descent and didn't recall seeing any junctions. Maybe there'd been a frontage trail I'd missed.

So I finished the loop, nominally 13 miles, then ran the 5 mile loop which was the second part of the 18 mile course. I felt decent at the finish. There, I was told I'd gone under distance, and had to run out and back on the road on which I'd come to make up the lost distance. "Run to the caution sign", I was told. Then come back, presumably -- I didn't wait around for further instructions.

But I didn't know where the caution sign was. I ran for a while, to a stop sign, then turned back. Maybe the caution sign was further up the road, but I don't recall ever seeing one there.

Overall what should have been a personal victory, my longest trail run ever, my longest foot race ever, turned into a crash and burn. As far as I'm concerned, going off course is DNF. Maybe they'll give me a time, maybe not. I didn't wait around. I ate a few bagel pieces, filled my bottle, and was gone.

On the way back I stopped at the Coastal Trail to see where I could have missed the turn-off. Sure enough, marked in pink-and-black striped ribbon, was a side trail. I'd been so focused on what I was doing I'd somehow totally missed it. Incomprehensible, really. But I'd made many wrong turns before: at a 300 km brevet in Austin I rode an extra 20 miles. At Davis Double I'd ridden up an extra 8 mile climb, then back down when I realized I was lost. At Climb to Kaiser I'd descended back from the pass using the outgoing route instead of the return route, until I arrived, miles later, at a rest stop. They drove me back to where I'd gone off course. At Kaiser again, later, I missed the turn to the summit itself, until I realized my mistake and backtracked. That one only cost me a few miles -- 15 minutes? A lot of time, still, on what is essentially a race. I'm simply not very good at following directions.

So as a training run, a good day today. As a race: a total, unmitigated failure. I really may as well have saved my entry fee and done a run on my own. This after my last trail race where I forgot to register. Great.

So I ran something approximating 18 miles today. Under different circumstances, that would be a good thing.

Oh, one good thing today. I did a little hike with Cara. That was nice: up to a viewpoint, then back. The first hike since she injured her knee in April 2009. Progress.

Thursday, February 18, 2010

Caltrain weekend service: a proposed schedule

Caltrain in Mountain View

SportVelo held its winter training camp Thursday through Sunday. On each of the first two days, I boarded a Caltrain Limited from San Francisco with my bike, stopped off in either Starbucks or Peets, and easily made it well in advance of the 9:15 am start time for the training rides. Caltrain does fairly well for commuters on traditional workdays. Honestly I don't understand why so many persist in driving. Indeed, as I type this, I'm on the train back to San Francisco from my office in Palo Alto.

It's the weekends that're the killer. For various reasons I wasn't able to make the weekend rides, but I would have liked to have the option. This is especially true for Sunday's "Queen Stage".

Except Caltrain's earliest train on weekends leaves the City at 8:15. And not only does it leave late, it's slower than any weekday train, stopping not only at all the weekday stops, but additionally at Broadway in Burlingame and at Atherton, stops which have been eliminated from the weekday schedule but were kept on weekends for political reasons. This last point is non-trivial, as each stop adds 2 minutes, and the 8:15 out of San Francisco arrives in Menlo Park at 9:14. Had it been even four minutes earlier, not having made these extra two stops, I might have been able to make the start of the ride. Barely. But 9:14: forget it.

The fact is I've not taken a Caltrain weekend train in months. It's simply not a useful service to me. If I have an activity I can start late in the morning, at a time of my choosing, and can finish at a time also of my choosing, and the hour-plus trip each way from the City to the Peninsula on the super-local weekend trains isn't an issue, then fine. But let's face it: most people aren't willing to deal with these sorts of restrictions.

And it isn't as if people aren't on the road on weekends. Every Saturday and Sunday 101 is jammed with cars below my humble abode on Potrero Hill. Every weekend day 101 is crowded, all through the northern Peninsula, the mid-Peninsula, the South Bay, and down into the Coyote Valley around Morgan Hill: the entire Caltrain route, which extends from San Francisco down to Gilroy. All of these people jammed in their cars are potential customers of the far superior mode of travel which is rail. Here I am now on the train writing this blog entry. Maybe some people can do that while driving on 101 (it wouldn't surprise me), but it's hardly recommended...

The irony is Caltrain views my nonparticipation in the farce it calls weekend service as a lack of justification for expanding that service. Caltrain isn't very market-oriented. It has more of a government-service mindset: that demand is fixed, and its responsibility is to meet that fixed demand. Indeed, Caltrain's long-range plan forecasts a demand which varies only with economic activity. It makes no association between quality of service and ridership numbers. If a service cut is met with a decrease in demand, Caltrain's view is that the service cut was clearly wise, having correctly anticipated a trend which was totally beyond its control. This sort of simplistic foolishness is no surprise, since few if any members of the Joint Powers Board, Caltrain's controlling body, rely on the train as primary transportation. They all have reserved parking at the Caltrain offices in San Carlos.

So I decided to propose my own schedule. Rather than ask what schedule is needed to meet current demand (the population of present train users), I asked myself what sort of service would be needed to attract more of the true customer base: the set of those parked on 101.

People want to get where they need to go in a reasonable amount of time, when they want to go there. So how long are they willing to wait for the next available train? I'd say 15 minutes on average, 30 minutes worst-case, is the absolute max. More than this and people feel they've spent too big a chunk of their day on the platform. If you can't provide at least this level of service, I feel, it's almost a waste of time. People will have already made other plans.

Second, you simply can't have every train making every stop along the way. The train need not be as fast as driving: there is a quality factor as well as a quantity factor to the time involved. But the slack people are willing to cut rail is finite: over an hour between San Francisco and Palo Alto is simply too much.

Fortunately, stopping at every station is rarely necessary. For those who are traveling to San Francisco, San Jose, or another key Caltrain "hub", there are typically several stations within close proximity. So whether it's by car, bike, taxi, or (heaven forbid) even foot (a favorite mode of mine for trips of up to three miles), few people are strongly tied to a single station. For example, from California Ave, where I work, I can easily reach Mountain View, San Antonio, California Ave (obviously), Palo Alto, or Menlo Park: by bike, they're all within 15 minutes or so. Similarly, from home on Potrero Hill, the 22nd St and 4th & King stations are both easily accessible.

So as a result, heavy use should be made on weekends of the "Baby Bullet" service which has been so successful on weekdays. The Baby Bullets get not just from San Francisco to Palo Alto, but all the way to San Jose, within an hour. That's as attractive on weekends as it is during the week.

Then, for those requiring local service, the "timed transfer" limited approach which Caltrain uses during commute hours should be extended to the weekend schedule. The idea is that two trains cooperate, one doing local service on the southern half of the route, the other handling local service on the northern half. Those requiring a local level of service on both halves can then transfer mid-route. For most riders, at least one end of the trip can be a major station rather than a minor one, so the transfer usually isn't needed.

Next, Gilroy service is currently at a minimal, "forward commute"-only, weekday-only level. Wouldn't it be nice to head down to Gilroy from the City for a weekend day, or even the whole weekend? Or to head up to San Jose or San Francisco from Morgan Hill? Obviously people think so: the traffic on 101 can be horrendous, even on a Saturday or Sunday. So I propose service be extended all the way to Gilroy on an hourly basis.

Note also that I assign a relatively high priority to Santa Clara. This is because Santa Clara is the connection point to San Jose airport, and surely demand for rail to the airport will increase with an improvement in rail service. I also maintain a priority to 22nd St in San Francisco, as it effectively increases parking capacity on the city side. And of course, Millbrae is the primary BART connection point.

Here, then, is my proposed schedule:

Proposed weekend Caltrain schedule
click on image for PDF version

I know -- not a chance. Too expensive. Yadda yadda. Yet we dump so much money, so many resources, and eat so many hidden costs from our heavy reliance on automobiles. And when we do invest in rail, too often it's in financial boondoggles like BART to SFO (better covered by a shuttle bus from Millbrae) or BART to OAK (currently well served by bus service from the Coliseum stop). Instead of focusing on "shovel ready" big-ticket projects, we need to bring the day-in day-out level of service on existing lines up to something resembling a first-world standard.

Wednesday, February 17, 2010

Weight Weenie $/gram record?

Cervelo has announced a new special project frame, Project California, a frame much like the R3 and R3-SL except with a claimed mass of a super-impressive 700 grams in size 54. The announced price for the California (which I'll call the "R-CA"), presumably for stock sizes, is $9000 US.

Project California
Damon Rinard shows the Cervelo Test Team the new Project California frame

Now, 700 grams is impressive and all. But I judge mass savings based on the marginal cost of attaining them.

Usually mass is confounded with functionality, but on the Cervelo R-series, with identical geometry and similar ride characteristics, I think it's safe to say the upgrade progression is mostly a weight weenie play. So I'll resist the temptation to compare it with the Guru Photon, a fully custom frame of comparable mass at half the price. Maybe the Cervelo is simply better. But compare it to the other Cervelos in the R-series, to which Thor Hushovd himself claims the ride is similar:

frame     list price     grams @ 54 cm
R3        $3100          867
R3-SL     $4000          803
R-CA      $9000          700


It's all about marginal cost, though. That is trivially calculated:

R3 → R3-SL: 64 grams @ $14.06/gram
R3-SL → R-CA: 103 grams @ $48.54/gram
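
Or, spelling out the arithmetic:

    frames = [("R3", 3100, 867), ("R3-SL", 4000, 803), ("R-CA", 9000, 700)]
    for (n1, p1, m1), (n2, p2, m2) in zip(frames, frames[1:]):
        grams, dollars = m1 - m2, p2 - p1
        print(f"{n1} -> {n2}: {grams} g saved at ${dollars / grams:.2f}/g")
    # R3 -> R3-SL: 64 g saved at $14.06/g
    # R3-SL -> R-CA: 103 g saved at $48.54/g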

To put $48.54/gram in perspective, a bottle of Odwalla juice costs around $3.50. But by this gauge, what's the real cost? Suppose I drink a bottle of Wholly Grain I don't need: 400 kcal. That will cause a weight gain of around 52 grams. At $48.54/gram, that's $2500. Wow -- that's a helluva cost multiplier! It's more than the typical price of my 860-gram Fuji SL/1 frame. Indeed, you can get a SRAM Rival-equipped Scott Addict at Bike Connection in Palo Alto for less, pedals included.

So how will climbing improve, day in, day out? Here's a typical weight variation, measured with a high-end body mass scale (from the DC Rainmaker blog):


Day-to-day weight variation for this guy, a highly trained triathlete who carefully weighs himself at the same time each morning, is up to around 800 grams. When riding, you sweat off around 500 grams per hour on a hot day. Even if you drink from a water bottle to replenish this, the weight is coming off your bike or you, one way or another. In the time it takes to climb Old La Honda, therefore, you're probably dropping around 150 grams: more than the mass difference between the R3-SL and "Project California". Just think: in the time you climb Old La Honda on that hot summer day, you increase the value of your bike by $7300. That's $6.75 per second of sweating.

And before you tell me weight on the frame is so much more important than weight on the body, why do you carry your water bottles on your frame as opposed to on a belt? Q.E.D.

I love weight weenieism: no question about it! But my limit is around $4/gram. With that as a crude limit I've gotten my race bike down to just below 5 kg. $48.54/gram? Insane... There are so many cheaper ways to save that mass: off the bike, off your shoes, off your helmet, off your clothing, maybe even off your body. And watch out for those Odwallas.

But I still admit I think it's cool.

Tuesday, February 16, 2010

MegaMonster Enduro 2008-2010 Pacing Analysis

Saturday was the 2010 MegaMonster Enduro, a fun 102 mile time trial which is the invention of Kevin Winterfield, with course assistance from Bill Bushnell. It really is a unique event for the greater San Francisco Bay area: ride 51 miles out on a road (Highway 25), turn around and ride back. Not that there haven't been wrong turns...

Low-Key volunteers ready to go
Low-Key Volunteers @ the MegaMonster (Cara Coburn)

This year we were sponsored by Hammer Nutrition, who sent us some great gel and Endurolytes and Heed. We supplemented this with an additional order of gel and Perpetuem "liquid food". We were ready! Unfortunately, in a bit of a snafu, the stuff didn't make it to the checkpoints, which thus featured cookies, pretzels, and water, similar to previous editions of the MegaMonster.

Since it looks like the MegaMonster will be back for 2011, I figured it would be fun to analyze how people did on the incoming versus the outgoing leg, so next year we can see the effect of Hammer Product. Of course, wind is a factor, so a proper experiment would compare Hammer to some sort of placebo (like a noncaloric drink). But we'll live with the data we have.

In 2008, riders started with no supplemental food, and got snacks at the stops. Here's a comparison of the outgoing times to the incoming times:

2008 split analysis

I omitted the two outliers (punctures, perhaps?), then did a regression: on average, riders and teams were 5.1% slower coming back than heading out. Actually, adding the outliers back in has little effect: the outliers effectively cancel.
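
The regression itself is nothing fancy: a least-squares slope through the origin relating inbound time to outbound time. A sketch, with made-up numbers standing in for the actual splits:

    import numpy as np

    # made-up example splits (hours); the real numbers came from the timing sheets
    t_out = np.array([2.9, 3.1, 3.4, 3.8, 4.2])
    t_in = np.array([3.0, 3.3, 3.6, 4.0, 4.5])

    # least-squares slope through the origin: t_in ~ k * t_out
    k = np.dot(t_out, t_in) / np.dot(t_out, t_out)
    print(f"inbound legs {100 * (k - 1):.1f}% slower than outbound")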

Then 2010. Now riders had Hammer product at the beginning. They could have stocked up to carry enough for the full ride, but we'd told them they'd be able to refuel on the road. Then when they hit the checkpoints: whoops! Sorry guys -- pretzels and cookies for the lot of you! So if Hammer product is effective, you might expect a big sag in the second half. Here's the result:

2010 split analysis

So this year, riders were 8% slower on the inbound. Self-selection bias? Student's t-test? P-values? Proper controls? Blahh! None of that. This is a blog, after all, not the Journal of Applied Physiology. But based on this result, like the well-intentioned scientists we are, we're going to propose a hypothesis: that in 2011, this "sag" factor will decrease. Just a guess. Weather permitting, of course.

post-ride Hammer Gel
Post-ride Hammer-Gel. It all ended in tears: moderation is key. (Cara Coburn)

Sunday, February 14, 2010

Old La Honda repeats with Sport Velo

SportVelo
On the first day of the SportVelo training camp, Dan Smith had us do four Old La Honda repeats. The goal was to start slow and get faster each time. I was able to accomplish half of this, anyway: I started slow.

My approach was progressive gearing: 34/26, 34/23, 34/21, 34/19. I'd set my PR in July in a 36/18. At the same cadence as my PR, this last gear would have gotten me to the top in 18:48. That's a typical good Noon Ride time for me, or at least it was last summer, on my Ritchey Breakaway, which has been my training bike of choice.

Instead my legs were already feeling the 34/26 effort, the 34/23 felt harder, the 34/21 a struggle, and despite a caffeinated Gu to fuel my motivation, I basically collapsed on the final rep. I hate making excuses for myself, so I won't. But if I cloned myself and had to give myself advice, I'd probably tell my alternate self that the 6 running repeats up Potrero Hill I'd done 1.5 days earlier had blunted my legs. I've never ridden well so soon after a hard run. But as I said, I hate making excuses.

I'd ridden the set without my Powertap, so I instead used the power-speed equations to calculate my power. From cadence and crank length I calculated my pedal speed, and from power and pedal speed, my pedal force. Power is the product of pedal speed and force, so constant power is a hyperbola on this plot. The repeats are numbered. My July result is indicated with a diamond.

pedal force-speed analysis
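
Given the power estimate, the conversion to pedal speed and pedal force is simple enough to sketch (the 175 mm crank length and the example numbers here are mine, for illustration):

    import math

    def pedal_speed(cadence_rpm, crank_m):
        """Tangential pedal speed (m/s) from cadence and crank length."""
        return 2.0 * math.pi * crank_m * cadence_rpm / 60.0

    def pedal_force(power_w, cadence_rpm, crank_m):
        """Average effective pedal force (N): power divided by pedal speed."""
        return power_w / pedal_speed(cadence_rpm, crank_m)

    # e.g. 250 W at 90 rpm on 175 mm cranks:
    print(pedal_force(250.0, 90.0, 0.175))  # about 152 N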

My pedal speed was fine; my force was just way off. On the last rep, I just couldn't produce the increased force required to compensate for the reduced pedal speed resulting from the bigger gear, and power drooped. I was obviously a long, long way from July.

It's often said that cycling is dominated by aerobic capacity, that muscle strength has little role. However, I feel this is misleading. Consider, for example, this study by Mauger, Jones, and Williams, which shows that acetaminophen works better than a placebo at allowing athletes to tolerate higher levels of blood lactate at the same level of perceived exertion, and as a result to ride a faster time in a 10 mile time trial. Certainly acetaminophen has no effect on aerobic capacity. It's simply a matter of pushing your muscles harder.

So cycling may be an "aerobic" event, but the aerobic system still can only fuel the muscles, which need to do the actual work. So I shouldn't be too discouraged by my Old La Honda times. My body just isn't able to handle hard runs and recover within 24 hours. That certainly takes a toll.

Saturday, February 6, 2010

Bernoulli, stagnation pressure, and barometric altimeters

I never could wrap my head around this stuff.

Bernoulli's principle basically says that when an incompressible fluid is moving more rapidly, the pressure drops. Air, at well below the speed of sound, is effectively included in the list of incompressible fluids, leading to all the activity at SFO not so far south of where I live.

In some physics class in my distant past, the professor justified this by launching into a set of differential equations. You stare at the equations, one after another, and sure enough it's hard to dispute any one of the steps, but that doesn't mean the result actually makes any sense. Differential equations, after all, are just a model. There's no physics in differential equations. The physics is in particles bouncing around: scattering elastically and inelastically, transferring momentum, transferring energy. Scattering, if anything, is the heart of physics.

So I envision gas molecules bouncing around, energy scattering between various degrees of freedom, minding their own business. Along comes a cyclist with his altimeter, a pressure sensor, mounted horizontally on the bars. Gas molecules now start bouncing off the pressure sensor, and these collisions transfer momentum, which implies a force, which implies a pressure. So the sensor measures a pressure. Higher altitude means fewer air molecules means fewer collisions means lower pressure. So the pressure gives me a hint to what the altitude is. The question is: how is the measured pressure affected by the bike's motion through the air?

First consider an idealized case, with the sensor mounted perpendicular to the direction of motion of the bike through the air. Does the pressure against the spring change when the bike is moving?

pressure sensor schematic
Simple man's view of a pressure sensor: a spring compresses due to gas molecule impacts. Does the compression reduce if the platform is moving sideways? From this diagram, there's no reason to believe it should. Gas particles have the same kinetic energy in the vertical axis whether or not the sensor is moving, it seems.

Particles bounce off the sensor surface with a particular kinetic energy component perpendicular to the sensor surface. Why would this kinetic energy component care whether the platform is moving laterally or not? It seems like the pressure associated with these collisions should be independent of lateral motion of the sensor. And really, it is.

A common view of Bernoulli is "faster moving particles, at a given number density and temperature, exert a lower pressure in all directions." Bzzzt. This can't possibly be right.

The question here: does Bernoulli's Principle imply that the gas pressure will be reduced at the pressure sensor when the bike is moving? According to Wikipedia, the Bernoulli Principle is a statement that total pressure is constant everywhere in the fluid flow, where total pressure includes both the static pressure and the dynamic pressure. It is a statement of conservation of energy: that if gas is moving in a non-random direction, then the kinetic energy remaining for random motion is less.

But nothing about it says total energy is independent of the frame of reference. In this case, changing the frame of reference from one stationary in the gas to one moving through the gas increases the kinetic energy per gas molecule (kinetic energy depends on the frame of reference). It increases by exactly the amount corresponding to the "dynamic pressure": ½ M v² per molecule, where v is the bike speed and M is the mass of a gas molecule. So the conclusion in this case is the pressure reading will not change, as long as the pressure sensor is mounted perpendicular to the direction of motion.

To check this, let's whip out the Navier-Stokes equation, which I'd earlier said the professor had written on the board in class:

ρ ( ∂v/∂t + v·∇v ) = ‒∇p + ∇·T + f,

where ρ is the mass density, v is the fluid velocity vector, p is the pressure, T is the (deviatoric) stress tensor, and f is the body force per unit volume. Basic Newton's laws combined with mass continuity. Nothing obscure.

And it basically agrees. Pressure changes come from velocity which changes over time (∂v/∂t) or over space (v·∇v). If the velocity is uniform and constant, and you're not pushing on the fluid and it's not falling, no problem. If the velocity is changing over time because the reference frame is accelerating (the pressure sensor is accelerating), then you're not using an inertial reference frame, and the equation no longer applies. Use an inertial reference frame.

So that's the case of the ideal horizontal mount. Now consider an alternate case: the pressure sensor is mounted vertically, facing the wind:

pressure sensor schematic, vertical mount
The same pressure sensor, this time facing the wind.

Now this is a different matter completely: it's easy to see how the wind pressure will affect the sensor. This is called the "stagnation pressure", and is exploited in the Pitot Tube, a speed sensor for sub-Mach airplanes.

But nothing is so simple. Any air sensor that's out in the wind is going to be exposed to wind deflected off the rider or by the bike, even if it's mounted horizontally. It's only a matter of how much. Which of course depends on how the sensor is packaged and mounted. Still, there could be an effect: most likely a positive correlation between speed and pressure (showing up as a negative correlation of calculated altitude). But the opposite correlation is also possible.
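
To put a rough number on it: the stagnation pressure ½ ρ v² corresponds to an altitude error of ½ v² / g in the worst case of full exposure. At 10 meters/sec, that's around 5 meters: small, but not obviously negligible. (That's a back-of-the-envelope bound, not a measurement.)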

Next time I'll look at whether this is important, and if it is, how it might be addressed.

Thursday, February 4, 2010

combining GPS and barometric altimetry: correcting the barometric data

Okay, back to altimetry.

After painstakingly constructing simulated altitude data, I have the following three signals:
  1. true altitude
  2. GPS altitude signal: tends to fluctuate and drop out for periods, never deviates too far from the true altitude, at least in my model
  3. barometric altitude: smoother than the GPS and never drops out, but has a slowly varying offset from the true altitude

The approach I take is to first identify points at which the GPS signal is good. At those points, I calculate the difference between the GPS and barometric altitudes. I then locally average these differences using my favorite smoothing function, cosine-squared:

cosine-squared smoothing function

The key is to pick the time constant. Too short a time constant, and you don't suppress the GPS fluctuations. Too long a time constant and the barometric error may change sufficiently that the correction is no longer accurate. So I picked:

τ = 100 seconds.

When the GPS signal drops out, I don't do the averaging to calculate the correction amplitude: I instead linearly interpolate between the correction altitudes of the nearest points before and after the gap. Remember, I'm doing this analysis at the end of the ride, so I have access to all of the data, not just the points preceding the point I'm presently analyzing.

Of course, if there's no GPS at the beginning of the ride, for those points I need to simply use the next available calculated offset. At the end of the ride, if there's no GPS, I use the last calculated offset.
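
Put into code, the whole correction scheme might look something like this (a sketch: numpy assumed, and the exact cosine-squared kernel form is my guess at the details):

    import numpy as np

    def correct_baro(t, z_baro, z_gps, gps_ok, tau=100.0):
        """Correct barometric altitude with smoothed (GPS - baro) differences.

        t, z_baro, z_gps: 1-D arrays sampled at the same times;
        gps_ok: boolean array marking points with a good GPS fix;
        tau: smoothing time constant, seconds.
        Kernel: w = cos^2(pi dt / (2 tau)) for |dt| < tau, zero outside.
        """
        diff = z_gps - z_baro
        corr = np.full(t.size, np.nan)
        for i in np.flatnonzero(gps_ok):
            dt = t - t[i]
            w = np.where((np.abs(dt) < tau) & gps_ok,
                         np.cos(np.pi * dt / (2.0 * tau)) ** 2, 0.0)
            corr[i] = np.sum(w * diff) / np.sum(w)
        # interpolate the correction across GPS dropouts; np.interp holds
        # the first/last computed value at the ends of the ride
        good = ~np.isnan(corr)
        corr = np.interp(t, t[good], corr[good])
        return z_baro + corr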

Simple. So here's the result applied to my test data, again showing the second hour of the "ride" (the most interesting part):

corrected barometric altitude

It's easy to see the corrected barometric altitude tracks the true altitude very nicely.

Let's look a bit closer: here's a histogram of the errors of the GPS altitude and the corrected barometric altitude, using the entire ride (ten thousand seconds). The error in the uncorrected altitude is too large to fit on the plot, so is omitted:

histogram of error

In addition to the corrected barometric altitude being much smoother than the GPS altitude, the errors at any given time point also tend to be smaller.

The smoothness is examined by doing a Fourier transform of the error. The following shows the result for the barometric altimeter, the GPS altimeter, and the corrected barometric altitude data:

Fourier analysis of error

The short-term variation in the corrected barometric altitude (to the left of the plot) is much lower. The barometric altimeter has enormous long-term swings in its error, but the corrected barometric altitude is immune from these, since it is held in line by the GPS signal. Yet it is relatively free of the GPS signal's short-period errors. Long-term, it tracks the GPS accuracy.

So there it is. My scheme worked nicely. At least in simulation, that is. It would be fun to try it on actual data. Of course with actual data one doesn't have a "true" altitude. The best one can do is to ride the same road repeatedly, and look for consistency in the result.

An intriguing idea was presented in comments on a previous post: that the barometric signal would be subject to short-term errors from air speed changes as the bike speed varies. I'll look at that in a future post, and show how I might deal with that issue.

Wednesday, February 3, 2010

DSE Runners Club Waterfront 10-miler: results comparison

I finally checked out the results of the Dolphin South End Runners Club Waterfront 10-miler I ran a week ago. In fall 2008 I'd done a 10 km race from them on a subset of the same course. Since some runners find 10 milers longer than their preference, this time they also offered a 5 km run.

The runs proceed along the Embarcadero, southward, towards AT&T Park (the San Francisco Giants' home turf). The 5 km route turns around less than half-way to the stadium, the 10 km route turns within sight of the stadium, while the 10-miler continues past the stadium, crosses the bridge on 3rd, continues to hug the water on Terry Francois, then goes left, up a hill, on Illinois. Then it's back the way you came. So in addition to being longer, the 10-miler includes a hill. I figured it would be interesting to compare paces at the different distances.

Here's the result, plotted semi-logarithmically. On the x-axis is the placing, normalized from 0 to 1. On the y-axis is the log of the per-mile time (which runners call "pace" for some reason, even though pace should be the reciprocal of this value; on a log scale, though, one is just the negative of the other). Results are shown for the 10-miler this year, the 10 km run from late 2008, and the 5 km run held in conjunction with the 10-miler. I did the former two, so for these, my result is indicated with a red cross.

distribution of paces

To my surprise, runners in the 10-miler were actually slightly faster than they had been for the 10 km. The 50th percentile is fairly close, but the top half in the 10-miler were clearly faster. There was a clump of 10-milers coming in at 11-12 minutes per mile, a clump which is missing from the 10 km run, which was closer to log-normally distributed. That clump is sort of curious, actually: it suggests a group stuck together to enjoy each other's company.

My pace did what I'd expect: it got slower, significantly slower. I try not to let this bug me: my left leg has been bugging me (probably ITB), so I'd done no hard running on the road yet this year, and no fast running on the road since early 2009. My goal had been sub-7-minute miles, but then goal creep sets in.... Anyway, that aside, with the faster pace among the top half of the population in this run, my placing dropped considerably.

But then look at the 5 km results. There's a huge spread in 5 km pace. The fastest 5 km guys were up there close to the same pace as the top guys in both the 10-miler this year and the 10-km race from 2008. But from this fast head of the pack, there's a huge range of times, much broader than for either of the other two events. Clearly the 5 km race tended to attract runners who were relatively slower, maybe correlated with less endurance or an unwillingness to be on the road for the longer time required running the much longer distance at a slow pace.

So there's a clear self-selection bias here. You can't compare placings in the 10-miler to those in the 5 km race. Had the 5 km race not been available, some, but clearly not all, of the 5 km runners would have done the 10-miler instead. But still there'd be a self-selection bias. Nobody forces anyone to do a DSERC race on a given weekend. If the race isn't what a particular runner is after, he does something else.

An interesting question is how this extrapolates out to the marathon distance. Well, clearly the trend wouldn't continue, as my 10-mile pace (6.95 minutes/mile) would put me at 3:02:40 for a marathon. That would score a lot better than the 19.3 percentile I managed here. But then the marathon tends to be a "goal" event for people. For many marathoners, it's the only race they do in a given year. The goal is to finish, not to finish fast. So the selection bias is different. And it's clear from these plots I would not do a 3:03 marathon right now. My pace would be slower at the 162.2% longer distance.

Anyway, my goal is still a sub-40 10 km. For that, I need to go 8.2% faster over 62% of the distance. Can I manage that? I think so. I just need to get my legs accustomed to running faster than my usual 8-minute-per-mile pace.

Monday, February 1, 2010

combining GPS and barometric altimetry: generating random altitude data

I'll now describe the model I used for the various altitude signals. This is probably a bit more elaborate than it needed to be, I admit. But I like realism.

First, the altitude versus time, as this was the most complicated. I started with Fourier coefficients generated using normal random magnitudes, each chosen with an rms value proportional to a Lorentzian factor 1 / [ 1 + ( s λ )² ], where s is the spatial frequency and λ is a reference distance of 10 km, describing the approximate length of a typical climb. This envelope is nice because it keeps enough of the high-frequency components for things to be interesting, while allowing the low-frequency components to generate nice continuous climbs. The phase of each component was then randomized from 0 to 360 degrees.

But this doesn't represent a realistic profile, since the random Fourier components yield peaks and valleys of the same shape. So I transformed the altitude using the following:

z → (50 meters) ln [ 1 + exp(z / 50 meters) ],

which, as you can see, approaches z when z is sufficiently positive, but when z is negative, yields a small positive number. This generates rolling valleys mixed in with tall peaks, which is more realistic.

It's still not as good as I'd like, however. Real roads are built to a certain design grade. If the hill gets steeper than this, the road traverses, with switchbacks if necessary, to bring the grade down to the design limit. This tends to result in climbs which maintain close to a steady grade for long segments. To account for this, I applied a transformation to local road grades:

grade → grade / [ ( grade / grademax )⁴ + 1 ]^¼,

which, it is easy to see, clips the magnitude of the grade at grademax. I applied this grade transformation by stretching the road length relative to that generated by the transformed Fourier composition.
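
Putting the pieces so far into code, it might look something like the following (a sketch: numpy assumed, and the sample count, course length, overall relief scaling, and the re-integration of the clipped grade, rather than the road-length stretching described above, are all my simplifications):

    import numpy as np

    rng = np.random.default_rng(0)
    n, length = 4096, 200e3        # samples, course length (m)
    lam, grade_max = 10e3, 0.10    # reference distance, grade clip
    s = np.linspace(0.0, length, n)

    # Fourier synthesis: rms amplitudes with a Lorentzian roll-off in frequency
    f = np.fft.rfftfreq(n, d=length / n)
    env = 1.0 / (1.0 + (f * lam) ** 2)
    coef = env * (rng.normal(size=f.size) + 1j * rng.normal(size=f.size))
    z = np.fft.irfft(coef, n=n)
    z *= 500.0 / z.std()           # scale to plausible relief (my choice)

    # softplus transform: rolling valleys mixed with tall peaks
    z = 50.0 * np.log1p(np.exp(z / 50.0))

    # clip the grade magnitude at grade_max, then rebuild the profile
    grade = np.gradient(z, s)
    grade /= (1.0 + (grade / grade_max) ** 4) ** 0.25
    z = z[0] + np.concatenate(([0.0], np.cumsum(grade[:-1] * np.diff(s))))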

Now, if I really wanted to get fancy, I could recognize that different roads tend to have different grades, and roads tend to change at the bottom of descents and sometimes at the top of climbs, so I could mix up the maximum grade. Enough, already, though! I left it at this, with a maximum grade of 10%. Maybe I'll take it further if I ever write a course generator for, for example, the Pedal Force Simulator.

So the parameters I ended up using to generate the road profile were a characteristic length of 10 km and a maximum grade of 10%.

Then I needed to map this altitude as a function of distance to an altitude as a function of time, which is what altimeters measure. So for that, I used a simple model based on the bicycle power-speed equations. I assumed the following parameters:

v0 = 9 meters/sec,
vam0 = 0.35 meters/sec,
acc0 = 2 meters/sec²,
Crr = 0.5%,

where v0 is the speed on the flats (without rolling resistance), vam0 is the maximum climbing rate (without wind or rolling resistance), acc0 is the maximum acceleration on level ground (without rolling or wind resistance; it is the cap used below), and Crr is the coefficient of rolling resistance.

Then, starting with an initial condition of velocity v = 0, I calculated for each point in the route the rate of acceleration from a basic power-speed equation, based on the grade and the present speed. The simplified equation of motion is the following (easier than mucking about with the traditional power-speed equation):

∂v / ∂t = g (vam0 / v) [1 ‒ ( v / v0 )³ ‒ (grade + Crr) v / vam0],

where I exploit the fact that the maximum climbing rate is related to the power-to-mass ratio by the factor g, the gravitational acceleration. This doesn't do well for exploding off the line, where power is typically higher, but for the rest it works nicely. I then cap the acceleration at a maximum value using

∂v/∂t → 1 / [ ( 1 / (∂v/∂t) )² + ( 1 / acc0 )² ]^½.

I then integrated this acceleration to get speed, and integrated again to get position as a function of time, in a single pass. Then I evaluated the altitude at each position to get altitude as a function of time. This worked fairly well.
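
Here's a sketch of that integration, forward Euler at one-second steps (the small starting speed, dodging the division by zero at v = 0, and the restriction of the soft cap to positive accelerations are my hacks):

    import numpy as np

    V0, VAM0, ACC0, CRR, G = 9.0, 0.35, 2.0, 0.005, 9.8

    def accel(v, grade):
        """dv/dt from the simplified equation of motion, soft-capped at ACC0."""
        a = G * (VAM0 / v) * (1.0 - (v / V0) ** 3 - (grade + CRR) * v / VAM0)
        if a > 0.0:
            a = 1.0 / np.hypot(1.0 / a, 1.0 / ACC0)  # cap only when accelerating
        return a

    def altitude_vs_time(s, z, dt=1.0):
        """March along the course; return altitude sampled at each time step."""
        grades = np.gradient(z, s)
        pos, v, alts = 0.0, 0.1, [z[0]]
        while pos < s[-1]:
            v = max(v + accel(v, np.interp(pos, s, grades)) * dt, 0.1)
            pos += v * dt
            alts.append(np.interp(pos, s, z))
        return np.array(alts)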

That was altitude. Now I had to construct signals for the errors of the GPS and barometric altimetry components, and also for how the GPS signal tends to drop out for a while, then come back. For this, I created three additional data sets, each constructed using randomized Fourier components with Gaussian (as opposed to Lorentzian) envelopes. Each of these signals was generated as a function of time, not position or altitude: I assume barometric errors are due to changes in meteorological conditions, while GPS errors are due to air density fluctuations, neither of which cares whether the rider is moving or not. Here they are, with a sketch of the generator after the list:
  1. GPS error: a broad-spectrum (time constant = 5 seconds), low-amplitude error between the GPS reported altitude and the actual altitude. It moves around quickly, never straying more than a few meters from zero.
  2. barometric error: a deviation of the barometric altimeter from the actual altitude. This varies slowly (time constant = 20 minutes), with a much larger amplitude.
  3. GPS valid signal: this varies with a time constant of 30 seconds, in the range of approximately ±1. If it exceeds 0.2, I consider the GPS signal to be invalid. As a result, the GPS signal tends to drop out fairly often, typically for up to a few minutes at a time.
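
All three can come from one little generator, differing only in time constant and amplitude (a sketch: the mapping of "time constant" onto a Gaussian spectral envelope, and the amplitudes, are my guesses):

    import numpy as np

    def random_signal(t, tau, rms, rng):
        """Random Fourier series with a Gaussian spectral envelope exp(-(f tau)^2)."""
        f = np.fft.rfftfreq(t.size, d=t[1] - t[0])
        env = np.exp(-(f * tau) ** 2)
        coef = env * (rng.normal(size=f.size) + 1j * rng.normal(size=f.size))
        x = np.fft.irfft(coef, n=t.size)
        return x * rms / np.sqrt(np.mean(x ** 2))

    rng = np.random.default_rng(1)
    t = np.arange(10000.0)                            # a ten-thousand-second ride
    gps_err = random_signal(t, 5.0, 2.0, rng)         # fast, a couple meters rms
    baro_err = random_signal(t, 1200.0, 20.0, rng)    # slow, much larger
    gps_ok = random_signal(t, 30.0, 0.5, rng) <= 0.2  # invalid above 0.2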

An important point is that errors in both the barometric altimeter and in the GPS altitude tend to be a fixed difference (measured in meters), not proportional to altitude (measured in %). Neither barometry nor GPS triangulation has any idea what "zero altitude" means. Pressure is proportional to the exponential of altitude, and therefore a fractional error in pressure produces a fixed error in altitude. Similarly, with GPS, you're triangulating position based on the delays of signals from satellites. If an atmospheric perturbation, for example, changes the relative delay in an unpredictable fashion, then this produces a fixed error in position. It doesn't matter if you're at sea level or at 100 meters altitude: a 1 meter error is a 1 meter error. So I model errors in both the GPS and in the barometric altitude with an offset, not a multiplicative factor. Another way of looking at this is that there's no "preferred" altitude: physics doesn't recognize sea level as special. Well, it does, once you get far enough below sea level that the gravitational force starts dropping, but I assume altitude changes are small in comparison to the 6.4 Mm radius of the Earth.

Here's the second hour of a multi-hour ride simulation I constructed in this way, showing the actual altitude and then the altitudes reported by the barometric altimeter and the GPS:

portion of randomly generated altitude data

You can see how the barometric altimeter smoothly tracks the actual altitude but with a large offset. On the other hand, the GPS altitude is always close to the actual altitude, but jumps around a lot, and the signal disappears for extended periods. The goal is to construct an aggregate signal which exploits the smoothness and reliability of the barometric altitude, but also the accuracy of the GPS signal.

I'll describe how I combine the barometric and GPS signals next time.