Saturday, December 31, 2011

posting slackitude and SF2G

For the first time since I started my blog in 2008, my total posts for the year have dropped.

Here's a plot of the rate of net posts during the year since 2008, when Cara convinced me to start the blog:

posts per year

I started a bit slowly this year, then rallied, catching my 2010 schedule. But then I started losing ground, finishing in a dead heat with 2009.

So what happened?

The answer is that I started riding more. This is evident from the plot on my Strava page: I was barely surviving through April, then my hours took off in May. A normal commute means I'm on the train around 100 minutes with my laptop. SF2G means I leave home early, then have only my cell phone for the train ride home. More riding = less posting.

And that is a very good thing.

Here's another look at my SF2G schedule. I counted the number of Strava activities I had with SF2G in the title. Before the period of the plot I didn't have GPS so I'd need to check my old training logs. I may have been occasionally remiss in tagging my rides SF2G, so there may be a few not included here. But you can definitely see I started riding into work a lot more often.
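The tallying itself is simple enough. Here's a sketch of how a count like this could be scripted from an activity export; the sample rows and the two-column format are made up for illustration, not my actual Strava data:

```python
# Count activities with "SF2G" in the title, grouped by month.
# Input: iterable of (date, title) pairs, date as 'YYYY-MM-DD'.
# The sample rows below are hypothetical, not real ride data.
from collections import Counter

def sf2g_by_month(rows):
    """Return a Counter keyed by 'YYYY-MM' of SF2G-titled rides."""
    counts = Counter()
    for date, title in rows:
        if "SF2G" in title.upper():
            counts[date[:7]] += 1  # key by year-month prefix
    return counts

sample = [
    ("2011-08-03", "SF2G Bayway"),
    ("2011-08-10", "SF2G Skyline"),
    ("2011-09-07", "noon ride"),
]
```

Rides I forgot to tag "SF2G" would of course be missed by any title match, which is exactly the caveat above.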

SF2G rides

Through September 2010, at a previous job, I was more readily able to combine the two, since I could squeeze in lunch rides and even the occasional post-commute morning ride. I was on flex time, and could often do actual work on the train since much of what I did didn't require an internet connection (Caltrain has none). Now I need to be on the secure network to do "work", and in any case "face time" is more important, so working on the train then blowing out to join one of the Wednesday morning groups simply doesn't work any more. Working on the train in the morning often meant I spent the evening commute on the blog, but now if I ride in, both commutes are blogless.

It's all good: I try for quality, not quantity. What really runs up the numbers is when I get hooked onto a topic, as I did with the Metrigear Vector. Serial posts on a topic become more efficient to get out.

A bit more on the SF2G rate: my SF2G's peaked in August: good weather, plenty of daylight for early starts. September tailed off, in part because I needed train time to prepare for the upcoming Low-Key Hillclimbs (scoring code, web pages). Then October-November my SF2G's were suppressed by not wanting to go into Saturday's Low-Key Hillclimb fatigued. December included holidays, and was also off.

For 2012? Who knows how the blog will go? I'll try not to set any goals. At some point I will have said enough and feel it's time to move on. But I'm not quite there yet.

Thursday, December 29, 2011

Running the Rocky Steps

This weekend I was in Philadelphia for the first time since I was a small child, visiting Cara's family. The house was only a mile from the Philadelphia Art Museum, and that got me very excited. Sure, we went to the museum (or rather the Perelman Building extension: a fascinating exhibition of Zaha Hadid's architecture), which was nice, but arguably far more famous than the museum itself are the steps leading to the front door of the main building, for these were the steps Rocky Balboa used to prove his fitness in Rocky, the Best Picture Academy Award winner of 1976. Here's a link to a YouTube version of the inspirational scene. You know you like it!

As I began the short run to the museum, I readied myself for the ridicule of bystanders. Here would be an adult living out a scene of a 35-year-old film: Heh! Look at that bozo! "Go Rocky!" Heh.

First, the statue. I had to take a photo. There's a long story behind the statue, which was commissioned for Rocky III for the top of the steps. Some objected that this was inappropriate for a museum, and it was moved away. But popular demand brought it back to the museum, to the side of the bottom of the steps: less intrusive. However, it still attracts a steady stream of tourists taking photos: I was just one of many during my time there. I never saw anyone taking pictures of, or even looking at, any of the several other statues on the museum grounds.

Rocky statue

It was time to run the stairs! I felt strangely nervous. What if I couldn't make it? What if I doubled over, out of breath, like the "before" version of Rocky in the film? The horror... And then there was the mockery I was sure to suffer from surrounding tourists. I readied myself to ignore it.

But as I walked over to the center of the bottom of the steps, I was amused to find that several others were already on the steps, all running. And not just running: it was pro forma, upon arriving at the top, to raise fists in the air, jumping up and down.

I had no plans for that, so felt safe. And up I went, taking the stairs two at a time; the biggest challenge to me was to not become disoriented and misstep. This happens to me on stairs: I'm fine at first, but as I proceed, I feel increasingly out of phase with the stairs, until I'm fighting just to keep running without tripping.

I hadn't reviewed the video before running, and so had been mistaken in thinking that Rocky runs all the way to the museum doors. After the first steps, there's a broad mezzanine around a fountain, then a final short flight of steps to the front doors. This made a better interval for me than just the initial set. Yet others around me stopped upon arriving at the mezzanine. But better safe than sorry: I ran the whole way.

After reaching the top, I had a nice view of the city.

view from the museum doors

Once simply wasn't enough! I had to do it again. I decided three would be a good number of reps, then I'd continue with my run. But after three, I reset my goal to five... then eight... then finally ten. After my tenth repetition, which I did single-step instead of double as an experiment, I decided I'd had enough, descended, and then began running along Kelly Drive of Cycling Classic fame. During the entire time I was on the stairs I was never the only one running there: there was a constant stream of runners, most slower, one faster, running the steps. It was incredible.

full step times
Strava times up the full steps. It's unclear from the data whether the slow time on the last iteration is due to running the steps one at a time or due to fatigue.

I was convinced that surely I'd outrun Rocky. To check, I uploaded my data to Strava for segment timing. To my shock nobody had defined a segment, so I defined two: one to where Rocky stopped, another to the museum door.

So then I timed the videos. I already linked to the original film. I timed him there at close to 13 seconds. I was crushed: my best time via the Strava segment, which starts and finishes on the steps to provide some room for error, is 18 seconds (note only a few of my attempts match the segment, since the stairs are wide relative to the stair length, and I started from various positions). But then there's a sequel where Rocky comes back, now a star, fitter and faster than ever. Here's a link to that sequence. His time there: 10 seconds by my timing. I wasn't even close.

The key is that while I took the stairs two at a time, Rocky takes them four at a time. Impressive! Obviously I have my work cut out for me if I want to challenge Apollo Creed myself.

Monday, December 19, 2011

San Bruno Hillclimb: Jan 1

It is traditional that the top 3 men, top 3 women, top 3 juniors, and the Endurance Award winner of the Low-Key Hillclimb series are all awarded a free spot in Pen Velo's long-running San Bruno Hill Climb, held every year on Jan 1. I finished 4th, just out of the "money". But yesterday I signed up anyway, paying the entry fee: it's not often I have fitness and opportunity this time of year to do the climb.

The USA Cycling page has it listed as a "time trial", but that's incorrect. It's a mass-start race, riders starting in waves from the base of Guadalupe Canyon Road near Bayshore, climbing to the "saddle point" marking the top of Guadalupe. Then from there it's a sharp right into the state park, down a short descent past the ranger kiosk, an immediate right turn over rough pavement, a short straight, another right, passing back under Guadalupe Canyon, then the narrow, sometimes rough climb to the summit. Here's the profile:

profile

The rating listed on the plot uses my algorithm, which is normalized to Old La Honda being 100. The rating is lower than Old La Honda's because of the relatively lower average grade. However, given the distance, the times aren't that much faster than Old La Honda Road, the difference depending on conditions (San Bruno is much more exposed to the wind).
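As a simplified illustration of what such a normalization looks like (this is not my actual formula: the functional form and the reference grade G0 here are placeholders), a rating of this flavor might be computed as:

```python
# Illustrative climb-rating sketch (NOT the actual algorithm):
# net gain boosted by a quadratic grade term, normalized so that
# Old La Honda comes out to 100. G0 is an assumed reference grade.
G0 = 0.08  # assumed reference grade (8%)

def raw_rating(gain_m, dist_m):
    """Unnormalized rating: net gain times a grade-dependent boost."""
    grade = gain_m / dist_m
    return gain_m * (1.0 + (grade / G0) ** 2)

# Old La Honda: roughly 393 m of gain over 5.42 km
OLH_RAW = raw_rating(393.0, 5420.0)

def rating(gain_m, dist_m):
    """Rating normalized so Old La Honda = 100."""
    return 100.0 * raw_rating(gain_m, dist_m) / OLH_RAW

# A climb with comparable gain over a longer distance has a lower
# average grade, pulling its rating below 100 -- as with San Bruno.
```

The point is only the structure: any grade-sensitive rating divided by Old La Honda's raw score gives a scale where Old La Honda is 100 by construction.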

It opens with surprising steepness, then levels out for a traffic light. Past this light (annoying on non-race days when it's too often red) the road climbs somewhat steeply again until the grade lessens, then flattens completely at the Guadalupe Canyon "saddle". Radio Road passes underneath, visible to the right, while to the left is a clear view to the summit. Then comes the tricky bit: the net 270 degree turn onto Radio Road and back under Guadalupe Canyon. You absolutely don't want to fritter away any seconds here, but given the poor road condition, taking a corner too hot is a real danger.

Once under Guadalupe, Radio Road really comes into its own. This climb begins moderately, but it becomes progressively harder to hold onto a gear until at a sweeping left turn the grade hits its peak of almost 10%. Now it's the end game: first a right turn which appears as if it might be the end, but following that a second sweeping right leading to the finish line at the radio towers (watch out for the 6-inch gap in the metal plates on the road!)

mind the gap

Here's a Strava segment:

As I write this, I'm 18th, a result I got in a training ride on November 13th. I'd like to improve that time on Jan 1. Unfortunately a lot of other riders are also likely to post good times on Jan 1, so if I can just hold my position in the rankings, that wouldn't be too bad! But I think I can reasonably hope to just crack the top 10 when the day is done, conditions permitting.

It's a fun race to watch as well as ride. 2007 was an off year for me: I'd ridden well at the Low-Keys in fall 2006, but IEDM and other factors got in the way of me keeping enough fitness to make riding San Bruno worthwhile. Instead I took photos. Here's spectators near the summit watching riders pass below:

spectators at San Bruno

The view of Guadalupe Canyon Road below is nice:

view

Here come the leaders! It's newbie Chris Phipps leading hillclimbing Low-Keyer Tracy Colwell! San Francisco is in the background. Chris would go on to take the win.

leaders

The next year I had done the Youth Hostels International "Christmas" bike tour out of San Diego, returning the day before the race, so I was tired but at least had some fitness. I gave it a shot anyway. I did not ride a smart race. Here are the results.

On Guadalupe Canyon, I resisted the surge of riders near the front, preferring to ride a steady power the whole climb. This works for a climb which is a steady grade in consistent conditions, but San Bruno has neither. First there's the false flat where being in a pack is a benefit, so the faster lead group will gain time on slower trailing groups. Then there's a chance to recover on the descent and turn, so it pays to be slightly in the red going into this section. The worst part was the wind: it was a strong head wind on Radio Road that year and so whatever group you were in, you stayed in. Bridging gaps was extremely hard. The real race was to the park entrance, and I arrived there with way too much left in the tank.

In 2009 I was in Southeast Asia. But this video shows conditions aren't always ideal on San Bruno mountain:

San Bruno Hillclimb 2009 from Chris Stastny on Vimeo.

2010 I missed as well (more focused on running, and the forecast had been for rain), then in 2011 I was woefully out of shape due to 11-hour days at my then-new job.

But back to 2012....

As I noted, it's tough coming into Jan 1 ready to go. As is the case most years, this year I'm traveling to the east coast to visit family for Christmas week, during which time I won't have a bicycle. I'll probably run a bit and go to a YMCA or other local gym where I will ride a stationary bike and do some light lifting and core work. Hopefully things go well.

Thursday, December 15, 2011

Fairwheel Bikes' Project Right @ NAHBS in March

I'm very excited about the 2012 North American Handbuilt Bicycle Show, the first weekend in March. (I was tempted to ride the Death Valley Double Century, but the two conflict.... Death Valley can wait.)

Fairwheel Bikes, which has been stealing a lot of attention from the big boys at Interbike the past few years with their ultra-light project bikes, is delivering a new project to the show: "Project Right". I just came across this today, having seen their latest blog update. Here's the frame:

Check that out: the left side of the rear triangle is completely missing, and there are no seat stays. This is really only an incremental change from the trend set by Cervelo and Pinarello. Cervelo reduced the seat stays to mere formalities to provide vertical compliance, while Pinarello has been a leader (in marketing, at least) in focusing material on the right chainstay, since that's the side where force is transmitted via the drivetrain.

Here's the Cervelo R5 in its most expensive form (the R5-Ca). As an aside, I finally came across one of these "in the wild", on the bike path in Sausalito. It looked really nice. I've stopped obsessing over how outrageously expensive it is. If people want to squander money, bikes are a fairly harmless way to do it. Anyway, notice the super-skinny seatstays:

You may as well cut those out: they're hardly providing structural support.

So that's what Fairwheel is doing: no seatstays, only one chainstay. Seems like a progression of a trend. But it's hardly new.

Chris Boardman's Lotus Superbike had a similar concept:

Nothing of interest on the left side; all support for the rear wheel is on the right. This bike was based on the Windcheetah, which Mike Burrows designed in 1982:

But the idea goes way, way further back than that. Documented by Jan Heine in his book, The Competition Bicycle, here's a Labor bike from 1906:

The design was to allow for quick wheel changes.

I really look forward to seeing the Fairwheel Bikes project "in the flesh" at the Hand Built Bike Show.

Monday, December 12, 2011

Old La Honda Road: another PR

I had a good year at the Low-Key Hillclimbs this year. Often by the time October rolls around fatigue from the year is starting to kick in. Last year I came into the series fairly fresh and fit, but then started a new job and my fitness went straight downhill from week 3 (my first) onward.

I finally started exercising at a reasonable rate again in April this year, a mixture of running and some cycling (mostly long commutes to work), and surprised myself with a sub-1:31 half marathon in August: I considered that good given my lack of formal running background. So I knew I had some fitness but wasn't sure about how I'd do on the bike. I did a few climbs of Diablo before the series, just to get some climbing legs, and was surprised my times weren't so bad. But for the Low-Keys, you've got to be better than "not so bad". Everyone seems to raise their game for the series.

But despite my worries I did pretty well. So as the series wound down, Tim Clark asked me if I was going to try for an Old La Honda time. Old La Honda is, to me, the most prestigious climb for times. More riders know their Old La Honda times than any other.

I've been adding some volume to rebuild my aerobic base prior to the San Bruno Hillclimb on New Year's Day: during the series it was more about working hard Saturday to Tuesday, then being fresh for Saturday.

So I got in some solid work Thanksgiving weekend, then the following weekend. I was pretty tired on Monday, so went super-light on my usual Monday weight-room workout. Tuesday I felt a bit better, but a work meeting kept me from either riding in or riding at lunch. So I was starting to feel fresh again Tuesday night: a good chance to give Old La Honda a shot on the Wednesday noon ride. It wasn't optimal: a few days of light work following my recovery would have been better. But I valued the chance to ride with the group rather than try a solo effort.

My last PR attempt on Old La Honda had been successful: in July 2009 I had set out to break 17 minutes and was delighted with my result, 16:49. It was redemption for having failed miserably at the Diamond Valley Road Race the weekend before, where, failing to follow through on my previous two strong races there, I'd been dropped on the first lap. For that attempt I'd ridden a very steady pace, turning my 36-18 gear until near the top, where I upshifted to my 17-cog.

It might seem surprising I'd not tried again. But a good run at Old La Honda requires that I be fit and rested, and that I'm willing to bring my Fuji with its climbing wheels on the train. The key thing here is fit and rested. Devoting a three- or four-day block to a good Old La Honda time is a luxury in which I rarely indulge. So most of my rides up the hill are for training only, typically on my Ritchey Breakaway with clincher wheels, carrying water bottles, a heavy tool bag, and a pump. Almost always I'd done some sort of hard ride the day before, as well.

So I risked bringing my Fuji SL/1 with its light wheels and 180 gram Vittoria Corsa time trial tubulars on the morning train and from there to work. At lunch, I met Mark Johnson and we rode along glass-strewn Central Expressway on our way to the ride start: a serious risk for the ultra-thin tires and no spare. But we made it, and I felt fairly good.

Doing good OLH's from the Wed Noon Ride used to be compromised by a fast pace over Arastradero and Alpine Road, then jamming to the sprint at Woodside. But no longer: the focus is now, thanks to Matt Allie's leadership, strictly on the climb. So the ride to the base is at a nice warm-up pace.

Conditions were fairly good for the attempt. The road is in historically good condition, most notably the upper portion, which was repaved within the past few months. And winds were forecast to be light, from the north, a tailwind (winds are generally blocked by the trees on the climb, but every little bit helps). Cool temperatures had me carrying more weight in clothing than I'd prefer, but as we approached the climb and were stopped by construction crews on Portola Valley Road, Peter Tapscott offered to carry my jacket and empty water bottle for me. My excess mass was in my long-sleeve undershirt, my heavy long-fingered gloves, my compression tights, additional calf compression socks, and the wristwatch I'd forgotten to remove before leaving my office. All of this adds up, probably worth 2-3 seconds on the climb, the wind resistance from the gloves maybe an additional second. But I couldn't pick the weather and I wasn't going to help myself by under-dressing. Some recent emergency repairs to my bike due to a broken Power-Cordz added some additional weight: at least another second there. It all adds up.

As we hit the base of Old La Honda I made a tactical error. My plan had been to start a bit back, then move up to the early pace-setters, essentially shaving a second or two. But this backfired, as nobody took control of the pace, and I was stuck in traffic. I went to the extreme left of the road, moved up, and went to the front myself. Not the best way to start: I should have hit the bridge marking the start of the climb at speed and carried momentum up the initial slope.

I was joined by Chris Zappala, who'd mentioned he'd commuted to Palo Alto from San Francisco earlier that morning. Despite the hard ride in the cold, he was ready to ride, so I got on his wheel for early pacing. As I climbed I looked down at my cassette to assess my gear. I had a 12-23 cassette, so tried to count the number of cogs between the chain and the end of the cassette and figure out which gear it was. Was it the 17 or the 18? It felt as if we were going fast, in either case, but I doubted that, if I was spinning the 17, I would be able to sustain that pace.

I came across James Porter and Greg McQuaid climbing together. I'd expected them on the noon ride itself but they'd clearly decided to leave early on their own. Tim Clark was going to be with them but I didn't see him. Greg was recovering from a broken collarbone, so I didn't expect him to be in top fitness; I focused instead on riding with Chris.

Chris eventually faded, as expected given his morning commute, and I moved ahead. I hit the first mailboxes in 5:50-something, the fastest I'd ever done. That didn't necessarily seem to be a good thing. Then I noticed that I was indeed in my 17 cog, so downshifted to the 18. This put me back to where I wanted to be, gear-wise, but not where I wanted to be feel-wise. I was becoming distressed, struggling to maintain pace through the steeper corners, and there was still a long way to go. In 2009 I'd still been climbing seemingly effortlessly at this point.

I came upon Tim Clark. For a while he served as a nice rabbit, but eventually I caught him and he cheered me on as I passed. But I was feeling a growing sense of despair. I tried to suppress all negative thoughts, to simply push onward, but I couldn't deny that I was struggling.

I shifted down into my 19.

Now it was official: I'd gone from "get back onto pace" to "minimize losses". Sub-17 was still possible, I told myself, keep up the pressure, don't let up. Even if I were to miss 16:49, I still wanted to make Strava page 1, which required 17:08. This was no time to indulge in self-pity!

I looked down at my computer and saw 16-even with a few turns left. I knew that these turns always eat up way more time than expected: it always seems time accelerates here. But I kept pushing, all hope of smooth form now gone.

There it was: the mailboxes, then the stop sign. As I approached I looked again at my computer, the first time since I'd seen 16. 16:35 it said just as I went to hit the lap button at the stop sign. I'd done it: a new PR. Official time (from the Garmin) = 16:36.31.

It was with interest that I uploaded the data from the Garmin, and sure enough, the numbers tell me what I already knew from my gear selection: I faded.


To generate this plot I differentiated my altitude versus time, then applied a bi-exponential convolution with a 5-second time constant to smooth the data. To avoid issues with the smoothing at the beginning and end, I excluded the first and the final 500 meters. This generated the data shown with the red points. I then did a non-linear regression, fitting a decaying exponential to the result (the exponential is versus distance, as opposed to time, but the two yield essentially equivalent results). This fitted function is plotted with the dashed curve. On the left of the plot is the rate of ascent represented as meters/hour ("VAM"). On the right I show the projected Old La Honda time for each VAM. The green background is sufficient to break my previous PR of 16:49. Red implies I'm losing ground versus that result.
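For anyone who wants to reproduce this sort of smoothing, here's a sketch on synthetic data rather than my ride file. I'm implementing the bi-exponential convolution as a forward-then-backward first-order exponential pass, which gives a symmetric, zero-lag smoothing kernel; the exponential fit and the 500 m end exclusions are omitted here:

```python
# Sketch: VAM from altitude samples, smoothed with a bi-exponential
# (forward + backward exponential) filter, tau = 5 seconds.
# The ride data below are synthetic, not from my Garmin file.
import numpy as np

def exp_smooth(y, t, tau):
    """One-pass exponential smoother with time constant tau (s)."""
    out = np.empty_like(y, dtype=float)
    out[0] = y[0]
    for i in range(1, len(y)):
        a = 1.0 - np.exp(-(t[i] - t[i - 1]) / tau)
        out[i] = out[i - 1] + a * (y[i] - out[i - 1])
    return out

def biexp_smooth(y, t, tau=5.0):
    """Forward pass then backward pass: symmetric, zero-lag result."""
    fwd = exp_smooth(y, t, tau)
    rev_t = (t[-1] - t)[::-1]          # reversed, still increasing
    return exp_smooth(fwd[::-1], rev_t, tau)[::-1]

# Synthetic climb: 1 Hz samples, ~0.3 m/s ascent rate plus noise
t = np.arange(0.0, 600.0)
rng = np.random.default_rng(0)
alt = 0.3 * t + rng.normal(0.0, 1.0, t.size)

vam_raw = np.gradient(alt, t) * 3600.0  # m/s -> m/hr ("VAM")
vam = biexp_smooth(vam_raw, t)           # smoothed rate of ascent

# Right-axis conversion: a VAM maps to a projected Old La Honda
# time via t = gain / VAM (using roughly 393 m of gain).
olh_proj_min = 393.0 / np.mean(vam[50:-50]) * 60.0
```

The noisy raw derivative is unusable on its own; the smoothed trace is what gets plotted as the red points.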

You can see I obviously started way too fast: a 15-minute Old La Honda pace. Some people can do that (Ryan Sherlock beat 15 minutes the weekend before my ride, although Eric Wohlberg has the unofficial self-timed record at 13:50). I cannot. So as the climb progresses, the trend line has my climbing rate decreasing at 3.4% per kilometer. Finally at around 4.5 km I crack fairly badly, my pace dropping even below this decaying trend. But I rallied toward the finish and regained some of that lost time.

Honestly: if I'd known my pace was going to be this far from uniform, I'd have said no way would I have been successful. But I was successful, which tells me that I am able to go out fairly hard, too hard, and still retain enough to do well for this interval. This is important for the San Bruno Hillclimb on 1 Jan, where there are considerable draft advantages for staying with a pack near the beginning, and given my experience here, it seems I can afford to go into the red to hold onto early wheels there. San Bruno is a relatively tactical climb, as there's an unavoidable recovery half-way as the route descends and turns under the main road on which it begins. Then from there the winds can make it very difficult to bridge gaps which may have formed on the first portion. So holding onto a good group is really important.

Anyway, it will be fun to see how it goes. I'm hoping for good weather on Jan 1.

Monday, December 5, 2011

Level of Service Analysis and Bus Rapid Transit

Here's a good story: from the excellent StreetsBlog San Francisco, What's the Hold Up for Van Ness BRT?.

San Francisco has no meaningful intra-city subway, and most of the street car lines were ripped out early in the 20th century. Street cars were mostly privately owned, and they couldn't compete with the massive public support provided to car infrastructure under political pressure from the car companies. So en masse, street lines were paved over, except for a skeletal few: there's a few key MUNI lines, and a few cable car lines which cater to tourists. BART runs through downtown, but just makes a few stops in the city, designed primarily to connect San Francisco to the East Bay. Its original goal of surrounding the Bay was gutted by first Santa Clara County opting out in 1957, then San Mateo County in 1961, each preferring to focus on expressways; Marin dropped out soon after [Wikipedia] (what an unbelievable tragedy).

So San Francisco is stuck with buses for the vast majority of its area as public transit. Yet despite being carless, I take the bus only a few times a year. Why? Because it's incredibly slow and unreliable. If I decide now I want to get somewhere, more often than not I can actually run to a destination before I'd arrive by bus. It wouldn't be uncommon that I could arrive at my destination before I'd even stepped on the bus, since delays of up to 40 minutes for the next bus are not at all uncommon with buses running on a nominal 20 minute interval. Even with an even start I can often outpace the bus by foot. And by bike it's simply no contest.

So San Francisco proposes to give buses priority lanes. This is called "Bus Rapid Transit", the idea being to package bus lines like more expensive "light rail". You step on the bus, it zips down its own lane triggering traffic lights as it goes, and presto-magicko, you're at your destination in no time. Awesome! Public transit problem solved!

Streetsblog image: BRT Geary
BRT Geary (Streetsblog San Francisco)

Solved, except that infrastructure projects must go through environmental impact analysis (CEQA) and a big part of environmental impact analysis is the effect on automobile traffic. This makes sense since automobiles are responsible for an enormous environmental impact: noise, pollution, resource depletion, and public safety. No brainer in this case, right? After all, an efficient bus line will take drivers off the roads by providing a timely, more efficient alternative, reducing the environmental impact of cars! No-brainer, let's go!

But wait! CEQA uses an analysis based on a fixed "level of service" (LOS). The assumption is a certain number of cars will use the road each day, and if you slow these cars down, they'll generate more congestion, their engines idling all the while. With this logic, the more car lanes, the better, as the fixed number of cars on the road will then zip to their destinations with a minimum of "impact".

This is obviously absurd. Build more lanes, more people drive, those who drive drive more often, and when they drive they drive further. This is seen time and time again in the data, in study after study. Cara was in Amsterdam recently, reporting on the enormous number of cyclists using the bike paths, cycling providing clearly superior local transport to cars, which are relegated to narrow roadways. There's a nice video here.

The LOS logic? Pave over the bike lanes: extend the roads by adding lanes. This will have the cars generate less exhaust during their trips. Sure, bike riders will be slowed as they struggle to find space on the public roads, but there's not much environmental impact from a slow-moving bike, while slow-moving cars are spitting out exhaust fumes and making noise every extra second of their trip.

I'm sure the Dutch would laugh with well-deserved contempt at this stupid demonstration of the pathetic state of U.S. public education were we to suggest this to them. And they'd have a point.

The problem with the LOS standard has been well known in San Francisco since the 2006 injunction against all new bike infrastructure in the city, an injunction which was lifted only last year. The bike program, it was decided, needed to provide an environmental impact analysis, an analysis which was not allowed to assume that if you make a city better for cyclists, maybe, just maybe, people would replace car trips with bike trips, reducing the need for car lanes. The bike program was eventually able to survive even that silly standard, and is progressing well today due to the unrelenting political pressure of the San Francisco Bicycle Coalition. We can only hope BRT is able to get past the LOS hurdle as well, and better still that the LOS standard be quickly and definitively revised to recognize that driving a car a certain number of times each day is not an inevitable fate.

Sunday, December 4, 2011

Strava power estimation: Cortland Hurl

The Cortland Hurl is the only significant climb on the SF2G Bayway route. According to Strava, it gains 25 meters in 400 meters, an average grade of 6.3%, although the grade is non-uniform. It starts out fairly gradual, then steepens, then gets gradual again towards the top. I like to make a good effort here when I'm feeling good during morning commutes. Typically I'm behind at the top of the steep bit, but I tend to do fairly well on the final gradual portion. If I'm having a good day, depending on who's there and how they're riding, I have a chance to be first to the top.

I've not ridden with a power meter for a year now. I sort of lost interest: I just like riding my bike and I don't care what the power meter data are, so why carry around a heavy, expensive Powertap wheel? Strava gives me a fairly good idea how I'm doing with its segment timings.

However, in addition to speed numbers Strava also produces power estimates. In fact, it will use these estimates for reporting a rider's best effort over different time intervals. I have suggested to them this is a mistake: only power meter numbers should be used for this purpose, since their estimate is unreliable.

Since I had a lot of data for Cortland (28 rides, plus one I reject because I dropped my keys along the way and had to turn back to fetch them), I figured it would be interesting to compare Strava power to PowerTap power, plotted versus VAM, which is considered a decent surrogate for power-to-mass ratio on climbs. I also compare these with "hand calculations" using the usual power-speed model. Here's the result:

plot

The analytic calculations assume constant speed, constant grade, and no net acceleration. Assuming start and finish speed are the same, they are thus a lower-bound estimate given the assumptions used. I show three sets of assumptions. The first uses a "realistic" estimate for total mass and for CdA, the coefficient for wind resistance power: I set my body mass to 57 kg and my bike mass to 8 kg, then added in 4 kg for equipment, clothing, and what I was carrying on my back. I assume a 0.5% coefficient of rolling resistance. CdA was set to 0.5 square meters. For a second estimate, I eliminated the clothing + equipment mass and reduced CdA to 0.4 square meters, assuming no backpack. In a final estimate I eliminated all wind resistance.
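The model behind these curves is the usual steady-state power balance. Here's a sketch with the "realistic" numbers plugged in; the 90-second effort time is just an example value, not one of my actual runs:

```python
# Constant-speed climbing power (small-angle approximation):
#   P = m * g * v * (grade + Crr) + 0.5 * rho * CdA * v^3
# Numbers mirror the "realistic" assumptions in the text.
G = 9.807    # gravity, m/s^2
RHO = 1.2    # assumed air density, kg/m^3

def climb_power(speed_mps, grade, mass_kg, cda_m2, crr=0.005):
    """Steady-speed power on a constant grade."""
    gravity_and_rolling = mass_kg * G * speed_mps * (grade + crr)
    aero = 0.5 * RHO * cda_m2 * speed_mps ** 3
    return gravity_and_rolling + aero

# Cortland Hurl: 25 m gain over 400 m, ~6.3% average grade.
# "Realistic" case: 57 kg rider + 8 kg bike + 4 kg gear, CdA 0.5 m^2.
dist, gain = 400.0, 25.0
t_s = 90.0                      # example effort duration, seconds
v = dist / t_s                  # implied constant speed, m/s
p = climb_power(v, gain / dist, 57.0 + 8.0 + 4.0, 0.5)
vam = gain / t_s * 3600.0       # m/hr: the x-axis of the plot
```

Sweeping t_s over a range of effort times traces out one of the analytic curves of power versus VAM.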

As can be seen in the plot, the Powertap measurements are almost always higher than the Strava estimates. The Strava estimates start out fairly well aligned with the zero-equipment-mass ("naked") power estimate, except for my fastest runs, where they drop much lower, as low as the zero-wind-resistance estimates. There are also a few Strava estimates which are clearly anomalously low.

The Powertap data fall above even my highest analytic power estimate. This is as it should be: as I noted, my analytic estimates assume uniform speed and grade, and thus underestimate wind resistance near the beginning and end of the segment. Wind resistance is superlinear in speed, so the model's underestimate during faster-than-average portions exceeds its overestimate during slower-than-average portions.

So what do I learn from this? Basically you shouldn't trust Strava power estimates. But if you do care about them, you should make sure clothing + equipment mass is included in bike mass. I had not done this. It all adds up: toolbag, pump, water bottles, clothing, shoes, helmet: you'd probably be surprised at the result if you bundled it all up and put it on a scale.

Monday, November 28, 2011

Low-Key scoring algorithm: addition of variance normalization

As always happens in fall, the Low-Key Hillclimbs have taken up a large chunk of my time, leaving less time for blog posts. But it was worth it: the series was an unqualified success, with every climb coming off well, the last few finding valuable seams in the weather. At Hamilton riders experienced moderate rain on the descent, and for some towards the end of the climb, but it was warm enough that the long descent was still tolerable in the wet.

One aspect of the series worthy of revision, however, is the scoring system. Never before were artifacts in the median-time-normalized scoring more obvious. So for 2012, I am finally overcoming inertia and changing from the median-based scoring we've essentially used since 2006.

I've described in preceding posts a scheme to calculate a reference "effective" time for each climb. With this scheme, instead of taking a median each week, we take a geometric mean, where each rider's effective time (adjusted for the male, female, or hybrid-electric division) is normalized by the rider's "rating", which represents how the rider tends to do relative to the reference times. It's an iterative calculation which is repeated until rider ratings and reference times are self-consistent, with means weighted by heuristic factors to give higher priority to riders who do more climbs, and to climbs with more riders, since these provide better statistics.

Here's a comparison of this approach with the median-based system used this year. I plot on the x-axis each rider's rating and on the y-axis that rider's score for each designated week. In this case I used weeks 5 (Palomares Road) and 6 (Mix Canyon Road). These climbs are at opposite ends of a spectrum: Palomares is short with plenty of low grades, while Mix Canyon is relatively long with extended steep grades.

Here are the plots. I've omitted riders who did only one climb, since for them the score from that one climb equals their rating.

2011 scoring

With the 2011 scoring scheme, you can clearly see a lack of low-rated riders at Mix Canyon relative to Palomares. As a result, moderately-rated riders in particular were given low scores at Mix, since the median rider there was, relative to the entire series, above average (rated over 100). In contrast, Palomares attracted more low-rated riders.

So then I replaced the median time with a reference time, adjusting each rider's effective time by his/her rating. Now you can see the scores for Mix Canyon have been boosted:

reference time

But there's an issue here: the curve for Mix Canyon is steeper. So relatively slower riders score lower, and relatively faster riders higher, than they did or would have at Palomares. So I added a bit of complexity: I compare the spread in scores with the spread in rider ratings and make sure the ratio of these spreads is the same week after week. I call the adjustment factor the "slope factor". The result is here:

reference time + variance normalization

Now the curves line up nicely! Sure, each rider may score in a given week more or less than his rating, but the overall trend is very similar.
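A simplified sketch of what this variance normalization might look like, assuming the target spread ratio is simply 1 (scores spread exactly as much as ratings). The real series code uses weighted statistics, so this is illustrative only:

```python
# A simplified sketch of the "slope factor" adjustment, assuming the
# target spread ratio is 1. Names and data are illustrative, not from
# the actual Low-Key scoring code.
from statistics import mean, pstdev

def normalize_variance(scores, ratings):
    """Rescale scores about their mean so their spread matches the ratings'."""
    center = mean(scores)
    slope_factor = pstdev(ratings) / pstdev(scores)
    return [center + (s - center) * slope_factor for s in scores]

# A "steep" week: the score curve is steeper than the riders' ratings warrant.
ratings = [95.0, 100.0, 105.0]
scores = [90.0, 100.0, 110.0]
print(normalize_variance(scores, ratings))  # [95.0, 100.0, 105.0]
```

Relatively faster riders are pulled down and relatively slower riders pulled up, flattening the steep week's curve onto the others.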

I'll add in the other weeks. First, here's the 2011 formula:

2011 scoring

You can see distinct curves for different weeks. Some weeks a rider of a given ability is more likely to score higher, some lower. This isn't what we're after, as we want riders to have the opportunity to excel on any week.

So I add in the adjusted effective reference time, and then the slope factor, and here's what we get:

reference time + variance normalization

All of the weeks have generally overlapping curves. No more fear of turning out for a tough climb, or a climb in difficult conditions, and having your score buried in obscurity because there's a disproportionate number of fast riders. And no more volunteering for a week only to have your volunteer score end up lower than that of riders you finish ahead of week after week, simply because the median time was relatively long due to rider turn-out.

To me, this system looks like it's working nicely.

Monday, November 14, 2011

week-to-week stability of proposed 2012 Low-Key scoring formula

In two previous posts, I described an attempt to revise the scoring code for the Low-Key Hillclimbs. The scoring has placed a priority on simplicity. At first, we normalized times to the fastest man and woman each week, but then everyone's score was exceptionally sensitive to the fastest rider. Then I switched to using the median time for normalization, first separately for men and women, then combining them with an empirically determined conversion factor for women's times. But while the median is less sensitive to any single individual showing up, the most challenging climbs tend to attract fewer beginner riders, deflating the scores for those weeks. So the alternative approach is to iteratively rate each climb using a reference time based on the ratings of the riders who show up, and assign each rider a rating based on the reference times (and the rider's results) of the climbs they do.

A concern about this approach is that if I use all available information equally, I re-rate each rider and each climb after each week's results. This means scores for previous weeks change each time results for a new week become available, which is in principle an undesirable feature. It could be avoided by freezing ratings for climbs each week, rating only new climbs using results including those which precede them. You might call this approach causal scoring (nothing is affected by future events). However, before accepting such a compromise, I wanted to test whether this is a problem in practice. Obviously if relative scores from previous weeks are highly volatile then tactical decisions become difficult. For example, your previous scores might all be better than the scores of another rider, then you mark him and out-sprint him in this week's climb, but afterwards you've fallen behind in the standings because of a re-evaluation of the reference times for previous weeks. This is something of a pathological example, but it's in principle possible, so it needs to be tested using realistic data.

So I ran the scoring code for 2011 data, which exist for seven weeks of climbs. Two climbs, Kings Mountain Road and Mount Hamilton Road, have not yet occurred.

After week 1, Montebello Road, there is only one climb on which to determine a reference time, so I revert to using the median time. I could also use the geometric mean, which would be closer to what I do when there are multiple weeks, but the median works well so I stick with that. The climb's field is then by definition average: there is no adjustment for the strength of the field.

Then I add data from week 2, Sierra Road. Now we see that some riders did both weeks. On one or the other week, using median times, these riders would score lower (it turns out they generally would score lower on Sierra Road). I then assume that on the week they score lower the average rider was stronger, and adjust the climb reference time so riders who did both, on average, score the same (using geometric means). Then each week other riders are scored relative to these repeat riders. This causes a re-evaluation of the reference time for the first week: it's no longer the median time.
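This week-2 step can be sketched in a minimal, unweighted form. The times below are made up for illustration, and the real code applies statistical weights and iterates further as more weeks arrive:

```python
# A minimal, unweighted sketch of the week-2 step described above: the
# riders who rode both weeks anchor the second reference time to the
# first via the geometric mean of their time ratios. Times are made up.
import math

week1 = {"A": 2100.0, "B": 2400.0, "C": 1900.0}  # rider -> seconds
week2 = {"A": 1700.0, "B": 1950.0, "D": 1800.0}

ref1 = sorted(week1.values())[len(week1) // 2]   # median time for week 1

repeats = [r for r in week1 if r in week2]
mean_log_ratio = sum(math.log(week2[r] / week1[r])
                     for r in repeats) / len(repeats)
ref2 = ref1 * math.exp(mean_log_ratio)

# On (geometric) average, a repeat rider now scores the same both weeks:
scores1 = {r: 100 * ref1 / week1[r] for r in repeats}
scores2 = {r: 100 * ref2 / week2[r] for r in repeats}
```

By construction, the geometric mean of the repeat riders' week-2 to week-1 score ratios is exactly 1, which is the stated goal of the adjustment.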

Now I add week 3, and I can use all data from riders who did at least two of the climbs to rate the fields of riders relative to each other. These riders are used to re-establish reference times for the first two weeks.

And the process continues until I have added data from all seven weeks.

plot 1

Here are the test results. First I plot the ratio of each week's median time to its reference time. If this number is more than 100%, the reference time is less than the median time, and riders will score lower than with the 2011 algorithm; the adjustment means that, according to the algorithm, there were on average more slower riders and fewer faster riders on that climb. The plot shows this ratio for each climb at the end of each week. After one week, there is only one point, for week 1, Montebello, and of course since that climb uses the median time it is at 100%. After two weeks there are two points: one for Montebello and one for Sierra Road (the orange curve). Here Montebello is around 102.5% and Sierra Road around 97.5%, so there were stronger riders at Sierra Road. Week 3 was Page Mill, and that came out between the first two climbs. You can see how each week the reference time for each climb is adjusted, generally upward, since later climbs seem to have attracted stronger riders on average as the series has continued. So each week scores from Montebello, week 1, would tend to drop a bit as the algorithm assigns relatively higher scores to riders with a similar relative placing at later climbs.

This seems like it might be a problem, having things change over time. And this is true for someone with a particular score goal, like 100 points: they may have 100.1 points after Montebello only to find that has dropped to 95.1 points later in the series. But for the standings, all that matters is how points from one climb compare to points from another. For example, if after two weeks rider A, who climbed only Montebello, scored 101 points there while rider B, who climbed only Sierra Road, scored 100 points there, then rider A is ahead of rider B. After week 3 perhaps rider A's score for week 1 drops to 99 points and rider B's score drops to 98 points, but that's okay as long as the gap between the two doesn't change much.

plot 2

So next I plot the ratio of a climb's reference time to the reference time for Montebello. If the two change by the same proportion this ratio doesn't change, and a comparison of riders between the two climbs won't change much. As hoped, this ratio doesn't change much as new results are added to the analysis.

plot 3

The resolution of that plot is limited, so in the next plot I show how much each of these ratios changes after each week of results is added. Using the example of riders A and B, for rider A to keep his 1-point gap over rider B, we want this ratio to be stable to around 1%. From the plot you can see that none of the comparisons between any of the weeks and week 1 changes by more than 0.5%. The biggest change is between week 2 and week 3, but even these change relative to each other by barely over 1%. So scores shifting relative to each other over the course of the series doesn't seem to be a big problem, and the scoring system seems to work pretty well, at least if you don't mind scores drifting a bit together.

Saturday, November 12, 2011

testing 2012 Low-Key Hillclimbs scoring code

I seem to have debugged the new Low-Key Hillclimbs scoring algorithm, so I tested it on 2011 data for the completed first six weeks.

Recall the method is to calculate a rider's rating (not used for the overall rankings) based on the natural logarithm of the ratio of his time each week to that climb's reference time. Meanwhile the climb's reference time is calculated as the average of the natural logs of the times of the riders in the climb, subtracting their ratings. These "averages" are weighted by heuristic statistical weights which assign more importance to riders who did more climbs, and to a lesser extent to climbs with more riders. Each of these factors depends on the others, so the solution is iterated until it is self-consistent, in this case until the sum of the squares of the reference times changes by less than 10⁻⁶ seconds². This took 8 iterations in my test.

To avoid contaminating the results I check for annotations that a rider has experienced a malfunction or wrong turn during a climb, or that he was on a tandem, unicycle, or was running. These factors would generally invalidate week-to-week comparisons for these results, so I don't use them. So a rider whose wheel pops out of true during a climb and is forced to make time-consuming adjustments before continuing won't have his rating penalized by this, assuming that incident makes it into the results data.

All times here are adjusted for division (male, female, or hybrid-electric), as I've described.

week 1 median    = 2149.50
week 1 reference = 2054.26
week 1 ratio     = 104.636%
week 1 quality   = 0.0398
week 2 median    = 1760.50
week 2 reference = 1762.51
week 2 ratio     = 99.886%
week 2 quality   = 0.0096
week 3 median    = 2614.00
week 3 reference = 2559.27
week 3 ratio     = 102.139%
week 3 quality   = 0.0237
week 4 median    = 2057.50
week 4 reference = 2119.96
week 4 ratio     = 97.054%
week 4 quality   = -0.0140
week 5 median    = 1237.50
week 5 reference = 1246.35
week 5 ratio     = 99.290%
week 5 quality   = 0.0310
week 6 median    = 2191.00
week 6 reference = 2322.56
week 6 ratio     = 94.335%
week 6 quality   = -0.0254
Here the week "quality" is the average rating score of riders in the climb. You can see in general the ratio of the median to reference times tracks this quality score, although one is based on a weighted geometric mean, and the other is a population median.

In general less steep more popular climbs (1, 3, 5) have rider "qualities" which are positive, meaning times were somewhat slower, while steeper, more challenging climbs (4 and 6, but to a lesser extent 2) tended to have negative "qualities", indicating riders were generally faster. The exception here is week 2, Sierra Road. While this road is considered cruelly steep by the Tour of California, apparently Low-Keyers have a higher standard of intimidation, and it still managed a positive quality score with a ratio quite close to 100%. It essentially fell between the super-steep climbs and the more gradual climbs.

A side effect of this, even if I don't use this analysis for the overall scores (this year's scoring algorithm can't be changed mid-stream, obviously, although it's tempting, I admit...), is I get to add a new ranking to the overall results: rider "rating". This is a bit like the ratings sometimes published in papers for professional teams: not a statement of accomplishment, but a guide to bettors on who is likely to beat whom. Don't take these results to Vegas, though, as they're biased towards riders who did steeper climbs, which produce a greater spread in scores. I could compensate for this with an additional rating for climbs (how spread out their scores were), but I'll leave it as it is. I like "rewarding" riders for tackling the steep stuff, even if it's only in such an indirect fashion.

For the test, I posted the overall results with the official algorithm and with this test scoring algorithm so they can be compared. One thing to note: only this single page is available with the test algorithm; any linked results use the official scores:

  1. 2011 scoring algorithm
  2. 2012 scoring algorithm

Riders who did both Mix (week 6) and Bohlman (week 4) really benefit from this new approach. Coincidentally that includes me and my "team" for the series (Team Low-Key, even though my racing team is Team Roaring Mouse, which I strongly support).

Friday, November 11, 2011

proposed 2012 Low-Key Hillclimbs scoring algorithm description

The whole key to comparing scores from week to week is to come up with a reference time for each week. Then the rider's score is 100 × this reference time / the rider's time, where times have first been adjusted if the rider is a woman or a hybrid-electric rider. Presently this reference time is the time of the median rider finishing the climb that week. But if riders who would normally finish in more than the median time don't show up one week, as happened at Mix Canyon Road, everyone there gets a lower than normal score. That's not fair. So instead we can do an iterative calculation. Iterative calculations are nice because you can simplify a complicated problem by converting it into a series of simpler problems, where the solution of each depends on the solutions of the others. If you solve them in series, then solve them again, then again, eventually you approach the self-consistent solution of the full, unsimplified problem, which might be too difficult to solve directly. So here's how we proceed:
  1. For each climb, there is a reference time, similar to the median time now. The reference time is the average of the adjusted times for riders doing the climb.
  2. For each rider, there is a time adjustment factor. The time adjustment factor is the average of the ratio of the rider's time for a week to that week's reference time. So if a rider always does a climb 10% over that climb's reference time, that rider's adjustment factor will be 1.1.
We have a chicken-and-egg problem here: the climb reference times depend on rider adjustment factors, and rider adjustment factors depend on climb reference times. We need to know the answer to get the answer. This is where the iterative solution comes in. We begin by assuming each rider's adjustment factor is 1. Then we calculate the reference times for the climbs. Then we assume these reference times are correct and calculate the rider adjustment factors. Then we assume these are correct and recalculate the climb reference times. Repeat this process enough times and we get the result we're after. Once we have a reference time for each climb, we plug it into the present scoring algorithm where we now use the median time, and we're done. The rest is the same.

One minor tweak: not everyone's time should contribute equally to a climb's reference time, and not every climb should contribute equally to a rider's adjustment factor. This is the realm of weighted statistics. Riders doing more climbs get a higher weighting factor, and climbs with more riders get a higher weighting factor. The climb weighting factor depends on the sum of the weighting factors of the riders doing the climb, and the rider weighting factor depends on the sum of the weights of the climbs the rider did, so this is another part of the iterative solution. But this tweak is unlikely to make a significant difference; the basic idea is as I described it.

There's an alternative which was suggested by BikeTelemetry in comments on my last post on this topic: freeze scores for each week rather than re-evaluating them based on global ratings. That I haven't had time to test, but the code for the algorithm described here is basically done; I'm just ironing out a few bugs.
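The two-step iteration above can be sketched in a few lines of Python. This is an unweighted version with made-up times; the real Low-Key code adds the weighting factors and division adjustments:

```python
# A sketch of the iterative scheme in steps 1-2 above, without the
# weighting-factor tweak; times[rider][climb] holds division-adjusted
# times, and the data are made up for illustration.
times = {
    "A": {"w1": 2000.0, "w2": 1600.0},
    "B": {"w1": 2400.0, "w2": 1920.0},
    "C": {"w2": 1500.0},
}
climbs = {c for t in times.values() for c in t}

adjust = {r: 1.0 for r in times}  # rider adjustment factors, start at 1
ref = {}                          # climb reference times

for _ in range(50):  # iterate until (effectively) self-consistent
    # step 1: reference time = average of adjusted times for the climb
    for c in climbs:
        entries = [t[c] / adjust[r] for r, t in times.items() if c in t]
        ref[c] = sum(entries) / len(entries)
    # step 2: adjustment factor = average of (time / reference time)
    for r, t in times.items():
        adjust[r] = sum(t[c] / ref[c] for c in t) / len(t)

# Scores then use the reference time in place of the median time:
scores = {r: {c: 100 * ref[c] / t[c] for c in t} for r, t in times.items()}
```

In this example riders A and B each take 25% longer on w1 than on w2, so at convergence the two reference times settle into that same 1.25 ratio and a repeat rider scores the same both weeks, regardless of who else shows up.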

Wednesday, November 9, 2011

San Francisco: City of Passive-Aggressive Losers

The San Francisco mayor's election was yesterday, and it looks like Ed Lee won it with around 30% of eligible voters voting.

Quoting the San Francisco Examiner, referencing critics:

"...the career bureaucrat would be nothing more than a shill for powerful City Hall insiders. Lee also was dogged by accusations of voter manipulation by an independent expenditure committee that supported the mayor and other backers laundering campaign donations, which prompted a District Attorney’s Office investigation..."

He attracted a huge number of donations, driving up the amount the city needed to pay in public financing. His donations were largely from out-of-city donors, many laundered through low-income workers to circumvent the $500 donation limit. Then there were the nominally unaffiliated supporters, for example those who produced and distributed free copies of the book of his life story. Meanwhile, he violated the law by refusing to disclose details of public contacts within the required time limit. This was so obviously fee-for-service it couldn't have been any clearer.

There was massive fraud in the early voting. Housing managers in Chinatown were reportedly collecting the absentee ballots of tenants and filling them in en masse. Shady "voting booths" were set up where voters brought their absentee ballots for "assistance". "Helpers" were filmed filling out ballots for people, and in other cases stencils were handed out so voters could fill in Lee's slot without the risk of voting for any other candidate. No doubt these absentee ballots were procured with further "assistance".

Lee sat out most of the series of mayoral debates, instead choosing to hang out at bars and get close to his people. His campaign slogan, "Gets it Done", couldn't have embraced mediocrity more, and this in a city that desperately needs vision and leadership to get back on track to fiscal responsibility. Indeed, even the deal he claims to have brokered to get high-income city employees to contribute to their pensions and health care, Proposition C, he himself is gutting by promising these same workers (SFFD and SFPD) a compensating pay raise. Not only does that cover the pension contribution, but it actively increases the city's unfunded liability further by increasing pension payouts, which are proportional to salary. Oh, pity the poor police officers, most of whom make well over $100k/year, and can retire on that with full health benefits at an age when private-sector workers are typically mid-way through their careers, wondering how they're going to be able to retire.

Lee supports Proposition B, which pays for road maintenance with debt. Why is debt needed? Because the budget for road work has been diverted to other things, like the massive city worker salary & benefits budget. Lee, "who gets it done", was the Public Works director from 2000 to 2005, the position most directly responsible for road infrastructure. He's part of the reason the roads are in such sorry condition, and part of the reason Proposition B is claimed to be needed.

Lee is a shill, a puppet, a tool of the money machine. It was clear as can be he had to go. Nobody I've interacted with admits to supporting him, and I even walked door-to-door in Mission Terrace as a David Chiu volunteer. Yet how can someone who is so clearly unpopular, so clearly corrupt, so clearly a tool of outside money, be elected?

Well, it doesn't help that the ten-thousand-member San Francisco Bike Coalition campaigned for him. SFBike endorses candidates based on a member survey on which members were explicitly asked to rank candidates based on responses to a lame curve-ball set of questions, and Lee finished 3rd. Yet surely the stories of corruption and fraud which followed that survey could have tempered the zeal with which SFBike daily bombarded the internet with messages encouraging a position for Lee on the 3-slot ranked-choice ballot? I even asked an officer of the coalition: had Lee made a racist comment, would you continue to support him? He admitted probably not. Yet naked corruption and fraud is okay?

And I'm also sure he got support from many of the 27 thousand city employees, 3% of the San Francisco population. The number of city employees has reportedly increased during Lee's brief tenure as mayor, and he's taken no steps toward the efficiency improvements which are so desperately needed (efficiency means fewer workers, people working harder, and people learning new skills; none of these are popular with existing workers).

But the real culprit here is the lethargy of the voters. Turn-out was weak; it was reportedly very low even among protesters at Occupy San Francisco. Perhaps many of these protesters were from out of the city, I don't know. But if you're going to smugly call for the downfall of the Man in the streets and then not exercise your civic obligation to vote for representative government, you're destroying the integrity of your own message. The reason the banks are able to get away with so much is that voters aren't providing adequate oversight of their elected representatives, and there's no better example of that than yesterday's election. This is especially true because several candidates made an explicit point of supporting and defending the rights of free speech and public assembly of these same protesters.

San Franciscans will continue to whine and complain about how the city is in such a sorry state, how budgets are unsustainable, how public transit is little more than a fiscal tar pit, how there is no vision for how the city is to move forward. Yet many of those same people felt they had more important things to do with their time than participate in yesterday's election. Those people are the definition of passive-aggressive, an affliction of epidemic proportions here. They deserve mediocrity, corruption, and insider deals. They deserve Ed Lee.

If you live in San Francisco and didn't vote, either yesterday or absentee, I extend my finger in your general direction. You, my friend, are what is wrong with this city.

Tuesday, November 8, 2011

Natural Selection Voting Theory

Vote!!!

In nature, if you can't do what it takes to survive, you die, your genes are eliminated from the pool, and someone else takes your place.

Maybe what takes your place is better, maybe not. But if not, it will also die, be eliminated, until eventually something able to do what it takes comes along and so, by this process, things generally improve over time.

This is my theory of voting. Rule #1: if the incumbent isn't doing a good job, vote them out.

Natural Selection

So often in elections I hear about the "lesser of two evils". "I don't like the incumbent XXX, but he's better than YYY." Sorry: the rule of natural selection says I vote XXX out of office anyway. Maybe YYY is even worse. But then I vote YYY out at the first opportunity.

Eventually corrupt and unqualified candidates will stop running. Eventually you get someone good in office.

But if you vote "lesser of two evils", things will never change. You'll always have candidates who suck, just slightly less than the competition. We'll remain mired in the corrupt stagnation which we've had at all levels of government for as long as I remember.

So my first rule is if I don't like the way things are going, the incumbent doesn't get a vote. I pick from the alternatives.

Monday, November 7, 2011

New scoring scheme for Low-Key 2012?

Low-Key scoring has gone through various phases.

In the 1990's, we scored based on the fastest rider. The fastest man and the fastest woman each week would each score 100; those slower would score in proportion to how close their time was to the fastest rider's. This was super-simple, but when an exceptionally fast rider showed up, everyone else would score lower than normal. Additionally, this was frustrating for the fastest rider (typically Tracy Colwell among the men), since no matter how hard he or she pushed, the result would be the same 100 points.

So with Low-Key 2.0 in 2006, we switched to using the median rider (again treating men and women separately). The median is much less sensitive to whether a particular individual shows up or not, so scores were now more stable. However, there was still an issue with women, and most especially with our hybrid-electric division, since smaller turnouts in these divisions again made the score sensitive to who showed up.

So in 2010 I updated the system so all riders were scored using a single median time, except instead of actual times I used "effective men's times", using our history of Low-Key data to generate conversion factors from women's and hybrid-electric times to men's times. Mixed tandems were scored by averaging a men's and a women's effective time.

This worked even better. Now if just a few women show, it's possible for them to all score over 100 points, as happened at Mix Canyon Road this past Saturday.

But the issue with Mix Canyon Road was because the climb is so challenging, and for many it was a longer than normal drive to reach, the turn-out among more endurance-oriented riders was relatively poor. The average rider at Mix would have scored over 100 points during, for example, Montebello (data here). It seems almost everyone who did both climbs had "a bad day" at Mix. That is far from the truth!

There is another scoring scheme I've been contemplating for many years. It's one which doesn't use a median time for each week, but rather compares the times of riders who did multiple weeks to come up with a relative time ratio for each climb. So if, for example, five riders did both Montebello and Mix, and if each one of them took exactly 10% longer to climb Mix, then a rider on Mix should score the same as a different rider on Montebello as long as the Mix rider's time was exactly 10% longer than the Montebello rider's time, once again after adjusting for whether the rider is a male, female, or hybrid-electric.

So why haven't I made this switch yet? It sounds good, right?

Well, for one it's more work for me. I'd need to code it. But that's not too bad because I know exactly what I need to do to make it work.

Another is that it's harder to explain. It involves an iterative solution, for example. I like things which are easy to explain; median time is simple.

But another is it would mean scores for any week wouldn't be final until the entire series was complete. So a rider might celebrate scoring 100.01 points on Montebello, only to see that score drop to below 100 points later in the series. Why? Because the time conversion factor for a given climb would depend on how all riders did on that climb versus other climbs. And it's not as simple as I described: for example if rider A does climbs 1 and 2, and rider B does climbs 2 and 3, then that gives me valuable information about how climb 1 compares to climb 3. In effect I need to use every such connection to determine the conversion factor between these climbs.

But while scores might change for a climb, the ranking between riders during the climb would not. That's the most important thing. Finish faster than someone and you get a higher score. The conversion factor between men and women, for example, would stay the same. That's based on close to 10 years of data, so no need to continue to tweak that further.

I'll need to get to work on this and see if I can make progress. I'll describe my proposed algorithm next post.

Sunday, November 6, 2011

Riding the Diabolical Duo at Mount Vaca



Cara Coburn photo
approaching the Low-Key finish (Cara Coburn photo)

Yesterday I rode the Diabolical Duo at Mount Vaca.

First: Mix Canyon Road. Coordinator Barry Burr did an excellent job organizing this one, definitely the "road trip" ride for many in the 2011 Low-Key Hillclimb schedule. For my car pool it wasn't a big deal: one hour from San Francisco, even stopping for gas. Rides like Alba Road, Bonny Doon, Henry Coe, Jamison Creek Road, and Hicks Road we've done in the past are all substantially further, with plenty more of comparable distance. But most of our riders live closer to San Jose than to San Francisco, and for them the trip was further.

But even from San Jose this trip was worth it. A big part of it was our Strava event: The Diabolical Duo. The Low-Key Hillclimb covered just the first part of this: to complete the Duo, riders needed to also climb nearby Gates Canyon Road.

Inspiration for the Duo event came from The Toughest Ascent Blog. I won't even try to describe these roads: the blog already does an excellent job. All I'll say is they are seriously tough climbs. Take Mix: after a mellow start, it hits riders with a few steep pitches. Wow, that was tough, I thought, and then I hit the chalk "Hammer Gel Time" coordinator Barry had written on the road, marking the beginning of the "tough part". Tough? What had we just gone through?!?!

But what followed was truly impressive. When I hit the tight switchback from the Toughest Ascent Blog, I had to laugh. This was just insane!

Mix Canyon Road

But I got through that in my 34/27. Coming out of these switchbacks, you might expect some relief, but no luck: the road continues steeply until a following switchback. In the climb yesterday we had a photographer in that corner. Cool, I thought, the end of the steepness.

However, after turning that corner my hopes were crushed like a walnut shell under a cycling shoe. Not only was the steep stuff not over, but the road got steeper still beyond this corner. The final part of the climb has been suppressed from my memory by lactic acid poisoning of the short-term memory centers of my brain. It wasn't even the raw grade numbers, but all that we had been through leading into those numbers.

Cara at the happy face marking the approaching finish (Lisa Penzel photo)

But like all climbs, this one finally ended. After the finish line for the Low-Key, Barry had set up a second line further up Blue Ridge Road, a gravel road which Mix Canyon intersects. This road was a bit wash-boarded but ridable even on my 20 mm, 182 gram sew-up tires. It took only a few minutes to hit this second line, the true summit of the climb. But I'd been wasted by my effort to the top of Mix, so my pace here was far from impressive, especially given the distraction of the rough surface.

Returning to the Low-Key finish line at the top of Mix, I was a wasted shell of a human. I was still warm from climbing, but eventually the chilly 12C air penetrated, and I started to shiver. Fortunately I had warm clothes in the volunteer car, so was able to rectify this. I then got some fruit from the event refreshments. I was done.

Done, except that Ammon (with whom Cara and I had carpooled that morning) was waiting for me so we could do the second climb in the Duo, Gates Canyon Road. I couldn't disappoint him, I figured, I had to try.

So eventually we were off back down Mix. Ammon descends much faster than I can, especially with my flaky carbon Edge rims, whose unsmooth braking surface makes them prone to skidding, and my narrow tires pumped to 140 psi hardly inspiring confidence. But eventually I made it down to where Ammon and other riders were grouped. We set off together towards Gates Canyon, a few miles away. It was a nice ride along well-named Pleasants Valley Road.

Gates is interesting. It starts out almost flat, barely ascending from the valley. Then soon after the proper climbing begins, still not steep, one hits the sign marking the unpaved road. This is steep enough that grip is an issue. Ammon had zoomed ahead, and I was riding at that point with Low-Key regular James Porter, and while he was able to ride this, my tire skidded on the gravel. I could have let out some air pressure, but didn't want to risk damaging my rims if the pressure got too low, so I just walked here. It was slow going. James was by now gone.

In sections, the road became just plain dirt. This had not been indicated on the Toughest Ascent Blog, so represented a recent change. I worried about clogging my Speedplay cleats, but they were fine.

But the dirt didn't last too long, and beyond it the pavement was very good, surprisingly good for what seemed to be a road to nowhere. But as soon as the pavement re-appeared, it bent into a disturbing, highly vertical angle. This road was even steeper than Mix had been! I was barely turning my 34/27, grinding away up the ferocious grade.

Many riders reported this had been tougher than Mix. Honestly I hadn't felt that way. Sure, it was steep, but I wasn't in nearly the same hurry, and I could focus on just getting up the thing rather than every second getting the most out of my legs. So here I was substantially less traumatized when I reached the end of the pavement, the end of the Strava segment. I then walked a bit on the gravel road which continued on to Mix Canyon Road, but just far enough to get a good view, then turned back.

As I got ready to descend, a local rider I'd passed arrived at the top, then soon after a group of Low-Keyers descended the dirt road I'd just walked down. The Low-Keyers had climbed to the high point of that road, almost to Mix, but had turned back. One had crashed and was bleeding. Just a flesh wound, though...

The local told us how he climbed Gates Canyon every Saturday, Mix Canyon every Sunday. He lived in Vacaville, he explained, and these were the local climbs. I was in awe. Deja-vu from Maui, where riders would climb 10 thousand foot Haleakala every week or two. No matter how extraordinary a climb, there are those for whom it becomes the ordinary.

We said goodbye to the local rider, descended together to Pleasants Valley Road. The descent wasn't bad. The fine gravel which I couldn't grip was fine descending, and the deeper dirt I could easily run. It was fun. At the bottom we then split up as we rode to our respective rides home.

Ammon, it turned out, had completed the route to Mix Canyon, and descended that instead of Gates. Very cool.

I didn't hear a single rider complain the longer-than-normal drive hadn't been worth it. This day absolutely made the 2011 Low-Key series. If every one of the three remaining climbs were canceled due to rain, I'd still say the series was a success. I'll never forget climbing these two roads.

Results of the Mix climb are posted here.

Monday, October 31, 2011

Low-Key Hillclimbs: over the hump


The 2011 Low-Key Hillclimbs are over the hump, with 5 of the 9 scheduled events in the bag. Each one has had near-perfect weather, with warm sunshine without being hot. It's been supernatural, almost.

Week 1 is always stressful: after a long "off-season", Low-Key returns to Montebello Road. I traditionally coordinate this one, more to take responsibility for the outcome than due to being qualified. Honestly, organization is not my strong point, and every year something gets overlooked. But I've had excellent assistance from Howard Kveck these past few years, and he helps keep things in shape when I stumble. Sometimes there's a bit of next-day revision needed on the results based on email feedback, but in the end we typically get them into fairly good shape. This year things went even more smoothly than normal.

Week 2: a late swap with Barry Burr for week 6 (more on that) had me coordinating Sierra Road, as well. Biggest trick on Sierra Road is the start, which is in the suburbs. But nobody objected to our presence, and it was a great day on this road now made classic by the Tour of California stage race.

Week 3 and I was nervous again, not only for pulling off the climb of Page Mill Road in Palo Alto, but also because it was the first one I'd ride myself. I sort of recruited Janet LaFleur for this one after nobody volunteered from the Low-Key mailing list. And oh, my, did Janet come through! All the organizational skills I lack, she has in plenty. Everything came off with precision. It helped that we started riders in groups of around 15 rather than all together: the lower portion of Page Mill is too narrow, really, to send well over 100 riders at it in one pack. To top off the day, I really surprised myself with how well I rode. I knew my running had been going well: I was putting out good training runs after recovering from my August half-marathon. But would that translate to cycling fitness? For hillclimbing, apparently it did.

Week 4 and we tackled the first of the two brutally steep climbs in the series: Bohlman Road in Saratoga. Well, not really Bohlman, more precisely Bohlman-Norton-Kittridge-Quickert-On Orbit-Bohlman. Again, we had a wonderful coordinator in James Porter, who like all excellent coordinators never gets flustered and never lets things get out of control. I was worried about the record turnout for this road: typically numbers fall off on the super-steep stuff. This time, that wasn't the case, and we maxed out our rider limit. Yet it turned out to be no problem, as residential density on these roads is low, and all of the drivers we encountered were amazingly patient. I think living on such steep roads they accept that high speeds aren't in their plan. So having to go around some cyclists isn't a big deal. Of course initially we filled the entire uphill lane, but soon after starting we turned onto the steep slopes of Norton Road and that strung things out very quickly. It was a great climb: I prefer it to alternate approaches up the mountain due to the steepness of the lower portion, the excellent pavement, and the very low car traffic. For now, at least, I think we'll stick with this version for future rides here.

Week 5 was a recovery week of sorts. This one had been Howard's suggestion: Palomares Road near Fremont and Union City. I had been skeptical: the climb wasn't steep enough or long enough to break up groups, I feared, and so scoring the results would be challenging. But Howard is the results coordinator and it was his call to do this one, so that was his problem! To help, we used small groups (15 or so) of riders as we had at Page Mill. Unlike Page Mill, though, here the first group, of the self-assessed fastest riders, failed to break up. Pretty amazing: it's been too long since I've done a road race, and this had that feeling. Too afraid to pull, I was trapped in the vortex, at the mercy of the pace of those in the front: Tracy Colwell, a renewed and stronger Tim Clark, Nils Tikkanen, Jacob Berkman. Then I was freed from my trance by the 200 "paces" sign: the finish was near. Then it exploded, Tracy and Keith Szolusha off the front, the others scrambling for minor placings. I was fourth.

Amazingly the finish line crew managed to get most of us, and I went to Pat Parseghian's excellent finish video for the rest.

The other groups were less well matched, and the biggest sprint after ours was probably four riders. A few riders looked puzzled when we asked their numbers as they crossed the line (we avoid jersey numbers) until I realized we'd forgotten to mention at the start that riders should shout their numbers at the finish, and people tend to be fairly brain-dead at the end of a hard climb, even a short one.

It was another gorgeous day, and I was super-happy the series was going well. Three more weeks to go, then Thanksgiving @ Mount Hamilton, which is special.

But it's hard to look past this weekend: Mix Canyon Road. By all accounts it stands to be the hardest climb Low-Key has ever done. It's simply inhumane. But I'm the one who put it on the schedule, so no whining allowed from me...

I love how this series magically comes together and works. Most people would say you couldn't do this sort of thing, but every year we do, and every week people have a good time.

Wednesday, October 26, 2011

Instant Runoff Election Simulation: Exhausted Ballots versus vote count in the 2011 San Francisco Mayor's race

The 8 November election in San Francisco will have 16 candidates contesting the mayor's position. The city will, for the first time, use instant runoff voting for the city-wide mayor's election, avoiding the need for a separate runoff election in the likely scenario that no candidate gets at least 50% of the votes.

Instant runoff works by creating virtual "rounds" of voting. Voters get a fixed number of ranked choices on their ballot: they list their first choice, then their second, then their third for the mayor's position. In principle there could be enough choices to rank all candidates (one less than the number of candidates), or more if you want room for write-in candidates. In each round, candidates are ranked by the number of first-place votes they hold, and the candidate (or candidates, in a tie) receiving the fewest votes is eliminated. Ballots whose top choice has been eliminated have their lower-ranked choices promoted until either the top choice is a candidate still in the race or the ballot has no choices left. A ballot with no remaining un-eliminated candidates is considered "exhausted". The rounds continue until a single candidate has a majority of the votes from unexhausted ballots.
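The rounds can be simulated in a few lines of Python; this is a minimal sketch of the procedure just described (real elections add tie-breaking rules I'm ignoring, and the ballots here are toy data):

```python
# Minimal instant-runoff sketch: repeatedly promote ballots to their top
# surviving choice, check for a majority, and eliminate the last-place
# candidate(s). Illustration only; real rules include tie-breaking details.
from collections import Counter

def instant_runoff(ballots):
    """ballots: list of ranked candidate lists. Returns (winner, exhausted_count)."""
    eliminated = set()
    while True:
        # Each ballot's top choice still in the race (None = exhausted ballot).
        live = [next((c for c in b if c not in eliminated), None) for b in ballots]
        tally = Counter(c for c in live if c is not None)
        if not tally:
            return None, len(ballots)  # pathological case: everyone eliminated
        exhausted = live.count(None)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):  # majority of unexhausted ballots
            return leader, exhausted
        # Eliminate the candidate(s) with the fewest votes this round.
        fewest = min(tally.values())
        eliminated |= {c for c, v in tally.items() if v == fewest}
```

For example, with ballots `['C', 'B']`, C's elimination promotes those ballots to B, while a ballot listing only eliminated candidates counts as exhausted.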

People have argued that exhausted ballots are a sign of the failure of the system. They claim that voters who submit exhausted ballots haven't had their voice heard. Of course this is false: the voters have had their voice heard but the candidates they supported failed to garner enough support. However, what an exhausted ballot implies is that among the final candidates in the race, that voter failed to have his preference counted. Not counting write-in candidates, for a voter to be guaranteed that his ballot will not become exhausted, he needs a number of votes equal to the number of candidates minus one. This allows specification of a preference between every pair of candidates, covering every possible final round.

But San Francisco doesn't provide this many votes: not even close. Due, it is claimed, to limitations of the ballots used in city elections, only three votes are offered. I view this as a bit of a farce: of course it is possible to provide space for more than three choices, and there are plenty of examples of other nations which do so. Three is a long, long way from the fifteen choices which would be necessary to avoid ballot exhaustion.

I'll assume voters pick their honest top choices, no matter the perception of candidate viability. Of course this is an incorrect assumption: voters may prefer their ballot not get exhausted early, and so may want to pick at least one candidate considered likely to get a large number of votes. This "safety net" pick would likely be in the final spot on the ballot, since picking it earlier would likely render "long-shot" choices ranking below the pick irrelevant. But I'll assume here that there's no safety net strategy, and voters pick their true preference for every choice. I also assume voters vote for the full number of slots on their ballot: if they have six choices, for example, they don't vote for three and leave the final three slots blank.

In this case, exhausted ballots only make a potential difference if the winner, after all other candidates but one have been eliminated, fails to receive at least half of the total ballots submitted (not just half of the remaining unexhausted ballots). If the winning candidate receives over 50% of all ballots, then even if every exhausted ballot would have had its next vote go to the second-place candidate, that second-place candidate still would have received fewer votes than the winner. And that's an extreme case: it would be virtually impossible that every one of those voters with exhausted ballots preferred the loser over the winner. So unless the number of exhausted ballots is sufficiently larger than the difference in votes received in the final round by the two surviving candidates, it is very unlikely those exhausted ballots would have switched the result had the voters had more picks.

So how many exhausted ballots is acceptable? I'll toss out a proposal that 1% is okay, assuming there is a "cost" associated with putting more choices on the ballot. But more than 1% and I think it's fair to say the reduced number of votes per ballot is seriously in danger of affecting the outcome.

If I want to estimate how many votes I need per ballot, with 16 mayoral candidates, to keep the exhaustion rate below 1%, I need to make further assumptions. With 16 candidates, if one candidate is vastly more popular than the rest, then he'll get most of the first-place votes, and no ballot will be exhausted, since the election ends on the first virtual round.

In the other extreme, if each candidate is equally popular, such that a voter chosen at random will have ranked candidates in essentially a random order (assuming candidate preferences are uncorrelated, which is clearly unrealistic), then the fraction of ballots which will be exhausted can be calculated fairly easily: suppose there are C candidates and V votes per ballot. Then the number of ways to vote for the C candidates with V votes = C! / (C ‒ V)! ("!" is the factorial operator), while the number of ways to vote without including either of the final two candidates = (C ‒ 2)! / (C ‒ V ‒ 2)!, assuming C ‒ V > 1. Therefore the probability a random ballot will be exhausted =

[ (C ‒ 2)! (C ‒ V)! ]/ [ C! (C ‒ V ‒ 2)! ].

This can be simplified to the following, eliminating the factorials:

(C ‒ V) (C ‒ V ‒ 1) / [ C (C ‒ 1) ]
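The simplification is easy to sanity-check numerically; here's a short Python verification of the algebra (just a check, not part of the simulation itself):

```python
# Verify that the factorial expression and its simplified form agree,
# and evaluate the 16-candidate, 3-vote San Francisco case.
from math import factorial

def p_exhausted(C, V):
    # Probability a random ranking of V of the C candidates misses
    # both of the two finalists (requires V <= C - 2).
    return (factorial(C - 2) * factorial(C - V)) / (factorial(C) * factorial(C - V - 2))

def p_exhausted_simple(C, V):
    return (C - V) * (C - V - 1) / (C * (C - 1))

# With C = 16 and V = 3: 13 * 12 / (16 * 15) = 0.65, the 65%
# worst-case exhaustion rate for equally popular candidates.
```

Note that with V = 14 the formula gives 2 · 1 / (16 · 15) ≈ 0.8%, consistent with 14 votes being needed to get under 1% in the equal-popularity case.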

But this case is unrealistic. Some candidates are more popular than others: the difference in votes isn't just due to randomness. So I'll make an assumption somewhere between the two cases of one super-popular candidate and all candidates equally popular. I'll assume the most popular candidate gets 20% of the first-place votes. Then the second candidate gets 20% of the remaining votes. Then the third candidate gets 20% of the votes remaining after votes have been assigned to the first and second candidates. Etc. So the most popular candidate gets 20% of the first-place votes. The second-place candidate then gets 16% of the votes. Third place then gets 12.8% of the votes. This goes on to the least popular candidate, who gets 20% of the votes which haven't been assigned yet to that point, which is 0.7% of the total. If a voter has given his vote to none of the candidates after this round (2.8% of them), I try again starting with the most popular candidate. For second-preference votes and beyond, I play the same game, except a voter can only vote for each candidate once. So less popular candidates have a better chance of getting lower-ranked votes than they have of getting first-place votes.

This vote distribution is simplistic, obviously. In a real election there will be pairs of candidates who will be close to each other: the gaps between candidates won't be so uniform as they are in this model. But that's not so important. What's important is that the votes tend to be clumped toward the head of the field, rather than distributed uniformly over all candidates.
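In Python, generating a ballot under this model might look like the following sketch (the 20% parameter and the retry-sweep are as described above; the function name and defaults are mine):

```python
# Generate one ranked ballot under the geometric popularity model:
# each slot is filled by sweeping candidates from most to least popular,
# picking each not-yet-chosen candidate with probability p. If a sweep
# ends with no pick, sweep again.
import random

def random_ballot(n_candidates=16, n_votes=3, p=0.2):
    ballot = []
    for _ in range(n_votes):
        while True:
            pick = next((c for c in range(n_candidates)
                         if c not in ballot and random.random() < p), None)
            if pick is not None:
                ballot.append(pick)
                break
    return ballot
```

Candidate 0 (the most popular) then collects a bit over 20% of first-place votes, since the failed sweeps wrap around and re-offer the popular candidates first.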

So I ran 100 thousand randomized votes using this approach, and I compared it to the "worst-case" where each of the 16 candidates is equally popular. Here's a plot of the percentage of exhausted votes versus the number of votes per ballot.

simulation results

The result is that for the equally popular candidates you need 14 votes per ballot to keep the exhaustion rate down to 1%. This is only one away from the 15 needed to fully rank the 16 candidates. For the candidates with different popularity, you need 9 votes to get the exhaustion rate down to close to 1%. In either case, allowing only 3 votes per ballot results in high exhaustion rates: 65% and 25% respectively. With only 3 votes per ballot, the number of votes is affecting the outcome: either a lot of ballots get exhausted, or voters, to avoid ballot exhaustion, deviate from their true preferences by engaging in the self-fulfilling prophecy of voting for whom they think are "viable" candidates.

And since everyone agrees interim Mayor Ed Lee is a viable candidate, I don't like where this leads.

So in summary: I really like ranked choice voting, but surely we can do better than this. With 16 candidates, I want at least 9 slots to rank candidates, and would prefer 15, but would even live with as small a number as 6. 3 is obviously way too few, however.

Thursday, October 20, 2011

San Francisco Mayor's election votes

I've been following the mayor's campaign as I've been able, and these look to be the candidates who will get my three votes:

First is David Chiu. I wrote about him yesterday, about his ride on SF2G. David's been on my virtual ballot all along, either first or second. I don't think we agree on much on the proposition ballot, to be honest. I take a fairly hard line on bonds, while he was a principal supporter behind Prop B (street maintenance bond). So I asked him about this directly at the Potrero Hill street festival, noting that my "undergraduate-level economics" tells me funding ongoing maintenance with debt is a bad idea. He agreed, but claimed our present situation is an exceptional emergency, and the bond is needed to avoid much higher costs downstream. I still question the city's discipline to remedy the revenue imbalance when bonds are provided as a cop-out, but I respect his response. We clearly disagree on Prop D, public employees funding pensions. To me it's clearly superior to C, which Chiu supports. But as I noted I'll vote for both, because I don't want vote splitting to leave us with neither. But we agree the size of city government is a serious liability for San Francisco, especially given the city's virtual inability to prune out unproductive employees. What seals the deal with Chiu is our shared perspective on the critical role non-automotive transport plays in quality of life, that the best cities in the world are those that provide the best environment for pedestrians and cyclists as opposed to car drivers: New York, Boston, the great cities of Europe. In contrast, cities which dedicate the most resources to cars (wide roads and large parking lots) are among the most unattractive: think Los Angeles, Atlanta, or Phoenix, Arizona. And he was a serious proponent of bringing the 2013 America's Cup to San Francisco, despite the NIMBY forces who prefer that San Francisco be a bedroom community from where they can live their private lives as unencumbered by other people as possible.

Next is Herrera. Initially there was no way I was going to support him. As city attorney I felt he'd dragged his heels when the city's bike program was blocked by Bob Anderson's lawsuit claiming that the city should have filed an environmental impact report (EIR). I felt, as did at least one member of the Board of Supervisors, that a "safety case" could be presented for many if not all of the projects which would have exempted them from the EIR. But Herrera's view was that the EIR should have been filed from the beginning, that he'd advised one be filed, and since the bicycle program had neglected to do so there was no reason for his office to scramble to argue for projects one-by-one. Okay, so that's water under the bridge. I think Herrera clearly supports the bike program as it moves forward from here, the EIR done and approved. More importantly he seems an organized, businesslike guy who has what it takes to keep the city's business plan moving in the right direction. And he gets points for having ridden with SF2G, which I unfortunately missed. Herrera is also endorsed by the Potrero Hill Boosters Club and the Potrero View newspaper, both representing my neighborhood.

And my #3 is Jeff Adachi. Like Herrera, Jeff also seems intelligent and business-like. He's the one behind Prop D, and has been a long-time proponent of getting the public employee pensions under control. I really admire his stand here, not out of some Republican-like vitriol against public workers, but because the public pensions are so clearly disproportionate to those in the private sector and because they are so clearly unsustainable economically. Politicians are all too ready to push off liabilities onto the next generation, and the public pensions are a clear example of this. Jeff also seems to take a reasonable stance on other issues and has handled himself very competently in the debates I've seen.

So that's it... my top 3:

  1. David Chiu
  2. Dennis Herrera
  3. Jeff Adachi

Filling out my top ten are Dufty, Avalos, Alioto-Pier, Rees, Ting, Hall, and Yee, roughly in that order.

Of these candidates Hall is an interesting case, clearly the most conservative of the bunch. For example, while most candidates are against Adachi's Prop D as either unenforceable or going too far, Hall argued it doesn't go nearly far enough. It could be argued, given our fiscal issues, he's the candidate we need even if he's not the candidate we want. But I'm not quite ready to go there, not just yet.

Another interesting candidate is Baum. I like a lot of her views, for example her stance against Prop B, but she's essentially a full-blown socialist running as a Green. Since the American Green party was an off-shoot of the Socialist Party, this isn't atypical. I'm a big supporter of the Green Party's environmental agenda but its socialist agenda is naive. For example, Baum wants a city-run bank and a massive increase in public housing. I simply don't trust this city to efficiently and effectively provide high-value services. I can't think of any examples, none, where a major government in this country has done so in the past. I'm not sure if I'd vote for Baum or Lee in a head-to-head. Maybe I'd write in Tony Kelly instead. Lee's allegedly corrupt, sure, but Baum's programs would be a magnet for future corruption.

Avalos gets big points for his strong support of cycling in the city, and rode with SF2G. He also gets points for supporting the right of the Occupy San Francisco protesters to hold their protest (under Lee there have been open police raids against the protesters). But he also has a socialist tendency which I don't think is consistent with the realities of city government.

So what's the forecast? This election has turned into an Ed Lee versus the world contest. Lee seems like a nice guy to most voters. He hasn't totally screwed up anything obvious. And he was even endorsed by the San Francisco Examiner as their #1 pick while also making it into the Chronicle's second tier behind David Chiu. But I don't trust him. Soon after announcing his entry into a race he'd previously promised not to enter to avoid conflict of interest (he was appointed interim mayor when previous mayor Gavin Newsom moved up to state lieutenant governor), he quickly raised more money than all of the other candidates had, combined. Many of these donations were suspect: contributions at the donation limit coming from lower-middle-class donors. For example, employees of a shuttle bus company which had benefited from a questionable decision on shuttle parking at SFO were later reported to have been coerced by management into donating, a money laundering scheme since unveiled. Lee eventually returned the donations, but only well after it was obvious something was amiss. Additionally, Lee's office has failed to disclose city contracts to the ethics commission within the 5-day window required by law: according to Dennis Herrera, Lee had missed this deadline 67 times as of 4 October. So Lee stinks of alleged corruption. Despite this, he is strongly supported, including by the same San Francisco Examiner which is reporting these stories. In elections, especially local elections, the inertia of incumbency is very, very strong. Voter ignorance and laziness are rampant. And while most potential voters sit out local elections, too many vote without adequate preparation, rubber-stamping incumbents based on a lack of obvious reason to vote against them. That inertia, especially with such a diverse field splitting the remaining vote, will be very hard to beat.

However, I really hope it is beaten. Any one of the other 11 candidates (those invited to the KQED/League of Women Voters debate) would be preferable to Lee and his record of alleged corruption.