Posts

Showing posts from April, 2014

algorithm for variable-lifetime maximal power curve derivation

The traditional problem with maximal power curves is that they may contain rides so old that the power generated in them has little relevance to the present. So it's useful to truncate the data: to set an age limit on the points used. But there may be a considerable data set to process for this purpose, and recalculating a maximal power curve from scratch makes little sense. When calculating a curve from an entire data set, you start at the oldest activity and calculate the maximal powers for each duration for that set, and that's it. For each additional activity you compare the average power for each duration, extending the existing maximal power curve to longer durations if needed (constant work), then adding in points from the newer activity (extending that if needed). But with an adjustable age limit, it gets more complicated. Instead I need to retain, for each duration, all points which are the maximum power for their age or less. These numbers can be saved, for example, as a
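One way to maintain that per-duration set is as an age-power frontier: a point survives only if no newer point has higher power. A minimal sketch in Python (function and variable names are hypothetical, not from the original post):

```python
def retained_points(points):
    """For one duration, keep only the points which are the maximum
    power among all points of their age or newer.

    `points` is a list of (age_days, power_watts) tuples, ages relative
    to the present.  Returns the retained points, newest first, with
    strictly increasing power: for any age limit, the first retained
    point within the limit is the maximal power."""
    best = []
    # Scan from newest to oldest (ties broken by higher power first);
    # a point survives only if it beats every newer point's power.
    for age, power in sorted(points, key=lambda ap: (ap[0], -ap[1])):
        if not best or power > best[-1][1]:
            best.append((age, power))
    return best
```

With this structure, changing the age limit is just a matter of truncating the retained list, rather than rescanning the full activity history.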

Maximal Power curves: thoughts on deweighting, depreciating, or retiring old data and the "do no harm" principle

Back in January and February I had a series of posts here on fitting maximal power curves using a heuristic model and a weighted nonlinear least-squares fitting procedure to do an envelope fit: the curve passing through a series of "quality" power points rather than through the middle of the point cloud. The argument was that we don't produce our best possible power for all durations, but rather typically for a few durations, and so to predict what our maximal power is for a given duration, we need to interpolate or extrapolate based on the durations for which our efforts represented the best we could do. The weighting scheme was to assign a high weight (for example, 10 thousand) to errors of points falling above the modeled curve, and a lower weight (1) to points falling below the curve. This caused the curve, after an appropriate number of iterations, to float to the top of the data point cloud, where it would essentially balance on the points consistent with the
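The asymmetric-weighting scheme can be sketched as iteratively reweighted least squares. This uses an illustrative log-log linear model, not the heuristic power-duration model from those posts, and the weights (10,000 above, 1 below) are the ones described above:

```python
import numpy as np

def envelope_fit(t, p, n_iter=50, w_above=1e4, w_below=1.0):
    """Asymmetric weighted least-squares 'envelope' fit: points above
    the modeled curve get a large weight, points below a small one, so
    the curve floats to the top of the point cloud.  The model here
    (linear in log-log space) is purely illustrative."""
    x, y = np.log(t), np.log(p)
    w = np.ones_like(y)
    A = np.vstack([np.ones_like(x), x]).T
    for _ in range(n_iter):
        # weighted linear least squares: minimize || sqrt(w) (A c - y) ||^2
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
        resid = y - A @ coef
        # reweight: errors above the curve count ~10000x more
        w = np.where(resid > 0, w_above, w_below)
    return coef  # (intercept, slope) in log-log space
```

After a few iterations the fitted line settles onto the upper points, with the sub-maximal efforts left well below it.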

Dan Martin's crash in Liege-Bastogne-Liege

Yesterday, full of self-loathing, I sat addicted to my laptop watching the final 45 km of Liege-Bastogne-Liege. I can't help it. I'm addicted. But the spring classics are done now, right? I'm free, right? I'll be able to resist the daily lure of the Giro, right? Sigh. But it was an exciting finish. Caruso and Pozzovivo got a gap which looked like it might just hold... then Daniel Martin of Garmin-Sharp, last year's winner, bridged up, passing the weaker Pozzovivo and just about reaching Caruso. One more corner, then 300 meters to the repeat victory... But amazingly, he crashed in the corner. Jonathan Vaughters, his manager, reacted: Heartbreak. — Jonathan Vaughters (@Vaughters) April 27, 2014 Here's the video on YouTube . As reported by CyclingNews , Daniel's post-race comments were: “It’s one thing to make a mistake or know what you’ve done but we figure that there’s a patch of oil or something. I think I had tears in my eyes before I even h

testing speed limit algorithm with real data

In a previous post I proposed an algorithm for automated detection of speed limit violations from GPS data. Then in a later post I tested this on simulated data. Here I apply it to data which was collected by someone else during a short drive in San Francisco. I've assumed a general 40 kph speed limit within the city (also previously here), except on the interstate highways. This car trip didn't include any interstate highways, so it will be interesting to see how it would do with such a restriction. Here's the speed versus time. As you can see, it was a short trip, only around 8 minutes: It's mostly slow going, except for one stretch on Bayshore Boulevard, which is faster. That portion is generally 50 to 65 kph. Sure enough, this triggers the algorithm as speeding for a 40 kph limit. I assign a fine in dollars equal to the excess distance ahead of a "pace car" beyond the 50 meters allowed by the algorithm, at a rate of one dollar per 100 meters. So if
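The fine rule is a direct transcription: excess distance ahead of the pace car, beyond the 50 m allowance, at $1 per 100 m. A one-function sketch (names hypothetical):

```python
def speeding_fine(gap_m, allowance_m=50.0, dollars_per_100m=1.0):
    """Fine in dollars for a driver whose lead over the virtual pace
    car is gap_m meters: only the excess beyond the allowance counts,
    at one dollar per 100 meters of excess."""
    excess = max(0.0, gap_m - allowance_m)
    return excess * dollars_per_100m / 100.0
```

So a driver who pulls 250 m ahead of the pace car owes $2, while one who never exceeds the 50 m allowance owes nothing.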

San Francisco anti-smoking law and sidewalks

Waiting in line last night on the sidewalk for the Bicycle Film Festival, people tightly packed along the building waiting for the doors to open, a chatty fellow behind me lit up a cigarette. Cigarette smoke is something I'm rarely exposed to for more than a few seconds, and despite the steady wind, the smell of the fumes was physically sickening, not to mention a tangible health risk. In fact, just typing this the next morning reminds me of the queasy feeling in my stomach as I tried my best to maximize my distance from him without losing my place in line. What to do? Politely ask him, as a favor, to stop? Inform him that such behavior isn't acceptable (that option probably doesn't work so well, in my experience)? I probably should have done the former. But instead I waited it out, resolute that if he tried another, I'd speak up. Fortunately one was enough for him. San Francisco likes to vigorously pat itself on the back for being anti-smoking, and indeed i

letter to Caltrain on bike capacity with electrification

electrified Caltrain rendering With the Caltrain electrification process slowly, oh so slowly, moving forward, the organization is making important decisions about infrastructure investment which will affect capacity for decades. It's much easier and cheaper to do things right the first time than to try to remedy poor decisions later. For example, when the Gallery sets were purchased in the 1980s, the cars were provided with only a single boarding site per train car, creating a choke point for boardings and deboardings which slows the train at every one of the many stops along the line. And when the Bombardier sets were introduced in the 2000s, on-board bike capacity available with that car design was substantially restricted, which, given the strong demand for bikes on board, has made these newer, nicer cars unsuited for the express trains which handle the peak load during commute hours. Now Caltrain is moving ahead with electrification, and it's important les

First Noon Ride of the year: Old La Honda

On Wednesday I did my first Old La Honda since last November. This was a bit of an immersion lesson in cycling again. I'd been so focused on running in preparation for my trail race 10 days before, I'd only started doing any "training rides" the previous Friday. On that ride, doing my favorite combination of Montebello-Peacock Court, my legs felt like sludge and I pushed my old Trek 1500 to the top of the climb in over 7 minutes, an unimpressive time. I felt fat and slow. But things got a bit better from there. On Saturday I went to the Headlands for four Hawk Hill repeats. Sunday, mountain biking plans fell through and instead I did a solid 20 km training run. Monday was just basic commuting, but then Tuesday was my first SF2G in months, and my first Skyline route since last year. That went well, but instead of energizing me for the day, I felt depleted. But I felt well enough by Wednesday to indulge my urge to do the Noon Ride for the first time since 13 N

Training metrics and post-Woodside Ramble recovery

Today, as I'm typing this, is the Boston Marathon. If I'd been 40 seconds faster @ CIM 2012, I'd have had a qualifying time and maybe I'd be running the streets of Boston instead. But rather than running Boston, I'm a week past meeting a New Year's resolution: to run my first ultra, the Woodside Ramble 50 km, last Sunday. I'd had a training plan for the race, and it crashed and burned when, 3.5 weeks out, oral surgery left me fatigued and prone to allergies. My running came to a virtual halt for 12 days, leaving just enough time for a brief test run and 3 decent volume days before I had to taper for the final week before the big day (indulging in a 17-km test run 3 days prior). This completely blew my plan to do big volume up to 2 weeks before, then taper in: maintaining CTS the second-to-last week, then letting it slip slightly that last week to come in with an optimized combination of freshness & fitness. CTS is a metric of chronic training stress,
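A common formulation of a chronic training stress metric is an exponentially-weighted moving average of daily training load; for a simplified running version, daily distance can stand in for stress. The 42-day time constant below is a conventional choice (cf. TrainingPeaks' CTL), assumed here rather than stated in the post:

```python
def update_cts(cts, todays_km, tau_days=42.0):
    """One day's update of a simplified CTS: an exponentially-weighted
    moving average of daily distance in km/day.  tau_days is the
    assumed time constant, not a value from the original post."""
    alpha = 1.0 / tau_days
    return cts + alpha * (todays_km - cts)
```

Run daily, this converges toward the average daily volume: hold 10 km/day long enough and CTS approaches 10, while a 12-day halt lets it decay noticeably.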

Garmin Forerunner 610: 1 second mode isn't for ultras

In the Woodside Ramble 50 km my Forerunner 610 was powered up and recording for almost exactly 5 hours before it powered down, 4:47 into the race and 32 minutes before my finish. This was, needless to say, a downer. The watch is rated for 8 hours, not 5. So what went wrong? Battery fatigue? False marketing? Accelerated draw due to a challenging GPS environment? A personal curse that I shall always get sucky batteries? Well, this last option has proven likely with various cell phone and laptop batteries in my personal history. But in this case, I stumbled across the simpler explanation when reviewing DCRainmaker's "in depth review" of the Forerunner 610 . One-second mode. I had the unit on one-second mode to give better time-resolution on Strava segments. I looked at the Smart Sampling mode for the Forerunner 610 here . Sampling times blew out to as long as 7 seconds in smart sampling mode (a few longer, but those intervals may be due to signal loss), while

Woodside Ramble 50 km report

After months of carefully tracking my training metrics to ramp my volume up to where I thought it needed to be for the Woodside Ramble 50 km race, my first race over marathon distance, my training had been diverted off-course by 12 days of fatigue after a relatively innocuous tooth extraction. This left me just a week and a half until the race. So after a test run on a Wednesday, I did a solid 3 days on Thu-Fri-Sat, the last of these a 31.5 km run through the Marin Headlands. This was a very important test for me, as it included 1300 meters of descending (and, less importantly, climbing), and my legs survived running all of it. This was 90% of the descending I'd need to do in the 50 km, where I'd have the additional advantage of fresher legs. But that was it: my last chance for training. Instead of a controlled taper in the last two weeks, I had one week to get my legs into something resembling race shape. I did a series of short runs until Friday, when feeling fat an

Paris-Roubaix statistics

Results for Paris-Roubaix, which was yesterday, are available on CyclingNews. I was generally occupied getting to the Woodside Ramble for my 50 km race, so I missed it, catching only the final results before my race began. My hot pick Taylor Phinney flatted out of contention on the Carrefour de l'Arbre with 15 km remaining, so much for prognostications. But I find it interesting to compare some team and nation stats that aren't reported by the organizers. For each team and nation, I calculate the number of starters (8 for teams), finishers, points, and time, using a Perl script to parse the CyclingNews results, which are the most parseable on the web. For points, I use the placing (reported by CyclingNews) of each team's top 3 placers, and sum. For time, I take the sum of the times of the top 3 finishers relative to that of the winner, Niki Terpstra. Some interesting results: while United Healthcare finished last in both points and time, they were tied for first with mo
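The original scoring was done with a Perl script; the same top-3 scoring can be sketched in Python (the data layout here is hypothetical, and a real version would first parse the CyclingNews results into it):

```python
from collections import defaultdict

def team_stats(results, top_n=3):
    """Score teams (or nations) by their top-N finishers.

    `results` is a list of (rider, team, placing, seconds_behind_winner)
    tuples for finishers only.  Returns {team: (points, time)} where
    points is the sum of the top-N placings (lower is better) and time
    is the sum of the top-N gaps to the winner."""
    by_team = defaultdict(list)
    for rider, team, place, gap in results:
        by_team[team].append((place, gap))
    stats = {}
    for team, finishers in by_team.items():
        finishers.sort()               # best placings first
        top = finishers[:top_n]
        stats[team] = (sum(p for p, _ in top), sum(g for _, g in top))
    return stats
```

Running the same function with nation in place of team gives the nation standings.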

Low-Key sticker design: revision

One day until my 50 km race... After my 12-day out-of-action period following my oral surgery, then 4 days of running, then a last-week taper, I feel woefully fat and out of shape, and indeed I'm a solid 2 kg over my cycling "race weight". Some of this is probably leg muscle from running: my legs are looking a bit bigger. But that's not all of it. I definitely need to lose that weight before the Diablo hillclimb on 11 May. As a distraction, some sticker design revisions. First, I updated the square design to provide two options, one with a smaller cyclist, a steeper hill, and a squarer aspect. That freed room on the 2.13 inch by 2.75 inch template for a text sticker and additionally a cyclist-only sticker. Then a design for a circular sticker. The black border is not part of the sticker. The "sky" is transparent: for some reason I'm not able here to use my usual trick of putting a colored table behind the transparent image to change the backgroun

Low-Key Hillclimbs sticker design

I'm working on a possible sticker design for Low-Key Hillclimbs. Here's the design. It has a transparent region, so it's important that it has decent contrast against any color bike. So I preview some colors here (if the backgrounds don't work, check this link). Maybe I can get these printed up by Sticker Guy or someone similar. The smallest size StickerGuy sells is 2.75 inches by 2.13 inches. But I could always do 4 stickers on one die. Then I'd have 1.375 inches by 1.13 inches, plus a margin, getting the printed area down to something more suitable for a down-tube. So here's how that would look:

Soquel Demo Forest "flow trail" project

A friend of mine is working on this project. Really cool: a "flow trail" in Santa Cruz. Honestly I thought the US was way too litigious for anything like this to come together, and there was too much anti-bike sentiment among California NIMBYs: I thought you had to cross the border up north, to Vancouver maybe. They're competing for a grant from Bell Helmets. Consider voting for them. Their project page is here . Rumor is we may be able to ride it in the opposite direction for Low-Key Hillclimbs at some point. While you're on the Bell site, make sure to also check out the video for the Stafford Lake Bike Park .

low-drop pro bike

Cyclismo-Espresso showed this photo of a United Health Care bike with a Pioneer power meter mounted. Okay, big deal: I've seen Pioneer power meters on pro bikes before. Apparently they work well enough. What interested me was the remarkably modest handlebar drop. Okay, riders are sometimes limited by the geometry of commercially available frames, but this one has a -6 degree stem and even a spacer under that. This one would get the big reject from SlamThatStem. So I measured the drop, by the following sequence: Load the photo into GIMP. Level the wheels (I used the tops of the wheels for this; I had to rotate the photo by 1.0 degrees, according to the GIMP measurement tool). Measure the height of the front wheel, which gives me a conversion between distance and pixels: I know the rolling circumference of the wheel is around 210 cm, so this height, in pixels, yields a conversion factor. Put horizontal guides at the "saddle point" of the
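The pixel-to-distance conversion can be sketched in a few lines: the wheel's height in the leveled photo is its diameter, which is the rolling circumference divided by pi. Function names and the example pixel values are hypothetical:

```python
import math

def cm_per_pixel(wheel_height_px, rolling_circumference_cm=210.0):
    """Scale factor from the photo: the wheel's height equals its
    diameter, i.e. circumference / pi (~66.8 cm for a 210 cm roll-out)."""
    diameter_cm = rolling_circumference_cm / math.pi
    return diameter_cm / wheel_height_px

def handlebar_drop_cm(saddle_y_px, bar_y_px, wheel_height_px):
    """Drop between the saddle-point and bar-top horizontal guides, in
    cm, from pixel y-coordinates (y increasing downward, as in GIMP)."""
    return (bar_y_px - saddle_y_px) * cm_per_pixel(wheel_height_px)
```

For example, with a wheel measuring 334 pixels tall the scale is about 0.20 cm/pixel, so a 40-pixel difference between the guides is roughly 8 cm of drop.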

Contador: 2241 VAM @ Vuelta al Pais Vasco stage 1

Yesterday at stage 1 of the Vuelta al Pais Vasco, Alberto Contador won, finishing 14 seconds ahead of Alejandro Valverde and 34 seconds ahead of Michal Kwiatkowski. The route included 8 rated climbs, including two ascents of the steep Alto de Gaintza, which gains 290 meters in only 2.3 km (see stage preview). Four riders have uploaded the stage to Strava. Here's Kenny Elissonde's activity. He was 40th, @ 2:42 down on Alberto. Here's the report on Alberto's time up the last ascent of Alto de Gaintza, the final climb of the race: #Itzulia , Stage 1. Alto de Gaintza (last 1.69 km, 15.33 %, 259 m) Alberto Contador: 6 min 56 sec, 14.62 Kph — vetooo (@ammattipyoraily) April 7, 2014 If these numbers are accurate, that works out to a VAM of 2241 meters/hour. Assuming a CdA of 0.32 m² (recommended by Vetooo based on extensive comparison of VAM and rider-reported SRM data, and coincidentally measured by Tour magazine in a wind tunnel), an air density of 1.15 kg/m³,
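The VAM figure is easy to check from the quoted tweet: 259 m gained in 6:56 (416 seconds):

```python
def vam(meters_gained, seconds):
    """Climb rate (velocita ascensionale media) in meters/hour."""
    return meters_gained * 3600.0 / seconds

def avg_speed_kph(km, seconds):
    """Average speed in km/h over the segment."""
    return km * 3600.0 / seconds

# Alto de Gaintza, final ascent: 259 m gained over 1.69 km in 6:56
# vam(259, 416)            -> ~2241 m/h, matching the figure above
# avg_speed_kph(1.69, 416) -> ~14.6 kph, matching the quoted 14.62 Kph
```

Both of the tweet's numbers are internally consistent with the quoted distance and time.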

training crash & burn, then running the Marin Headlands

Training for the Woodside Ramble 50 km race, while not without hiccups, was going nicely to plan. I'd plotted a gradual increase in my simplified running form of CTS, ramping 0.43 km/day per week, and I was sticking to that schedule, initially by increasing the length of my runs, but later transitioning to an increase in frequency. I was tired, of course, but a good tired. I was pushing my limits, as one must to get them to shift, but it seemed to be working. The problem with simplified training stress metrics is they exclude stress from other sources, and in this case the big ugly stress source was oral surgery. A tooth broke and the dentist gave me the sobering news the next day: it should be removed and replaced with an implant. So the tooth came out, a process involving the expenditure of extremely few kilojoules on my part, but which nevertheless left me very, very tired. Initially I persisted in my plan to run every day. Length didn't matter, speed didn't mat

simulating vehicle speeding detection algorithm

Last time I proposed an algorithm for detecting speeding vehicles. In summary, the algorithm was to set an actual speed limit 5% over the nominal speed limit, then compare the car's position to that of a "virtual pace car" going that speed limit, where the pace car would slow to avoid getting ahead of the car in question. If the driver got 50 meters ahead of this virtual pace car, he'd be fined in proportion to how far ahead he got. If he kept pulling away, the fine would keep increasing. I tried various approaches, but I think the best one is relatively simple. I assume a car starts at rest, then accelerates at 3 m/s² (0.31 g, or 0-60 mph in 8.94 seconds) to some final speed, then holds that speed. There's a posted speed limit of 20 meters/second (72 kph), implying an enforced speed limit of 21 meters/second (75.6 kph), which the driver reaches in 7.00 seconds, having traveled 73.5 meters. If the peak speed is no more than 21 m/s, the driver is never cited. But if the p
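The quoted numbers follow from constant-acceleration kinematics, t = v/a and d = v²/2a:

```python
def time_and_distance_to_speed(v_final, accel=3.0):
    """Time (s) and distance (m) to reach v_final (m/s) from rest at
    constant acceleration: t = v/a, d = v^2 / (2a)."""
    return v_final / accel, v_final**2 / (2.0 * accel)

# Enforced limit of 21 m/s: reached in 7.00 s after 73.5 m of travel.
# 0-60 mph is 26.82 m/s: 26.82 / 3.0 = 8.94 s, matching the 0.31 g figure.
```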

proposed automated speed limit enforcement algorithm

Last time I argued for automated speed limit enforcement using GPS receivers installed in all new vehicles sold. I would be negligent in doing so without at least proposing an algorithm. So the algorithm is this: Determine the present speed limit. If GPS is used to monitor speed, the GPS coordinates would need to be mapped to a street map to determine a local speed limit. This seems complicated, but in many urban areas the speed limit is the same on all roads in a local grid, so you'd basically just need to identify whether the driver might be on an expressway or an interstate based on position and direction. If the speed limit were varying wildly from one road to the next, this would become more complicated, and only the maximum of the local speed limits could be enforced this way, since GPS has only a certain position precision. Set a true speed limit 5% higher than the nominal. This gives some margin for error. GPS doesn't have systematic error nearly this large, but it makes se
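The pace-car comparison at the heart of the scheme can be sketched as follows; this is my reading of the rule as described in these posts, with hypothetical names, where the pace car advances at the enforced limit but never passes the monitored car:

```python
def max_excess_lead(positions, dt, v_limit, allowance_m=50.0):
    """Run the virtual pace car over a trace of car positions (meters,
    sampled every dt seconds, monotonically non-decreasing) at enforced
    limit v_limit (m/s).  Returns the maximum distance the driver got
    ahead of the pace car beyond the allowance; > 0 means a citation."""
    pace = positions[0]
    max_excess = 0.0
    for pos in positions[1:]:
        pace = min(pace + v_limit * dt, pos)  # never pass the car
        max_excess = max(max_excess, pos - pace - allowance_m)
    return max_excess
```

A driver holding a steady 25 m/s against a 21 m/s enforced limit gains 4 m on the pace car each second, so the excess lead (and hence the fine) grows without bound until he slows; a compliant driver's pace car sits right on his bumper and the excess never goes positive.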