Monday, February 23, 2015

Tour magazine light bike test March 2015

Upon returning from a trip to KAUST in Saudi Arabia last week I connected through Frankfurt Airport. I was a bit grumpy because security had just confiscated my package of delicious Jeddah figs, claiming they were a liquid hazard. It wasn't just that I was denied the chance to share what I'd found so delicious during my 6 days at KAUST; it seemed just as wrong for such fantastic fruit to go to waste, all over a ridiculous policy born of international paranoia. But my mood lifted slightly when I spotted, at a newsstand across from my gate, the golden apple of German journalism: Tour Magazine. Despite a painfully long and even more painfully slow line at the stand, I risked being late for boarding to buy it.

And I wasn't disappointed. Edition 2015.03 of Tour had a quantitative review of 4 of the lightest frames you can buy:

  1. Trek Emonda: The Trek Emonda line is actually relatively heavy. Only the top-of-the-line SLX with the pricey vapor coat option is particularly light, and it's spectacularly so. It was the lightest frame in the Tour test, an impressive result given the competition. They tested it with the full-bore weight-weenie version, which breaks the 5 kg barrier w/o pedals.
  2. AX Lightness Vial Evo: The Vial Evo is a stock design from AX Lightness, known for its super-expensive super-light marginally functional components. But when you want the lightest possible bike, you can't let money or functionality get in the way. The Vial Evo actually isn't particularly expensive in comparison to the competition here. It tied for second place on frame mass with the Cervelo RCA.
  3. Cervelo RCA: With the exception of its relatively short-reach geometry, the Cervelo RCA may be the epitome of frame design. In addition to being spectacularly light, it's also fairly aerodynamic, and it continues the Cervelo tradition of pencil-thin seat stays for vertical compliance to yield a smooth ride.
  4. Rose X-Lite: This frame was substantially heavier than the other three, and I wasn't exactly sure why it was in this test. Perhaps it was there for purposes of comparison.

In addition to these Über-bikes, they added 8 bikes "under 1500 Euros": bikes typically outfitted with Shimano 105 or even some Shimano Tiagra, designed for more entry-level riders. It made for an interesting comparison of what you actually get for the mega-bucks, since the super-bikes cost at least 8 times as much.

The interesting thing about Tour is that they use a fixed point scale. So even when they're testing super-light bikes clearly designed for people who put a very high priority on mass, they still use a point scale on which only a minority of the points come from mass. This is arguably okay, but what ended up happening here is that Tour discounted mass almost completely.

This happened because Tour uses a "school grade" system for rating the components of a bike's merit. So instead of reporting mass in some unit, such as grams, they assign it a number from 1 to 4, comparable to grades in a German school. And like German school grades, the ratings don't take arbitrary values, but rather come in a relatively small number of discrete steps.

Here are the mass ratings for the frames plotted versus the total frame mass:

The 4 super-frames are the three symbols rated 1 (the RCA and the Vial Evo are tied, so their points overlap). These go from the Emonda, the lightest, through the RCA and the Vial Evo, to the Rose X-Lite. Ignoring these 4 points, I fitted a straight line through the points representing the cheaper bikes. The coefficients for this fit are shown. The X-Lite falls under the trend line, indicating its 1.0 rating on mass is somewhat favorable. The Emonda, on the other hand, sits above the trend line: it's getting relatively cheated by that same rating.

image
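Here's a minimal sketch of that sort of fit in Python with numpy. The masses and ratings below are made-up placeholders rather than Tour's published numbers, so only the method is meaningful.

    # Hedged sketch: fit the rating-vs-mass trend of the cheaper bikes,
    # then see where the super-frames fall relative to that trend.
    # All numbers below are made-up placeholders, not Tour's published data.
    import numpy as np

    # hypothetical (frame mass in kg, Tour mass rating) pairs for the cheaper bikes
    cheap = np.array([
        [1.45, 2.1], [1.50, 2.3], [1.55, 2.4], [1.60, 2.6],
        [1.65, 2.7], [1.70, 2.9], [1.75, 3.0], [1.80, 3.2],
    ])

    # hypothetical super-frame masses, all rated 1.0 despite differing mass
    superframes = {"Emonda": 0.69, "Vial Evo": 0.78, "RCA": 0.78, "X-Lite": 0.95}

    # least-squares straight line through the cheaper bikes only
    slope, intercept = np.polyfit(cheap[:, 0], cheap[:, 1], 1)
    print(f"trend: rating = {slope:.2f} * mass + {intercept:.2f}")

    # compare the trend-line prediction to the flat 1.0 rating each super-frame got
    for name, mass in superframes.items():
        predicted = slope * mass + intercept
        print(f"{name}: trend predicts {predicted:.2f}, actual rating 1.0")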

Not surprisingly, the X-Lite won the rating contest. If you put more carbon on your frame you can make it stiffer, and Tour magazine loves stiffness. Added carbon also allows compliance to be engineered in while maintaining strength. But it's clear that the rating system is artificially compressing the masses of these four frames into a single value. It's silly, really.

Wednesday, February 11, 2015

petition to ban smoking from San Francisco sidewalks

I've been super-busy with a new research position at the University of California at Berkeley. This has been fun, but it leaves little time for blogging. But I made time to do my first-ever change.org petition, asking the Board of Supervisors to ban smoking on San Francisco sidewalks.

I've written about this before when I was exposed to smoke while waiting in line at the Bicycle Film Festival. Well, it turns out smoking is already banned for waiting in lines on sidewalks, so I was already justified there in asking the smoker to take it elsewhere had I chosen that approach. But in many cases exposure to smoke on sidewalks is just as unavoidable, in particular for small children who aren't free to go where they want. In some cases the child is even in a stroller being pushed by a smoking parent.

This has become a bigger issue for me since I started running more. When running on a sidewalk, running the gauntlet past a group of sidewalk smokers is unpleasant in addition to profoundly unhealthy. There's one particular section, on Stanyan near Haight, where the smoke is particularly bad, and that has nothing to do with Big Tobacco, or any tobacco, for that matter... but it's generally a problem.

It's really time for the acceptance of exposing other people to toxic gases to end. This petition is my little attempt to do something about it. Maybe it will help, maybe not.

Thursday, January 29, 2015

San Bruno Hillclimb: pace comparison 2014 vs 2015

Follow-up to San Bruno Hillclimb report:

I plot here my relative time versus distance for 2015 and 2014. To generate this plot I exploited the fact that I'd hit the lap button while waiting on the start line each time. When I check the profiles starting from the lap presses, they are essentially the same to within approximately 20 meters of distance. However, if I had started the clocks at the lap presses, the times would have been dominated by the delay until the race start in each case. So instead I interpolated the starts from a point 20 meters after the actual lap start.

To match times at distance, I interpolated times onto 20 meter intervals for each year's data. Since the course is close to 6 km (5.96 km here), this yields close to 300 points for the full course. So then at each distance I can calculate the difference in times: the 2015 advantage, which I plot, is the 2014 time minus the 2015 time.
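For the record, here's a minimal version of that interpolation in Python (numpy), with placeholder arrays standing in for each year's distance and time data:

    # Minimal sketch of the alignment procedure described above. It assumes each
    # year's ride is available as cumulative distance (m) and elapsed time (s)
    # arrays already trimmed to the common start point; the arrays here are stand-ins.
    import numpy as np

    def time_vs_distance(distance_m, time_s, course_length_m=5960.0, step_m=20.0):
        """Interpolate elapsed time onto a regular 20 m distance grid."""
        grid = np.arange(0.0, course_length_m + step_m, step_m)
        return grid, np.interp(grid, distance_m, time_s)

    # placeholder data: replace with the real 2014 and 2015 arrays
    dist_2014, t_2014 = np.array([0.0, 3000.0, 5960.0]), np.array([0.0, 540.0, 1100.0])
    dist_2015, t_2015 = np.array([0.0, 3000.0, 5960.0]), np.array([0.0, 530.0, 1085.0])

    grid, t14 = time_vs_distance(dist_2014, t_2014)
    _, t15 = time_vs_distance(dist_2015, t_2015)

    advantage_2015 = t14 - t15   # positive where 2015 was ahead of 2014
    print(f"{len(grid)} grid points; final 2015 advantage: {advantage_2015[-1]:.1f} s")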

Here's the result:

plot

An aside: this is the first plot I've posted here done with gnuplot. In the past my plots have generally been from xmgrace or occasionally xgraph. But I've had problems dealing with the Motif toolkit required by xmgrace.

Last year I was dropped on the bottom part, but latched onto a group with Alexander Komlik. This year I hung with the lead group to the turn-off (km 3.3 in the plot). The pace was fairly high, with a bit of a tailwind, but not so much tailwind that the effect of drafting was rendered insignificant.

After the drop-off there was a short climb to the gate, where I was dropped. The plot is perhaps consistent with this. Relative to last year, I was riding hard in 2015 to stay with the group until the climbing started again, but then I gave up and settled back into what I considered to be a sustainable pace. As a result I gained on the 2014 time but then quickly lost 7 seconds.

Both years the climb of Radio Road was a matter of going as hard as I could for as long as I could. I was around 5 seconds slower than in 2014, perhaps due to the prominent headwind this year, although the headwind was most noticeable near the top, where I held even with last year.

So while I was around 15 seconds faster overall this year, that time was gained entirely on Guadalupe Canyon. From the turn-off to the park entrance I actually lost a net of 2 seconds.

I noted that the profiles agree to only around 20 meters of distance, with the difference varying during the race. Lines taken, and therefore distance traveled, tend to vary a bit, especially near curves. At my average speed it takes around 3 seconds to go 20 meters. So near where the speed changes a lot, for example where the climbing starts again after the descent, there might be an error of a few seconds due to registration error. For example, if I was riding 10 meters per second on a flat part, then slowed to 5 meters per second on a hill, in one case I might have 20-meter segments taking 2 seconds, 4 seconds, and 4 seconds, while in the other case the segments at the same distances would take 2 seconds, 2 seconds, then 4 seconds, yielding a shift of 2 seconds, since the former data hit the climb 20 meters sooner. That's not due to a difference in fitness, just due to differences in distance to that point in the ride. But the time differences seen in the plot appear significant (substantially larger than 2 seconds).
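To make that concrete, here's a toy calculation in Python using the same assumed speeds (10 m/s on the flat, 5 m/s on the climb) and a 20-meter registration offset in where the climb begins:

    # Toy illustration of registration error: two rides with identical speeds,
    # but the recorded distance at which the climb begins differs by 20 m.
    import numpy as np

    def elapsed_time(distance_m, climb_start_m, v_flat=10.0, v_climb=5.0):
        """Time (s) to reach distance_m, riding v_flat before the climb, v_climb after."""
        flat = min(distance_m, climb_start_m)
        climb = max(0.0, distance_m - climb_start_m)
        return flat / v_flat + climb / v_climb

    checkpoints = np.arange(0.0, 101.0, 20.0)                         # 20 m grid
    t_a = [elapsed_time(d, climb_start_m=40.0) for d in checkpoints]
    t_b = [elapsed_time(d, climb_start_m=60.0) for d in checkpoints]  # climb registers 20 m later

    for d, a, b in zip(checkpoints, t_a, t_b):
        print(f"{d:5.0f} m: {a - b:+.1f} s apparent difference (same fitness)")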

The conclusion? Not much. I should have been fitter. But a race date of 1 Jan makes that difficult. I'd like to believe at my present fitness (29 Jan) I'd have done better.

Wednesday, January 21, 2015

San Bruno Hill Climb 2015: report

After the 2014 Low-Key Hillclimbs came to a successful conclusion, thoughts naturally turned to the San Bruno Hillclimb, which comes five weeks later. I'd ridden fairly well at the Low-Key series. But after running my first-ever 50 km trail race at the Woodside Ramble in April, it had been a decidedly down summer for me fitness-wise: no running races, no bike races, and no endurance rides of note, with the single exception of the Memorial Day Ride (4 days from San Jose to Santa Barbara). I was really in a rut, sort of treading water without making any progress.

But spending all of September and the first 12 days of October based in Basel, Switzerland gave a nice boost to my training load. The first two weeks weren't much, and indeed the entire second week was spent working without much training, but weeks three through six were a solid mix of long bike rides and decent runs. This gave me a bit of a cram course in base which had me feeling better on the bike than I had all year.

Coming back to San Francisco, though, the European experience had remarkably little effect on my Old La Honda times during the few Wednesday Noon Rides I was able to attend. That was a bit of a mystery to me, since I felt that surely all the volume and a bit of welcomed weight loss that came with it should have helped. But I didn't see it in the numbers.

Still, I was able to carry a larger training load out of Europe than I did going into it, and so that had to be a good thing.

My first Low-Key Hillclimb should have been week 3, Welch Creek, but I missed that due to recovery from oral surgery. Week 4 was a short-hill ride which was good training but not really to my strength, which is longer, steady-state climbing. Week 5 was Felter Road, which went fairly well, but with its highly variable grade it tended to favor the power climbers. Week 6 was Umunhum, and that went very well. Week 7 was coordinated by Cara and me, so I didn't ride. Week 8 was dirt, never my strength. Then week 9, Mount Hamilton, went fairly well for me, but not as well as I've done at my best.

All in all, though, the competitive climbing should have put me in a good position for New Years. But six weeks is a long gap to span. I was at IEDM in San Francisco for 3 days, then traveled to the East Coast for 6. I was able to get some decent running in on the East Coast trip, however, and the rest of the time I rode to work a few times, as well as getting in my final OLH Noon Ride for the foreseeable future (my employment in Mountain View ending at the end of the calendar year). The Noon Ride was my last chance to make a statement on OLH before San Bruno, but it didn't happen. My time wasn't even my best of the year.

So I went into San Bruno feeling my preparation had been better than for the 2014 race, at least, if not to the rather excellent level I'd reached for the 2012 race when I'd done very well in the 2011 Low-Key series. I really didn't know what to expect. Would I pull a great climb out of the hat? I always like to think this will happen. Of course it rarely if ever does: fitness doesn't just crystallize out of the void. You get solid hints of it beforehand. I'd had no such hints.


image

I lined up with the 45+ group, formally two fields, one for categories 1-2-3, the other for 4-5. I was in the former, where the favorite was Kevin Metcalfe. I saw Kevin's bike as I was approaching the Porta-Potties shortly before the start. The Porta-Potties are an essential element in my weight-weenie protocol for hillclimbs, the last and cheapest option to save a bit of weight. Urine is heavy, at close to 1 gram per cc. Far better to not carry any extra up the hill.

But to my horror I saw Kevin's bike had not only a GoPro camera on the front, but a red blinky light in the rear, the light associated with a rear-facing camera hidden in the light body. He was documenting the race, he said. In Kevin's case, the rear camera would be doing a lot more of that than the front.

Soon after, I lined up at the staging area. I'd done a decent warm-up, climbing to the top then descending back to the start area, then riding a little loop around there. My warm-up was notably lacking in intensity, however, and in retrospect I should have made a few hard efforts before the climb. San Bruno isn't known for its relaxed "ease into it" starts.

image
Warm-up

I made the first turn onto Guadalupe Canyon Parkway fine, but after that things got painful. There were several periods of leg-searing acceleration where I was sure I was about to be dropped, but then every time, just as I was contemplating my impending failure, the pace let up, allowing me to move up a few precious places in the line. Eventually we crested the second, steepest portion of the road, leaving only the gradual run-in to the turn-off. That I knew I could handle. I just had to survive the pair of turns onto Radio Road.

Right turn, blast down past the entry kiosk, then a right turn (I had decided ahead of time to take this wide, due to rougher pavement on the inside line). I survived both of these. But next up was the gate. In my pre-ride, the right half of the road had been blocked by a gate, and we'd been warned at the start line to get to the left. But the group was veering right. I couldn't really see, but decided to stay left. But riders from earlier groups were descending on that side, so I didn't want to be too far to the left. Then I saw the gate was open after all...

But while all this was occurring we'd gotten to the rough pavement section, and the leaders had clearly just hammered over this. With my time trial tires pumped to 150 psi (a bit less by this point) I was hardly optimized for this "Paris-Roubaix" portion. The little gap grew bigger.

If I'd been strong this wouldn't have been much of a problem; I could have shut the gap down using climbing speed. But I wasn't -- my legs were approaching serious protest mode from the accelerations on Guadalupe. I was forced to settle into damage control.

image

The leaders were gone, but I still hoped to catch some stragglers, perhaps slowed by the headwinds I'd seen earlier approaching the top. And I did catch one rider in my group. I latched onto him and a rider, perhaps a junior, from an earlier group. Approaching the finish the junior, with perhaps too much left in his tank, took off, but I was able to pass and drop the other guy from my group. Trying my best to ignore the pain, I came across the finish in 7th, the same placing as last year. Out of 12 finishers in the 45+ 1-2-3 group this put me below average, but I'd take it. It's a tough field.

Compared to last year I was 15 seconds faster, but over a minute slower than 2012. Ah, well.

image
Finish area

With the Low-Keys over and now the San Bruno Hillclimb done, it's the end of the climbing season, at least until the unsanctioned Mount Diablo climb in May. But I still want to test myself, so I want to find a way to get down to Old La Honda Road and make a solid effort there. I guarantee I won't get a PR. But if I could at least get close to 17 minutes, for example under 17:30, that would be a very good time for me right now.

image

Here's an analysis of the power I potentially generated for the climb:

image

I modeled the baseline power assuming a coefficient of rolling resistance of 0.6% (the pavement is rough, particularly on Radio Road) and a wind drag coefficient (CdA) of 0.36 m². This is a bit higher than that of a typical pro cyclist, but my position isn't quite pro, and my clothing doesn't fit as well. I also plot a result for a 25% reduction in wind resistance, consistent with drafting. I drafted for most, but not all, of the portion to the turn-off (the turn-off is evident as a substantial drop in power where I descended and turned). For most of the rest I was solo.
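For reference, here's a minimal sketch of this sort of power model in Python. The Crr of 0.6% and CdA of 0.36 m² are from above; total mass, air density, and drivetrain efficiency are my assumed placeholder values, not numbers from the analysis.

    # Hedged sketch of a baseline power model: rolling resistance + gravity + wind drag.
    # Crr = 0.6% and CdA = 0.36 m^2 come from the text; mass, air density, and
    # drivetrain efficiency are assumptions for illustration only.
    import math

    G = 9.81      # m/s^2
    RHO = 1.2     # kg/m^3, assumed air density near sea level
    MASS = 64.0   # kg, assumed rider + bike total
    CRR = 0.006   # coefficient of rolling resistance (0.6%)
    CDA = 0.36    # m^2, drag coefficient times frontal area
    ETA = 0.97    # assumed drivetrain efficiency

    def power(speed_ms, grade, wind_ms=0.0, draft_factor=1.0):
        """Estimate power (W) at a given ground speed, road grade, and headwind.
        draft_factor=0.75 approximates the 25% wind-resistance reduction from drafting."""
        air_speed = speed_ms + wind_ms
        angle = math.atan(grade)
        f_roll = CRR * MASS * G * math.cos(angle)
        f_grav = MASS * G * math.sin(angle)
        f_wind = 0.5 * RHO * CDA * draft_factor * air_speed * abs(air_speed)
        return (f_roll + f_grav + f_wind) * speed_ms / ETA

    # example: 5.5 m/s up an 8% grade, solo versus drafting
    print(f"solo:    {power(5.5, 0.08):.0f} W")
    print(f"drafted: {power(5.5, 0.08, draft_factor=0.75):.0f} W")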

The conclusion is my power was much higher on the bottom portion than on the top. But this isn't a running race: you've simply got to stay with a group on that bottom part if you can, then hope you can hang on to the summit.


Kevin Metcalfe, 45+ winner, video with front and back cameras


different view: 55+ race. Note the strategy of staying over 10 mph. Not sure this is optimal given the course profile.


Sunday, January 18, 2015

Dolphin South End Runner's Club 10 miler

Today was my first running road race in quite a while. First since CIM? Maybe. Time really flies.

Anyway, I did the Dolphin South End Runners Club 10 miler at Sierra Point, running out and back along the Bay Trail, past Genentech. The run was in morning fog, reducing visibility to only a few hundred meters. The trail would appear ahead without long-range context. We started at the marina, but the boats were only visible on the water well within the final kilometer.

The race was on a certified 10-mile course. That means if you ran the trail hitting all the tangents, covering the minimum distance consistent with remaining on the trail (which was mostly paved, the sole exception being a very short dirt section), you went 10 miles to measurement precision. This was a bit of a fiction, however, since with the out-and-back course there were oncoming runners near the turn-around point, so basic courtesy required staying to the right side of the path. This added a small bit of distance: 2π times the half-width of the trail times the number of full circles of turning. The path turned frequently as it followed the shoreline, so if it turned through, say, 10 full circles, which seems a reasonable estimate, and I had to run 2 meters from the inside line for half of that turning, that would be an extra 62 meters run. I recorded 16.2 km on my iPhone, which is around 100 meters over 10 miles.
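Spelled out as a quick Python check (the 10 full circles, 2-meter offset, and half-the-time fraction are the same rough guesses as above):

    # Rough check of the extra distance from not running the tangents: each full
    # circle of turning at a lateral offset r adds 2*pi*r of path length.
    import math

    offset_m = 2.0         # running about 2 m outside the tangent line
    full_circles = 10.0    # rough guess at the net turning along the shoreline
    fraction_offset = 0.5  # offset for roughly half of that turning

    extra_m = 2 * math.pi * offset_m * full_circles * fraction_offset
    print(f"extra distance: {extra_m:.0f} m")   # about 63 m, consistent with the estimate above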

The run went okay for me. I let the first group go, then tried to latch onto a second group, since my rough target was to finish around 10th and this put me close to that position. But that projected placing was optimistic, and I had to let that group go as well when a glance at the phone display showed that I was on the fast side of 4 minutes per km, which is way too fast. My goal was 70 minutes which is 4:21 per km.

After the fast start I had to slow down a bit, to get into something closer to a pace which would get me through the second half intact. This was payback time for the quick start. But that's okay -- as long as the payback isn't bigger than the original loan. I've got the cyclist's view of the importance of staying in a good group. The drafting advantage in running is small, especially in still air like we had here (the water was like glass), but still non-negligible. Since at running speeds time saved scales roughly one-to-one with power saved, with my 70-minute target that's around 4.2 seconds for every 0.1% saved in running power over the course of the full race. And since running is done in a fully upright position, although speeds are relatively low, wind resistance is still not to be dismissed.

But the draft benefit didn't last long before I had to dial back the pace. I hit the first mile at 6:35 on my timer, which is 4:07 per km. That's pretty much the point at which I lost the group.

The water stop was at mile 3.4, meaning we hit it again at mile 6.6. Both times I got Gatorade, and both times I did my typical 6-step stop: walk for 2 steps before the water, grab the cup, drink for 4 steps, then drop the cup and run again. This seems to serve me fairly well, and is much less hassle than trying to grab and drink at full running stride. The Gatorade was a bit strong for my taste, and there was less in the cup than I'd prefer, but I think the drinks were beneficial even in the cool (mid-50s °F) conditions.

At the turn-around the lead man was followed very closely by Molly, the lead woman. She'd end up falling back a bit off his pace, or else he pulled away from her. In either case she ended up second overall. I was close to 20th at the turn-around, which was further back than I'd expected from my observation of the first mile.

There was a junior runner not far ahead of me, and ahead of him a group of two. I focused first on catching the junior, who visibly slowed not far after the turn-around. Then there were the two ahead of him. I caught one, but the other sped up when I approached him from behind. He veered to run around a puddle in the trail; I went straight through it, passing him, and I thought that would be it. But perhaps it was a mistake to look back at him and smile at what had happened: he soon repassed, decisively, and put 20 seconds on me. I thought that might be it, but I didn't give up. Then I heard someone else approaching from behind, and now I was worried both about chasing the guy ahead and about not being caught from behind.

But then my rabbit slowed. The distance closed quickly. I passed, trying to keep the pace up: it was important to not let my success in passing him reduce my concentration and get caught from behind.

Not too long after, however, I heard footsteps rapidly approaching from behind. I assumed it was the guy who'd been chasing me, but when the runner passed, with considerable speed advantage, it was that guy I'd passed.... twice now. He just didn't give up. But I'd caught him twice, surely I could do it again.

But no luck. Although he couldn't keep up the insane speed he demonstrated repassing me, I wasn't able to close the gap. I crossed the line in 17:27.8, 18th overall out of 79 finishers, 4th in my age group.

Pace analysis is here. I clearly went out too hard. I also finished not far from my starting pace. There's a prominent dip in the pace, outbound and inbound, where I climbed the only significant hill on the course, once from each side. The outbound leg was 4 seconds faster than the inbound leg. Interestingly, Strava has me running at a 4:18 per km average, which is 69:13 for 10 miles, although it notes my 2nd-best "10 mile effort" is 69:52, which is 4:20 per km. Note that Strava won't let me view my best 10-mile effort; it allows that for some distances but not for 10 miles. I'll have a suggestion to address this.

Monday, January 12, 2015

Cyclocross Nationals Postponement in Austin

The cyclocross national championships, scheduled for yesterday in Austin, TX, were initially canceled, then postponed to Monday on a "compressed schedule", nominally due to concern over root exposure on heritage trees in Zilker Park. Here's a comment I posted to a Statesman article on the matter:

The key questions are: 1. Is the threat of sustained damage actual or just perceived? Cyclocross events are held in the mud world-wide. Sure, some work needs to be done afterwards, but that was anticipated when the permit was issued, and it's why fees were charged. People ASSUME exposing tree roots is a problem. Is it really? 2. Was the rain that unusual? Park officials claimed 2 inches, but the NWS showed less than an inch. By Austin standards, that's hardly rare.

Some would disagree that a permit should have been issued, but the permit WAS issued, and athletes, support staff, and fans made substantial sacrifices, financially and time-wise, to come to Austin. Sure, extreme events happen which get in the way of any outdoor event, but < 1 inch of rain is hardly rare or extreme. Unless there is a real, and not simply perceived, threat of sustained damage, you cannot simply tell them their activity is frivolous for political expediency, for the appearance that you value the trees in the park. Given the long international history of cyclocross racing, sometimes in far more extreme conditions, it's reasonable to fear that the threat in this instance was far overstated.

For many of these racers, in particular the juniors, national championships is their chance to prove themselves for later opportunities at the top level. For others it represents the culmination of months of sacrifice in training. Failing to give that proper respect is a huge disservice.

So how much rain really fell?

Here's weather station data for Zilker Park in Austin:

So 0.28 inches on Saturday and 0.10 inches on Sunday, a total of 0.38 inches. Normal rainfall in Austin for January is 2.0 inches. So this rainfall was 19% of normal for January. Is this the sort of extreme, unforeseeable precipitation, an "act of God" like earthquakes or hurricanes? Hardly. It was just a solid winter rain.

So then the question is whether the potential damage from the race was lasting, in particular to the heritage trees. Root exposure is a normal state for large trees, so it's not in itself a problem. Now I'm not going to comment on this particular case, as I'm not a horticulturist in any way. However, I'll indulge in some healthy skepticism over how much of this action was politically motivated and how much was motivated by an actual threat.

Sunday, January 4, 2015

Strava: 2015 KOMs and too many segments

Strava just released annual KOMs in addition to all-time KOMs. I think this is great. KOMs have gotten harder and harder to get, and this makes it more realistic for people to get to share in the glory. The major goal of Strava is to provide an incentive for competition, and as all-time KOMs get increasingly optimized, the chance of placing top 3 on a segment (and getting a medallion) ceases to be an incentive.

One proposal I made was to award medallions for placing in the top 10%, top 5%, and top 1% of those ranked on a segment, in addition to the present top 3 (with preference given to whichever award is tougher). Ranking would use the VeloViewer metric: placing 1st out of 10, counting yourself, is 10%. This recognizes that the more athletes with efforts on a particular segment, the more a top ranking on that segment is worth.
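Here's a minimal sketch of how those tiers might be assigned. The function names are mine, nothing here is Strava's actual API, and the "toughest tier wins" ordering is just one reading of the preference rule above.

    # Sketch of the proposed percentile medallions, using the VeloViewer-style metric:
    # your rank divided by the number of ranked athletes, counting yourself.
    def percentile_rank(rank, n_ranked):
        """Placing 1st out of 10 gives 10.0 (%)."""
        return 100.0 * rank / n_ranked

    def medallion(rank, n_ranked):
        """Award a single tier, treating top 3 as the toughest, then 1%, 5%, 10%."""
        p = percentile_rank(rank, n_ranked)
        if rank <= 3:
            return "top 3"
        if p <= 1.0:
            return "top 1%"
        if p <= 5.0:
            return "top 5%"
        if p <= 10.0:
            return "top 10%"
        return None

    print(medallion(1, 10))     # top 3
    print(medallion(40, 1000))  # top 5% (4.0%)
    print(medallion(9, 50))     # None (18%)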

But people have complained about the 2015 KOMs. Part of the problem, of course, is that it's only the first week of January and these rankings are very soft. Come March the complaints will be fewer. But the real problem is simply that there are way too many segments. If the number of segments were much smaller, there'd be less clutter.

So how can the number of segments be reduced?

I don't propose deleting any existing segments, except perhaps those which were auto-generated (auto-generation was turned off last year, but the damage, in the form of a clutter of poorly designed segments, remains). Instead, I prefer a verbosity model.

Suppose I'm writing software and I want to provide the user with feedback, notifying them of various events of interest and of various parameter settings. There may be many events and many parameters. One approach is to just dump them all someplace and let the user wade through them. But this creates excess clutter, and the more important messages may easily get overlooked. A superior approach, if it can be done, is to assign an importance to each message. For example, I might assign a number from 1 to 5, with 1 reserved for only the most important messages (like notification of errors which must be corrected for the program to run successfully), 2 for very useful but less critical messages, and so on down to 5 for messages of only occasional interest. Then the user can pick what level of "verbosity" is desired: from 1 (show only the most critical messages) up to 5 (show me everything).
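A minimal sketch of that pattern in Python (the message levels and contents here are hypothetical):

    # Minimal sketch of a verbosity filter: each message carries an importance level
    # from 1 (critical) to 5 (rarely interesting); the user picks a threshold.
    from dataclasses import dataclass

    @dataclass
    class Message:
        level: int   # 1 = most important ... 5 = least important
        text: str

    def show(messages, verbosity):
        """Print only messages whose importance is within the requested verbosity."""
        for m in messages:
            if m.level <= verbosity:
                print(f"[{m.level}] {m.text}")

    log = [
        Message(1, "error: input file missing"),
        Message(2, "warning: parameter X left at its default"),
        Message(5, "debug: 1243 records parsed"),
    ]

    show(log, verbosity=1)   # only the error
    show(log, verbosity=5)   # everything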

This approach could be applied to the segment list. Indeed, it already is, but with only 2 levels: more popular segments are shown by default, and less popular segments are hidden by default, where popularity is implicitly determined by how often the segment page is requested. This is okay, but it needs to be taken further.

First, there should be more than 2 levels. The most selective setting should show only the most popular segments. For example, I did the following ride today:

If there's one segment which should be shown, it's probably this one, Kings Mountain via Huddart, which is assigned an exceptionally low index of 1201 marking it as one of the earliest Strava segments to be defined:

Why do I pick this one? It's essentially the longest climb. I say "essentially" because there's some vertical gain before the start of the segment and some after, but this one begins at a gate and ends at a T intersection and so it has well-established traditional end-points. If you asked me to vote for a segment, this is the one for which I'd vote.

There are other factors I'd consider. A segment needs to be defined with an appropriate start point (not too close to the actual start, to give a margin for GPS error) and end point (same deal). It should be based on good reference GPS data. And, not to be overlooked, it should be appropriately named (I should be able to tell what it is from the name).

Whether I click on it is a good test. But I'd additionally like the option to vote explicitly. For example, I'd like to be able to vote on a scale from -2 to +2: -2 meaning I think the segment is especially poor, -1 meaning it's flawed in some way, 0 the default, +1 meaning I like it and would pick it over competing options for the same portion of road, and +2 meaning I especially like it and would always want to see it. Votes other than the default 0 would add to or subtract from the score of a segment, in addition to the number of clicks it gets.

But the voting score and the clicking score are raw scores: using these alone would give preference to segments on more popular roads and doom those on less popular roads to obscurity. This probably isn't what's wanted. If I ride a rarely ridden road and there's a good segment there, I probably want to see it. So the click score should be divided by the number of efforts on the segment, and the voting score divided by the number of athletes who've generated efforts on the segment. These are normalized click and vote scores. Strava very likely already does something like this with click scores. I can then combine the normalized scores into an aggregate score.

Once I have an aggregate score for each segment, I need to figure out what my thresholds are for displaying them. A simple approach would be for the user to set a sliding scale which establishes how many segments are shown. The scale maps to a score threshold, and only those segments with scores above the threshold are shown. The threshold range on the slider could be based in part on the range of scores of segments matched by the activity.
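Here's a minimal sketch of that scoring and threshold filtering in Python; the field names, the equal weighting of clicks and votes, and the example numbers are all my own illustrative assumptions, not anything Strava exposes.

    # Sketch of normalized click and vote scores combined into an aggregate,
    # then filtered by a user-controlled threshold (the "slider"). Field names
    # and the 50/50 weighting are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Segment:
        name: str
        clicks: int    # segment page requests
        efforts: int   # total efforts recorded on the segment
        vote_sum: int  # sum of votes on the -2..+2 scale
        athletes: int  # distinct athletes with efforts

    def aggregate_score(s, click_weight=0.5, vote_weight=0.5):
        """Normalize clicks by efforts and votes by athletes, then combine."""
        click_score = s.clicks / max(s.efforts, 1)
        vote_score = s.vote_sum / max(s.athletes, 1)
        return click_weight * click_score + vote_weight * vote_score

    def visible_segments(segments, threshold):
        """Only segments whose aggregate score clears the slider threshold are shown."""
        return [s.name for s in segments if aggregate_score(s) >= threshold]

    segments = [
        Segment("Kings Mountain via Huddart", clicks=5000, efforts=20000,
                vote_sum=900, athletes=3000),
        Segment("random driveway sprint", clicks=40, efforts=2000,
                vote_sum=-50, athletes=400),
    ]

    print(visible_segments(segments, threshold=0.2))   # most selective setting
    print(visible_segments(segments, threshold=-1.0))  # show everything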

I'm tempted to go further, to propose additional criteria which should be considered in scoring segments, like the presence of overlapping segments, the distance or altitude gained, or a climbing difficulty score like the Fiets metric. But I'll leave that to the voting.

I think a few simple measures will help a lot.