Strava: 2015 KOMs and too many segments
Strava just released annual KOMs in addition to all-time KOMs. I think this is great. KOMs have gotten harder and harder, and this gives more people a realistic chance to share in the glory. The major goal of Strava is to provide an incentive for competition, and as all-time KOMs get increasingly optimized, the chance of placing top 3 on a segment (and getting a medallion) ceases to be one.
One proposal I made was to award medallions for placing in the top 10%, top 5%, and top 1% of those ranked on a segment, in addition to the present top 3 (with preference given to whichever is tougher). Ranking would use the VeloViewer metric: placing 1st out of 10, counting yourself, is 10%. This recognizes that the more activities with efforts on a particular segment, the more a top ranking on that segment is worth.
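This proposal is easy to sketch in code. The function below is a hypothetical illustration (the tier names and the rule for returning every tier earned are my assumptions, not Strava's), using the VeloViewer convention that you count yourself, so 1st of 10 ranked riders is 10%:

```python
def medallion(rank, ranked_count):
    """Return the medallion tiers earned for a given placing.

    Hypothetical sketch of the proposal: percentile is computed the
    VeloViewer way, counting yourself, so 1st of 10 ranked riders
    is 10%. A top-3 placing and a percentile tier can both apply.
    """
    percentile = 100.0 * rank / ranked_count  # 1st of 10 -> 10%
    tiers = []
    if rank <= 3:
        tiers.append(f"top-{rank}")
    if percentile <= 1:
        tiers.append("top 1%")
    elif percentile <= 5:
        tiers.append("top 5%")
    elif percentile <= 10:
        tiers.append("top 10%")
    return tiers
```

On a lightly ridden segment, 1st of 10 earns "top-1" plus "top 10%"; on a heavily ridden one, 5th of 1000 earns "top 1%" even without a top-3 placing, which is the point of the proposal.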
But people have complained about the 2015 KOMs. Part of the problem, of course, is that it's only the first week of January and these rankings are very soft. Come March the complaints will fade. But the real problem is simply that there are way too many segments. If the number of segments were much smaller, there'd be less clutter.
So how can the number of segments be reduced?
I don't propose deleting any existing segments except perhaps those which have been auto-generated (auto-generation was turned off last year, but the damage in a clutter of poorly designed segments remains). Instead, I prefer a verbosity model.
Suppose I'm writing software and I want to provide the user with feedback, notifying him of various events of interest and of various parameter settings. There may be many events and many parameters. One approach is to just dump them all someplace and let the user wade through them. But this creates excess clutter, and the more important messages are easily overlooked. A superior approach, if it can be done, is to assign an importance to different messages. For example, I might assign a number from 1 to 5, with 1 reserved for only the most important messages (like notification of errors which must be corrected for the program to run successfully), 2 for messages which are very useful but less critical, on up to 5 for messages of only rare interest. The user can then pick the level of "verbosity" desired: from 1 (show only the most critical messages) up to 5 (show me everything).
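The verbosity scheme above can be sketched in a few lines. This is a minimal illustration (the message texts are invented for the example): each message carries an importance from 1 (critical) to 5 (rarely interesting), and the user's chosen verbosity level acts as a filter.

```python
def filter_messages(messages, verbosity):
    """Keep (level, text) pairs whose level is at or below the
    chosen verbosity; level 1 is most important, 5 least."""
    return [text for level, text in messages if level <= verbosity]

# Example messages, invented for illustration.
messages = [
    (1, "error: input file not found"),
    (2, "warning: parameter X defaulted"),
    (5, "trace: entering main loop"),
]
```

With verbosity 1 only the error survives; with verbosity 5 everything is shown.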
This approach could be applied to the segment list. Indeed, it already is, but with only 2 levels: more popular segments are shown by default, and less popular segments are hidden by default, where popularity is implicitly determined by how often the segment page is requested. This is okay, but it needs to be taken further.
First, there should be more than 2 levels. The most selective setting should show only the most popular segments. For example, I did the following ride today:
If there's one segment which should be shown, it's probably this one, Kings Mountain via Huddart, which is assigned an exceptionally low segment ID of 1201, marking it as one of the earliest Strava segments to be defined:
Why do I pick this one? It's essentially the longest climb. I say "essentially" because there's some vertical gain before the start of the segment and some after, but this one begins at a gate and ends at a T intersection and so it has well-established traditional end-points. If you asked me to vote for a segment, this is the one for which I'd vote.
There are other factors I'd consider. A segment needs an appropriate start point (not too close to the actual start, to allow a margin for GPS error) and end point (same deal). It should consist of good reference GPS data. And, not to be overlooked, it should be appropriately named (I should be able to tell what it is from the name).
Whether I click on it is a good test. But I'd additionally like the option to explicitly vote. For example, I'd like to be able to vote on a scale from -2 to +2: -2 meaning I think the segment is especially poor, -1 meaning it's flawed in some way, 0 the default, +1 meaning I like it and would pick it over competing options for the same portion of road, and +2 meaning I especially like it and would always want to see it. Votes other than the default 0 would add to or subtract from the score of a segment, in addition to the number of clicks it gets.
But the voting score and the clicking scores are raw scores: using these alone would give preference to segments on more popular roads and doom those on less popular roads to obscurity. This probably isn't what's wanted. If I ride a rarely ridden road and there's a good segment there, I probably want to see it. So the click score should be divided by the number of efforts for the segment, and the voting score divided by the number of athletes who've generated efforts for the segment. These are normalized click and vote scores. Strava very likely already does this with click scores. I can then combine the normalized scores into an aggregate score.
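The normalization and aggregation described above might look like this. The equal weighting of the two normalized scores is my assumption for illustration; the post doesn't specify how they'd be combined.

```python
def aggregate_score(clicks, efforts, vote_total, athletes,
                    w_click=0.5, w_vote=0.5):
    """Combine normalized click and vote scores into one number.

    clicks are normalized by effort count so a good segment on a
    quiet road isn't doomed to obscurity; vote_total (sum of votes,
    each -2..+2) is normalized by the number of athletes with
    efforts. The 50/50 weighting is a hypothetical choice.
    """
    click_score = clicks / efforts if efforts else 0.0
    vote_score = vote_total / athletes if athletes else 0.0
    return w_click * click_score + w_vote * vote_score
```

A segment clicked on half the time it's matched, with an average vote of +0.5 per athlete, scores 0.5 regardless of whether the road sees ten efforts or ten thousand, which is exactly the intent of normalizing.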
Once I have an aggregate score for each segment, I need to figure out what my thresholds are for displaying them. A simple approach would be for the user to set a sliding scale which establishes how many segments are shown. The scale maps to a score threshold, and only those segments with scores above the threshold are shown. The threshold range on the slider could be based in part on the range of scores of segments matched by the activity.
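The slider-to-threshold mapping is simple enough to sketch. This is a hypothetical implementation: the slider runs from 0 (show everything matched by the activity) to 1 (show only the top-scoring segment), with the threshold range taken from the scores of the matched segments as the post suggests.

```python
def score_threshold(scores, slider):
    """Map a slider position in [0, 1] to a score threshold,
    interpolating over the range of matched-segment scores."""
    lo, hi = min(scores), max(scores)
    return lo + slider * (hi - lo)

def visible_segments(segments, slider):
    """segments: list of (name, score) pairs matched by the activity.
    Keep those whose score is at or above the threshold."""
    t = score_threshold([s for _, s in segments], slider)
    return [name for name, s in segments if s >= t]
```

So with matched segments scoring 0.9, 0.5, and 0.1, a slider at 0 shows all three and a slider at 1 shows only the 0.9 segment.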
I'm tempted to go further and propose additional criteria which should be considered in scoring segments, like the presence of overlapping segments, or the distance or altitude gained, or a climbing difficulty score like the Fiets metric. But I'll leave that to the voting.
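For reference, the Fiets metric mentioned above, as it is commonly quoted from the Dutch magazine Fiets, scores a climb from its height gain, length, and summit altitude:

```python
def fiets_index(gain_m, distance_m, summit_m):
    """Fiets climb-difficulty index as commonly quoted:
    (H^2 / D) / 10, plus an altitude bonus of (T - 1000) / 1000
    for summits above 1000 m, where H = height gained (m),
    D = distance (m), T = summit altitude (m)."""
    index = (gain_m ** 2 / distance_m) / 10.0
    if summit_m > 1000:
        index += (summit_m - 1000) / 1000.0
    return index
```

A 1000 m gain over 10 km at low altitude scores 10, in the range of the hardest European road climbs; most segments in the ride above would score far lower, which is why a metric like this could help rank climbing segments.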
I think a few simple measures will help a lot.
Comments
I'm Andrew. I'm a member of the Product Team at Strava. Segment noise is definitely a problem in dense areas like San Francisco and London, and it's only getting worse as Strava heats up in other regions. We're well aware, and we'll be working on a few solutions to the noisiness over the next few weeks/months. Some of your suggestions are very close to what we've already got on our to-do list; others are new and quite insightful! Stay tuned!
Thanks,
Andrew
I'm actually working on this project quite literally as we speak, and I'd be really interested in talking with you directly (or perhaps writing a guest post?) when we implement some of the things that Andrew has talked about.
At a high level, we're approaching the noise problem as a combination of a clustering problem and a scoring problem - the idea is that we cluster similar segments together by some notion of a curve similarity metric (right now we're playing with a few: Fréchet, Hausdorff, and Dynamic Time Warping similarity weighted by a Gaussian). We then produce the largest possible cluster given these curve similarity metrics, which is fed into a scoring function as a parameter.
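One of the curve similarity metrics named above, the discrete Fréchet distance, can be sketched with the standard dynamic-programming recurrence. This is an illustrative implementation, not Strava's code, and it works on plain (x, y) point lists rather than GPS tracks:

```python
import math

def discrete_frechet(p, q):
    """Discrete Fréchet distance between polylines p and q, each a
    list of (x, y) points: the smallest maximum leash length needed
    for two walkers traversing the curves monotonically."""
    def d(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    n, m = len(p), len(q)
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                prev = 0.0
            elif i == 0:
                prev = ca[0][j - 1]
            elif j == 0:
                prev = ca[i - 1][0]
            else:
                prev = min(ca[i - 1][j], ca[i - 1][j - 1], ca[i][j - 1])
            ca[i][j] = max(prev, d(p[i], q[j]))
    return ca[n - 1][m - 1]
```

Two parallel tracks offset by a constant distance come out at exactly that offset, which is the behavior that makes the metric attractive for deciding whether two segments trace the same stretch of road.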
The idea, ultimately, is that any scoring function that operates on a single segment really ought to take environmental context into consideration, so we've been playing with functions of the form score(segment | cluster contents) rather than pure functions of a segment's details (altitude, length, age, intersection issues, etc.) by itself. One issue, of course, is that we don't have a great training set against which we can run regressions to determine the weights on these various features - there's no "canonical perfect segment" database that anyone possesses, so it's mostly a matter of unsupervised learning.
I'll probably write a more detailed post soon about the process, but suffice it to say, we feel very similarly to you here, and we're definitely working on the problem.
Matt Redmond
Not mentioned (but absolutely a catalyst for noisiness in my area) is how challenging the 'flag as hazardous' situation has become. I have a 28 mile commute along a bike path. New segments are constantly created then flagged, rinse and repeat.
Hoping some of the opportunities to refine the raw (segment) data will also address why there are so many duplicative segments. The tug of war between those who want segments and those who prefer to flag segments anonymously is tough on those of us who just want the data.
But at least they now allow viewing of rankings on flagged segments if you agree to a waiver. This is a nice dose of sanity on their part. It's good to see. After all, nothing is truly safe, and any attempt at speed can be abused.