Tuesday, January 24, 2017

Do I need a power meter?

On the Slowtwitch forum recently, the familiar question of whether one needs a power meter was posted. This question has come up regularly on cycling and triathlon forums for the past decade or more, and plenty of power meter advocates (including me) have written posts and articles over the years laying out the case.

In that particular thread there were a number of responses, mostly with a heavy focus on training to a specific intensity, pacing, that sort of thing. All of which are fine but, in my opinion, not overly compelling reasons to use a power meter.

It's actually a really good question, and I don't think many people have adequately answered it. So I suspect it might be a theme worthy of blogging about from time to time.

I'm not going to delve into it deeply today, but thought I'd keep a copy of my forum response here on the blog for easy reference, and perhaps in future posts I'll explore some of the reasons given by myself and others and whether they stack up as sound and valid for using a power meter versus an alternative.

Here is the question posed in that thread:
I have been read the time crunched triathlete. Carmichael makes it sound like you can get pretty good result from a HR monitor. Sooo do I really need a power meter
My response is reproduced below:

You don't get good results from any device. You get good results from executing sound basic training principles of consistency, frequency, progressive overload with recovery as needed, specificity and individualisation of your training and development needs.

Most half-decent training plans and basic monitoring tools (a watch, RPE, HR, or even a power meter used in a really basic manner) will get people some way towards executing these principles. For example, it's very rare that someone I give a 2-3 month training plan to, and who executes it, does not improve. However, such plans sacrifice some level of load management optimisation, specificity and individualisation.

Power meters (good ones at least) and the data they produce provide you with objectivity in assessing the training you are actually doing versus what you think you are doing. Neither RPE nor HR can do that.

By that I mean far more than monitoring your work rate at any particular moment, right through to considering what you are doing in a more global sense: how what you are doing now (or previously, or this week/month etc.) fits in with and impacts your season, and even your entire athletic career.

Power data also helps you better understand your current and historical physiological capabilities and their relationship with, and response to, your training, your physical attributes (e.g. aerodynamics) and the specific demands of your races or goal events. It can also help in assessing some riding/racing skills and execution. All of this leads to individualising and optimising your training and development program to suit your specific needs.

As a communications and logging tool, the objective power data balances the subjective feelings about how you are going. Both matter and it's more useful when subjective and objective are assessed together.

And interestingly, and somewhat in opposition to what many seem to think, power meters can actually provide you with a lot of freedom in the way you go about your training since once you recognise what's actually important you realise there are many ways to skin the training cat. Applying good training principles does not automatically imply overly regimented training.

To use power meters wisely and to a reasonable proportion of their potential for performance improvement requires an investment on your part to learn how to understand and apply the data.

Or you could just use it as a fancy speedo, effort monitor and ride logger. If that's all you intend to do though, I'd save your money and just follow a half decent plan and keep things fun.


Monday, October 10, 2016

Kona power meter usage trends: 2009 to 2016

Here's the update for 2016, based on the Lava Magazine bike count data. Links to previous posts showing trend data up to 2013, 2014 and 2015 are here:


Here are the numbers for 2009 through to 2016 (click on images to see larger versions):

And below is the breakdown showing the proportion of bikes with and without power meters, and the split for each power meter brand as a proportion of all bikes. E.g. the Powertap slice of the pie represents 175 Powertap power meters, which is 7.9% of the 2,229 bikes in the Kona bike count.

2016 continued the long term trend of an increase in use of power meters by Kona IM athletes, and for the first time ever a majority of bikes (57.4%) were fitted with a power meter.

So the pie is getting bigger for all power meter manufacturers, at least as a share of Kona athletes. How indicative these numbers are of broader power meter trends is hard to say.

So how are they all doing as a share of that increasing Kona power meter pie slice?

Below are the year on year trends, ranked by total share of power meters:

Quarq and Garmin Vector maintained their lead as the most used power meters and, like most brands, each saw a small increase in their share of all bikes. However, their relative share of the bikes fitted with power meters took a hit, with Quarq dropping 3.4% to 23.7% and Garmin Vector down 3.0% to 17.8%. These were the biggest falls in relative share among all the major power meter brands. While this continues Quarq's decline from the previous year (while still maintaining top place), it's a reversal of fortunes for Garmin Vector, which showed strong year-on-year relative share growth last year.

The big mover up the rankings was Powertap, which like most brands improved its share of all bikes, but more importantly its share of bikes fitted with a power meter was up 6.4% to 13.7% (nearly doubling its 2015 share). This is no doubt due to the introduction of Powertap's new power meter models, in particular the P1 pedal-based meter, which complements their well-established hub-based and new C1 chainring-based power meters.

This reversed the trend of recent years for Powertap, whose numbers were probably a little under-represented, as the Powertap hub is the meter most likely to be used on a training wheel but not on a race day wheel for some athletes. Unfortunately the Lava Magazine data does not parse the Powertap data into model sub-categories, so we can't know the exact trends for each model. However, the pedal count shows 82 bikes with Powertap P1s, which means hubs and chainrings (if any) make up the remaining 93 Powertap units. In 2015, Powertap hubs numbered just 78 units.

Rotor and Pioneer also saw their share of all bikes and all power meters improve, although from a smaller base.

Stages' share of the Kona power meter pie has stabilised after strong growth from 2014 to 2015, with a slight drop in their relative share of power meters.

Power2Max's relative share of power meters used at Kona declined for the second consecutive year.

SRM continues its slow drop in relative share on all bikes and of those fitted with a power meter.

A few new power meter brands make a guest appearance but none have really exploded onto the Kona scene.

Overall observations

These numbers continue the broad trends of the previous few years:

i. Power meter usage as a proportion of all bikes used at Kona continues to rise at a rate of nearly 6 percentage points year on year. This has been a consistent trend since 2009. If the trend continues, we should expect that in 2017 approximately 63% of all bikes will be fitted with power meters.
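As a back-of-the-envelope check on that projection, here's a minimal sketch. The 5.6 points-per-year rate is my assumed reading of the 2009-2016 trend, not a figure from the bike count itself:

```python
# Rough linear extrapolation of power meter share at Kona.
share_2016 = 57.4         # % of bikes fitted with a power meter in 2016
rate_pp_per_year = 5.6    # assumed average rise, in percentage points/year
share_2017_est = share_2016 + rate_pp_per_year  # ≈ 63%
```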

ii. Most growth in usage comes from newer power meter models.
For 2016 the majority of growth came from Powertap with 45% of the growth, Rotor 21% and Stages 11%, with the rest making up the remaining quarter of the growth (SRM being the only model with negative growth).

iii. After an initial period of growth, models tend to stabilise their Kona athlete market share for a year or so before beginning a gradual decline in share.

iv. No power meter model dominates Kona athlete market share. Quarq maintains its place as the lead choice, being fitted to 23.7% of bikes with power meters.

Some caveats:
- Obviously this is a sample of athletes that qualified for and participated in Kona, and hence we can't simply project these trends as necessarily being representative of the overall market.

- The athletes that qualify obviously change from year to year.

OK, so that's the latest on power meter usage trends from Kona. See you in 2017!


Thursday, August 11, 2016

Looking under Froome's hood. Again.

I posted this item in December 2015 after some data on physiological testing of Chris Froome was made public in a mostly PR piece. Have a read there first if you haven't already done so.

Today I saw that the science paper has been published, and from the abstract I pulled out a few extra pieces of information: Froome's gross efficiency (23% in ambient conditions) and his power at a blood lactate level of 4 mmol/L (419 W). His reported weight for the test was 71 kg, which is likely above his racing weight.

So I thought I'd do up another chart, this time fixing the gross efficiency and VO2max values, and plotting the curve of aerobic power in W/kg terms versus fractional utilisation of VO2max:

The relationship between aerobic energy yield per litre of oxygen, gross efficiency, VO2max, fractional utilisation of VO2max and power output is outlined in this earlier blog post.
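That relationship can be sketched in a few lines of code. The ~21.1 kJ per litre of O2 energy yield is a standard approximation (it varies with substrate use), and the 84.6 ml/kg/min VO2max is my assumed input for illustration, not a figure quoted in this post:

```python
ENERGY_YIELD_KJ_PER_L_O2 = 21.1  # approximate; varies with RER

def fractional_utilisation(power_w, vo2max_l_min, gross_efficiency):
    """Fraction of VO2max required to sustain a given aerobic power output."""
    # inverting: power (W) = VO2 (L/min) * energy yield (kJ/L) * 1000 * GE / 60
    vo2_l_min = power_w * 60 / (gross_efficiency * ENERGY_YIELD_KJ_PER_L_O2 * 1000)
    return vo2_l_min / vo2max_l_min

# Froome's reported test numbers: 419 W at 23% gross efficiency, 71 kg,
# with an assumed VO2max of ~84.6 ml/kg/min
vo2max_l_min = 84.6 * 71 / 1000  # ≈ 6.0 L/min
fu = fractional_utilisation(419, vo2max_l_min, 0.23)  # ≈ 0.86
```

Under those assumptions the result lands right in the ~86-87% range discussed in point 4 below.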

So what can we make of this?

1. A TdF winning cyclist has the physiology you'd expect of a TdF winning cyclist. That should be hardly surprising.

2. Froome has both a high VO2max and high gross efficiency, which is a killer combo, though neither represents an out-of-this-world value. What that means is Froome's sustainable aerobic power output is then a function of his fractional utilisation of VO2max, and fractional utilisation at threshold is a highly trainable aspect of one's fitness, more so than gross efficiency or VO2max.

3. The sustainable power measured in this test was at a blood lactate level of 4 mmol/L, which is an arbitrary level for such testing. Any individual rider's blood lactate level at their actual "threshold" is quite variable, and often somewhat higher.

4. It would seem that Froome's fractional utilisation of VO2max at this power level was ~86-87%. That's a pretty reasonable value for longer duration efforts of at least an hour for highly trained cyclists and it can quite feasibly be higher than that at threshold power, and certainly higher over shorter durations, e.g. 15-20 minutes.

5. The testing was also conducted at high humidity (60%) and temperature (30°C), and somewhat interestingly Froome's gross efficiency was higher (23.6%) than when tested at ambient temperature (20°C) and humidity (40%). The paper reported his sustainable power as 429.6 W in the hot, humid conditions versus 419 W at ambient temperature and humidity. That power difference of 10.6 W / 71 kg = 0.15 W/kg, a very handy result for hot days.

6. Weight. I'd expect Froome's race weight would have been a few kilograms less than at the time of testing; e.g. 67 kg at the same power would add ~0.35 W/kg to threshold power.
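The W/kg arithmetic in points 5 and 6 is quick to check (the 67 kg race weight is the illustrative assumption from point 6, not a measured value):

```python
power_ambient_w = 419.0  # sustainable power, ambient conditions
power_hot_w = 429.6      # sustainable power, hot/humid conditions
mass_test_kg = 71.0      # reported weight at testing
mass_race_kg = 67.0      # assumed race weight

heat_gain = (power_hot_w - power_ambient_w) / mass_test_kg  # ≈ 0.15 W/kg
weight_gain = (power_ambient_w / mass_race_kg
               - power_ambient_w / mass_test_kg)            # ≈ 0.35 W/kg
```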

Doping? Once again, this sort of data tells us nothing about any rider's doping status.


Thursday, July 28, 2016

TdF Speed Trends 1947 - 2016 - take 2

Following on from yesterday's post, here's another take on TdF speed trends post WWII:

As usual, right click to see larger version.

It should be pretty self explanatory. Each year's speed and distance is shown, colour coded by decade, so it's easy to see the general trends. The Tour has progressively been getting shorter since it recommenced after WWII, and speeds have in general been rising.

So when someone points out that speeds are increasing and wants to assign a causation, there are of course a myriad of possible reasons; however, one of them is clearly the overall reduction in distance ridden. Even so, one needs to be careful when seeking to assign possible causal factors, e.g. doping, to this relationship.

The idea for this chart was stolen from a post Robert Chung presented on Stack Exchange examining the TdF speed trends. Robert's original post and charts can be found here:


It's a good read and goes into a bit more depth as well as examining the trend line and residuals and why it's not so smart to immediately jump to conclusions about causal relationships.

Year-to-year variation, and possibly era-to-era variation, is influenced by many things: the parcours is the most obvious example, with some tours being more mountainous than others; better/lighter/more aero equipment keeps coming along; the influence of doping; better training and preparation; more dedicated focus on the Tour; better pay attracting better athletes overall; general weather/environmental conditions (e.g. warm and dry vs cold and wet); changes in race strategy and tactics; and so on.

The data I used comes from this Wikipedia page, although for some years there are minor differences between the race distances listed there and those in the TdF's own website archive, but not enough to change the visual. I may update the chart to satisfy my own anality if I can nail down the discrepancies.

Here's another way to view the same data, which plots the same average speed trend line in yesterday's chart overlaid with the trend in race distance:

The inverse relationship, increasing overall speed with reducing race distance, is readily apparent.

And for the sake of completeness (of stealing Robert Chung's plots that is), here are the residuals of speed on distance by year:

This plots how far above or below the speed-v-distance trend line the actual race speed is for each year. Also shown is a 5-year moving average of the residuals, so a general trend above/below the line can be seen. In other words, if there were some causal factor (e.g. doping) in the 1990s and 2000s that resulted in above-trend speeds, then we'd also need to explain the above-trend speeds of the late-1950s and early-1960s.
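For the curious, the residuals come from an ordinary least squares fit of speed on distance. A minimal sketch, using just three example rows from the table in the post below (the actual chart uses every year):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y on x; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

distances_km = [4642, 3662, 3529]   # 1947, 2000, 2016
speeds_kmh = [31.32, 39.57, 39.62]
slope, intercept = fit_line(distances_km, speeds_kmh)
residuals = [y - (slope * x + intercept)
             for x, y in zip(distances_km, speeds_kmh)]
# a positive residual means faster than the distance trend predicts
```

As expected, the fitted slope is negative (shorter tours, faster speeds), and the residuals sum to roughly zero by construction.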

When I looked at this yesterday, it was to point out some logical fallacies presented in a Facebook post I saw, i.e. that the 2016 tour was faster than Armstrong's 2000 tour, and of course the (fallacious) logic that this implied doping is as bad as back then.

Well, I thought it useful to examine that non-sequitur and example of cherry picking data to suit a narrative.

For a start, yes, the 2016 tour was faster than the one in 2000. Just. By 0.05 km/h. But it was also the fourth slowest tour since 1998, and only the 16th fastest since WWII.

So as a case of cherry picking, it was a poor effort. Once you look at all the data, it is placed in better context.

Cherry picking is bad enough, but the non-sequitur was that the average speed tells you something about the doping status of the winner. It doesn't. In other words we really can't infer much either way about doping of riders in general, let alone an individual, from such data.

And while I'm at it, here's a chart plotting the trend in average stage length, which has been steadily dropping. It's similar to the trend in total race distance, but there are slight variations as the number of stages varies between 20 (on many occasions) and 25 (in 1987).

And another one, this time plotting Average Speed v Average Stage Distance:


Wednesday, July 27, 2016

TdF Speed Trends 1947-2016

Just putting down a placeholder for this chart for reference since I had cause to look at the data recently.

I'll come back to this chart later.

Tour de France general classification winners
         Year   Cyclist   Distance   Time   Average Speed (km/h)
1947   Jean Robic 4,642 km (2,884 mi) 148h 11' 25" 31.32
1948   Gino Bartali* 4,922 km (3,058 mi) 147h 10' 36"  33.44
1949   Fausto Coppi* 4,808 km (2,988 mi) 149h 40' 49"  32.12
1950   Ferdinand Kübler 4,773 km (2,966 mi) 145h 36' 56"  32.78
1951   Hugo Koblet 4,690 km (2,910 mi) 142h 20' 14"  32.95
1952   Fausto Coppi* 4,898 km (3,043 mi) 151h 57' 20"  32.23
1953   Louison Bobet 4,476 km (2,781 mi) 129h 23' 25"  34.59
1954   Louison Bobet 4,656 km (2,893 mi) 140h 06' 05"  33.23
1955   Louison Bobet 4,495 km (2,793 mi) 130h 29' 26"  34.45
1956   Roger Walkowiak 4,498 km (2,795 mi) 124h 01' 16"  36.27
1957   Jacques Anquetil 4,669 km (2,901 mi) 135h 44' 42"  34.40
1958   Charly Gaul 4,319 km (2,684 mi) 116h 59' 05"  36.92
1959   Federico Bahamontes* 4,358 km (2,708 mi) 123h 46' 45"  35.21
1960   Gastone Nencini 4,173 km (2,593 mi) 112h 08' 42"  37.21
1961   Jacques Anquetil 4,397 km (2,732 mi) 122h 01' 33"  36.03
1962   Jacques Anquetil 4,274 km (2,656 mi) 114h 31' 54"  37.32
1963   Jacques Anquetil 4,138 km (2,571 mi) 113h 30' 05"  36.46
1964   Jacques Anquetil 4,504 km (2,799 mi) 127h 09' 44"  35.42
1965   Felice Gimondi 4,188 km (2,602 mi) 116h 42' 06"  35.89
1966   Lucien Aimar 4,329 km (2,690 mi) 117h 34' 21"  36.82
1967   Roger Pingeon 4,779 km (2,970 mi) 136h 53' 50"  34.91
1968   Jan Janssen 4,492 km (2,791 mi) 133h 49' 42"  33.57
1969   Eddy Merckx 4,117 km (2,558 mi) 116h 16' 02"  35.41
1970   Eddy Merckx* 4,254 km (2,643 mi) 119h 31' 49"  35.59
1971   Eddy Merckx 3,608 km (2,242 mi) 96h 45' 14"  37.29
1972   Eddy Merckx 3,846 km (2,390 mi) 108h 17' 18"  35.52
1973   Luis Ocaña 4,090 km (2,540 mi) 122h 25' 34"  33.41
1974   Eddy Merckx 4,098 km (2,546 mi) 116h 16' 58"  35.24
1975   Bernard Thévenet 4,000 km (2,500 mi) 114h 35' 31"  34.91
1976   Lucien Van Impe 4,017 km (2,496 mi) 116h 22' 23"  34.52
1977   Bernard Thévenet 4,096 km (2,545 mi) 115h 38' 30"  35.42
1978   Bernard Hinault 3,908 km (2,428 mi) 108h 18' 00"  36.08
1979   Bernard Hinault 3,765 km (2,339 mi) 103h 06' 50"  36.51
1980   Joop Zoetemelk 3,842 km (2,387 mi) 109h 19' 14"  35.14
1981   Bernard Hinault 3,753 km (2,332 mi) 96h 19' 38"  38.96
1982   Bernard Hinault 3,507 km (2,179 mi) 92h 08' 46"  38.06
1983   Laurent Fignon# 3,809 km (2,367 mi) 105h 07' 52"  36.23
1984   Laurent Fignon 4,021 km (2,499 mi) 112h 03' 40"  35.88
1985   Bernard Hinault 4,109 km (2,553 mi) 113h 24' 23"  36.23
1986   Greg LeMond 4,094 km (2,544 mi) 110h 35' 19"  37.02
1987   Stephen Roche 4,231 km (2,629 mi) 115h 27' 42"  36.64
1988   Pedro Delgado 3,286 km (2,042 mi) 84h 27' 53"  38.90
1989   Greg LeMond 3,285 km (2,041 mi) 87h 38' 35"  37.48
1990   Greg LeMond 3,504 km (2,177 mi) 90h 43' 20"  38.62
1991   Miguel Indurain 3,914 km (2,432 mi) 101h 01' 20"  38.74
1992   Miguel Indurain 3,983 km (2,475 mi) 100h 49' 30"  39.50
1993   Miguel Indurain 3,714 km (2,308 mi) 95h 57' 09"  38.71
1994   Miguel Indurain 3,978 km (2,472 mi) 103h 38' 38"  38.38
1995   Miguel Indurain 3,635 km (2,259 mi) 92h 44' 59"  39.19
1996   Bjarne Riis[A] 3,765 km (2,339 mi) 95h 57' 16"  39.24
1997   Jan Ullrich# 3,950 km (2,450 mi) 100h 30' 35"  39.30
1998 **  Marco Pantani 3,875 km (2,408 mi) 92h 49' 46"  41.74
1999[B]   Lance Armstrong 3,687 km (2,291 mi) 91h 32' 16"  40.28
2000[B]   Lance Armstrong 3,662 km (2,275 mi) 92h 33' 08"  39.57
2001[B]   Lance Armstrong 3,458 km (2,149 mi) 86h 17' 28"  40.07
2002[B]   Lance Armstrong 3,272 km (2,033 mi) 82h 05' 12"  39.86
2003[B]   Lance Armstrong 3,427 km (2,129 mi) 83h 41' 12"  40.95
2004[B]   Lance Armstrong 3,391 km (2,107 mi) 83h 36' 02"  40.56
2005[B]   Lance Armstrong 3,593 km (2,233 mi) 86h 15' 02"  41.66
2006   Óscar Pereiro[C] 3,657 km (2,272 mi) 89h 40' 27"  40.78
2007   Alberto Contador# 3,570 km (2,220 mi) 91h 00' 26"  39.23
2008   Carlos Sastre* 3,559 km (2,211 mi) 87h 52' 52"  40.50
2009   Alberto Contador 3,459 km (2,149 mi) 85h 48' 35"  40.31
2010   Andy Schleck#[D] 3,642 km (2,263 mi) 91h 59' 27"  39.59
2011   Cadel Evans 3,430 km (2,130 mi) 86h 12' 22"  39.79
2012   Bradley Wiggins 3,496 km (2,172 mi) 87h 34' 47"  39.92
2013   Chris Froome 3,404 km (2,115 mi) 83h 56' 20"  40.55
2014   Vincenzo Nibali 3,660.5 km (2,274.5 mi) 89h 59' 06"  40.67
2015   Chris Froome* 3,360.3 km (2,088.0 mi) 84h 46' 14"  39.64
2016   Chris Froome 3,529 km (2,193 mi) 89h 04' 48"  39.62

** 1998 Stage 17 was abandoned (Festina Affair rider protest)
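The average speed column is simply distance divided by total race time; a quick sketch for verifying any row of the table:

```python
def avg_speed_kmh(distance_km, hours, minutes, seconds):
    """Winner's average speed from the total race time."""
    total_hours = hours + minutes / 60 + seconds / 3600
    return distance_km / total_hours

robic_1947 = avg_speed_kmh(4642, 148, 11, 25)   # ≈ 31.32 km/h
froome_2016 = avg_speed_kmh(3529, 89, 4, 48)    # ≈ 39.62 km/h
```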

Update 28 Jul 2016:
Chart has been updated

The 1998 value was revised down after removing 149 km from the total distance, since stage 17 was abandoned. The calculated average speed (40.14 km/h) is still a little higher than what the official site reports (39.983 km/h), and I can't seem to work out why.

It equates to about 14.4 km missing from the total reported distance, or about 21 and a half minutes of total duration unaccounted for. The prologue that year was 5.6 km and I've included it, and time bonuses for stage wins by Pantani wouldn't account for anything like that much.
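The arithmetic behind those figures, as a sketch:

```python
hours_1998 = 92 + 49 / 60 + 46 / 3600   # Pantani's total time: 92h 49' 46"
distance_km = 3875 - 149                # revised: stage 17 (149 km) abandoned

calc_speed = distance_km / hours_1998   # ≈ 40.14 km/h
official_speed = 39.983                 # km/h, per the TdF site

# the discrepancy, expressed as missing distance or missing time
missing_km = distance_km - official_speed * hours_1998           # ≈ 14.4 km
missing_min = (distance_km / official_speed - hours_1998) * 60   # ≈ 21.6 min
```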


Saturday, July 23, 2016

Bemusing aero equipment choices at the Tour de France

2016 Tour de France. Stage 18 ITT Megève. GC contenders giving away time with bike set up choices. Why?

Here's the course profile:

Here's a table with the aero choices made by the 20 fastest riders on the day. As far as I can tell, all rode using a skin suit (although some of the suits were not exactly a good aero fit). Note: I updated Mollema's entry a few hours after first posting, as he was using a rear disc wheel.

Note: Rodriguez swapped from a time trial bike to a road bike part way along.

This suggests all these riders recognised that aerodynamics still mattered, yet apparently not enough to think it worth using other basic aero kit. Perhaps they felt there was too much of a weight penalty (there's not, BTW). Or they did not feel good climbing on a TT bike, or were concerned about the descent? Lack of preparation is my take.

For reference, I used photos from various websites to work out who used what. For front wheels and helmets there might be a little debate as to whether an item fits the aero category or not: a helmet needed to be a full aero TT helmet to count, and what looked like low-ish profile wheels went in the "No" category. Always happy to amend if people spot errors.

Richie Porte, with not even an aero front wheel, let alone an aero helmet:

Fabio Aru, not much better:

Contrast with the stage winner Chris Froome who used all the aero aids at his disposal:


Average speeds for the top 20 ranged from 31.5km/h to 33.2km/h. Aerodynamics still matters quite a bit at such speeds. So why not take advantage of it?

Yes, it was hilly, but the lack of aero equipment for a TT even at these speeds does rather bemuse me. The weight penalty of aero helmets and wheels is negligible, and any small benefits from lighter kit are outweighed by the aero losses.
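To put rough numbers on that: the power absorbed by air drag scales with CdA and the cube of airspeed, so even at ~32 km/h a modest CdA saving is worth several watts. A sketch, where the 0.02 m² saving for an aero helmet plus deeper front wheel is my illustrative assumption:

```python
AIR_DENSITY = 1.2  # kg/m^3, approximate sea-level value

def drag_power_w(cda_m2, speed_kmh):
    """Power absorbed by aerodynamic drag, assuming still air."""
    v_ms = speed_kmh / 3.6
    return 0.5 * AIR_DENSITY * cda_m2 * v_ms ** 3

saving_w = drag_power_w(0.02, 32)  # ≈ 8 W at 32 km/h
```

Several watts for free over a TT of this length is time no GC contender should be giving away.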

More discussion later. Perhaps.


Sunday, December 13, 2015

FTP variability (and doping)

In one of the five hundred and twenty five thousand online forum threads about why Chris Froome is or is not a doper, one of the questions raised was about whether a coach could detect if an athlete was on the juice based on their performance (power) data.

That led to a comment about typical changes in a rider's power over the course of a season.

As to the question of a coach's ability to detect doping from performance: performance changes are multifactorial, and that makes it nigh on impossible.

It's relatively easy to measure the performance change (power meters enable that), far more difficult to parse out the specific reasons why it occurs.

Of course, if you have known an athlete for a long time, know their training and performance history, and have a reasonable understanding of their potential, then a sudden large boost when nothing else in particular has changed might naturally make you begin to wonder.

Consider that I have seen athletes attain Functional Threshold Power improvements of between 5% and 100% in 6 months of training, and you can immediately see the problem, especially given doping provides performance advantages well within the range attainable by completely legitimate means.

Better training, better diet, better sleep, better psychology, better aero, better planning and support, better race skills and race craft, better equipment and tools, and of course, doping. These are not mutually exclusive means to improve performance.

This is the problem at the heart of much of the discussion about Froome and others. Lots of Clinic focus is on his "transformation". The problem is that there are plenty of legitimate as well as illegitimate means by which such performance changes can be explained.

Balance that with the fact that in the past 30 years, half the riders standing on the podium of the major Euro pro races and finishing top 20 in Grand Tours are known dopers (let alone the ones that slipped through the net). Objective assessment therefore needs to consider all such possibilities.

However, that still doesn't mean one can infer the reasons for a rider's performance, or more to the point their change in performance, from performance data or even physiological testing data such as lactate threshold or VO2max.

I think the only way an ethical coach is likely to spot or suspect doping is if they are in frequent eye ball contact with the athlete, and it's not so much going to be from their on-bike performance, but rather from observing off-the-bike behaviour.

As much as coaches might like to be in frequent eyeball contact so they can do a better job, coaches are often not at such close quarters with their clients. Riders travel, and a coach can't be with all their clients all the time. The exceptions are squad/institute coaches who interact with their athletes multiple times per week and travel with them to the same races.

More usually the contact is via phone/skype/chat/email and other social and electronic media style interactions, as well as the athlete's diary notes that accompany their power meter files. For the most part this works pretty well (athlete results demonstrate that to us all the time) but of course there are some things for which seeing the athlete is preferable and some personalities that require more eye ball contact than others.

Anyway, on one of the forums I made a comment about the typical variability in FTP for an active racing cyclist. An often quoted value is about 10% variance from out of form/off season to peak fitness. That was questioned as being quite a large variance. I really had nothing other than my years of coaching and personal experience to suggest whether or not this was realistic.

So I thought about attempting to answer the question with some data.

Fire up WKO4 and create a report using the following expression:

max(ftp(meanmax(power),90)) / min(ftp(meanmax(power),90))

and apply it to ranges covering entire years of data (with power data for >>90% of rides).

That expression calculates the modelled FTP for the date range selected, locates the maximum and minimum values for FTP that are calculated during that date range, and calculates the ratio of the maximum to the minimum FTP.
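Outside WKO4, the same calculation is trivial given a series of modelled FTP estimates for a date range (the numbers below are hypothetical, for illustration only):

```python
def ftp_variability_ratio(modelled_ftp_w):
    """Max-to-min ratio of modelled FTP over a date range,
    analogous to the WKO4 expression above."""
    return max(modelled_ftp_w) / min(modelled_ftp_w)

# hypothetical season: off-form low of 240 W, peak of 295 W
ratio = ftp_variability_ratio([250, 240, 262, 280, 295, 288])  # ≈ 1.23
```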

I did that for a selection of 10 athletes over 2 seasons. These athletes are mostly competitive amateur through to elite level (but no full time pros), and have power data for >> 90% of their rides.

This is the summary:

What I find interesting is the variance as measured by the modelled FTP in WKO4 is larger than I would have expected.

Over 10 riders for 2 seasons each, we have an average maximum to minimum modelled FTP ratio of 1.23, meaning the peak modelled FTP for a season was, on average, 23% higher than the minimum modelled FTP for that same season.

Good luck trying to pick out one specific reason for performance changes when models are showing this sort of variance in FTP.

Do I think their FTP really varies that much? Well, possibly not quite, but with time I am finding mFTP to be a quite reliable indicator, provided the quality of the input data is good. One erroneous power spike can mess with the power-duration data and the mFTP value. Indeed, when there are large changes in the modelled power-duration metrics, it's more often due to input data error than anything else.

For reference I also provided an indication of their annual TSS (~27,000) and average CTL (~77 TSS/day) for this selection, just to show that these are riders who on average have quite decent training volume. I would not rely totally on those TSS values though, as they probably need an audit of the FTP history applied in WKO4 to generate them, so I consider them indicative only for now.

I also looked at my own data for 2009 and 2010, and my annual mFTP variance was 15% each year, so a bit lower than the average reported above.

Now of course with all such things one needs to consider context, and quality of the input data. For now that's a study beyond what I have time for.
