Adaptive Training

Completely agree. That would be very interesting.

I’m not being a Sufferfest fanboy here. I’m being a skeptic about ML because I actually know a bit about it and I’m an engineer in the tech industry. ML is certainly useful and has its place, but lots of people seem to think it’s magic.

With machine learning, it’s very easy to create a model that overfits and produces wacky results. Alternatively, it could be tuned very conservatively; at the extreme end, you would wonder why they bothered. You have to get the balance right. This is only useful if you have enough relevant data. If you have some data but not enough to paint a complete picture, your model should not try to get too fancy.
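To make the overfitting point concrete, here’s a toy sketch with made-up FTP numbers (nothing to do with anyone’s actual model): a flexible polynomial nails the handful of test results it has seen and then extrapolates nonsense, while a boring straight line stays sane.

```python
# Toy illustration (invented numbers): overfitting a sparse FTP history.
import numpy as np

weeks = np.array([0, 4, 8, 12, 16, 20, 24, 28], dtype=float)
ftp   = np.array([245, 250, 248, 255, 252, 258, 256, 260], dtype=float)

# Flexible model: a degree-6 polynomial chases the noise in 8 points.
wiggly = np.polyfit(weeks, ftp, deg=6)
# Conservative model: a straight line, roughly +0.5 W/week.
linear = np.polyfit(weeks, ftp, deg=1)

week_32 = 32.0  # extrapolate one block past the data
print("degree-6 prediction at week 32:", np.polyval(wiggly, week_32))
print("linear prediction at week 32:  ", np.polyval(linear, week_32))
# The high-degree fit matches the training points almost exactly, but its
# extrapolation can swing wildly; the linear fit is boring but plausible.
```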

And depending on the type of model, it might be a black box. Then no one can explain why it suggests what it suggests. I kind of like to understand the logic behind my training.

7 Likes

I agree. I think as ‘customers’ we want a plan that we feel is tailored towards us. The mention of machine learning makes us more willing to pay a premium for a service.

When I saw the TR announcement I was excited, but as I looked into it I realised I’d be getting ML on a training platform that would not keep me engaged or focused. I don’t like following a power line 100% of the time.

The Suf app gives me the level of variety that enables me to:

  1. ride to a no vid workout (similar to TR)
  2. use a suf training workout
  3. watch a GCN workout
  4. watch an inspiration video and do an endurance ride
  5. go into mini player and watch a film/Netflix/YouTube

I agree that TR is probably better when you compare number 1). But it’s the variety that keeps me engaged.

I firmly believe that it’s consistency that will improve me. Having an app that uses ML is likely to lead to less consistency. For me at least…

3 Likes

Well, obviously I don’t know all the details, but on the TR podcast they said you take a ramp test, and their ML learns based on about 100 different data points (age, gender, 6 week workout averages, 3 week averages, projected vs actual FTP gains, subjective user input, performance in individual zones vs averages, etc) analyzed on over 100,000,000 workouts.

I mean, it’s obvious to me they have data to work with. Their ML models can see performance over time and what patterns worked best with years and years of data. I don’t know why you think they have nothing to base these models on. They are sitting on a mountain of data.

How is doing a workout, nailing it, and saying it was easy not something to adapt to?

Overfitting is one thing I definitely fear as well. I had that in mind in my original post, but I did not want to get into the technical weeds.

However, the real problem is data quality. Almost all of the current problems with ML (discrimination in bank lending, facial recognition of minorities, etc.) have largely resulted from improperly curated data.

Then there is the explainability problem. Even with a simple rules engine, once you get beyond a few rules it becomes impossible to understand why the algorithm made the decisions it made. With ML it is much worse. While it is an area of current research, nobody has a good handle on understanding why the models do what they do.

Without that, you have problems understanding if the training recommendations are correct or not.
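For what it’s worth, here’s a toy sketch of the rules-engine end of that spectrum (purely illustrative, with invented rules and thresholds): even a trivial version has to log which rules fired to stay explainable, and a trained model won’t give you even that much of a trace.

```python
# Toy rules engine (illustrative only): each rule inspects recent training
# state and nudges tomorrow's workout intensity. The final answer is the
# interaction of every rule that fired, which gets murky with dozens of rules
# -- but at least it can be logged.
from dataclasses import dataclass

@dataclass
class State:
    days_since_rest: int
    last_workout_rpe: int      # 1-10 subjective effort
    ramp_test_overdue: bool

RULES = [
    ("fatigue: 5+ days without rest", lambda s: -0.05 if s.days_since_rest >= 5 else 0.0),
    ("last session felt very hard",   lambda s: -0.03 if s.last_workout_rpe >= 9 else 0.0),
    ("last session felt easy",        lambda s: +0.02 if s.last_workout_rpe <= 3 else 0.0),
    ("ramp test overdue, bump target", lambda s: +0.02 if s.ramp_test_overdue else 0.0),
]

def recommend(state: State) -> float:
    adjustment = 0.0
    for name, rule in RULES:
        delta = rule(state)
        if delta != 0.0:
            print(f"rule fired: {name} ({delta:+.2f})")  # the explanation trail
            adjustment += delta
    return adjustment

print("net intensity change:", recommend(State(6, 9, False)))
```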

5 Likes

You say “projected vs. actual FTP gains”, but how are they going to know that without doing a new test?

I think they said they retest every month. And I know Zwift, and I think Garmin, will auto detect FTP changes over time without retesting. I imagine it’s a function of historical heart rate data and power curve vs current HR and power.
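Pure speculation to mirror that guess (definitely not Zwift’s or Garmin’s actual method, and the numbers are invented): compare watts-per-heartbeat on steady rides now against a baseline period and scale the stored FTP by the change.

```python
# Speculative sketch: FTP drift estimated from aerobic efficiency (W per bpm).
import numpy as np

def efficiency(avg_power, avg_hr):
    return avg_power / avg_hr          # watts per heartbeat

baseline_rides = [(210, 142), (205, 140), (215, 145)]   # (avg W, avg HR)
recent_rides   = [(222, 141), (218, 139), (226, 143)]

base = np.mean([efficiency(p, h) for p, h in baseline_rides])
now  = np.mean([efficiency(p, h) for p, h in recent_rides])

stored_ftp = 250.0
print(f"efficiency change: {now / base:.3f}")
print(f"drift-adjusted FTP guess: {stored_ftp * now / base:.0f} W")
```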

1000x this. It’s amazing how often people will use ML when they could have just used linear regression… and in fact, a lot of the time, in order to get any meaningful results you end up constraining the problem so much that you aren’t very far from linear regression anyway, but with a far more complicated setup.
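For example, a plain least-squares baseline like the sketch below (hypothetical features, made-up numbers) often gets you surprisingly far before any fancier model is justified.

```python
# Minimal baseline: ordinary least squares predicting FTP change over a
# training block from a few workout summaries. Features and data are invented.
import numpy as np

# Rows: athletes. Columns: weekly TSS, interval completion rate, hours/week.
X = np.array([
    [350, 0.95, 6.0],
    [420, 0.88, 7.5],
    [280, 0.99, 5.0],
    [500, 0.80, 9.0],
    [390, 0.92, 7.0],
], dtype=float)
y = np.array([8.0, 5.0, 6.0, 2.0, 7.0])  # FTP gain in watts over the block

# Add an intercept column and solve the least-squares problem.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

new_athlete = np.array([400, 0.90, 7.0, 1.0])
print("coefficients:", coef)
print("predicted FTP gain:", new_athlete @ coef)
```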

I’m not commenting on the TR product, I know nothing about it.

I would echo some of the other missives, though, that it does sound a little bit like they’re just trying to take away the re-test element. What’s wrong with sticking to a plan and retesting when you can make it through the 9th Hammer without needing an ambulance?

1 Like

There’s nothing to stop you retesting each month anyway if you decide to, and anyone with experience of ‘auto detect’ knows it has a problem with training rides. Are you aware of any evidence verifying any one such system during training?

1 Like

I switched to TR as I winter in Florida, and I like that the workouts are pushed outdoors and work well, but I do prefer Sufferfest’s indoor workouts and will switch back in the fall. Maybe work on pushing workouts outdoors first? I don’t know too much about Adaptive Training, but I feel interfacing with things such as Whoop, Apple Watch, Oura ring, etc. would allow more precise adaptive training, just as training programs link with Strava, Garmin, etc. Maybe it’s time for the “off the bike” data interfaces too, so the stress levels would automatically adjust to your recovery metrics?

I’d rather just retest than have a program guessing what I’m capable of. This is a big issue with the power duration model because it assumes you are hitting your peak intervals at some point during the time period (usually 90 days) all along the curve.
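For anyone who hasn’t seen it, the classic two-parameter version of that kind of model is just P(t) = CP + W′/t. The sketch below (made-up numbers) shows how a single stale point on the curve drags the estimate down, through no fault of the math.

```python
# Sketch of the 2-parameter critical power model, P(t) = CP + W'/t, fit to
# mean-maximal power at a few durations (invented numbers). The fit is only as
# good as the efforts behind each point: swap in a stale, submaximal 12-min
# value and the estimated CP drops.
import numpy as np

durations = np.array([180.0, 360.0, 720.0])        # seconds: 3, 6, 12 min

def fit_cp(mmp_watts):
    # Linear in 1/t: P = CP + W' * (1/t), so an ordinary line fit does the job.
    wprime, cp = np.polyfit(1.0 / durations, mmp_watts, deg=1)
    return cp, wprime

fresh = np.array([340.0, 300.0, 275.0])            # all-out efforts in window
stale = np.array([340.0, 300.0, 245.0])            # 12-min point never tested

for label, mmp in (("fresh", fresh), ("stale", stale)):
    cp, wprime = fit_cp(mmp)
    print(f"{label}: CP ~ {cp:.0f} W, W' ~ {wprime / 1000:.1f} kJ")
```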

Sir Neal Henderson discussed this on another thread about WKO5 here:

4 Likes

Agreed. I don’t know if you have used the free open-source software Golden Cheetah. It has “trends” and you can select the duration of the study by hand. They use “critical power”, which is a little different, but not hugely. Unless you include a test or races in the timescale it is useless.

Anyone who thinks this would be a benefit should just download that free software & try putting in what it says in Sufferfest. For free. TrainerRoad have no magic sauce.

One thing I think they could do is evaluate race files. They know the dates of your races, and race files from Garmin, Wahoo head units, etc. upload to them. They could say you held your power for 5 minutes higher than what we thought was your maximum, etc. But even then, your values in a multi-hour race don’t represent your real maximum, particularly sprinting. I don’t think their machines will learn from only the workouts you try really hard at anyway. Lost count of the number of times my Garmin head unit has “found new FTP” after an easy workout.

The one thing I would like Sufferfest to consider is analysing race files and learning from them to improve their plans. They already have our FFs to learn from.

I don’t disagree! Currently I’m retesting every 4-6 weeks to gauge where I am. Auto-detection of FTP obviously will have its flaws. Garmin says they do it by detecting workouts where your IF > 1.00, which to them means your FTP setting may be too low. I have no idea if TR will be successful with their adaptive training program, and to be honest I don’t see much of an advantage of it vs just retesting every now and again, especially since they are doing that initially anyway!
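For reference, that check is cheap to sketch (standard Coggan-style normalized power below, not Garmin’s actual code): an IF above 1.0 held over a long enough ride implies the stored FTP is too low, since FTP is roughly your best ~1-hour power.

```python
# Rough sketch of the "IF > 1.0 means FTP is set too low" idea.
import numpy as np

def normalized_power(power_1hz):
    # 30-second rolling average, raised to the 4th power, averaged, 4th-rooted.
    kernel = np.ones(30) / 30.0
    rolling = np.convolve(power_1hz, kernel, mode="valid")
    return np.mean(rolling ** 4) ** 0.25

def flag_low_ftp(power_1hz, ftp, min_seconds=1200):
    if len(power_1hz) < min_seconds:
        return False  # ignore short efforts; a 5-minute blast proves nothing
    intensity_factor = normalized_power(power_1hz) / ftp
    return intensity_factor > 1.0

# Example: an hour of 1 Hz power data around 270 W against a 250 W FTP setting.
rng = np.random.default_rng(0)
ride = 270 + rng.normal(0, 20, size=3600)
print("FTP probably set too low:", flag_low_ftp(ride, ftp=250))
```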

That said, I think we are a little off topic as to what TR is trying to do. It’s not just some regression on your previous workouts to predict FTP, etc.; rather, it’s predictions based on your current FTP, fitness/freshness, recent workouts, etc.: what do I do TODAY? That’s something where I think ML could make a difference. They see all the workouts people have done, they see the Ramp Tests people do. Are there trends or commonalities among the users who have had the most gains between ramp tests? There must be thousands of users that match my profile: age, gender, power curve, realized power gains over time, etc. Who gained the most power and had the best results? What was similar between them? Can you build a model on that?
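One naive way to do that kind of “riders like me” lookup (my sketch, invented data, certainly not TR’s method) is a nearest-neighbour match on a normalized profile:

```python
# Find riders with a similar profile, then look at what the best responders
# among them gained. Features and numbers are invented.
import numpy as np

# Columns: age, weight kg, FTP W/kg, weekly hours. One row per rider.
riders = np.array([
    [35, 75, 3.4, 6.0],
    [34, 78, 3.3, 7.0],
    [36, 74, 3.5, 5.5],
    [50, 82, 2.8, 4.0],
    [28, 68, 4.2, 10.0],
])
ftp_gain = np.array([12.0, 18.0, 9.0, 5.0, 6.0])   # watts gained last block
me = np.array([35, 76, 3.4, 6.5])

# Normalize each feature so age and W/kg are comparable, then take the
# Euclidean distance to every rider and keep the k closest.
mu, sigma = riders.mean(axis=0), riders.std(axis=0)
dist = np.linalg.norm((riders - mu) / sigma - (me - mu) / sigma, axis=1)
k = 3
nearest = np.argsort(dist)[:k]
print("closest riders:", nearest, "their gains:", ftp_gain[nearest])
print("expected gain if I train like them:", ftp_gain[nearest].mean())
```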

I don’t know if they can, but they are trying. And obviously there is a lot of marketing behind what they are doing, so who knows how useful this will truly be. But to me it didn’t seem like a random ML project to quickly get out the door in order to just add some fancy ML/AI marketing material. They said they initially tried something like this years ago but failed and have been building this particular iteration for 3 years. Maybe there’s something behind it, maybe not. It will be interesting to watch nonetheless.

4 Likes

You say “based on your current FTP, fitness/freshness”, neither of which they know. This is the problem. It relies on magic.

I mean they are doing ramp tests every month. So, I think they know.

It may be something like what intervals.icu and Garmin are doing with estimated FTP. For me, those values are very close to reality.

Agree! I only know a little about machine learning, but this doesn’t seem like a reasonable application of actual machine learning.

I would suspect that they are doing a regression analysis, not machine learning.

2 Likes

But who wants to do a ramp test every month?! I’d rather just bump my intensity up and test less frequently.

2 Likes

I don’t input 100% of my physical activity. I don’t have a power meter or bike computer for outdoor rides. I don’t wear a fitness tracker of any sort. I also do things like rock climbing; no idea how that could make its way into the model.

It’s nice to say that there could be lots of data, like Apple Watch and Whoop and outdoor ride/run tracking, but most people only have some subset of these gadgets. Or, suppose I tell the app that I do have the gadgets, but I forget to wear one during a run? The model has to be able to do reasonable things when we give it bad or incomplete data. It’s not easy. I’m not saying it’s impossible, but given the size of the market and the cost of machine learning specialists, I don’t expect magic.
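To be concrete about what “reasonable things” might look like, here’s a toy sketch of one common tactic (my illustration, not anyone’s product): impute the missing gadget data with a population average and pass an explicit “was this measured?” flag alongside it, so downstream code can discount the guessed values.

```python
# Toy sketch: mean imputation plus missingness indicators for gadget data.
import numpy as np

# Columns: sleep hours (watch), HRV (Whoop), outdoor TSS (head unit).
population_mean = np.array([7.2, 65.0, 120.0])

def prepare(raw):
    raw = np.asarray(raw, dtype=float)
    missing = np.isnan(raw)
    filled = np.where(missing, population_mean, raw)
    # Return values plus indicator flags; a model trained with the flags can
    # learn to lean less on imputed numbers.
    return np.concatenate([filled, missing.astype(float)])

print(prepare([6.5, np.nan, np.nan]))   # no Whoop, no outdoor power meter
```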

I said earlier that there are simpler ways to achieve some amount of personalization without re-testing and without risking unexplainable results. I still think that’s the reasonable way to go. Maybe that’s actually what TR is doing, except they attached the magic buzzword.

1 Like