After nearly a decade of testing wearables, I’ve accumulated an overwhelming amount of health and fitness data. While I enjoy analyzing my daily stats, I’ve come to dread the rise of AI-generated summaries in fitness and wellness apps. Over the past couple of years, AI summaries have become a common feature across platforms like Strava, Whoop, and Oura, promising to translate raw data into “plain English.”
Strava’s Athlete Intelligence, for example, aims to break down your workouts into simple insights. Whoop’s Coach offers a “Daily Outlook,” summarizing weather, recent activity, recovery, and workout tips. Oura’s Advisor provides trend analysis and personalized advice. Even my bed greets me each morning with a summary of how AI helped ensure I slept well.
These summaries typically sound helpful: "Good morning! You slept 7 hours with a resting heart rate of 60 bpm. That's normal, but it's slightly elevated for you, which suggests you might not be fully recovered. Consider going to bed earlier tonight. Remember, health is all about balance!" The trouble is that these brief insights usually sit right next to detailed charts showing the same data, which makes the summaries feel redundant.
Things get more frustrating with workout summaries. For instance, Strava's Athlete Intelligence flagged a recent run as "intense," with high heart rate zones and effort above my usual range. Thanks, but it merely regurgitated the effort, pace, and heart rate zones I could already see in the workout graph. It left out the crucial context: I was running triple my typical mileage early in the season, in high humidity and 85-degree heat, after a long break. I also fell and injured myself, which forced me to cut the run short. Those are the details that should have been highlighted, and they weren't.
A more insightful summary might have been: “You ran during record heat, maintaining pace but increasing your risk of injury due to rapid mileage buildup. Given your recent injury, consider low-intensity walks until you recover.” Unfortunately, these AI summaries rarely offer such nuanced advice.
Runna, a popular running app with AI insights, did slightly better. It suggested my next run should be "easy," which aligned with my need to recover. Still, that easy run was scheduled just 48 hours out, not nearly enough time for my knees to heal, so even this advice wasn't very practical.
In-app chatbots fare only a little better. I asked Whoop Coach whether I should run given my injury, and it simply failed to respond, directing me to customer support instead. Oura Advisor was more helpful: it noted that my "Readiness" was low, cited heat, stress, and injury as factors, and recommended rest. When I asked what activities were safe with an injured knee, it offered common-sense suggestions such as short walks, gentle stretching, or resting if pain flared up. That's an improvement, but it still required me to know what to ask in the first place.
This type of AI insight could be genuinely useful during recovery or other tricky situations, but the reality falls short. When I asked Oura Advisor about my sleep and injury risk, it told me everything was "improving," even though I felt stressed and fatigued. When I asked about my stress levels, it limited its analysis to recent data, ignoring the years of history that could have painted a fuller picture. That disconnect between lived experience and AI feedback is frustrating and potentially misleading.
Many users share these frustrations, and forums are filled with skepticism about how helpful these AI features truly are. Yet the companies report strong numbers: Oura's chief product officer claims that over 60% of users find Advisor impactful and that 20% use it daily, while Strava reports that about 80% of users who opt in find Athlete Intelligence "very helpful" or "helpful." Even so, the summaries themselves read like superficial overviews: quickly generated, but short on depth and actionable advice.
Why are these summaries so underwhelming? It largely comes down to the limitations of large language models and the fragmented nature of health data. Platforms like Strava are great at logging workouts but lack broader health metrics, while Oura has years of deep sleep data that can't realistically be analyzed on the fly without costly infrastructure. Privacy concerns and legal liability also likely limit how much these assistants are allowed to advise, especially on sensitive topics like injuries.
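To make the redundancy concrete: none of these companies publish their pipelines, but the output pattern is consistent with something like the sketch below. This is a purely illustrative Python example under my own assumptions (the function name, prompt wording, and metrics are all invented, not any vendor's actual code). The point is that if the model is only handed the day's dashboard numbers, the "summary" can do little more than restate them.

```python
# Hypothetical sketch of how a daily-summary feature could be wired up.
# Nothing here reflects any vendor's real implementation; it illustrates why
# a model that only sees today's chart values can only echo them back.

def build_summary_prompt(stats: dict) -> str:
    """Serialize one day's dashboard metrics into a prompt for an LLM."""
    metric_lines = [f"- {name}: {value}" for name, value in stats.items()]
    return (
        "You are a friendly fitness coach. In two or three sentences, "
        "summarize the user's day and offer one encouraging tip.\n"
        "Today's metrics:\n" + "\n".join(metric_lines)
    )

# The same numbers the app already displays in its charts.
today = {
    "sleep_hours": 7.0,
    "resting_hr_bpm": 60,
    "recovery_score": 72,
}

prompt = build_summary_prompt(today)
print(prompt)

# A real product would send `prompt` to a hosted model at this point. Note
# what the prompt is missing: no injury notes, no weather, no multi-year
# trends. Without that context, the output can't be more useful than the
# chart it sits next to.
```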
Ultimately, these AI summaries often feel like rehashed data: the equivalent of a book report written by a student who skimmed the Wikipedia entry instead of reading the book. They're quick, cheap, and easy to deploy, but they lack the nuance that real-world health guidance demands. Future AI advances may eventually deliver truly personalized, actionable insights; today's offerings are more promotional gimmick than practical tool.
In short, AI summaries in fitness apps are currently more about keeping up with trends than providing genuine value. They're a reasonable compromise: fast, inexpensive, and relatively privacy-conscious. But they fall far short of meaningful, personalized health advice. If you want insights that actually make a difference, you'll still need to look past the current AI features and lean on trusted medical guidance and your own lived experience.