The session was going badly. I was in a hotel room in Zurich — one of those business hotels near the main station where the walls are thin enough to hear the elevator through them — and nothing was working. Techniques that had been at eighty-five percent consistency the day before were suddenly at sixty. My hands felt clumsy. My timing was off. The cards seemed to have opinions about what they were willing to do, and those opinions were uniformly negative.
I did what I always did on days like that. I pushed through. I assumed I was just having a bad day, that randomness was asserting itself, that the fog would lift if I kept going. An hour later, I’d accomplished nothing except deepening my frustration. I closed the practice log with a single entry: “Bad day. Nothing worked.”
For a long time, that was the extent of my analysis. Bad days happened. Like weather. You couldn’t predict them, you couldn’t prevent them, and you certainly couldn’t extract useful information from them. You just endured them and hoped tomorrow would be better.
I was wrong about all of this. And the realization that I was wrong became one of the most practically useful insights in my entire practice journey.
The Randomness Illusion
The idea that bad practice days are random is comforting because it absolves you of responsibility. If bad days are like rain — they just happen sometimes — then there’s nothing to analyze, nothing to fix, nothing to confront. You can shrug, close the notebook, and wait for the statistical bounce-back to better performance.
But bad days aren’t random. They feel random because we’re not looking at the right variables. When I started tracking my practice sessions with more granularity — not just “good day / bad day” but the specific metrics of what went well and what didn’t, alongside notes about sleep, travel, stress levels, what I’d practiced the day before — patterns emerged almost immediately.
The “Practice Like a Pro” framework put this bluntly: apparent regression during practice is almost never random. It’s a signal. The question isn’t “why am I having a bad day?” but “what is this bad day telling me?” Once you reframe it that way, the entire emotional experience of a difficult session changes. It stops being a source of discouragement and becomes a source of information.
And information, as any consultant will tell you, is always useful. Even when it’s unpleasant.
The Four Causes of “Bad Days”
Looking back through months of practice logs — the ones where I’d been detailed enough to track contributing factors — I found that virtually every bad practice session could be attributed to one of four causes. Not random bad luck. Specific, identifiable, and usually predictable causes.
Cause One: Fatigue
This was the most common and the most obvious in retrospect, though it took me months to see it. I was a consultant who traveled roughly two hundred nights a year. My practice sessions happened in hotel rooms, often at the end of long working days that involved early flights, multi-hour meetings, and the cumulative stress of performing in my day job before sitting down to perform at my desk with a deck of cards.
The correlation between travel fatigue and practice quality was almost perfect. Days where I’d had a short flight, a manageable schedule, and a decent night’s sleep: above-average sessions. Days where I’d taken a red-eye, run three back-to-back client meetings, and barely eaten: sessions that felt like I’d never held a deck of cards before.
This seems absurdly obvious in hindsight. Of course your fine motor skills deteriorate when you’re exhausted. Of course your focus degrades after a twelve-hour work day. But I hadn’t been tracking the correlation, so I’d been experiencing each fatigued session as an isolated mystery. “Why can’t I do this today? I could do it yesterday.” Because yesterday you’d had eight hours of sleep and today you’d had five. That’s why.
The insight wasn’t just diagnostic. It was tactical. Once I understood the fatigue pattern, I could adjust. On high-fatigue days, instead of attempting the most demanding techniques and failing miserably, I’d shift to practice that required less physical precision — mental rehearsal, routine sequencing, reviewing video of my performances, or working on the structural aspects of my sets. The session was still productive. It just wasn’t the same kind of productive as a fresh day.
Cause Two: Wrong Difficulty Level
Some bad days weren’t about fatigue at all. They were about difficulty mismatch. I’d be feeling sharp, well-rested, focused — and still performing poorly. When I dug into the data, the pattern was consistent: these were days where I’d jumped to a significantly harder technique without adequate preparation, or days where I’d been working at the same difficulty level for too long and the stagnation was manifesting as sloppy, disengaged practice.
The sweet spot of difficulty — roughly ten percent above your current maximum — is a narrow window. Push too far above it and you’re flailing, not adapting. Sit too far below it and you’re maintaining, not growing. Either way, the session feels bad, but the badness has a different quality. Difficulty-too-high feels chaotic and overwhelming. Difficulty-too-low feels boring and flat.
Learning to distinguish between these two flavors of bad was genuinely transformative. When a session felt chaotic, I’d step the difficulty down slightly. When it felt flat, I’d push it up. The adjustment was immediate and the effect was tangible. Within minutes, the session would shift from “bad day” to “productive day at the right level.”
Cause Three: Accumulated Stress
This was the subtlest cause and the one that took me longest to identify. Some bad days weren’t caused by anything that happened that day. They were caused by accumulated stress from the preceding days or weeks.
I noticed a pattern in my logs: after a particularly intense consulting engagement — the kind that involved high-stakes presentations, difficult conversations, tight deadlines — my practice quality would dip for two or three days, even if I was sleeping well and managing my schedule. The stress wasn’t physical fatigue. It was cognitive and emotional depletion. My brain had been running at high capacity for an extended period, and the deficit didn’t show up immediately. It showed up with a lag.
In consulting, we’d call this a lagging indicator. The cause precedes the effect by enough time that the connection isn’t obvious. You don’t feel the stress hangover the day of the stressful event. You feel it two days later, when you sit down to practice and your hands don’t do what your brain is telling them to do.
Understanding the lag changed everything. After particularly intense work periods, I’d schedule lighter practice for the following days. Not as punishment, but as recovery. The same way an athlete adjusts training load around competition, I adjusted practice intensity around consulting intensity. The bad days didn’t disappear, but they became predictable — and predictable means manageable.
Cause Four: Skill Consolidation
This was the most counterintuitive cause, and the one that shifted my entire relationship with difficult sessions.
Sometimes a bad day isn’t a bad day at all. It’s a consolidation phase that looks like regression from the outside. The “two steps forward, one step back” pattern that I’ve written about in earlier posts doesn’t only happen between sessions. It happens within sessions and across multi-day periods. Your brain is integrating a skill, building the myelin sheaths around the neural pathways, stabilizing what was recently learned — and during this process, conscious performance can temporarily decline.
The metaphor I use is construction. If you’re building a road and you’ve laid down the surface layer, the road works. Cars can drive on it. Then the crew comes back to reinforce the foundation — adding drainage, compacting the substrate, stabilizing the edges. During that process, the road might be temporarily blocked or rough. It looks like things are getting worse. But what’s actually happening is that the road is becoming permanent.
Skill consolidation works the same way. After a period of rapid improvement, the system needs time to stabilize what was learned. During stabilization, performance dips. The technique you’d been executing at eighty-five percent drops to seventy-five. It feels like regression. It feels like a bad day. But it’s actually the system doing exactly what it should: converting a fragile new skill into a durable one.
The telltale sign of consolidation, I learned, is that the regression is temporary and is followed by performance that exceeds the previous high. If you were at eighty-five, drop to seventy-five, and then rebound to ninety — that wasn’t a bad day followed by a good day. That was the learning process working as designed.
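That telltale sign can be stated as a simple rule: a dip below the previous high, followed by a rebound above it. A minimal sketch of that rule, with a hypothetical helper name and illustrative numbers (nothing here comes from my actual logs):

```python
def looks_like_consolidation(rates: list[float]) -> bool:
    """True if the series dips below its running high and later exceeds it.

    That pattern (e.g. 0.85 -> 0.75 -> 0.90) is the consolidation signature;
    a dip with no rebound past the old high is plain regression.
    """
    if len(rates) < 3:
        return False  # need at least peak, dip, and rebound
    peak = rates[0]
    dipped = False
    for r in rates[1:]:
        if r < peak:
            dipped = True          # performance fell below the prior high
        elif dipped and r > peak:
            return True            # rebound above the old high after a dip
        peak = max(peak, r)
    return False
```

Applied to the eighty-five / seventy-five / ninety sequence above, the rule fires; applied to a dip that never recovers, it doesn’t.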
Reading the Data Instead of the Emotion
The shift from “bad day” to “diagnostic information” required a fundamental change in how I experienced difficult sessions. The emotional response to poor performance is immediate and powerful: frustration, self-doubt, the urge to push harder or to quit. These emotions are natural. They’re also useless at best, and actively counterproductive at worst.
What helped me was treating my practice log like a consulting engagement. When a client brings me a problem, I don’t respond emotionally. I don’t say “that’s terrible, I’m so sorry, let’s just hope tomorrow is better.” I ask: What’s the data? What changed? What are the contributing factors? What does this tell us about what’s actually happening?
Applying this same clinical lens to my own practice sessions stripped the drama out of bad days. A seventy-percent session after an eighty-five-percent session wasn’t a catastrophe. It was a data point. And data points are useful only if you analyze them instead of feeling them.
I started adding a diagnostic section to my practice logs. Alongside the success rates and techniques practiced, I’d note: sleep quality, travel schedule, work stress level, what I’d practiced the previous two days, whether I was working at a new difficulty level. Over time, the entries built into a personal database that made bad days almost entirely predictable.
The most powerful moment in this shift was realizing that the “bad day” I’d had in that Zurich hotel room — the one I’d dismissed as random weather — was actually caused by a combination of accumulated stress from a brutal consulting week and the fact that I’d jumped two difficulty levels ahead the previous day. Both factors were visible in my log. The bad day was entirely legible. I just hadn’t been reading the language.
The Gift Hidden in Difficult Sessions
There’s a deeper point here that goes beyond diagnosis. Bad days aren’t just explainable. They’re valuable.
Every bad day tells you something about your practice system that a good day doesn’t. A good day confirms that things are working. A bad day reveals where they’re not. A good day is reassuring. A bad day is instructive. If you’re only paying attention on good days, you’re reading only half the story.
The fatigue-driven bad days taught me to manage energy, not just time. The difficulty-mismatch days taught me to calibrate more carefully. The accumulated-stress days taught me to plan practice around life, not in isolation from it. The consolidation days taught me to trust the process even when the numbers dipped.
Each of these lessons came from sessions I would have previously written off as lost time. “Bad day. Nothing worked.” That log entry was a missed opportunity. Nothing about the day was random. Everything about it was meaningful. I just wasn’t looking.
What I Do Now
These days, when a session goes poorly, my first response isn’t frustration. It’s curiosity. What’s causing this? Is it fatigue, difficulty, stress, or consolidation? Each cause has a different response:
Fatigue: shift to lower-intensity practice. Mental rehearsal, structural work, video review.
Difficulty mismatch: recalibrate. Step the difficulty up or down depending on whether the session feels chaotic or flat.
Accumulated stress: lighten the load for the next few days. Recovery isn’t laziness; it’s strategy.
Consolidation: trust the process. The dip is temporary. The rebound is coming. Do not panic and do not push harder.
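The four-way triage above is, in effect, a small decision procedure. A minimal sketch of it, assuming yes/no answers to the diagnostic questions (the function and its inputs are hypothetical, but the routing order follows the post):

```python
# Recommended responses, paraphrased from the four causes above
RESPONSES = {
    "fatigue": "shift to low-intensity work: mental rehearsal, video review",
    "difficulty mismatch": "recalibrate: step down if chaotic, up if flat",
    "accumulated stress": "lighten the load for the next few days",
    "consolidation": "trust the process; expect a rebound past the old high",
}

def diagnose(slept_poorly: bool, just_raised_difficulty: bool,
             intense_work_recently: bool, recent_rapid_gains: bool) -> str:
    """Route a poor session to the most likely of the four causes."""
    if slept_poorly:
        return "fatigue"
    if just_raised_difficulty:
        return "difficulty mismatch"
    if intense_work_recently:
        return "accumulated stress"
    if recent_rapid_gains:
        return "consolidation"
    # Nothing obvious: check calibration first, since it is cheapest to test
    return "difficulty mismatch"
```

The Zurich session, fed through this lens, would have flagged both stress and difficulty, which is exactly what the log later showed.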
The irony is that this diagnostic approach has made bad days some of my most productive sessions. Not in terms of technique improvement — you’re not going to hit personal bests on a fatigue day. But in terms of learning about how your own system works, what it needs, and how to give it what it needs. That self-knowledge compounds. It makes every future session more intelligent, more calibrated, more effective.
The Myth, Dissolved
The myth of the bad day is the myth of randomness applied to skill development. It says: some days you have it, some days you don’t. Nothing to do about it. Roll the dice again tomorrow.
The reality is that practice performance is a system, and systems don’t operate randomly. They respond to inputs. Change the inputs, and the outputs change. Understand the inputs, and the outputs become predictable. Track the inputs over time, and you build a model of your own practice biology that makes you, in essence, your own coach.
I still have sessions where the success rate drops and the techniques feel sluggish and the cards seem to have declared independence from my hands. But I don’t have “bad days” anymore. I have diagnostic sessions. I have data points. I have signals that tell me something specific about what my system needs.
And that shift — from weather to data, from random to readable, from discouraging to instructive — might be the single most practical thing I learned in this entire practice journey. Not a new technique. Not a clever method. Just the discipline to stop feeling the bad day and start reading it instead.