I have a confession that will surprise no one who has ever worked in consulting: I used to track my practice sessions in a spreadsheet.
Not just any spreadsheet. A proper one. Date, start time, end time, total minutes, weekly running total, monthly aggregate. Color-coded. Conditional formatting. The works. Because that’s what you do when you’ve spent a decade in strategy consulting — you measure things. You track inputs. You build dashboards. And then you convince yourself the dashboard is telling you something meaningful.
For the first year and a half of my magic journey, that spreadsheet was my compass. If I logged twenty hours in a week, I felt good. If I only managed twelve, I felt behind. The number of hours was, in my mind, directly correlated with progress. More hours, more skill. Simple math. The same logic that governs billable hours in consulting ought to govern practice hours in magic.
Except it didn’t. And it took me an embarrassingly long time to figure out why.
The Billable Hours Illusion
In consulting, billable hours have a straightforward relationship with output. Ten hours of strategy work produces roughly twice as much deliverable work as five hours. Not exactly twice — some hours are more productive than others — but the relationship is broadly linear. You bill more, you produce more. The client gets more. Everyone understands this.
I carried this mental model directly into practice. Five hours of card work should produce roughly twice the improvement of two and a half hours. The input-output relationship should hold. Practice is work. Work scales with time. Therefore practice scales with time.
The “Art of Practice” framework shattered this assumption by drawing a distinction I’d never considered: naturals measure results, non-naturals measure time.
That hit me like a truck. Not because it was complicated — it’s almost embarrassingly simple — but because it described my behavior with uncomfortable precision.
Think about what happens when you measure time. You sit down with a deck of cards. You set a timer or glance at the clock. You work through your material. An hour passes. You feel a small sense of accomplishment — you practiced for an hour. You log it. The spreadsheet grows. The weekly total climbs. And you’ve told yourself a story about progress that may have no connection to reality.
Now think about what happens when you measure results. You sit down with a deck of cards. You attempt a specific technique. You track whether you can execute it cleanly. You notice whether your success rate improved from yesterday. You identify the specific moment where the technique breaks down. An hour might pass, or twenty minutes, or two hours — but the clock is irrelevant. What matters is whether anything actually changed.
These are fundamentally different orientations. And they produce fundamentally different outcomes.
The Night I Realized Five Hours Meant Nothing
The moment the illusion cracked was a Tuesday night in a hotel in Vienna. I’d had an unusually open evening — no client dinner, no emails to answer, nothing pulling me away. I practiced for nearly five hours straight. Five hours. I went to bed feeling like I’d done something extraordinary.
The next morning, I picked up the cards and ran through the same material. Nothing had changed. My success rate on the technique I’d been working on was identical to the day before. Five hours of work, zero measurable improvement.
At first, I blamed the technique. Maybe it was just hard. Maybe progress on this particular move was going to be slow regardless. But when I thought honestly about what those five hours actually looked like, a different picture emerged.
I’d spent the first hour warming up with material I already knew well. Comfortable, familiar routines that felt good to run through. Then I’d worked on the new technique for maybe forty-five minutes — genuine, focused work at the edge of my ability. Then I’d gotten frustrated with the lack of immediate progress and drifted back to familiar material. Then another burst of focused work, maybe twenty minutes. Then more comfortable repetition. Then some time just fiddling with cards while watching something on my laptop.
Five hours on the clock. Maybe sixty-five minutes of actual challenging work. The rest was maintenance dressed up as practice, padded with distraction that I’d somehow counted as productive time.
The spreadsheet said five hours. Reality said one hour, generously measured.
What Naturals Do Differently
The concept that reframed everything for me was this: people who develop skills efficiently — naturals — don’t ask “how long did I practice?” They ask “did I get better?”
That question — did I get better? — changes the entire practice dynamic. It forces you to define what “better” means before you start. It requires you to establish a baseline so you can compare. It demands honest assessment when you finish. And it makes time completely irrelevant.
Consider two practitioners. Practitioner A sits down and practices for two hours. When they finish, they note “practiced two hours” and feel satisfied. Practitioner B sits down and attempts a technique twenty times, noting their success rate. They succeed eleven times out of twenty. They adjust their approach, attempt twenty more. Thirteen out of twenty. They adjust again. Fifteen out of twenty. They stop. The whole thing took thirty-five minutes.
Practitioner A practiced more than three times as long. Practitioner B made measurably more progress. And Practitioner B knows exactly how much progress they made, while Practitioner A only knows how long they sat there.
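Practitioner B's session is simple enough to write down as data. Here's a minimal sketch of that log in Python — the structure is my own illustration, not a system described anywhere in the original:

```python
# Results-based practice log: each entry is (successes, attempts)
# for one block of twenty focused attempts at the technique.
blocks = [(11, 20), (13, 20), (15, 20)]

rates = [s / n for s, n in blocks]      # success rate per block
session_delta = rates[-1] - rates[0]    # net improvement this session

print([f"{r:.0%}" for r in rates])      # ['55%', '65%', '75%']
print(f"{session_delta:+.0%}")          # +20%
```

Note what the log contains: three numbers about outcomes and nothing about the clock. Practitioner A's log — "practiced two hours" — could not produce either line of that output.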
This isn’t hypothetical. This was literally me — I was Practitioner A for over a year, congratulating myself on long sessions while making inconsistent progress, baffled by why the hours weren’t translating.
The Consulting Parallel That Should Have Been Obvious
Here’s the part that embarrasses me: I already knew this principle. I’d been applying it professionally for years. I just hadn’t recognized it in a different context.
In strategy consulting, there’s a well-known distinction between activity metrics and outcome metrics. Activity metrics measure what you’re doing: hours worked, meetings held, slides produced. Outcome metrics measure what you’ve achieved: revenue impact, cost reduction, market share gained. Junior consultants obsess over activity metrics. Senior consultants focus on outcome metrics. The shift from one to the other is a fundamental mark of professional maturity.
I was a senior consultant who’d spent years coaching clients to focus on outcomes rather than activities. And then I’d go back to my hotel room and track my practice in hours. The irony is painful.
The moment I applied the same framework to practice that I applied to consulting, everything clicked. Stop measuring the activity. Start measuring the outcome. The activity is “time spent practicing.” The outcome is “measurable improvement in specific skills.”
Building a Results Dashboard
Once I made the shift, I needed a new system. The old spreadsheet tracked time. The new one needed to track results.
I started simple. For each technique I was working on, I defined a success criterion — what “correct execution” looked like. Then, at the start of each session, I’d do a baseline test: attempt the technique twenty times and record my success rate. At the end of the session, I’d do the same test. Progress was the delta between those two numbers.
Some sessions, the delta was significant. I’d start at fifty percent and end at sixty-five percent. Those were genuinely productive sessions regardless of whether they lasted twenty minutes or two hours.
Other sessions, the delta was zero. I’d start at fifty percent and end at fifty percent. Those sessions produced no measurable improvement regardless of how long they lasted. The time was irrelevant. The result was what mattered.
And sometimes — this was a revelation — the delta was negative. I’d start at fifty percent and end at forty percent. Fatigue, frustration, or bad habits creeping in had actually made me worse over the course of the session. Those were sessions I should have ended sooner. The spreadsheet would have told me they were productive because I logged the hours. The results told me they were counterproductive.
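The baseline-and-retest loop above reduces to a tiny rule: a session's verdict is the sign of its delta. A sketch of that rule, assuming the simple pass/fail success rates described here (the function name is mine; the original system was a spreadsheet, not code):

```python
def session_verdict(baseline_rate: float, final_rate: float) -> str:
    """Classify a practice session by its result delta, not its length."""
    delta = final_rate - baseline_rate
    if delta > 0:
        return f"productive (+{delta:.0%})"
    if delta < 0:
        return f"counterproductive ({delta:.0%})"
    return "no measurable change"

print(session_verdict(0.50, 0.65))  # productive (+15%)
print(session_verdict(0.50, 0.50))  # no measurable change
print(session_verdict(0.50, 0.40))  # counterproductive (-10%)
```

The third case is the one a time log can never surface: hours were logged, so the spreadsheet calls the session a success, while the delta calls it a reason to have stopped earlier.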
The Liberation of Short Sessions
The most unexpected benefit of results-based measurement was that it liberated me from the tyranny of long sessions.
When you measure time, a twenty-minute session feels like failure. Twenty minutes? That’s nothing. You need to put in the hours. The implied minimum for a “real” practice session is probably an hour, maybe more. Anything less feels like you’re not serious.
When you measure results, a twenty-minute session that produces a five-percentage-point improvement is a success. Full stop. The number on the clock is meaningless. What matters is the number on the results sheet.
This was transformative for my practice schedule. As a consultant traveling two hundred nights a year, my available practice windows were unpredictable and often short. Twenty minutes before a morning meeting. Thirty minutes after dinner before exhaustion set in. Fifteen minutes in an airport lounge. Under the old system, these windows felt too short to bother with. Under the new system, they were opportunities — and often surprisingly productive ones.
In fact, I discovered something counterintuitive: my short sessions frequently produced better results per minute than my long ones. A focused twenty-minute burst, with a clear target and full attention, often generated more improvement than the first twenty minutes of a two-hour session where I knew I had plenty of time and could afford to ease in slowly.
Constraints, it turns out, focus the mind. When you only have twenty minutes and you’re measuring results, you don’t waste time on warm-up material you’ve already mastered. You go straight to the technique that needs work. You concentrate completely. And you often make more progress than you would in a leisurely hour.
What I Track Now
My practice tracking has evolved significantly since those early spreadsheet days. Here’s what I actually record now.
For each technique: current success rate out of twenty attempts. That’s the baseline. Then, after focused work, the new success rate. The delta is the only number that matters.
For each session: what specific aspect of the technique I targeted. Not “practiced card work” but “worked on the timing of the transition between the second and third phase.” Specificity forces focus, and focus produces results.
For the week: which techniques showed improvement, which plateaued, which regressed. The plateaued ones need a new approach — same practice at the same difficulty won’t move them. The regressed ones need investigation — something is degrading the skill, and I need to find out what.
I still track time, but only as a secondary metric, and only to ensure I’m not overdoing it. Time now serves as a ceiling, not a floor. If a session hits forty-five minutes without producing results, that’s a signal to stop — not to push through for the sake of logging more hours.
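The weekly review described above — improved, plateaued, regressed — is just a trend check over each technique's success rates. A minimal sketch, with made-up technique names and an illustrative two-point tolerance band that is my assumption, not a threshold from the original:

```python
# Weekly review: classify each technique by its trend across the week's
# sessions. Names, rates, and the 2-point band are all illustrative.
week = {
    "pass_transition": [0.50, 0.55, 0.60],   # improving
    "top_change":      [0.70, 0.70, 0.70],   # plateaued: needs a new approach
    "one_hand_cut":    [0.80, 0.75, 0.72],   # regressing: investigate why
}

def trend(rates):
    """Compare the week's last success rate to its first."""
    delta = rates[-1] - rates[0]
    if delta > 0.02:
        return "improved"
    if delta < -0.02:
        return "regressed"
    return "plateaued"

review = {name: trend(rates) for name, rates in week.items()}
print(review)
```

Each verdict maps to an action: improved techniques keep their current approach, plateaued ones get a harder or different drill, regressed ones get investigated before any further repetition.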
The Mindset Shift Behind the Method
The deeper change isn’t about tracking systems. It’s about identity.
When you measure time, your identity as a practitioner is tied to effort. “I practiced five hours today” is a statement about how hard you worked. It’s virtuous. It signals dedication. It feels good to say.
When you measure results, your identity shifts from effort to effectiveness. “I improved my success rate by ten percentage points today” is a statement about what you achieved. It might have taken twenty minutes or three hours — that part doesn’t matter. What matters is the outcome.
This shift is uncomfortable at first because effort feels like something you control, while results feel uncertain. You can always choose to practice for five hours. You can’t always choose to improve by ten percentage points. Measuring results means confronting sessions where you tried hard and got nowhere. That’s a harder truth to face than “I didn’t practice enough.”
But it’s the truth that actually leads somewhere. Because once you’re measuring results, you start asking the productive questions: why didn’t that session produce improvement? Was the difficulty wrong? Was my attention scattered? Was I practicing the wrong aspect of the technique? These questions lead to adjustments. Adjustments lead to better practice. Better practice leads to actual progress.
Measuring time just leads to more time. And as I learned in that Vienna hotel room, more time with the wrong approach is just more time wasted.
The spreadsheet still exists. But the columns are different now. And so is everything they measure.