For about a year, I had a routine in my set that I considered unbreakable. I had performed it maybe fifty times. Corporate events, private shows, keynote presentations in Vienna and Graz and Salzburg. Every time, it landed. The audience reacted. The method was invisible. The conclusion hit hard. Fifty performances, fifty successes. A perfect record.
I was wrong about it being unbreakable. I was wrong because I had only been counting the things that went right.
There is a story from the history of science that illustrates this perfectly. For centuries, Europeans believed that all swans were white. This was not a wild guess — it was based on evidence. Every swan anyone had ever seen was white. Thousands of observations, spanning generations, all confirming the same conclusion: swans are white. The evidence was overwhelming.
Then someone went to Australia and saw a black swan.
One observation — a single data point — destroyed a belief built on thousands of confirmations. The philosopher Karl Popper used this example to make a fundamental point about knowledge: no number of confirming observations can prove a theory, but a single contradicting observation can disprove it. A thousand white swans prove nothing about the color of the next swan. One black swan proves everything.
This principle, which Popper called falsifiability, has direct and uncomfortable implications for how magicians evaluate their work. And it connects to a cognitive bias that Gustav Kuhn and Alice Pailhès discuss in their research on the psychology of magic: confirmation bias.
The Bias We All Carry
Confirmation bias is the tendency to search for, interpret, and remember information that confirms our existing beliefs while ignoring information that contradicts them. It is not a character flaw. It is a feature of human cognition, deeply wired into how our brains process information. We all do it, all the time, in every domain of life.
For magicians, confirmation bias operates like this: you perform a routine. It works. You count that as evidence that the routine is good. You perform it again. It works again. More evidence. You perform it a hundred times. It works ninety-eight of those times. You conclude, based on overwhelming evidence, that the routine is essentially foolproof.
But what happened during the two times it did not work? Did you analyze those failures with the same care you lavished on the successes? Did you write them down? Did you investigate what went wrong with the same rigor you applied to confirming what went right?
Almost certainly not. Because confirmation bias does not just make you seek confirming evidence. It makes you dismiss disconfirming evidence. The two failures were anomalies. Outliers. The spectator who was not fooled was “one of those people.” The audience that reacted poorly was “a tough crowd.” The failure was not about the routine — it was about the circumstances.
This is how confirmation bias works. It does not delete the contradicting evidence. It reframes it. It turns exceptions into noise instead of signal. And it does this automatically, beneath conscious awareness, so smoothly that you do not even realize it is happening.
Fifty Successes and the Fifty-First
Back to my “unbreakable” routine. Fifty successful performances. Then the fifty-first.
I was at a technology conference in Klagenfurt. A smaller event, maybe thirty people, an after-dinner entertainment slot. The audience was engaged, the room was good, the sound was fine. I performed the routine the same way I always performed it. Same script, same handling, same timing.
A man in the second row figured it out.
He did not say it during the performance. He waited until afterward, approached me privately, and described — with considerable accuracy — what I had done and when I had done it. He was not confrontational about it. He was genuinely curious. He worked in data security, he said, and his professional training had taught him to watch for specific patterns of information management. He had seen the pattern in my routine.
I thanked him, congratulated him on his observation, and went back to my hotel room to stare at the ceiling.
The curse of knowledge — which I discussed in my last post — meant I could not accurately evaluate my own routine from the audience’s perspective. But confirmation bias had compounded the problem. For fifty performances, I had been counting white swans. Every success had reinforced my belief that the routine was airtight. And when I had encountered small warning signs — a spectator who seemed less amazed than others, a moment where someone’s expression flickered with recognition — I had dismissed them. Noise. Anomalies. Tough crowd.
The man in Klagenfurt was my black swan. One observation that told me more than fifty successes ever could.
Why Successes Are Uninformative
This is the counterintuitive principle that Popper identified and that most performers never internalize: success is uninformative. A successful performance tells you almost nothing about the strength of your routine. It tells you that the routine worked this time, with this audience, under these conditions. It does not tell you why it worked. It does not tell you whether it would work with a different audience, under different conditions, or against a spectator with different knowledge.
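For readers who like numbers, there is a statistical version of this point (my own illustration, not something from Popper or Kuhn): treat each performance as an independent trial with some unknown per-performance probability of failure. A clean run of n successes does not show that probability is zero; it only rules out failure rates so high that a clean run would have been unlikely. The largest failure rate still consistent with n straight successes at 95% confidence is the p satisfying (1 − p)^n = 0.05, roughly 3/n for large n (the so-called "rule of three"). A short sketch:

```python
import math

def failure_rate_upper_bound(n_successes: int, confidence: float = 0.95) -> float:
    """Largest per-trial failure probability p still consistent with a
    clean run of n successes: solve (1 - p)**n = 1 - confidence for p."""
    if n_successes < 1:
        raise ValueError("need at least one trial")
    return 1.0 - (1.0 - confidence) ** (1.0 / n_successes)

# How much does a clean run actually tell you?
for n in (10, 50, 500):
    bound = failure_rate_upper_bound(n)
    print(f"{n:>3} clean performances: failure rate could still be {bound:.1%}")
```

Fifty flawless performances are still statistically consistent with a routine that fails roughly one time in seventeen. The successes narrow the possibilities slowly; a single observed failure localizes the weakness immediately.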
The reason is logical, not emotional. When a routine succeeds, you do not know which elements were essential and which were coincidental. Was it your timing that made the method invisible, or was the audience simply not paying attention at that moment? Was it the structure of the routine that created the impact, or was it the wine they had consumed? Was the method truly deceptive, or were the spectators simply not the kind of people who try to figure things out?
You cannot answer these questions from successes alone. You can only answer them from failures. When a routine fails, you learn exactly which conditions were missing. You learn what the routine needs to work — because you can see what it looks like when those conditions are not present.
Kuhn writes about this directly in the context of scientific research on magic. He describes how his most productive research moments were not when experiments confirmed hypotheses but when they contradicted them. A single experiment that actively sought evidence against a theory, he argues, gave him a better understanding of the phenomenon than years of confirming observations.
The Confirmation Trap in Practice
Confirmation bias does not only operate during performances. It operates during practice.
When you practice a routine in your hotel room or home office, you are testing it against yourself. And because of the curse of knowledge, you already know what to expect. You know the method. You know the timing. You know the sequence. So when you run through the routine and everything looks clean, you count that as a confirmation: the routine works.
But what are you actually testing? You are testing whether the routine looks clean to someone who already knows the secret. This is not the same as testing whether the routine looks clean to someone who does not know the secret. Your practice confirms your belief that the routine is good, but the confirmation is contaminated by your knowledge. You are finding white swans in a pond that only contains white swans.
I fell into this trap repeatedly during my first years of practicing. I would run through a routine dozens of times, filming each run, watching the footage, confirming that the method was invisible. But I was confirming it through my own eyes — the eyes of someone who knew exactly where to look and what to look for. The fact that I could not see the method on video did not prove the method was invisible. It proved that the method was invisible to me. These are different things.
How to Break the Cycle
Breaking confirmation bias requires deliberate, uncomfortable effort. It requires actively seeking the evidence that your routine is flawed — not because you want to find flaws, but because the flaws are where the information lives.
The first strategy is to seek disconfirmation instead of confirmation. When you evaluate a routine, do not ask “did it work?” Ask “why might it fail?” Do not look for evidence that supports your belief. Look for evidence that contradicts it. This feels unnatural because confirmation bias makes the confirming evidence feel relevant and the disconfirming evidence feel like noise. You have to override that feeling with discipline.
The second strategy is to treat failures as data, not anomalies. When a routine fails — when a spectator catches the method, when the reaction is weak, when the timing is off — resist the urge to explain it away. Do not say “tough crowd.” Do not say “wrong audience.” Do not say “they were just one of those people.” Instead, ask: what does this failure tell me about the routine? What condition was missing? What assumption was wrong?
The third strategy is to test against hostile conditions. Do not only perform for friendly, receptive audiences. Perform for skeptics. Perform for analysts. Perform for people who work in fields that train careful observation — security, science, engineering. If the routine survives hostile conditions, you have genuinely informative evidence. If it fails, you have even more informative evidence.
The fourth strategy — and this is the one I have found most productive — is to test with laypeople who have no social pressure to be polite. The problem with post-show feedback is that most people are kind. They will tell you they enjoyed it even if they figured it out, because they do not want to embarrass you. Anonymous feedback, structured surveys, or honest conversations with people who have no reason to protect your ego — these provide the disconfirming evidence that polite post-show comments never will.
The Uncomfortable Truth
The uncomfortable truth about confirmation bias is that it feels like wisdom. When you have fifty successful performances, it feels wise to conclude that your routine is strong. It feels responsible. It feels like evidence-based thinking. And it is, in a way — but the evidence base is systematically biased. You have been counting white swans and ignoring the possibility of black ones.
The man in Klagenfurt taught me something I could not have learned from any number of successful performances: my routine had a specific vulnerability to a specific type of observer. This was not a random failure. It was a systematic weakness that had been present in every single performance but had never been triggered — until it was.
After that evening, I rebuilt the routine. Not from scratch, but with targeted modifications based on the specific information the failure had provided. I changed the timing of one element. I restructured the sequence so the critical moment fell at a different point in the audience’s attention cycle. I tested the rebuilt version against the most analytically minded people I could find.
The rebuilt routine was stronger. Not because I added more to it. Because I subtracted the vulnerability that fifty successes had hidden and one failure had revealed.
That is the lesson. You do not prove a routine by performing it successfully a thousand times. You improve a routine by finding the one time it fails and understanding why. The white swans tell you the routine works. The black swan tells you how it works. And “how” is the only question that matters if you want to make it better.
Stop counting your successes. Start studying your failures. That is where the information lives.