(NOTE: This article is part of an ongoing series documenting an experiment in using AI to fill out NCAA tournament brackets and see how it fares against years of human experience. The original article is as follows.)
This is the final entry in my series on using AI to help play March Madness pools. Like most stories, I had hoped this one would have a happy ending. Alas, my experiment using ChatGPT to help fill out my NCAA tournament brackets is best summed up as close, but no cigar.
And yet, I would still call the experiment a success.
That may sound odd coming from someone who did not win. But one of the biggest lessons from this exercise is that AI improved my process more than it improved my certainty. In other words, it helped me think better, even if it could not eliminate the madness.
Last week, I was thrilled to have gotten 13 of the Sweet 16 teams right. My brackets were hovering near the top of the standings, and I was starting to think I might actually pull this off. Then the classic chaos of March arrived.
In a pool of 65 entries, I am still near the top, tied for second with one bracket and tied for sixth with another, which is hardly a disaster. I had Arizona and Michigan correctly advancing on one side of the bracket, but I completely missed on the other. I had projected Duke and Florida meeting in the semifinals, with Duke ultimately winning it all. There was a certain karmic justice in Duke ending up on the receiving end of a Laettner-style Hail Mary, but it also ended my chances of winning.
Still, going into the Elite Eight, my brackets were in the 98th percentile out of 26 million entries on ESPN. I can’t honestly say I would have been there without AI’s help. And more importantly, I came away with a set of lessons I’ll use next year — because yes, I’m doing this again.
Better process, same madness
The central takeaway is simple.
AI did not produce a miracle, but it did produce a better process.
Instead of filling out a bracket based on vague intuition, recent highlights, or whatever team happened to look unbeatable on a Saturday afternoon, I had a more structured way to think about the field. AI helped me organize the decision, compare likely outcomes against higher-upside contrarian choices, and surface some of the variables that matter most in tournament play.
That framework worked. It identified many of the strongest teams correctly. It kept me from making some of the usual lazy mistakes. It pushed me toward a more disciplined, less emotional bracket.

What it did not do was repeal the laws of single-elimination basketball.
That is an important distinction, and one that applies well beyond sports. AI can improve judgment. It cannot remove volatility.
Put more weight on late-season momentum
One of the clearest lessons from this tournament is that I did not give enough credit to teams that were getting hot at the right time.
Where did Illinois and Iowa come from?
Yes, both were good teams in what was clearly the strongest conference in the country this year. But I did not see them taking out a No. 1 seed in Florida and a No. 2 seed in Houston. They were peaking late, and I did not weight that heavily enough.
Next year, I will pay closer attention to who is actually playing their best basketball in March, rather than leaning too heavily on full-season metrics. A season-long résumé still matters, of course. But in a tournament like this, form can matter almost as much as underlying quality.
In business terms, it is the difference between evaluating a company on twelve months of results and recognizing that something meaningful has changed in the last six weeks.
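For readers who like to see the mechanics, here is a rough sketch of what "weighting late-season form" could look like in practice. It is my own toy illustration, not something ChatGPT produced for me: the function name, the half-life, and the sample margins are all made up. It simply averages a team's point margins while letting recent games count more than early-season ones.

```python
# Toy illustration (assumptions mine): an exponentially decaying average of
# point margins, so March form counts more than November form.

def recency_weighted_margin(margins_newest_first: list[float],
                            half_life_games: float = 8.0) -> float:
    """Average point margin where each older game counts progressively less."""
    decay = 0.5 ** (1.0 / half_life_games)   # weight halves every `half_life_games` games
    weights = [decay ** age for age in range(len(margins_newest_first))]
    weighted = sum(w * m for w, m in zip(weights, margins_newest_first))
    return weighted / sum(weights)

# Hypothetical team that limped through midseason but finished hot (newest game first).
margins = [12, 9, 15, 3, -2, 1, -6, -4, 2, -8]
print(round(recency_weighted_margin(margins), 1))   # ~3.7, pulled up by the late surge
print(round(sum(margins) / len(margins), 1))        # 2.2, the plain season average
```

The exact half-life is arbitrary; the point is only that a hot finish lifts the recency-weighted number well above the plain season average, which is the signal I ignored this year.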
Put more weight on coaches, not just players
I also came away convinced that I underweighted coaching.
Yes, the players are the ones on the floor. But coaches matter enormously in March, especially in a one-and-done format where preparation, adjustments, substitutions, and composure can swing an entire season.
Dan Hurley reminded everyone, once again, why he is such a force in this environment. Jon Scheyer? Not so much.
Next year, I will spend more time looking at which coaches have consistently shown they can navigate the chaos of tournament basketball. Talent is still the foundation. But coaching is often the force multiplier.
Accept the limits of forecasting
This may be the biggest lesson of all.
Forecasting — even when aided by AI — is good at identifying broad patterns. It is much less reliable when it comes to predicting exactly what a particular person, or a particular team, will do on a specific day.
A college basketball team puts just five teenagers on the floor at once. Very talented teenagers, yes, but still teenagers. And anyone who has spent time around young people knows they have ups and downs, mood swings, great days, bad days, and moments when everything suddenly goes sideways. Sometimes those swings happen in the middle of a tournament game.
If these matchups were best-of-five or best-of-seven series, there would be fewer upsets. But in a one-and-done environment, it is much easier for Cinderella to have the last dance.
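A quick back-of-the-envelope calculation shows why. Suppose the better team wins any single game with a fixed 60 percent probability (a made-up number, purely for illustration). The sketch below computes how often that team survives a single game versus a longer series.

```python
# Toy illustration: longer series leave less room for upsets.
from math import comb

def series_win_prob(p: float, games: int) -> float:
    """Probability a team with per-game win probability p takes a best-of-`games` series."""
    needed = games // 2 + 1  # wins required to clinch the series
    # Equivalent to winning at least `needed` of `games` games if all were played.
    return sum(comb(games, k) * p**k * (1 - p)**(games - k)
               for k in range(needed, games + 1))

for n in (1, 5, 7):
    print(f"best of {n}: {series_win_prob(0.60, n):.3f}")
# best of 1: 0.600, best of 5: 0.683, best of 7: 0.710
```

The favorite's edge grows with every extra game, which is exactly why Cinderella thrives in a one-and-done format.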
That is not a failure of AI. It is just a reminder that some environments are inherently noisy. The tournament is designed to turn small edges into dramatic outcomes. That is why we watch.
In the real world, AI is often more useful than it is in a bracket pool
Bracket pools are a particularly unforgiving test.
Here, I had to be right about whether Connecticut would beat Duke. There was no partial credit for identifying both as excellent teams. It was purely binary: win or lose, right or wrong.
In the real world, many of the decisions where I use AI do not work like that.
Years ago, one of my professors said that the harder a choice is, the less the decision usually matters. There is a lot of wisdom in that. If I ask you to choose between an ancient Yugo and a Porsche Macan, you will decide instantly. And if you somehow choose the Yugo, you will regret it for the rest of your life. But if I ask you to choose between a Porsche Macan and a BMW X3, suddenly you have a real decision. You might compare reliability, comfort, specs, and performance. But odds are you will still wind up with an excellent car.
That is how AI is useful in many real-world settings. It may not always identify the single best option in hindsight, but it can often narrow the field to several very strong ones. That is still extremely valuable.
The same goes for investing, planning, and research. AI can help identify promising paths, likely outcomes, and sensible options. Will it always pick the single best one? Of course not. But it can keep you out of obvious mistakes and help you make a better-informed choice.
Suggestions, not decisions
That, to me, is the healthiest way to think about AI.
Recently, we visited Lima, Peru, and I used ChatGPT extensively to help decide what to see and where to eat. Were the places we visited the absolute best ten options in the city? I have no idea. But were we happy with the trip? Absolutely. Do I have any lingering sense of missing out? None at all.
That is what good AI assistance looks like.
It helps sort through overwhelming amounts of information and presents strong options. The quality of those options depends heavily on the quality of the prompting. The more clearly you explain your interests, constraints, budget, and preferences, the better the suggestions become.
But they are still suggestions.
We are nowhere near the point where anyone should hand responsibility for their life over to an AI model. Nor should we want to.
What I’ll do differently next year
Next year, I will put more weight on late-season momentum, more weight on coaching, and more weight on volatility. I will be less trusting of vulnerable favorites and more alert to the teams that look dangerous even if their seed says otherwise.
Just as importantly, I will go into the exercise with a better understanding of what AI can and cannot do.
It can improve the process. It can sharpen the analysis. It can help organize uncertainty.
What it cannot do is make March stop being March.
See you next year
So yes, March Madness claimed my AI experiment in the end.
But it also proved that the experiment was worth doing.
AI did not deliver a perfect bracket. It did not eliminate uncertainty. It did not make me a champion. What it did do was help me think more systematically, evaluate the field more intelligently, and perform far better than I likely would have with instinct alone.
That is a meaningful result.
So I’ll be back next year — with a slightly better framework, a little more humility, and the same respect for the fact that no model, however sophisticated, gets the final word in March.
If you’ve followed this series along the way, thanks for reading. And if you beat me in your pool without any help from AI at all, enjoy the victory lap while you can.
Next year, the machine and I are coming for revenge.

