Tic-Tac-Toe is what game theorists call a solved game. With perfect play from both sides, the outcome is always a draw. There is no first-move advantage strong enough to force a win. There is no second-move trick that beats first-move best play. If both players know what they are doing, every game ends the same way: nobody wins.

On our site, the Hard difficulty setting plays perfectly. You cannot win against it. You can only draw. This is not because we made it cheat or look ahead infinitely or use machine learning. It is because we implemented an algorithm called minimax, and minimax on a game tree this small returns the optimal move in under 1 millisecond.

How minimax works

The algorithm assumes two things. First, the AI wants to maximize its score (win = +10, draw = 0, loss = -10). Second, the human plays to minimize the AI score — they play their best, not randomly. Given any board state, the algorithm asks: if I make this move, and my opponent plays their best response, and then I play my best response to that, and so on until the game ends, what is the outcome?

It does this by recursing through every possible game continuation. From an empty board, that is 9 first-move choices, each with 8 possible responses, each with 7 after that, for at most 9! = 362,880 complete move sequences. Sounds like a lot, but most branches end early because someone wins or the board fills: only 255,168 distinct games exist, and alpha-beta pruning can cut the positions actually evaluated to the low tens of thousands. A modern CPU handles it in about a millisecond.
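That early termination is easy to verify: walk every possible game, stop a branch the moment someone wins or the board fills, and count the leaves. A quick illustrative script (not our site's code; names and board representation are assumptions):

```javascript
// Board: 9-element array; 'X' and 'O' are marks, null is an empty cell.
const LINES = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8],   // rows
  [0, 3, 6], [1, 4, 7], [2, 5, 8],   // columns
  [0, 4, 8], [2, 4, 6],              // diagonals
];

function hasWinner(board) {
  return LINES.some(([a, b, c]) =>
    board[a] !== null && board[a] === board[b] && board[a] === board[c]);
}

function countGames(board, player) {
  // A branch counts as one finished game once someone wins or the board is full.
  if (hasWinner(board) || !board.includes(null)) return 1;
  let total = 0;
  for (let i = 0; i < 9; i++) {
    if (board[i] === null) {
      board[i] = player;                                    // make the move
      total += countGames(board, player === 'X' ? 'O' : 'X');
      board[i] = null;                                      // undo it
    }
  }
  return total;
}

console.log(countGames(Array(9).fill(null), 'X')); // → 255168
```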

The naive code

function minimax(board, isAI) {
  // Terminal states, scored from the AI's point of view.
  if (aiWins(board))    return 10;
  if (humanWins(board)) return -10;
  if (isDraw(board))    return 0;
  // Score every legal continuation, with the other player to move.
  let scores = [];
  for (let move of validMoves(board)) {
    let next = applyMove(board, move, isAI);
    scores.push(minimax(next, !isAI));
  }
  // The AI takes the best score; the human takes the worst (for the AI).
  return isAI ? Math.max(...scores) : Math.min(...scores);
}

That is the heart of every perfect Tic-Tac-Toe AI ever written. Sixty lines including the board representation, win detection, and move generation. No training data. No neural network. No tuning. It just plays perfectly.
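The snippet leans on helpers the article elides, and it returns a score rather than a move. To see it run end to end, here is a self-contained sketch: the helper names match the ones above, but their bodies are illustrative, not our production code (board as a 9-element array, 'X' for the AI, 'O' for the human), plus the driver that actually picks the AI's move.

```javascript
const LINES = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8],
  [0, 3, 6], [1, 4, 7], [2, 5, 8],
  [0, 4, 8], [2, 4, 6],
];

const hasLine = (board, p) =>
  LINES.some(([a, b, c]) => board[a] === p && board[b] === p && board[c] === p);

const aiWins     = board => hasLine(board, 'X');
const humanWins  = board => hasLine(board, 'O');
const isDraw     = board => !board.includes(null);
const validMoves = board => board.flatMap((cell, i) => (cell === null ? [i] : []));
const applyMove  = (board, move, isAI) => {
  const next = board.slice();
  next[move] = isAI ? 'X' : 'O';
  return next;
};

function minimax(board, isAI) {
  if (aiWins(board))    return 10;
  if (humanWins(board)) return -10;
  if (isDraw(board))    return 0;
  const scores = validMoves(board).map(move =>
    minimax(applyMove(board, move, isAI), !isAI));
  return isAI ? Math.max(...scores) : Math.min(...scores);
}

// The driver: score every legal AI move and keep the best one.
function bestMove(board) {
  let best = null, bestScore = -Infinity;
  for (const move of validMoves(board)) {
    const score = minimax(applyMove(board, move, true), false);
    if (score > bestScore) { bestScore = score; best = move; }
  }
  return best;
}

// X can win immediately at cell 2, so that is what the driver picks.
console.log(bestMove(['X', 'X', null, 'O', 'O', null, null, null, null])); // → 2
```

One quirk worth knowing: with these scores, minimax is indifferent between a win now and a win three moves from now. That never costs it the game, which is why some implementations subtract the search depth from the score purely to make the AI finish quickly.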

What about Easy and Medium?

Easy plays randomly. Medium uses a heuristic: win if you can, block if the opponent can win, otherwise prefer center > corners > edges. This beats random play but loses to a careful human about 30% of the time. Easy and Medium are not hobbled-Hard. They are entirely different algorithms, written specifically to feel beatable.
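The Medium rules translate almost line for line into code. A sketch under the same assumptions as before (9-element array board, 'X' for the AI, 'O' for the human; names are illustrative, not our production code):

```javascript
const LINES = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8],
  [0, 3, 6], [1, 4, 7], [2, 5, 8],
  [0, 4, 8], [2, 4, 6],
];

// Return a cell that completes a line for `player`, or null if none exists.
function winningMove(board, player) {
  for (const line of LINES) {
    const marks = line.map(i => board[i]);
    if (marks.filter(m => m === player).length === 2 && marks.includes(null)) {
      return line[marks.indexOf(null)];
    }
  }
  return null;
}

function mediumMove(board) {
  // 1. Win if we can.
  const win = winningMove(board, 'X');
  if (win !== null) return win;
  // 2. Block if the human can win.
  const block = winningMove(board, 'O');
  if (block !== null) return block;
  // 3. Otherwise prefer center > corners > edges.
  for (const cell of [4, 0, 2, 6, 8, 1, 3, 5, 7]) {
    if (board[cell] === null) return cell;
  }
  return null; // board is full
}

console.log(mediumMove(Array(9).fill(null))); // → 4 (takes the center)
```

Rule 3 is exactly where a careful human beats it: the fixed center/corner/edge preference never looks ahead, so forks slip right past it.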

The interesting question

If perfect play is so easy, why is Connect Four, a game with the same kind of structure, so much harder for computers? Connect Four was solved in 1988, independently by two researchers, and it took serious effort and computing resources, because the game has roughly 4.5 trillion legal positions. Our Connect Four AI on the same site uses heuristics, not full minimax, because exhaustive minimax on Connect Four would lock up your browser tab.

What this teaches

AI is not magic. For small problems with clear rules, brute force is the right answer. For larger problems, you need approximation — heuristics, neural networks, Monte Carlo tree search. The skill is recognizing which kind of problem you are holding.