We left MasterMind last year in a good state, using estimation of distribution algorithms; however, if we want to find solutions in higher dimensions (more colors, more pegs), we have to reduce the number of evaluations. Here we use something we call *endgames*: just as chess-playing programs use a database of endgames to finish a game in a straightforward way, in MasterMind we can recognize a few situations in which the search space shrinks drastically, and it is better either to change the strategy or to change the search space. When we know the colors (that is, when a guess scores as many whites plus blacks as the length of the combination), the best option is to revert to exhaustive search over the remaining combination space; when the answer is 0 whites and blacks, we can exclude those colors from the search space and restart, perhaps with a smaller population.
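The two endgame conditions above can be sketched in a few lines of Python. This is only an illustration of the idea, not the paper's actual implementation; the color set, combination length, and function names are all assumptions made for the example.

```python
import itertools

COLORS = "ABCDEF"  # hypothetical palette of six colors
LENGTH = 4         # hypothetical combination length (four pegs)

def endgame_search_space(responses):
    """Shrink the search space from a list of (guess, blacks, whites)
    responses, applying the two endgame rules described above."""
    colors = set(COLORS)
    known_colors = None
    for guess, blacks, whites in responses:
        if blacks + whites == 0:
            # Endgame 2: none of the guessed colors appear in the secret,
            # so drop them from the search space entirely.
            colors -= set(guess)
        elif blacks + whites == LENGTH:
            # Endgame 1: the multiset of colors is fully known; only
            # rearrangements of this guess remain as candidates.
            known_colors = guess
    if known_colors is not None:
        # Exhaustive search over permutations of the known colors.
        return sorted(set(map("".join, itertools.permutations(known_colors))))
    # Otherwise: all combinations over the reduced color set.
    return ["".join(c) for c in itertools.product(sorted(colors), repeat=LENGTH)]
```

For instance, a guess scoring 2 blacks and 2 whites on a length-4 combination leaves only the 24 permutations of that guess, while a guess scoring nothing removes its colors from every future candidate.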

This is what we do in the paper *Improving and Scaling Evolutionary Approaches to the MasterMind Problem*, which was presented recently at the EvoGames workshop in Torino.

During the presentation, Carlos Cotta and Carlos Fernandes played the game shown above.

Here’s the presentation, which you can download freely. Picture credits are included in the notes.
