Going a bit farther (and a bit faster) solving MasterMind using evolutionary algorithms

We left MasterMind last year in a good state, using estimation of distribution algorithms; however, if we want to find solutions for higher dimensions (more colors, more pegs) we have to reduce the number of evaluations. In this case we use something we call endgames: just as chess-playing programs use endgame databases to finish a game in a straightforward way, in MasterMind we can recognize a few situations in which the search space shrinks drastically and it is better to either change the strategy or simply change the search space. When we know the colors (that is, when whites plus blacks add up to the length of the combination), the best option is to revert to exhaustive search over the reduced combination space, which is just the permutations of those colors; when the answer is 0 whites and 0 blacks, we can also exclude the guessed colors from the search space and restart, possibly with a smaller population. A small sketch of both endgame checks is shown below.
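To make this concrete, here is a minimal sketch in Python of how the two endgame checks could reduce the search space. This is not the code used in the paper; the function names and the tuple representation of combinations are illustrative assumptions.

```python
from itertools import permutations

def score(guess, secret):
    """Return (blacks, whites) for a guess against a secret combination."""
    blacks = sum(g == s for g, s in zip(guess, secret))
    # Colors in common, counted with multiplicity, minus exact matches.
    common = sum(min(guess.count(c), secret.count(c)) for c in set(guess))
    return blacks, common - blacks

def apply_endgame(guess, blacks, whites, colors, pegs):
    """Detect the two endgames and return a reduced search space, or None."""
    if blacks + whites == pegs:
        # All colors (with multiplicities) are known: the only remaining
        # candidates are the permutations of the last guess, so the solver
        # can switch to exhaustive search over this small set.
        return set(permutations(guess))
    if blacks + whites == 0:
        # None of the guessed colors appear in the secret: drop them from
        # the palette before restarting the (possibly smaller) population.
        return [c for c in colors if c not in guess]
    return None  # no endgame detected; keep the evolutionary search running

# Example: the guess has all four colors right but none in place.
secret = ('A', 'B', 'B', 'C')
guess = ('B', 'C', 'A', 'B')
b, w = score(guess, secret)                       # (0, 4)
print(apply_endgame(guess, b, w, colors=list('ABCDEF'), pegs=4))
```

In the first endgame the returned set of permutations is tiny compared to the full combination space, so enumerating it is cheaper than running further generations of the evolutionary algorithm.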
This is what we do in the paper Improving and Scaling Evolutionary Approaches to the MasterMind Problem, which was presented recently at the EvoGames workshop in Torino.
[Photo: a game of MasterMind played during the presentation]
During the presentation, Carlos Cotta and Carlos Fernandes played the game shown above.
Here’s the presentation, which you are free to download. Picture credits are included in the notes.
