MasterMind double feature: CEC and GECCO

Besides the publication at EvoStar, this year we have managed to publish in the other two big-ticket EC conferences of the year: CEC in Cancún and GECCO in Amsterdam. The first one on the calendar was CEC, where we presented
A search for scalable evolutionary solutions to the game of MasterMind
as a poster, which I tried to make as awesome as possible by using Velcro, magnetic strips and a whiteboard for a dynamic experience. It was quite successful indeed, and I expect the message came through well. Here’s the poster, although just the static part of it:


In this poster we attempt to create some guidelines for finding solutions to the game of MasterMind using evolutionary algorithms. We fix the maximum size of the consistent set and use it throughout all problem sizes, although what we discover is that the best set size differs depending on the scoring method used: Entropy or Most Parts. This paper continues the one presented at LION, where we studied only the set size needed by Most Parts, and which in turn extended the one presented at GAMEON.
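In case the jargon is confusing: a combination is consistent if it could still be the secret, given the responses obtained so far. Here is a minimal Python sketch of that idea (illustrative names and data types, not the actual code used for the paper):

```python
from collections import Counter

def response(secret, guess):
    """Return (blacks, whites): blacks are pegs with the right colour in the
    right place, whites are right colours in the wrong place."""
    blacks = sum(s == g for s, g in zip(secret, guess))
    common = sum((Counter(secret) & Counter(guess)).values())  # colours in common, with multiplicity
    return blacks, common - blacks

def consistent_set(candidates, history):
    """Keep only the combinations that would have produced every response seen so far."""
    return [c for c in candidates
            if all(response(c, guess) == resp for guess, resp in history)]
```

Scoring methods such as Entropy or Most Parts then rank candidate guesses by how they split this consistent set.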

The paper presented at GECCO, which was accepted as a full paper, goes in a different direction and is called Improving evolutionary solutions to the game of MasterMind using an entropy-based scoring method. In this case only Entropy is used for scoring (which means that combinations that partition the set of consistent combinations with maximum entropy are played and sought in the evolutionary search), and the objective was to find fast and efficient solutions so that bigger search spaces could be explored. That was pretty much achieved: it is the fastest solution so far (though still slower, mainly due to the nature of Perl, than the solution by Berghman et al.) and gets better results than those published before by ourselves and others, so all in all we are quite happy about it. In this case I used impress.js for this awesome presentation (which took me quite a while indeed).
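To make the scoring idea concrete, this is roughly what the entropy score looks like, reusing the illustrative response helper sketched above (again, a Python sketch, not the Perl code used for the paper): every possible response to a candidate defines one partition of the consistent set, and candidates with the highest Shannon entropy over those partition sizes are preferred.

```python
import math
from collections import Counter

def entropy_score(candidate, consistent):
    """Shannon entropy of the partition that the candidate induces on the
    current set of consistent combinations."""
    partition = Counter(response(c, candidate) for c in consistent)
    total = len(consistent)
    return -sum((n / total) * math.log2(n / total) for n in partition.values())
```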

Is entropy good for solving the game of MasterMind?

Well, it is. In another paper published at the EvoStar conference, we compare several methods for measuring how good a combination is with respect to the others that could still be the solution. So far we had mostly used Most Parts (counting the number of non-zero partitions), but in this paper, which compares our previous evolutionary method with another created by the coauthors, Maestro-Montojo and Salcedo-Sanz, we find that Entropy, at least for these sizes, is the way to go (a sketch of the Most Parts score follows below). Here’s the poster


You can access the paper Comparing Evolutionary Algorithms to Solve the Game of MasterMind, by Javier Maestro-Montojo, Juan Julián Merelo and Sancho Salcedo-Sanz (first and last authors from the University of Alcalá de Henares) online or request a copy from the authors.
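For comparison with the entropy sketch above, the Most Parts score mentioned here simply counts the non-zero partitions a candidate induces; roughly (same caveats: an illustrative Python sketch built on the hypothetical response helper, not the papers’ code):

```python
def most_parts_score(candidate, consistent):
    """Most Parts: number of distinct (hence non-empty) responses the
    candidate can elicit from the current consistent set."""
    return len({response(c, candidate) for c in consistent})
```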

Finding an evolutionary solution to the game of Mastermind with good scaling behavior

As important as finding a solution to the game of MasterMind that is better than anyone else’s is finding one that can be applied to a wide range of sizes. In this paper we get rid of a parameter: the limit on the size of the consistent set we use for scoring every combination. This makes for a faster algorithm, and one that is not always worse than using the optimal consistent set size.

This was the paper presented at LION by Antonio Fernández, using this presentation.

Why Mastermind?

Since nobody is reviewing this, I’m not going to give a long answer to this question, which was posed by one of the reviewers of a paper I submitted ages ago, back in the nineties, proposing evolutionary algorithm solutions to the game. The short answer, as happens with almost everything in science, is: because it’s there. However, after this empirical study of exhaustive solutions, the long answer is a bit closer.
In this paper, which is a draft of something we intend to submit to a journal in the near future, we describe something that has been disregarded in Mastermind solutions: the pure chance of drawing the correct solution in the opening moves. In fact, this chance dominates in the first moves, until the search space is reduced to a single solution, which is usually the intention of most empirical solutions to the game.
This means that a method able to increase that chance should be able to beat traditional solutions. In fact, it does not quite do that, but it is consistently as good as the best solution for each size (see the quick sketch of the lucky-guess probability below).
It is a rather longish paper (and might become even more so before submission), but you might learn a thing or two about Mastermind. Besides, it is intended as a base for future papers that will apply our usual techniques: evolutionary algorithms.
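As a quick back-of-the-envelope illustration of that lucky-guess chance (sizes chosen purely for the example, using the classic 6-colour, 4-peg game):

```python
def lucky_guess_probability(remaining):
    """Chance that a combination drawn uniformly from the remaining consistent
    combinations happens to be the secret."""
    return 1 / remaining

colors, pegs = 6, 4                             # classic Mastermind, just for illustration
print(lucky_guess_probability(colors ** pegs))  # blind first move: 1/1296, about 0.08%
print(lucky_guess_probability(5))               # once only 5 candidates remain: 20%
```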

The bestest MasterMind Algorithm ever

Well, by now you must be a bit tired of Mastermind papers, but we are not, since we are obtaining the best results ever. After introducing endgames to streamline the end of the search, we have tweaked the evolutionary algorithm, adding a permutation operator, for instance (sketched below), to reduce the number of evaluations needed to find the solution. The results are the best yet, but, of course, there’s more to come in the future.
This paper was presented at CEC 2011 in the games session, and raised quite a bit of interest. The paper will be available from IEEE Xplore soon, but you can request copies now if you want.
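One plausible shape for such a permutation operator, sketched in Python (the operator in the paper may differ in its details), is to swap two pegs of a combination, so the colours stay the same and only their placement changes:

```python
import random

def permutation_operator(combination):
    """Swap two randomly chosen positions; the multiset of colours is
    preserved, only their placement changes."""
    offspring = list(combination)
    i, j = random.sample(range(len(offspring)), 2)
    offspring[i], offspring[j] = offspring[j], offspring[i]
    return tuple(offspring)
```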

Going a bit farther (and a bit faster) solving MasterMind using evolutionary algorithms

We left MasterMind last year in a good state using estimation of distribution algorithms; however, if we want to find solutions for higher dimensions (more colors, more pegs) we have to cut down the number of evaluations. In this case we use something we call endgames: in the same way that chess-playing programs use a database of endgames to finish a game in a straightforward way, in MasterMind we can recognize a few situations in which the search space shrinks drastically and it’s better to either change the strategy or just change the search space. When we know the colors (that is, we obtain as many whites + blacks as the length of the combination), the best option is to revert to exhaustive search over the remaining combination space; when the answer is 0 whites/blacks, we can exclude those colors from the search space and restart, maybe with a smaller population.
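Roughly, the two endgame rules look like this (an illustrative Python sketch with assumed data types, not the paper’s actual code):

```python
from itertools import permutations, product

def endgame_search_space(colors, length, last_guess, blacks, whites):
    """Shrink the search space in the two endgame situations described above."""
    if blacks + whites == length:
        # Every colour of the secret appears in the last guess, so only its
        # rearrangements can still be the solution.
        return set(permutations(last_guess))
    if blacks + whites == 0:
        # None of the guessed colours appear in the secret: drop them.
        remaining = [c for c in colors if c not in last_guess]
        return set(product(remaining, repeat=length))
    # No endgame detected: keep the full combination space.
    return set(product(colors, repeat=length))
```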
This is what we do in the paper Improving and Scaling Evolutionary Approaches to the MasterMind Problem, which was presented a short time ago at the EvoGames workshop in Torino.
During the presentation, Carlos Cotta and Carlos Fernandes played the game shown above.
Here’s the presentation, which you can download freely. Picture credits are included in the notes.

EDAs for solving Mastermind at NICSO

The NICSO 2010 talk is taking place as I type this, quite undercrowded due to flight problems, but it’s one of the places where we are presenting new work on Mastermind. In this case we have presented the paper Adapting Heuristic Mastermind Strategies to Evolutionary Algorithms, by Tom Runarsson and myself. This paper actually predates the work we presented at EvoStar, and it is where we describe the results we found about the size of the subset of consistent combinations you have to use. OK, I lost you there, but you can watch the presentation

or download it from here. Free license, as usual, and check the notes for the attribution of the pictures.