Finding a solution to the game of Mastermind that is better than everyone else's is important, but so is finding one that can be applied to a wide range of problem sizes. In this paper we get rid of a parameter: the size limit of the consistent set used for scoring every combination. The result is a faster algorithm, and one that is not always worse than using the optimal consistent set size.
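For readers unfamiliar with the idea, here is a minimal Python sketch of what scoring against a consistent set might look like. All names here are hypothetical illustrations, not the paper's actual implementation: a combination is scored by how many distinct responses it would split the current consistent set into (a "most parts"-style score), and the parameter the paper removes would be a cap on how many consistent combinations are kept for this scoring.

```python
from itertools import product
from collections import Counter

def pegs(guess, secret):
    """Return the (black, white) peg response for a guess against a secret."""
    black = sum(g == s for g, s in zip(guess, secret))
    # Whites: colors in common (with multiplicity) minus exact matches.
    common = sum((Counter(guess) & Counter(secret)).values())
    return black, common - black

def consistent_set(history, colors, length):
    """All combinations consistent with every (guess, response) played so far."""
    return [c for c in product(colors, repeat=length)
            if all(pegs(g, c) == r for g, r in history)]

def score(candidate, consistent):
    """Most-parts-style score: number of distinct responses the candidate
    induces on the consistent set (more partitions = more informative)."""
    return len({pegs(candidate, c) for c in consistent})
```

For the standard game you would call `consistent_set(history, 'ABCDEF', 4)` after each response and pick the candidate with the highest `score`; a size limit on that set is exactly the kind of extra parameter the paper tries to do without.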
Since nobody is reviewing this, I’m not giving a long answer to this question, which was posed by one of the reviewers of a paper I submitted ages ago, back in the nineties, proposing solutions to the game using evolutionary algorithms. The short answer, as happens with almost everything in science, is: because it’s there. After this empirical study of exhaustive solutions, however, the long answer is a bit closer.
In this paper, which is a draft of something we intend to submit to a journal in the near future, we describe something that has been disregarded in Mastermind solutions: the pure chance of drawing the correct solution in the opening moves. In fact, this chance dominates the first moves, until the search space is reduced to a single solution, which is usually the aim of most empirical approaches to the game.
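The arithmetic behind that observation can be sketched in a few lines. Assuming the standard 6-color, 4-peg game, and assuming guesses are drawn uniformly from the current consistent set, the chance of hitting the secret on a given move is simply one over the size of that set; the shrinking set sizes below are made-up example values for illustration, not the paper's measured data.

```python
# Back-of-the-envelope illustration for the standard 6-color, 4-peg game.
space = 6 ** 4  # 1296 candidate combinations in total

def draw_chance(consistent_size):
    """Chance that a guess drawn uniformly from the consistent set is the secret."""
    return 1.0 / consistent_size

# Hypothetical sizes of the consistent set after each move:
for move, size in enumerate([space, 256, 40, 7, 1], start=1):
    print(f"move {move}: P(draw secret) = 1/{size} = {draw_chance(size):.4f}")
```

The point is that this probability is non-negligible well before the set collapses to a single combination, which is why chance dominates the opening moves.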
This means that a method able to increase that chance should be able to beat traditional solutions. In fact, ours does not beat them, but it is consistently as good as the best solution for each size.
It is a rather long paper (and might become even more so before submission), but you might learn a thing or two about Mastermind. Besides, it is intended as a base for future papers that will apply our usual techniques, namely evolutionary algorithms.
Well, by now you must be a bit tired of Mastermind papers, but we are not, since we are obtaining the best results ever. After introducing endgames to streamline the final stage of the algorithms, we have tweaked the evolutionary algorithm, adding a permutation operator, for instance, to reduce the number of evaluations needed to find the solution. The results are the best yet, but, of course, there’s more to come in the future.
This paper was presented at CEC 2011 in the games session, and raised quite a bit of interest. The paper will be available from IEEE Xplore soon, but you can request copies now if you want.
Last week we were at the IWANN Conference, held in Torremolinos (Málaga), presenting two different works. The first one is about evolving AI bots for playing games in the Google AI Challenge. The basic idea is to improve the parameters of a hard-coded bot. Results show that the default parameters we thought were important may not work so well, and that we can learn a lot from the emergent behavior of the trained bot.
The second one is about a project I was working on last year. It’s about Ambient Assisted Living, context-awareness and other topics like that. The presentation is not so awesome. It was presented at the satellite workshop IWAAL.
The paper “Reactive Planning Idioms for Multi-Scale Game AI” (Weber et al.), published last year in the proceedings of the IEEE Conference on Computational Intelligence and Games (CIG 2010), proposes a technique called reactive planning for designing a bot for real-time strategy (RTS) games. The agent is implemented in ABL (a behavioral language), an environment that allows the programmer to embed the multi-level reasoning required for efficient and complex RTS bots. A bot for RTS games (such as StarCraft) must deal with several goals simultaneously, making intelligent high-level decisions while micromanaging units in combat, and ABL provides features such as daemons, messaging (memory), managers and micromanagement behaviors that can be of great help for such a task. The authors propose a specific framework for the structure of the bot and its interfaces, and demonstrate that the resulting agent is able to beat the built-in StarCraft bot. However, when tested against moderately skilled human players, the agent performs poorly. As far as we understood, this work deals mainly with traditional Artificial Intelligence. The open question now is: can we model some kind of adaptive behavior in this ABL environment?