EVOGames CFP 11/Nov


EVOGames is a session devoted to research in the field of videogames, included in the EVO* 2014 conference, to be held in Granada from the 23rd to the 25th of April next year.

The Call for Papers deadline has been pushed back to the 11th of November, the final date for paper submissions.



Is entropy good for solving the game of MasterMind?

Well, it is. In another paper published at the Evostar conference, we compare several methods for measuring how good a combination is with respect to the others that could still be the solution. So far we had mostly used most parts (counting the number of non-empty partitions), but in this paper, which compares our previous Evo method with another created by the coauthors, Maestro-Montojo and Salcedo-Sanz, we find that entropy, at least for these sizes, is the way to go. Here’s the poster


You can access the paper “Comparing Evolutionary Algorithms to Solve the Game of MasterMind”, by Javier Maestro-Montojo, Juan Julián Merelo and Sancho Salcedo-Sanz (first and last authors from the University of Alcalá de Henares), online or request a copy from the authors.
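For readers unfamiliar with these two measures, here is a minimal sketch in plain Python (with made-up function names; the actual experiments use our own codebase) of how a candidate guess partitions the set of possible secrets, and how most parts and entropy score those partitions:

```python
import math
from collections import Counter
from itertools import product

def score(guess, secret):
    """Mastermind response as (black, white) pegs."""
    black = sum(g == s for g, s in zip(guess, secret))
    # colours in common, regardless of position
    common = sum(min(guess.count(c), secret.count(c)) for c in set(guess))
    return black, common - black

def partitions(guess, candidates):
    """Sizes of the partitions a guess induces on the candidate set."""
    return Counter(score(guess, c) for c in candidates).values()

def most_parts(guess, candidates):
    """Score = number of non-empty partitions."""
    return len(partitions(guess, candidates))

def entropy(guess, candidates):
    """Score = Shannon entropy of the partition sizes."""
    sizes = list(partitions(guess, candidates))
    total = sum(sizes)
    return -sum(s / total * math.log2(s / total) for s in sizes)

# the classic 6-colour, 4-peg instance
space = ["".join(p) for p in product("ABCDEF", repeat=4)]
```

With this in place, `most_parts("AABB", space)` and `entropy("AABB", space)` score the same opening guess under the two measures.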

Finding an evolutionary solution to the game of Mastermind with good scaling behavior

As important as finding a solution to the game of MasterMind that is better than anyone else’s is finding one that can be applied to a wide range of sizes. In this paper we get rid of a parameter: the size limit of the consistent set we use for scoring every combination. This makes for a faster algorithm, and one that is not always worse than using the optimal consistent set size.
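The consistent set is simply the set of combinations that match every response received so far in the game. A minimal sketch, with a hypothetical game history (the guesses and responses below are made up for illustration):

```python
from itertools import product

def score(guess, secret):
    """Mastermind response as (black, white) pegs."""
    black = sum(g == s for g, s in zip(guess, secret))
    common = sum(min(guess.count(c), secret.count(c)) for c in set(guess))
    return black, common - black

def consistent(candidate, history):
    """A candidate is consistent if, as the secret, it would have
    produced every response recorded so far."""
    return all(score(guess, candidate) == resp for guess, resp in history)

space = ["".join(p) for p in product("ABCDEF", repeat=4)]

# hypothetical game state: two guesses already answered
history = [("AABB", (1, 0)), ("CCDD", (0, 1))]
remaining = [c for c in space if consistent(c, history)]
```

Every combination is then scored against `remaining` as a whole, instead of against a size-limited sample of it.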

This was the paper presented at LION by Antonio Fernández, using this presentation.

Why Mastermind?

Since nobody is reviewing this, I’m not giving a long answer to this question, posed by one of the reviewers of a paper I submitted ages ago, in the nineties, proposing solutions to the game using evolutionary algorithms. The short answer, as happens with most everything in science, is: because it’s there. However, after this empirical study of exhaustive solutions, a longer answer is a bit closer.
In this paper, which is a draft of something we intend to submit to a journal in the near future, we describe something that has been disregarded in Mastermind solutions: the pure chance of drawing the correct solution in the opening moves. In fact, this chance dominates the first moves, until the search space is reduced to a single solution, which is usually the intention of most empirical approaches to the game.
This means that a method able to increase that chance should be able to beat traditional solutions. In fact, ours does not beat them, but it is consistently as good as the best solution for each size.
It is a rather longish paper (and might become even more so before submission), but you might learn a thing or two about Mastermind. Besides, it is intended as a base for future papers that will apply our usual techniques: evolutionary algorithms.
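The effect is easy to check on a toy instance: guessing uniformly among the still-consistent combinations already solves a measurable fraction of games on the very first move, purely by chance. A rough simulation (illustrative only, on a tiny 3-colour, 3-peg instance, not the paper’s actual experimental setup):

```python
import random
from itertools import product

def score(guess, secret):
    """Mastermind response as (black, white) pegs."""
    black = sum(g == s for g, s in zip(guess, secret))
    common = sum(min(guess.count(c), secret.count(c)) for c in set(guess))
    return black, common - black

def random_consistent_play(secret, space, rng):
    """Guess uniformly among still-consistent combinations;
    return the number of moves needed."""
    candidates = list(space)
    moves = 0
    while True:
        guess = rng.choice(candidates)
        moves += 1
        if guess == secret:
            return moves
        resp = score(guess, secret)
        candidates = [c for c in candidates if score(guess, c) == resp]

rng = random.Random(42)
space = ["".join(p) for p in product("ABC", repeat=3)]  # 27 combinations
runs = [random_consistent_play(rng.choice(space), space, rng)
        for _ in range(2000)]
# fraction of games solved on move one, by pure chance
p_first = sum(m == 1 for m in runs) / len(runs)
```

With 27 possible combinations, around 1 game in 27 ends on the opening guess before any information has been gathered at all.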

The bestest MasterMind Algorithm ever

Well, by now you must be a bit tired of Mastermind papers, but we are not, since we are obtaining the best results ever. After introducing endgames to streamline the end of the algorithm, we have tweaked the evolutionary algorithm, adding a permutation operator, for instance, to reduce the number of evaluations needed to find the solution. The result is the best yet, but, of course, there’s more to come in the future.
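An operator of that kind can be sketched as follows (an illustrative version, not the exact operator from the paper): swapping two positions keeps the colour multiset of the combination, so the sum of black and white pegs against any secret is unchanged, and the mutant stays close to its parent in the search space.

```python
import random

def permutation_op(combination, rng=random):
    """Swap two positions of a combination. The colour multiset is
    preserved, so black + white pegs against any secret stay the same."""
    c = list(combination)
    i, j = rng.sample(range(len(c)), 2)
    c[i], c[j] = c[j], c[i]
    return "".join(c)

mutant = permutation_op("AABB", random.Random(0))
```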
This paper was presented at CEC 2011 in the games session, and raised quite a bit of interest. The paper will be available from IEEE Xplore soon, but you can request copies now if you want.

About Game Bots and Ambient Assisted Living

Last week we were at the IWANN conference, held in Torremolinos (Málaga), presenting two different works. The first one is about evolving AI bots for playing games in the Google AI Challenge. The basic idea is to improve the parameters of a hard-coded bot. Results showed that the default parameter values we thought were important may not work so well, and we can learn a lot from the emergent behavior of the trained bot.
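The parameter-tuning idea can be sketched as a tiny evolutionary loop over the bot’s parameter vector. This is illustrative only: the fitness below is a made-up stand-in for the bot’s actual win rate, and all names are hypothetical.

```python
import random

def evolve(fitness, dim, pop_size=20, gens=30, sigma=0.1, rng=None):
    """Minimal evolutionary loop: keep the best half of the population,
    refill it with Gaussian mutations of the survivors."""
    rng = rng or random.Random(42)
    pop = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [
            [g + rng.gauss(0, sigma) for g in rng.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
    return max(pop, key=fitness)

# toy fitness standing in for "win rate of the parameterised bot":
# the (made-up) optimum sets every parameter to 0.7
best = evolve(lambda p: -sum((x - 0.7) ** 2 for x in p), dim=3)
```

In the real setting the fitness would come from playing the parameterised bot against opponents, which is far noisier and more expensive than this toy function.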

Here is the presentation:

The citation is here.

The second one is about a project I was working on last year. It’s about Ambient Assisted Living, context awareness and other stuff like that. The presentation is not so awesome. It was presented at the satellite workshop IWAAL.

You can download the paper from SpringerLink here.

Reactive Planning for RTS games

The paper “Reactive Planning Idioms for Multi-Scale Game AI” (Weber et al.), published last year in the proceedings of the IEEE Conference on Computational Intelligence and Games (CIG 2010), proposes a technique called reactive planning for designing a bot for real-time strategy (RTS) games. The agent is implemented in ABL (A Behavior Language), an environment that allows the programmer to embed the multi-level reasoning that is required for efficient and complex RTS bots. A bot for RTS games (such as StarCraft) must deal simultaneously with several goals, making intelligent high-level decisions while micromanaging units in combat, and ABL provides features such as daemons, messaging (memory), managers and micromanagement behaviors that can be of great help for such a task.

The authors propose a specific framework for the structure of the bot and its interfaces, and demonstrate that the resulting agent is able to beat the built-in StarCraft bot. However, when tested against moderately skilled human players, the agent performs poorly. As far as we understood, this work deals mainly with traditional Artificial Intelligence. The open question now is: can we model some kind of adaptive behavior in this ABL environment?