Unreal Expert Bots at IWANN 2013

Last week IWANN 2013 was held in Tenerife, an international conference devoted mainly to research in the scope of neural networks. There, Antonio Fernández Leiva, Raúl Lara and I organized the Special Session on Artificial Intelligence and Games.

There were five works in the session, one of them “Designing and Evolving an Unreal Tournament 2004 Expert Bot”.

It describes the design and improvement, through offline evolution (that is, not during the game), of an autonomous agent (or bot) for playing the game Unreal Tournament 2004. The bot was created as a finite state machine that models the expert behaviour of a human player in 1 vs 1 deathmatch mode, following the rules of the international competition.
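The finite-state-machine core of such a bot can be sketched in a few lines of Python. This is only an illustration of the idea, not the paper's actual code: the states, thresholds and sensor names are invented for the example.

```python
from enum import Enum, auto

class State(Enum):
    EXPLORE = auto()   # roam the map, picking up items and weapons
    ATTACK = auto()    # enemy in sight and health is good enough
    RETREAT = auto()   # low health: run towards a health pack

class ExpertBot:
    """Toy FSM skeleton; states and thresholds are invented for this sketch."""
    def __init__(self, low_health=40):
        self.state = State.EXPLORE
        self.low_health = low_health

    def update(self, health, enemy_visible):
        # Transitions are re-checked on every game tick.
        if health < self.low_health:
            self.state = State.RETREAT
        elif enemy_visible:
            self.state = State.ATTACK
        else:
            self.state = State.EXPLORE
        return self.state

bot = ExpertBot()
bot.update(health=100, enemy_visible=True)   # -> State.ATTACK
bot.update(health=25, enemy_visible=True)    # -> State.RETREAT
```

A real expert bot has many more states (weapon selection, item timing, dodging), but the transition-table structure is the same.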

Then, the bot was improved by means of a genetic algorithm, yielding an agent that is, in turn, a very hard opponent for a medium-level human player and which can (easily) beat the game's default bots, even at the maximum difficulty level.
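A genetic algorithm for this kind of offline improvement typically evolves the numeric parameters of the state machine (health thresholds, engagement distances, weapon preferences). A minimal, self-contained sketch follows; the fitness function is an invented stand-in for "play several matches and return the score":

```python
import random

def fitness(params):
    # Stand-in for "play several matches and return the score";
    # here we simply reward thresholds near a fictitious optimum.
    low_health, attack_range = params
    return -((low_health - 40) ** 2 + (attack_range - 800) ** 2)

def evolve(pop_size=20, generations=50, mut_sigma=10.0):
    # Random initial population of (low_health, attack_range) pairs.
    pop = [(random.uniform(0, 100), random.uniform(0, 2000))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            # Blend crossover plus Gaussian mutation on each gene.
            child = tuple((x + y) / 2 + random.gauss(0, mut_sigma)
                          for x, y in zip(a, b))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In the real setting the expensive part is the fitness evaluation, since every individual has to be judged by actually playing matches against other bots.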

The presentation can be seen at:

Moreover, you can watch one example of the evolution in the following video:

Finally, the source code of the Unreal Expert and Genetic bots is available at https://github.com/franaisa/ExpertAgent

Enjoy them. ;)


More results of the Spanish-Mexican collaboration

As a result of a collaboration with Mario García Valdez, Leonardo Trujillo and Francisco Fernández (the latter from Spain), we have published two papers based on the EvoSpace framework, a pool-based evolutionary architecture for interactive and straight evolutionary computation. The first paper describes EvoSpace-i, the interactive part, and is well described by Paco Fernández in our group blog.

Super Mario autonomous agents at LION 2013

Recently, at the last LION 7 (2013) conference (Special Session on Games and Computational Intelligence), the paper entitled “FSM-Based Agents for Playing Super Mario Game” was presented.

It describes the implementation and testing of an autonomous agent which can play the Super Mario game better than an expert user can (on some trained levels).
It is built starting from a finite state machine and then applying an evolutionary algorithm.
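The resulting controller boils down to a per-frame decision function. Here is a hypothetical sketch, loosely inspired by the key-press action model commonly used for Mario-playing agents; the names and conditions are invented for illustration:

```python
# Hypothetical per-frame controller: an "action" is the set of keys
# pressed during one frame, as in key-press-based Mario agents.
RIGHT, JUMP, SPEED = "right", "jump", "speed"

def act(enemy_ahead, gap_ahead, on_ground):
    """Run right at full speed; jump over enemies and gaps."""
    keys = {RIGHT, SPEED}
    if (enemy_ahead or gap_ahead) and on_ground:
        keys.add(JUMP)
    return keys

act(enemy_ahead=True, gap_ahead=False, on_ground=True)   # jumps
act(enemy_ahead=False, gap_ahead=False, on_ground=True)  # keeps running
```

An evolutionary algorithm would then tune parameters such as how far ahead the agent scans for enemies and gaps, using the in-game score as fitness.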

The presentation is:

You can watch one example of the obtained agent playing a game here:

Enjoy it. ;)

Talk “Ciencia y Videojuegos” (Science and Videogames) at the 2012 Almuñécar Course

Last Friday (July 27) I gave a talk as part of the course Animación y Videojuegos, offered by the Centro Mediterráneo in Almuñécar.

In it I discussed, in a light-hearted tone, the existing relationships between both worlds, covering both the contributions of videogame systems to science and the research advances within the field of videogames.

You can find it on Slideshare and right here (:D):

Enjoy it. ;)

In addition, we were given a ‘super-interview’ while doing some networking (:P), and it was published in the online edition of Ideal Costa:

http://almunecar.ideal.es/actualidad/517-curso-de-animacion-y-videojuegos-en-almunecar.html

Regards.

Why Mastermind?

Since nobody is reviewing this, I'm not going to give a long answer to this question, which was posed by one of the reviewers of a paper I submitted ages ago, back in the nineties, proposing evolutionary-algorithm solutions to the game. The short answer, as with almost everything in science, is: because it's there. However, after this empirical study of exhaustive solutions, the long answer is a bit closer.
In this paper, which is a draft of something we intend to submit to a journal in the near future, we describe something that has been disregarded in Mastermind solutions: the pure chance of drawing the correct solution in the opening moves. In fact, this chance dominates the first moves, until the search space is reduced to a single solution, which is usually the aim of most empirical approaches to the game.
This means that a method able to increase that chance should be able to beat traditional solutions. In practice ours does not beat them, but it is consistently as good as the best solution for each problem size.
It is a rather longish paper (and might become even more so before submission), but you might learn a thing or two about Mastermind. Besides, it is intended as a base for future papers that will apply our usual techniques, evolutionary algorithms.
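That opening-move chance is easy to quantify by brute force for small sizes. A sketch for the classical 6-colour, 4-peg game follows; the feedback function is the standard black/white peg count, not the paper's code:

```python
from itertools import product

def feedback(guess, secret):
    """Standard Mastermind feedback: (black pegs, white pegs)."""
    blacks = sum(g == s for g, s in zip(guess, secret))
    common = sum(min(guess.count(c), secret.count(c)) for c in set(guess))
    return blacks, common - blacks

colors, pegs = 6, 4
space = list(product(range(colors), repeat=pegs))   # 6^4 = 1296 combinations
secret = (0, 1, 2, 3)
guess = (0, 0, 1, 1)

# Combinations still consistent with the observed feedback:
fb = feedback(guess, secret)
consistent = [c for c in space if feedback(guess, c) == fb]
# The chance of drawing the secret at random rises from
# 1/len(space) before the guess to 1/len(consistent) after it.
```

Iterating this pruning until a single consistent combination remains is exactly the regime where pure chance stops mattering.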

Pool based evolutionary algorithm presented in EvoStar 2012

This is the first internationally published paper (it was previously presented at a Spanish conference) of a series that deals with a system, intended for volunteer computing, that uses a pool for implementing distributed evolutionary algorithms. The basic idea is that the population resides in a pool (implemented using CouchDB), with clients pulling individuals from the pool, doing stuff to them, and putting them back. The algorithm uses CouchDB features (such as revisions and views) as much as possible to achieve good performance. All the code (for this and, right now, for the next papers) is available as open source.
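The pull/process/put-back loop can be sketched with a plain dictionary standing in for the CouchDB pool. This is only an illustration of the idea, not the actual system's code:

```python
import random

# A plain dict stands in for the CouchDB pool; keys are individual ids.
pool = {i: {"genome": [random.randint(0, 1) for _ in range(16)],
            "fitness": None}
        for i in range(100)}

def fitness(genome):
    return sum(genome)           # OneMax as a toy problem

def client_step(pool):
    """One client iteration: pull an individual, work on it, put it back."""
    key = random.choice(list(pool))
    ind = pool[key]
    if ind["fitness"] is None:   # evaluation client
        ind["fitness"] = fitness(ind["genome"])
    else:                        # reproduction client: mutate a copy
        child = ind["genome"][:]
        i = random.randrange(len(child))
        child[i] ^= 1            # bit-flip mutation
        pool[max(pool) + 1] = {"genome": child, "fitness": None}
    # In the CouchDB version the put-back is a document update, with
    # revisions resolving conflicts between concurrent clients.

for _ in range(500):
    client_step(pool)
```

A real client would also apply selection when choosing parents; here every evaluated individual is equally likely to reproduce, which keeps the sketch short.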
The paper was accepted as a poster in the first edition of EvoPar, and mainly concentrates on studying the effect of different parameters on scaling and performance, comparing it with a canonical GA. Here's the poster:
Pool-based distributed evolutionary algorithms using an object database
The paper Pool-Based Distributed Evolutionary Algorithms Using an Object Database is available from SpringerLink if your university subscribes to it. If it does not, please send us an email.

Going a bit farther (and a bit faster) solving MasterMind using evolutionary algorithms

We left MasterMind last year in a good state using estimation of distribution algorithms; however, if we want to find solutions for higher dimensions (more colors, more pegs) we have to improve on the number of evaluations. In this case we use something we call endgames: just as chess-playing programs use a database of endgames to finish a game in a straightforward way, in MasterMind we can recognize a few situations in which the search space shrinks drastically and it is better to either change the strategy or change the search space. When we know the colors (that is, we obtain as many whites+blacks as the length of the combination), it is best to revert to exhaustive search over the reduced combination space; when the answer is 0 whites/blacks we can exclude those colors from the search space and restart, maybe with a smaller population.
This is what we do in the paper Improving and Scaling Evolutionary Approaches to the MasterMind Problem, which was presented a short time ago at the EvoGames workshop in Torino.
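Both endgames amount to simple search-space reductions. A minimal sketch (the feedback function is the standard black/white peg count; names are illustrative, not the paper's code):

```python
from itertools import product

def feedback(guess, secret):
    """Standard Mastermind feedback: (black pegs, white pegs)."""
    blacks = sum(g == s for g, s in zip(guess, secret))
    common = sum(min(guess.count(c), secret.count(c)) for c in set(guess))
    return blacks, common - blacks

def prune(space, guess, fb, pegs):
    """Endgame-aware pruning of the search space."""
    blacks, whites = fb
    if blacks + whites == pegs:
        # All colors found: only rearrangements of the guess remain,
        # so exhaustive search over them is cheap.
        space = [c for c in space if sorted(c) == sorted(guess)]
    elif blacks + whites == 0:
        # None of the guessed colors appear in the secret.
        banned = set(guess)
        space = [c for c in space if not banned & set(c)]
    # In every case, keep only combinations consistent with the feedback.
    return [c for c in space if feedback(guess, c) == fb]

colors, pegs = 6, 4
space = list(product(range(colors), repeat=pegs))
secret = (2, 3, 3, 2)
guess = (3, 2, 2, 3)
space = prune(space, guess, feedback(guess, secret), pegs)
# Here the 0-blacks/4-whites answer leaves a single candidate: the secret.
```

In the evolutionary setting, hitting one of these endgames is the signal to drop the population-based search and finish deterministically.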
During the presentation, Carlos Cotta and Carlos Fernandes played the game shown above.
Here’s the presentation, which you can freely download. Picture credits are included in the notes.