As a result of a collaboration with Mario García Valdez, Leonardo Trujillo and Francisco Fernández (the latter from Spain), we have published two papers based on the EvoSpace framework, a pool-based evolutionary architecture for interactive and straight evolutionary computation. The first paper describes EvoSpace-i, the interactive part, which is well described by Paco Fernández on our group blog, and the
At the recent LION 7 (2013) conference (Special Session on Games and Computational Intelligence), the paper entitled “FSM-Based Agents for Playing Super Mario Game” was presented.
It describes the implementation and testing of an autonomous agent that can play the Super Mario game better than an expert user can (on some trained levels).
It is built from a Finite State Machine and tuned by applying an Evolutionary Algorithm.
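The combination of an FSM controller and an evolutionary loop can be sketched in a few lines. This is only a toy illustration of the idea, not the paper's actual implementation: the "level" and all names here are made up, the agent reads a percept (clear path or obstacle) and the GA evolves its transition table.

```python
import random

random.seed(42)

# Toy stand-in for the Mario benchmark: percept 0 = clear path,
# percept 1 = obstacle; action 0 = run, action 1 = jump.
# All constants and the level layout are illustrative.
N_STATES, N_PERCEPTS, N_ACTIONS = 4, 2, 2
LEVEL = [0, 0, 1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical level layout

def random_genome():
    # For each (state, percept) pair: an action and a next state.
    return [(random.randrange(N_ACTIONS), random.randrange(N_STATES))
            for _ in range(N_STATES * N_PERCEPTS)]

def fitness(genome):
    # Distance covered before hitting an obstacle without jumping.
    state, distance = 0, 0
    for percept in LEVEL:
        action, state = genome[state * N_PERCEPTS + percept]
        if percept == 1 and action != 1:   # failed to jump
            break
        distance += 1
    return distance

def evolve(pop_size=30, generations=50):
    # Truncation selection plus point mutation: the top half survives,
    # each survivor spawns one mutated child.
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        for p in parents:
            child = list(p)
            i = random.randrange(len(child))
            child[i] = (random.randrange(N_ACTIONS),
                        random.randrange(N_STATES))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

The real agent evolves a much richer FSM against the Mario simulator, but the loop has the same shape: encode the transition table as a genome, score it by playing, select and mutate.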
The presentation is:
You can watch one example of the obtained agent playing a game here:
Enjoy it. ;)
Last Friday (July 27th) I gave a talk within the course Animación y Videojuegos (Animation and Videogames), offered by the Centro Mediterráneo in Almuñécar.
In it I discussed, in a lighthearted tone, the existing relations between both worlds, considering both the contributions of videogame systems to the scientific environment and the research advances within the field of videogames.
You can find it on Slideshare and right here (:D):
Enjoy it. ;)
Besides, we were given a ‘super-interview’ while networking (:P), which was published in the online edition of Ideal Costa:
Since nobody is reviewing this, I’m not giving a long answer to this question, posed by one of the reviewers of a paper I submitted ages ago, back in the nineties, proposing solutions to the game using evolutionary algorithms. The short answer, as happens with almost everything in Science, is: because it’s there. A long answer, however, is a bit closer after this empirical study of exhaustive solutions.
In this paper, which is a draft of something we intend to submit to a journal in the near future, we describe something that has been disregarded in Mastermind solutions: the pure chance of drawing the correct solution in the opening moves. In fact, this chance dominates in the first moves, until the search space is reduced to a single solution, which is usually the intention of most empirical solutions to the game.
This means that a method able to increase that chance would be able to beat traditional solutions. In fact, it does not beat them, but it is consistently as good as the best solution for each size.
It is rather a longish paper (and might become even more so before submission), but you might learn a thing or two about Mastermind. Besides, it is intended as a base for future papers that will apply our usual techniques, evolutionary algorithms.
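The role of pure chance in the opening moves can be illustrated with a toy simulation. The sketch below (not the paper's method; sizes and names are made up for illustration) plays random consistent guesses against a secret: at every move the chance of hitting the secret outright is one over the size of the set of combinations still consistent with all answers, and that chance grows as the answers prune the space.

```python
from itertools import product
import random

def score(guess, secret):
    # Standard Mastermind response: (blacks, whites).
    blacks = sum(g == s for g, s in zip(guess, secret))
    whites = sum(min(guess.count(c), secret.count(c))
                 for c in set(guess)) - blacks
    return blacks, whites

def chance_per_move(colors=4, pegs=3, seed=1):
    # Play random consistent guesses against a random secret and record,
    # at each move, the chance of drawing the secret by pure luck.
    rng = random.Random(seed)
    space = list(product(range(colors), repeat=pegs))
    secret = rng.choice(space)
    consistent, chances = space, []
    while True:
        chances.append(1 / len(consistent))
        guess = rng.choice(consistent)
        resp = score(guess, secret)
        if resp[0] == pegs:            # guessed it
            return chances
        # Keep only combinations that would give the same answer.
        consistent = [c for c in consistent if score(guess, c) == resp]

chances = chance_per_move()
```

Each non-winning guess strictly shrinks the consistent set (the guess itself is removed), so the per-move chance is strictly increasing, starting from 1/c^p on the first move.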
This is the first internationally published paper (it was previously published in a Spanish conference) of a series that deals with a system, intended for volunteer computing, that uses a pool for implementing distributed evolutionary algorithms. The basic idea is that the population resides in a pool (implemented using CouchDB), with clients pulling individuals from the pool, doing stuff on them, and putting them back into the pool. The algorithm uses, as much as possible, CouchDB features (such as revisions and views) to achieve good performance. All the code (for this and, right now, for the next papers) is available as open source.
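The pull/work/push cycle can be sketched without any database at all. Below, a plain dict stands in for the CouchDB pool (keys playing the role of document ids, a `taken` flag playing the role of a check-out), and OneMax is a placeholder problem; all names are illustrative, not the system's actual API.

```python
import random

GENOME_LEN, POOL_SIZE = 16, 40
rng = random.Random(0)

def fitness(genome):
    return sum(genome)  # OneMax as a placeholder problem

# The "pool": in the real system this is a CouchDB database.
pool = {i: {"genome": [rng.randint(0, 1) for _ in range(GENOME_LEN)],
            "taken": False}
        for i in range(POOL_SIZE)}

def client_step():
    # A client checks out two free individuals, recombines and mutates
    # them, and writes the child back over the worse parent if it is
    # an improvement, then releases both.
    free = [k for k, doc in pool.items() if not doc["taken"]]
    a, b = rng.sample(free, 2)
    pool[a]["taken"] = pool[b]["taken"] = True
    pa, pb = pool[a]["genome"], pool[b]["genome"]
    cut = rng.randrange(1, GENOME_LEN)      # one-point crossover
    child = pa[:cut] + pb[cut:]
    child[rng.randrange(GENOME_LEN)] ^= 1   # bit-flip mutation
    worse = a if fitness(pa) <= fitness(pb) else b
    if fitness(child) > fitness(pool[worse]["genome"]):
        pool[worse]["genome"] = child
    pool[a]["taken"] = pool[b]["taken"] = False

for _ in range(2000):
    client_step()
best = max((doc["genome"] for doc in pool.values()), key=fitness)
```

In the real system the check-out and write-back are where CouchDB revisions earn their keep: a conflicting update by another client is detected and retried instead of silently lost.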
The paper was accepted as a poster in the first edition of EvoPar, and mainly concentrates on studying the effect of different parameters on scaling and performance, comparing it with a canonical GA. Here’s the poster:
The paper Pool-Based Distributed Evolutionary Algorithms Using an Object Database is available from SpringerLink if your university subscribes to it. If it does not, please send us an email.
We left MasterMind last year in a good state using estimation of distribution algorithms; however, if we want to find solutions for higher dimensions (more colors, more pegs) we have to reduce the number of evaluations. In this case we use something we call endgames: just as chess-playing programs use a database of endgames to finish a game in a straightforward way, in MasterMind we can recognize a few situations in which the search space is reduced so drastically that it is better to change either the strategy or the search space. When we know the colors (that is, we obtain as many whites plus blacks as the length of the combination), the best option is to revert to exhaustive search over combination space; when the answer is 0 whites/blacks, we can also exclude those colors from the search space and restart, maybe with a smaller population.
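The two endgame rules are simple enough to sketch directly. This is a minimal illustration with a hypothetical helper name, not the paper's code: when whites plus blacks equal the number of pegs, only rearrangements of the guess's colors remain; when the answer is zero, every combination using any of the guess's colors can be discarded.

```python
from itertools import permutations, product

def endgame_search_space(pegs, guess, blacks, whites, space):
    # Apply the two "endgame" reductions to the candidate space.
    if blacks + whites == pegs:
        # All the secret's colors are known (the guess's colors, with
        # multiplicity): only rearrangements of the guess remain.
        return sorted(set(permutations(guess)))
    if blacks + whites == 0:
        # None of the guess's colors appear in the secret: drop them.
        banned = set(guess)
        return [c for c in space if not banned & set(c)]
    return space

space = list(product(range(6), repeat=4))  # 6 colors, 4 pegs
```

For the classic 6-color, 4-peg game this takes the space from 1296 candidates down to 24 in the first case, or to 256 minus the banned colors in the second, which is why switching strategy at that point pays off.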
This is what we do in the paper Improving and Scaling Evolutionary Approaches to the MasterMind Problem, which was presented a short time ago at the EvoGames workshop in Torino.
During the presentation, Carlos Cotta and Carlos Fernandes played the game shown above.
Here’s the presentation, which you can download at ease. Picture credits are included in the notes.
Next Monday, November 30th, Professor Darrell Whitley will give a talk with the title above and the following content:
The first part of this talk will present a tutorial on local search and evolutionary algorithms for the Traveling Salesman Problem (TSP). The second part of the talk will introduce a new recombination operator for the TSP. The operator can recombine parents in such a way that offspring inherit all shared edges. Offspring also only inherit edges from parents. When applied to solutions that are already local optima, the operator automatically generates samples of new local optima without applying any additional local search.
The talk will take place at 12 noon in the Salón de Grados of the ETSIIT.
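The two guarantees in the abstract (offspring keep every edge shared by both parents, and use only parental edges) are easy to state as code. The sketch below is just a checker for those properties, not the operator itself; function names are illustrative.

```python
def edges(tour):
    # Undirected edge set of a cyclic tour given as a list of cities.
    return {frozenset((tour[i], tour[(i + 1) % len(tour)]))
            for i in range(len(tour))}

def respects_operator(child, parent_a, parent_b):
    # The two properties claimed for the operator: the child keeps every
    # edge shared by both parents, and introduces no foreign edges.
    ea, eb, ec = edges(parent_a), edges(parent_b), edges(child)
    return (ea & eb) <= ec and ec <= (ea | eb)
```

Both parents trivially satisfy the check against themselves, which is the degenerate case; the interesting offspring are the new tours assembled purely from parental edges, which is what lets the operator jump between local optima without extra local search.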
This year the Genetic and Evolutionary Computation Conference (GECCO) took place in Montréal (Québec, Canada), where we presented our latest work in collaboration with the Laboratoire de Vision et Systèmes Numériques of Université Laval in Quebec City:
Despite the intuition that the same population size is not needed throughout the run of an Evolutionary Algorithm (EA), most EAs use a fixed population size.
This paper presents an empirical study on the possible benefits of a Simple Variable Population Sizing (SVPS) scheme on the performance of Genetic Algorithms (GAs). It consists of decreasing the population during a GA run following a predetermined schedule, configured by a speed and a severity parameter. The method uses as initial population size an estimation of the minimum size needed to supply enough building blocks, using a fixed-size selectorecombinative GA converging within some confidence interval toward good solutions for a particular problem. Following this methodology, a scalability analysis is conducted on deceptive, quasi-deceptive, and non-deceptive trap functions in order to assess whether SVPS-GA improves performance compared to a fixed-size GA under different problem instances and difficulty levels. Results show several combinations of speed and severity where SVPS-GA preserves the solution quality while improving performance, by reducing the number of evaluations needed for success.
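A decreasing schedule of this kind can be sketched in a few lines. The exact formula used in the paper may differ; this illustrative version just interprets "severity" as the final fraction of the initial population that is shed and "speed" as how early in the run the shrinking happens.

```python
# Illustrative SVPS-style schedule (not the paper's exact formula):
# severity = fraction of the initial population removed by the end,
# speed    = how front-loaded the decrease is (>1 shrinks early).
def svps_size(gen, max_gen, n0, speed=2.0, severity=0.5):
    t = gen / max_gen                       # run progress in [0, 1]
    frac = 1.0 - severity * t ** (1.0 / speed)
    return max(2, round(n0 * frac))         # never below a minimal pop

# Example: start at 400 individuals, 100 generations.
schedule = [svps_size(g, 100, 400) for g in range(101)]
```

The fixed-size GA is the special case severity = 0; sweeping speed and severity over a grid is how the paper's combinations where quality is preserved at lower cost would be found.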