About jjmerelo

At your service

Dealing with noise in evolutionary algorithms

For evolutionary algorithms, noise is what happens when the fitness function does not return the same value twice in a row for the same individual. It is a mainstay of games, but it can also be found in industrial processes and in areas such as neural networks. We have often worked around it by doing several evaluations and averaging them, but this is not really the best way of dealing with it. Since the shape of the noise is not known in advance, in the paper presented at the ECTA 2014 conference we proposed a new method for dealing with it: using statistically sound comparisons, namely the Wilcoxon test. The paper is entitled “Studying and Tackling Noisy Fitness in Evolutionary Design of Game Characters”, and here’s the abstract.

In most computer games, as in life, the outcome of a match is uncertain for several reasons: the characters or assets appear in different initial positions, or the response of the player, even if programmed, is not deterministic; different matches will yield different scores. That is a problem when optimizing a game-playing engine: its fitness will be noisy, and if we use an evolutionary algorithm it will have to deal with it. This is not straightforward, since there is an inherent uncertainty in the true value of the fitness of an individual, or rather in whether one chromosome is better than another and thus preferable for selection. Several methods based on implicit or explicit averaging, or on changes in the selection of individuals for the next generation, have been proposed in the past, but they involve a substantial redesign of the algorithm and the software used to solve the problem. In this paper we propose new methods based on incremental (memory-based) computation of the fitness average or, additionally, on using statistical tests to impose a partial order on the population; this partial order is then used to assign a fitness value to every individual, which can be used straightforwardly in any selection function. Tests using several hard combinatorial optimization problems show that, despite an increased computation time with respect to the other methods, both memory-based methods have a higher success rate than implicit averaging methods that do not use memory; however, there is not a clear advantage in success rate or algorithmic terms of one method over the other.
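The idea of replacing plain averaging with a statistically sound comparison can be sketched in a few lines. The code below is an illustration of the concept, not the paper's actual implementation: it compares the memorized fitness samples of two individuals with a Wilcoxon rank-sum (Mann–Whitney) test, using the normal approximation without tie correction, and only declares a winner when the difference is significant.

```javascript
// Rank all values jointly, assigning average ranks to ties (1-based ranks).
function ranks(values) {
  const sorted = values.map((v, i) => [v, i]).sort((a, b) => a[0] - b[0]);
  const r = new Array(values.length);
  let i = 0;
  while (i < sorted.length) {
    let j = i;
    while (j + 1 < sorted.length && sorted[j + 1][0] === sorted[i][0]) j++;
    const avg = (i + j + 2) / 2; // average of ranks i+1 .. j+1
    for (let k = i; k <= j; k++) r[sorted[k][1]] = avg;
    i = j + 1;
  }
  return r;
}

// Wilcoxon rank-sum z-score for two samples; |z| > 1.96 means the
// difference is significant at roughly the 5% level.
function rankSumZ(a, b) {
  const r = ranks(a.concat(b));
  const ra = r.slice(0, a.length).reduce((s, x) => s + x, 0);
  const n1 = a.length, n2 = b.length;
  const mu = n1 * (n1 + n2 + 1) / 2;
  const sigma = Math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12);
  return (ra - mu) / sigma;
}

// Partial order on two individuals, given their fitness samples:
// -1 if a is worse, 1 if a is better, 0 if the test cannot tell them apart.
function compareNoisy(a, b, zCrit = 1.96) {
  const z = rankSumZ(a, b);
  if (z > zCrit) return 1;
  if (z < -zCrit) return -1;
  return 0;
}
```

The point of the `0` outcome is exactly what the abstract describes: the order is partial, and individuals the test cannot separate are treated as equals when fitness values are assigned for selection.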

The presentation uses jmpress.js and, besides, is generated randomly every time it is loaded, by adding some JavaScript cleverness to the layout. It’s open source and available on GitHub (check out the gh-pages branch).

Out of CPU cycles and in need of doing science? No problem!

After the bad experience of spending money on clusters and grids and then spending more time on maintenance, hack-proofing and installing stuff than on science, maybe it is time to rethink how massive distributed evolutionary computation should be done. Nowadays there are lots of free or pay-per-use resources that can be tapped for volunteer-based evolutionary algorithms. That is why my latest keynote and tutorial have dealt with exactly that: the IDC keynote, Low or No Cost Evolutionary Computation, which you can access here on Heroku, puts the money where its mouth is by talking about and doing volunteer-based evolutionary computing at the same time. The PPSN tutorial, Low or no cost distributed evolutionary computation, touched on the same topic, only longer and with more emphasis on tools.
So, finally, it is just a matter of a little JavaScript and some free cloud resources, and you can have your very own massive distributed experiment, whose results will be published soon enough.
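The "little JavaScript" part can be sketched roughly as follows. This is a minimal illustration of the volunteer-computing idea, not the actual experiment code: each visitor's browser evaluates a batch of chromosomes and reports the results back; the OneMax fitness function and the `/report` endpoint are placeholders.

```javascript
// OneMax: fitness is simply the number of ones in a bit string.
// A stand-in for whatever problem the real experiment distributes.
function oneMax(chromosome) {
  return chromosome.reduce((sum, bit) => sum + bit, 0);
}

function randomChromosome(length) {
  return Array.from({ length }, () => (Math.random() < 0.5 ? 1 : 0));
}

// Evaluate a batch locally; this is the work each volunteer browser donates.
function evaluateBatch(batchSize, chromosomeLength) {
  return Array.from({ length: batchSize }, () => {
    const c = randomChromosome(chromosomeLength);
    return { chromosome: c, fitness: oneMax(c) };
  });
}

// In a browser, the results would be sent back to the experiment's server,
// e.g. (hypothetical endpoint):
// fetch('/report', { method: 'POST', body: JSON.stringify(evaluateBatch(100, 64)) });
```

Embedding something like this in a page served from a free cloud tier is all it takes to turn every page load into a worker for the experiment.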

CFP for EvoGAMES 2015

Are you into games and computational intelligence? Submit your paper to this conference track, deadline in a month.

The EvoGAMES blog

The deadline for EvoGAMES 2015 is approaching: 15 November.

Please prepare your (excellent :D) contribution to this year's edition ASAP, for two main reasons: first, Evo* will be held in the beautiful city of Copenhagen (Denmark); and second, we are working hard to set up a special issue in a very good JCR journal.

Thus, the selected papers will be proposed for inclusion in it, after an extension/improvement phase, of course. ;)

The main topics include (but are not limited to):

  • Computational Intelligence in video games
  • Intelligent avatars and new forms of player interaction
  • Player experience measurement and optimization
  • Procedural content generation
  • Human-like artificial adversaries and emotion modelling
  • Authentic movement, believable multi-agent control
  • Experimental methods for gameplay evaluation
  • Evolutionary testing and debugging of games
  • Adaptive and interactive narrative
  • Games related to social, economic, and financial simulations
  • Adaptive educational, serious and/or social games
  • General game intelligence (e.g. general purpose…


Evolution using JavaScript in EvoStar

JavaScript, despite its age, is now considered an emergent language, since it is starting to have an ecosystem that allows the development of complex and high-performance applications. That is why at the recent EvoStar we had a poster about an evolutionary algorithm library written in it. It builds on MooTools to create an object-oriented, browser-based library called jsEO, and the paper is entitled An object-oriented library in JavaScript to build modular and flexible cross-platform evolutionary algorithms.
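To give a flavor of what an object-oriented, modular evolutionary algorithm looks like in JavaScript, here is a minimal sketch. The class and method names are made up for illustration and are not jsEO's actual API; it uses tournament selection, bit-flip mutation and elitism, with crossover omitted for brevity.

```javascript
class EvolutionaryAlgorithm {
  constructor({ popSize, chromosomeLength, fitness, mutationRate = 0.05 }) {
    this.fitness = fitness;
    this.mutationRate = mutationRate;
    // Random initial population of bit strings.
    this.population = Array.from({ length: popSize }, () =>
      Array.from({ length: chromosomeLength }, () => (Math.random() < 0.5 ? 1 : 0)));
  }

  best() {
    return this.population.reduce((a, b) =>
      this.fitness(a) >= this.fitness(b) ? a : b);
  }

  step() {
    // Binary tournament selection plus bit-flip mutation.
    const next = this.population.map(() => {
      const a = this.population[Math.floor(Math.random() * this.population.length)];
      const b = this.population[Math.floor(Math.random() * this.population.length)];
      const winner = this.fitness(a) >= this.fitness(b) ? a : b;
      return winner.map(bit => (Math.random() < this.mutationRate ? 1 - bit : bit));
    });
    next[0] = this.best(); // elitism: keep the previous best untouched
    this.population = next;
  }
}
```

The appeal of this style is that fitness functions and operators are plugged in as plain functions or overridden methods, so the same loop runs unchanged in a browser or in Node.js.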

MasterMind double feature: CEC and GECCO

Besides the publication at EvoStar, this year we have managed to publish in the other two big-ticket EC conferences of the year: CEC in Cancún and GECCO in Amsterdam. First on the calendar was CEC, where we presented A search for scalable evolutionary solutions to the game of MasterMind as a poster, which I tried to make as awesome as possible by using velcro, magnetic strips and a whiteboard for a dynamic experience. It was quite successful indeed, and I expect the message came through well. Here’s the poster, although just the static part of it:

In this poster we attempt to create some guidelines for finding solutions to the game of MasterMind using evolutionary algorithms. We fix the maximum size of the consistent set and use it throughout all problem sizes, although what we discover here is that the right set size is different for the different scoring methods: Entropy or Most Parts. This paper continues the one presented at LION, where we studied only the size needed by Most Parts, which in turn extended the one presented at GAMEON.

The paper presented at GECCO, which was accepted as a full paper, goes in a different direction and is called Improving evolutionary solutions to the game of MasterMind using an entropy-based scoring method. In this case only Entropy is used for scoring (which means that combinations that partition the set of consistent combinations with maximum entropy are played and sought in the evolutionary search), and the objective was to find fast and efficient solutions so that bigger search spaces could be explored. That was pretty much achieved: it is the fastest solution so far (though still slower, mainly due to the nature of Perl, than the solution by Berghman et al.) and yields better results than those published before by ourselves and others, so all in all we are quite happy about it. In this case I used impress.js for this awesome presentation (which took me quite a while indeed).
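The two scoring methods that keep coming up in these papers can be sketched concretely. Given a candidate guess, you partition the set of still-consistent combinations by the black/white response each one would produce, then score the guess either by the entropy of the partition sizes or by Most Parts (the number of non-empty partitions). This is an illustration of the scoring idea only, not the papers' Perl implementation.

```javascript
// Standard MasterMind response: black = right color, right place;
// white = right color, wrong place.
function response(guess, secret) {
  let black = 0;
  const gRest = [], sRest = [];
  for (let i = 0; i < guess.length; i++) {
    if (guess[i] === secret[i]) black++;
    else { gRest.push(guess[i]); sRest.push(secret[i]); }
  }
  let white = 0;
  for (const color of gRest) {
    const j = sRest.indexOf(color);
    if (j !== -1) { white++; sRest.splice(j, 1); }
  }
  return `${black}b${white}w`;
}

// Sizes of the partitions of the consistent set induced by playing `guess`.
function partitionSizes(guess, consistentSet) {
  const counts = {};
  for (const candidate of consistentSet) {
    const key = response(guess, candidate);
    counts[key] = (counts[key] || 0) + 1;
  }
  return Object.values(counts);
}

// Most Parts: just count the non-empty partitions.
function mostParts(guess, consistentSet) {
  return partitionSizes(guess, consistentSet).length;
}

// Entropy: -sum p * log2(p) over partition proportions; higher is better.
function entropy(guess, consistentSet) {
  const sizes = partitionSizes(guess, consistentSet);
  const total = sizes.reduce((s, n) => s + n, 0);
  return sizes.reduce((h, n) => h - (n / total) * Math.log2(n / total), 0);
}
```

Entropy rewards guesses whose partitions are both many and balanced, while Most Parts only counts them, which is why the two methods can prefer different guesses (and, as the CEC poster shows, need differently sized consistent sets).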

Runner-up prize for OSGiLiath in the local edition of the Free Software Contest #CUSL7

Our friend Pablo has been awarded a runner-up prize in the local edition of the Free Software Contest.

OSGiLiath Evolutionary Framework

Yesterday was the final of the local edition of the Spanish University Free Software Contest, and OSGiLiath was awarded a runner-up prize : )

The winner of the event was Zomblind, an awesome game for blind people. Participating in the contest has helped to improve the framework (as I explained in previous posts), and the experience was worth it.

I gave a presentation to show the other participants and the public the improvements made to the project during the months of the contest.

The national final will be soon, so stay tuned!


Is entropy good for solving the game of MasterMind?

Well, it is. In another paper published at the EvoStar conference, we compare several methods for measuring how good a combination is when compared with the others that could possibly be the solution; so far we had mostly used Most Parts (counting the number of non-zero partitions), but in this paper, which compares our previous Evo method with another created by our coauthors, Maestro-Montojo and Salcedo-Sanz, we find that Entropy, at least for these sizes, is the way to go. Here’s the poster.

You can access the paper Comparing Evolutionary Algorithms to Solve the Game of MasterMind, by Javier Maestro-Montojo, Juan Julián Merelo and Sancho Salcedo-Sanz (the first and last authors are from the University of Alcalá de Henares), online, or request a copy from the authors.

Special issue of Evolutionary Intelligence focused on games

Do you apply bioinspired techniques to games? Especially evolutionary algorithms? Submit a paper with your work to the special issue of Evolutionary Intelligence focused on games.

Anything related to games is valid, provided evolutionary (or, in general, bioinspired) techniques are used:

  • Level design
  • NPC design
  • Learning in games
  • Behaviour mining
  • Puzzle solving

Please check the dates: the deadline for submission is May 10th, 2013; notification of acceptance/revision/rejection will be given by June 10th, 2013; and the final manuscript submission date is July 1st, 2013. Articles must be submitted via the

More results of the Spanish-Mexican collaboration

As a result of a collaboration with Mario García Valdez, Leonardo Trujillo and Francisco Fernández (the latter from Spain), we have published two papers based on the EvoSpace framework, a pool-based evolutionary architecture for interactive and conventional evolutionary computation. The first paper describes EvoSpace-i, the interactive part, and is well described by Paco Fernández in our group blog, and the