Dealing with noise in evolutionary algorithms

In evolutionary algorithms, noise appears when the fitness function does not return the same value for the same individual twice in a row. It is a mainstay of games, but it can also be found in industrial processes and in areas such as neural networks. We have usually worked around it by doing several evaluations and averaging, but that is not really the best way of dealing with it. Since the shape of the noise is not known in advance, in the paper presented at the ECTA 2014 conference we proposed a new method for dealing with it: using statistically sound comparisons, namely the Wilcoxon test. The paper is entitled “Studying and Tackling Noisy Fitness in Evolutionary Design of Game Characters”, and here’s the abstract.

In most computer games, as in life, the outcome of a match is uncertain for several reasons: the characters or assets appear in different initial positions, or the response of the player, even if programmed, is not deterministic; different matches will yield different scores. That is a problem when optimizing a game-playing engine: its fitness will be noisy, and if we use an evolutionary algorithm it will have to deal with it. This is not straightforward, since there is an inherent uncertainty in the true value of the fitness of an individual, or rather in whether one chromosome is better than another and thus preferable for selection. Several methods based on implicit or explicit averaging, or on changes in the selection of individuals for the next generation, have been proposed in the past, but they involve a substantial redesign of the algorithm and the software used to solve the problem. In this paper we propose new methods based on incremental (memory-based) computation of the fitness average and, additionally, on using statistical tests to impose a partial order on the population; this partial order is then used to assign a fitness value to every individual, which can be used straightforwardly in any selection function. Tests using several hard combinatorial optimization problems show that, despite an increased computation time with respect to the other methods, both memory-based methods have a higher success rate than implicit averaging methods that do not use memory; however, there is no clear advantage in success rate or algorithmic terms of one method over the other.
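As a rough illustration of the statistical-comparison idea (not the paper's actual implementation), the sketch below uses SciPy's `mannwhitneyu`, the rank-sum form of the Wilcoxon test, to decide whether one individual's noisy fitness samples are significantly better than another's, and then turns the resulting pairwise comparisons into a scalar score that any selection operator can consume. All names, thresholds and data here are assumptions made for the example.

```python
# Illustrative only: compare noisy individuals with a rank-sum test and
# turn the resulting pairwise comparisons into a scalar score.
import numpy as np
from scipy.stats import mannwhitneyu  # rank-sum form of the Wilcoxon test


def beats(samples_a, samples_b, alpha=0.05):
    """True if A's fitness samples are significantly greater than B's."""
    _, p_value = mannwhitneyu(samples_a, samples_b, alternative="greater")
    return p_value < alpha


def rank_population(fitness_samples, alpha=0.05):
    """Score each individual by how many others it significantly beats.

    The scores respect the partial order induced by the test and can be
    fed to any ordinary selection operator as if they were fitness values.
    """
    scores = []
    for i, a in enumerate(fitness_samples):
        wins = sum(beats(a, b, alpha)
                   for j, b in enumerate(fitness_samples) if j != i)
        scores.append(wins)
    return scores


# Hypothetical usage: three individuals, ten noisy evaluations each.
rng = np.random.default_rng(42)
population = [rng.normal(loc=m, scale=1.0, size=10) for m in (1.0, 1.5, 3.0)]
print(rank_population(population))  # the third individual should score highest
```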

The presentation uses jmpress.js and, besides, is generated randomly every time it is loaded, by adding some JavaScript cleverness to the layout. It’s open source and available on GitHub (check out the gh-pages branch).

Google App Engine course

Next Monday, 17 November, the first edition of the course “INICIACIÓN A LOS SERVICIOS DE GOOGLE PARA APLICACIONES” starts; many of the members of this group will be teaching it.

The syllabus is organized so that you can start from scratch and end up building an application using the Google technology known as Google App Engine. The syllabus and the instructors are summarized in this poster.

The course is blended: 62.5 hours in total, of which only 20 are in person, on 17, 18, 19 and 20 November at the ETSIIT of the UGR. The sessions on 17, 18 and 20 November take place in the afternoon, while the one on 19 November will be held in the morning.

The course costs 100 euros and will be automatically recognized as one ECTS credit for every student of the ETSIIT at the Universidad de Granada.

For more information, you can contact the Fundación Empresa Universidad de la Universidad de Granada, or call directly at +34 958 24 61 20.

If you have any doubts or questions, do not hesitate to send them to us.

[Paper] Going a Step Beyond the Black and White Lists for URL Accesses in the Enterprise by means of Categorical Classifiers

Our work titled Going a Step Beyond the Black and White Lists for URL Accesses in the Enterprise by means of Categorical Classifiers, part of the research under the MUSES project, has been presented today at the ECTA 2014 conference.

Abstract:

Corporate systems can be secured using an enormous quantity of methods, and the implementation of black or white lists is among them. With these lists it is possible to restrict (or to allow) the execution of applications or the access to certain URLs, among other things. This paper focuses on the latter option. It describes the whole processing of a set of data composed of URL sessions performed by the employees of a company, from the preprocessing stage, including the labelling and data balancing processes, to the application of several classification algorithms. The aim is to define a method for automatically deciding whether to allow or deny future URL requests, considering a set of corporate security policies. Thus, this work goes a step beyond the usual black and white lists, since those can only control the URLs specifically included in them, and cannot make decisions based on similarity (through classification techniques) or on other variables of the session, as proposed here. The results show a set of classification methods which achieve very good classification percentages (95-97%) and which infer some useful rules based on additional features (rather than just the URL string) related to the user’s access. This leads us to consider that this kind of tool would be very useful for an enterprise.
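To make the idea concrete, here is a minimal, purely illustrative sketch of the allow/deny classification step: it trains a single off-the-shelf classifier on synthetic session features and reports test accuracy. The feature names, the synthetic data and the choice of classifier are assumptions made for the example, not the pipeline, dataset or algorithms used in the paper.

```python
# Illustrative only: synthetic "URL session" features and labels, a single
# off-the-shelf classifier, and a plain accuracy report.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-session features: URL length, number of subdomains,
# bytes transferred and hour of access (all synthetic here).
X = rng.normal(size=(1000, 4))
# Hypothetical policy-derived label: 1 = allow, 0 = deny.
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# class_weight="balanced" stands in for the data-balancing step of the paper.
clf = RandomForestClassifier(n_estimators=100, class_weight="balanced",
                             random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```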

You can check the presentation at: .

Out of CPU cycles and in need of doing science? No problem!

After the bad experience of spending money on clusters and grids and then spending more time on maintenance, hack-proofing and installing stuff than on science, maybe it is time to rethink how massive distributed evolutionary computation should be done. Nowadays there are lots of free or usage-based resources that can be tapped for doing volunteer-based evolutionary algorithms. That is why my latest keynote and tutorial have dealt with exactly that. The IDC keynote, Low or No Cost Evolutionary Computation, which you can access here on Heroku, puts its money where its mouth is: talking about and doing volunteer-based evolutionary computing at the same time. The PPSN tutorial, Low or no cost distributed evolutionary computation, touched on the same topic, only at greater length and with more emphasis on tools.
So, in the end, it is just a matter of a little JavaScript and some free cloud resources, and you can have your very own massive distributed experiment, whose results will be published soon enough.

CFP for EvoGAMES 2015

jjmerelo:

Are you into games and computational intelligence? Submit your paper to this conference track; the deadline is in a month.

Originally posted on The EvoGAMES blog:

The deadline for EvoGAMES 2015 is approaching: 15 November.

Please prepare your (excellent :D) contribution to this year’s edition ASAP, for two main reasons: first, Evo* will be held in the beautiful city of Copenhagen (Denmark); and second, we are working hard to arrange a special issue in a very good JCR journal.

Thus, selected papers will be proposed for inclusion in it, after an extension/improvement phase, of course. ;)

The main topics include (but are not limited to):

  • Computational Intelligence in video games
  • Intelligent avatars and new forms of player interaction
  • Player experience measurement and optimization
  • Procedural content generation
  • Human-like artificial adversaries and emotion modelling
  • Authentic movement, believable multi-agent control
  • Experimental methods for gameplay evaluation
  • Evolutionary testing and debugging of games
  • Adaptive and interactive narrative
  • Games related to social, economic, and financial simulations
  • Adaptive educational, serious and/or social games
  • General game intelligence (e.g. general purpose…


[Paper] My life as a sim: evolving unique and engaging life stories using virtual worlds

Our latest publication, My life as a sim: evolving unique and engaging life stories using virtual worlds, which uses our framework MADE (created by @rubenhek), has been presented at the ALIFE 2014 conference. The abstract:

Stories are not only painfully woven by crafty writers in the solitude of their studios; they also have to be produced massively for non-player characters in the video game industry, or tailored to particular tastes in personalized stories. However, the creation of fictional stories is a very complex task that usually implies a creative process where the author has to combine characters, conflicts and backstories to create an engaging narrative. This work describes a general methodology to generate cohesive and coherent backstories where desired archetypes (universally accepted literary symbols) can emerge in complex stochastic systems. This methodology supports the modeling and parametrization of the agents, the environment where they will live and the desired literary setting. The use of a Genetic Algorithm (GA) is proposed to establish the parameter configuration that will lead to backstories that best fit the setting. Information extracted from a simulation can then be used to create the literary work. To demonstrate the adequacy of the methodology, we perform an implementation using a specific multi-agent system and evaluate the results, testing with three different literary settings.
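To give a flavour of how an evolutionary algorithm can tune simulation parameters so that the desired archetypes emerge, here is a toy, self-contained evolutionary loop. The `simulate` function is a hypothetical stand-in for a MADE run scored against the target literary setting, and the selection and mutation scheme is a generic sketch, not the GA configuration used in the paper.

```python
# Illustrative only: a generic evolutionary loop over simulation parameters.
import random


def simulate(params):
    """Hypothetical stand-in for a MADE run: returns how well the desired
    archetypes emerge in the backstories generated with these parameters."""
    # Toy surrogate: the "ideal" value of every parameter is 0.7.
    return -sum((p - 0.7) ** 2 for p in params)


def evolve(n_params=5, pop_size=20, generations=50, sigma=0.1, seed=1):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulate, reverse=True)
        parents = pop[: pop_size // 2]              # truncation selection
        children = [[min(1.0, max(0.0, g + rng.gauss(0, sigma))) for g in p]
                    for p in parents]               # clipped Gaussian mutation
        pop = parents + children
    return max(pop, key=simulate)


print(evolve())  # parameters that the toy surrogate scores best
```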

Check out the presentation by @jjmerelo at http://jj.github.io/alife14-made/#/home. You can download the proceedings of the conference (CC license), or download the paper draft.

More information is available on the project page.

We have the second most human bot IN THE WORLD…

…playing Unreal Tournament 2004. :D

José L. Jiménez, a computer science student at the Universidad de Málaga supervised by Antonio Fernández Leiva and myself, has created a bot called NizorBot, based on the ExpertBot built by Francisco Aisa and Ricardo Caballero, which has obtained second place (in terms of its humanness) in BotPrize 2014, held within CIG 2014.

Human or Bot competition (BotPrize)

Results of the BotPrize 2014 competition (top positions)

The competition is based on a Turing Test that the bots must pass in front of human judges. The judges face the bots in Deathmatch combats within the game Unreal Tournament 2004. During these matches (played in several rounds), the judges tag each of their rivals as human or bot, according to their own judgement of the behaviour they observe in that player.

In addition, this year a large number of offline judges, that is, judges ‘outside the game’, have taken part; they watch the matches and also decide which players are human and which are not.

This makes the result more valuable than in previous years, since these judges have a ‘less limited’ point of view than the participants inside the game.

The bot has been implemented by means of interactive evolutionary algorithms, in which a human controller intervenes from time to time in two ways:

  • Adjusting parameters of the algorithm (a controller acting as an expert in the algorithm)
  • Answering questions about the player and the game (a controller acting as an expert in the game).

In both cases, the run of the algorithm (its execution) is affected and steered by this controller, so the bot evolves in a way closer to what the controller expects; in this case, to behave in the most human way possible.
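For readers unfamiliar with interactive evolution, the toy sketch below shows the general shape of such a loop: every few generations the run pauses and asks a human operator whether to change a parameter (here, the mutation step size). The fitness function and every parameter are made up for illustration; NizorBot’s actual controller and its evaluation inside Unreal Tournament 2004 are, of course, far more involved.

```python
# Illustrative only: a generic interactive evolutionary loop where a human
# operator may adjust a parameter every few generations.
import random


def humanness(genome):
    """Hypothetical stand-in for the in-game evaluation of a controller."""
    return -sum((g - 0.5) ** 2 for g in genome)


def interactive_evolve(generations=30, ask_every=10, pop_size=16, seed=3):
    rng = random.Random(seed)
    sigma = 0.2  # mutation step size that the operator can change
    pop = [[rng.random() for _ in range(8)] for _ in range(pop_size)]
    for gen in range(1, generations + 1):
        pop.sort(key=humanness, reverse=True)
        parents = pop[: pop_size // 2]              # truncation selection
        pop = parents + [[min(1.0, max(0.0, g + rng.gauss(0, sigma)))
                          for g in p] for p in parents]
        if gen % ask_every == 0:
            answer = input(f"generation {gen}: new mutation sigma "
                           f"(blank keeps {sigma}): ").strip()
            if answer:
                sigma = float(answer)
    return max(pop, key=humanness)


print(interactive_evolve())
```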

José is writing up his final degree project (Proyecto Fin de Carrera) and, after that, the three of us will work on a paper describing the whole process.

Stay tuned! :D