How the world was MADE: parametrisation of evolved agent-based models for backstory generation

Our paper “How the world was MADE: parametrisation of evolved agent-based models for backstory generation” has been accepted at the Evostar 2015 conference. This paper continues our previous work published at ALIFE. You can also download the MADE framework source and binaries to test it yourself, and read the draft in our repository on GitHub.

The abstract:

Generating fiction environments for a multi-agent system optimized by genetic algorithms (with specific requirements related to the desired plots) presents two main problems: first, it is impossible to know in advance the optimal value of the designed fitness function; at the same time, it creates a vast search space for the parameters it needs. The purpose of this paper is to define a methodology to find the best parameter values for both the evolutionary algorithm and the fictional world configuration itself. This design includes running, to completion, a world simulation represented as a chromosome and assigning a fitness to it, thus composing a very complex fitness landscape.

In order to optimize the resources allocated to evolution and to have some guarantee that the final result will be close to the optimum, we systematically analyse a set of possible values of the most relevant parameters, obtaining a set of generic rules. These rules, when applied to the plot requisites, and thus to the fitness function, lead to a reduced range of parameter values that will help the storyteller create optimal worlds with a reduced computation budget.
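
For readers who want a concrete picture of what “a world simulation represented as a chromosome” means in practice, here is a minimal Python sketch (not the actual MADE code): each chromosome encodes one world configuration, and its fitness comes from running the whole simulation to completion. The parameter names and the toy scoring function are invented for illustration.

```python
import random

# Hypothetical parameter ranges; the real MADE framework exposes many more
# (number of agents, days to simulate, GA operator rates, ...).
PARAM_RANGES = {
    "n_agents": (10, 200),
    "n_days":   (50, 500),
}

def random_chromosome():
    """One chromosome = one full configuration of the fictional world."""
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def run_world_simulation(params):
    """Placeholder for the multi-agent simulation: it just fakes a list of
    'event intensities' so that the sketch is runnable."""
    rng = random.Random(int(sum(params.values())))
    return [rng.random() for _ in range(int(params["n_days"]))]

def fitness(chromosome):
    """Run the simulation to completion and score it against the plot
    requisites (here: a toy count of 'notable' events)."""
    events = run_world_simulation(chromosome)      # the expensive part
    return sum(1 for e in events if e > 0.8)       # optimum unknown a priori

def evolve(pop_size=20, generations=30):
    population = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = {k: random.choice((a[k], b[k])) for k in PARAM_RANGES}
            k = random.choice(list(PARAM_RANGES))  # mutate one gene
            lo, hi = PARAM_RANGES[k]
            child[k] = min(hi, max(lo, child[k] + random.gauss(0, (hi - lo) * 0.1)))
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

print(evolve(pop_size=10, generations=5))
```

Every fitness evaluation implies a full simulation run, which is exactly why narrowing the useful parameter ranges beforehand, as the paper proposes, saves so much computation.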


[Paper] Going a Step Beyond the Black and White Lists for URL Accesses in the Enterprise by means of Categorical Classifiers

Our work titled Going a Step Beyond the Black and White Lists for URL Accesses in the Enterprise by means of Categorical Classifiers, part of the research carried out under the MUSES project, has been presented today at the ECTA 2014 conference.

Abstract:

Corporate systems can be secured using an enormous number of methods, and the implementation of black or white lists is among them.
With these lists it is possible to restrict (or to allow) the execution of applications or the access to certain URLs, among other things. This paper is focused on the latter option. It describes the whole processing of a set of data composed of URL sessions performed by the employees of a company, from the preprocessing stage, including the labelling and data balancing processes, to the application of several classification algorithms. The aim is to define a method for automatically making the decision of allowing or denying future URL requests, considering a set of corporate security policies.
Thus, this work goes a step beyond the usual black and white lists, since these can only control those URLs that are specifically included in them, rather than making decisions based on similarity (through classification techniques), or on other variables of the session, as is proposed here.
The results show a set of classification methods which obtain very good classification rates (95-97%) and which infer some useful rules based on additional features (rather than just the URL string) related to the user’s access. This leads us to consider that this kind of tool would be very useful for an enterprise.
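
As a rough illustration of the approach (not the actual MUSES pipeline), the following scikit-learn sketch trains a classifier on invented session features and extracts human-readable rules from it; the feature names and toy data are ours, not the paper’s dataset.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy URL sessions; each row uses features of the access, not just the URL string.
sessions = pd.DataFrame({
    "url_length": [20, 85, 33, 120, 45, 15, 98, 60],
    "num_params": [0, 5, 1, 9, 2, 0, 7, 3],
    "bytes_sent": [512, 20480, 1024, 51200, 2048, 256, 30720, 4096],
    "off_hours":  [0, 1, 0, 1, 0, 0, 1, 0],
    "label":      ["allow", "deny", "allow", "deny",
                   "allow", "allow", "deny", "allow"],
})

X = sessions.drop(columns="label")
y = sessions["label"]

# A shallow decision tree doubles as a source of human-readable rules,
# in the spirit of the rules mentioned in the abstract.
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X, y)
print(export_text(clf, feature_names=list(X.columns)))

# Deciding automatically on a future URL request:
new_session = pd.DataFrame([{"url_length": 110, "num_params": 8,
                             "bytes_sent": 40960, "off_hours": 1}])
print("decision:", clf.predict(new_session)[0])
```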

You can check the presentation at: .

[Paper] My life as a sim: evolving unique and engaging life stories using virtual worlds

Our latest publication My life as a sim: evolving unique and engaging life stories using virtual worlds, using our framework MADE (created by @rubenhek), has been published in the proceedings of the ALIFE 2014 conference. The abstract:

Stories are not only painfully woven by crafty writers in the solitude of their studios; they also have to be produced massively for non-player characters in the video game industry or tailored to particular tastes in personalized stories. However, the creation of fictional stories is a very complex task that usually implies a creative process where the author has to combine characters, conflicts and backstories to create an engaging narrative. This work describes a general methodology to generate cohesive and coherent backstories where desired archetypes (universally accepted literary symbols) can emerge in complex stochastic systems. This methodology supports the modeling and parametrization of the agents, the environment where they will live and the desired literary setting. The use of a Genetic Algorithm (GA) is proposed to establish the parameter configuration that will lead to backstories that best fit the setting. Information extracted from a simulation can then be used to create the literary work. To demonstrate the adequacy of the methodology, we perform an implementation using a specific multi-agent system and evaluate the results, testing with three different literary settings.
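
To make the idea of archetypes emerging from a simulation more concrete, here is a tiny Python sketch of how the fitness of a finished simulation could be computed; the event names and the “avenger” archetype below are invented for illustration and do not correspond to the actual MADE implementation.

```python
def matches_avenger(life_story):
    """An agent fits this toy archetype if a relative is killed and the agent
    later kills the murderer."""
    murderer = None
    for event in life_story:
        if event["type"] == "relative_killed":
            murderer = event["by"]
        elif event["type"] == "kills" and event["target"] == murderer:
            return True
    return False

def archetype_fitness(world_log, desired_count=3):
    """Score a finished simulation by how close the number of agents matching
    the archetype is to the number the literary setting asks for."""
    matches = sum(matches_avenger(story) for story in world_log.values())
    return -abs(matches - desired_count)   # 0 is best

# Example: a tiny fake world log with two agents.
world_log = {
    "agent_1": [{"type": "relative_killed", "by": "agent_2"},
                {"type": "kills", "target": "agent_2"}],
    "agent_2": [{"type": "kills", "target": "agent_3"}],
}
print(archetype_fitness(world_log, desired_count=1))   # -> 0 (perfect match)
```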

Check out the presentation by @jjmerelo at http://jj.github.io/alife14-made/#/home. You can download the proceedings of the conference (CC license), or download the paper draft.

More information is available on the project page.

[Paper] A Methodology to Develop Service Oriented Evolutionary Algorithms

Paper about our methodology for service oriented EAs

OSGiLiath Evolutionary Framework

Our paper A Methodology to Develop Service Oriented Evolutionary Algorithms has been published in the proceedings of the 8th International Symposium on Intelligent Distributed Computing – IDC’2014. This paper is a summary of the SOA-EA methodology, one of the chapters of my thesis (available here). The presentation, made by JJ Merelo, is quite cool:

http://jj.github.io/pres-idc-2014/index.html#/home

The abstract:

This paper proposes a methodology to design and implement Evolutionary Algorithms using the Service Oriented Architecture paradigm. This paradigm makes it possible to deal with some of the shortcomings in the Evolutionary Algorithms area, facilitating the development, integration and standardization of the services that make up an evolutionary algorithm, as well as the dynamic alteration of those elements at runtime.
A four-step methodology to design services for Evolutionary Algorithms is presented: identification, specification, implementation and deployment. As an example of the application of this methodology, an adaptive algorithm is also developed.
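
The actual framework (OSGiLiath) builds on OSGi services, but the core idea can be illustrated with a short sketch in any language: every EA operator is specified as a service interface, concrete implementations are developed independently, and the bound implementation can be replaced while the algorithm runs. The Python names below are purely illustrative, not part of the methodology itself.

```python
import random
from abc import ABC, abstractmethod

class MutationService(ABC):                 # specification: define the contract
    @abstractmethod
    def mutate(self, genome: list) -> list: ...

class BitFlipMutation(MutationService):     # implementation: one concrete service
    def mutate(self, genome):
        i = random.randrange(len(genome))
        return genome[:i] + [1 - genome[i]] + genome[i + 1:]

class SwapMutation(MutationService):        # an alternative, interchangeable service
    def mutate(self, genome):
        i, j = random.sample(range(len(genome)), 2)
        genome = genome[:]
        genome[i], genome[j] = genome[j], genome[i]
        return genome

class EvolutionaryAlgorithm:
    """The algorithm depends only on the service interface (deployment:
    whichever service is bound gets used, and can be replaced mid-run)."""
    def __init__(self, mutation: MutationService):
        self.mutation = mutation

    def set_mutation(self, mutation: MutationService):
        self.mutation = mutation            # dynamic alteration at runtime

    def step(self, population):
        return [self.mutation.mutate(ind) for ind in population]

ea = EvolutionaryAlgorithm(BitFlipMutation())
pop = [[0, 1, 0, 1], [1, 1, 0, 0]]
pop = ea.step(pop)
ea.set_mutation(SwapMutation())             # e.g. an adaptive operator swap
pop = ea.step(pop)
print(pop)
```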

You can download the paper draft from…


Free access to paper accepted at GECCO’14

For one month, papers accepted at GECCO’14 will be freely available. Thus, you can get and read our papers:

  • “Assessing different architectures for evolutionary algorithms in JavaScript” by Juan Julián Merelo, Pedro Castillo, Antonio Mora, Anna I. Esparcia-Alcázar, Víctor M. Rivas Santos (doi: 10.1145/2598394.2598460) at http://goo.gl/jqLud5
  • “NodEO, a multi-paradigm distributed evolutionary algorithm platform in JavaScript” by Juan-Julián Merelo, Pedro Castillo, Antonio Mora, Anna Esparcia-Alcázar, Víctor Rivas-Santos (doi: 10.1145/2598394.2605688) at http://goo.gl/eFmv1T
  • “Enforcing corporate security policies via computational intelligence techniques” by Antonio M. Mora, Paloma De las Cuevas, Juan Julián Merelo, Sergio Zamarripa, Anna I. Esparcia-Alcázar (doi: 10.1145/2598394.2605438) at http://goo.gl/33gWES
  • “A methodology for designing emergent literary backstories on non-player characters using genetic algorithms” by Rubén Héctor García-Ortega, Pablo García-Sánchez, Antonio Miguel Mora, Juan Julián Merelo (doi: 10.1145/2598394.2598482) at http://goo.gl/9CEcMc

Enjoy!

New poster accepted for GECCO’14

Assessing different architectures for evolutionary algorithms in JavaScript is one of the papers that GeNeura has sent to GECCO 2014 (to be held in Vancouver, Canada, next July).

The authors of the paper are J.J. Merelo, Pedro Castillo, Antonio Mora and Víctor Rivas from the GeNeura Team, and Anna Esparcia-Alcázar, from S2 Grupo.

Unreliable Heterogeneous Workers in a pool-based evolutionary algorithm

by Mario Garcia-Valdez, Juan-J. Merelo, Francisco Fernández de Vega
in EvoAPPS posters

In this paper the effect of node unavailability in algorithms using EvoSpace, a pool-based evolutionary algorithm, is assessed. EvoSpace is a framework for developing evolutionary algorithms (EAs) using heterogeneous and unreliable resources. It is based on Linda’s tuple space coordination model. The core elements of EvoSpace are a central repository for the evolving population and remote clients, here called EvoWorkers, which pull random samples of the population and perform on them the basic evolutionary processes (selection, variation and survival); once the work is done, the modified sample is pushed back to the central population. To address the problem of unreliable EvoWorkers, EvoSpace uses a simple re-insertion algorithm based on copies of the samples stored in a global queue, which also prevents the starvation of the population pool. Using a benchmark problem from the P-Peaks problem generator, we have compared two approaches: (i) the re-insertion of previous individuals, at the cost of keeping copies of each sample, and (ii) the common approach of other pool-based EAs, inserting randomly generated individuals. We found that EvoSpace is fault tolerant to highly unreliable resources, and also that the re-insertion algorithm is only needed when the population is near the point of starvation.
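
For a rough idea of the pull/push cycle and the re-insertion queue described above, here is a single-process Python sketch; the real EvoSpace uses a remote, tuple-space-like store and networked EvoWorkers, and all names and numbers below are illustrative only.

```python
import random
from collections import deque

class Pool:
    def __init__(self, individuals):
        self.population = list(individuals)
        self.reinsertion_queue = deque()        # copies of pulled samples

    def take_sample(self, size):
        sample = random.sample(self.population, min(size, len(self.population)))
        for ind in sample:
            self.population.remove(ind)
        self.reinsertion_queue.append(list(sample))  # keep a copy in case the worker dies
        return sample

    def put_back(self, sample):
        self.population.extend(sample)

    def reinsert_if_starving(self, minimum):
        # Only when the pool is near starvation, re-insert stored copies
        # instead of randomly generated individuals.
        while len(self.population) < minimum and self.reinsertion_queue:
            self.population.extend(self.reinsertion_queue.popleft())

def evo_worker(sample):
    """Selection, variation and survival on the local sample
    (here: a toy one-bit mutation on bit-string individuals)."""
    out = []
    for ind in sample:
        i = random.randrange(len(ind))
        out.append(ind[:i] + [1 - ind[i]] + ind[i + 1:])
    return out

pool = Pool([[random.randint(0, 1) for _ in range(8)] for _ in range(20)])
for step in range(10):
    sample = pool.take_sample(4)
    if random.random() < 0.3:                   # simulate an unreliable worker
        pool.reinsert_if_starving(minimum=8)    # sample lost: fall back to copies
        continue
    pool.put_back(evo_worker(sample))
print(len(pool.population), "individuals left in the pool")
```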