Towards automatic StarCraft strategy generation using genetic programming

I forgot to mention that we published our paper “Towards automatic StarCraft strategy generation using genetic programming” at the CIG 2015 conference, held in Taiwan. This work was done in collaboration with Alberto Tonda (INRA) and Giovanni Squillero (Politecnico di Torino), and it starts a new research line using this game (as well as other nice collaborations that are still a secret!).

The abstract:

Among Real-Time Strategy games, few titles have enjoyed the continued success of StarCraft. Many research lines aimed at developing Artificial Intelligences, or “bots”, capable of challenging human players, use StarCraft as a platform. Several characteristics make this game particularly appealing for researchers, such as: asymmetric balanced factions, considerable complexity of the technology trees, large number of units with unique features, and potential for optimization at both the strategic and the tactical level. In the literature, various works exploit evolutionary computation to optimize particular aspects of the game, from squad formation to map exploration; but so far, no evolutionary approach has been applied to the development of a complete strategy from scratch. In this paper, we present the preliminary results of StarCraftGP, a framework able to evolve a complete strategy for StarCraft, from the building plan, to the composition of squads, up to the set of rules that define the bot’s behavior during the game. The proposed approach generates strategies as C++ classes that are then compiled and executed inside the OpprimoBot open-source framework. In a first set of runs, we demonstrate that StarCraftGP ultimately generates a competitive strategy for a Zerg bot, able to defeat several human-designed bots.

Do you want to know more? Download the paper draft, or the electronic version from the IEEE website.
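
If you are curious about what “strategies as C++ classes” looks like in practice, here is a minimal, purely illustrative Python sketch. All names in it (StrategyBase, buildUnit, attackWithSquad…) are invented for the example and are not the actual StarCraftGP or OpprimoBot API; the point is only to show how a linear genome of condition/action rules could be rendered into C++ source, which would then be compiled and scored by playing full games.

    import random

    # A tiny rule vocabulary: (condition, action) pairs expressed as C++ snippets.
    CONDITIONS = [
        "minerals() > 300",
        "supplyUsed() > 0.8 * supplyTotal()",
        "enemyNearBase()",
    ]
    ACTIONS = [
        "buildUnit(ZERGLING, 6);",
        "buildBuilding(HATCHERY);",
        "attackWithSquad(0);",
    ]

    def random_genome(length=5, rng=random):
        """A genome is simply an ordered list of (condition, action) indices."""
        return [(rng.randrange(len(CONDITIONS)), rng.randrange(len(ACTIONS)))
                for _ in range(length)]

    def render_strategy(genome, class_name="EvolvedZergStrategy"):
        """Render the genome as the source of a C++ strategy class."""
        rules = "\n".join(
            f"    if ({CONDITIONS[c]}) {{ {ACTIONS[a]} }}" for c, a in genome
        )
        return (
            f"class {class_name} : public StrategyBase {{\n"
            "public:\n"
            "  void computeActions() override {\n"
            f"{rules}\n"
            "  }\n"
            "};\n"
        )

    if __name__ == "__main__":
        print(render_strategy(random_genome()))
        # In the real pipeline, the generated source is compiled against the bot
        # framework and the resulting bot plays full games; the outcome of those
        # games is the fitness that drives the genetic programming loop.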

How the world was MADE: parametrisation of evolved agent-based models for backstory generation

Our paper “How the world was MADE: parametrisation of evolved agent-based models for backstory generation” has been accepted at the EvoStar 2015 conference. This paper continues our previous work published at ALIFE. You can also download the MADE framework source and binaries to try it yourself, and read the draft in our repository on GitHub.

The abstract:

Generating fictional environments for a multi-agent system optimized by genetic algorithms (with specific requirements related to the desired plots) presents two main problems: first, it is impossible to know in advance the optimal value of the designed fitness function; second, it creates a vast search space for the parameters that it needs. The purpose of this paper is to define a methodology to find the best parameter values for both the evolutionary algorithm and the fictional world configuration itself. This design involves running, to completion, a world simulation represented as a chromosome and assigning a fitness to it, thus composing a very complex fitness landscape.

In order to optimize the resources allocated to evolution, and to have some guarantee that the final result will be close to the optimum, we systematically analyse a set of possible values of the most relevant parameters, obtaining a set of generic rules. These rules, when applied to the plot requirements, and thus to the fitness function, lead to a reduced range of parameter values that help the storyteller create optimal worlds with a reduced computation budget.
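
To give a flavour of what such a parameter study looks like, here is a toy sketch (this is not the actual MADE code; the “world simulation” and fitness below are invented stand-ins): it simply sweeps combinations of EA settings and world settings, and scores each configuration with a short evolutionary run.

    import itertools
    import random

    def simulate_world(chromosome, world_size):
        """Stand-in for running a world to completion: the 'plot requirement'
        here is simply that the agents' traits sum close to the world size."""
        return -abs(sum(chromosome) - world_size)

    def short_ea_run(pop_size, mut_rate, world_size, generations=30, rng=random):
        """A short evolutionary run whose evaluation is the world simulation."""
        pop = [[rng.randint(0, 10) for _ in range(8)] for _ in range(pop_size)]
        best = max(simulate_world(c, world_size) for c in pop)
        for _ in range(generations):
            parents = sorted(pop, key=lambda c: simulate_world(c, world_size),
                             reverse=True)[:pop_size // 2]
            pop = [[g if rng.random() > mut_rate else rng.randint(0, 10)
                    for g in rng.choice(parents)]
                   for _ in range(pop_size)]
            best = max(best, max(simulate_world(c, world_size) for c in pop))
        return best

    if __name__ == "__main__":
        # Sweep EA parameters (population size, mutation rate) together with a
        # world parameter (size); 0 is the optimal fitness in this toy setting.
        for pop_size, mut_rate, world_size in itertools.product(
                [20, 50], [0.05, 0.2], [30, 60]):
            best = short_ea_run(pop_size, mut_rate, world_size)
            print(f"pop={pop_size:2d} mut={mut_rate:.2f} "
                  f"world={world_size:2d} best={best}")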

Evolving Evil: Optimizing Flocking Strategies through Genetic Algorithms for the Ghost Team in the Game of Ms. Pac-Man

by Federico Liberatore, Antonio Mora, Pedro Castillo and Juan Julián Merelo, in EvoGAMES
Flocking strategies are sets of behavior rules for the interaction of agents that make it possible to devise controllers with reduced complexity that generate emergent behavior. In this paper, we present an application of genetic algorithms and flocking strategies to control the Ghost Team in the game Ms. Pac-Man. In particular, we define flocking strategies for the Ghost Team and optimize them, by means of a genetic algorithm, for robustness with respect to the stochastic elements of the game and for effectiveness against different possible opponents.
The performance of the proposed methodology is tested and compared with that of other standard controllers from the framework of the Ms. Pac-Man versus Ghosts Competition. The results show that flocking strategies are capable of modelling complex behaviors and of producing effective and challenging agents.
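
As a rough illustration of the idea (the encoding, steering forces and fitness below are toy stand-ins, not the ones used in the paper), a flocking strategy can be reduced to a small vector of steering weights, which a genetic algorithm tunes while averaging over several stochastic episodes to reward robustness:

    import random

    def steering(ghost, ghosts, pacman, w):
        """Weighted sum of cohesion, separation and chase forces (2-D)."""
        cx = sum(g[0] for g in ghosts) / len(ghosts) - ghost[0]
        cy = sum(g[1] for g in ghosts) / len(ghosts) - ghost[1]
        sx = sum(ghost[0] - g[0] for g in ghosts)
        sy = sum(ghost[1] - g[1] for g in ghosts)
        px, py = pacman[0] - ghost[0], pacman[1] - ghost[1]
        return (w[0]*cx + w[1]*sx + w[2]*px, w[0]*cy + w[1]*sy + w[2]*py)

    def play_episode(weights, rng):
        """Toy stochastic episode: score is how close the ghosts end up."""
        ghosts = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(4)]
        pacman = (rng.uniform(0, 10), rng.uniform(0, 10))
        for _ in range(50):
            forces = [steering(g, ghosts, pacman, weights) for g in ghosts]
            ghosts = [(g[0] + 0.1*dx, g[1] + 0.1*dy)
                      for g, (dx, dy) in zip(ghosts, forces)]
            pacman = (pacman[0] + rng.uniform(-0.3, 0.3),
                      pacman[1] + rng.uniform(-0.3, 0.3))
        return -min(abs(g[0]-pacman[0]) + abs(g[1]-pacman[1]) for g in ghosts)

    def fitness(weights, episodes=10, seed=0):
        """Average over several stochastic episodes, as a proxy for robustness."""
        rng = random.Random(seed)
        return sum(play_episode(weights, rng) for _ in range(episodes)) / episodes

    if __name__ == "__main__":
        rng = random.Random(42)
        pop = [[rng.uniform(0, 1) for _ in range(3)] for _ in range(20)]
        for _ in range(15):                 # keep the best half, mutate to refill
            pop.sort(key=fitness, reverse=True)
            pop = pop[:10] + [[w + rng.gauss(0, 0.1) for w in p] for p in pop[:10]]
        print("best weights:", [round(w, 2) for w in max(pop, key=fitness)])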

The presentation is:


You can also see a brief demo here (we are the ghosts :D):

Enjoy it!

(And cite us, of course :D)

Hackathon in Videogames at EVO* 2014


Hi all,

Finally, the EVOHackathon will be held at the Oficina de Software Libre on Tuesday 22 April (one day before the EvoGAMES conference).

There are 5 projects confirmed right now, namely:

  • Creating autonomous agents for the Super Mario Bros. game
  • Creating an AI engine for the game Wetland (Greyman Studios)
  • Creating bots for 1 vs 1 combats in the RTS Planet Wars
  • Procedural generation of stages for a new game (Greyman Studios)
  • Progamer: Code visualization tool based on Super Mario Bros. levels

As you can see, two of them have been proposed, and will be directed, by a video game company.

We invite you to join us. It is free! :D

Unreal Expert Bots at IWANN 2013

Last week IWANN 2013, an international conference mainly devoted to research in the field of neural networks, was held in Tenerife. There, Antonio Fernández Leiva, Raúl Lara and I organized the Special Session on Artificial Intelligence and Games.

There were five works in the session, one of them being “Designing and Evolving an Unreal Tournament 2004 Expert Bot”.

It describes the design and improvement, through off-line (i.e., not during the game) evolution, of an autonomous agent (or bot) for playing the game Unreal Tournament 2004. The bot was created by means of a finite state machine which models the expert behaviour of a human player in the 1 vs 1 deathmatch mode, following the rules of the international competition.

Then, the bot was improved by means of a genetic algorithm, yielding an agent that is, in turn, a very hard opponent for a medium-level human player and that can (easily) beat the default bots in the game, even at the maximum difficulty level.
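
To make the idea a bit more concrete, here is a hedged sketch in Python; the states, sensors and thresholds are invented for illustration and are not the actual bot's design. It only shows the general recipe: a hand-built finite state machine with a few numeric parameters, and a genetic algorithm tuning those parameters off-line against a match-based score.

    import random
    from dataclasses import dataclass

    @dataclass
    class BotParams:
        flee_health: float      # retreat when health drops below this value
        attack_range: float     # engage when the enemy is closer than this
        item_priority: float    # probability of collecting items when idle

    def decide_state(health, enemy_dist, p, rng):
        """Pick the FSM state for the current tick: ATTACK, RETREAT or COLLECT."""
        if health < p.flee_health:
            return "RETREAT"
        if enemy_dist < p.attack_range:
            return "ATTACK"
        return "COLLECT" if rng.random() < p.item_priority else "ATTACK"

    def evaluate(p, matches=40, seed=0):
        """Stand-in for playing off-line matches: reward sensible decisions."""
        rng = random.Random(seed)
        score = 0
        for _ in range(matches):
            health, enemy_dist = rng.uniform(0, 100), rng.uniform(0, 50)
            state = decide_state(health, enemy_dist, p, rng)
            if state == "ATTACK" and health > 40 and enemy_dist < 25:
                score += 1      # aggressive when healthy and close: good
            if state == "RETREAT" and health < 25:
                score += 1      # retreating when badly hurt: good
        return score

    if __name__ == "__main__":
        rng = random.Random(1)
        pop = [BotParams(rng.uniform(0, 100), rng.uniform(0, 50), rng.random())
               for _ in range(30)]
        for _ in range(25):     # keep the best half, mutate it to refill
            pop.sort(key=evaluate, reverse=True)
            pop = pop[:15] + [BotParams(p.flee_health + rng.gauss(0, 5),
                                        p.attack_range + rng.gauss(0, 3),
                                        min(1.0, max(0.0, p.item_priority
                                                     + rng.gauss(0, 0.1))))
                              for p in pop[:15]]
        best = max(pop, key=evaluate)
        print(f"evolved thresholds: flee below {best.flee_health:.0f} hp, "
              f"attack within {best.attack_range:.0f} units")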

The presentation can be seen at:

Moreover, you can watch one example of the evolution in the following video:

Finally, the source code of both the Unreal Expert bot and the Genetic bot is available at https://github.com/franaisa/ExpertAgent

Enjoy them. ;)

More results of the Hispano-Mexican collaboration

As a result of a collaboration with Mario García Valdez, Leonardo Trujillo and Francisco Fernández (the latter from Spain), we have published two papers based on the EvoSpace framework, a pool-based evolutionary architecture for interactive and straight evolutionary computation. The first paper describes EvoSpace-i, the interactive part, and is well described by Paco Fernández in our group blog, and the

Super Mario autonomous agents at LION 2013

Recently, at the LION 7 (2013) conference (in the Special Session on Games and Computational Intelligence), the paper entitled “FSM-Based Agents for Playing Super Mario Game” was presented.

It describes the implementation and testing of an autonomous agent which can play the Super Mario game better than an expert user can (in some trained levels).
The agent is built starting from a finite state machine and then improved by applying an evolutionary algorithm.
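
As a purely illustrative sketch (the states, sensors and actions below are invented, and the real Mario AI benchmark API is different), this is the flavour of such a reactive finite state machine; the evolutionary algorithm would then tune its thresholds and transition choices:

    from enum import Enum, auto

    class State(Enum):
        RUN = auto()
        JUMP_GAP = auto()
        JUMP_ENEMY = auto()

    def transition(state, gap_ahead, enemy_ahead, enemy_dist, jump_dist=2):
        """Choose the next FSM state from simple boolean/threshold sensors."""
        if gap_ahead:
            return State.JUMP_GAP
        if enemy_ahead and enemy_dist <= jump_dist:
            return State.JUMP_ENEMY
        return State.RUN

    def action(state):
        """Map each state to a (right, jump, speed) button combination."""
        return {
            State.RUN:        (True, False, True),
            State.JUMP_GAP:   (True, True,  True),
            State.JUMP_ENEMY: (True, True,  False),
        }[state]

    if __name__ == "__main__":
        # A tiny scripted scenario instead of the real game loop:
        # (gap ahead?, enemy ahead?, enemy distance)
        scenario = [(False, False, 9), (False, True, 2), (True, False, 9)]
        state = State.RUN
        for gap, enemy, dist in scenario:
            state = transition(state, gap, enemy, dist)
            print(state.name, "->", action(state))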

The presentation is:

You can watch one example of the obtained agent playing a game here:

Enjoy it. ;)