Creating Hearthstone decks using Genetic Algorithms

I’m glad you’re here, friend! There’s a chill outside, so pull up a chair by the hearth of our inn and prepare to learn how the Ancient Gods use the power of a secret and ancient branch of Evolution to generate Hearthstone decks by means of magic and mystery!!


Several months ago, my colleague Alberto Tonda and I were discussing our latest adventures playing the Digital Collectible Card Game Hearthstone, when one of us said “Uhm, Genetic Algorithms usually work well with combinatorial problems, and solutions are usually a vector of elements. Elements such as cards. Such as cards of Hearthstone, the game we are playing right now while we are talking. Are you thinking what I’m thinking?”

Five minutes later we had found an open-source Hearthstone simulator and started to think about how to automatically evolve Hearthstone decks.

The idea is quite simple: Hearthstone is played using a deck of 30 cards (from a pool of thousands available), so it is easy to model the candidate solution. With the simulator, we can perform several matches using different enemy decks, and obtain the number of victories. Therefore, we have a number that can be used to model the performance (fitness) of the deck.
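If you like code better than prose, here is a minimal sketch (in Python) of what that encoding and fitness could look like. The card pool and the `simulate_match` stub are placeholders standing in for the real simulator and card database, not the code we actually used:

```python
import random

# Hypothetical card pool; in the real experiments the cards come from the simulator.
CARD_POOL = [f"card_{i}" for i in range(1000)]
DECK_SIZE = 30

def random_deck():
    """A candidate solution is simply a list of 30 cards from the pool."""
    return random.sample(CARD_POOL, DECK_SIZE)

def simulate_match(deck, enemy_deck):
    """Placeholder for the Hearthstone simulator: returns True if `deck` wins."""
    return random.random() < 0.5

def fitness(deck, enemy_decks, matches_per_enemy=10):
    """Fitness of a deck = number of victories against a fixed set of enemy decks."""
    wins = 0
    for enemy in enemy_decks:
        for _ in range(matches_per_enemy):
            if simulate_match(deck, enemy):
                wins += 1
    return wins
```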

Soooo, it’s easy to see that one and one makes two, two and one makes three, and it was destiny that we created a genetic algorithm that generates decks for Hearthstone for free.

Our preliminary results were discussed here, but we wanted to keep testing our method, so we extended the experiments to all the available classes of the game, with the help of JJ, Giovanny and Antonio. All the best human-made decks were outperformed by our approach! And not only that: we also applied a new operator called Smart Mutation, which is based on what we do when we test new decks in Hearthstone: we remove a card and put in another one whose cost differs by +/-1 mana crystals, rather than a completely random card from the pool. The results were even better. Neat!
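As a rough illustration, a Smart-Mutation-like operator could be sketched as follows; the `mana_cost` lookup and the one-crystal filter are our informal reading of the idea, not the exact implementation from the paper:

```python
import random

def smart_mutation(deck, card_pool, mana_cost):
    """Swap one card of the deck for another whose mana cost differs by at
    most one crystal, mimicking how human players tweak their decks.

    `card_pool` lists all available cards and `mana_cost` maps a card to
    its cost; both are assumed to come from the simulator's card data.
    """
    new_deck = list(deck)
    idx = random.randrange(len(new_deck))
    old_card = new_deck[idx]
    candidates = [c for c in card_pool
                  if c not in new_deck
                  and abs(mana_cost[c] - mana_cost[old_card]) <= 1]
    if candidates:  # if nothing fits the constraint, leave the deck unchanged
        new_deck[idx] = random.choice(candidates)
    return new_deck
```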

Maybe you prefer to read the abstract, which is written in a more formal way than this post. You know, using the language of science.

Collectible card games have been among the most popular and profitable products of the entertainment industry since the early days of Magic: The Gathering in the nineties. Digital versions have also appeared, with HearthStone: Heroes of WarCraft being one of the most popular. In Hearthstone, every player can play as a hero, from a set of nine, and build his/her deck before the game from a big pool of available cards, including both neutral and hero-specific cards.
This kind of games offers several challenges for researchers in artificial intelligence since they involve hidden information, unpredictable behaviour, and a large and rugged search space. Besides, an important part of player engagement in such games is a periodical input of new cards in the system, which mainly opens the door to new strategies for the players. Playtesting is the method used to check the new card sets for possible design flaws, and it is usually performed manually or via exhaustive search; in the case of Hearthstone, such test plays must take into account the chosen hero, with its specific kind of cards.
In this paper, we present a novel idea to improve and accelerate the playtesting process, systematically exploring the space of possible decks using an Evolutionary Algorithm (EA). This EA creates HearthStone decks which are then played by an AI versus established human-designed decks. Since the space of possible combinations that are play-tested is huge, search through the space of possible decks has been shortened via a new heuristic mutation operator, which is based on the behaviour of human players modifying their decks.
Results show the viability of our method for exploring the space of possible decks and automating the play-testing phase of game design. The resulting decks, that have been examined for balancedness by an expert player, outperform human-made ones when played by the AI; the introduction of the new heuristic operator helps to improve the obtained solutions, and basing the study on the whole set of heroes shows its validity through the whole range of decks.

You can download the complete paper from the Knowledge-Based Systems journal: https://www.sciencedirect.com/science/article/pii/S0950705118301953

See you in future adventures!!!


Early prediction of the outcome of StarCraft games

As a result of Antonio Álvarez Caballero's master’s thesis, tomorrow we’ll be presenting a poster on the early prediction of StarCraft games at the IJCCI 2017 conference.
The basic idea behind this line of research is to try and find a model of the game so that we can do fast fitness evaluation of strategies without playing the whole game, which can take up to 60 minutes. That way, we can optimize those strategies in an evolutionary algorithm and find the best ones.
In our usual open science style, paper and data are available in a repository.
Our conclusions say that we might be able to pull that off using the k-nearest neighbours algorithm. But we will have to investigate a bit further if we really want a model that gives us some insight into what makes a strategy a winner.
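For the curious, this is roughly the kind of model we mean: a minimal sketch using scikit-learn's k-nearest-neighbours classifier, where the features and labels are random stand-ins for the real early-game replay data:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Random stand-ins for real replay data: each row would hold early-game
# features (e.g. resources, units, supply at a given minute) and the label
# says whether player A eventually won the game.
X = np.random.rand(500, 6)
y = np.random.randint(0, 2, 500)

model = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(model, X, y, cv=10)
print(f"Estimated early-prediction accuracy: {scores.mean():.2f}")
```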


Dark clouds allow early prediction of heavy rain in Funchal, near where IJCCI is taking place

Our TORCS driving controller presented at EvoGAMES 2017

Last week, @jjmerelo presented our work titled “Driving in TORCS using modular fuzzy controllers” at EvoGAMES 2017 (inside Evo* 2017).

This paper presents a novel car racing controller for TORCS (The Open Racing Car Simulator), which is based on the combination of two fuzzy subcontrollers: one for setting the speed and one for controlling the steering angle. The results are quite promising, as the controller is competitive even against very tough TORCS teams.

The abstract of the paper is:

When driving a car it is essential to take into account all possible factors; even more so when, like in the TORCS simulated race game, the objective is not only to avoid collisions, but also to win the race within a limited budget. In this paper, we present the design of an autonomous driver for a racing car in a simulated race. Unlike previous controllers, which only used fuzzy logic approaches for either acceleration or steering, the proposed driver simultaneously uses two fuzzy controllers for steering and for computing the target speed of the car at every moment of the race. They use the track border sensors as inputs and, for enhanced safety, also take into account the relative position of the other competitors. The proposed fuzzy driver is evaluated in practice and timed races, giving good results across a wide variety of racing tracks, mainly those that have many turning points.
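To give a flavour of the idea (not the actual controller from the paper), here is a toy sketch of two fuzzy-style sub-controllers, one for the target speed and one for the steering, driven by made-up sensor thresholds and output values:

```python
def ramp_down(x, lo, hi):
    """Membership that is 1 below `lo`, 0 above `hi`, linear in between ('near')."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def ramp_up(x, lo, hi):
    """Complementary membership ('far')."""
    return 1.0 - ramp_down(x, lo, hi)

def target_speed(front_sensor):
    """Speed sub-controller: the clearer the track ahead, the higher the
    target speed (weighted-average defuzzification over two toy rules)."""
    near = ramp_down(front_sensor, 20.0, 80.0)
    far = ramp_up(front_sensor, 20.0, 80.0)
    # IF the track ahead is near THEN go slow; IF it is far THEN go fast.
    return (near * 60.0 + far * 280.0) / (near + far)

def steering(left_sensor, right_sensor):
    """Steering sub-controller: steer away from the closer track border.
    Returns a value in [-1, 1]; positive means steering toward the right here."""
    left_close = ramp_down(left_sensor, 2.0, 15.0)
    right_close = ramp_down(right_sensor, 2.0, 15.0)
    return max(-1.0, min(1.0, left_close - right_close))
```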

There was an interactive presentation at the conference, together with a poster:

The paper is available online from: https://link.springer.com/chapter/10.1007/978-3-319-55849-3_24

Enjoy (and cite) it! :D

 

Master of Evolution! Using Genetic Algorithms to generate decks for the game HearthStone

This September we attended the IEEE CIG 2017 conference in Santorini, Greece, to present the paper “Evolutionary Deckbuilding in HearthStone”. This paper was written in collaboration with our colleagues Alberto Tonda and Giovanni Squillero.

The story of this paper started a (not so) long time ago while Alberto and I were discussing how awesome HearthStone is. Suddenly, we thought about how easy it would be to create the constraints for the uGP framework, and that there were some open-source simulators of the game. Before long, we already had the constraints, the simulator adapted to accept individuals from uGP, and some experiments running.

And then we finished the paper, of course.

You can download the paper draft from here (the original electronic version is not available yet).

And here is the presentation:

The abstract:

One of the most notable features of collectible card games is deckbuilding, that is, defining a personalized deck before the real game. Deckbuilding is a challenge that involves a big and rugged search space, with different and unpredictable behaviour after simple card changes and even hidden information. In this paper, we explore the possibility of automated deckbuilding: a genetic algorithm is applied to the task, with the evaluation delegated to a game simulator that tests every potential deck against a varied and representative range of human-made decks.
In these preliminary experiments, the approach has proven able to create quite effective decks, a promising result that proves that, even in this challenging environment, evolutionary algorithms can find good solutions.

Towards automatic StarCraft strategy generation using genetic programming

I forgot to mention that we published our paper “Towards automatic StarCraft strategy generation using genetic programming” at the CIG 2015 conference, held in Taiwan. This work was done in collaboration with Alberto Tonda (INRA) and Giovanni Squillero (Politecnico di Torino), starting a new research line using this game (and also starting other nice collaborations that are still a secret!)

The abstract:

Among Real-Time Strategy games few titles have enjoyed the continued success of StarCraft. Many research lines aimed at developing Artificial Intelligences, or “bots”, capable of challenging human players, use StarCraft as a platform. Several characteristics make this game particularly appealing for researchers, such as: asymmetric balanced factions, considerable complexity of the technology trees, large number of units with unique features, and potential for optimization both at the strategical and tactical level. In literature, various works exploit evolutionary computation to optimize particular aspects of the game, from squad formation to map exploration; but so far, no evolutionary approach has been applied to the development of a complete strategy from scratch. In this paper, we present the preliminary results of StarCraftGP, a framework able to evolve a complete strategy for StarCraft, from the building plan, to the composition of squads, up to the set of rules that define the bot’s behavior during the game. The proposed approach generates strategies as C++ classes, that are then compiled and executed inside the OpprimoBot open-source framework. In a first set of runs, we demonstrate that StarCraftGP ultimately generates a competitive strategy for a Zerg bot, able to defeat several human-designed bots.
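As a purely illustrative sketch of the “strategies as code” idea, something along these lines could turn a genome of high-level actions into a C++ class ready to be compiled into a bot; the action names and the `StrategyBase` class are invented here and do not correspond to the real StarCraftGP or OpprimoBot code:

```python
import random

# Invented high-level actions that a genome could arrange into a build order.
ACTIONS = ["build_spawning_pool", "train_zerglings", "expand_base",
           "build_hydralisk_den", "train_hydralisks", "attack_enemy_base"]

CPP_TEMPLATE = """\
// Auto-generated strategy (sketch only; not the real StarCraftGP output)
class EvolvedStrategy : public StrategyBase {{
public:
    void onFrame() override {{
{body}
    }}
}};
"""

def random_genome(length=5):
    """A genome is an ordered list of high-level actions."""
    return [random.choice(ACTIONS) for _ in range(length)]

def genome_to_cpp(genome):
    """Render the genome as C++ source code for later compilation."""
    body = "\n".join(f"        {action}();" for action in genome)
    return CPP_TEMPLATE.format(body=body)

print(genome_to_cpp(random_genome()))
```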

Do you want to know more? Download the paper draft or the electronic version from the IEEE website.

Evostar 2015 mandatory post

We never miss the chance to attend the Evostar conference; besides learning about the latest trends in Evolutionary Computation and presenting our results, we also have a good time with our colleagues.

This time the conference was held in Copenhagen (Denmark), and because Antonio and I were part of the organization we didn’t have much time for sightseeing, but we did go to the Tivoli Gardens and ride the flying chairs (and screamed like babies).

On the scientific side, we presented two papers in the EvoGames track, related to our research lines on content generation for videogames and AI optimization. The first paper, How the World Was MADE: Parametrization of Evolved Agent-Based Models for Backstory Generation, presents a study on the parametrization of the values that define a virtual world, in order to facilitate the emergence of archetypes and to generate interesting backstories (for videogames, for example). See the poster here:


Also, as we are committed to open science and open software, you can download the MADE environment from its web page. The abstract:

Generating fiction environments for a multi-agent system optimized by genetic algorithms (with some specific requirements related to the desirable plots) presents two main problems: first, it is impossible to know in advance the optimal value for the particular designed fitness function, and at the same time, it creates a vast search space for the parameters that it needs. The purpose of this paper is to define a methodology to find the best parameter values for both the evolutionary algorithm and the fictional world configuration itself. This design includes running, to completion, a world simulation represented as a chromosome, and assigning a fitness to it, thus composing a very complex fitness landscape.
In order to optimize the resources allocated to evolution and to have some guarantees that the final result will be close to the optimum, we systematically analyze a set of possible values of the most relevant parameters, obtaining a set of generic rules. These rules, when applied to the plot requisites, and thus to the fitness function, will lead to a reduced range of parameter values that will help the storyteller to create optimal worlds with a reduced computation budget.

(That’s me with the IKEA rat plushies I used to describe our system)

Our other paper, It’s Time to Stop: A Comparison of Termination Conditions in the Evolution of Game Bots, describes a methodology to compare different termination conditions in noisy environments such as RTS games. The abstract:

Evolutionary Algorithms (EAs) are frequently used as a mechanism for the optimization of autonomous agents in games (bots), but knowing when to stop the evolution, when the bots are good enough, is not as easy as it would a priori seem. The first issue is that optimal bots are either unknown (and thus unusable as termination condition) or unreachable. In most EAs trying to find optimal bots, fitness is evaluated through game playing. Many times it is found to be noisy, making its use as a termination condition also complicated. A fixed amount of evaluations or, in the case of games, a certain level of victories does not guarantee an optimal result. Thus the main objective of this paper is to test several termination conditions in order to find the one that yields optimal solutions within a restricted amount of time, and that allows researchers to compare different EAs as fairly as possible. To achieve this we will examine several ways of finishing an EA that is searching for an optimal bot for a particular game, Planet Wars in this case, with the characteristics described above, determining the capabilities of each of them and, eventually, selecting one for future designs.
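A tiny sketch of two of the kinds of termination conditions one might compare (a fixed budget versus stagnation of the noisy best fitness); the thresholds are arbitrary examples, not the settings used in the paper:

```python
def should_stop(generation, best_history, max_generations=100, stall_limit=15):
    """Sketch of two termination conditions one might compare.

    1. Fixed budget: stop after `max_generations` generations.
    2. Stagnation: stop when the best (noisy) fitness has not improved for
       `stall_limit` consecutive generations.

    `best_history` holds the best fitness observed at each generation so far.
    """
    if generation >= max_generations:
        return True
    if len(best_history) > stall_limit:
        recent = best_history[-stall_limit:]
        if max(recent) <= best_history[-stall_limit - 1]:
            return True
    return False
```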

(Here’s Antonio presenting the paper)

You can see the rest of the Evostar photos on their Flickr account.

Ms. PacMan in IEEE Transactions on CI and AI in Games

Our fans and followers must be happy! ;D

They can now access the excellent work by Federico Liberatore in the IEEE ToCIAIG journal.

This is the best journal on Artificial Intelligence in games, with a very strict reviewing process, so we are very proud of this success. ;)

This is the next step in the research we started a year and a half ago, designing competitive ghost teams for catching Ms. Pac-Man.

The abstract is:

In the last year, thanks to the Ms. Pac-Man vs Ghosts competition, the game of Ms. Pac-Man has gained increasing attention from academics in the field of Computational Intelligence. In this work, we contribute to this research stream by presenting a simple Genetic Algorithm with Lexicographic Ranking (GALR) for the optimization of Flocking Strategy-based ghost controllers. Flocking Strategies are a paradigm for intelligent agents characterized by showing emergent behavior and for having very little computational and memory requirements, making them well suited for commercial applications and mobile devices. In particular, we study empirically the effect of optimizing homogeneous and heterogeneous teams. The computational analysis shows that the Flocking Strategy-based controllers generated by the proposed GALR outperform the ghost controllers included in the competition framework and some of those presented in the literature.
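To illustrate the lexicographic ranking part (not the actual GALR code), here is a sketch: individuals are compared on a primary objective, and ties are broken by a secondary one. The objective names below are invented for the example:

```python
# Each (invented) individual records two objectives of a ghost team.
population = [
    {"caught_rate": 0.8, "time_to_capture": 120},
    {"caught_rate": 0.8, "time_to_capture": 90},
    {"caught_rate": 0.6, "time_to_capture": 60},
]

def lexicographic_key(ind):
    # Primary objective: how often Ms. Pac-Man is caught (maximise).
    # Secondary objective: how quickly she is caught (minimise), used only to break ties.
    return (ind["caught_rate"], -ind["time_to_capture"])

# Best team first: higher catch rate wins, faster capture breaks ties.
ranked = sorted(population, key=lexicographic_key, reverse=True)
```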

The paper can be found here: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=7093170&tag=1

Enjoy it!

(And cite us) :D