Creating Hearthstone decks by using Genetic Algorithms

I’m glad you’re here, friend! There’s a chill outside, so pull up a chair by the hearth of our inn and prepare to learn how the Ancient Gods use the power of a secret and ancient branch of Evolution to generate Hearthstone decks by means of magic and mystery!!


Several months ago, my colleague Alberto Tonda and I were discussing our latest adventures playing the Digital Collectible Card Game Hearthstone, when one of us said “Uhm, Genetic Algorithms usually work well with combinatorial problems, and solutions are usually a vector of elements. Elements such as cards. Such as cards of Hearthstone, the game we are playing right now while we are talking. Are you thinking what I’m thinking?”

Five minutes later we found an open-source Hearthstone simulator and started to think about how to automatically evolve Hearthstone decks.

The idea is quite simple: Hearthstone is played using a deck of 30 cards (from a pool of thousands available), so it is easy to model the candidate solution. With the simulator, we can play several matches against different enemy decks and obtain the number of victories. Therefore, we have a number that can be used to measure the performance (fitness) of the deck.
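To make that encoding concrete, here is a minimal Python sketch of the idea (simplified, e.g. it ignores duplicate-card rules): a candidate solution is a list of 30 cards, and fitness is the number of victories reported by the simulator. The card pool and the simulate_match stub are hypothetical placeholders, not the actual simulator API.

```python
# Minimal sketch of the deck encoding and fitness described above.
# CARD_POOL and simulate_match are placeholders standing in for the real
# card database and the open-source Hearthstone simulator.
import random

DECK_SIZE = 30
CARD_POOL = [f"card_{i}" for i in range(1000)]    # hypothetical card names

def simulate_match(deck, enemy_deck):
    """Stand-in for the simulator: returns True if `deck` wins the match."""
    return random.random() < 0.5                  # replace with a real simulation

def random_deck():
    """A candidate solution is simply a selection of 30 cards from the pool."""
    return random.sample(CARD_POOL, DECK_SIZE)

def fitness(deck, opponent_decks, matches_per_opponent=10):
    """Fitness = number of victories against a fixed set of enemy decks."""
    return sum(
        simulate_match(deck, enemy)
        for enemy in opponent_decks
        for _ in range(matches_per_opponent)
    )
```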

Soooo, it’s easy to see that one and one makes two, two and one makes three, and it was destiny that we created a genetic algorithm that generates decks for Hearthstone for free.

Our preliminary results were discussed here, but we wanted to keep testing our method, so we extended the experiments to all available classes of the game, with the help of JJ, Giovanny and Antonio. All the best human-made decks were outperformed by our approach! And not only that: we applied a new operator called Smart Mutation, which is based on what we do when we test new decks in Hearthstone: we remove a card and put in another one with +/-1 mana crystals, rather than one picked completely at random from the pool. The results were even better. Neat!
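For illustration, here is a hedged sketch of how such a Smart Mutation could look in Python; the mana_cost mapping, the fallback behaviour and the exact mana tolerance are assumptions on my part, not the operator exactly as implemented in the paper.

```python
import random

def smart_mutation(deck, card_pool, mana_cost):
    """Swap one card for another whose mana cost differs by at most one crystal,
    instead of picking a completely random replacement from the pool."""
    mutant = list(deck)
    idx = random.randrange(len(mutant))
    old_cost = mana_cost[mutant[idx]]
    candidates = [c for c in card_pool
                  if c not in mutant and abs(mana_cost[c] - old_cost) <= 1]
    if candidates:                                # assumption: leave the deck
        mutant[idx] = random.choice(candidates)   # untouched if nothing fits
    return mutant
```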

Maybe you prefer to read the abstract, which is written in a more formal way than this post. You know, using the language of science.

Collectible card games have been among the most popular and profitable products of the entertainment industry since the early days of Magic: The Gathering in the nineties. Digital versions have also appeared, with HearthStone: Heroes of WarCraft being one of the most popular. In Hearthstone, every player can play as a hero, from a set of nine, and build his/her deck before the game from a big pool of available cards, including both neutral and hero-specific cards.
Games of this kind offer several challenges for researchers in artificial intelligence, since they involve hidden information, unpredictable behaviour, and a large and rugged search space. Besides, an important part of player engagement in such games is the periodic introduction of new cards into the system, which mainly opens the door to new strategies for the players. Playtesting is the method used to check the new card sets for possible design flaws, and it is usually performed manually or via exhaustive search; in the case of Hearthstone, such test plays must take into account the chosen hero, with its specific kind of cards.
In this paper, we present a novel idea to improve and accelerate the playtesting process, systematically exploring the space of possible decks using an Evolutionary Algorithm (EA). This EA creates HearthStone decks which are then played by an AI versus established human-designed decks. Since the space of possible combinations that are play-tested is huge, search through the space of possible decks has been shortened via a new heuristic mutation operator, which is based on the behaviour of human players modifying their decks.
Results show the viability of our method for exploring the space of possible decks and automating the play-testing phase of game design. The resulting decks, which have been examined for balance by an expert player, outperform human-made ones when played by the AI; the introduction of the new heuristic operator helps to improve the obtained solutions, and basing the study on the whole set of heroes shows its validity across the whole range of decks.

You can download the complete paper from the Knowledge-Based Systems journal: https://www.sciencedirect.com/science/article/pii/S0950705118301953

See you in future adventures!!!


A better TORCS driving controller presented at EvoStar 2018

Amazing bench
Last year, together with Mohammed Salem, from the University of Mascara in Algeria, we presented our TORCS driving controller. This controller effectively drives a simulated vehicle, taking input from its sensors and deciding on a target speed and how to turn the steering wheel.
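Purely as an illustration of what such a controller does, and not the controller from the paper, here is a sketch of the sensor-to-action mapping; the sensor layout (19 range finders, with index 9 pointing straight ahead) and all constants are assumptions.

```python
def drive(track_sensors, angle_to_track_axis, track_position):
    """Map simplified TORCS-style sensor input to (target_speed, steering)."""
    ahead = track_sensors[9]                       # distance to the track edge straight ahead
    target_speed = min(300.0, 50.0 + 2.0 * ahead)  # slow down when a bend is close
    # steer to realign with the track axis and drift back towards the centre line
    steering = 0.8 * angle_to_track_axis - 0.4 * track_position
    return target_speed, max(-1.0, min(1.0, steering))
```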
Poster session, with our poster in the first position
This year, at EvoStar 2018 in Parma, our paper was again accepted for the poster session, which took place in the incredible corridor to the right of these words. The poster included interactive elements, such as a small car used to demonstrate how the driver worked.

And it works really well, or at least better than the previous versions. The key element was the design of a new fitness function that includes damage as well as terms related to speed. There is still some way to go; in the near future we will be posting our new results in this area.
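A hedged sketch of a fitness function in that spirit, rewarding distance and average speed while penalising accumulated damage; the weights and the stat names are illustrative assumptions, not the ones from the paper.

```python
def controller_fitness(lap_stats, w_dist=1.0, w_speed=0.5, w_damage=0.01):
    """Combine distance, speed and damage terms into a single score."""
    return (w_dist * lap_stats["distance_raced"]
            + w_speed * lap_stats["average_speed"]
            - w_damage * lap_stats["damage"])
```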

The book of proceedings can be downloaded from Springer. Our paper is on page 342, and you can also download just the paper from here; but we do open science, so you can follow our writing process and download the paper from this GitHub repository too.


Detection and prediction of flows of people and vehicles

As part of the CIMAS 21 conference, which will be held in Granada, I will give a presentation on the possibilities of our WiFi and Bluetooth frame detection system, which we have already talked about several times.

The presentation will focus on the more analytical aspects of the platform, looking at the possibilities it can offer a tourist destination with a sports focus.


Early prediction of the outcome of StarCraft games

As a result of Antonio Álvarez Caballero’s master’s thesis, tomorrow we’ll be presenting a poster on the early prediction of StarCraft games at the IJCCI 2017 conference.
The basic idea behind this line of research is to try and find a model of the game so that we can do fast fitness evaluation of strategies without playing the whole game, which can take up to 60 minutes. That way, we can optimize those strategies in an evolutionary algorithm and find the best ones.
In our usual open science style, paper and data are available in a repository.
Our conclusions say that we might be able to pull that off using the k-nearest neighbours algorithm. But we might have to investigate a bit further if we really want to find a model that gives us some insight into what makes a strategy a winner.
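As a rough illustration of the approach, and not the exact pipeline from the paper, this is how a k-nearest-neighbours early-outcome predictor could be set up with scikit-learn; here X stands for per-game features observed early in the match and y for the eventual winner, both replaced below by toy synthetic data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def early_outcome_model(X, y, k=5):
    """Fit a k-NN predictor of the final winner and report held-out accuracy."""
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
    model = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    return model, model.score(X_test, y_test)

# toy synthetic data standing in for features extracted from real StarCraft games
X = np.random.rand(200, 8)
y = (X[:, 0] + X[:, 1] > 1).astype(int)
model, accuracy = early_outcome_model(X, y)
```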


Dark clouds allow early prediction of heavy rain in Funchal, near where IJCCI is taking place

StarCraft takes center stage at the weekly meeting of 30/Oct/2017

At the last meeting of the Geneura group, Victor walked the rest of the attendees through the paper Predicting the Winner in Two Player StarCraft Games, published at the CoSECiVi’15 conference by professor Antonio A. Sanchez-Ruiz.

The link to the presentation is at https://vrivas.github.io/explicando-sanchez-ruiz-2015/output/index.html


Self-organized criticality in software repositories, poster presented at ECAL 2017


The European Conference on Artificial Life, or ECAL, is not one of our usual suspects. Although we have attended from time to time, and even organized it back in 95 (yep, that is a real web page from 1995, minus the slate gray background), it is a conference I quite enjoy, together with other artificial-life-related conferences. Artificial life was quite the buzzword in the 90s, but nowadays, with all the deep learning and AI stuff, it has gone out of fashion. The last time I attended, ten years ago, it seemed more crowded. Be that as it may, I presented a tutorial and a poster about our work on looking for critical states in software repositories. This is the poster itself, and there is a link to the open access proceedings, although, as you know, all our papers are online and you can obtain that one (and a slew of other ones) from our repository.
This is a line of research we have been working on for a year now, starting from this initial paper where we examined a single repository, that of the Moose Perl module. We are looking for patterns that allow us to say whether repositories are in a critical state or not. Since they are completely artificial systems, engineering artefacts, looking for self-organized criticality might seem like a lost cause. On the other hand, it really clicks with our own experience when writing a paper, or anything really: you write in long stretches, and then you do small sessions where you change a line or two.
This paper, which covers all kinds of open source projects, from Docker to vue.js, looks at three different things: long-distance correlations, scale-free behavior of the changes, and pink noise in the spectral density of the time series of changes. And we do find them, almost everywhere. Most big repos, with more than a few hundred commits, show these signals, independently of their language or origin (hobbyist or company).
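As an illustration only, not the actual analysis code from the paper, here is a sketch of how two of those three signals could be checked on a time series of per-commit change sizes; the binning, the data source and the interpretation thresholds are assumptions.

```python
import numpy as np

def soc_signals(change_sizes):
    """Crude checks for a heavy-tailed change-size distribution and pink noise."""
    sizes = np.asarray(change_sizes, dtype=float)
    # log-log histogram slope as a rough proxy for a power-law (scale-free) exponent
    counts, edges = np.histogram(sizes[sizes > 0], bins=np.logspace(0, 4, 20))
    mask = counts > 0
    size_slope = np.polyfit(np.log(edges[:-1][mask]), np.log(counts[mask]), 1)[0]
    # spectral exponent of the change time series; a slope near -1 suggests 1/f (pink) noise
    spectrum = np.abs(np.fft.rfft(sizes - sizes.mean())) ** 2
    freqs = np.fft.rfftfreq(len(sizes))
    nonzero = freqs > 0
    spectral_slope = np.polyfit(np.log(freqs[nonzero]),
                                np.log(spectrum[nonzero] + 1e-12), 1)[0]
    return size_slope, spectral_slope
```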
There is still a lot of work ahead. What are the main mechanisms for this self-organization? Are there any exceptions? That will have to wait until the next conference.

Asynchronous, heterogeneous, pool-based evolutionary algorithms at GECCO 2017

Fresh back from GECCO 2017, which is probably the main event for evolutionary algorithms and other metaheuristics. Together with the conference proper, there are workshops and tutorials. Last year we achieved full score, with papers, posters and tutorials. Unfortunately, not this year.
We’re happy, though, with the two papers that were accepted, one at the EvoSoft workshop, which we usually attend, and the other at the BBOB benchmarking workshop. Both used the same thing: EvospaceJS, Mario’s framework for working with tuple-space, pool-based evolutionary algorithms. The idea of this pool is to decouple the algorithms from the population. And as soon as you do that, a world of possibilities opens up, like using different clients on the same pool (a toy sketch of this idea closes the post). In the EvoSoft paper, evospace-js: asynchronous pool-based execution of heterogeneous metaheuristics, we presented the general framework and a proof of concept that combined PSO and evolutionary algorithms, with very interesting results. Here’s the rather laconic presentation, which is one more reason to check out the paper.
The second paper, Benchmarking a pool-based execution with GA and PSO workers on the BBOB noiseless testbed, does what it says on the tin: it evaluates the same pool-based approach, with GA and PSO clients as workers, on the BBOB noiseless benchmark functions.
All in all, EvospaceJS and NodIO, the two frameworks we work with, offer a nice platform for experimenting with different kinds of algorithms, one that can be easily moved to the cloud and adapted to volunteer computing environments. Whatever the case, this setup also has interesting dynamics that influence how the evolutionary algorithms work. We will certainly continue tapping this source of interesting insights on evolutionary models.
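To make the pool idea more concrete, here is a toy, in-memory sketch of heterogeneous workers sharing the same population pool; the real evospace-js keeps the pool in a server-side tuple space accessed over the network, and the worker logic below is deliberately simplified and hypothetical.

```python
import random

DIMENSIONS, POOL_SIZE, SAMPLE_SIZE = 10, 100, 10

# the shared population; in evospace-js this lives in a remote tuple-space pool
pool = [[random.uniform(-5, 5) for _ in range(DIMENSIONS)] for _ in range(POOL_SIZE)]

def sphere(x):
    """Toy fitness to minimise: sum of squares."""
    return sum(v * v for v in x)

def take_sample(n=SAMPLE_SIZE):
    """Workers asynchronously draw a random sample from the shared pool..."""
    return random.sample(pool, n)

def put_back(individuals):
    """...and reinsert whatever individuals they have produced or improved."""
    pool.extend(individuals)

def ga_worker():
    """GA-style client: uniform crossover between two sampled parents."""
    parents = random.sample(take_sample(), 2)
    put_back([[random.choice(genes) for genes in zip(*parents)]])

def pso_worker():
    """PSO-style client: pull the sampled particles towards the sample's best."""
    particles = take_sample()
    best = min(particles, key=sphere)
    put_back([[v + 0.5 * (b - v) for v, b in zip(p, best)] for p in particles])

# different clients working on the same pool, interleaved at random
for _ in range(100):
    random.choice([ga_worker, pso_worker])()
```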