In the week before the discussion on microarrays, the seminar’s paper was From Recombination of Genes to the Estimation of Distributions I. Binary Parameters, by H. Mühlenbein and G. Paaß. The seminar aimed at discussing the simple Univariate Marginal Distribution Algorithm (UMDA): its origins, its advantages and disadvantages compared with standard Genetic Algorithms, and the algorithms that were born out of this simple evolutionary method. There is a large body of work on UMDA and Estimation of Distribution Algorithms (EDAs), so there is not much latitude for speculation.
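To make the discussion concrete, here is a minimal sketch of UMDA for binary strings, the setting of the paper. The loop structure (sample from a product of independent marginals, truncate-select, re-estimate each marginal frequency) is the standard UMDA scheme; the parameter values and the OneMax fitness used in the usage example are illustrative choices, not taken from the paper.

```python
import random

def umda(fitness, n_bits, pop_size=100, n_select=50, generations=50, seed=0):
    """Univariate Marginal Distribution Algorithm for binary strings (sketch)."""
    rng = random.Random(seed)
    # Start from uniform marginals: each bit is 1 with probability 0.5.
    p = [0.5] * n_bits
    best = None
    for _ in range(generations):
        # Sample a population from the current product-of-marginals model.
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        if best is None or fitness(pop[0]) > fitness(best):
            best = list(pop[0])
        # Re-estimate each marginal from the truncation-selected individuals.
        selected = pop[:n_select]
        p = [sum(ind[i] for ind in selected) / n_select for i in range(n_bits)]
    return best

# Usage: OneMax, i.e. maximize the number of ones in the string.
best = umda(sum, 20)
```

Because the model is a product of independent marginals, UMDA cannot capture interactions between variables, which is exactly the limitation that later EDAs such as BOA address.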
One of the most interesting features of UMDA and subsequent EDAs is their simplicity, and the way research has proceeded by carefully analyzing and improving existing EDAs, from the “old” PBIL to the sophisticated and rather effective BOA and hBOA algorithms. Theoretical analysis of Evolutionary Computation, namely that concerned with scalability and convergence, has improved consistently since the burst of research on EDAs. Moreover, it is when scalability is carefully analyzed and/or measured that some EDAs reveal their full power compared with traditional Genetic Algorithms. Some EDAs, by “learning” a problem’s structure (think of BOA, for instance), are able to deal with very large instances of that problem in practicable computational time. In addition, recent investigations have concluded that this kind of algorithm and Ant Colony Optimization share enough traits to be seen as methods belonging to the same class of heuristics, a fact that may spoil all the “magic” behind Ant Algorithms (and it will surely discredit much of the jargon involved in some discussions on Ant Algorithms, self-organization, etc.). On the other hand, this “unification” will shed some light on the characteristics of both families of algorithms.
(In this line of investigation, our group has recently published the paper UMDAs for Dynamic Optimization Problems, and further research is under way, on both static and dynamic environments.)