Evolutionary squeaky wheel optimization: a new framework for analysis

Li, Jingpeng and Parkes, Andrew J. and Burke, Edmund K. (2011) Evolutionary squeaky wheel optimization: a new framework for analysis. Evolutionary Computation, 19 (3). pp. 405-428. ISSN 1530-9304


Abstract

Squeaky wheel optimization (SWO) is a relatively new metaheuristic that has been shown to be effective for many real-world problems. At each iteration, SWO performs a complete construction of a solution starting from the empty assignment. Although the construction uses information from previous iterations, the complete rebuilding means that SWO is generally effective at diversification but can suffer from relatively weak intensification. Evolutionary SWO (ESWO) is a recent extension to SWO that is designed to improve intensification by keeping the good components of solutions and only using SWO to reconstruct the poorer components. In such algorithms, a standard challenge is to understand how the various parameters affect the search process. In order to support the future study of such issues, we propose a formal framework for the analysis of ESWO. The framework is based on Markov chains, and the main novelty arises because ESWO moves through the space of partial assignments. This makes it significantly different from the analyses used in local search (such as simulated annealing), which only move through complete assignments. Generally, the exact details of ESWO will depend on various heuristics; so we focus our approach on a case of ESWO that we call ESWO-II, which has probabilistic as opposed to heuristic selection and construction operators. For ESWO-II, we study a simple problem instance and explicitly compute the stationary probability distribution over the states of the search space. We find interesting properties of this distribution. In particular, we find that the probabilities of states generally, but not always, increase with their fitness. This nonmonotonicity is quite different from the monotonicity expected in algorithms such as simulated annealing.
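The abstract mentions explicitly computing the stationary distribution of a Markov chain over the search states. The paper derives this distribution for ESWO-II analytically; as a purely illustrative sketch (the three-state transition matrix below is hypothetical, not taken from the paper), one generic way to obtain a stationary distribution numerically is power iteration on a row-stochastic transition matrix:

```python
import numpy as np

# Hypothetical row-stochastic transition matrix for a toy 3-state
# search process; the entries are illustrative only.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.3, 0.6],
])

def stationary_distribution(P, tol=1e-12, max_iter=10_000):
    """Power-iterate pi <- pi @ P from the uniform distribution
    until the update changes by less than `tol`."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        nxt = pi @ P
        if np.max(np.abs(nxt - pi)) < tol:
            return nxt
        pi = nxt
    return pi

pi = stationary_distribution(P)
print(pi)          # stationary probabilities of the three states
print(pi.sum())    # should be (numerically) 1
```

For an ergodic chain, the returned vector satisfies pi = pi @ P, which is the fixed-point property the paper's analysis exploits when comparing state probabilities against state fitness.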

Item Type: Article
Keywords: Combinatorial optimization, metaheuristics, stochastic search, stochastic process, Markov chain.
Schools/Departments: University of Nottingham, UK > Faculty of Science > School of Computer Science
Identification Number: https://doi.org/10.1162/EVCO_a_00033
Depositing User: YUAN, Ziqi
Date Deposited: 08 Mar 2018 10:56
Last Modified: 08 Jun 2018 15:42
URI: http://eprints.nottingham.ac.uk/id/eprint/50212
