Optimization problems occur frequently in practice, and one of the most important questions when facing a new problem is which of the many available algorithms to choose. To assist in answering this question, existing algorithms are typically compared experimentally on (test) problems with (more or less) well-understood properties that relate to the difficulties occurring in practical problems. Theoretical algorithm comparisons, e.g., in terms of runtime or convergence speed analysis, can also help to advise for or against a certain algorithm on a specific unknown problem.
In the particular case of multiobjective problems, one faces several difficulties in assessing the performance of algorithms, and despite many advances in recent years, fundamental questions remain open. On the one hand, the objectives are most often conflicting, and an algorithm's outcome can therefore comprise several mutually incomparable solutions. On the other hand, the algorithms themselves are typically randomized and may produce a different solution set each time they are run. Moreover, defining good algorithm performance is not straightforward in the multiobjective case, as both good convergence to and good coverage of the so-called Pareto front are desired.
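To illustrate why outcomes can be mutually incomparable, the following minimal Python sketch (purely illustrative and not part of the session scope; the function names are our own) checks Pareto dominance between objective vectors and filters a set of solutions down to its nondominated subset, assuming all objectives are to be minimized.

```python
# Illustrative sketch: Pareto dominance for minimization problems.

def dominates(a, b):
    """Return True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Keep only the objective vectors not dominated by any other vector."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Example: (1, 4) and (3, 2) are incomparable; (4, 5) is dominated by both.
print(nondominated([(1, 4), (3, 2), (4, 5)]))  # -> [(1, 4), (3, 2)]
```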
The aim of this special session on Performance Assessment for Multiobjective Metaheuristics is to bring together researchers working on performance assessment and to present salient current research results to people interested in this topic.
To this end, we welcome high-quality abstracts on all theoretical, implementation-related, and applied aspects of performance assessment for multiobjective optimizers. Studies on applied and hands-on algorithm comparisons are highly encouraged, and we explicitly invite contributions for both continuous and discrete search spaces. Topics of interest include, but are not limited to:
* analysis of previous benchmarking exercises (e.g., CEC'07 and CEC'09)
* approximation algorithms for multiobjective problems
* comparisons between exact and randomized approaches
* constrained test problems
* large-scale comparisons of different multiobjective metaheuristics
* many-objective test problems
* new performance measures for set-based optimization
* new test problem instances/suites
* performance assessment for dynamic and/or noisy problems
* pros and cons of existing benchmark test suites
* ranking schemes and their advantages and disadvantages
* relations between Pareto front and Pareto set approximations
* real-world test problems
* runtime analyses of multiobjective metaheuristics
* software for benchmarking multiobjective optimizers
* theoretical foundations of benchmarking multiobjective optimizers
All submitted abstracts will be reviewed by at least three experts in the field. Authors of accepted abstracts will then be invited to present their work at the META'2012 conference.