Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/90439
Full metadata record
DC Field: Value
dc.contributor.author: Weise, T.
dc.contributor.author: Chiong, R.
dc.contributor.author: Lassig, J.
dc.contributor.author: Tang, K.
dc.contributor.author: Tsutsui, S.
dc.contributor.author: Chen, W.
dc.contributor.author: Michalewicz, Z.
dc.contributor.author: Yao, X.
dc.date.issued: 2014
dc.identifier.citation: IEEE Computational Intelligence Magazine, 2014; 9(3):40-52
dc.identifier.issn: 1556-603X
dc.identifier.issn: 1556-6048
dc.identifier.uri: http://hdl.handle.net/2440/90439
dc.description.abstract: We introduce an experimentation procedure for evaluating and comparing optimization algorithms based on the Traveling Salesman Problem (TSP). We argue that end-of-run results alone do not give sufficient information about an algorithm's performance, so our approach analyzes the algorithm's progress over time. Comparisons of performance curves in diagrams can be formalized by comparing the areas under them. Algorithms can be ranked according to a performance metric. Rankings based on different metrics can then be aggregated into a global ranking, which provides a quick overview of the quality of algorithms in comparison. An open source software framework, the TSP Suite, applies this experimental procedure to the TSP. The framework can support researchers in implementing TSP solvers, unit testing them, and running experiments in a parallel and distributed fashion. It also has an evaluator component, which implements the proposed evaluation process and produces detailed reports. We test the approach by using the TSP Suite to benchmark several local search and evolutionary computation methods. This results in a large set of baseline data, which will be made available to the research community. Our experiments show that the tested pure global optimization algorithms are outperformed by local search, but the best results come from hybrid algorithms.
dc.description.statementofresponsibility: Thomas Weise, Raymond Chiong, Jörg Lässig, Ke Tang, Shigeyoshi Tsutsui, Wenxiang Chen, Zbigniew Michalewicz, Xin Yao
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers Inc.
dc.rights: © 2014 IEEE
dc.source.uri: http://dx.doi.org/10.1109/mci.2014.2326101
dc.title: Benchmarking optimization algorithms: an open source framework for the traveling salesman problem
dc.type: Journal article
dc.identifier.doi: 10.1109/MCI.2014.2326101
pubs.publication-status: Published
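
The abstract above describes two ideas that lend themselves to a brief illustration: comparing algorithms by the area under their progress curves rather than by end-of-run results alone, and aggregating rankings from different metrics into a global ranking. The following Python sketch is purely illustrative and is not part of the TSP Suite framework described in this record; the run logs, metric choices, and helper functions are hypothetical.

    # Minimal sketch: area-under-curve comparison and rank aggregation.
    # All data below is made up for illustration only.

    from typing import Dict, List, Tuple

    Run = List[Tuple[float, float]]  # (elapsed time, best tour length found so far)

    def area_under_curve(run: Run) -> float:
        """Trapezoidal area under a (time, quality) progress curve.
        Smaller is better when quality is a tour length to be minimized."""
        area = 0.0
        for (t0, q0), (t1, q1) in zip(run, run[1:]):
            area += 0.5 * (q0 + q1) * (t1 - t0)
        return area

    def rank(scores: Dict[str, float]) -> Dict[str, int]:
        """Rank algorithms by a score; rank 1 is the smallest (best) score."""
        ordered = sorted(scores, key=scores.get)
        return {name: i + 1 for i, name in enumerate(ordered)}

    def aggregate(rankings: List[Dict[str, int]]) -> Dict[str, float]:
        """Global ranking as the mean rank over several per-metric rankings."""
        names = rankings[0].keys()
        return {n: sum(r[n] for r in rankings) / len(rankings) for n in names}

    if __name__ == "__main__":
        # Hypothetical progress logs for three solvers on one TSP instance.
        runs = {
            "local_search": [(0.0, 1200.0), (1.0, 900.0), (2.0, 880.0)],
            "evolutionary": [(0.0, 1200.0), (1.0, 1100.0), (2.0, 950.0)],
            "hybrid":       [(0.0, 1200.0), (1.0, 850.0), (2.0, 820.0)],
        }
        auc = {name: area_under_curve(run) for name, run in runs.items()}
        end_result = {name: run[-1][1] for name, run in runs.items()}
        global_rank = aggregate([rank(auc), rank(end_result)])
        for name, r in sorted(global_rank.items(), key=lambda x: x[1]):
            print(f"{name}: mean rank {r:.1f}")

In this toy setup the area under the curve rewards solvers that reach good tours quickly, while the end-of-run tour length captures final quality; averaging the two per-metric rankings gives a simple global ordering in the spirit of the aggregated ranking the abstract mentions.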
Appears in Collections:
Aurora harvest 7
Computer Science publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.