Analytical benchmark problems and methodological framework for the assessment and comparison of multifidelity optimization methods
File(s)
s11831-025-10392-8.pdf (6.58 MB)
Published online version
Author(s)
Type
Journal Article
Abstract
As engineering systems increase in complexity and performance demands intensify, Multidisciplinary Design Optimization (MDO) methodologies are becoming essential for integrating models from multiple disciplines to optimize complex multi-physics systems. Within this context, major challenges remain in selecting appropriate disciplinary fidelity levels and in coupling them effectively. Multifidelity methods offer a promising path forward by strategically combining information sources of varying fidelity, whether computational or experimental, to enable efficient and scalable design exploration and optimization. Despite the development of numerous multifidelity methods, their comparative performance remains difficult to assess because of the absence of standardized benchmark frameworks capable of evaluating performance across diverse optimization tasks. To address this gap, this paper introduces a comprehensive benchmarking framework that includes: (i) a suite of analytical benchmark optimization problems designed to stress-test and validate multifidelity methods; (ii) a set of assessment metrics for quantifying and comparing performance against measurable objectives; and (iii) the classification, evaluation, and comparison of several families of multifidelity optimization methods and frameworks on the proposed benchmarks, identifying their respective strengths and weaknesses in real-world scenarios. The proposed benchmark problems are analytically defined functions carefully selected to capture mathematical challenges commonly encountered in real-world applications, including high dimensionality, multimodality, discontinuities, and noise. Their closed-form nature ensures computational efficiency, high reproducibility, and a clear separation of algorithmic behavior from numerical artifacts. The accompanying performance metrics support the systematic evaluation of multifidelity methods, measuring both optimization effectiveness and global approximation accuracy. By providing a rigorous, reproducible, and accessible benchmarking framework, this work aims to enable the broader community to understand, compare, and advance multifidelity optimization methods for complex problems in science and engineering.
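To make the notion of an analytical multifidelity benchmark pair concrete, the sketch below uses the well-known Forrester function together with its standard low-fidelity variant, plus a simple optimality-gap metric of the sort the abstract alludes to. This example is not taken from the paper's benchmark suite; the function choice, helper names, and metric are illustrative assumptions only.

    import numpy as np

    # Illustrative multifidelity pair (Forrester et al., 2008), NOT the
    # paper's benchmark suite: a cheap, biased low-fidelity model paired
    # with an expensive (here, analytical) high-fidelity objective.

    def forrester_high(x):
        """High-fidelity objective on [0, 1]; multimodal in x."""
        return (6.0 * x - 2.0) ** 2 * np.sin(12.0 * x - 4.0)

    def forrester_low(x, a=0.5, b=10.0, c=-5.0):
        """Standard low-fidelity variant: a scaled and shifted copy of the
        high-fidelity function, mimicking a coarse model with systematic bias."""
        return a * forrester_high(x) + b * (x - 0.5) + c

    def optimality_gap(f_best, f_star=-6.02074):
        """One possible assessment metric (an assumption here): distance of the
        best value found to the known high-fidelity optimum f_star."""
        return abs(f_best - f_star)

    # Example: a coarse grid search on the cheap model suggests where to spend
    # high-fidelity evaluations; the residual gap exposes the low-fidelity bias.
    x = np.linspace(0.0, 1.0, 101)
    x_seed = x[np.argmin(forrester_low(x))]
    print(f"low-fidelity suggestion x={x_seed:.3f}, "
          f"gap={optimality_gap(forrester_high(x_seed)):.3f}")

Because the pair is closed-form, such a benchmark costs essentially nothing to evaluate, is exactly reproducible, and has a known global optimum against which metrics like the gap above can be computed.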
Date Issued
2025-11-10
Date Acceptance
2025-08-23
Citation
Archives of Computational Methods in Engineering, 2025
ISSN
1134-3060
Publisher
Springer Science and Business Media LLC
Journal / Book Title
Archives of Computational Methods in Engineering
Copyright Statement
© The Author(s) 2025 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
License URL
http://creativecommons.org/licenses/by/4.0/
Publication Status
Published online
Date Publish Online
2025-11-10