Deakin University

File(s) under permanent embargo

Regret for expected improvement over the best-observed value and stopping condition

conference contribution
posted on 2017-01-01, 00:00 authored by Tien Vu Nguyen, Sunil GuptaSunil Gupta, Santu RanaSantu Rana, Cheng Li, Svetha VenkateshSvetha Venkatesh
Bayesian optimization (BO) is a sample-efficient method for the global optimization of expensive, noisy, black-box functions using probabilistic models. The performance of a BO method depends on its selection strategy, expressed through the acquisition function. Expected improvement (EI) is one of the most widely used acquisition functions for BO; it computes the expectation of the improvement function over the incumbent. The incumbent is usually chosen as the best observed value so far, denoted ymax (for a maximization problem). Recent work has studied the convergence rate of EI under mild assumptions or noise-free observations. In particular, Wang and de Freitas (2014) derived a sublinear regret bound for EI under stochastic noise. However, because of the difficulty of the stochastic-noise setting, and to make the convergence proof tractable, they use an alternative incumbent: the maximum of the Gaussian process predictive mean, µmax. This modification makes the algorithm computationally inefficient, since it requires an additional global optimization step to estimate µmax, which is costly and may be inaccurate. To address this issue, we derive a sublinear convergence rate for EI using the commonly used ymax. Moreover, our analysis is the first to study a stopping criterion for EI that prevents unnecessary evaluations. Our analysis complements the results of Wang and de Freitas (2014), so that both incumbent settings for EI are covered theoretically. Finally, we demonstrate empirically that EI using ymax is both more computationally efficient and more accurate than EI using µmax.
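The EI acquisition described in the abstract has a standard closed form under a Gaussian process posterior. Below is a minimal sketch of EI with the incumbent taken as the best observed value ymax, for a maximization problem. The function name, the numerical floor on sigma, and the SciPy-based implementation are illustrative choices, not taken from the paper:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, y_max):
    """EI acquisition for maximization, using the best observed value
    y_max as the incumbent (the setting analyzed in the abstract).

    mu, sigma : GP posterior mean and standard deviation at the
                candidate point(s); scalars or NumPy arrays.
    Returns the closed-form EI:
        EI(x) = (mu - y_max) * Phi(z) + sigma * phi(z),
        z     = (mu - y_max) / sigma.
    """
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    improve = mu - y_max
    # Floor sigma to avoid division by zero at fully observed points.
    s = np.maximum(sigma, 1e-12)
    z = improve / s
    ei = improve * norm.cdf(z) + s * norm.pdf(z)
    # EI is non-negative by definition; clip numerical noise.
    return np.maximum(ei, 0.0)
```

A stopping condition of the kind the paper studies could then be checked by comparing the maximum EI over candidates against a small threshold and halting evaluations once it falls below.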

History

Event

Asian Conference on Machine Learning (2017 : Seoul, South Korea)

Pagination

1 - 16

Publisher

[The Conference]

Location

Seoul, South Korea

Place of publication

[Seoul, South Korea]

Start date

2017-11-14

End date

2017-11-17

Language

eng

Grant ID

ARC Australian Laureate Fellowship (FL170100006)

Publication classification

E Conference publication; E1 Full written paper - refereed

Editor/Contributor(s)

Yung-Kyun Noh, Min-Ling Zhang

Title of proceedings

ACML 2017 : Proceedings of the Ninth Asian Conference on Machine Learning
