This thesis seeks to advance the state of the art in Bayesian optimization through improvements in two areas: (1) using derivative information to accelerate the convergence of the optimization; and (2) addressing the issues that arise in Bayesian optimization when a large number of function observations and derivative observations are present.
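As context for the first contribution, the following is a minimal sketch (not the thesis's own method) of one Bayesian optimization step in which the Gaussian process surrogate conditions on derivative observations as well as function values. The one-dimensional squared-exponential kernel, the toy objective, the lengthscale, the noise jitter, and the UCB coefficient are all illustrative assumptions.

```python
# Minimal sketch: GP surrogate with derivative observations + one UCB step.
# All settings (lengthscale, jitter, kappa, toy objective) are assumptions.
import numpy as np

ell, noise = 0.5, 1e-6                             # assumed lengthscale, jitter

def k(a, b):                                       # squared-exponential kernel
    return np.exp(-(a - b) ** 2 / (2 * ell ** 2))

def dk_db(a, b):                                   # cov(f(a), f'(b)) = dk/db
    return (a - b) / ell ** 2 * k(a, b)

def d2k(a, b):                                     # cov(f'(a), f'(b)) = d2k/da db
    return (1 / ell ** 2 - (a - b) ** 2 / ell ** 4) * k(a, b)

def f(x):  return np.sin(3 * x) + 0.5 * x          # toy objective
def df(x): return 3 * np.cos(3 * x) + 0.5          # its derivative

X = np.array([-1.0, 0.2, 1.3])                     # points evaluated so far
z = np.concatenate([f(X), df(X)])                  # stacked [values; gradients]

# Joint covariance over [f(X); f'(X)]
A, B = np.meshgrid(X, X, indexing="ij")
K = np.block([[k(A, B),       dk_db(A, B)],
              [dk_db(A, B).T, d2k(A, B)]]) + noise * np.eye(2 * len(X))

def posterior(xs):
    # Cross-covariance between f(xs) and the stacked observations
    Ks = np.hstack([k(xs[:, None], X[None, :]),
                    dk_db(xs[:, None], X[None, :])])
    mu = Ks @ np.linalg.solve(K, z)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.maximum(var, 0.0)

# One UCB acquisition step over a grid suggests the next evaluation point
grid = np.linspace(-2, 2, 401)
mu, var = posterior(grid)
ucb = mu + 2.0 * np.sqrt(var)                      # kappa = 2.0, assumed
print("next candidate x =", grid[np.argmax(ucb)])
```

Because the surrogate also explains the observed gradients, its posterior tightens with fewer function evaluations, which is the intuition behind using derivative information to speed up convergence; the cost is that the joint covariance matrix grows with both value and gradient observations, which motivates the scalability issues addressed in the second contribution.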
Pagination: 120 p.
Open access: Yes
Language: English
Degree type: Doctorate
Degree name: Ph.D.
Copyright notice: All rights reserved
Editor/Contributor(s): Cheng Li, Santu Rana, Sunil Gupta, Svetha Venkatesh