Bayesian optimization is a general class of optimization algorithms for possibly nonlinear, nonconvex reward functions. While the generality of these techniques makes them particularly appealing, they search the decision space inefficiently. Furthermore, these methods tend to scale poorly to high dimensions, because a prediction of the reward over the entire solution space must be maintained at each iteration.
These drawbacks have limited the use of this family of methods in real applications. In our group we address these issues and do research in the following areas:
Accelerating search using local optimization methods;
Scaling algorithms to a large number of dimensions (i.e., >1000);
Scaling algorithms when simulation cost is very high (e.g., running a single experiment takes tens of minutes or hours).
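To make the standard loop concrete, below is a minimal sketch of Bayesian optimization on a 1-D problem: a Gaussian-process surrogate predicts the reward over the solution space, and an expected-improvement acquisition picks the next point to evaluate. All function names, the kernel length scale, and the toy reward are illustrative choices, not part of any specific method from our group.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.2):
    # Squared-exponential kernel between two sets of 1-D points.
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Gaussian-process posterior mean and std. dev. at query points Xs.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    Kss = rbf_kernel(Xs, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(Kss - Ks.T @ Kinv @ Ks)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best, xi=0.01):
    # EI acquisition for maximization (closed form under a Gaussian posterior).
    from math import erf
    z = (mu - best - xi) / sigma
    cdf = 0.5 * (1 + np.vectorize(erf)(z / np.sqrt(2)))
    pdf = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    return (mu - best - xi) * cdf + sigma * pdf

def bayes_opt(f, n_iter=15, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(0, 1, 3)          # small initial design
    y = np.array([f(x) for x in X])
    grid = np.linspace(0, 1, 200)     # candidate points on [0, 1]
    for _ in range(n_iter):
        mu, sigma = gp_posterior(X, y, grid)
        x_next = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]
        if x_next in X:               # acquisition stalled on a known point
            break
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    return X[np.argmax(y)], y.max()

# Toy reward with a maximum of 0 at x = 0.6.
x_best, y_best = bayes_opt(lambda x: -(x - 0.6) ** 2)
```

Note that the surrogate is refit on the full evaluation history at every iteration; this per-iteration modeling cost is precisely what makes scaling to high dimensions and to expensive simulations challenging.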
Scaling to High Dimensions.
Handling High Simulation Costs.