How to use high-dimensional optimization in Python


Bayesian optimization (BO) has recently emerged as a powerful method for the global optimization of expensive-to-evaluate black-box functions. However, these methods are usually limited to about 15 input parameters (levers). In the paper "A Framework for Bayesian Optimization in Embedded Subspaces" (to appear at ICML'19), Munteanu, Nayebi, and Poloczek propose a non-adaptive probabilistic subspace embedding that can be combined with many BO algorithms to extend them to higher-dimensional problems.
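The embedding idea can be illustrated with a short sketch. This is hypothetical code, not the repository's implementation: a count-sketch-style embedding hashes each high-dimensional coordinate to one low-dimensional coordinate and flips its sign at random, so any point proposed in the low-dimensional space can be expanded to the full space.

```python
import numpy as np

def hesbo_embedding(low_dim, high_dim, seed=0):
    """Count-sketch-style subspace embedding in the spirit of HeSBO
    (illustrative sketch; names and hashing details are assumptions)."""
    rng = np.random.default_rng(seed)
    h = rng.integers(0, low_dim, size=high_dim)   # target bucket per high dim
    s = rng.choice([-1.0, 1.0], size=high_dim)    # random sign per high dim

    def expand(y):
        # Map a low-dimensional point y to the high-dimensional space:
        # coordinate i of the result is +/- one coordinate of y.
        return s * np.asarray(y, float)[h]

    return expand

# BO then searches the 4-d space while the objective sees 100-d points.
expand = hesbo_embedding(low_dim=4, high_dim=100)
x_high = expand(np.array([0.5, -0.2, 0.9, 0.1]))
```

BO proposes points in the low-dimensional space; `expand` lifts each proposal to the full space before the objective is evaluated.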

This repository provides Python implementations of several algorithms that extend BO to problems with high input dimensions:

  • The HeSBO algorithm proposed by Munteanu, Nayebi, and Poloczek (ICML '19) (see below for the citation) combined with

    • The Knowledge Gradient (KG) algorithm of Cornell-MOE (Wu & Frazier NIPS'16; Wu, Poloczek, Wilson, and Frazier NIPS'17)

    • The BLOSSOM algorithm of McLeod, Osborne, and Roberts (ICML '18)

    • Expected improvement, e.g., see Jones, Schonlau, and Welch (JGO '98)

  • The REMBO method using

    • the KX and Ky kernels of Wang et al. (JMLR '18) and

    • the Kψ kernel of Binois, Ginsbourger, and Roustant (LION '15).
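Of the acquisition functions above, expected improvement is the simplest to state: for a Gaussian process posterior with mean mu(x) and standard deviation sigma(x), and incumbent best value f*, EI(x) = (f* - mu) * Phi(z) + sigma * phi(z) with z = (f* - mu) / sigma, under the minimization convention. A short NumPy/SciPy sketch, illustrative rather than the repository's implementation:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    # EI for minimization under a Gaussian posterior:
    # EI = (f_best - mu) * Phi(z) + sigma * phi(z), z = (f_best - mu) / sigma.
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    safe_sigma = np.where(sigma > 0, sigma, 1.0)   # avoid division by zero
    z = np.where(sigma > 0, (f_best - mu) / safe_sigma, 0.0)
    ei = (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    # With zero posterior variance, EI reduces to the plain improvement.
    return np.where(sigma > 0, ei, np.maximum(f_best - mu, 0.0))
```

At a point with mu equal to the incumbent and sigma = 1, this gives EI = phi(0), about 0.399, reflecting that EI rewards posterior uncertainty even with no predicted improvement.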

Installing the requirements

The code is written in Python 3.6, so it is recommended to use this version of Python to run the scripts. To install the requirements, run:

pip3 install -r requirements.txt

Running different BO methods

HeSBO and three variants of REMBO are implemented in this code. The three REMBO variants are called Ky, KX, and Kψ. These algorithms can be run as follows:

python experiments.py [algorithm] [first_job_id] [last_job_id] [test_function] [num_of_steps] [low_dim] [high_dim] [num_of_initial_sample] [noise_variance] [REMBO_variant]

The first argument selects the algorithm: pass REMBO or HeSBO to the Python script. If REMBO is selected, the variant must be specified as X, Y, or psi in the last argument; if no variant is given, all three variants are run. Here is an example of running HeSBO-EI on the 100-dimensional noise-free Branin function with 4 low dimensions:
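Following the usage template above, such a run could look like the invocation below. The job ids, step count, initial-sample count, and noise variance are illustrative placeholders, not values from the original README:

```shell
# HeSBO on Branin: 100 ambient dimensions, 4-dimensional embedding.
# Job ids 1-5, 50 BO steps, 10 initial samples, and zero noise variance
# are illustrative values only.
python experiments.py HeSBO 1 5 Branin 50 4 100 10 0
```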

Code associated with paper "High-Dimensional Contextual Policy Search with Unknown Context Rewards using Bayesian Optimization"

Installation

To install the code, clone the repo and install the dependencies:

git clone https://github.com/facebookresearch/ContextualBO.git
cd ContextualBO
python3 -m pip install -r requirements.txt

Some of the baselines require additional packages that cannot be pip-installed.

Reproducing the experiments

This repository contains the code required to run the numerical experiments and the contextual Adaptive Bitrate (ABR) video playback experiment in the paper.

Running Synthetic Benchmarks

The benchmarks/ directory contains code for running the numerical experiments described in the paper. The benchmark problems are defined in synthetic_problems.py.
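To give a sense of what a contextual synthetic benchmark looks like, here is a hypothetical sketch, not the code in the repository: a 2-d sub-policy is chosen per context, each is scored with the standard Branin test function, and the scores are aggregated with fixed context weights, so three contexts yield a 6-dimensional overall policy.

```python
import numpy as np

def branin(x1, x2):
    # Standard Branin test function on [-5, 10] x [0, 15].
    a, b, c = 1.0, 5.1 / (4 * np.pi**2), 5 / np.pi
    r, s, t = 6.0, 10.0, 1 / (8 * np.pi)
    return a * (x2 - b * x1**2 + c * x1 - r) ** 2 + s * (1 - t) * np.cos(x1) + s

def contextual_objective(x, weights):
    # Hypothetical contextual benchmark: one 2-d sub-policy per context,
    # aggregated as a weighted sum of per-context Branin values.
    x = np.asarray(x, float).reshape(len(weights), 2)
    return sum(w * branin(x1, x2) for w, (x1, x2) in zip(weights, x))

# Example: 3 contexts -> a 6-dimensional overall policy.
weights = [0.5, 0.3, 0.2]
x = np.tile([np.pi, 2.275], 3)   # a Branin minimizer, repeated per context
val = contextual_objective(x, weights)
```

Since the weights sum to one and every sub-policy sits at a Branin minimizer, the aggregate equals the Branin minimum, roughly 0.3979.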

Running Park ABR experiments

The park_abr/ directory contains code for running the benchmark BO experiments described in the paper. The park problem is defined in fb_abr_problem.py, and the simulator in park_abr/park/ is a fork of the adaptive video streaming environment from https://github.com/park-project/park. Each method has its own script for evaluating it on the appropriate set of benchmark problems: run_park_{method}.py, where {method} is:

  • lcea, for our method LCE-A, implemented in Ax
  • sac, for our method SAC, implemented in Ax
  • benchmarks/0, for Standard BO, implemented in Ax
  • benchmarks/1, for ALEBO, implemented in Ax
  • benchmarks/2, for HesBO, implemented in Ax
  • benchmarks/3, for REMBO, implemented in Ax
  • benchmarks/4, for Add-GP-UCB via Dragonfly
  • benchmarks/5, for CMA-ES
  • benchmarks/6, for Ensemble Bayesian Optimization
  • benchmarks/7, for TuRBO
  • benchmarks/8, for Standard BO used for non-contextual optimization, implemented in Ax

See the paper for references for each of these methods. Each file explains what needs to be done in order to run the experiments for that method. For instance, the Add-GP-UCB baseline requires installing Dragonfly from pip, while Ensemble Bayesian Optimization requires cloning its repository. See each file for its instructions.

The contextual BO models and generation code

The actual implementations of the LCE-A, SAC, and LCE-M models are at https://github.com/facebook/Ax/tree/master/ax/models/torch and https://github.com/pytorch/botorch/tree/master/botorch/models/.