Spherical linear kernels for high-dimensional BO

The spherical linear kernel is useful for Bayesian optimization (BO) of high-dimensional problems.

from bofire.benchmarks.svm import SVM
from bofire.data_models.strategies.api import SoboStrategy
from bofire.data_models.kernels.api import SphericalLinearKernel
from bofire.data_models.surrogates.api import SingleTaskGPSurrogate, BotorchSurrogates
import bofire.strategies.api as strategies

We use the SVM benchmark.

# Problem setup: sample dim + 1 initial points and evaluate the benchmark
benchmark = SVM()
candidates = benchmark._domain.inputs.sample(benchmark.dim + 1, seed=benchmark.seed)
experiments = candidates.copy()
result = benchmark._f(experiments)
# Attach the objective values and validity flags to the experiments DataFrame
experiments["y"], experiments["valid_y"] = result["y"], result["valid_y"]
sobo_strategy_data_model = SoboStrategy(
    domain=benchmark._domain,
    seed=benchmark.seed,
    surrogate_specs=BotorchSurrogates(
        surrogates=[
            SingleTaskGPSurrogate(
                inputs=benchmark._domain.inputs,
                outputs=benchmark._domain.outputs,
                kernel=SphericalLinearKernel(),
            )
        ]
    ),
)
strategy = strategies.map(sobo_strategy_data_model)
Downloading SVM data...
Download complete.

Running the optimization loop

strategy.tell(experiments, replace=True)
num_steps = 3  # set the number of steps here (the original paper uses 1000 steps)
for step_number in range(num_steps):
    print(f"Step {step_number+1}/{num_steps}")
    new_candidates = strategy.ask(candidate_count=1)
    new_experiments = new_candidates.copy()
    result = benchmark._f(new_candidates)
    new_experiments["y"], new_experiments["valid_y"] = result["y"], result["valid_y"]
    print(f"New experiment:\n{new_experiments}")
    strategy.tell(experiments=new_experiments)
# Retrieve all experiments collected by the strategy
all_experiments = strategy.experiments
Step 1/3
/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:

A not p.d., added jitter of 1.0e-08 to the diagonal
New experiment:
        x_1      x_10     x_100     x_101     x_102     x_103     x_104  \
0  0.667271  0.645865  0.673514  0.788116  0.515714  0.899225  0.370435   

      x_105     x_106     x_107  ...      x_95      x_96      x_97      x_98  \
0  0.745721  0.642221  0.920136  ...  0.456597  0.457589  0.825119  0.552249   

       x_99    y_pred     y_sd     y_des         y  valid_y  
0  0.946278  0.236777  0.00201 -0.236777  0.229736        1  

[1 rows x 393 columns]
Step 2/3
/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/site-packages/linear_operator/utils/cholesky.py:40: NumericalWarning:

A not p.d., added jitter of 1.0e-08 to the diagonal

/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/site-packages/botorch/optim/optimize.py:789: RuntimeWarning:

Optimization failed in `gen_candidates_scipy` with the following warning(s):
[OptimizationWarning('Optimization failed within `scipy.optimize.minimize` with status 2 and message ABNORMAL: .')]
Trying again with a new set of initial conditions.
New experiment:
        x_1      x_10     x_100     x_101     x_102     x_103     x_104  \
0  0.571132  0.402144  0.774951  0.740535  0.325296  0.751865  0.269752   

      x_105     x_106     x_107  ...      x_95      x_96      x_97      x_98  \
0  0.163741  0.343008  0.328641  ...  0.959124  0.405325  0.504206  0.866072   

       x_99    y_pred      y_sd     y_des         y  valid_y  
0  0.324885  0.236599  0.002038 -0.236599  0.228193        1  

[1 rows x 393 columns]
Step 3/3
New experiment:
        x_1      x_10    x_100  x_101     x_102     x_103     x_104  x_105  \
0  0.827119  0.849634  0.12755    1.0  0.285359  0.549768  0.751327    1.0   

      x_106     x_107  ...      x_95  x_96      x_97      x_98      x_99  \
0  0.915843  0.460966  ...  0.422644   1.0  0.663223  0.673715  0.730583   

     y_pred      y_sd     y_des         y  valid_y  
0  0.197406  0.001861 -0.197406  0.237064        1  

[1 rows x 393 columns]

The results stored in all_experiments can be used to track how the best objective value evolves over the iterations.
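As a minimal sketch of this, the running optimum is the cumulative minimum of the observed objective values, since the benchmark's objective is minimized (note y_des = -y_pred in the outputs above). A hypothetical stand-in DataFrame replaces the real strategy.experiments so the snippet is self-contained:

```python
import pandas as pd

# Hypothetical stand-in for strategy.experiments: a DataFrame with an
# objective column "y" (the real one comes from the loop above).
all_experiments = pd.DataFrame({"y": [0.30, 0.25, 0.28, 0.22, 0.24]})

# Running optimum per iteration: cumulative minimum of the observed "y".
best_so_far = all_experiments["y"].cummin()
print(best_so_far.tolist())  # [0.3, 0.25, 0.25, 0.22, 0.22]
```

Plotting best_so_far against the iteration index then gives the usual BO convergence curve.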