from bofire.benchmarks.svm import SVM
from bofire.data_models.strategies.api import SoboStrategy
from bofire.data_models.kernels.api import SphericalLinearKernel
from bofire.data_models.surrogates.api import SingleTaskGPSurrogate, BotorchSurrogates
import bofire.strategies.api as strategies

Spherical linear kernels for high dimensional BO
The spherical linear kernel is useful for optimizing high-dimensional problems. As an example, we use the SVM benchmark.
# problem setup for spherical linear kernels
benchmark = SVM()
candidates = benchmark._domain.inputs.sample(benchmark.dim+1, seed=benchmark.seed)
experiments = candidates.copy()
result = benchmark._f(experiments)
# Attach the benchmark results as columns 'y' and 'valid_y' to the experiments DataFrame
experiments["y"], experiments["valid_y"] = result["y"], result["valid_y"]
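The pattern above, attaching the benchmark's evaluations to the candidate inputs, can be illustrated on a toy DataFrame. The column names `x_1` and `x_2` here are hypothetical stand-ins for the benchmark's input features:

```python
import pandas as pd

# Hypothetical candidate inputs sampled from some domain
candidates = pd.DataFrame({"x_1": [0.1, 0.7], "x_2": [0.4, 0.2]})

# Hypothetical evaluation results: objective value plus a validity flag
result = pd.DataFrame({"y": [0.25, 0.31], "valid_y": [1, 1]})

# Combine inputs and outputs into the experiments table the strategy consumes
experiments = candidates.copy()
experiments["y"], experiments["valid_y"] = result["y"], result["valid_y"]
print(list(experiments.columns))  # → ['x_1', 'x_2', 'y', 'valid_y']
```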
sobo_strategy_data_model = SoboStrategy(
    domain=benchmark._domain,
    seed=benchmark.seed,
    surrogate_specs=BotorchSurrogates(
        surrogates=[
            SingleTaskGPSurrogate(
                inputs=benchmark._domain.inputs,
                outputs=benchmark._domain.outputs,
                kernel=SphericalLinearKernel(),
            )
        ]
    ),
)
strategy = strategies.map(sobo_strategy_data_model)

Downloading SVM data...
Download complete.
Running the optimization loop
strategy.tell(experiments, replace=True)

num_steps = 3  # set the number of steps here (the original paper uses 1000 steps)
for step_number in range(num_steps):
    print(f"Step {step_number+1}/{num_steps}")
    new_candidates = strategy.ask(candidate_count=1)
    new_experiments = new_candidates.copy()
    result = benchmark._f(new_candidates)
    new_experiments["y"], new_experiments["valid_y"] = result["y"], result["valid_y"]
    print(f"New experiment:\n{new_experiments}")
    strategy.tell(experiments=new_experiments)
# save all the experiments
all_experiments = strategy.experiments
Step 1/3
New experiment:
x_1 x_10 x_100 x_101 x_102 x_103 x_104 \
0 0.212249 0.211433 0.648706 0.470268 0.790014 0.953726 0.071058
x_105 x_106 x_107 ... x_95 x_96 x_97 x_98 \
0 0.885282 0.153164 0.409444 ... 0.49382 0.428308 0.897351 0.521494
x_99 y_pred y_sd y_des y valid_y
0 0.44064 0.236782 0.00201 -0.236782 0.232084 1
[1 rows x 393 columns]
Step 2/3
New experiment:
x_1 x_10 x_100 x_101 x_102 x_103 x_104 \
0 0.1298 0.434168 0.636069 0.078802 0.946293 0.91358 0.795813
x_105 x_106 x_107 ... x_95 x_96 x_97 x_98 \
0 0.162688 0.562483 0.466307 ... 0.96857 0.017605 0.858722 0.889144
x_99 y_pred y_sd y_des y valid_y
0 0.118597 0.236666 0.002021 -0.236666 0.233162 1
[1 rows x 393 columns]
Step 3/3
New experiment:
x_1 x_10 x_100 x_101 x_102 x_103 x_104 \
0 0.597506 0.288676 0.780293 0.684276 0.133081 0.508465 0.64511
x_105 x_106 x_107 ... x_95 x_96 x_97 x_98 \
0 0.380321 0.88236 0.376153 ... 0.927389 0.279624 0.134092 0.740703
x_99 y_pred y_sd y_des y valid_y
0 0.935112 0.236574 0.002026 -0.236574 0.228982 1
[1 rows x 393 columns]
The results stored in all_experiments can be used to track the evolution of the optimum over the iterations.
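One way to do this, assuming all_experiments is a pandas DataFrame with the objective in a 'y' column and that the benchmark is a minimization problem, is to take the running minimum. The values below are made up for illustration:

```python
import pandas as pd

# Made-up objective values standing in for all_experiments["y"]
all_experiments = pd.DataFrame({"y": [0.30, 0.25, 0.28, 0.21]})

# Best (smallest) objective value observed after each experiment
best_so_far = all_experiments["y"].cummin()
print(best_so_far.tolist())  # → [0.3, 0.25, 0.25, 0.21]
```

Plotting `best_so_far` against the experiment index gives the usual convergence curve for the optimization run.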