Use of Mixture of Experts

from smt.sampling_methods import LHS
from smt.problems import Sphere
from smt.applications import MOE
import numpy as np
import otsmt
import openturns as ot
Definition of the initial data
# Construction of the DOE
fun = Sphere(ndim=2)
sampling = LHS(xlimits=fun.xlimits, criterion="m")
xt = sampling(40)
yt = fun(xt)
# Compute the gradient: append each partial derivative as an extra output column
for i in range(2):
    yd = fun(xt, kx=i)
    yt = np.concatenate((yt, yd), axis=1)
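
At this point xt holds 40 two-dimensional points and yt stacks the function values with the two partial derivatives column-wise; a quick sanity check (a minimal sketch, not part of the original script) makes the layout explicit:

# Expected shapes after the gradient loop (illustrative check, not in the original)
assert xt.shape == (40, 2)
assert yt.shape == (40, 3)  # columns: [f(x), df/dx0, df/dx1]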

# Validation points where the metamodel will be evaluated
xv = ot.Sample([[0.1, 1.], [1., 2.]])
Training of the SMT Mixture of Experts model
# Train a mixture of two experts on the function values only
# (first column of yt; the gradient columns are not passed to MOE)
moe = MOE(n_clusters=2)
moe.set_training_values(xt, yt[:, 0][:, np.newaxis])
moe.train()

Out:

Kriging 0.0019774962744136815
LS 14.091040553892764
QP 1.2079226507921703e-13
KPLS 0.000566379341389478
KPLSK 0.0019774962744136815
RBF 16.00426964422284
RMTC 1.7085740478443157
RMTB 1.514499983582354
IDW 12.916361301819009
Best expert = QP
Kriging 0.01824342457143845
LS 75.38873897617815
QP 4.1988206199602467e-13
KPLS 0.019851775845021857
KPLSK 0.017764731883920172
RBF 196.50526420109568
RMTC 17.721843759762763
RMTB 17.377560811321434
IDW 66.65987756513623
Best expert = QP
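
The log shows the cross-validation error of every candidate expert on each of the two clusters; the quadratic polynomial (QP) expert wins both times, as expected since the Sphere function is itself quadratic. Before any OpenTURNS wrapping, the trained MOE object can be queried directly through SMT; a minimal sketch (not part of the original script, using the same validation points as a NumPy array):

# Direct SMT prediction with the trained mixture (illustrative, not in the original)
xv_np = np.array([[0.1, 1.], [1., 2.]])
print(moe.predict_values(xv_np))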
Creation of an OpenTURNS PythonFunction for prediction
# Wrap the trained MOE model as an OpenTURNS function
otmoe = otsmt.smt2ot(moe)
otmoeprediction = otmoe.getPredictionFunction()
print('Predicted values by MOE:', otmoeprediction(xv))

Out:

Predicted values by MOE:     [ y0   ]
0 : [ 1.01 ]
1 : [ 5    ]
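
Since getPredictionFunction() returns a regular OpenTURNS Function, the metamodel can be plugged into any OpenTURNS study, for instance to propagate an input distribution through the surrogate. A minimal sketch, assuming a uniform input distribution chosen here purely for illustration:

# Propagate a random input sample through the wrapped metamodel
# (the Uniform bounds below are illustrative, not from the original example)
dist = ot.ComposedDistribution([ot.Uniform(-1., 2.)] * 2)
insample = dist.getSample(100)
outsample = otmoeprediction(insample)
print('Mean of predicted outputs:', outsample.computeMean())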
