Quick start¶

This section walks you through the steps of evaluating performance functions and submitting results within the challenge framework.

We provide both a Python and a Matlab tutorial; pick whichever you prefer. The prerequisites are listed at the start of each tutorial.

Python¶

The most important actions you have to be familiar with to successfully participate in the challenge are:

Evaluate evaluates a performance function on the web server, while Submit submits your intermediate and/or final results, such as reliability indices and sensitivity factors, to the server. Note that it is of utmost importance that you also submit your intermediate results, so that we can compare the rate of convergence of different methods.

You need to have a Python installation (either 2.x or 3.x will work).

Evaluate¶

Download our evaluate.py file, which manages the communication with the server, and put it into a folder of your choice.

Warning

GitLab will propose a filename that contains the path; rename the file to evaluate.py.

Set the working directory of Python to this folder and open a new py-file (script) [*]. If you do not have the requests package installed, install it (pip install requests), as the Python files rely on it.

First, we need to provide our username and password. Once the challenge is open (Timeline) you can register and use your own username and password to access the performance functions. For testing purposes let’s use a predefined test user account:

username = 'testuser'
password = 'testpass'  # placeholder, replace with the test account's password


Then we need some ids to uniquely identify our performance function. Let’s pick the second problem from the Tutorial set:

set_id = -1
problem_id  = 2


Problem 2 corresponds to RP22, a two-dimensional, nonlinear performance function.

The next step is to specify the location where we would like to evaluate the performance function. The location is the independent variable of the function, and we define it as a list with numerical items (you can also use a numpy array, and bundled (vectorized) calls are also possible, see Evaluate):

x = [0.545, 1.23]
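
As a sketch of how single and bundled inputs might be laid out (the one-point-per-row layout below is our assumption for illustration; see the Evaluate page for the exact convention expected by the server):

```python
import numpy as np

# A single evaluation point as a plain list: one value per random variable
# (RP22 is two-dimensional).
x_single = [0.545, 1.23]

# A bundled (vectorized) call evaluates several points in one request;
# here each row is one evaluation point.
x_bundle = np.array([
    [0.545, 1.23],
    [0.0, 0.0],
    [1.0, -1.0],
])

print(len(x_single))   # problem dimension: 2
print(x_bundle.shape)  # (number of points, dimension): (3, 2)
```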


Now we have everything needed to evaluate the performance function. First, we import the evaluate function from the earlier downloaded evaluate.py file:

from evaluate import evaluate


Then we can use it to evaluate the performance function:

g_val_sys, g_val_comp, msg = evaluate(username, password, set_id, problem_id, x)
print(msg)
print(g_val_sys)
print(g_val_comp)


Here g_val_sys is the performance function value on the system level, and g_val_comp contains a performance function value for each component involved in the problem. For problems with a single performance function (such as this one) g_val_sys equals g_val_comp. Executing the above code prints the following to the command window:

Ok
1.2918
[1.2918]


Congratulations! You just evaluated a performance function on our server! Go ahead and test the function with other inputs (including incorrect ones), and try our other functions as well.

To use the performance function in your reliability algorithm, you can wrap the above code into a function or create a lambda function that fits your reliability algorithm, e.g.

def g_fun(x):
    g_val_sys, g_val_comp, msg = evaluate(username, password, set_id, problem_id, x)
    return g_val_sys


Given that within this challenge we intentionally cap your number of performance function evaluations, you might wonder how many evaluations you have left. You can check this by clicking on this link. The request_limit is set to -1 for this tutorial example, which means that there is no upper limit, so you can experiment with these functions. However, for the problems included in the challenge finite limits are enforced. Once you have registered you will have your own dedicated page to check your submissions.
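Because every call counts against the evaluation budget, it can also be useful to keep a local tally of how many evaluations your algorithm has spent. The wrapper below is a generic sketch of that idea; make_counted and g_demo are our own names, not part of the challenge API.

```python
def make_counted(g_fun):
    """Wrap a performance function so it counts its own calls."""
    def counted(x):
        counted.calls += 1
        return g_fun(x)
    counted.calls = 0
    return counted

# Demonstration with a local stand-in function; in the challenge you would
# wrap your evaluate-based g_fun instead.
g_demo = make_counted(lambda x: x[0] + x[1])
g_demo([0.545, 1.23])
g_demo([0.0, 0.0])
print(g_demo.calls)  # 2
```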

 [*] You can download a file with all the code above from the GitLab repository: tutorial_evaluate.py.

Submit¶

After evaluating a performance function multiple times, you might want to submit your intermediate or final results to the server to be compared with those of other participants. This section shows you how to do that. Note that you have to submit at least one result for a problem to be counted among the participants. Results submitted via any means other than those described below are not accepted. You are strongly encouraged to submit your intermediate results as well, since they give us an idea about the rate of convergence of your method.

To submit results you need to download the submit.py file. Save it into your working directory. For illustrative purposes let’s make a dummy submission with the following input (you can use numpy arrays as well):

from submit import submit

set_id      = -1
problem_id  = 2
beta_sys    = 3.4
beta_comp   = 3.4
alpha_sys   = []
alpha_comp  = [0.64, 0.77]


You have to identify yourself and specify the particular set and problem you are submitting the results to. Note that some inputs are optional; e.g. alpha_sys is left empty in this case. The results can be submitted by using the submit function:

msg = submit(username, password, set_id, problem_id, beta_sys, beta_comp, alpha_sys, alpha_comp)

print(msg)
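
If your method is FORM-like and yields a gradient of the limit-state function at the design point in standard normal space, the sensitivity factors (alpha) are conventionally the components of the negative normalized gradient. Sign conventions differ between texts, so check the challenge's definition; the gradient values below are made up for illustration.

```python
import numpy as np

grad_u = np.array([-1.6, -1.9])  # hypothetical gradient at the design point
alpha_comp = -grad_u / np.linalg.norm(grad_u)  # unit-length sensitivity vector

print(alpha_comp)
print(np.linalg.norm(alpha_comp))  # 1.0 up to floating-point error
```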


To check your submission online in the database of the web server, click on this link.

Matlab¶

The most important actions you have to be familiar with to successfully participate in the challenge are:

Evaluate evaluates a performance function on the web server, while Submit submits your intermediate and/or final results, such as reliability indices and sensitivity factors, to the server. Note that it is of utmost importance that you also submit your intermediate results, so that we can compare the rate of convergence of different methods.

You need to have a Matlab installation (version R2015a or above).

Evaluate¶

Download our evaluate.m file, which manages the communication with the server, and parse_json.m, which is used to post-process the response from the server. Put both files into a folder of your choice.

Warning

GitLab will propose a filename that contains the path; rename the file to evaluate.m.

Set the working directory of Matlab to this folder and open a new m-file (script) [†].

First, we need to provide our username and password. Once the challenge is open (Timeline) you can register and use your own username and password to access the performance functions. For testing purposes let’s use a predefined test user account:

username = 'testuser';
password = 'testpass';  % placeholder, replace with the test account's password


Then we need some ids to uniquely identify our performance function. Let’s pick the second problem from the tutorial set (see Tutorial set for further details):

set_id = -1;
problem_id  = 2;


Problem 2 corresponds to RP22, a two-dimensional, nonlinear performance function.

The next step is to specify the location where we would like to evaluate the performance function. The location is the independent variable of the function, and we define it as a numerical vector (bundled (vectorized) calls are also possible by using matrices, see Evaluate):

x = [0.545, 1.23];


Now we have everything needed to evaluate the performance function, for which we are going to use the earlier downloaded evaluate.m function:

[g_val_sys, g_val_comp, msg] = evaluate(username, password, set_id, problem_id, x);
disp(msg)
disp(g_val_sys)
disp(g_val_comp)


Here g_val_sys is the performance function value on the system level, and g_val_comp contains a performance function value for each component involved in the problem. For problems with a single performance function (such as this one) g_val_sys equals g_val_comp. Executing the above code prints the following to the command window:

Ok
1.2918
1.2918


Congratulations! You just evaluated a performance function on our server! Go ahead and test the function with other inputs (including incorrect ones), and try our other functions as well.

To use the performance function in your reliability algorithm, you can wrap the above code into a function or create an anonymous function that fits your reliability algorithm, e.g.

g_fun = @(x) evaluate(username, password, set_id, problem_id, x);


Given that within this challenge we intentionally cap your number of performance function evaluations, you might wonder how many evaluations you have left. You can check this by clicking on this link. The request_limit is set to -1 for this tutorial example, which means that there is no upper limit, so you can experiment with these functions. However, for the problems included in the challenge finite limits are enforced.

 [†] You can download a file with all the code above from the GitLab repository: tutorial_evaluate.m.

Submit¶

After evaluating a performance function multiple times, you might want to submit your intermediate or final results to the server to be compared with those of other participants. This section shows you how to do that. Note that you have to submit at least one result for a problem to be counted among the participants. Results submitted via any means other than those described below are not accepted. You are strongly encouraged to submit your intermediate results as well, since they give us an idea about the rate of convergence of your method.

To submit results you need to download the submit.m file. Save it into your working directory. For illustrative purposes let’s make a dummy submission with the following input:

username    = 'testuser';
password    = 'testpass';  % placeholder, replace with the test account's password
set_id      = -1;
problem_id  = 2;
beta_sys    = 3.4;
beta_comp   = 3.4;
alpha_sys   = [];
alpha_comp  = [0.64, 0.77];


You have to identify yourself and specify the particular set and problem you are submitting the results to. Note that some inputs are optional; e.g. alpha_sys is left empty in this case. The results can be submitted by using the submit function:

msg = submit(username, password, set_id, problem_id, beta_sys, beta_comp, alpha_sys, alpha_comp);

disp(msg)


To check your submission online in the database of the web server, click on this link.