Interpretation result:

| Field | Value |
|---|---|
| Problems | 2 (1 MEDIUM, 1 LOW) |
| Insights | 0 |

Model:
```json
{
  "model_type": "mock",
  "experiment_type": "binomial",
  "metadata": {
    "target_col": "SEX",
    "labels": [1],
    "num_labels": 1,
    "used_features": [
      "ID", "LIMIT_BAL", "EDUCATION", "MARRIAGE", "AGE",
      "PAY_0", "PAY_2", "PAY_3", "PAY_4", "PAY_5", "PAY_6",
      "BILL_AMT1", "BILL_AMT2", "BILL_AMT3", "BILL_AMT4", "BILL_AMT5", "BILL_AMT6",
      "PAY_AMT1", "PAY_AMT2", "PAY_AMT3", "PAY_AMT4", "PAY_AMT5", "PAY_AMT6",
      "default payment next month"
    ],
    "transformed_features": [],
    "importances": {},
    "features_metadata": {
      "id": [],
      "categorical": [],
      "numeric": [],
      "catnum": [],
      "date": [],
      "time": [],
      "datetime": [],
      "text": [],
      "image": [],
      "date-format": [],
      "quantile-bin": {}
    },
    "model_path": ""
  }
}
```
| Field | Value |
|---|---|
| Target column | SEX |
| Dataset | data/predictive/creditcard.csv |
| Interpretation status | SUCCESS |
| Interpretation ID | c2fbf2c1-0e39-43f5-a807-85f8c8af7e1c |
| Created | 2026-01-30 17:30:59 |
Approximate model behavior
Surrogate Decision Tree
The surrogate decision tree is an approximate overall flow chart of the model, created by training a simple decision tree on the original inputs and the predictions of the model.
Explainers identified the following problems:
| Severity | Type | Problem | Suggested actions | Explainer | Resources |
|---|---|---|---|---|---|
| MEDIUM | bias | The residual partial dependence plot of feature 'LIMIT_BAL' indicates the highest interaction of this feature with the error (residual abs(max-min) = 0.03134083655380704) of all the features used by the model (or features configured for PD calculation). | Verify that the feature 'LIMIT_BAL' error interaction does not indicate a model bias or other problem. | Residual Partial Dependence Plot | PartialDependenceExplanation / application/json |
| LOW | bias | A path in the residual surrogate decision tree leading to the largest residual (2.0222235) may indicate a problem in the model. | Verify that the following surrogate decision tree path does not indicate a model bias or other problem: IF (LIMIT_BAL >= 54956.0 OR LIMIT_BAL IS N/A) AND (AGE < 25.5) AND (LIMIT_BAL >= 65507.5 OR LIMIT_BAL IS N/A) THEN AVERAGE VALUE OF TARGET IS 2.0222235 | Residual Surrogate Decision Tree | GlobalDtExplanation / application/json |
Scheduled explainers (10):
Disparate Impact Analysis (DIA) is a technique used to evaluate fairness. Bias can be introduced to models during the process of collecting, processing, and labeling data; as a result, it is important to determine whether a model is harming certain users by making a significant number of biased decisions. DIA typically works by comparing aggregate measurements of unprivileged groups to a privileged group. For instance, the proportion of the unprivileged group that receives the potentially harmful outcome is divided by the proportion of the privileged group that receives the same outcome; the resulting ratio is then used to determine whether the model is biased.
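The ratio described above can be sketched in a few lines (a hand-rolled illustration, not the explainer's actual implementation; group labels here are hypothetical):

```python
import numpy as np

def disparate_impact_ratio(outcomes, groups, privileged, unprivileged):
    """Ratio of adverse-outcome rates: unprivileged rate / privileged rate.

    outcomes: 1 = potentially harmful outcome, 0 = favorable outcome.
    """
    outcomes = np.asarray(outcomes)
    groups = np.asarray(groups)
    rate_unpriv = outcomes[groups == unprivileged].mean()
    rate_priv = outcomes[groups == privileged].mean()
    return rate_unpriv / rate_priv

# Toy example: group "b" receives the harmful outcome twice as often as "a".
outcomes = [1, 0, 0, 0, 1, 1, 0, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
ratio = disparate_impact_ratio(outcomes, groups, privileged="a", unprivileged="b")
# ratio == (2/4) / (1/4) == 2.0
```

A ratio far from 1.0 (a common rule of thumb is outside [0.8, 1.25]) suggests a disparate impact worth investigating.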
Fairness metrics for the feature: PAY_5
| Parameter | Value | Description | Type | Default value |
|---|---|---|---|---|
| dia_cols | None | List of features for which to compute DIA. | list | None |
| cut_off | 0.0 | Cut off. | float | 0.0 |
| maximize_metric | F1 | Maximize metric. | str | F1 |
| max_cardinality | 10 | Max cardinality for categorical variables. | int | 10 |
| min_cardinality | 2 | Minimum cardinality for categorical variables. | int | 2 |
| num_card | 25 | Max cardinality for numeric variables to be considered categorical. | int | 25 |
The explainer identified the following problems:
| Severity | Type | Problem | Suggested actions | Explainer | Resources |
|---|---|---|---|---|---|
| LOW | bias | A path in the residual surrogate decision tree leading to the largest residual (2.0222235) may indicate a problem in the model. | Verify that the following surrogate decision tree path does not indicate a model bias or other problem: IF (LIMIT_BAL >= 54956.0 OR LIMIT_BAL IS N/A) AND (AGE < 25.5) AND (LIMIT_BAL >= 65507.5 OR LIMIT_BAL IS N/A) THEN AVERAGE VALUE OF TARGET IS 2.0222235 | Residual Surrogate Decision Tree | GlobalDtExplanation / application/json |
The residual surrogate decision tree predicts which paths in the tree (paths explain approximate model behavior) lead to the highest or lowest error. It is created by training a simple decision tree on the residuals of the model's predictions. Residuals are differences between observed and predicted values, which can be used as targets in surrogate models for the purpose of model debugging. The method used to calculate residuals varies with the type of problem: for classification problems, logloss residuals are calculated for a specified class (only one residual surrogate decision tree is created by the explainer, and it is built for that class); for regression problems, residuals are the squared differences between observed and predicted values.
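A minimal sketch of the idea, assuming a scikit-learn-style classifier and logloss residuals for the positive class (this is not H2O Sonar's actual code):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeRegressor

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Logloss residuals: -log of the probability assigned to the observed outcome.
p = np.clip(model.predict_proba(X)[:, 1], 1e-15, 1 - 1e-15)
residuals = -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Train a shallow tree on the residuals; its highest-value leaves point at
# regions of the input space where the model errs the most.
residual_tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, residuals)
```

For a regression model, `residuals` would instead be the squared differences between observed and predicted values.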
Approximate model behavior for the class '1':
| Parameter | Value | Description | Type | Default value |
|---|---|---|---|---|
| debug_residuals_class | 1 | Class for debugging classification model logloss residuals, empty string for debugging regression model residuals. | str | |
| dt_tree_depth | 3 | Decision tree depth. | int | 3 |
| nfolds | 3 | Number of CV folds. | int | 3 |
| qbin_cols | None | Quantile binning columns. | list | None |
| qbin_count | 0 | Quantile bins count. | int | 0 |
| categorical_encoding | onehotexplicit | Categorical encoding. | str | onehotexplicit |
| debug_residuals | True | | | |
The surrogate decision tree is an approximate overall flow chart of the model, created by training a simple decision tree on the original inputs and the predictions of the model.
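The construction can be sketched as follows, assuming a scikit-learn-style model (an illustration of the technique, not the product's implementation):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
complex_model = RandomForestClassifier(random_state=0).fit(X, y)

# The surrogate is trained on the *model's predictions*, not the true labels,
# so the shallow tree approximates the complex model's decision flow.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, complex_model.predict(X))
print(export_text(surrogate))  # the "flow chart" of the model
```

Fidelity (how often the surrogate agrees with the model) indicates how trustworthy the flow chart is as an approximation.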
Approximate model behavior for the class '1':
| Parameter | Value | Description | Type | Default value |
|---|---|---|---|---|
| debug_residuals | False | Debug model residuals. | bool | False |
| debug_residuals_class | | Class for debugging classification model logloss residuals, empty string for debugging regression model residuals. | str | |
| dt_tree_depth | 3 | Decision tree depth. | int | 3 |
| nfolds | 3 | Number of CV folds. | int | 3 |
| qbin_cols | None | Quantile binning columns. | list | None |
| qbin_count | 0 | Quantile bins count. | int | 0 |
| categorical_encoding | onehotexplicit | Categorical encoding. | str | onehotexplicit |
Shapley explanations are a technique with credible theoretical support that presents consistent global and local feature contributions.
The Shapley Summary Plot shows original features versus their local Shapley values on a sample of the dataset. Feature values are binned by Shapley values and the average normalized feature value for each bin is plotted. The legend corresponds to numeric features and maps to their normalized value - yellow is the lowest value and deep orange is the highest. You can also get a scatter plot of the actual numeric features values versus their corresponding Shapley values. Categorical features are shown in grey and do not provide an actual-value scatter plot.
Global Shapley values for original features of class 'None (Regression)':
| Parameter | Value | Description | Type | Default value |
|---|---|---|---|---|
| max_features | 50 | Maximum number of features to be shown in the plot. | int | 50 |
| sample_size | 20000 | Sample size. | int | 20000 |
| x_shapley_resolution | 500 | x-axis resolution (number of Shapley value bins). | int | 500 |
| enable_drilldown_charts | True | Enable creation of per-feature Shapley/feature value scatter plots. | bool | True |
| fast_approx_contribs | True | Speed up predictions with fast predictions and contributions approximations. | bool | True |
Partial dependence plot (PDP) portrays the average prediction behavior of the model across the domain of an input variable along with +/- 1 standard deviation bands. Individual Conditional Expectations plot (ICE) displays the prediction behavior for an individual row of data when an input variable is toggled across its domain.
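The averaging described above can be sketched by hand (a simplified illustration under scikit-learn assumptions; the real explainer adds binning, bands, and ICE):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=3, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

def partial_dependence(model, X, feature, grid_resolution=20):
    """Average prediction when `feature` is toggled across its domain."""
    grid = np.linspace(X[:, feature].min(), X[:, feature].max(), grid_resolution)
    pd_values = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value                     # toggle the feature for every row
        pd_values.append(model.predict(X_mod).mean())  # average over the sample
    return grid, np.array(pd_values)

grid, pd_vals = partial_dependence(model, X, feature=0)
```

An ICE curve is the same loop without the final `.mean()`: the predictions for a single row across the grid.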
PD binning:

- Integer feature:
  - grid_resolution integer values in between minimum and maximum of feature values (if grid_resolution is bigger or equal to 2)
  - grid_resolution values from feature values ordered by frequency (int values are converted to strings and most frequent values are used as bins)
  - q bins, where q is specified by the PD parameter
- Float feature:
  - grid_resolution float values in between minimum and maximum of feature values (if grid_resolution is bigger or equal to 2)
  - grid_resolution values from feature values ordered by frequency (float values are converted to strings and most frequent values are used as bins)
  - q bins, where q is specified by the PD parameter
- String feature:
  - grid_resolution values from feature values ordered by frequency
- Date/datetime feature:
  - grid_resolution date values in between minimum and maximum of feature values (if grid_resolution is bigger or equal to 2)
  - grid_resolution values from feature values ordered by frequency (dates are handled as opaque strings and most frequent values are used as bins)

PD out of range binning:

- Integer feature:
  - oor_grid_resolution integer values are added below minimum and above maximum, oor_grid_resolution times; if oor_grid_resolution is so high that it would cause lower OOR bins to be negative numbers, then a standard deviation of size 1 is tried instead
- Float feature:
  - oor_grid_resolution float values are added below minimum and above maximum, oor_grid_resolution times
- String feature:
  - UNSEEN is added as the OOR bin
- Date feature:
  - UNSEEN is added as the OOR bin

Partial Dependence Plot for the feature 'ID' and class 'None (Regression)':
| Parameter | Value | Description | Type | Default value |
|---|---|---|---|---|
| sample_size | 25000 | Sample size for Partial Dependence Plot. | int | 25000 |
| max_features | 10 | Partial Dependence Plot number of features (to see all features used by model set to -1). | int | 10 |
| features | None | Partial Dependence Plot feature list. | list | None |
| oor_grid_resolution | 0 | Partial Dependence Plot number of out of range bins. | int | 0 |
| quantile-bin-grid-resolution | 0 | Partial Dependence Plot quantile binning (total quantile points used to create bins). | int | 0 |
| grid_resolution | 20 | Partial Dependence Plot observations per bin (number of equally spaced points used to create bins). | int | 20 |
| center | False | Center Partial Dependence Plot using ICE centered at 0. | bool | False |
| sort_bins | True | Ensure bin values sorting. | bool | True |
| histograms | True | Enable histograms. | bool | True |
| quantile-bins | | Per-feature quantile binning (Example: if choosing features F1 and F2, this parameter is '{"F1": 2,"F2": 5}'. Note, you can set all features to use the same quantile binning with the `Partial Dependence Plot quantile binning` parameter and then adjust the quantile binning for a subset of PDP features with this parameter). | str | |
| numcat_num_chart | True | Unique feature values count driven Partial Dependence Plot binning and chart selection. | bool | True |
| numcat_threshold | 11 | Threshold for Partial Dependence Plot binning and chart selection (<=threshold categorical, >threshold numeric). | int | 11 |
| debug_residuals | False | Debug model residuals. | bool | False |
The explainer identified the following problems:
| Severity | Type | Problem | Suggested actions | Explainer | Resources |
|---|---|---|---|---|---|
| MEDIUM | bias | The residual partial dependence plot of feature 'LIMIT_BAL' indicates the highest interaction of this feature with the error (residual abs(max-min) = 0.03134083655380704) of all the features used by the model (or features configured for PD calculation). | Verify that the feature 'LIMIT_BAL' error interaction does not indicate a model bias or other problem. | Residual Partial Dependence Plot | PartialDependenceExplanation / application/json |
The residual partial dependence plot (PDP) indicates which variables interact most with the error. Residuals are transformed differences between observed and predicted values: the square of the difference between observed and predicted values is used for regression problems; -1 * log(p) is used for classification problems. The residual partial dependence is created using the normal partial dependence algorithm, except that the residual is used in place of the prediction. The Individual Conditional Expectations (ICE) plot displays the interaction with the error for an individual row of data when an input variable is toggled across its domain.
Partial Dependence Plot for the feature 'ID' and class 'None (Regression)':
| Parameter | Value | Description | Type | Default value |
|---|---|---|---|---|
| sample_size | 25000 | Sample size for Partial Dependence Plot. | int | 25000 |
| max_features | 10 | Partial Dependence Plot number of features (to see all features used by model set to -1). | int | 10 |
| features | None | Partial Dependence Plot feature list. | list | None |
| oor_grid_resolution | 0 | Partial Dependence Plot number of out of range bins. | int | 0 |
| quantile-bin-grid-resolution | 0 | Partial Dependence Plot quantile binning (total quantile points used to create bins). | int | 0 |
| grid_resolution | 20 | Partial Dependence Plot observations per bin (number of equally spaced points used to create bins). | int | 20 |
| center | False | Center Partial Dependence Plot using ICE centered at 0. | bool | False |
| sort_bins | True | Ensure bin values sorting. | bool | True |
| histograms | True | Enable histograms. | bool | True |
| quantile-bins | | Per-feature quantile binning (Example: if choosing features F1 and F2, this parameter is '{"F1": 2,"F2": 5}'. Note, you can set all features to use the same quantile binning with the `Partial Dependence Plot quantile binning` parameter and then adjust the quantile binning for a subset of PDP features with this parameter). | str | |
| numcat_num_chart | True | Unique feature values count driven Partial Dependence Plot binning and chart selection. | bool | True |
| numcat_threshold | 11 | Threshold for Partial Dependence Plot binning and chart selection (<=threshold categorical, >threshold numeric). | int | 11 |
| debug_residuals | True | | | |
Shapley explanations are a technique with credible theoretical support that presents consistent global and local variable contributions. Local numeric Shapley values are calculated by tracing single rows of data through a trained tree ensemble and aggregating the contribution of each input variable as the row of data moves through the trained ensemble. For regression tasks, Shapley values sum to the prediction of the Driverless AI model. For classification problems, Shapley values sum to the prediction of the Driverless AI model before applying the link function. Global Shapley values are the average of the absolute Shapley values over every row of a dataset. Shapley values for original features are calculated with the Kernel Explainer method, which uses a special weighted linear regression to compute the importance of each feature. More information about Kernel SHAP is available at http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf.
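The two properties named above - local contributions summing to the prediction, and global values being averages of absolute local values - can be illustrated with a linear model, where Shapley values have a closed form (this is a sketch of the properties, not the Kernel Explainer itself, which estimates the same quantities for black-box models):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=200, n_features=4, random_state=0)
model = LinearRegression().fit(X, y)

# For a linear model the Shapley value of feature j for row i is exactly
# phi_ij = w_j * (x_ij - mean_j).
phi = model.coef_ * (X - X.mean(axis=0))

# Local property: contributions sum to prediction minus the average prediction.
assert np.allclose(phi.sum(axis=1), model.predict(X) - model.predict(X).mean())

# Global Shapley importance: mean of absolute local values per feature.
global_importance = np.abs(phi).mean(axis=0)
```

Kernel SHAP recovers these same values for arbitrary models via its weighted linear regression over feature coalitions.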
Feature importance for the class '1':

The most important original feature of the class '1' is MARRIAGE.

Original feature importances for the class '1':

| Feature | Importance |
|---|---|
| MARRIAGE | 0.5048248714839003 |
| BILL_AMT6 | 0.5032058119254781 |
| PAY_AMT1 | 0.5029552167108501 |
| PAY_4 | 0.5027351920100945 |
| PAY_AMT6 | 0.5010883391722856 |
| PAY_AMT3 | 0.5004985354974225 |
| PAY_0 | 0.500284473535244 |
| BILL_AMT1 | 0.5002006452352199 |
| LIMIT_BAL | 0.4997530831820978 |
| PAY_3 | 0.49947839694224516 |
| AGE | 0.4993569130311947 |
| PAY_AMT4 | 0.49910010407530536 |
| BILL_AMT5 | 0.4990933644743017 |
| ID | 0.49842250047515246 |
| BILL_AMT4 | 0.4982346212625947 |
| default payment next month | 0.497985070650521 |
| PAY_AMT2 | 0.49785012302559956 |
| BILL_AMT2 | 0.4974761425230914 |
| PAY_5 | 0.49722928324889265 |
| PAY_6 | 0.4971471089510063 |
| BILL_AMT3 | 0.4969625066178144 |
| PAY_AMT5 | 0.49647477937225026 |
| EDUCATION | 0.49621047003623286 |
| PAY_2 | 0.4932958341860075 |
| Parameter | Value | Description | Type | Default value |
|---|---|---|---|---|
| sample_size | 100000 | Sample size. | int | 100000 |
| sample | True | Sample Kernel Shapley. | bool | True |
| nsample | | Number of times to re-evaluate the model when explaining each prediction with the Kernel Explainer; 'auto' or int. More samples lead to lower variance estimates of the SHAP values. The 'auto' setting uses nsamples = 2 * X.shape[1] + 2048. This setting is disabled by default and the runtime determines the right number internally. | int | |
| L1 | auto | L1 regularization for the Kernel Explainer: 'num_features(int)', 'auto' (default for now, but deprecated), 'aic', 'bic', or float. The L1 regularization to use for feature selection (the estimation procedure is based on a debiased lasso). The 'auto' option currently uses 'aic' when less than 20% of the possible sample space is enumerated, otherwise it uses no regularization. The 'aic' and 'bic' options use the AIC and BIC rules for regularization. Using 'num_features(int)' selects a fixed number of top features. Passing a float directly sets the alpha parameter of the sklearn.linear_model.Lasso model used for feature selection. | str | auto |
| max runtime | 900 | Max runtime for the Kernel explainer in seconds. | int | 900 |
| fast_approx | True | Speed up predictions with fast predictions approximation. | bool | True |
| leakage_warning_threshold | 0.95 | The threshold above which to report a potentially detected feature importance leak problem. | float | 0.95 |
Partial dependence for 2 features portrays the average prediction behavior of a model across the domains of two input variables, i.e. the interaction of feature tuples with the prediction. While PD for one feature produces a 2D plot, PD for two features produces a 3D plot. This explainer plots PD for two features using a heatmap, 3D contour, or 3D surface.
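The 2-feature case extends the 1-feature averaging to a grid of value pairs (a simplified sketch under scikit-learn assumptions, not the explainer's implementation):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=3, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

def partial_dependence_2d(model, X, f1, f2, grid_resolution=10):
    """Average prediction over a 2-D grid of values for features f1 and f2."""
    g1 = np.linspace(X[:, f1].min(), X[:, f1].max(), grid_resolution)
    g2 = np.linspace(X[:, f2].min(), X[:, f2].max(), grid_resolution)
    heatmap = np.empty((grid_resolution, grid_resolution))
    for i, v1 in enumerate(g1):
        for j, v2 in enumerate(g2):
            X_mod = X.copy()
            X_mod[:, f1] = v1       # toggle both features for every row
            X_mod[:, f2] = v2
            heatmap[i, j] = model.predict(X_mod).mean()
    return g1, g2, heatmap

g1, g2, heatmap = partial_dependence_2d(model, X, 0, 1)
```

The resulting matrix is exactly what the heatmap plot type renders; the 3D contour and surface plots show the same values as elevation.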
Partial dependence plot for features 'ID' and 'LIMIT_BAL':
| Parameter | Value | Description | Type | Default value |
|---|---|---|---|---|
| sample_size | 25000 | Sample size for Partial Dependence Plot of 2 features. | int | 25000 |
| max_features | 3 | Partial Dependence Plot number of features. | int | 3 |
| features | None | List of features from which to choose pairs to compute PD for two features. | list | None |
| grid_resolution | 10 | Partial Dependence Plot observations per bin (number of equally spaced points used to create bins). | int | 10 |
| oor_grid_resolution | 0 | Partial Dependence Plot number of out of range bins. | int | 0 |
| quantile-bin-grid-resolution | 0 | Partial Dependence Plot quantile binning (total quantile points used to create bins). | int | 0 |
| plot_type | heatmap | Plot type. | str | heatmap |
Friedman's H-statistic describes the amount of variance explained by a feature pair. It is expressed as a graph where the most important original features are nodes and the interaction scores are edges. When features interact with each other, the influence of the features on the prediction does not have to be additive but can be more complex; for instance, the combined contribution might be greater than the sum of the individual contributions. Friedman's H-statistic calculation is computationally intensive and typically takes a long time to finish - the duration grows with the number of features and bins.
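The statistic compares the joint partial dependence with the sum of the individual ones: H²_jk = Σ [PD_jk - PD_j - PD_k]² / Σ PD_jk², with all PD arrays mean-centered. A hand-rolled illustration of this formula (not the explainer's implementation):

```python
import numpy as np

def h_statistic_squared(pd_jk, pd_j, pd_k):
    """Friedman's H^2 from partial dependence arrays on a shared grid.

    pd_jk: 2-D PD for the feature pair; pd_j, pd_k: 1-D PDs per feature.
    """
    pd_jk = pd_jk - pd_jk.mean()       # mean-center all PD arrays
    pd_j = pd_j - pd_j.mean()
    pd_k = pd_k - pd_k.mean()
    interaction = pd_jk - pd_j[:, None] - pd_k[None, :]
    return (interaction ** 2).sum() / (pd_jk ** 2).sum()

grid = np.array([-1.0, 0.0, 1.0])
# Additive model f(x, y) = x + y: no interaction, so H^2 == 0.
additive = grid[:, None] + grid[None, :]
# Multiplicative model f(x, y) = x * y on a centered grid: pure interaction, H^2 == 1.
multiplicative = np.outer(grid, grid)

assert h_statistic_squared(additive, grid, grid) < 1e-12
assert abs(h_statistic_squared(multiplicative, np.zeros(3), np.zeros(3)) - 1.0) < 1e-12
```

In practice each PD array is estimated by the sampling and binning described in the partial dependence sections above, which is why the cost grows with features and bins.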
Feature importance for the class 'None (Regression)':
| Parameter | Value | Description | Type | Default value |
|---|---|---|---|---|
| features_number | 4 | Number of features for which to calculate the H-statistic. | int | 4 |
| grid_resolution | 3 | Observations per bin (number of equally spaced points used to create bins). | int | 3 |
| features | None | Feature list - at least 2 features must be selected. | multilist | None |
| sample_size | 25000 | Sample size for the Partial Dependence Plot. | int | 25000 |
The explainer checks the dataset and model for various issues. For example, it reports problems and suggested actions for missing values in the target column or for a low number of unique values across the columns of a dataset.
Dataset summary:

| Property | Value |
|---|---|
| Frame class | `<class 'datatable.Frame'>` |
| Dataset | data/predictive/creditcard.csv |
| Size | 944719 B |
| Shape | (10000, 25) |
| Rows | 10000 |

| Column | Type | Unique values |
|---|---|---|
| ID | int | 10000 |
| LIMIT_BAL | int | 72 |
| SEX | int | 2 |
| EDUCATION | int | 7 |
| MARRIAGE | int | 4 |
| AGE | int | 54 |
| PAY_0 | int | 11 |
| PAY_2 | int | 11 |
| PAY_3 | int | 11 |
| PAY_4 | int | 11 |
| PAY_5 | int | 10 |
| PAY_6 | int | 10 |
| BILL_AMT1 | int | 8371 |
| BILL_AMT2 | int | 8215 |
| BILL_AMT3 | int | 8072 |
| BILL_AMT4 | int | 7913 |
| BILL_AMT5 | int | 7764 |
| BILL_AMT6 | int | 7550 |
| PAY_AMT1 | int | 3763 |
| PAY_AMT2 | int | 3581 |
| PAY_AMT3 | int | 3305 |
| PAY_AMT4 | int | 3247 |
| PAY_AMT5 | int | 3258 |
| PAY_AMT6 | int | 3174 |
| default payment next month | bool | 2 |

Model summary:

| Property | Value |
|---|---|
| Model type | mock |
| Experiment type | binomial |
| Target column | SEX |
| Number of labels | 1 |
| Labels | [1] |
| Used features | ID, LIMIT_BAL, EDUCATION, MARRIAGE, AGE, PAY_0, PAY_2, PAY_3, PAY_4, PAY_5, PAY_6, BILL_AMT1, BILL_AMT2, BILL_AMT3, BILL_AMT4, BILL_AMT5, BILL_AMT6, PAY_AMT1, PAY_AMT2, PAY_AMT3, PAY_AMT4, PAY_AMT5, PAY_AMT6, default payment next month |
| Transformed features | [] |
Interpretation parameters (raw value/type dump; parameter names were not preserved in this export):

```
<class 'tests.lib.test_containers.SimpleMockModel'>
any
None
any
data/predictive/creditcard.csv
any
None
str
None
str
True
bool
SEX
str
str
str
[]
list
[]
0
int
0
/tmp/pytest-of-dvorka/pytest-35/test_all_explainers0
str
None
list
None
```
| Config parameter | Value | Description | Type | Default value |
|---|---|---|---|---|
| h2o_host | localhost | The host of the H2O-3 server that should be used for the explanation that requires it. | str | localhost |
| h2o_port | 54321 | The port of the H2O-3 server that should be used for the explanation that requires it. | int | 12349 |
| h2o_auto_start | False | Automatically start the H2O-3 server on the interpretation start (True), or do not start the server (False). | bool | True |
| h2o_auto_cleanup | True | Automatically remove all data from the H2O-3 server on the interpretation end (True), or do not remove all data from the server (False). | bool | True |
| h2o_auto_stop | False | Automatically stop the H2O-3 server on the interpretation end (True), or do not stop the server (False). | bool | False |
| h2o_min_mem_size | 2G | Minimum memory specification for the H2O-3 server started by H2O Sonar. | int | 2G |
| h2o_max_mem_size | 4G | Maximum memory specification for the H2O-3 server started by H2O Sonar. | int | 4G |
| custom_explainers | [] | List of custom "Bring Your Own Explainer" string locators to be registered on the H2O Sonar run. The locator has the following structure: "[PACKAGE and MODULE]::[EXPLAINER-CLASS-NAME]", where PACKAGE and MODULE is a dot (.) separated path to the module (installed on PYTHONPATH) and EXPLAINER-CLASS-NAME is the name of the explainer class. Example: ["my_package.explainer_module::MyExplainerClass", "their_package.explainer_module::TheirExplainerClass"] | customlist | [] |
| look_and_feel | h2o_sonar | Charts theme (look and feel) - one of: 'h2o_sonar', 'blue', 'driverless_ai'. | str | h2o_sonar |
| device | cpu | Device to be used for the calculations. The value of this configuration item might be ``cpu`` or ``gpu``. | str | |
| enable_slow_perturbators | False | Enable slow (agent-based, model-based, resource-intensive) perturbators, which are by default skipped and not listed. | bool | False |
| force_eval_judge | false | Force the use of a custom evaluation judge for the evaluation of the models over the judges used by evaluators by default - for example, to use a local judge in order to avoid sending sensitive data to a 3rd party or to the cloud. The value of this configuration item might be ``false``, ``true``, or the configuration key of the custom evaluation judge. Forcing the use of a custom evaluation judge will automatically reconfigure the embeddings calculation in evaluations to a local model to ensure privacy safety. | str | false |
| multiprocessing_start_method | spawn | Multiprocessing start method - one of: 'spawn', 'fork', 'forkserver' or `None` (default). | str | spawn |
| model_cache_dir | /home/dvorka/.cache/h2o_sonar/models | Directory where the models are cached. If not specified, the models are cached in a default directory in the user home which follows operating system conventions. | str | /home/dvorka/.cache/h2o_sonar/models |
| http_ssl_cert_verify | True | SSL certificate verification for HTTPS requests. If set to ``false``, SSL certificate verification is disabled. If set to ``true``, SSL certificate verification is enabled. If set to the path (string) of a ``CA_BUNDLE`` file or a directory with certificates of trusted CAs, those will be used for the verification (in this case the directory must have been processed using the c_rehash utility supplied with OpenSSL). | str | true |
| branding | | Branding for HTML reports. Valid values: 'H2O_SONAR', 'EVAL_STUDIO', or '' (empty for auto). | str | |
| per_explainer_logger | True | Create a new logger for each explainer (which logs to the explainer sandbox), or reuse one logger and use the library logger for all log messages. | bool | True |
| create_html_representations | True | Indicate that explainers can create HTML representations (True), or request to skip them (False) for performance/resource consumption reasons. | bool | True |
| connections | [] | | | |
| licenses | [] | | | |
| evaluation_judges | [] | | | |
Directories and files: