7/2/2023

Conda upgrade package

This feature is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.

For more information on the supported interpretability techniques and machine learning models, see Model interpretability in Azure Machine Learning and the sample notebooks.

For guidance on how to enable interpretability for models trained with automated machine learning, see Interpretability: model explanations for automated machine learning models (preview).

Generate feature importance values on your personal machine

The following example shows how to use the interpretability package on your personal machine without contacting Azure services.

Train a sample model in a local Jupyter Notebook:

```python
# load breast cancer dataset, a well-known small dataset that comes with scikit-learn
from sklearn.datasets import load_breast_cancer
from sklearn import svm
from sklearn.model_selection import train_test_split

breast_cancer_data = load_breast_cancer()
classes = breast_cancer_data.target_names.tolist()

# split data into train and test
x_train, x_test, y_train, y_test = train_test_split(breast_cancer_data.data,
                                                    breast_cancer_data.target,
                                                    test_size=0.2,
                                                    random_state=0)

clf = svm.SVC(gamma=0.001, C=100., probability=True)
model = clf.fit(x_train, y_train)
```

To initialize an explainer object, pass your model and some training data to the explainer's constructor. To make your explanations and visualizations more informative, you can choose to pass in feature names and output class names if doing classification.

The following code blocks show how to instantiate an explainer object with TabularExplainer, MimicExplainer, and PFIExplainer locally.

TabularExplainer calls one of the three SHAP explainers underneath (TreeExplainer, DeepExplainer, or KernelExplainer). TabularExplainer automatically selects the most appropriate one for your use case, but you can call each of its three underlying explainers directly.

```python
from interpret.ext.blackbox import TabularExplainer

# "features" and "classes" fields are optional
explainer = TabularExplainer(model,
                             x_train,
                             features=breast_cancer_data.feature_names,
                             classes=classes)
```

Or, to use a global surrogate model with MimicExplainer:

```python
from interpret.ext.blackbox import MimicExplainer

# you can use one of the following four interpretable models as a global surrogate to the black box model
from interpret.ext.glassbox import LGBMExplainableModel
from interpret.ext.glassbox import LinearExplainableModel
from interpret.ext.glassbox import SGDExplainableModel
from interpret.ext.glassbox import DecisionTreeExplainableModel

# augment_data is optional and if true, oversamples the initialization examples to improve
# surrogate model accuracy to fit the original model. Useful for high-dimensional data where
# the number of rows is less than the number of columns.
# max_num_of_augmentations is optional and defines the max number of times we can increase
# the input data size.
# LGBMExplainableModel can be replaced with LinearExplainableModel, SGDExplainableModel,
# or DecisionTreeExplainableModel
explainer = MimicExplainer(model,
                           x_train,
                           LGBMExplainableModel,
                           augment_data=True,
                           max_num_of_augmentations=10,
                           features=breast_cancer_data.feature_names,
                           classes=classes)
```

Or with PFIExplainer:

```python
from interpret.ext.blackbox import PFIExplainer

# "features" and "classes" fields are optional
explainer = PFIExplainer(model,
                         features=breast_cancer_data.feature_names,
                         classes=classes)
```

Explain the entire model behavior (global explanation)

Refer to the following example to help you get the aggregate (global) feature importance values:

```python
# you can use the training data or the test data here,
# but test data would allow you to use Explanation Exploration
global_explanation = explainer.explain_global(x_test)

# if you used the PFIExplainer in the previous step, use the next line of code instead
# global_explanation = explainer.explain_global(x_train, true_labels=y_train)

# sorted feature importance values and feature names
sorted_global_importance_values = global_explanation.get_ranked_global_values()
sorted_global_importance_names = global_explanation.get_ranked_global_names()
dict(zip(sorted_global_importance_names, sorted_global_importance_values))

# alternatively, you can print out a dictionary that holds the top K feature names and values
global_explanation.get_feature_importance_dict()
```

Explain an individual prediction (local explanation)

Get the individual feature importance values of different datapoints by calling explanations for an individual instance or a group of instances. PFIExplainer does not support local explanations.

```python
# get explanation for the first data point in the test set
local_explanation = explainer.explain_local(x_test[0:1])

# sorted feature importance values and feature names
sorted_local_importance_names = local_explanation.get_ranked_local_names()
sorted_local_importance_values = local_explanation.get_ranked_local_values()
```

Raw feature transformations

You can opt to get explanations in terms of raw, untransformed features rather than engineered features. For this option, you pass your feature transformation pipeline to the explainer in train_explain.py. Otherwise, the explainer provides explanations in terms of engineered features. The format of supported transformations is the same as described in sklearn-pandas. In general, any transformations are supported as long as they operate on a single column, so that it's clear they're one-to-many.
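The single-column transformation format can be sketched as a list of (column, transformer) tuples in the sklearn-pandas style. This is a minimal illustration only: the column names "age" and "job" are hypothetical (they are not from the breast cancer dataset above), and the final commented line shows where such a list would plausibly be handed to the explainer.

```python
from sklearn.preprocessing import StandardScaler, OneHotEncoder

# each entry transforms exactly one input column, so every engineered feature
# can be traced back to a single raw feature (a one-to-many mapping)
transformations = [
    (["age"], StandardScaler()),          # one raw column -> one scaled column
    (["job"], OneHotEncoder(handle_unknown="ignore")),  # one raw column -> many dummy columns
]

# the list would then be passed to the explainer's constructor, e.g.:
# explainer = TabularExplainer(model, x_train, transformations=transformations)
```

Because each tuple names a single column, the explainer can aggregate the importances of the engineered outputs back onto that raw feature.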