
XAI for Foundational Models

Foundational models, in this context, can be defined as models that are not based on neural networks. These include essential AI/ML models such as logistic regression (logit), k-nearest neighbors (KNN), and random forests. While these models are strong options for many AI/ML applications, they generally offer lower precision and accuracy than deep learning models. Their strength, however, lies in being naturally more explainable than their black-box counterparts.

For example, logistic regression identifies the significant features that contribute to predicting the target variable. Extracting those significant features immediately provides a certain level of explainability: users can pinpoint which features matter most. These features are also accompanied by weights, which raises explainability even further, as sketched below.

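As a minimal sketch of this idea, the fitted weights of a logistic regression can be read off directly as a first-pass explanation. The example below assumes scikit-learn, and the built-in breast cancer dataset stands in for real data.

# Minimal sketch: reading logistic regression weights as explanations.
# Assumes scikit-learn; the dataset choice is purely illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Standardize the features so the weight magnitudes are comparable.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

# List the five features with the largest absolute weights.
weights = model.named_steps["logisticregression"].coef_[0]
ranked = sorted(zip(X.columns, weights), key=lambda t: abs(t[1]), reverse=True)
for name, w in ranked[:5]:
    print(f"{name:25s} weight={w:+.3f}")

The sign of each weight indicates whether a feature pushes the prediction toward the positive class, and the magnitude indicates how strongly it does so.
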
That said, XAI for these model types continues to advance. There are also suggested forms of explainability for these model types that help developers articulate how users should interpret the model.

 

Regression Explainability Models/Techniques

Explainability models for regression are techniques that help users understand and interpret the results of a regression analysis. Here are some popular ones:

  1. Feature Importance: Feature importance helps to understand which variables, or features, have the most significant impact on the outcome variable. It can be measured using methods such as permutation feature importance or SHAP values and complemented with partial dependence plots (code sketches of this and other techniques in this list follow below).

  2. Partial Regression Plots: Partial regression plots help to visualize the relationship between an independent variable and the dependent variable while controlling for the effects of the other variables.

  3. Residual Plots: Residual plots help to check the assumptions of the regression model and to identify any non-linear relationships or heteroscedasticity.

  4. Influence Plots: Influence plots help to identify any influential observations or outliers that may have a significant impact on the regression model's results.

  5. Coefficient Plots: Coefficient plots help to understand the direction and magnitude of the relationship between the independent and dependent variables.

  6. Lasso Regression: Lasso regression is a type of regression whose L1 penalty shrinks the coefficients of unimportant features to exactly zero, helping to identify the most important features and remove the irrelevant ones (see the second sketch after this list).

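To make the first and fifth techniques above concrete, here is a hedged sketch using scikit-learn: a linear regression is fit, its coefficients are listed, and permutation feature importance measures how much the test score drops when each feature is shuffled. The diabetes dataset is only a stand-in for real data.

# Sketch of coefficient inspection and permutation feature importance
# (items 1 and 5 above). Assumes scikit-learn; the dataset is illustrative.
from sklearn.datasets import load_diabetes
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

reg = LinearRegression().fit(X_train, y_train)

# Coefficients: sign gives direction, magnitude gives strength
# (comparable here because the diabetes features are pre-scaled).
for name, coef in zip(X.columns, reg.coef_):
    print(f"{name:5s} coef={coef:+.1f}")

# Permutation importance: drop in R^2 when each feature is shuffled.
result = permutation_importance(reg, X_test, y_test, n_repeats=20, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, imp in ranked:
    print(f"{name:5s} importance={imp:.3f}")
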
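And a companion sketch of the sixth technique: the L1 penalty in LassoCV drives some coefficients to exactly zero, leaving a shorter list of features to explain. Again, the dataset and parameters are illustrative choices, not recommendations.

# Sketch of Lasso as a feature-selection aid (item 6 above).
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LassoCV

X, y = load_diabetes(return_X_y=True, as_frame=True)
lasso = LassoCV(cv=5, random_state=0).fit(X, y)

# Features whose coefficients survived the L1 penalty vs. those set to zero.
kept = [(name, coef) for name, coef in zip(X.columns, lasso.coef_) if coef != 0]
dropped = [name for name, coef in zip(X.columns, lasso.coef_) if coef == 0]
print("kept:", [f"{name} ({coef:+.1f})" for name, coef in kept])
print("dropped:", dropped)
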
 
Decision Trees Explainability Models/Techniques
 

Decision trees are particularly useful for explainability as they provide a clear and intuitive way of understanding how the algorithm makes predictions. Here are some explainability models for decision trees:

  1. Decision Tree Visualization: Decision tree visualization is a graphical representation of the decision tree that shows how the algorithm splits the data on different features and creates a hierarchy of decisions. This visualization helps to understand the logic behind the algorithm's predictions and can be used to identify any biases or overfitting issues (code sketches of several of these techniques follow this list).

  2. Feature Importance: Feature importance can be calculated for decision trees using methods such as Gini importance (also known as mean decrease in impurity). It helps to understand which features have the most significant impact on the model's predictions.

  3. Partial Dependence Plots: Partial dependence plots show the relationship between a specific feature and the model's predictions while holding all other features constant. They help to understand the impact of a feature on the model's predictions and can be used to identify any non-linear relationships.

  4. Decision Rules: Decision rules are a set of if-else statements that represent the logic behind the decision tree's predictions. They can be used to understand the decision-making process and provide an intuitive way of explaining the algorithm's predictions.

  5. Tree Pruning: Tree pruning is a technique used to remove unnecessary branches from the decision tree to reduce overfitting and improve generalization. Pruned trees are often simpler and easier to understand, making them more interpretable and explainable (see the pruning sketch after this list).

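The sketch below illustrates items 1, 2, and 4 above with scikit-learn's built-in utilities (plot_tree, feature_importances_, and export_text). The iris dataset and the depth limit are illustrative choices, and matplotlib is needed only for the drawing.

# Sketch of tree visualization, feature importances, and decision rules
# (items 1, 2, and 4 above). Assumes scikit-learn and matplotlib.
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text, plot_tree

data = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# 1. Visualization: draw the tree's hierarchy of splits.
plot_tree(clf, feature_names=data.feature_names, class_names=list(data.target_names), filled=True)
plt.show()

# 2. Feature importance (Gini importance / mean decrease in impurity).
for name, imp in zip(data.feature_names, clf.feature_importances_):
    print(f"{name:20s} importance={imp:.3f}")

# 4. Decision rules: the same tree written as nested if/else statements.
print(export_text(clf, feature_names=data.feature_names))
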
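Item 5 can be sketched with scikit-learn's cost-complexity pruning parameter ccp_alpha: larger values prune more branches, trading a little accuracy for a smaller, more readable tree. The alpha values below are arbitrary examples.

# Sketch of cost-complexity pruning (item 5 above). Assumes scikit-learn.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Larger ccp_alpha -> fewer leaves -> simpler, more explainable tree.
for alpha in [0.0, 0.01, 0.05]:
    tree = DecisionTreeClassifier(ccp_alpha=alpha, random_state=0).fit(X_train, y_train)
    print(f"ccp_alpha={alpha:.2f}  leaves={tree.get_n_leaves()}  "
          f"test accuracy={tree.score(X_test, y_test):.2f}")
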
 

Nearest Neighbor Explainability Models/Techniques
 

Nearest neighbor is a type of instance-based learning that uses the similarity between instances to make predictions. Here are some explainability models for nearest neighbor:

  1. Feature Importance: Feature importance can be calculated for nearest neighbor using various methods such as Permutation Feature Importance or SHAP values. It helps to understand which features have the most significant impact on the model's predictions.

  2. Instance Importance: Instance importance helps to identify the most influential instances in the model's predictions. It can be measured using various methods such as Leave-One-Out Importance or Influence Functions.

  3. Distance Metrics: Distance metrics such as Euclidean distance or Cosine similarity can be used to measure the similarity between instances. Understanding the distance metrics used in the model can help to identify any biases or limitations in the model.

  4. Nearest Neighbor Visualization: Nearest neighbor visualization is a graphical representation of the nearest neighbor algorithm that shows how it finds the instances closest to the query instance. This visualization helps to understand the logic behind the algorithm's predictions and can be used to identify any issues with the algorithm (a code sketch follows this list).

  5. Local Interpretable Model-Agnostic Explanations (LIME): LIME is a technique for explaining the predictions of machine learning models. It works by fitting a simple, interpretable model in the neighborhood of a specific instance and using it to explain the model's prediction for that instance (see the final sketch after this list).

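A hedged sketch of items 3 and 4 above: a KNN prediction can be explained by listing the neighbors that produced it, together with their distances under the chosen metric. It assumes scikit-learn; the wine dataset and the choice of Euclidean distance are illustrative.

# Sketch of explaining a KNN prediction via its nearest neighbors and
# distances (items 3 and 4 above). Assumes scikit-learn.
from sklearn.datasets import load_wine
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5, metric="euclidean"))
knn.fit(X, y)

# Explain a single prediction by listing the neighbors that produced it.
query = X[:1]
print("prediction:", knn.predict(query)[0])

scaled_query = knn.named_steps["standardscaler"].transform(query)
distances, indices = knn.named_steps["kneighborsclassifier"].kneighbors(scaled_query)
for dist, idx in zip(distances[0], indices[0]):
    print(f"neighbor #{idx}: label={y[idx]}, distance={dist:.2f}")

Showing users these concrete neighboring cases is often the most intuitive explanation a nearest neighbor model can give.
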
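Finally, a sketch of a LIME explanation for a single KNN prediction. It assumes the third-party lime package (in addition to scikit-learn), and the dataset, number of features, and other parameters are illustrative choices rather than recommendations.

# Sketch of a LIME explanation for one KNN prediction (item 5 above).
# Assumes the `lime` package (pip install lime) and scikit-learn.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_wine
from sklearn.neighbors import KNeighborsClassifier

data = load_wine()
knn = KNeighborsClassifier(n_neighbors=5).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# LIME fits a simple, interpretable model in the neighborhood of one instance;
# as_list() reports feature contributions for class index 1, LIME's default label.
explanation = explainer.explain_instance(data.data[0], knn.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature:35s} {weight:+.3f}")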
