MMS • Daniel Dominguez
Article originally posted on InfoQ.
Amazon has announced that Amazon SageMaker Clarify now supports online explainability, providing explanations for a machine learning model’s individual predictions in near real time on live endpoints.
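Online explainability is configured when the real-time endpoint is created rather than per request. The following is a minimal sketch, assuming a model named `my-model` already exists in SageMaker; the endpoint names, instance type, and SHAP baseline values are placeholders:

```python
import boto3

sagemaker = boto3.client("sagemaker")

# Attach a Clarify explainer to the endpoint configuration so that
# SHAP-based feature attributions are computed alongside predictions.
sagemaker.create_endpoint_config(
    EndpointConfigName="my-endpoint-config",
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "my-model",  # hypothetical existing model
            "InitialInstanceCount": 1,
            "InstanceType": "ml.m5.xlarge",
        }
    ],
    ExplainerConfig={
        "ClarifyExplainerConfig": {
            "ShapConfig": {
                # The baseline is a reference record that attributions
                # are measured against; these values are placeholders.
                "ShapBaselineConfig": {
                    "MimeType": "text/csv",
                    "ShapBaseline": "35,50000,1",
                }
            }
        }
    },
)

sagemaker.create_endpoint(
    EndpointName="my-endpoint",
    EndpointConfigName="my-endpoint-config",
)
```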
Amazon SageMaker Clarify provides machine learning developers with greater visibility into their training data and models so they can identify and limit bias and explain predictions. Biases are disparities in the training data, or in a model’s behavior when it makes predictions for different groups.
The data or algorithm used to train any model may contain biases. An ML model, for instance, may perform less well when making predictions about younger and older people if it was trained largely on data from middle-aged people. Identifying and quantifying biases in data and models gives developers the chance to address them.
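Clarify quantifies bias with a catalog of metrics. One of the simplest, the difference in proportions of labels (DPL), compares how often each group receives a positive label. The sketch below is a plain-Python illustration of the idea with toy data, not Clarify’s implementation:

```python
import numpy as np

def difference_in_positive_proportions(labels, group):
    """Difference in positive-label rates between group 0 and group 1.
    A value far from 0 suggests the training labels are imbalanced
    across the two groups."""
    labels = np.asarray(labels)
    group = np.asarray(group)
    return labels[group == 0].mean() - labels[group == 1].mean()

# Toy data: 1 = loan approved; group 1 is approved far less often.
labels = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(difference_in_positive_proportions(labels, group))  # 0.6
```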
The notions of bias and fairness are highly dependent on the application. Further, the choice of the attributes for which bias is to be measured, as well as the choice of the bias metrics, may need to be guided by social, legal, and other non-technical considerations.
ML models may weigh some input features more heavily than others when generating predictions. SageMaker Clarify provides scores detailing which features contributed most to a model’s individual prediction after the model has run on new data. These details can help determine whether a particular input feature influences the model’s predictions more than expected.
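With the explainer attached, these attributions can be requested per invocation. The sketch below assumes the hypothetical endpoint and record format from earlier; `EnableExplanations` takes a JMESPath expression, so explanations can be switched on for every record (as here) or only for records matching a condition, and the JSON response carries the SHAP attributions alongside the predictions (the exact schema depends on the model’s output):

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="my-endpoint",   # hypothetical endpoint from above
    ContentType="text/csv",
    Accept="application/json",
    Body="42,61000,0",            # one new record to score and explain
    EnableExplanations="`true`",  # JMESPath expression: explain every record
)

result = json.loads(response["Body"].read())
# The response contains both the prediction and the per-feature
# SHAP attributions for each record sent in the request.
print(result["predictions"])
print(result["explanations"])
```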
To understand why a model produces the predictions it does, SageMaker Clarify can also examine the significance of the model’s inputs. According to Amazon, the new feature reduces the latency of these explanations from minutes to seconds.
Depending on how machine learning systems are applied, machine learning biases can lead to illegal actions, reduced revenue or sales, and potentially poor customer service. Building consensus and achieving collaboration across key stakeholders such as product, policy, legal, engineering, and AI/ML teams, as well as end users and communities, is a prerequisite for the successful adoption of fairness-aware ML approaches in practice.