Roland Meertens

Article originally posted on InfoQ.
At the QCon London conference, Mehrnoosh Sameki, principal product manager at Microsoft, delivered a talk on “Responsible AI: From Principle to Practice”. She outlined six key principles for responsible AI, detailed the four essential building blocks for implementing these principles, and introduced the audience to useful tools such as Fairlearn, InterpretML, and the Responsible AI dashboard.
Mehrnoosh Sameki opted for the term “Responsible AI” over alternatives such as “Ethical AI” and “Trusted AI”. She believes that Responsible AI embodies a more holistic and proactive approach that is widely shared among the community, and that those working in this field should demonstrate empathy, humility, and a helpful attitude. As the AI landscape evolves at a rapid pace and companies accelerate their adoption of AI technologies, societal expectations will shift and regulations will emerge. It is thus becoming best practice to give individuals the right to inquire about the rationale behind AI-driven decisions.
Mehrnoosh outlined Microsoft’s Responsible AI principles, which are based on six fundamental aspects:
1. Fairness
2. Reliability and safety
3. Privacy and security
4. Inclusiveness
5. Transparency
6. Accountability
She also outlined four building blocks she deemed essential to effectively implement these principles: “tools and processes”, “training and practices”, “rules”, and “governance”. In the presentation, she focused mostly on the tools, processes, and practices around responsible AI.
The importance of fairness is best understood through the harms it prevents. Examples include different qualities of service for different groups of people, such as voice recognition systems that perform worse for some genders, or loan-eligibility decisions that take skin tone into account. It is crucial to evaluate the possibility of these harms and understand their implications. To address fairness, Microsoft developed Fairlearn, a tool that enables assessment through evaluation metrics and visualizations, as well as mitigation using fairness criteria and algorithms.
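As a rough sketch of what such an assessment and mitigation could look like (the dataset and the gender attribute below are synthetic placeholders, not taken from the talk), Fairlearn’s MetricFrame disaggregates metrics per group, and its reduction algorithms retrain a model under a fairness constraint:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Synthetic stand-in for a real dataset; the "gender" attribute is a
# hypothetical placeholder for illustration only.
X, y = make_classification(n_samples=1000, random_state=0)
gender = np.random.default_rng(0).choice(["female", "male"], size=1000)

model = LogisticRegression().fit(X, y)
y_pred = model.predict(X)

# Assessment: disaggregate metrics per group to surface quality-of-service gaps.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y, y_pred=y_pred, sensitive_features=gender,
)
print(frame.by_group)      # per-group accuracy and selection rate
print(frame.difference())  # largest between-group gap per metric

# Mitigation: retrain under a demographic-parity constraint.
mitigator = ExponentiatedGradient(
    LogisticRegression(), constraints=DemographicParity()
)
mitigator.fit(X, y, sensitive_features=gender)
y_pred_mitigated = mitigator.predict(X)
```

Demographic parity is used as the fairness criterion here; Fairlearn also supports others, such as equalized odds.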
InterpretML is another useful tool aimed at understanding and debugging AI algorithms. It covers both glassbox models, such as explainable boosting machines, and so-called “opaquebox” (black-box) explanations. This allows users to see what drives predictions and determine the top-k factors impacting them. InterpretML also offers counterfactuals as a powerful debugging tool, enabling users to ask questions like, “What can I do to get a different outcome from the AI?”. Counterfactuals give a machine learning engineer insight into how far certain samples are from the decision boundary, and which features are most likely to “flip” a decision. For example, if samples with the gender feature switched suddenly receive a different prediction, that could indicate an unwanted bias in the model.
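A minimal sketch of the glassbox workflow, using synthetic data for illustration: an explainable boosting machine is trained, and its explanations surface the top factors behind predictions, both globally and per sample:

```python
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import make_classification

# Synthetic data as a stand-in for a real problem.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

# Glassbox model: an explainable boosting machine (EBM).
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X, y)

# Global explanation: which features drive predictions overall.
show(ebm.explain_global())

# Local explanation: top factors behind individual predictions.
show(ebm.explain_local(X[:5], y[:5]))
```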
Mehrnoosh also gave a demo of Microsoft’s Responsible AI dashboard. The analysis of errors in predictions is vital for ensuring reliability and safety. The tool provides insights into the various factors leading to errors, and allows you to create cohorts to dive deeper into causes of bias and errors.
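As an illustrative sketch (using the open-source responsibleai and raiwidgets packages; the data and model below are synthetic stand-ins), the dashboard is assembled by registering the desired analyses on an RAIInsights object and computing them:

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

# Synthetic tabular data as a placeholder for a real dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
cols = [f"f{i}" for i in range(5)]
train_df = pd.DataFrame(X[:400], columns=cols).assign(label=y[:400])
test_df = pd.DataFrame(X[400:], columns=cols).assign(label=y[400:])

model = RandomForestClassifier(random_state=0).fit(train_df[cols], train_df["label"])

rai_insights = RAIInsights(
    model=model, train=train_df, test=test_df,
    target_column="label", task_type="classification",
)
rai_insights.explainer.add()       # model explanations
rai_insights.error_analysis.add()  # error cohorts and heat maps
rai_insights.counterfactual.add(total_CFs=10, desired_class="opposite")
rai_insights.compute()

ResponsibleAIDashboard(rai_insights)  # launches the interactive dashboard
```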
Mehrnoosh Sameki also discussed the potential dangers of large language models such as GPT-3, which are used for zero-shot, one-shot, and few-shot learning, in the context of responsible AI for generative AI. Some considerations for responsible AI in this context include:
1. Discrimination, hate speech, and exclusion. It is easy for models to generate such content.
2. Hallucination: the generation of unintentional misinformation. Models generate text; they are not knowledge engines.
3. Information hazards: models can leak information in unintended ways.
4. Malicious use by bad actors to automatically generate text.
5. Environmental and socioeconomic harms.
To address these challenges, Sameki proposed several solutions for improving AI-generated output (a few of which are illustrated in the sketch after this list):
1. Provide clearer instructions to the model; this is something individual users can do themselves.
2. Break complex tasks into simpler subtasks, which large language models handle more reliably.
3. Structure instructions to keep the model focused on the task.
4. Prompt the model to explain its reasoning before answering.
5. Request justifications for multiple possible answers and synthesize them.
6. Generate numerous outputs and use the model to select the best one.
7. Fine-tune custom models to maximize performance and align with responsible AI practices.
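A minimal sketch of what some of these recommendations can look like in practice, using the OpenAI Python client; the model name, prompts, and helper function are illustrative assumptions rather than examples from the talk:

```python
from openai import OpenAI  # assumes the openai package; API details vary by version

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4"    # illustrative model name

def ask(system: str, user: str, n: int = 1) -> list[str]:
    """Small helper around a chat completion call."""
    response = client.chat.completions.create(
        model=MODEL,
        n=n,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return [choice.message.content for choice in response.choices]

# Structured instructions that ask the model to reason before answering (3, 4).
system = (
    "You are a careful assistant. First list your reasoning step by step "
    "under 'Reasoning:', then give the final answer under 'Answer:'."
)

# A complex task broken into simpler subtasks, run one at a time (2).
summary = ask(system, "Summarize this support ticket: ...")[0]
category = ask(system, f"Classify this summary as billing/bug/other:\n{summary}")[0]

# Generate several candidate outputs, then let the model pick the best one (6).
candidates = ask(system, f"Draft a reply for this {category} ticket:\n{summary}", n=3)
numbered = "\n\n".join(f"[{i}] {c}" for i, c in enumerate(candidates))
best = ask(system, f"Which candidate reply is most helpful and safe?\n{numbered}")[0]
```

Having the same model review its own candidate outputs is a cheap form of self-selection; a separate, stronger model could also be used for the final pick.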
To explore Mehrnoosh Sameki’s work on Responsible AI, consider visiting the following resources:
Microsoft’s Responsible AI Dashboard. This impressive tool allows users to visualize the different factors that contribute to errors in AI systems.
Responsible AI Mitigations Library and Responsible AI Tracker. These newly launched open-source tools provide guidance on mitigating potential risks and tracking progress in the development of Responsible AI.
Fairlearn. This toolkit helps assess and improve fairness in AI systems, providing both evaluation metrics and visualization capabilities as well as mitigation algorithms.
InterpretML. This tool aims to make machine learning models more understandable and explainable, offering insights and debugging capabilities for both glassbox models and opaquebox explainers.
Microsoft’s Responsible AI Guidelines
Last but not least: her talk “Responsible AI: From Principle to Practice”.