Neural networks and other machine learning (ML) models typically rely on large amounts of data for training and testing. Because much of this data is historical, AI models risk learning existing prejudices pertaining to gender, race, age, sexual orientation, and other characteristics. This Advisor explores how the Data & Trust Alliance consortium created an initiative to help end-user organizations evaluate vendors offering AI-based solutions on their ability to detect, mitigate, and monitor algorithmic bias over the lifecycle of their products.
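To make the idea of "detecting" algorithmic bias concrete, here is a minimal sketch of one widely used check: comparing a model's selection rates across demographic groups (demographic parity). The group labels, data, and the 0.8 threshold mentioned in the comments are illustrative assumptions, loosely following the "four-fifths rule" used in employment-discrimination analysis; they are not drawn from the article.

```python
def selection_rates(predictions, groups):
    """Fraction of positive (e.g., 'hire') predictions per group."""
    rates = {}
    for pred, grp in zip(predictions, groups):
        totals = rates.setdefault(grp, [0, 0])  # [positives, count]
        totals[0] += pred
        totals[1] += 1
    return {g: pos / n for g, (pos, n) in rates.items()}

def disparate_impact_ratio(predictions, groups):
    """Min selection rate divided by max selection rate.

    Values well below ~0.8 (the 'four-fifths rule' heuristic) are often
    treated as a flag for potential disparate impact.
    """
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

# Illustrative data: group A is selected at 3/4, group B at 1/4.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disparate_impact_ratio(preds, groups))  # 0.333... -> well below 0.8
```

In practice, vendors and auditors compute metrics like this (among many others) at each stage of a model's lifecycle, which is the kind of capability the evaluation criteria discussed here are meant to probe.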
Advisor
Alleviating Algorithmic Bias in AI-Powered HR & Workforce Management Systems
By Curt Hall
Posted December 14, 2021 | Leadership | Technology