Developing neural networks and other machine learning (ML) models typically requires large amounts of data for training and testing. Because much of this data is historical, artificial intelligence (AI) models risk learning existing prejudices pertaining to gender, race, age, sexual orientation, and other attributes. This Advisor explores these and other data issues that can contribute to bias and inaccuracy in ML algorithms.
Alleviating Bias in AI Systems with Data Profiling and Synthetic Data Sets
By Curt Hall
Posted May 18, 2021 | Technology