Wednesday, August 29, 2018

Can AI Self-Police and Reduce Bias?

Concerns that AI-based systems may hard-code the biases that blight human decision-making have caused considerable consternation among researchers around the world. They have also prompted a number of attempts to overcome the challenge. For instance, I recently wrote about a new tool developed by Accenture to try to identify biases within AI systems.

The tool checks the data that feeds any AI-based system to determine whether sensitive variables have an impact on other variables. For instance, gender is usually correlated with profession, so even if a company removes gender from the data set, the model can still produce biased results if profession remains in the data set.
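To make the proxy-variable problem concrete, here is a minimal, hypothetical sketch (not Accenture's actual tool; the column names and data are invented for illustration). It measures how well a remaining column such as "profession" predicts a removed sensitive column such as "gender":

```python
# Hypothetical illustration: even after dropping a sensitive column
# ("gender"), a correlated proxy ("profession") can leak the same
# information. Data and column names are invented for this example.
from collections import Counter

records = [
    {"gender": "F", "profession": "nurse",    "hired": 0},
    {"gender": "F", "profession": "nurse",    "hired": 0},
    {"gender": "F", "profession": "engineer", "hired": 1},
    {"gender": "M", "profession": "engineer", "hired": 1},
    {"gender": "M", "profession": "engineer", "hired": 1},
    {"gender": "M", "profession": "nurse",    "hired": 0},
]

def proxy_strength(rows, sensitive, candidate):
    """Fraction of rows where the candidate column's majority sensitive
    value matches the row's actual sensitive value, i.e. how well the
    candidate column predicts the sensitive one."""
    # Group sensitive values by each candidate value.
    by_value = {}
    for r in rows:
        by_value.setdefault(r[candidate], []).append(r[sensitive])
    # The majority sensitive value for each candidate value.
    majority = {v: Counter(labels).most_common(1)[0][0]
                for v, labels in by_value.items()}
    hits = sum(1 for r in rows if majority[r[candidate]] == r[sensitive])
    return hits / len(rows)

score = proxy_strength(records, "gender", "profession")
print(f"profession predicts gender in {score:.0%} of rows")  # → 67%
```

A score well above the base rate of the majority class signals that the candidate column is a proxy for the sensitive one, so simply deleting the sensitive column is not enough.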



from DZone.com Feed https://ift.tt/2BSx5u1
