Artificial intelligence is increasingly involved in high-stakes business processes such as credit scoring and screening CVs to identify ideal candidates. As a result, AI and its outputs are naturally coming under the microscope. The key question worrying implementers: is the algorithm biased?
Bias can creep in through a number of routes, including sampling practices that ignore large swaths of the population, and confirmation bias, where a data scientist includes only datasets that conform to their view of the world.
Here are several ways data scientists are tackling the problem.
1. Understand the potential for bias in AI
Supervised learning, a common branch of AI, works by ingesting labeled training data. Having learned under "supervision," a trained algorithm then makes decisions on data it has never seen before. Per the "garbage in, garbage out" principle, the quality of an AI's decisions can only be as good as the data it ingests.
Data scientists must evaluate their data to ensure it is an unbiased representation of the real-world population it stands in for. Diverse data teams also help counter confirmation bias.
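One basic way to evaluate a dataset's representativeness, sketched below, is to compare each group's share of the sample against its known share of the population. The group labels, counts, and population shares here are hypothetical, purely for illustration:

```python
from collections import Counter

def representation_gap(sample_groups, population_shares):
    """Compare each group's share of a sample against its known
    population share. Returns group -> (sample_share - population_share);
    large negative values flag under-represented groups."""
    counts = Counter(sample_groups)
    total = len(sample_groups)
    return {
        group: counts.get(group, 0) / total - share
        for group, share in population_shares.items()
    }

# Hypothetical hiring dataset: one group label per applicant record.
sample = ["A"] * 700 + ["B"] * 250 + ["C"] * 50
population = {"A": 0.60, "B": 0.30, "C": 0.10}

gaps = representation_gap(sample, population)
# Group C is 5% of the sample but 10% of the population: under-sampled.
```

A check like this only catches gaps in attributes you thought to measure; it cannot reveal bias along dimensions absent from the data.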
2. Increase transparency
AI remains challenged by the inscrutability of its processes. Deep learning algorithms, for example, use neural networks modeled after the human brain to arrive at decisions. But exactly how they get there remains unclear.
“Part of the movement toward ‘explainable AI’ is to shine a light on how models are trained and which algorithms you use,” said Jonathon Wright, Chief Technology Evangelist at Keysight Technologies, a test and measurement technology provider.
While making AI explainable does not entirely prevent bias, understanding the cause of a bias is a critical step. Transparency is especially important when companies use AI programs from third-party vendors.
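One simple, model-agnostic explainability technique is permutation importance: shuffle one feature's values and measure how much the model's score drops. The toy model and data below are illustrative assumptions, not anything from the article; the sketch just shows the idea:

```python
import random

def permutation_importance(model, X, y, feature_idx, metric,
                           n_repeats=10, seed=0):
    """Estimate a feature's importance by shuffling its column and
    measuring how much the model's score degrades on average."""
    rng = random.Random(seed)
    baseline = metric(y, [model(row) for row in X])
    drops = []
    for _ in range(n_repeats):
        col = [row[feature_idx] for row in X]
        rng.shuffle(col)
        X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, col)]
        drops.append(baseline - metric(y, [model(row) for row in X_perm]))
    return sum(drops) / n_repeats

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy "model" that only looks at feature 0; feature 1 is ignored.
model = lambda row: 1 if row[0] > 0.5 else 0
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]

imp0 = permutation_importance(model, X, y, 0, accuracy)
imp1 = permutation_importance(model, X, y, 1, accuracy)
# imp1 is exactly 0: the model never uses feature 1.
```

If a protected attribute (or a close proxy for one) shows high importance, that is a red flag worth investigating before deployment.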
3. Institute Standards
When deploying AI, organizations must follow a framework that will standardize production while ensuring ethical models, Wright said.
Wright pointed to the European Union’s Artificial Intelligence Act as a game-changer in the effort to rid the technology of bias.
4. Test models before and after deployment
Testing AI and machine learning models is one way to avoid bias before releasing the algorithms into the wild.
Software tools designed specifically for this purpose are becoming increasingly common. “That’s where the industry is going right now,” Wright said.
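A pre-deployment bias test can be as simple as computing a fairness metric and failing the release if it exceeds a threshold. The sketch below uses demographic parity difference (the gap in positive-decision rates across groups); the predictions, groups, and threshold are hypothetical assumptions, not a metric or cutoff the article's sources endorse:

```python
def demographic_parity_difference(y_pred, groups):
    """Gap between the highest and lowest positive-prediction rate
    across groups; 0 means every group is approved at the same rate."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

# Hypothetical release gate: binary decisions plus group membership.
y_pred = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(y_pred, groups)
THRESHOLD = 0.2  # illustrative tolerance, set by policy in practice
# Here gap = 0.75 - 0.25 = 0.5, so this model would fail the gate.
```

The same check can run on live predictions after deployment, since drift in incoming data can introduce bias that pre-release tests never saw.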
5. Use synthetic data
You want datasets that are representative of the entire population, but “just because you have real, real-world data doesn’t mean it’s unbiased,” Wright noted.
Indeed, there is a real risk of AI learning real-world biases. Synthetic data is one potential solution, said Harry Keen, CEO and co-founder of Hazy, a startup that creates synthetic data for financial institutions.
Synthetic datasets are statistically representative versions of real datasets and are often deployed when the original data is tied to privacy concerns.
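In its simplest form, a statistically representative synthetic column can be produced by fitting a distribution to the real column and sampling from it. The sketch below assumes a normal fit and made-up income figures; real synthetic-data tools such as Hazy's are far more sophisticated than this:

```python
import random
import statistics

def fit_and_sample(real_column, n, seed=0):
    """Fit a normal distribution to a real numeric column and draw
    n synthetic values from it. Note: sampling each column's marginal
    independently preserves per-column statistics but discards
    cross-column correlations -- one reason naive synthetic data
    is not a cure-all."""
    mu = statistics.mean(real_column)
    sigma = statistics.stdev(real_column)
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) for _ in range(n)]

# Hypothetical income column from a private dataset.
real_incomes = [32_000, 41_000, 38_500, 55_000, 47_250, 39_900]
synthetic = fit_and_sample(real_incomes, n=1000)
# The synthetic sample's mean tracks the real mean, but no real
# individual's record appears in it.
```

Crucially, a generator fitted to biased data reproduces that bias, which is why synthetic data helps most with privacy and only conditionally with fairness.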
Keen pointed out that using synthetic data to combat bias is “an open research topic” and that supplementing datasets – for example, introducing more women into models that vet job candidates – could introduce another type of bias.
Synthetic data is most successful with “lower-dimensional structured data,” Keen said. For more complex data such as imagery, “it can be a bit of a game of Whack-a-Mole, where you can fix one bias but introduce or amplify others. … Bias in data is a bit of a thorny problem.”
Still, it’s a problem that needs to be addressed, given that the technology is growing at an impressive 39.4% annually, according to a study by Zion Market Research.
About the Author: Poornima Apte is an engineer turned writer specializing in robotics, AI, IoT, 5G, and cybersecurity. Winner of a reporting award from the South Asian Journalists Association, Poornima loves learning and writing about new technologies and the people behind them. Her client list includes many B2B and B2C outlets that commission features, profiles, white papers, case studies, infographics, video scripts, and industry reports. Poornima is also a card-carrying member of the Cloud Appreciation Society.