AI Fairness: Measuring and mitigating social bias in automated decision-making models
Data Analytics, Technology, Data Science
Glasgow
Automated models are extremely efficient at predicting human behaviour. Unfortunately, they have a tendency to discriminate on the basis of characteristics protected by law. Despite these adverse outcomes, there are ways to detect and mitigate social bias. This, however, comes at a cost – the trade-off between fairness and an AI model’s accuracy is the most striking one. Fairer AI models might not be as efficient, likely leading to a fall in profits.
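As a minimal sketch of what "detecting social bias" can mean in practice, the snippet below computes demographic parity difference – the gap in positive-prediction rates between groups defined by a protected characteristic. The function name and data are illustrative assumptions, not taken from the talk.

```python
# Illustrative sketch (not the speaker's method): one common fairness
# metric, demographic parity difference.

def demographic_parity_difference(predictions, groups):
    """Gap in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels (e.g. "A"/"B") for a protected characteristic
    """
    rates = {}
    for g in set(groups):
        group_preds = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(group_preds) / len(group_preds)
    low, high = sorted(rates.values())
    return high - low  # 0.0 means equal positive rates across groups

# Hypothetical example: group "B" is approved twice as often as group "A".
preds = [1, 0, 0, 0, 1, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.25
```

Mitigation techniques then try to shrink this gap, which is exactly where the fairness–accuracy trade-off mentioned above appears.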
Are private organisations expected to correct and pay for historical developments in society? Or is this the role of the regulator? The key to success probably lies somewhere in between – a collaboration between governments and businesses. First, we must settle on what is fair.
Come to the talk to find out what goes on behind the AI Fairness scenes!
Speaker Bios
Vlad Fojtik, Data Scientist at Aggreko
Vlad is a Data Scientist with experience in the banking and energy industries. Vlad wrote an award-winning Consultancy Project report, highlighting ways to detect and mitigate social bias in automated decision-making processes.
He previously acted as a Model Fairness Champion in a UK bank, where he was able to apply his expertise.
As a keen reader of economic, political, and behavioural literature, Vlad highlights the perils of Ethical AI, while arguing that it is extremely unfair to punish private organisations for not deploying fair Machine Learning models.
His other hobbies include running, football, and tearing his hair out over Fantasy Premier League outcomes!