UK benefits AI system found to show bias
File this under “the least surprising news ever”:
An artificial intelligence system used by the UK government to detect welfare fraud is showing bias according to people’s age, disability, marital status and nationality, the Guardian can reveal. An internal assessment of a machine-learning programme used to vet thousands of claims for universal credit payments across England found it incorrectly selected people from some groups more than others when recommending whom to investigate for possible fraud.
The most interesting aspect of the published report is that, currently, "there is no established numerical or statistical benchmark at which referral or outcome disparity can be defined as within tolerance".
I would have assumed that a lack of bias, measured against the "false positive" rate (i.e. the proportion of benefits recipients selected for additional checks who were then found to be legitimate and not committing fraud), would have been a design goal and a critical KPI for such a system.
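As a rough sketch of what such a KPI could look like (the data layout, group labels, and the 0.8 "four-fifths" threshold are all my own illustrative assumptions, not anything from the DWP's assessment), you would compute the false-positive rate per group and check how far the groups diverge:

```python
from collections import defaultdict

def false_positive_rates(cases):
    """cases: iterable of dicts with keys 'group', 'flagged', 'fraudulent'."""
    flagged_legit = defaultdict(int)  # selected for checks but legitimate
    legit_total = defaultdict(int)    # all legitimate claimants per group
    for case in cases:
        if not case["fraudulent"]:
            legit_total[case["group"]] += 1
            if case["flagged"]:
                flagged_legit[case["group"]] += 1
    return {g: flagged_legit[g] / n for g, n in legit_total.items() if n}

def disparity_within_tolerance(fpr_by_group, ratio=0.8):
    """True if the lowest group FPR is at least `ratio` of the highest,
    loosely modelled on the four-fifths rule from US employment law."""
    rates = list(fpr_by_group.values())
    if max(rates) == 0:
        return True  # nobody legitimate was flagged at all
    return min(rates) / max(rates) >= ratio

# Toy data: legitimate under-35 claimants get flagged, over-35s do not.
cases = [
    {"group": "under-35", "flagged": True,  "fraudulent": False},
    {"group": "under-35", "flagged": False, "fraudulent": False},
    {"group": "over-35",  "flagged": False, "fraudulent": False},
    {"group": "over-35",  "flagged": False, "fraudulent": False},
]
fprs = false_positive_rates(cases)
print(fprs)                              # {'under-35': 0.5, 'over-35': 0.0}
print(disparity_within_tolerance(fprs))  # False: disparity outside tolerance
```

The hard part, as the report concedes, is picking the tolerance: the arithmetic is trivial, but there is no agreed benchmark for where "within tolerance" ends.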
There are going to be a lot of similar examples in the years to come — here’s hoping this “bias measurement” KPI becomes established as a concept.