Sweden’s Suspicion Machine

    Here we go again: another predictive-algorithm-driven bias machine, this one used to deny benefits:

    Lighthouse Reports and Svenska Dagbladet obtained an unpublished dataset containing thousands of applicants to Sweden’s temporary child support scheme, which supports parents taking care of sick children. Each of them had been flagged as suspicious by a predictive algorithm deployed by the Social Insurance Agency. Analysis of the dataset revealed that the agency’s fraud prediction algorithm discriminated against women, migrants, low-income earners and people without a university education. Months of reporting — including conversations with confidential sources — demonstrate how the agency has deployed these systems without scrutiny despite objections from regulatory authorities and even its own data protection officer.

    Tags: sweden predictive algorithms surveillance welfare benefits bias data-protection fraud