Artificial Intelligence in Government: Implementing Algorithmic Decision-Making Systems in Welfare Services – ABSTRACT

This review (in Hebrew) examines the emerging trend of integrating AI-based technologies into governmental services, and into social welfare services in particular. Our focus is algorithmic decision-making (ADM) systems, which are designed either to support humans in the decision-making process or to replace them entirely.

ADM systems are typically applied in two major areas of welfare services: 1) welfare benefits (determining entitlement to benefits, and identifying welfare fraud); and 2) child protection services and the protection of at-risk populations (flagging child abuse cases; identifying families for the purpose of early intervention; and preventing the exploitation of underprivileged populations and their deterioration into crime). This growing trend has been dubbed the ‘digital welfare state’.

This work surveys the particular challenges (technological, legal, ethical, social, and others) and existing barriers to the integration of smart technologies into a rather conservative and technology-sceptic public service. It further provides an in-depth analysis of the potential benefits and difficulties, and of the various ethical and legal implications of implementing ADM systems in the social welfare field.

Potential benefits include: increased efficiency in the provision of (public) welfare services; the employment of ADM systems as a reflection of governmental responsibility; objectivity and neutrality in decision-making, and the consequent promotion of public trust in digital services and in government; the personalisation of welfare services; the prevention and mitigation of future harms by flagging risk and allowing for the early detection of individuals at risk; the instrumental value of ADM systems for the promotion of social justice; the alleviation of social workers’ workload and their empowerment; and more.

Foreseen ethical and legal difficulties associated with the implementation of ADM systems in welfare services include: a) the potential compromising of a host of personal and citizen rights, including, inter alia, the right to fairness, the right to justice and (technological) due process, the right to equality and non-discrimination, the right to autonomy, and the right to privacy; and b) ethical difficulties inherent to the characteristics of ADM systems, such as non-explainability and the ‘black box’ problem; the lack of responsibility and accountability for ADM systems’ welfare-related decisions; decision bias and algorithmic discrimination; and more. These ethical and legal difficulties are analysed against the background of the frameworks of principles for human(ity)-safe AI applications presently being formulated by international entities, local governments, and corporations (i.e., self-regulation).

The review concludes with a set of recommendations tied to the developmental stages of ADM systems, acknowledging the inevitability of the integration of AI-based technologies into governmental services, alongside the necessity of precautionary steps to ensure their human-friendly application.