08 January 2021 - Post by: Alex Shandro
Just before Christmas, the House of Lords Liaison Committee published a new report on Artificial Intelligence (AI): “AI in the UK: No Room for Complacency”. The report examines the progress made by the UK government in implementing the recommendations of the Select Committee on Artificial Intelligence in their 2018 report entitled “AI in the UK: ready, willing and able?”. The Select Committee had been appointed “to consider the economic, ethical and social implications of advances in artificial intelligence”, and its 2018 report made a large number of recommendations to the UK government on a range of technical, organisational and societal topics, as well as a recommended ethical framework for AI.
The new Liaison Committee report notes that, since the publication of the Select Committee report two and a half years earlier, investment in AI in the UK has grown significantly. It observes that AI has been deployed in the UK in a growing number of fields and use cases, and that the COVID-19 pandemic accelerated this trend still further as states and technology companies alike deployed AI to help tackle the pandemic (e.g. as part of test and trace technologies) and the various lockdown restrictions resulted in greater reliance on digital technologies.
The report suggests that this has placed far greater importance on topics such as transparency and data protection. It states that as the deployment of AI systems (and with it the processing of personal data) accelerates, “the public’s understanding of the technology, and the ability to give informed consent, could be left behind”.
The report concludes that this makes it all the more important that we focus on better understanding AI’s opportunities and risks, particularly in relation to data and privacy, and that our regulatory frameworks are equipped to manage these risks.
The report contains various recommendations for how the UK government can develop a policy framework addressing these (and other) topics. Some of the key recommendations are summarised below.
Public understanding of AI systems
The report recommends that the government rapidly develop policies to improve public understanding of AI systems and, in particular, to safeguard personal data used by AI systems before the ability to introduce those safeguards is outstripped by technological developments.
To this end, the report advocates for a stronger and more active role for the AI Council, the body set up by the UK government to explore how to develop and deploy safe, fair, legal and ethical data sharing frameworks.
The report also recommends pressing ahead with establishing data trusts. Data trusts were first proposed in an independent review carried out in 2017 by Dame Wendy Hall and Jerome Pesenti, and were subsequently developed further by the Open Data Institute, the Office for AI and Innovate UK. The Hall-Pesenti review envisaged that data trusts would facilitate the sharing of datasets between individuals and organisations within a framework that allows decisions about those datasets to be monitored. At a high level, the review envisaged that data trusts could give individuals whose personal data is held within them insight into, and some say over, how the trust uses that data.
The Open Data Institute defines a data trust as “an approach to looking after and making decisions about data in a similar way that trusts have been used to look after and make decisions about other forms of asset in the past, such as land trusts that steward land on behalf of local communities.”
However, while recommending that data trusts be established, the report does not address how this would be achieved in practice or what form such trusts would take. The original Hall-Pesenti review envisaged that data trusts “…would not be a legal entity or institution, but rather a set of relationships underpinned by a repeatable framework…to share data in a fair, safe and equitable way”.
Ethical standards
A key focus of the report is on operationalising data ethics and instilling ethical principles in the development and deployment of AI systems. The report acknowledges that a large number of organisations have published their own ethical codes of conduct, but suggests that self-regulation risks “a lack of uniformity and enforceability”. It also cites concerns that public trust in new technologies is, to a large extent, contingent on regulatory oversight of those technologies.
Among other recommendations in this area, the report recommends that the Centre for Data Ethics and Innovation (CDEI) establish national standards for the ethical development and deployment of AI. These standards should be readily understood by the general public, as well as generally applicable to developers of AI systems (acknowledging that some targeted sector-specific exceptions may be required). The report suggests that the standards should consist of two frameworks: one for the ethical development of AI, including issues of prejudice and bias, and the other for the ethical use of AI by policymakers and businesses. The two frameworks should reflect the different risks and considerations at each stage of AI use.
Jobs and skills
The report concludes that, while we do not yet have a clear sense of the impact AI will have on jobs or the speed at which that impact will be felt, complacency is a significant risk in the jobs market. It suggests that as the COVID-19 pandemic recedes and the government seeks to address the resultant economic fallout, the nature of work will likely change and different jobs will be required. Accordingly, the report highlights a digital skills shortage in the UK and recommends an expanded national retraining scheme to equip people to work alongside AI and, more generally, to improve the country’s digital skills base. It also recommends that the AI Council identify the industries most at risk and the specific skills gaps in those industries, with the government then formulating training schemes to address those gaps.
Regulation
In its 2018 report, the Select Committee concluded that “Blanket AI-specific regulation, at this stage, would be inappropriate. We believe that existing sector-specific regulators are best placed to consider the impact on their sectors of any subsequent regulation which may be needed. We welcome that the [General Data Protection Regulation] appears to address many of the concerns…regarding the handling of personal data…”.
The new report endorses this position and agrees that the challenges posed by the development and deployment of AI cannot currently be tackled by cross-cutting regulation.
However, the report does raise significant concerns about gaps in regulation, including deficiencies in the existing legal framework governing the use of AI by social media companies and in facial recognition technology. It also notes that a sector-specific approach relies on the relevant regulators understanding the risks of AI and how to mitigate them, and on closing what Simon Taylor of the CDEI describes as the gap in “understanding how to make sense of our existing laws, regulations and ethical standards” in the context of AI.
The report suggests that the CDEI and the Office for AI can play a cross-cutting role, alongside the Information Commissioner’s Office (ICO), in providing training and upskilling for sector-specific regulators. It recommends that the ICO (together with the CDEI, the Office for AI and the Alan Turing Institute) develop a training course for regulators to ensure that their staff have a grounding in the ethical and appropriate use of personal data and AI systems, and in the opportunities and risks these technologies present. This training should be prepared and rolled out by July 2021.
Role of government
The report cites various leading AI practitioners commending the collaboration taking place across government and research institutions, and the success in establishing appropriate bodies to steward the development of AI in the UK. That said, it suggests that more needs to be done. It recommends that a Cabinet Committee be established to set the strategic direction of AI policy in the UK, along with the strategy for the use of data and technology by national and local government. The first task of this committee, it suggests, should be to commission and approve a five-year strategy for AI.
The report also stresses the immediate need to appoint a Government Chief Data Officer. The government agreed the need for this role in 2017 but, despite repeated commitments to the idea, has so far not taken steps to recruit for it.
The report also underlines the importance of maintaining the UK’s leading position in AI, and suggests that attracting and retaining top AI talent should be a key focus of any future immigration policy.
At this stage, the recommendations contained in the report are just that: recommendations. We will need to see the government’s response to understand which, if any, will be adopted (and by when).