The Bank of England and the Financial Conduct Authority launched the Artificial Intelligence Public-Private Forum (AIPPF) on 12 October 2020. Its purpose was to further dialogue on AI innovation between the public and private sectors.
The AIPPF ran for one year, with four quarterly meetings and a number of workshops. It brought together a diverse group of experts from across financial services, the tech sector and academia, along with public sector observers from other UK regulators and government. Members acted in a personal capacity, and all outputs of the AIPPF reflect their views as individual experts rather than those of their institutions. The outputs do not reflect the views of the Bank or the FCA.
Previous work has shown that risks can arise at three levels within AI systems: data, models and governance. Issues at the data stage can become baked into model inputs; even with the best data, issues can arise at the modelling stage; and complex AI systems create their own governance challenges. The AIPPF meetings and the final report are structured around these three key topics.
This report explores the barriers to adoption, challenges and risks in each of these three areas. To further the debate, it also explores potential ways to address those barriers and challenges and to mitigate potential risks; these are highlighted in the key findings and examples of best practice. Ultimately, the report aims to advance collective understanding and promote further discussion amongst academics, practitioners and regulators to support the safe adoption of AI in financial services.
Source: Bank of England