The time has come to scale up your investments in Artificial Intelligence. Policymakers themselves anticipate allocating significant resources to foster AI across many sectors of the economy. Not only are they supportive, they are also inclined to facilitate the adoption of AI technology by proposing the removal of unnecessary barriers to its healthy development. This is a unique opportunity for many of us, as great focus is being placed on innovation and growth from the top down.
The summary message we gather from various recent publications by the main authorities on AI is overwhelmingly positive. However, it is often explicitly stated that AI should not be used unconditionally. Responsible adoption of AI is advocated at every turn, so that real benefits may propagate to society as a whole.
The growing interest in AI applications extends well beyond policymakers to the public at large. AI is not just a key item on the manifestos of governing bodies as a means to drive economic growth. Over the last few years, we have started to appreciate the power of AI through a number of important global applications, e.g. in medical treatment and in relation to climate change. It is no surprise, then, that something of this scale, with enormous potential impact on everyone's life, must be handled with appropriate care. Promoting the diffusion of ethical guidelines for trustworthy AI is, in fact, the means by which leaders are seeking to support its establishment.
The AI growth opportunity
In a time of pandemic and recession, it is important for leaders to look for ways to get back on track and pursue longer-term growth. A recent analysis by PwC, based on a large-scale dynamic economic model, estimates that up to 14% of global GDP in 2030 could be attributed to the benefits brought by AI to the economy as a whole. While China is likely to be the front runner (+26% GDP growth), mainly due to its higher long-term investment in AI, North America and Europe are estimated to see significant GDP growth of above 14% and 10%, respectively. A similar analysis focussed on the UK can be found in this other report; the figure for the UK is estimated at around 10%.
According to the same report, Retail, Financial Services and Healthcare are the sectors with the highest benefit potential. Productivity gains and increased consumer demand are expected to be driven by the automation of processes, the empowerment/augmentation of existing (qualified) labour, and the customization of quality-improved products and services.
Initial guidance on soon-to-arrive AI regulation
As far as specific AI regulation is concerned, although things have only just started to move, they seem to have picked up pace. At this point, the policymakers of some of the major world economies have released material that covers the matter. The focus of this article is on the US, EU and UK, where significant attention has been given to data protection and privacy over the last few years. Specifically, the White House and the European Commission have recently published a memorandum and a white paper, respectively, on their websites. The UK Government has not yet expressed a view directly but has made available written advice received from independent bodies that are authorities in the field. Here we mainly focus on the AI roadmap authored by the UK AI Council. For more specific guidance related to data protection, we refer the interested reader to a dedicated report available on the website of the ICO (the UK's independent body that upholds information rights).
The memorandum is guidance to US regulators, who are expected to come up with an initial proposal for an AI regulatory framework by May 2021. Yields.io will keep an eye on this in case of any changes brought by the new administration. As for the EC's white paper, this was followed by a consultation period through to June last year. Initial EU AI regulation that factors in the feedback received from the public (1125 respondents, mainly across business and industry, citizens, academia, and civil society, including from outside the EU) should come within the first quarter of this year, as per the EC's commitment. The roadmap produced by the UK AI Council was the latest of the documents to be published and the first of 2021 (January). A follow-up is not mentioned explicitly, but it can be read between the lines that a UK AI regulation will be in place sooner rather than later.
Common themes that AI regulations are likely to follow
The headline in all three documents is that, as mentioned at the beginning of this post, AI is worth investing in given its universal potential, including in research, innovation, and the development of new skills to diffuse AI literacy among the wider population. There is agreement across the board that AI-specific regulation, particularly needed in sectors such as energy, healthcare and transport, should be flexible, leverage existing regulations as much as possible, and, most importantly, should not introduce unnecessary complications.
As we wrote earlier, equal focus is on trustworthy AI. The topic is covered extensively: most of the principles and guidance in these papers are explicitly or implicitly linked to this overarching concept. The idea is that we can all benefit from the use of AI only if confidence in this powerful tool can be ensured, and that can only be achieved if a sound risk management framework, in a broad sense, is in place. Above all, the protection of human rights should be the north star, and it is clear from the reading that this must not be ignored as the use of AI applications increases.
In the same documents, all policymakers appear to agree on the need for the public in general to participate in discussions that help shape a more suitable and practical AI regulatory framework. They encourage wider cooperation among the interested parties so as to align objectives and support the confident adoption of AI across sectors. It is further acknowledged across the board that this cooperation should extend to the international level, at least among regulators themselves, with a view to convergent AI regulation that, on the one hand, avoids unfavourable competition and, on the other, incorporates shared best practices.
Some (quasi-)idiosyncratic guiding principles
Each of the three policymakers covered here, or their advisors, puts more emphasis on aspects of future regulation that reflect local needs and priorities and leverage existing strengths. It is therefore reasonable that some jurisdiction-specific aspects of AI are already covered in these early stages.
Although possibly all regulators, and certainly the US and the EU from what we read, are expected to follow a tiering approach to identifying and prioritizing the risks that an AI regulatory framework must address, the EU regulation is clearly going to spell out a greater level of detail. In fact, the white paper mentioned above is already quite prescriptive in defining high-risk AI applications, essentially based on both sector and usage considerations (e.g. self-driving vehicles in public transport), as well as the legal requirements imposed on them to ensure ethical and trustworthy AI. From these requirements also emerges an emphasis on the role of the human, who is expected to remain in control of the outcomes produced by AI systems. Further guidance on the risk-based approach is expected soon, to address the feedback from the consultations, where more clarity was asked for.
Another area to which the EC dedicates quite some space is the implications that AI will have for existing laws, when it comes to enforceability for instance. Laws are normally technology-agnostic by design (not just in the EU) so that they are general enough to fit into a framework; applying suitable extensions and improvements while preserving compliance with current legislation is what any regulator will most likely do. Further, the EU faces the unique challenge of coordinating the effort across the member states, as some nations (e.g. Germany, Denmark, Malta) are already at advanced stages with local AI regulations. This can either mean that the regulation will be detailed enough to cover the peculiarities of different national laws, or that we will see a more general framework subject to a degree of local interpretation. At Yields.io, we think the latter is more plausible and practical.
As for the UK, the AI roadmap often refers to the practice of good governance, and much of its commentary concerns the data used for AI. It calls out the necessity of establishing a robust governance framework, focused on data quality and privacy, for businesses to rely on. As for personal data protection, the process of auditing AI is already a work in progress (more material produced by the ICO on auditing AI can be found here). On the same topic, but without specific reference to data, the EC anticipates that independent testing bodies/centres could be established to certify the compliance of AI systems with regulatory requirements. Yields.io strongly believes this is a good means to further enable transparency.
The one principle that stands out in the White House's memorandum is the contemplation of non-regulatory actions as opposed to more prescriptive a-priori rules. With this, US regulators are explicitly asked to consider carefully whether existing laws already sufficiently mitigate the impact of AI applications, and to assess the cost-benefit trade-off of introducing new regulatory requirements. The job is on them to perform a case-by-case assessment and to consider inputs from the interested parties along the way. Yields.io sees this as a thoughtful approach that is likely to be shared by other (non-US) regulators as the world's leaders look to boost economic growth in the years to come.
| Country | Idiosyncratic principles | Common themes |
| --- | --- | --- |
| US | Non-regulatory actions (Section “Non-Regulatory Approaches to AI”) | Need for AI-specific regulation; ensure trustworthy AI; internationally aligned baseline regulation to enable fair competition and encourage sharing of best practice |
| EU | Coordination across Member States (Section 5); enforceability of existing laws (Section 5, paragraph B); “high-risk” AI applications: definition and requirements (Section 5, paragraphs C and D) | Same as above |
| UK | Good governance (Section 3, paragraph “Public trust and good governance”) | Same as above |
How can Chiron, the Yields.io platform, be used to meet the industry's needs?
Model risk management is to a large extent driven by cost and capital reduction. Due to the introduction of new model types (such as AI), new regulatory frameworks, and the war for talent, these goals become more challenging every year. As a consequence, many financial institutions are looking for ways to industrialize model risk management.
To answer this demand, Yields.io has created a data-centric model risk management platform called Chiron. Chiron allows financial institutions to:
- increase the efficiency and consistency of model validation through the use of templated validation scripts;
- keep track of the linkage between data, analytics and reports, leading to maximal reproducibility;
- leverage the modern big data technology stack to scale computations to arbitrarily large datasets;
- execute business processes more efficiently through the introduction of workflow engines.
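To make the first point concrete, here is a minimal sketch of what a templated validation script might look like: one reusable function that runs an agreed battery of checks against agreed thresholds, so every model is validated the same way. The function name, metrics and thresholds are purely illustrative assumptions and do not reflect Chiron's actual API:

```python
import numpy as np

def validate_regression_model(y_true, y_pred, max_rmse, max_bias):
    """Run a standard battery of checks on model predictions.

    Returns the metrics plus a pass/fail verdict, so the same template
    can be reused across models and results compared consistently.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    residuals = y_pred - y_true
    rmse = float(np.sqrt(np.mean(residuals ** 2)))  # overall accuracy
    bias = float(np.mean(residuals))                # systematic over/under-prediction
    return {
        "rmse": rmse,
        "bias": bias,
        "passed": rmse <= max_rmse and abs(bias) <= max_bias,
    }

# Example: validate a toy model against thresholds agreed with validators.
report = validate_regression_model(
    y_true=[1.0, 2.0, 3.0], y_pred=[1.1, 1.9, 3.2],
    max_rmse=0.5, max_bias=0.2,
)
```

Because the template returns structured output rather than free-form commentary, the same thresholds and metrics can be applied across a whole model inventory and the results aggregated automatically.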
Thanks to Chiron, you are able to reduce the cost of model validation by a factor of 10. In addition, Chiron's monitoring functionality allows for the early detection of model failure, leading to better models and lower capital requirements.
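As an illustration of what such monitoring could involve, the sketch below computes a population stability index (PSI), a drift metric commonly used in model risk management, between a reference dataset and live model inputs. This is a generic example under our own assumptions, not Chiron functionality; the rule-of-thumb thresholds in the comments are conventions, not regulatory requirements:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference sample (e.g. training data) and live data.

    Rule of thumb: PSI < 0.1 suggests little shift, 0.1-0.25 a shift
    worth investigating, > 0.25 a significant distribution shift.
    """
    expected = np.asarray(expected, dtype=float)
    actual = np.asarray(actual, dtype=float)
    # Bin edges taken as quantiles of the reference distribution.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover out-of-range live values
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor each bucket to avoid log(0) on empty bins.
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5000)
stable = rng.normal(0.0, 1.0, 5000)     # same distribution as reference
drifted = rng.normal(0.5, 1.0, 5000)    # mean shift: inputs have drifted
psi_same = population_stability_index(reference, stable)
psi_drift = population_stability_index(reference, drifted)
```

Running a metric like this on a schedule against each model's live inputs is one simple way that degradation can be flagged before it shows up in realized model errors.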
Jos Gheerardyn, February 11, 2021