Every firm faces the AI conundrum: how much and how fast to innovate while still managing the risks. Firms should neither shy away from AI nor jump into the space blindly.
Accenture’s recent “Global Risk Management Study Financial Services” highlights that while FS organizations know that integrating AI at scale is essential to remain competitive, adoption carries unintended consequences. AI enablement, uncertainty over how to navigate the regulatory landscape, and challenges in enterprise-wide integration of AI are some of the key concerns highlighted by survey respondents. Our 2021 Global Risk Management Study, 2019 Global Risk Management Study, and our AI Built to Scale report identified a series of drivers behind AI adoption: 83% of C-suite executives said their growth objectives depend on AI, yet 71% admitted they struggle to scale AI pilots, and 72% believed their businesses are at risk due to AI scaling concerns. Other firms are experiencing barriers to AI: 33% are holding back from AI adoption due to unclear regulation, 58% believe that disruptive technology poses too large a risk to their business, and 89% describe themselves as incapable of assessing the full extent of AI-related risks.
With great power comes great responsibility. And that is precisely the crux of ethical and compliant AI: the framework, methodologies, governance, policies and procedures around the design, development, testing, deployment and use of AI. AI requires a forward-looking approach, one that considers building AI for scale from the start. AI impacts should be assessed enterprise-wide, with policies and processes in place to manage the associated risks.
In recent times, we have witnessed the impacts of inadequate AI risk management. Apple Card and Goldman Sachs faced allegations of gender discrimination in their credit card algorithms. Bias in AI-based ‘black box’ algorithms can have significant reputational impacts, and there are increasing cases of inherent bias and discrimination based on factors such as gender, age and ethnicity. Clearview AI drew scrutiny over its facial recognition software, and Cambridge Analytica over its harvesting of personal data. Moreover, with its power to infer sensitive details, AI significantly increases the risk of privacy violations, and its consumption of personal data is causing concern. Industry experts, policymakers and eminent institutions have ramped up their research on the potential misuses of AI. The University of Oxford has investigated cyber security, fraud, adversarial attacks and data poisoning risks. Numerous high-profile organizations, including the FCA, the OECD, the EU, national governments and tech behemoths such as Google and Microsoft, have released AI ethical principles and guidelines to encourage responsible AI development.
Given all of this guidance, where should your firm go next? Read on to the final installment of this blog series: “The human-centric empowerment of AI”.