In our last blog, we discussed how businesses and their first line control partners should proactively manage risk as part of a successful transition to cloud and new technology implementation. The emerging use of AI technologies in customer-facing business processes increases the risk of unintended biases, with associated increased regulatory scrutiny. Per the Federal Trade Commission (FTC), “The Commission is considering initiating a rulemaking under section 18 of the FTC Act to curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination.”1 In response, Accenture recently published a blog discussing the best practices proposed by the FTC: “(a) utilizing complete and accurate data sets; (b) testing algorithms pre/post-use; (c) embracing transparency and independent standards; and (d) validating AI statements and promises. Future AI regulatory and enforcement actions by the FTC may generate additional standards and requirements for companies’ algorithmic decision-making practices.”
With this in mind, Risk and Compliance second line organizations need to evolve risk and control metrics, control testing, model governance and oversight, in coordination with the first line, to fully address the risks associated with the use of emerging technologies.
Review and Challenge of Technology Opportunities
During strategic planning, Risk and Compliance second line organizations should work with first line business partners to identify and estimate risk impacts as part of opportunity diagnosis. Risk partners should confirm that the benefits of strategic decisions are weighed not only against costs but also against potential risk impacts when implementing new technologies. If risk managers (both first line and second line) within the enterprise do not have the expertise to fully evaluate these risks, the risk management organization needs to be upskilled, or experts brought in, to provide effective oversight before technological transformation occurs.
Similarly, the Compliance organization should be consulted during strategic planning for regulatory compliance implications. For instance, applicable regulations regarding data and privacy vary by location, and corporate liability risk needs to be considered when assessing an opportunity. Regulations around AI continue to evolve as more and more companies begin using these tools to streamline their business. In a new bill proposal, US Senator Ron Wyden aims to set new laws in this space: “As algorithms and other automated decision systems take on increasingly prominent roles in our lives, we have a responsibility to ensure that they are adequately assessed for biases that may disadvantage minority or marginalized communities. This bill requires companies to conduct impact assessments for bias, effectiveness and other factors, when using automated decision systems to make critical decisions.” Accenture has a proven interdisciplinary approach to implementing AI technologies responsibly.
Development and Implementation Oversight
Once product development and customization are underway, Risk & Compliance second line organizations need to validate that the product testing plan is comprehensive and appropriate to the organization’s particular product configuration. For example, when testing machine learning models, not only should APIs and algorithms be tested, but organizations should also confirm that end-to-end integration testing occurs and that model dependencies on other ecosystem components are tested in a sandbox environment. Risk should also confirm that controls are built to work in parallel with the design of a technology configuration and that all controls are adequately tested prior to product release.
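To illustrate the distinction between testing an algorithm in isolation and testing it end to end, the following is a minimal, hypothetical sketch of a sandbox integration test. All names (`SandboxModel`, `score_applicant`, the scoring formula, and the 0.5 decision threshold) are illustrative assumptions, not an actual implementation.

```python
# Hypothetical sketch: an end-to-end integration test for an ML-driven
# decision pipeline, run in a sandbox environment. The model AND the
# downstream decision rule are exercised together, not just the algorithm.

class SandboxModel:
    """Stand-in for the deployed model inside a sandbox environment."""
    def predict(self, features):
        # Simplified, illustrative linear score clamped to [0, 1].
        raw = 0.1 * features["income"] - 0.05 * features["debt"]
        return min(1.0, max(0.0, raw))

def score_applicant(model, features):
    """Apply the model plus the business decision rule end to end."""
    probability = model.predict(features)
    return "approve" if probability >= 0.5 else "review"

def test_end_to_end_decision():
    model = SandboxModel()
    # Verifies the full pipeline behavior, including the decision rule
    # that sits downstream of the model output.
    assert score_applicant(model, {"income": 8, "debt": 2}) == "approve"
    assert score_applicant(model, {"income": 1, "debt": 5}) == "review"

test_end_to_end_decision()
```

In practice, tests like this would run against the organization’s actual sandbox deployment and would also cover the model’s dependencies on other ecosystem components (data feeds, APIs, downstream consumers).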
Ongoing Monitoring of New Technology
After initial implementation, the Risk organization should work with business partners and Technology to ensure that new deployments have set performance metrics and are performing as expected based on agreements with Technology and/or the vendor. Sample machine learning performance metrics may include model accuracy, recall, queries answered per second, API response times, as well as performance over time. Business leaders and their Risk counterparts should monitor these agreed-upon performance metrics, usually via dashboards, to confirm that unexpected behavior is not driving higher than expected residual risk or introducing new risks. Similarly, if incremental upgrades or changes to an implementation are made, Risk and Compliance needs to be consulted to help the business assess whether there is a Risk or Compliance impact or a change to the business’ risk profile. Should changes be made to AI models, special care should be taken to ensure no bias is introduced. Additionally, model risks and controls should be strengthened so that algorithmic decision-making does not result in unlawful discrimination, along with periodic review of customer data privacy disclosures and protection controls.
Finally, to confirm that new technology related risks are incorporated into existing risk frameworks, the Risk organization should partner with the first line to continually update risk and control taxonomies and inventories as new risks are identified. As taxonomies are updated, process risk and control inventories, risk assessments, controls testing and monitoring plans, and risk reporting should also be updated to incorporate new risks. By partnering with the first line throughout the lifecycle of a product or application, second line risk managers can help businesses assess, monitor, and control for emerging technology risk and ensure that the impact of these risks is minimized.