While many point to regulation as the way to manage and control AI risks, we as individuals and firms still decide where AI should play: where we should leverage it, and where we should not. Playing at either extreme of the spectrum is not the answer. Firms that hold back on AI innovation put themselves at a disadvantage, while those that pursue it without appropriate controls and governance broaden the risks they take on and open the door to more dangerous scenarios.

It all starts with keeping a human-centric perspective: assessing which values we want our AI systems to preserve. That analysis should complement the technology with cultural, socio-economic, and scientific considerations, and it should begin early in the AI lifecycle, spanning design, development, and deployment. Progress can still be achieved while reducing risk and building sustainability. Accenture has helped organizations define AI, structure enterprise-wide AI policy, and identify six key risk management themes that firms should prioritize and address to enable effective, integrated AI risk management:

  • Process Governance:
    • Integrated view of risks and effective processes for coordination and accountability across multiple stakeholders
    • Drive AI enablement using a consistent & risk-based approach while ensuring business continuity
  • Model Risk:
    • Verify adequate coverage of model development and validation framework throughout the AI/ML model lifecycle
    • Track and maintain ongoing AI/ML model performance with enhanced model risk tiering and assessment
  • Technology & Data Interactions:
    • Remediate gaps in technology capabilities – addressing aspects such as explainable AI and data drift (a simple drift check is sketched after this list)
    • Reconcile internal and external touchpoints such as cybersecurity, vendor solutions, data pipelines, and data sourcing
  • Regulatory Compliance:
    • Align risk assessment with legal, compliance, and regulatory frameworks such as Fair Lending rules and data privacy laws (e.g., FCRA, FERPA, GLBA, HIPAA, GDPR)
  • Qualitative Responsible AI:
    • Confirm transparency of AI systems in view of ethics and fairness considerations
    • Manage consumer perception and reputational risk effectively
  • People & Culture:
    • Establish an AI value-based culture that goes hand in hand with holistic risk management
    • Build the skillsets needed to understand, assess, and manage AI-related risks
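
To make the monitoring side of these themes more concrete, the sketch below shows one common way to flag data drift: the Population Stability Index (PSI), which compares the distribution of a feature or model score at training time against what the model sees in production. This is a minimal Python illustration, not part of any specific framework described above; the function name, the synthetic data, and the thresholds in the comments are illustrative assumptions, with the thresholds being common rules of thumb rather than regulatory values.

```python
import numpy as np


def population_stability_index(baseline, current, bins=10):
    """Population Stability Index (PSI) between a baseline sample and current data.

    Common rules of thumb (not a universal standard):
      PSI < 0.1   -> little or no drift
      0.1 - 0.25  -> moderate drift, worth investigating
      > 0.25      -> significant drift, likely a review/retraining trigger
    """
    # Bin edges come from the baseline so both samples are compared on the same scale.
    edges = np.percentile(baseline, np.linspace(0, 100, bins + 1))

    baseline_counts, _ = np.histogram(baseline, bins=edges)
    # Clip current values into the baseline range so outliers still land in the edge bins.
    current_counts, _ = np.histogram(np.clip(current, edges[0], edges[-1]), bins=edges)

    # Convert counts to proportions; clip to avoid log(0) for empty bins.
    baseline_pct = np.clip(baseline_counts / len(baseline), 1e-6, None)
    current_pct = np.clip(current_counts / len(current), 1e-6, None)

    return float(np.sum((current_pct - baseline_pct) * np.log(current_pct / baseline_pct)))


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    baseline = rng.normal(0.0, 1.0, 10_000)   # feature values seen at training time
    current = rng.normal(0.4, 1.1, 10_000)    # shifted values seen in production

    print(f"PSI = {population_stability_index(baseline, current):.3f}")
```

In practice, a check like this could run on a schedule for each monitored feature and model score, with breaches feeding the model risk tiering, assessment, and escalation processes outlined above.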

New AI use cases will continue to emerge for years to come. As we drive future innovation, we should ensure that potential impacts are addressed and that the right foundational elements are in place to empower and sustain that innovation. New ethical, regulatory, and business risks and costs should be assessed and accompanied by human-centric controls to arrive at a true competitive advantage and to minimize AI’s destructive potential. In this way, we can truly leverage AI as a critical and positive tool in our toolbox for change.

Aisha Kafati
Senior Manager – Strategy & Consulting

Rob Nazara
Manager, Digital Risk & Compliance

Liss Mendez
Management Consultant - Risk & Compliance

