The EU has led the charge in increasing regulatory focus on AI. EU legislation has focused on promoting trustworthy AI while remaining committed to innovation and competition. Legislation points to a risk-based approach and "algorithmic impact assessments," with sector- and application-specific risk assessments focused on higher-risk technologies. Specific AI risks such as cybersecurity, workforce protection, unlawful discrimination and bias, and privacy are called out explicitly. The EU Parliament has also provided a framework for the ethical considerations of AI, robotics, and related technologies. These guiding principles touch on human-centric and human-made AI, safety, transparency and accountability, mitigation of bias and discrimination, the right to redress, privacy, and data protection. Similarly, the UK ICO has opined on accountability and governance in AI, including data protection impact assessments (DPIAs) and the distinction between controllers and processors. It also highlights data protection in terms of fair, lawful, and transparent processing through data minimization and information security.

With new White House regulations on AI at the end of 2020, the U.S. is not far behind. Efforts to promote the development of innovative AI use cases through an AI Center of Excellence (COE) are gaining traction in 2021. The White House has turned its focus to public trust and participation, information quality, AI risk management, ROI, technology neutrality, transparent fairness testing and bias mitigation, and interagency coordination. Nationally, the U.S. has also considered the degree to which job automation will affect the workforce as it prepares American workers for the jobs of the future. Malicious uses of AI pose additional threats to digital, physical, political, and economic security and raise questions about how AI is managed and adopted in national defense (e.g., machine learning in social hacking, drone weaponization, and surveillance/misinformation privacy concerns), alongside ethics and governance considerations. The U.S. Consumer Safety Technology Act of 2020 aimed to track the use of AI in identifying injury trends, consumer product hazards, and unsafe consumer products, and in monitoring the retail marketplace. The U.S. has also counterbalanced this focus on risk with a focus on enabling AI empowerment through R&D and innovation.

Hooked yet, and wondering how to tackle the many challenges of AI? Read on in the next installment of this blog series, "Navigating the challenges of AI."

Aisha Kafati

Senior Manager – Strategy & Consulting

Rob Nazara

Manager, Digital Risk & Compliance

Liss Mendez

Management Consultant - Risk & Compliance
