If you followed the proceedings at this year’s World Economic Forum in Davos, I am certain you couldn’t help but notice the buzz around one technology in particular: artificial intelligence (AI). From the formation of global councils to the potential impact on the workforce and the Fourth Industrial Revolution, the interest in what these technologies could do for mankind seemed unprecedented. But with so much stir and excitement around AI, it might be a good idea to pause and ask ourselves: are we developing and using AI responsibly? What even is responsible artificial intelligence?

In this series, I will try not only to answer these questions, but also to give you some specifics on creating and using AI responsibly. I will also take a look at some potential impacts of responsible AI on the workforce, both current and future.

AI affects more than business

Over the past few years, AI has become more than just an exciting new technology. In fact, AI has grown to the point where it often has as much influence as the people putting it to use.

It follows, then, that businesses can no longer simply train AI to perform a given task. Because AI-based decisions have an increasing impact on human lives, a new imperative becomes clear: businesses have a responsibility to “raise” AI to act as a responsible representative and a contributing member of society, one that reflects business and societal norms of responsibility, fairness and transparency.

Some enterprises still treat AI as a technology tool, not expecting it to “act” responsibly, to explain its decisions, or to work well with others. But with AI systems making decisions that affect people, companies should teach AI to do all of those things—and more.

Any business looking to capitalize on AI’s potential should also acknowledge its impact.

Across financial services, AI brings significant opportunities by changing how people work and how customers are served.

The transformative impact of AI is likely to:

  • Significantly increase productivity and efficiency.
  • Provide new methods of customer service and advice.
  • Improve the effectiveness of actions to combat fraud and financial crime.
  • Allow people to do what humans do best—imagine, innovate and create new products and services.

What makes AI “responsible”?

AI is a collection of advanced technologies that allows machines to sense, comprehend, act, and learn. AI technologies that are “raised” responsibly can not only scale operations but adapt to new needs via feedback loops from other deployed models—similar to how continuing education allows employees to adapt to new tasks.
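The feedback loop described above can be sketched in a few lines of code. The example below is purely illustrative (the `FeedbackModel` class and its names are my own invention, not any real product or library): a deployed model refines its single parameter as observed outcomes arrive, much as continuing education lets an employee adapt to new tasks.

```python
# Toy sketch of a model that learns from deployment feedback.
class FeedbackModel:
    """Predicts y ~ weight * x and refines `weight` from observed outcomes."""

    def __init__(self, weight: float = 0.0, rate: float = 0.05):
        self.weight = weight
        self.rate = rate  # how strongly each feedback signal adjusts the model

    def predict(self, x: float) -> float:
        return self.weight * x

    def feedback(self, x: float, observed: float) -> None:
        # One online least-squares gradient step toward the observed outcome.
        error = observed - self.predict(x)
        self.weight += self.rate * error * x

# Suppose the true relationship is y = 2x; repeated feedback from the
# field moves the model toward it without a full retraining cycle.
model = FeedbackModel()
for _ in range(200):
    model.feedback(1.0, 2.0)
```

After 200 feedback steps, `model.weight` has converged close to 2.0, the value the observed outcomes imply.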

By treating AI in a way that recognizes the impact it now has in society, companies have the opportunity to create a collaborative and powerful new member of the workforce.

But what exactly makes AI “responsible”? When it comes to human decision makers, a “responsible” one might be described as reliable, well-trained, well-grounded, someone who delivers quality work and strives to make fair and balanced decisions.

Responsible AI requires many of the same qualities, including:

  • Transparency—AI needs to document the thinking process used to make a determination.
  • Training—AI needs to be trained on data that represents a comprehensive, unbiased view and doesn’t unintentionally disadvantage any group of people.
  • Tuning—AI needs to know when to ask intelligent and experienced humans for support in making a decision, while feeding what it learns back into future decisions.
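The three qualities above can be made concrete in code. The sketch below is purely illustrative (all names, such as `ResponsibleModel` and `defer_band`, are hypothetical, not any real library): every decision is logged with a plain-language rationale (transparency), training data can be checked for group imbalance before deployment (training), and low-confidence cases are deferred to a human reviewer (tuning).

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class Decision:
    outcome: Optional[str]  # None means the case was deferred to a human
    confidence: float
    rationale: str          # plain-language explanation (transparency)


@dataclass
class ResponsibleModel:
    score: Callable[[dict], float]  # underlying model: features -> [0, 1]
    threshold: float = 0.5          # approval cut-off
    defer_band: float = 0.1         # scores this close to the cut-off go to a human
    audit_log: List[Decision] = field(default_factory=list)

    def check_training_balance(self, rows: List[dict], group_key: str) -> dict:
        """Training: count records per group so an unrepresentative
        sample is visible before the model is deployed."""
        counts: dict = {}
        for row in rows:
            counts[row[group_key]] = counts.get(row[group_key], 0) + 1
        return counts

    def decide(self, features: dict) -> Decision:
        s = self.score(features)
        if abs(s - self.threshold) < self.defer_band:
            # Tuning: low-confidence cases are escalated to a person.
            d = Decision(None, s, f"score {s:.2f} is near the cut-off; deferred to human review")
        else:
            outcome = "approve" if s >= self.threshold else "decline"
            d = Decision(outcome, s, f"score {s:.2f} vs cut-off {self.threshold}")
        self.audit_log.append(d)  # Transparency: every decision is recorded
        return d


# Hypothetical toy scorer: higher income yields a higher score.
model = ResponsibleModel(score=lambda f: min(f["income"] / 100_000, 1.0))
clear = model.decide({"income": 90_000})   # well above the cut-off: approved
unsure = model.decide({"income": 52_000})  # near the cut-off: deferred to a human
```

In this sketch, the audit log and rationale strings stand in for the fuller decision documentation a production system would need; the point is only that each of the three qualities maps to an explicit mechanism rather than an afterthought.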

The burden of this responsibility sits with industry leaders, who have a duty to develop and deploy AI systems responsibly.

To protect humans and society, several factors should be taken into consideration when developing and applying responsible AI, including the following:

  • New skills will be required, while some existing skills will be made redundant.
  • AI is likely to create massive changes in workforces (which I will address later in the series in greater detail) and disruption in economies.
  • Ethics, bias and explainability must be part and parcel of decision making.
  • Economic policy, education, and legal and regulatory frameworks are needed to protect humans and society.

In my next post, I will give you some specifics around responsible AI.

In the meantime, have a look at our Accenture Technology Vision 2018 report. I also highly recommend my colleague Dominic Delmolino’s post on What Makes AI “Responsible?”.
