What was it the late Stephen Hawking once said? … “AI could be the worst event in the history of our civilization”? That’s hard to imagine now that artificial intelligence (AI) has become a critical part of the average person’s everyday life, from the GPS systems in our cars, to email filtering, to mobile check deposits. Indeed, the field of AI has accelerated with new developments in robotics, chatbots, image and speech recognition, search algorithms, autonomous vehicles, and beyond. Far from new, AI has long been promoted and hailed as the future. Today, its far-reaching benefits are more apparent than ever, fueling new scientific discoveries and applications for combating climate change and the COVID-19 pandemic. So how could AI ever have been called possibly ‘the worst event in the history of our civilization’?
Here is what Dr. Hawking actually said: “Unless we learn how to prepare for, and avoid the potential risks, AI could be the worst event in the history of our civilization.” The appeal of AI is obvious. Yet, as with any evolving technology, its use carries risks and drawbacks which, if left unaddressed, could hinder the resolution of the very problems the technology aims to solve, or create new problems altogether. For instance, increased automation can have the unintended consequence of job loss. A lack of quality control in development, combined with the inherent biases of human programmers, can translate into gender and racial bias and inequality. The use of sensitive personal data, particularly for facial and location recognition, raises privacy and information security concerns. And autonomous AI systems introduce cybersecurity risks, including the possibility of disastrous automated decision making.
What is the regulatory stance on AI? Read on in the next installment of this blog series, “AI Regulatory Journey”.