Guest post written by Phil Vokins, Intel's Interim Country Manager for Canada.
Artificial intelligence is rapidly moving out of the laboratory and into our daily lives. It has already made possible many things we now take for granted, including speech recognition, web searches, and in-car navigation systems. Soon, the incredible advances in software and the hardware needed to process all of the data that’s being generated will turn many other things that used to seem like science fiction into reality. The impact on society is—and will be—immense, so it’s important that public policy keeps pace with these developments.
There is no commonly accepted definition for artificial intelligence, but at Intel, we see it as a computerized system that performs tasks which traditionally have been associated with people. Narrow AI, which addresses a specific task or set of tasks, such as helping radiologists evaluate X-ray scans or helping content delivery platforms recommend new movies and shows to watch, is the most common form of AI in use today. General AI, where a computerized system can display human-like intelligence across a multitude of tasks, is still years away. This gives governments a brief breathing space to implement policies that will foster AI innovation and acceptance while at the same time mitigating its unintended societal consequences.
Canada has already taken an important step with the creation earlier this month of the Advisory Council on Artificial Intelligence, made up of representatives from industry and academic and research institutions. The council will advise the federal government on how to build on Canada's existing AI strengths and identify ways AI can help grow the Canadian economy and benefit all Canadians, while ensuring AI advances reflect Canadian values.
Through our extensive involvement in the development of technologies that make AI possible, we have defined five key public policy principles we believe governments should follow when addressing this issue. These principles align closely with those recently approved by the Organization for Economic Co-operation and Development.
First, governments should foster innovation and open development, including through investment in artificial intelligence, public-private collaborations and measures to encourage adoption by society. They should lead by example, demonstrating the applications of AI in their interactions with citizens and investing in infrastructure to support and deliver AI-based services. And they should create the conditions necessary for the controlled testing and experimentation of AI in the real world, such as establishing self-driving test sites in cities so Canada can build on its already leading position in the development of technologies that will make autonomous vehicles possible.
As artificial intelligence and robotics will bring automation to a wide range of jobs while simultaneously creating new tasks and jobs that require an entirely different set of skills from those workers need today, governments need to understand how jobs will be affected and have plans in place to allow technology to help humans in their pursuit of work. This includes programs to mitigate AI's impact on jobs and policies that promote employment and support the upskilling and reskilling of the workforce.
AI cannot work without data, and governments should, while preserving privacy and security, not unduly restrict the availability of data. That means such things as allowing anonymized medical and research data to be available so AI can be used to find new treatments for diseases, and making public datasets available provided no personal or sensitive information is involved. To foster collaboration and improved machine learning, unwarranted restrictions on the secure transfer of data across borders should be eliminated through international agreements and legal tools.
Privacy regulatory frameworks should be comprehensive, technology neutral and promote the free flow of data. Horizontal legislation can encompass data uses and technologies that are still unforeseen.
Intel has long supported and implemented Privacy by Design, which embeds privacy impact assessments from the design to the deployment of technologies, to minimize risks for individuals. Moreover, given AI’s ability to help predict and detect cyber attacks, governments should reduce barriers to the sharing of data for cybersecurity purposes. Funding research in homomorphic encryption and federated learning could help to strengthen data security while enhancing safeguards for citizens.
Finally, governments should require accountability for ethical design and implementation of artificial intelligence. Trust in AI requires organizations to demonstrate that the technology is designed, implemented and operated responsibly. While AI algorithms have the potential to make less-biased decisions than people, this is only possible if deliberate or unintended biases are not part of their design. The Canadian government is leading by example with its Algorithmic Impact Assessment framework that all departments must complete to minimize risks associated with Automated Decision-Making Systems.
Accountability approaches by organizations, and continuous dialogue with regulators especially for higher-risk use contexts, will be essential for society to trust AI. With well-thought-out public policies that are known and understood, and government programs designed to unlock the full potential of AI and encourage public trust, businesses will be able to continue to develop innovative solutions that will have a maximum positive impact on society and help to maintain Canada's position as one of the leading artificial intelligence countries in the world.
What do you think of Phil's thoughts on artificial intelligence? Let us know in the comments below or on Twitter or Facebook. You can also comment on our MeWe page by joining the MeWe social network.
The opinions and thoughts expressed in this guest post are those of its author and do not necessarily reflect those of Techaeris or its staff.