
Let’s not fear the unknown

When it comes to unfamiliar areas of research, such as artificial intelligence, focus on the ultimate rewards.

When I was young, my next-door neighbour’s dad was an inventor and I thought that was simply sublime—mostly because he bought a swimming pool with the profits from a patent. On hot summer days, I used to tug on my braids and imagine gadgets that might meet with similar success and bring a pool to our side of the fence.

Then I read Frankenstein. Fuelled by my vivid imagination, I now pictured the neighbour as a mad scientist in his garage lab, setting clones loose in our cul-de-sac on dark, moonless nights. I managed to make myself so afraid that I actually turned down an invitation to swim in that pool.

Why was I afraid? I was reacting to the unknown—to the possibility that something nefarious could come from scientific pursuit. Today, many of us feel the same foreboding when it comes to certain areas of research. We recognize the value of science and its potential to improve lives while contributing to social and economic development, but we tend to worry about fields with which we are not familiar: artificial intelligence, for example.

As reasonable people, we need to look at how the rewards of artificial intelligence (AI) could outweigh the risks. If we create machines that use AI, we still operate the controls. If we can imagine ethical dilemmas and their solutions, we can program machines to act accordingly. If the machines make mistakes, they will be the very errors we ourselves would have made.

For example, some people fear a driverless car would stop for a duck in the road, saving its life while causing a series of rear-end collisions. This scenario can happen, and actually has happened, to cars with human drivers. We can program a driverless vehicle to check behind before slamming on the brakes.
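The check-behind-before-braking idea can be sketched as a simple rule. This is only an illustration of the logic, not any real vehicle system; the sensor inputs (`obstacle_ahead`, `tailgater_close`) are hypothetical names invented for the example:

```python
def braking_decision(obstacle_ahead: bool, tailgater_close: bool) -> str:
    """Decide how to respond to an obstacle, checking behind first.

    Illustrative sketch only: a real autonomous-driving system weighs
    many more signals (speed, distances, road conditions) than this.
    """
    if not obstacle_ahead:
        return "continue"
    if tailgater_close:
        # Slamming on the brakes would invite a rear-end collision,
        # so slow down gradually instead.
        return "brake gently"
    return "brake hard"

print(braking_decision(True, True))  # a close tailgater changes the response
```

The point of the sketch is simply that the rear check is one more condition in the program, exactly the check an attentive human driver would make in the mirror.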

Machine learning allows a mechanical device to sort through numerous examples and respond to a situation based on the criteria we choose. In an emergency, that is exactly what people do, but perhaps a bit more slowly. We react based on information derived from past experience and calculate the risks and benefits of our action. A machine can be programmed to do this. The secret is to include as many eventualities as possible. The better and more complete the information available, the more reliable the response. In the process, creating the program also pushes us to think more deeply and in a more disciplined way.
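Responding from past experience can be sketched with a tiny nearest-neighbour rule: given a new situation, pick the response recorded for the most similar past example. The situations and labels below are invented purely for illustration:

```python
def nearest_response(examples, situation):
    """Return the response whose recorded situation is closest to the new one.

    examples: list of (features, response) pairs from past experience.
    situation: tuple of numeric features describing the new situation.
    """
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    best = min(examples, key=lambda ex: distance(ex[0], situation))
    return best[1]

# Hypothetical past experience: (speed, distance_to_hazard) -> action
past = [
    ((30, 50), "slow down"),
    ((30, 5), "brake hard"),
    ((10, 40), "continue"),
]
print(nearest_response(past, (28, 8)))  # closest to the "brake hard" case
```

The more examples the list holds, and the better they cover rare eventualities, the more reliable the chosen response, which is the point the paragraph above makes in prose.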

Will machines take our jobs? They will certainly take some but they will create the need for others that may be more interesting. There will, for example, be less need for those inputting data and more for those analyzing it. And we all would like more services to make our lives easier.

Our universities and colleges are well positioned to foster thinking in unconventional ways about bold, new research pursuits. Today, scientists are breaking down barriers between disciplines and are creating fields we hadn’t even heard of only a couple of decades ago. In the process, they are opening the minds of their students, exposing them to endless possibilities. We want to cure diseases, extend life, and generally make lives more pleasant. We want to find ways to protect our environment for future generations while improving the economy.

Let us be inspired by the important questions that just might be the keys to our survival and success. Let us not fear the unknown, but strive to know all we can for the pleasure of accepting this challenge and the hope for a better future.

Roseann O’Reilly Runte is president and CEO of the Canada Foundation for Innovation.

This originally appeared in The Hill Times on Wednesday, October 17, 2018.