
It’s all in the algorithm

Making machine learning more affordable by tweaking how the machine interacts with the data sources that teach it
By Elizabeth Howell
Institution(s): University of Regina
Province(s): Saskatchewan
Topic(s): Artificial Intelligence

Machine learning — which is essentially designing computer methods that can learn patterns in data — can help computers predict things such as the incidence of disease in a given population, or assess the credit risk of somebody applying for a mortgage.

But like human learning, machine learning is only as good as the information the machine is exposed to. Sandra Zilles, Canada Research Chair in Computational Learning Theory and a researcher at the University of Regina, is making machine learning more efficient by improving the way computers are exposed to sample information. We talked to Zilles to learn how better interactions between machine and data are key to realizing the full potential of artificial intelligence.

First, can you tell us how machines learn?

There are many approaches to machine learning. The most popular ones typically have a general “model” of the real world in mind and then use statistical estimates to adjust that model to the data they are provided with.

Most existing machine learning algorithms were developed assuming the data is a random sample of what you find in the real world. For example, as a machine learns to find tumours in MRI images, it does so using an algorithm that assumes a random sample of MRI images. Similarly, a machine that learns a user's preferences for a web service uses an algorithm that assumes the interactions it receives as input are a random selection of that user's interactions with the web service.
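
To make the "random sample" assumption concrete, here is a toy sketch in Python (the threshold task, the numbers and the code are purely illustrative and not taken from Zilles's work): a learner that expects randomly drawn data estimates a hidden decision threshold by fitting it to whatever labelled points chance happens to provide.

```python
import random

# Toy illustration: the "real world" is a hidden threshold t in [0, 1].
# Points below t are labelled 0; points at or above t are labelled 1.
# A standard learner sees randomly drawn labelled points (a random sample)
# and adjusts its model -- here, an estimated threshold -- to fit them.

HIDDEN_THRESHOLD = 0.37  # unknown to the learner

def label(x):
    return 1 if x >= HIDDEN_THRESHOLD else 0

def learn_from_random_sample(n_samples, seed=0):
    rng = random.Random(seed)
    sample = [(x, label(x)) for x in (rng.random() for _ in range(n_samples))]
    # Estimate the threshold as the midpoint between the largest point
    # labelled 0 and the smallest point labelled 1 in the sample.
    largest_zero = max((x for x, y in sample if y == 0), default=0.0)
    smallest_one = min((x for x, y in sample if y == 1), default=1.0)
    return (largest_zero + smallest_one) / 2

for n in (5, 50, 500):
    print(n, "random examples -> estimate", round(learn_from_random_sample(n), 3))
```

The estimate only becomes accurate as the random sample grows, which is exactly why such methods need so many examples.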

So how will your work help machines learn faster?

Previous research showed that machine learning algorithms speed up if the data is selected carefully, instead of randomly. Machines learn faster from MRI images if the images used to train the machine are hand-selected by an expert. In the case of predicting user behaviour online, for example, machines learn user preferences faster if the user trains them by emphasizing particularly relevant interactions.

My research takes this idea one step further: we modify machine learning algorithms so that they expect carefully selected data, rather than assuming randomly chosen data. This assumption can substantially speed up the machine learning process and make it much more economical, especially when data acquisition is expensive or cumbersome, because less data is needed.
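
Continuing the same toy sketch (again purely illustrative, not Zilles's actual algorithm): if the examples are instead chosen by a "teacher" who knows the target, and the learner is designed to expect such carefully selected data, two well-placed examples already pin the threshold down to any desired precision.

```python
# Toy illustration continued: a teacher who knows the hidden threshold picks
# one point just below it and one just above it. A learner that *expects*
# carefully selected data can simply take the midpoint of that pair, so two
# examples suffice -- far fewer than a random sample would typically need.

HIDDEN_THRESHOLD = 0.37
EPS = 0.001  # how tightly the teacher brackets the threshold

def teaching_set():
    # Hand-selected examples: (point, label) pairs bracketing the threshold.
    return [(HIDDEN_THRESHOLD - EPS, 0), (HIDDEN_THRESHOLD + EPS, 1)]

def learn_from_teaching_set(examples):
    largest_zero = max(x for x, y in examples if y == 0)
    smallest_one = min(x for x, y in examples if y == 1)
    return (largest_zero + smallest_one) / 2

print(round(learn_from_teaching_set(teaching_set()), 3))  # ~0.37, from only two examples
```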

Imagine a biology lab, for example, that spends hundreds of thousands of dollars every year on lab technicians and chemicals for producing tonnes of data. They could probably save money if they produced high-quality data on a few hand-selected data points — and let the machine’s algorithms, which are fine-tuned to expect selected data, complete the picture — rather than producing a large number of random data points, and using a less efficient algorithm to sort through them.

So how will you put these algorithms to use?

One of my Ph.D. students is working on improving communication between self-driving cars in order to improve traffic flow. A core problem in this context is to give each car, which is an autonomous agent, the ability to judge the reliability of messages received, depending on the sender and the situation. This is done by learning a trust model, based on evidence from previous interactions. Similar ideas will apply to many other multi-robot applications, such as computer game technologies or load balancing in mobile networks.
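
One simple, hypothetical way such a trust model could be represented (the class and the reputation-style update below are an illustration, not the specific model developed in this project): each car keeps a score per sender, built from evidence about whether that sender's past messages turned out to be accurate.

```python
from collections import defaultdict

class TrustModel:
    """Illustrative trust model: per-sender reliability estimated from past evidence."""

    def __init__(self):
        # (accurate, inaccurate) message counts per sender, starting at (1, 1)
        self.evidence = defaultdict(lambda: [1, 1])

    def record(self, sender, was_accurate):
        # Update the evidence after checking whether a message proved accurate.
        self.evidence[sender][0 if was_accurate else 1] += 1

    def trust(self, sender):
        # Expected reliability of the sender, between 0 and 1.
        accurate, inaccurate = self.evidence[sender]
        return accurate / (accurate + inaccurate)

model = TrustModel()
for outcome in (True, True, False, True):
    model.record("car_42", outcome)
print(round(model.trust("car_42"), 2))  # rises as car_42 proves reliable
```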

Another application is biology, in particular the study of genes and their functions. We analyze data obtained from laboratory experiments on a small model organism, such as a bacterium, and train a machine to learn regularities in these data. Identifying patterns in data, and deviations from these patterns, helps us determine which genes are essential to which functions in the model organism — knowledge that is crucial, for example, in drug design.
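
As a rough, hypothetical illustration of the pattern-versus-deviation idea (the genes, numbers and method below are invented for the sketch; real analyses are far more sophisticated): learn the normal range of each gene's measurements across control experiments, then flag measurements that fall far outside it.

```python
from statistics import mean, stdev

control_runs = {            # illustrative measurements per gene
    "geneA": [1.0, 1.1, 0.9, 1.05],
    "geneB": [2.0, 2.2, 1.9, 2.1],
}
new_run = {"geneA": 1.02, "geneB": 3.4}  # geneB looks unusual here

for gene, values in control_runs.items():
    mu, sigma = mean(values), stdev(values)     # the learned "regularity"
    z = (new_run[gene] - mu) / sigma            # how far the new value deviates
    if abs(z) > 3:
        print(f"{gene}: deviates from the learned pattern (z = {z:.1f})")
```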

We also design and analyze formal models of user preferences, like the kind that learns what kinds of movies or books you like, which can help match a customer with products they are likely to prefer. This work is useful in areas such as e-commerce and marketing.
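
As a hypothetical sketch of what such a preference model can look like at its simplest (the users, items and similarity measure below are invented for illustration): predict a customer's rating of an unseen item from the ratings of customers with similar tastes.

```python
ratings = {                      # user -> {item: rating on a 1-5 scale}
    "ana":  {"book1": 5, "book3": 1},
    "ben":  {"book1": 4, "book2": 5, "book3": 2},
    "cleo": {"book1": 1, "book2": 2, "book3": 5},
}

def similarity(u, v):
    # Fraction of commonly rated items on which two users agree within one star.
    shared = set(ratings[u]) & set(ratings[v])
    agree = sum(abs(ratings[u][i] - ratings[v][i]) <= 1 for i in shared)
    return agree / len(shared) if shared else 0.0

def predict(user, item):
    # Average other users' ratings of the item, weighted by their similarity.
    others = [(similarity(user, v), r[item])
              for v, r in ratings.items() if v != user and item in r]
    total = sum(s for s, _ in others)
    return sum(s * r for s, r in others) / total if total else None

print(predict("ana", "book2"))  # ana's tastes match ben's, so roughly ben's rating
```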

Will machine learning have an economic impact for Canada?

Nearly every area of the economy will be affected by machine learning research. Businesses and agencies use machine learning to improve their products and services, to target their advertising, or to make decisions about future investments.

One good example is the health sector. Machine learning is used to improve the quality of data analysis so that the data collected from patients can be used more effectively. For example, you can use machine learning to find new patterns in data suggesting that a certain genetic condition is a risk factor for a certain disease. It can also be used to suggest to a doctor some likely diseases given a set of symptoms and a patient's history, or to assess the likelihood of success of a certain treatment given the patient's disease, symptoms and medical history.

The challenge is that machine learning typically requires so much data. In bioinformatics, for example, collecting data requires time-consuming laboratory experiments; when learning user preferences, you can't always expect a user to be willing to provide large amounts of data. My research can help make machine learning feasible at a lower cost in situations where current methods require too much costly data.
