Music presents the greatest challenge to scientists concerned with creating new ways of recording, transmitting, and reproducing sound. So what better way to develop new sound systems than to place music and professional musicians at the focal point of sound research? And who better to direct that research than a sound engineer who also happens to be a musician?
Dr. Wieslaw Woszczyk is just the man for the job. As director of the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), a cutting-edge new music and sound research centre at Montréal's McGill University, Dr. Woszczyk is at home both on the stage and in the lab.
At the heart of CIRMMT (pronounced "Kermit") is the Electronic Concert Hall and Opera (ECHO), a state-of-the-art facility that will serve as the hub of virtually all of the centre's research activities.
Currently under development, the multi-media facility will function as a scoring stage, recording studio, and performance venue. Designed to offer one of the most advanced acoustical environments anywhere in North America, the hall is actually a giant sound laboratory that will give researchers the real-world reference point they need to conduct and verify their experiments. ECHO is the hub of an intricate web of research clusters, linked together by a high-speed fibre-optic network capable of transmitting signals down the hall or around the world.
Ultra Hi-Fi Webcasting
A major objective of the music centre will be to develop new ways of capturing the complete experience of being at a live concert and transmitting it intact to the world. The goal is to capture not only multi-channel sound, but also high-definition images, and even tactile experiences such as the vibration felt through the floor as a drummer strikes a large drum.
But that's just the nuts and bolts of the whole experience. The ultimate goal is to develop concert halls where an artist might stage a "live" performance in one sending hall and have it transmitted, via a high-resolution broadcast network, to several receiving halls in different cities. Audiences in the receiving halls would enjoy the same concert experience as those at the sending hall.
If McGill's music centre has its way, the days of world-wide road tours may be numbered.
Imagine. A film completely free of humans. With a cast of synthetic actors that draw their performances from a database. Pure fantasy? Not really. It already exists, and audiences are getting used to the idea that the magic of an actor's craft can stream out of a database.
So what about synthetic musicians, and even synthetic instruments? Sound impossible? Although years away, the Virtual Orchestra is already under development.
Scientists skilled in sound and music analysis plan to analyze the tonal qualities of the finest real-world instruments. In time, a vast database of desirable tonal qualities will be catalogued for integration into synthetic computer-based instruments.
The goal? To develop virtual instruments that mimic the real thing so closely that their sound is indistinguishable to the human ear. Did somebody say "Virtual Stradivarius"? Well, why not?
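The analyze-then-resynthesize idea behind such virtual instruments can be sketched in miniature. The toy below is not CIRMMT's actual method; the sample rate, fundamental frequency, and harmonic amplitudes are all invented for the demo. It estimates the amplitudes of a tone's harmonic partials by projecting the signal onto sine waves (a tiny "tonal database"), then rebuilds the tone from those partials by additive synthesis.

```python
import math

SR = 8000        # sample rate in Hz (kept small so the demo runs fast)
F0 = 400.0       # fundamental frequency of the test tone, in Hz
N = SR           # one second of audio

def synthesize(amps, f0=F0, n=N, sr=SR):
    """Additive synthesis: build a tone from harmonic partial amplitudes."""
    return [sum(a * math.sin(2 * math.pi * (k + 1) * f0 * t / sr)
                for k, a in enumerate(amps))
            for t in range(n)]

def analyze(signal, f0=F0, n_partials=4, sr=SR):
    """Estimate each partial's amplitude by projecting onto sin(2*pi*k*f0*t).

    Over a whole number of cycles the harmonics are orthogonal, so the
    projection (2/N) * sum(signal * sin(...)) recovers each amplitude.
    """
    n = len(signal)
    amps = []
    for k in range(1, n_partials + 1):
        proj = sum(s * math.sin(2 * math.pi * k * f0 * t / sr)
                   for t, s in enumerate(signal))
        amps.append(2 * proj / n)
    return amps

# A stand-in "recording" whose harmonic recipe we pretend not to know.
true_amps = [1.0, 0.5, 0.25, 0.125]
recording = synthesize(true_amps)

estimated = analyze(recording)   # recovers ~[1.0, 0.5, 0.25, 0.125]
clone = synthesize(estimated)    # the resynthesized "virtual" tone
```

A real instrument would of course need time-varying partials, noise components, and attack transients; this sketch only shows the steady-state core of the idea.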
Using ECHO, scientists will invite exceptional musicians to perform a variety of musical selections, then study their playing styles, taking an exhaustive inventory of breathing and bowing techniques as well as musical phrasing. As the performance database grows, patterns of desirable musical styles will emerge. In time, software programs representing the finest synthetic musical talents could be devised.
What does the future hold? A composer might one day sit at the computer with direct creative control over a completely synthetic orchestra. With such an orchestra literally at the composer's fingertips, it's conceivable that a whole symphony could one day be composed and performed entirely within a computer, then distributed to the world over the ultra-high-resolution Internet of the future.
By using music as its main experimental catalyst, CIRMMT is embarking on a new research path in its quest for scientific answers. And the implications are wide-ranging.
What can we hope to gain from the music-based research?
High-fidelity communications systems and electronic concert halls that might allow performers to tour the world, all from the convenience and comfort of a single studio "concert hall."
Intelligent sound systems that will recognize, in the same way that a human listener does, the type of music being played. Jazz. Classical. Rock. Then match their settings to deliver optimum sound quality.
Health-care providers with new therapeutic tools to relieve certain mental disorders, using the hidden healing powers of music.
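As a hedged illustration of the "intelligent sound system" idea above, the sketch below chooses an EQ preset from a single crude audio feature, the zero-crossing rate. The presets, threshold, and feature are all invented for this example; a real system would rely on trained classifiers over many features, not one hand-set rule.

```python
import math

# Hypothetical EQ presets, purely for illustration.
PRESETS = {"mellow": {"bass": +3, "treble": -2},
           "bright": {"bass": -1, "treble": +3}}

def zero_crossing_rate(signal):
    """Fraction of adjacent sample pairs that change sign; a crude
    stand-in for how treble-heavy ("bright") the audio is."""
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    return crossings / (len(signal) - 1)

def choose_preset(signal, threshold=0.1):
    """Pick a preset name from the zero-crossing rate (threshold assumed)."""
    return "bright" if zero_crossing_rate(signal) > threshold else "mellow"

# Two synthetic test signals: a low-frequency and a high-frequency tone.
low = [math.sin(0.05 * t) for t in range(2000)]   # few zero crossings
high = [math.sin(1.0 * t) for t in range(2000)]   # many zero crossings
```

Calling `choose_preset(low)` selects "mellow" and `choose_preset(high)` selects "bright"; the chosen name would then index into `PRESETS` to set the equalizer.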
As demand for multi-media entertainment and communications systems grows worldwide, McGill's state-of-the-art facility will be a focal point of that research.
The centre's medical researchers intend to explore uncharted areas that hold the promise of improving our quality of life. The truly multi-disciplinary approach presents the possibility of wide economic and cultural benefits to Canada and the world community.
CIRMMT brings together a diverse group of internationally recognized Quebec researchers spanning the complementary areas of music science, engineering, and medicine. The researchers come from four institutions: McGill University, the University of Montreal, the University of Sherbrooke, and the CEGEP of Drummondville. A $6.5-million infrastructure contribution from the Canada Foundation for Innovation, with matching provincial funds, has attracted much interest and support from industrial partners looking for the next breakthrough in sound technology. The music centre has now joined forces with some of the most advanced industrial partners in the world.
Virtual reality systems and true 3-D sound
Audio equipment giant Bang & Olufsen has teamed up with CIRMMT to support its research into audio-visual interactions within the brain. The research will have industrial applications, helping companies design more accurate virtual-reality simulation, home-theatre, gaming, and teleconferencing systems.
Pioneer Electronics of Japan is an active participant in research aimed at developing new ways of recording and reproducing multi-channel sound. These new techniques will give moviegoers and music fans a far more realistic 3-D sound experience in years to come.
How music travels through and affects the brain
Because of its complexity, tracking music as it passes through the brain offers researchers a unique way to better understand how our brains are organized. Dr. Robert Zatorre and his team at the Montreal Neurological Institute will be using functional neuroimaging techniques, such as magnetic resonance imaging, to investigate brain activity as test subjects listen to music. This area of research has applications for developing new communication systems, intelligent hearing aids, and new therapies for certain mental disorders.
In the course of the research, musicians and neuroscientists will work together in novel ways. Neurologists will study the brains of highly trained musicians to see "what makes them tick" and to try to determine how the brain of a musician is different, and what allows some brains to develop perfect pitch.
Clearing the Air
Sound pollution has grown to near-crisis proportions in many of the world's cities. The effects of industrial noise on hearing and health are now coming under scrutiny at the University of Sherbrooke. Researchers will examine ways to actively suppress unwanted noise by creating quiet zones using microphones, digital processors, and amplified speakers. The result? Techniques already under development could one day be capable of "vacuuming" noise pollution out of an environment. In time, we might all look forward to quieter cars, airplanes, and workplaces thanks to new noise-canceling technology developed from this research.
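The noise-suppression principle described here, measuring the unwanted sound and emitting its opposite, is classically implemented with an adaptive filter. The sketch below is not the Sherbrooke system; it is a minimal least-mean-squares (LMS) feedforward canceller with invented signals, filter length, and step size, and it ignores real-world complications such as the speaker-to-microphone acoustic path.

```python
import math
import random

random.seed(0)

TAPS = 8     # adaptive filter length (assumed for the demo)
MU = 0.01    # LMS step size; must be small enough to stay stable

def lms_anc(reference, disturbance, taps=TAPS, mu=MU):
    """Feedforward noise cancellation with the LMS algorithm.

    'reference' is what the reference microphone hears near the noise
    source; 'disturbance' is the same noise after travelling to the
    quiet zone. The filter learns to predict the disturbance from the
    reference; the loudspeaker plays the negative of that prediction,
    so only the prediction error remains audible."""
    w = [0.0] * taps       # adaptive filter weights
    buf = [0.0] * taps     # most recent reference samples
    residuals = []
    for x, d in zip(reference, disturbance):
        buf = [x] + buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, buf))   # anti-noise estimate
        e = d - y                                    # residual at quiet zone
        w = [wi + mu * e * xi for wi, xi in zip(w, buf)]
        residuals.append(e)
    return residuals

# Tonal machine noise; at the quiet zone it arrives delayed and attenuated.
n = 5000
ref = [math.sin(0.3 * t) + 0.1 * random.gauss(0, 1) for t in range(n)]
dist = [0.8 * math.sin(0.3 * (t - 2)) for t in range(n)]

res = lms_anc(ref, dist)
before = sum(d * d for d in dist[-500:]) / 500   # noise power, untreated
after = sum(e * e for e in res[-500:]) / 500     # residual after cancelling
```

After the filter converges, `after` is a small fraction of `before`: the tonal component is almost entirely cancelled, and only filtered microphone noise leaks through.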
Musilab is a commercial outgrowth of research projects conducted at the CEGEP of Drummondville. This music and sound technology centre has received international recognition for its development of innovative music software. One program, called "Harmonis," is designed to support the composition and writing of musical arrangements by automating different parts of the composition. The software can write parallel harmonizations of two to eight voices, as well as accompaniment, counter-melody, and bass and drum lines. Chords can also be generated with this novel program.
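Harmonis's internals are not described here, but the parallel-harmonization feature the article mentions can be illustrated with a toy: add a second voice a diatonic third below each melody note, staying inside the key. The function names, scale handling, and melody are all invented for this example.

```python
# Toy parallel harmonization in C major: every melody note is paired
# with the scale degree a diatonic third (two scale steps) below it.
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def third_below(note, scale=C_MAJOR):
    """Return the note two scale steps below 'note' (a diatonic third)."""
    i = scale.index(note)
    return scale[(i - 2) % len(scale)]

def harmonize(melody, scale=C_MAJOR):
    """Pair every melody note with its diatonic third below."""
    return [(n, third_below(n, scale)) for n in melody]

melody = ["E", "D", "C", "D", "E", "E", "E"]   # "Mary Had a Little Lamb"
duet = harmonize(melody)
# duet[0] == ("E", "C"): the harmony voice starts a third below the melody
```

A real arranger like Harmonis must also handle key changes, avoid forbidden parallels where the style demands it, and register the harmony in the right octave; this sketch shows only the core mapping from melody to a parallel voice.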