Friday, August 17, 2012

The EMIR Laboratory Nears Completion

The Exploration of Media Immersion for Rehabilitation (EMIR) Laboratory is in its final stages of implementation before going "live" over the coming fall. The project received a significant new impetus in 2012 with the engagement of the company Cine-Scène, which has helped us line up a series of "showcase demos" that explore the use of the lab for physical rehabilitation in a variety of areas and ways.


The lab is now organized into seven distinct "stations", each providing a different immersive ensemble for the user :

  1. Interactive Floor
  2. Stereo Panorama with Motion Capture
  3. Tactile Interface
  4. Co-breathers
  5. Forward Spatialized Sound System
  6. Rear Spatialized Sound System
  7. Physiological Interface


In addition to these, 15 wireless headsets and 3 HD video cameras are available for additional initiatives. We expect to extend these supplementary systems over the coming year to include other elements.


Typically, stations might be operated jointly in pairs - for example, the Stereo Panorama will be operated with the Forward Sound System, or with the 15 wireless headsets.


A coordinated controller is being incorporated into the Tactile Interface to allow a user to select the stations to use for any particular application.


With Cine-Scène, we have been constructing six "showcase demos" that will be used to promote the lab to other researchers and to clinical staff at the hospital. These include a floor that perturbs balance, a real-time avatar with reduced or enhanced mobility compared to the user, an exploration of how different users perceive the same immersive experience, a laboratory experiment, the use of the co-breathers, and the use of the physiological interface. I should have photos to show shortly!

Sunday, July 18, 2010

Designing Sound Spaces

Much of the work undertaken by this Chair over the past several years has concentrated on visual and haptic experiences, with comparatively little attention to sound. Since the EMIR laboratory (see previous blog) is being equipped with a modern spatialized sound system (albeit not the most sophisticated version of such a system), and since some of the clientele of the lab have either visual or auditory challenges, it seems appropriate to investigate more actively the opportunities for designing useful and interesting sound spaces.

This work was at first hampered by the fact that we had no reliable tools for understanding auditory spaces. Over the past several years we have used several theoretical concepts to structure visual and haptic spaces (including Voronoi diagrams, a mathematical concept called a "panorama", image schemata, etc.), but these could not be readily applied to understanding how sound inhabits spaces. Sound, unlike light, moves around corners and through many quite substantial barriers. Moreover, sound usually emanates from so-called "point sources", and we do not work with "sound images" the way we work with visual images - that is, with sounds organized spatially in matrices.

This past year I (G. Edwards) was called upon to teach undergraduates in the geomatics program some basic physics of wave propagation and satellite orbits. While I was thinking over the challenges of developing a tool for understanding sound spaces, an idea emerged. After some development work, this idea proved able to resolve the problem of modeling a space in terms of its sounds.

Our model uses a variation of Huygens' principle, which states that when propagating waves encounter an aperture, their movement through the aperture can be simulated by supposing that at each point within the aperture a new (circular) wave is generated with the same phase and intensity as the incident wave. This principle allows one to model the movement of sound waves around corners (light also bends around corners by the same mechanism, but the size of the deviation is much smaller).
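For readers who like to see the principle at work, here is a minimal numerical sketch (our illustration only, not the laboratory's software, and the numbers are invented): the field beyond an opening is approximated by summing small circular wavelets re-emitted from points spread across the opening, and the sum remains clearly non-zero well off to the side, i.e. "around the corner".

import numpy as np

wavelength = 0.5                         # metres, roughly a 700 Hz tone in air (illustrative value)
k = 2 * np.pi / wavelength               # wavenumber
aperture = np.linspace(-0.5, 0.5, 200)   # a 1 m opening in a wall lying along y = 0

def field_at(x, y):
    # sum the secondary (circular) wavelets re-emitted from every point of the aperture
    r = np.sqrt((x - aperture) ** 2 + y ** 2)
    return abs(np.sum(np.exp(1j * k * r) / np.sqrt(r)))

print(field_at(0.0, 3.0))   # directly behind the opening
print(field_at(2.0, 3.0))   # off to the side: still non-zero, the wave has bent around the edge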

To develop a model that simplifies how we understand sound spaces, we look for a "partitioning schema", that is, a way to partition space (a room, a theatre, a park, etc.) into regions that are auditorially "homogeneous" (invariant), in the sense that the sound experience within a given region of the partitioned space is similar throughout that region, but different from one region to the next. This idea is consistent with our earlier visual and haptic models of space.

Figure 1 : Spatial partition for sounds before their encounter with absorbing barriers


In the model, sound experiences are generated by point sources (A, B, C and D) that propagate into circular regions. Sound sources with low overall gain generate small circles (C and D) - sources with large gain generate large circles (A and B). Barriers that the sounds may cross are shown as lines 1 and 2. A path through the space is then introduced (dashed line).
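To make the ingredients of the model concrete, here is a small sketch of the kind of data structures involved. This is our illustration only: the class names, the proportional-to-gain radius, and the coordinates are invented for exposition and are not the figure's actual values.

from dataclasses import dataclass
import math

@dataclass
class SoundSource:
    name: str
    x: float
    y: float
    gain: float          # overall gain; the circular zone of influence grows with it

    def is_audible_from(self, px: float, py: float) -> bool:
        # in this toy version the radius of the circle is simply proportional to the gain
        return math.hypot(px - self.x, py - self.y) <= self.gain

@dataclass
class Barrier:
    x1: float
    y1: float
    x2: float
    y2: float
    absorption: float    # fraction of the gain lost when a sound crosses the barrier

# two loud sources (large circles) and two quiet ones (small circles), in the spirit of Figure 1
sources = [SoundSource("A", 0, 0, 8), SoundSource("B", 10, 2, 7),
           SoundSource("C", 4, 6, 2), SoundSource("D", 7, -3, 1.5)]
print([s.name for s in sources if s.is_audible_from(3.0, 1.0)])   # which sources reach this point?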

This kind of model is quite different from the results of a numerical simulation of sound propagation within a space, as the latter produces a map of continuously changing values for the sound field. Numerical simulations can be very useful, but they take large chunks of processor time and/or high-end computers to "do the job", and they usually also have to make a variety of simplifying assumptions to converge on a result within a reasonable time. The use of qualitative models such as ours, which segment space into regions, can provide a useful, even powerful alternative to numerical simulation.

Our qualitative model of sound space, however, differs from our earlier models in that we are required to dynamically update the partitioning each time the sound passes across a barrier that dampens the signal. Hence, as long as the observer is located on the path previous to the location labeled "alpha", the sound sources as shown in the above diagram hold true. Once the observer passes alpha, however, the sound sources "behind" the barrier made by lines 1 and 2 and their extensions must be modified (see Figure below).

Figure 2 : Spatial partition for sounds after their encounter with absorbing barriers 1 and 2


At this point in time, the sound B must be "re-sourced" at the location B2, generating a new, much smaller circle. Furthermore, as the observer moves beyond the zone of influence B2, along a path parallel with barrier #1, and within the initial zone of influence of B, the sound source B must be "re-sourced" to a point (B3) defined by the orthogonal to the barrier #1 that passes through the location of the observer. Usually, the size of the circle for B3 will be much smaller due to the absorptive properties of the barrier #1.

Thus by representing the movement of the observer along a path as a series of partitions that change every time the observer crosses a barrier, we can construct a model that predicts the set of sound experiences an arbitrary observer will receive.
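The re-sourcing step itself reduces to a small piece of geometry. The sketch below is our own illustration, not the software's actual code: the function names and the simple 1/r intensity falloff are assumptions. It covers the B3-type case described above, where the virtual source sits at the foot of the perpendicular dropped from the observer onto the barrier line, with its gain reduced by the barrier's absorption.

import math

def foot_of_perpendicular(px, py, x1, y1, x2, y2):
    # point on the barrier line closest to (px, py), i.e. where the orthogonal through the observer meets it
    dx, dy = x2 - x1, y2 - y1
    t = ((px - x1) * dx + (py - y1) * dy) / (dx * dx + dy * dy)
    return x1 + t * dx, y1 + t * dy

def resource(source_gain, barrier_p1, barrier_p2, absorption, observer):
    # virtual source position and attenuated gain once the observer is behind the barrier (the B3 case)
    (x1, y1), (x2, y2) = barrier_p1, barrier_p2
    vx, vy = foot_of_perpendicular(observer[0], observer[1], x1, y1, x2, y2)
    return (vx, vy), source_gain * (1.0 - absorption)

def perceived(source_pos, gain, observer):
    # relative intensity (toy 1/r falloff) and direction of arrival, in degrees, at the observer
    r = math.hypot(source_pos[0] - observer[0], source_pos[1] - observer[1]) or 1e-6
    bearing = math.degrees(math.atan2(source_pos[1] - observer[1], source_pos[0] - observer[0]))
    return gain / r, bearing

observer = (3.0, -2.0)
virtual_pos, g = resource(7.0, (5.0, -4.0), (5.0, 4.0), 0.7, observer)
print(virtual_pos, g)                      # the re-sourced location on the barrier and the reduced gain
print(perceived(virtual_pos, g, observer))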

Using this model, we developed software that can compute in real time the relative intensity and the direction from which each sound is heard (that is, the sounds are "re-sourced" at a new location) for one or more arbitrary observers. Using this software, we are able to design a virtual sound space and hence generate a realistic sound experience for a fictional space.

We did exactly this as part of a conference presentation of the new theory at a recent meeting in the small town of Las Navas del Marques, in Spain. The conference was organized around the theme "Cognitive and Linguistic Aspects of Geographic Space" and was, in effect, a review of 20 years of research in this area since an earlier meeting of the same group at the same location in 1990. For this event, in collaboration with Ms. Marie Louise Bourbeau (longtime collaborator of the Chair) and also Mr. René Dupéré, the talented composer who reinvented circus music for the Cirque du Soleil in the 1980s, we developed and implemented a transposition of the Ulysses Voyages into a fictional and virtual sound space that was updated in real time using our software.

The result is a 40-minute presentation including a 20-minute "show", an interactive component that demonstrates the real-time nature of the experience, and an explanation of the scientific theory leading to this work.

Tuesday, July 28, 2009

A Toolkit for the EMIR Laboratory

The EMIR Laboratory (Exploration of Media Immersion for Rehabilitation) is now well on its way to becoming a reality. We have a space (still temporary, as we shall eventually be moving to a completely refurbished space a few doors down the corridor) and several computers, and we are in the process of acquiring our first major piece of equipment, a floor projection system. Combined with our efforts in collaboration with Bloorview Kids Rehab, we will be working with the full range of human sensory perception - visual and audio of course, but also tactile, movement, physiological (heart rate, skin conductance, breathing, etc.), olfactory and even taste, as well as using a brain-computer interface. The goal is to generate immersive experiences - creative, game-like, artistic, etc. - that challenge rehab patients, clinicians and/or researchers to view themselves in new ways.

However, few people have any understanding of what can be achieved or how to go about doing this. In addition, even our team, which has been exploring multisensory immersive environments for some time, needs good intermediate tools to support our ongoing research, and we are not always aware of what is possible either. With a view both to helping ourselves and to encouraging collaboration and participation in the new laboratory, we have embarked upon the process of developing a "toolkit" for delivering multisensory immersive experiences with a minimum of technical expertise.

Called an Affordance Toolkit (because each tool affords different sets of activities - we are drawing on Gibson's affordance theory for this), the framework consists of matching a set of controller interfaces to a set of viewer modules as a function of particular tasks. Controllers include cameras that are able to read and interpret gestures, tactile screens and pressure carpets able to register different forms of body contact, microphones for recording and interpreting sounds, and sensors for recording physiological or neurological signals. Viewers include 1-, 2- or 4-wall projection, ceiling and floor projection, surround spatialized sound, motor-driven devices - both large and small, scent diffusers, and so on.
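To give a sense of what such a matching framework might look like in code, here is a hedged sketch. The module and task names are ours, invented for illustration, and the real toolkit is considerably richer; the point is simply that each task declares the controllers and viewers that afford it, and the framework assembles (or rejects) a configuration.

# controllers and viewers imagined for the toolkit (names illustrative only)
CONTROLLERS = {"gesture_camera", "tactile_screen", "pressure_carpet", "microphone", "physio_sensors"}
VIEWERS = {"wall_projection", "floor_projection", "spatialized_sound", "motor_devices", "scent_diffuser"}

# each task declares the controller and viewer modules that afford it
TASKS = {
    "mirror_space":   ({"gesture_camera"}, {"wall_projection"}),
    "master_at_work": ({"gesture_camera", "microphone"}, {"spatialized_sound", "scent_diffuser"}),
}

def configuration_for(task, controllers=CONTROLLERS, viewers=VIEWERS):
    # return the modules to activate for a task, or report what is missing
    needed_c, needed_v = TASKS[task]
    missing = (needed_c - controllers) | (needed_v - viewers)
    if missing:
        raise RuntimeError("cannot run %s: missing %s" % (task, sorted(missing)))
    return needed_c, needed_v

print(configuration_for("mirror_space"))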

Tools under development that bridge these two sets of functionalities include the following :

1) Mirror Space - using webcams and full-wall projections, the real-time video images are horizontally flipped to generate a pseudo-mirror image (occupying 1, 2 or all four walls). Combined with digital enhancements, virtual objects and annotations added to the projected image, this allows us to deliver an environment that supports a variety of tasks, including various physical games (tug of war, zone avoidance, tag, etc.), cognitive games or tasks (draw in the outlines of objects, paint by numbers, etc.) or controlled exercise and/or balance tasks (raise your feet until they hit a gong, move along a virtual line, etc.) - see the sketch after this list;

2) Master at Work - using data gloves, or alternate controllers for those unable to use their hands, gestures and manipulation are used to create and modify sounds, visual objects, odors, etc., making a "multisensory composition" akin to a musical composition. This might be done in a darkened room, avoiding the use of vision;

3) Room of Presence - Similar to the previous tool, this will allow for the materialization of virtual characters that then interact with the user. The user will be able to draw on a bank of virtual characters with a range of pre-determined behaviors, or be able to create very simple "characters" with new behaviors;

4) Multisensory Logbook - In order to record, annotate, archive and play back the experiences created in the EMIR laboratory, we are working on the development of a multisensory logbook system involving video cameras and microphones as well as a computerized logbook of programmed functions;

5) Social Atlas - Using GPS for outdoor environments and RFID tracers combined with other location technologies for interiors, we will provide the ability both to track volunteers or friends and to represent these movements within the EMIR laboratory;

6) Experiensorium - Using geographical database structures, we shall be able to provide the possibility of navigating large and complex virtual environments filled with a multitude of sensory experiences. This will be particularly effective in the presence of non-realistic visuals or no visuals at all - for example, walking through a sketched farmyard, but hearing and smelling the animals, feeling their presence through air currents and the occasional sense of touch. Within the experiensorium, it will be possible to play out games or narrative experiences.
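As promised under item 1, here is a minimal sketch of the mirror-flip step at the heart of Mirror Space, assuming a standard webcam and the OpenCV library (cv2); the digital enhancements, virtual objects and annotations would be drawn onto the mirrored frame before it is projected.

import cv2

cap = cv2.VideoCapture(0)          # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mirrored = cv2.flip(frame, 1)  # flip horizontally to produce the pseudo-mirror image
    cv2.imshow("Mirror Space", mirrored)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()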

In addition to these macro-tools, we will also be developing and using a range of microtools such as the ability to call up a pop-up menu on the wall-screens using gestures, to partition the visual, audio or tactile spaces, to inject text into these different spaces (e.g. written, audio or braille), and so on.

Each of the proposed tools represents significant research and development challenges, but working on them is both satisfying and engaging. We look forward to reporting on progress on the development of the toolkit over the coming months.

Thursday, April 30, 2009

Transformative Installations - Global perspective

Since the early development work on the Bloorview initiative (called at the time the "Hidden Magician" project), our efforts to develop a whole range of "transformative" or "resonant" installations have moved forward by leaps and bounds into several major initiatives. We are currently active in the development of a major "new generation environment" at Bloorview Kids Rehab that we are calling the "Living Walls Initiative". Within this project, we are developing a highly interactive, one might say "reactive", wall mural that responds to the presence of children with disability in many different ways. Our goal is to change the way the children understand their relationship to their surrounding space.

Children with disability struggle within environments which are highly disabling. Indeed, we call the children "disabled" but we might more usefully call the environments within which they (and we) function disabling environments. As a result, these children often feel like they are a burden on others, that they have to struggle with the environment, that they are what's "wrong". By developing new environments that are much more responsive to a variety of forms and levels of disability, we aim to challenge this understanding, to offer these children an insight into other possible relations they might have to the spaces that surround them and with which they engage.


An early conceptualization of the Living Walls Initiative

The Living Walls initiative is the first major attempt to do this. The overall concept is to develop a large wall mural (we're thinking 8 feet high by 20 feet long) that is made up of motorized elements that will respond, via appropriately designed interfaces, to children with various forms of disability. The mural will depict a scene of relevance to the hospital - a depiction of the ravine that drops away behind the hospital and which has already been incorporated in a number of ways into the design of the hospital building. This allows the children to be attuned to the presence of natural elements in the local environment of the hospital. We are designing into the mural elements which may change color and shape and hence depict the changing seasons. However, the main focus of the mural is to allow the children to interact with the scene and to make interesting changes to it. For example, we are building in animal figures that may hide or emerge at different moments, when the mural senses a child in its proximity. By making some sort of movement, whether using a wheelchair or a gesture, children will be able to change several aspects of the mural - the intensity of water flow in the built-in waterfall, the shape and color of leaves in the trees, the overflight of planes, and so on.

The project is moving from its conceptual design phase into the development of early prototypes that will be used to test the implementation before this is fully fleshed out. At the same time, funds are being sought, both from private donors and funding agencies, in support of the project. Many of the partnerships needed for its success are already in place.

A second "next generation environment" projet also aimed at helping children with disability has been named the "Ado-Matrix Project". This project focuses particularly on the plight of adolescents with disability, who face a situation where they tend to become isolated from their peers and are in a difficult position to build new friendships. To serve their needs, we are developing a tele-gaming environment that "equalizes" player access across different levels of ability, so that a severely handicapped adolescent may play on an equal footing as an able-bodied friend. Our project seeks to create remotely controlled robots that must work together in a common, physically real environment to achieve group goals. Each adolescent will control his or her own robot, an semi-independent webcam and will have access to group chat either through text or voice or a combination of these. Different robots will have different functionality, however. For this project we are still building partnerships and doing conceptual design.

A third installation project on which we are working addresses the issue of climate change and environmental responsibility. Here our aim is to develop an installation that can be taken to the urban public and which will sensitize participants to environmental issues in a manner that is informed by an awareness of the inequities of urban life and of how different elements of the community may learn to find common ground in addressing these issues. The project bears the title "Voices of Transition".

Tuesday, June 10, 2008

The Hidden Magician - A Resonant Installation for Children with Cerebral Palsy

Early in 2008, the Canada Research Chair in Cognitive Geomatics, in partnership with Bloorview Kids Rehab (BKR), the Institut de réadaptation en déficience physique du Québec (IRDPQ) and Studio BourbeauVoiceDynamics, began to work on the creation of a "resonant installation" addressing the needs of children with cerebral palsy and other motor deficits. Under the title "The Hidden Magician", this broad collaborative effort seeks to develop a participative, immersive installation in which children with cerebral palsy and other motor deficits can establish a different relationship with their immediate environment and feel more empowered and recognized for who they are.

Like our other installation initiatives, the approach adopted is to develop an installation design through a broad consultative process that includes researchers, artists, engineers, hospital administration staff, clinicians, students, parents, and the children themselves. Installations must address the needs of the children in ways that are conducive to enhancing their physical and emotional states of being, and yet also generate powerful experiences that are aesthetically interesting and are challenging, even transforming. We use new media technologies including surround projections, gesture recognition interfaces, spatialized sound and tactile environments, combined with engineering skills to develop specialized interfaces that provide enhanced environmental responsiveness for these children.

The project embraces a variety of research areas, from issues of design methodology, to questions concerning the impacts of immersive and participative experiences on children struggling with issues of growth and identity, to efforts to develop measurement and evaluation tools that can better characterize the effectiveness of these installations.

The design concept is still in its early stages. The overall concept has been presented to a broad cross-section of individuals - researchers, artists, clinicians and administrators - both at Bloorview Kids Rehab in Toronto and at the IRDPQ in Quebec City, where it has elicited a great deal of interest and support. The work is now moving forward into a second stage, focussed on the development of a series of workshops with this diverse clientèle that will feed the design process. Workshops involve a combination of physical activities that aim to allow participants to "think with their bodies" rather than "staying in their heads", and brainstorming and sharing exercises that explore design values and principles. We use dancers, clowns and other specialists in movement to facilitate these exercises.

It is expected that the installation, when completed, will be able to "go on tour" to other interested locations (hospitals, clinics, schools, etc.), and that it will serve as much to sensitize a broader public to the unique qualities of these children as it will enable both the children themselves and their caregivers to rethink their perceptions of who they are.

Sunday, February 24, 2008

Virtualities and Culturalities in Düsseldorf

Are you an insider or an outsider? Real or virtual? Do you have to be either one or the other? In collaboration with BourbeauVoiceDynamics and the Düsseldorf Stadtmuseum, and with the LAMIC and LANTISS, the Canada Research Chair is undertaking the preparation of a ten-day exhibition to be held April 4 - April 13 in Düsseldorf. The exhibition, entitled "Virtualities and Culturalities in Düsseldorf", will present an interactive virtual event highlighting the multi-ethnicity of Düsseldorf.

Each community has its own understanding of the city, based on its commuting patterns and culturally-specific landmarks. The marketplaces, churches, synagogues and mosques, recreational centres and ethnic restaurants together with a person’s movement form a “heart map” of the city. These places also support events which highlight and celebrate each community’s cultural heritage. Traditional costumes, music and dancing can define one as being in or out of a community. When employed in the country of origin, they enhance the feeling of being « inside » an ethnic group. These same rituals, however, held in a host country, underline an ethnic group’s distinctiveness, and hence the feeling of being on the « outside », or periphery of society. The modern city offers an alternative to this polarity of exclusion, however, in the form of an eclectic fusion that draws from different traditions, celebrating the contribution of each and yet creating new spaces for identity. Within such fusions, it is possible to be both “inside” and “out”.

In this interactive exhibition, the public is encouraged to « mix and match » clothing from different ethnic traditions so as to create a fashion fusion rooted in folklore and tradition but with a distinct link to the present. Combined with “heart maps” for several different communities, and contact via avatars with virtual folk dances, the installation seeks to engage both the younger population via its innovative use of Second Life and virtual worlds, and the older population by its integration of ethnic traditions in fashion, dance and music. Vive la difference!

Monday, December 10, 2007

Breaking News – Canada funds an Unusual Laboratory for Rehabilitation

The Laboratory for the Exploration of Media Immersion for Rehabilitation (EMIR Laboratory), the first laboratory of its kind in the world, aims to develop and evaluate immersive experiences based on enhanced body awareness, with a view to supporting applications in rehab in particular. The laboratory will complement existing laboratories at Laval University that offer immersive experiences (the Laboratoire de muséologie et d’ingénierie culturelle or LAMIC, the Laboratoire des nouvelles technologies de l’image, du son et de la scène or LANTISS, and the Laboratoire de réalité géospatiale augmenté en réseau et déplacements or REGARD and the virtual reality cave at the Centre interdisciplinaire de recherche en réadaptation et intégration sociale or CIRRIS), but the EMIR Laboratory is destined to be integrated within a clinical hospital environment (the Institut de réadaptation en déficience physique de Québec or IRDPQ) during its third year of development.

The EMIR laboratory will consist of a variety of equipment, with a total value of about $300,000, including specialized devices to record and deliver high quality and spatialized sound, a 360 degree visual surround (including an interactive floor and video cameras), tactile interfaces for capturing touch and gesture, a brain-computer interface so that the environment can be controlled to some extent by thought patterns (of special interest for quadriplegics), and the ability to track movements outside the lab and to represent these within the laboratory environment. The immersive experiences that will be designed include, therefore, a multisensory combination of artistic, pedagogical and scientific elements structured in space, which will serve to support the development of experiences that challenge and transform the embodied identity of participants undergoing rehabilitation.

Partners in the project include the IRDPQ, LANTISS, LAMIC and CIRRIS, and MercanStream Technologies Inc.