
Friday, May 16, 2014

EMIR Laboratory Now Functional

It has taken longer than expected to get the lab completed and operational, but this has finally been achieved. In conjunction with the efforts to complete the installation of the laboratory, we have developed five demos that showcase the lab's potential to support diverse projects in rehabilitation and disability studies. These are as follows:

  1. Fragment de vie (Fragmented Life): A demo that offers different viewpoints on disability by delivering different soundtracks for the same film to wireless headphones (collaboration with CinéScène Inc.)
  2. Vertiges (Vertigo): A demo that provides an experience of vertigo by simulating a walk across a narrow pathway suspended above Quebec City (collaboration with CinéScène Inc.)
  3. Deuxième peau (Second Skin): A demo that gives an observer a vicarious experience of disability. The movements of a person in a motion capture suit are transferred in real time to an avatar in a virtual kitchen, but attempts to interact with the kitchen and manipulate objects are restricted because the avatar experiences various types of motor impairment (collaboration with CinéScène Inc.)
  4. Viscères (Viscera): A demonstration of a body-based interface for virtual navigation of spaces (Google Street Map)
  5. Meta-laboratoire (Meta-laboratory): A demonstration of the use of the lab to support immersive experiments, in this case a study of the effects of heminegligence on spatial orientation and judgement in the far field (collaboration with CinéScène Inc. and Dr. Julien Voisin)
In addition to the demos, a growing number of research projects are taking place that harness the possibilities the lab offers. I have subdivided these projects into six categories: Bimodal Environments, Trimodal Environments, Movement-based Environments, Visceral Environments, Virtual Environments and Geographic Environments.

  1. Bimodal Environments: Immersive environments that engage two major sense modalities

    Hemispheres (Hémisphères)
    Experimental study, in the planning stages, of the far-field effects of heminegligence (collaboration with Dr. Julien Voisin)
    Co-breather (Co-respirateur)
    Design and validation study underway to build and test four co-breathers for possible clinical applications. Co-breathers provide auditory-tactile immersion (collaboration with Ms. M.L. Bourbeau and C. Légaré)
  2. Trimodal Environments: Immersive environments that engage three major sense modalities

    Living Wall (Mur vivant)
    Project in preparation, aimed at developing a playful immersive installation for waiting areas in children's and adolescents' clinics: an engaging and absorbing environment that provides motor training opportunities (dexterity exercises) for individuals with fine motor impairments (collaboration with Dr. Ernesto Morales; projected Ph.D. thesis of Walid Baccari)
    Pro(x)thèse (Pro(x)thesis)
    Project underway, in the design phase, aimed at developing a clinical tool that allows people with disabilities to explore sexual/sensual imagery and provides the means to track image choices over time. The tool involves a touch-sensitive smart garment and an immersive visual environment, and we are commissioning photos from a professional photographer (collaboration with Dr. Ernesto Morales and Dr. Frédérique Courtois).
    Auric Space (Espace aurique)
    Currently in the planning stage, this project seeks to provide a training environment for people who have difficulty locating sounds in their immediate environment. We will use a trimodal environment (visual, auditory and haptic) to provide cues to and test for sound location (projected Ph.D. thesis of Afnen Arfaoui)
  3. Movement-based Environments: Immersive environments that explore movement modalities

    Third Skin (Troisième peau)
    Project in planning stages that seeks to extend the work initiated in the project "Second Skin" to provide a variety of vicarious experiences of disability and ability
    Choreographic maps (Cartes chorégraphiques)
    Planned project that seeks to study how dance may contribute to emerging ideas about how children play (collaboration with Dr. Cora McLaren)
  4. Visceral Environments: Immersive environments that explore actions rooted in the engagement of the body's visceral organs

    Viscères II (Viscera II)
    Project in planning stages that aims to test the hypothesis that viscerally learned spaces are more fully understood and remembered than spaces learned by more traditional means
    OrienT (OrienT)
    Planned project that seeks to use a smart garment to help people who get easily disoriented to resituate themselves in their environments (collaboration with Dr. Claude Vincent)
  5. Virtual Environments: Immersive virtual environments

    Virtuarch (Virtuarch)
    Project in the design development stage that seeks to provide adolescents with disabilities, who feel isolated and have a tendency towards depression, with access to an environment and situation that engages them in the creative design of architectural spaces (collaboration with Dr. Ernesto Morales)
  6. Geographic Environments: Immersive environments that encourage an appropriation of geographic space

    Multimodal Online Mapping Interface (Interface multimodal pour la cartographie en ligne)
    Project in the planning stage that seeks to design and implement an online multimodal interface for a mapping application, drawing on cognitive design principles (collaboration with Dr. Mir Mostafavi; projected Ph.D. thesis of Bilel Saadani)
These different projects seek to serve a variety of populations of people with impairments, including the deaf, the deafblind, the blind, those with low vision, people with either gross or fine motor impairments, people with attention deficits and people with intellectual impairments. Furthermore, different projects have different scientific goals and involve distinct methodologies. These include projects that are experimental or involve evaluation and assessment (Hemispheres, Viscera, etc.), projects that have more educational or pedagogical objectives (Second/Third Skin, etc.), projects that seek the development of assistive technologies or that are focused on design issues (OrienT, etc.), projects aimed at developing training environments (Auric Space, etc.), projects aimed at enhancing personal development among those struggling with issues of disability and impairment (Virtuarch, Pro(x)thesis, etc.), and projects that are far more exploratory in nature (Choreographic Maps, etc.).

Sunday, July 18, 2010

Designing Sound Spaces

Much of the work undertaken by this Chair over the past several years has concentrated on visual and haptic experiences, much less on sound experiences. Since the EMIR laboratory (see the previous post) is being equipped with a modern spatialized sound system (albeit not the most sophisticated version of such a system), and since some of the lab's clientele have either visual or auditory challenges, it seemed appropriate to investigate more actively the opportunities for designing useful and interesting sound spaces.

This work was at first hampered by the fact that we had no reliable tools for understanding auditory spaces. Over the past several years we have used several theoretical concepts to structure visual and haptic spaces (including Voronoi diagrams, a mathematical concept called a "panorama", image schemata, etc.), but these could not readily be applied to understanding how sound inhabits spaces. Sound, unlike light, moves around corners and through many quite substantial barriers. On the other hand, sound is usually emitted from so-called "point sources", and we do not work with "sound images" the way we work with visual images, that is, sounds organized spatially in matrices.

This past year I (G. Edwards) was called upon to teach undergraduates in the geomatics program some basic physics of wave propagation and satellite orbits. While I was thinking over the challenges of developing a tool for understanding sound spaces, an idea emerged. This idea, after some development work, proved to resolve the problem of modeling a space in terms of its sounds.

Our model uses a variation of Huygens' principle, which states that when propagating waves encounter an aperture, their movement through the aperture can be simulated by supposing that at each point within the aperture a new (circular) wave is generated with the same phase and intensity as the incident wave. This principle allows one to model the movement of sound waves around corners (light also bends around corners through a similar mechanism, but the size of the deviation is small in the case of light).
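
To make the principle concrete, here is a minimal numeric sketch (our own illustration, not part of the published model): it samples secondary wavelet sources across a slit and sums their contributions at a point behind the wall, showing that energy reaches positions well off the direct line of sight. The wavelength, geometry and sample count are arbitrary illustrative values.

```python
# A minimal numeric sketch of Huygens' principle for a slit aperture.
import numpy as np

WAVELENGTH = 0.5                 # metres; mid-range audible sound
K = 2 * np.pi / WAVELENGTH       # wavenumber

def field_behind_aperture(aperture_pts, field_pt):
    """Sum the secondary circular wavelets emitted at each sample
    point inside the aperture (equal phase and amplitude, as the
    principle states) to estimate the field at field_pt."""
    total = 0j
    for p in aperture_pts:
        r = np.linalg.norm(field_pt - p)
        total += np.exp(1j * K * r) / max(r, 1e-9)  # spherical wavelet
    return abs(total) / len(aperture_pts)

# Aperture: a 1 m slit in a wall at x = 0, sampled at 200 points.
aperture = np.array([[0.0, y] for y in np.linspace(-0.5, 0.5, 200)])
# A point well "around the corner" still receives energy:
print(field_behind_aperture(aperture, np.array([2.0, 3.0])))
```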

To develop a model that can simplify how we understand sound spaces, we look for a "partitioning schema", that is, a way to partition a space (a room, a theatre, a park, etc.) into regions that are auditorily "homogeneous" (invariant), in the sense that the sound experience within a given region of the tiled space is similar throughout that region, but different from one region to the next. This idea is consistent with our earlier visual and haptic models of space.

Figure 1: Spatial partition for sounds before they encounter absorbing barriers


In the model, sound experiences are generated by point sources (A, B, C and D) that propagate into circular regions. Sound sources with low overall gain generate small circles (C and D); sources with large gain generate large circles (A and B). Barriers that the sounds may cross are shown as lines 1 and 2. A path through the space is then introduced (dashed line).
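
As a sketch of this qualitative model (under our own assumptions; the names, coordinates, audibility floor and linear gain-to-radius rule below are invented, not taken from the lab's software), each source reduces to a centre, a gain and a circular zone of influence, and a partition cell is simply the set of sources audible at a point:

```python
# Sketch of the qualitative partition in Figure 1.
from dataclasses import dataclass

@dataclass
class SoundSource:
    name: str
    x: float
    y: float
    gain: float          # overall gain sets the radius of influence

    def radius(self, floor: float = 1.0) -> float:
        # Simple linear model for illustration: the circle's edge is
        # where the source drops below the audibility floor.
        return self.gain / floor

    def audible_at(self, px: float, py: float) -> bool:
        return (px - self.x) ** 2 + (py - self.y) ** 2 <= self.radius() ** 2

# Point sources A-D from the figure (coordinates are made up):
sources = [SoundSource("A", 0, 0, 8.0), SoundSource("B", 10, 2, 7.0),
           SoundSource("C", 4, 6, 2.0), SoundSource("D", 7, 8, 1.5)]

def region(px: float, py: float) -> frozenset:
    """Two positions with the same set of audible sources lie in the
    same cell of the partition."""
    return frozenset(s.name for s in sources if s.audible_at(px, py))
```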

This kind of model is quite different from the results of a numerical simulation of sound propagation within a space, as the latter produces a map of continuously changing values for the sound field. Numerical simulations can be very useful, but they take large chunks of processor time and/or high-end computers to do the job, and they usually also have to make a variety of simplifying assumptions to converge on a result within a reasonable time. Qualitative models such as ours, which segment space into regions, can provide a useful, even powerful, alternative to numerical simulation.

Our qualitative model of sound space, however, differs from our earlier models in that we are required to dynamically update the partitioning each time the sound passes across a barrier that dampens the signal. Hence, as long as the observer is located on the path before the location labeled "alpha", the sound sources shown in the diagram above hold true. Once the observer passes alpha, however, the sound sources "behind" the barrier formed by lines 1 and 2 and their extensions must be modified (see Figure 2 below).

Figure 2: Spatial partition for sounds after they encounter absorbing barriers 1 and 2


At this point, sound B must be "re-sourced" at the location B2, generating a new, much smaller circle. Furthermore, as the observer moves beyond the zone of influence of B2, along a path parallel to barrier #1 and within the initial zone of influence of B, the sound source B must be "re-sourced" to a point (B3) defined by the orthogonal to barrier #1 that passes through the location of the observer. The circle for B3 will usually be much smaller due to the absorptive properties of barrier #1.
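
A minimal sketch of this re-sourcing rule, assuming the barrier is a straight segment and that crossing it multiplies the gain by a fixed absorption factor (the factor, names and coordinates are our own illustrative choices):

```python
# Sketch of "re-sourcing" at B3: place the new source at the foot of
# the perpendicular from the observer to the barrier, with reduced gain.
import numpy as np

def resource_across_barrier(source_gain, barrier_a, barrier_b,
                            observer, absorption=0.25):
    """Return (new_source_position, new_gain) for an observer on the
    far side of the barrier given as the segment barrier_a-barrier_b."""
    a, b, o = map(np.asarray, (barrier_a, barrier_b, observer))
    ab = b - a
    # Foot of the perpendicular from the observer to the barrier line,
    # clamped to the segment so the new source stays on the barrier.
    t = np.clip(np.dot(o - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    foot = a + t * ab
    return foot, source_gain * absorption   # smaller circle at B3

pos, gain = resource_across_barrier(7.0, (6, 0), (6, 10), (9, 4))
print(pos, gain)   # the new source sits on the barrier, facing the observer
```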

Thus, by representing the movement of the observer along a path as a series of partitions that change each time the observer crosses a barrier, we can construct a model that predicts the set of sound experiences an arbitrary observer will receive.

Using this model, we developed software that can compute in real time the relative intensity and the direction from which each sound is heard (that is, the sounds are "re-sourced" at a new location) for one or more arbitrary observers. Using this software, we are able to design a virtual sound space and hence generate a realistic sound experience for a fictional space.
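
The per-observer rendering step can be sketched as follows (assuming, hypothetically, an inverse-square falloff inside each zone of influence; the actual software may weight intensity differently):

```python
# Sketch of the per-observer step: for each (possibly re-sourced)
# sound, compute relative intensity and the bearing from which it is
# heard. A real-time loop would rerun this every frame.
import math

def render(observer, effective_sources):
    """effective_sources: list of (name, x, y, gain) after re-sourcing."""
    ox, oy = observer
    out = []
    for name, sx, sy, gain in effective_sources:
        d = math.hypot(sx - ox, sy - oy)
        intensity = gain / max(d * d, 1e-6)        # inverse-square law
        bearing = math.degrees(math.atan2(sy - oy, sx - ox))
        out.append((name, intensity, bearing))
    return out

print(render((9, 4), [("B3", 6.0, 4.0, 1.75), ("A", 0.0, 0.0, 8.0)]))
```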

We did exactly this as part of a conference presentation of the new theory at a recent meeting in the small town of Las Navas del Marques, in Spain. The conference was organized around the theme "Cognitive and Linguistic Aspects of Geographic Space" and was, in fact, a review of the 20 years of research in this area since an earlier meeting of the same group at the same location in 1990. For this event, in collaboration with Ms. Marie Louise Bourbeau (a longtime collaborator of the Chair) and Mr. René Dupéré, the talented composer who reinvented circus music for the Cirque du Soleil in the 1980s, we developed and implemented a transposition of the voyages of Ulysses into a fictional, virtual sound space that was updated in real time using our software.

The result is a 40-minute presentation comprising a 20-minute "show", an interactive component that demonstrates the real-time nature of the experience, and an explanation of the scientific theory that led to this work.

Tuesday, July 28, 2009

A Toolkit for the EMIR Laboratory

The EMIR Laboratory (Exploration of Media Immersion for Rehabilitation) is now well on its way to becoming a reality. We have a space (albeit still a temporary one, as we shall eventually be moving to a completely refurbished space a few doors down the corridor) and several computers, and we are in the process of acquiring our first major piece, a floor projection system. Combined with our efforts in collaboration with Bloorview Kids Rehab, we will be working with the full range of human sensory perception: visual and audio of course, but also tactile, movement, physiological (heart rate, skin conductance, breathing, etc.), olfactory and even taste, as well as using a brain-computer interface. The goal is to generate immersive experiences (creative, game-like, artistic, etc.) that challenge rehab patients, clinicians and/or researchers to view themselves in new ways.

However, few people have any understanding of what can be achieved or how to go about achieving it. In addition, even our team, which has been exploring multisensory immersive environments for some time, needs good intermediate tools to support its ongoing research, and we are not always aware of what is possible either. With a view both to helping ourselves and to encouraging collaboration and participation in the new laboratory, we have embarked upon the process of developing a "toolkit" for delivering multisensory immersive experiences with a minimum of technical expertise.

Called an Affordance Toolkit (because each tool affords different sets of activities - we are drawing on Gibson's affordance theory for this), the framework consists of matching a set of controller interfaces to a set of viewer modules as a function of particular tasks. Controllers include cameras that are able to read and interpret gestures, tactile screens and pressure carpets able to register different forms of body contact, microphones for recording and interpreting sounds, and sensors for recording physiological or neurological signals. Viewers include 1-, 2- or 4-wall projection, ceiling and floor projection, surround spatialized sound, motor-driven devices - both large and small, scent diffusers, and so on.
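
As a rough illustration of the matching idea (the task names and pairings below are invented examples, not the toolkit's actual tables), the framework can be thought of as a lookup from a task to a controller set and a viewer set:

```python
# Illustrative sketch: each task pairs controller interfaces with
# viewer modules, following the affordance-matching idea.
AFFORDANCES = {
    "mirror_games":   {"controllers": ["gesture_camera", "pressure_carpet"],
                       "viewers": ["wall_projection"]},
    "sound_painting": {"controllers": ["data_gloves", "microphone"],
                       "viewers": ["spatialized_sound"]},
    "balance_task":   {"controllers": ["pressure_carpet"],
                       "viewers": ["floor_projection", "spatialized_sound"]},
}

def setup(task: str):
    """Return the controller and viewer sets afforded by a task."""
    spec = AFFORDANCES[task]
    return spec["controllers"], spec["viewers"]

print(setup("balance_task"))
```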

Tools under development that bridge these two sets of functionalities include the following:

1) Mirror Space - using webcams and full-wall projections where the real-time video images are horizontally flipped to generate a pseudo-mirror image (occupying 1, 2 or all four walls), combined with digital enhancements, virtual objects and annotations added to the projected image, we are able to deliver an environment that supports a variety of tasks, including various physical games (tug of war, zone avoidance, tag, etc.), cognitive games or tasks (draw in the outlines of objects, paint by numbers, etc.) or controlled exercise and/or balance tasks (raise your feet until they hit a gong, move along a virtual line, etc.). A minimal sketch of the mirroring loop appears after this list;

2) Master at Work - using data gloves (or alternate controllers for those unable to use their hands), the user employs gestures and manipulation to create and modify sounds, visual objects, odors, etc., making a "multisensory composition" akin to a musical composition. This might be done in a darkened room, avoiding the use of vision;

3) Room of Presence - similar to the previous tool, this will allow for the materialization of virtual characters that then interact with the user. The user will be able to draw on a bank of virtual characters with a range of pre-determined behaviors, or to create very simple "characters" with new behaviors;

4) Multisensory Logbook - in order to record, annotate, archive and play back the experiences created in the EMIR laboratory, we are working on the development of a multisensory logbook system involving video cameras and microphones as well as a computerized logbook of programmed functions;

5) Social Atlas - using GPS for outdoor environments and RFID tracers combined with other location technologies for interiors, we will provide the ability both to track volunteers or friends and to represent their movements within the EMIR laboratory;

6) Experiensorium - using geographical database structures, we shall be able to provide the possibility of navigating large and complex virtual environments filled with a multitude of sensory experiences. This will be particularly effective in the presence of non-realistic visuals, or no visuals at all: for example, walking through a sketched farmyard while hearing and smelling the animals, feeling their presence through air currents and the occasional sense of touch. Within the Experiensorium, it will be possible to play out games or narrative experiences.
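
For the Mirror Space tool (item 1 above), the core mirroring loop can be sketched in a few lines, assuming a standard webcam and OpenCV; the window name, overlay and key binding are our own illustrative choices:

```python
# Minimal Mirror Space sketch: flip the live webcam feed horizontally
# to produce a pseudo-mirror, then draw a digital enhancement on top.
import cv2

cap = cv2.VideoCapture(0)                 # default webcam
cv2.namedWindow("mirror", cv2.WINDOW_NORMAL)
cv2.setWindowProperty("mirror", cv2.WND_PROP_FULLSCREEN,
                      cv2.WINDOW_FULLSCREEN)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mirrored = cv2.flip(frame, 1)         # horizontal flip = pseudo-mirror
    # Digital enhancements go here, e.g. a virtual target zone:
    cv2.circle(mirrored, (320, 240), 60, (0, 255, 0), 3)
    cv2.imshow("mirror", mirrored)
    if cv2.waitKey(1) & 0xFF == 27:       # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```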

In addition to these macro-tools, we will also be developing and using a range of micro-tools, such as the ability to call up a pop-up menu on the wall-screens using gestures, to partition the visual, audio or tactile spaces, to inject text into these different spaces (e.g. written, audio or braille), and so on.

Each of the proposed tools represents significant research and development challenges, but working on them is both satisfying and engaging. We look forward to reporting on progress on the development of the toolkit over the coming months.