This work was at first hampered by the fact that we had no reliable tools for understanding auditory spaces. Over the past several years we have used several theoretical concepts to structure visual and haptic spaces (including Voronoi diagrams, a mathematical concept called a "panorama", image schemata, etc.), but these could not be readily applied to understanding how sound inhabits spaces. Sound, unlike light, moves around corners and through many quite substantial barriers. On the other hand, sound usually emanates from so-called "point sources", and we do not work with "sound images" the way we work with visual images - that is, with sounds organized spatially in matrices.
This past year I (G. Edwards) was called upon to teach undergraduates in the geomatics program some basic physics of wave propagation and satellite orbits. While I was thinking over the challenges of developing a tool for understanding sound spaces, an idea emerged. After some development work, this idea proved to resolve the problem of modeling a space in terms of its sounds.
Our model uses a variation of the principle known as "Huygens' Principle". Huygens' principle states that when propagating waves encounter an aperture, their movement through the aperture can be simulated by supposing that at each point within the aperture a new (circular) wave is generated with the same phase and intensity as the incident wave. This principle allows one to model the movement of sound waves around corners (light also moves around corners via a similar mechanism, but the size of the deviation is small in the case of light).
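As an illustration, the short Python sketch below approximates the field behind an aperture by summing secondary circular wavelets placed across it, exactly in the spirit of the principle. The geometry, the wavelength and the simple two-dimensional wavelet form are assumptions made for the example, not part of our model.

```python
# A minimal sketch of Huygens' principle: the field beyond an aperture is
# approximated by summing circular wavelets emitted from points spanning the
# aperture, each carrying the phase and amplitude of the incident wave.
# All names and parameter values here are illustrative assumptions.
import numpy as np

def field_beyond_aperture(aperture_points, obs_points, wavelength):
    """Sum secondary circular wavelets from each aperture point at each observer."""
    k = 2 * np.pi / wavelength  # wavenumber
    field = np.zeros(len(obs_points), dtype=complex)
    for src in aperture_points:
        r = np.linalg.norm(obs_points - src, axis=1)  # distance to each observer
        field += np.exp(1j * k * r) / np.sqrt(r)      # 2-D wavelet: phase k*r, ~1/sqrt(r) decay
    return np.abs(field)  # intensity pattern, including diffraction "around the corner"

# Example: a 1 m slit in a wall at y = 0, sampled with 50 secondary sources,
# observed along a line 3 m behind the wall.
aperture = np.column_stack([np.linspace(-0.5, 0.5, 50), np.zeros(50)])
observers = np.column_stack([np.linspace(-5, 5, 11), np.full(11, 3.0)])
print(field_beyond_aperture(aperture, observers, wavelength=0.7))
```

Observers well off to the side of the slit still receive a non-zero field, which is the "moving around corners" behaviour the principle captures.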
To develop a model that can be used to simplify how we understand sound spaces, we look for a "partitioning schema", that is, a way to partition a space (a room, a theatre, a park, etc.) into regions that are auditorily "homogeneous" (invariant), in the sense that the sound experience within a given region of the tiled space is similar throughout that region but different from one region to the next. This idea is consistent with our earlier visual and haptic models of space.
In the model, sound experiences are generated by point sources (A, B, C and D) that propagate into circular regions. Sound sources with low overall gain generate small circles (C and D); sources with large gain generate large circles (A and B). Barriers that the sounds may cross are shown as lines 1 and 2. A path through the space is then introduced (dashed line).
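A minimal sketch of this setup, with invented coordinates, gains and barrier endpoints and an assumed gain-to-radius mapping, shows how different points along a path fall inside different subsets of the sources' circular zones of influence:

```python
# Sketch of the partitioning idea. The mapping from a source's gain to the
# radius of its circular zone of influence, and all coordinates below, are
# assumptions made for the example.
from dataclasses import dataclass
import math

@dataclass
class Source:
    name: str
    x: float
    y: float
    gain: float                    # overall gain of the source
    def radius(self):
        return 10.0 * self.gain    # assumed: zone of influence scales with gain

sources = [Source("A", 0, 0, 3.0), Source("B", 20, 5, 2.5),
           Source("C", 8, 12, 0.8), Source("D", 15, -4, 0.6)]
barriers = [((5, -10), (5, 10)),   # barrier 1
            ((5, 10), (25, 10))]   # barrier 2

def audible_at(px, py):
    """Sources whose circular zone of influence contains the observer position."""
    return [s.name for s in sources
            if math.hypot(px - s.x, py - s.y) <= s.radius()]

# Two points on a path can fall in different regions of the partition.
print(audible_at(2, 1))    # region where A and B are heard
print(audible_at(14, 11))  # a different region: A, B and C are heard
```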
This kind of model is quite different from the results of a numerical simulation of sound propagation within a space, as the latter produces a map of continuously changing values for the sound field. Numerical simulations can be very useful, but they take large chunks of processor time and/or high-end computers to "do the job", and they usually also have to make a variety of simplifying assumptions to converge on a result within a reasonable time. The use of qualitative models such as ours, which segment space into regions, can provide a useful, even powerful, alternative to numerical simulation.
Our qualitative model of sound space, however, differs from our earlier models in that we are required to dynamically update the partitioning each time the sound passes across a barrier that dampens the signal. Hence, as long as the observer is located on the path prior to the location labeled "alpha", the sound sources hold true as shown in the above diagram. Once the observer passes alpha, however, the sound sources "behind" the barrier formed by lines 1 and 2 and their extensions must be modified (see Figure below).
At this point, the sound B must be "re-sourced" at the location B2, generating a new, much smaller circle. Furthermore, as the observer moves beyond the zone of influence of B2, along a path parallel with barrier #1 and within the initial zone of influence of B, the sound source B must be "re-sourced" to a point (B3) defined by the orthogonal to barrier #1 that passes through the location of the observer. Usually, the size of the circle for B3 will be much smaller due to the absorptive properties of barrier #1.
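One way the re-sourcing to B3 could be computed is sketched below, assuming the re-sourced point is the foot of the perpendicular from the observer onto barrier #1 and that the barrier's absorption is modeled by a fixed transmission factor; the coordinates and the factor are illustrative only.

```python
# Sketch of the "re-sourcing" step. The choice of the perpendicular foot as
# the re-sourced location and the fixed transmission factor are assumptions.
import numpy as np

def resource_across_barrier(source_pos, source_gain, barrier, observer,
                            transmission=0.3):
    """Return the re-sourced position and attenuated gain behind a barrier.

    barrier: (p0, p1) endpoints of the absorbing barrier segment.
    transmission: assumed fraction of the gain surviving passage (0..1).
    """
    p0, p1 = np.asarray(barrier[0], float), np.asarray(barrier[1], float)
    obs = np.asarray(observer, float)
    seg = p1 - p0
    # Orthogonal projection of the observer onto the barrier, clamped to the segment.
    t = np.clip(np.dot(obs - p0, seg) / np.dot(seg, seg), 0.0, 1.0)
    new_pos = p0 + t * seg                 # e.g. the point labeled B3 in the text
    new_gain = source_gain * transmission  # much smaller circle behind the barrier
    return new_pos, new_gain

# Observer walking parallel to barrier 1, on the far side of it relative to source B.
pos, gain = resource_across_barrier((20, 5), 2.5, ((5, -10), (5, 10)), (3, 4))
print(pos, gain)   # re-sourced on the barrier at (5, 4), with reduced gain
```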
Thus by representing the movement of the observer along a path as a series of partitions that change every time the observer crosses a barrier, we can construct a model that predicts the set of sound experiences an arbitrary observer will receive.
Using this model, we developed software that can compute in real time the relative intensity and the direction from which each sound is heard (that is, the sounds are "re-sourced" at a new location) for one or more arbitrary observers. With this software, we are able to design a virtual sound space and hence generate a realistic sound experience for a fictional space.
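The following sketch suggests what one frame of such a computation might look like; it is not our software. For illustration it uses the crossing point of the source-observer segment as the re-sourced location (in the spirit of B2 above), a fixed transmission factor for the barrier's damping, and an assumed inverse-square falloff for intensity.

```python
# Sketch of a per-frame computation: for one observer, each source is
# re-sourced if a barrier lies between them, and a direction and relative
# intensity are derived from the (possibly re-sourced) position.
import math

def seg_intersection(a, b, c, d):
    """Intersection point of segments a-b and c-d, or None if they do not cross."""
    (ax, ay), (bx, by), (cx, cy), (dx, dy) = a, b, c, d
    denom = (bx - ax) * (dy - cy) - (by - ay) * (dx - cx)
    if denom == 0:
        return None   # parallel segments
    t = ((cx - ax) * (dy - cy) - (cy - ay) * (dx - cx)) / denom
    u = ((cx - ax) * (by - ay) - (cy - ay) * (bx - ax)) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (ax + t * (bx - ax), ay + t * (by - ay))
    return None

def heard_sources(observer, sources, barriers, transmission=0.3):
    """Direction (radians) and relative intensity of each source at the observer."""
    result = {}
    for name, (sx, sy, gain) in sources.items():
        pos, g = (sx, sy), gain
        for barrier in barriers:
            hit = seg_intersection((sx, sy), observer, *barrier)
            if hit is not None:                 # barrier between source and observer:
                pos, g = hit, g * transmission  # re-source on the barrier, damp the gain
        dx, dy = pos[0] - observer[0], pos[1] - observer[1]
        dist = math.hypot(dx, dy) or 1e-6
        result[name] = {"direction": math.atan2(dy, dx),  # where the sound seems to come from
                        "intensity": g / dist ** 2}       # assumed inverse-square falloff
    return result

print(heard_sources((3, 4),
                    {"A": (0, 0, 3.0), "B": (20, 5, 2.5)},
                    [((5, -10), (5, 10))]))
```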
We did exactly this as part of a conference presentation of the new theory at a recent meeting in the small town of Las Navas del Marques, in Spain. The conference was organized around the theme "Cognitive and Linguistic Aspects of Geographic Space" and was, in effect, a review of 20 years of research in this area since an earlier meeting of the same group at the same location in 1990. For this event, in collaboration with Ms. Marie Louise Bourbeau (longtime collaborator of the Chair) and Mr. René Dupéré, the talented composer who reinvented circus music for the Cirque du Soleil in the 1980s, we developed and implemented a transposition of the Ulysses Voyages into a fictional and virtual sound space that was updated in real time using our software.
The result is a 40-minute presentation including a 20-minute "show", an interactive component that demonstrates the real-time nature of the experience, and an explanation of the scientific theory leading to this work.