Sonification is a tool for transforming any type of information into sound: 2D images, traffic data in communication networks, stock exchange quotes, … can all be captured in sound.
It is present in our daily lives, especially on the computer, from alerts for incoming or outgoing email and the sounds that accompany opening or closing a file, … to the creation of audio games (computer games with no visual component, only sound).
Sonification is a young discipline that integrates a wide variety of professional fields.
As in other media, where digitization produced a radical transformation, the 1990s were key to the conceptualization of the phenomenon.
The general consensus defines sonification as the technique of using data as a source to generate sounds through their transformation; that is, it is a representation of data by means of sound. Sonification would therefore be the counterpart of “visualization”.
For a task to be called sonification, it has to meet several requirements, of which the most relevant are:
a) The sound reflects objective properties or relationships of the data used.
b) The transformations are systematic, i.e. there is a precise definition of how the data are operated on to produce the sound.
c) Sonification is reproducible: given the same data and identical transformation operations, the result must be structurally identical.
Types of sonification
Sonification makes use of multiple methods or techniques. Among them, the most important are audification, parametric mapping and model-based sonification; complex cases generally combine several of these techniques.
Audification is the prototypical and most direct form of sonification: it consists of translating any kind of one-dimensional signal into a function of amplitude over time.
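As a sketch of the idea (in Python, with a hypothetical data series; the text describes no implementation), audification treats each value of a one-dimensional signal directly as an instantaneous amplitude:

```python
import math

def audify(data, repeat=1):
    """Audify a one-dimensional signal: each data value becomes an
    instantaneous amplitude, linearly normalized to [-1.0, 1.0]."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    samples = [2.0 * (x - lo) / span - 1.0 for x in data]
    return samples * repeat  # looping makes a short series audible longer

# Hypothetical seismic-style trace: a decaying oscillation.
trace = [math.sin(0.05 * n) * math.exp(-0.001 * n) for n in range(4000)]
wave_samples = audify(trace)
```

Played back at a chosen sample rate (e.g. written as PCM frames with the standard `wave` module), the data are heard directly as a waveform, without any intermediate mapping.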
The earliest and best-known audification devices are:
- Telephone (Bell, 1876)
- Phonograph (Edison, 1877)
- Radiotelegraphy (Marconi, 1895)
all devices that transformed acoustic waves into electrical signals and vice versa.
Audification was also used to listen to scientific measurements, as with the Geiger counter, developed in 1908.
The measurement itself produces a sound: each click corresponds to a radioactive decay, and from the character of the sound the intensity of the radiation can be deduced, thus listening to what cannot be seen.
There are articles from the end of the 19th century describing experiments with the so-called “muscular telephone”, in which the frequencies of the nerve currents of muscle cells were made audible through a loudspeaker.
Any type of elastomechanical wave, which follows the same physical laws as acoustic waves, is an area of special interest for audification. This also includes making the imperceptible perceptible, such as hearing the ultrasound calls of bats by transposing their frequencies into the range audible to humans.
As an aid to scientific work, audification has been used in very diverse fields: seismology (earthquake and seismic survey data), medicine (electroencephalograms), quantum oscillations of atoms, stock exchange quotations, and we have all heard it when listening to a fax machine or a modem.
Audification admits all kinds of transformations, such as those used in the musique concrète developed by Pierre Schaeffer: manipulation of the time axis, transpositions, reversed playback, application of filters, …
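Two of these transformations are easy to sketch on raw sample lists. The following minimal Python functions (the names and the nearest-neighbour resampling are illustrative choices, not from the text) show reversed playback and transposition by resampling:

```python
def reverse(samples):
    """Reversed playback: the signal played backwards."""
    return samples[::-1]

def transpose(samples, factor):
    """Transpose by resampling: factor > 1 raises the pitch (and
    shortens the sound), factor < 1 lowers it (and stretches it).
    Nearest-neighbour resampling keeps the sketch dependency-free."""
    n = int(len(samples) / factor)
    return [samples[min(int(i * factor), len(samples) - 1)] for i in range(n)]
```

For example, lowering a bat call recorded via audification by a factor of 8 (`transpose(samples, 1 / 8)`) would bring a 40 kHz signal down to 5 kHz, well inside the audible range.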
Parametric mapping associates changes in a dimension of the data with changes in an acoustic dimension. Since sound has different magnitudes and properties, this allows the representation of multidimensional data.
A simple example is monitoring the change in water temperature in a kettle by associating it with frequency.
The choice of parameterization is practically unlimited, and the aim is to find a compromise between being intuitive, not annoying (unless that is precisely the effect sought), and precise.
More information can be added using other elements of sound, and in more complex cases all kinds of variables can represent different properties of the data, playing with frequency, amplitude, timbre, time, envelopes, …
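A minimal sketch of the kettle example in Python (the temperature readings and the 220–880 Hz target range are hypothetical choices, not from the text): one data dimension, temperature, is mapped linearly onto one acoustic dimension, frequency.

```python
import math

def map_value(value, v_lo, v_hi, f_lo=220.0, f_hi=880.0):
    """Linear parametric mapping: one data dimension -> frequency (Hz)."""
    t = (value - v_lo) / (v_hi - v_lo)
    return f_lo + t * (f_hi - f_lo)

def tone(freq, duration=0.2, sample_rate=8000, amp=0.5):
    """A short sine tone for one mapped data point."""
    n = int(duration * sample_rate)
    return [amp * math.sin(2 * math.pi * freq * i / sample_rate)
            for i in range(n)]

# Hypothetical kettle readings (deg C) mapped onto two octaves:
temps = [20, 40, 60, 80, 100]
freqs = [map_value(t, 0, 100) for t in temps]
audio = [s for f in freqs for s in tone(f)]  # rising pitch as water heats
```

Extending the mapping to more dimensions is a matter of adding parameters to `tone`: amplitude for a second variable, timbre or envelope for a third, and so on.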
One of the most recent techniques is model-based sonification. Instead of associating data parameters with sounds and “playing” them, here a global virtual model of the data is created that produces acoustic responses to the user's input. The model is thus a virtual instrument with which the user interacts and plays in order to explore and understand, much like an interactive display. Sonification in this case is the reaction of the model (abstracted from the data) to the actions of the user.
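One way to sketch this idea in Python (an illustrative toy model, not a standard implementation): each data point defines a virtual resonator, and a user "tap" of a given strength excites them all, producing a decaying acoustic response.

```python
import math

def excite(data, strength=1.0, sample_rate=8000, duration=0.5):
    """Toy model-based sonification: each data point becomes a virtual
    resonator (a decaying sine); a user action of a given strength
    excites them all, and the response is their superposition."""
    n = int(duration * sample_rate)
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0
    # The data values set the resonator frequencies (here 200-1000 Hz).
    freqs = [200.0 + 800.0 * (x - lo) / span for x in data]
    out = []
    for i in range(n):
        t = i / sample_rate
        decay = math.exp(-4.0 * t)  # the response dies away after the tap
        s = sum(math.sin(2 * math.pi * f * t) for f in freqs)
        out.append(strength * decay * s / len(freqs))
    return out

# A "tap" on a model built from four hypothetical data points:
response = excite([1.2, 3.4, 2.1, 5.0], strength=0.8)
```

The sound is not a playback of the data but the model's reaction to the user: tapping harder, or tapping a model built from different data, yields a different acoustic answer.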
Sonification is therefore a very active discipline, with applications ranging from the scientific exploration of data, through pedagogical uses, to artistic ones.
In these last two fields it is necessary to distinguish sounds merely based on or inspired by sonification from those that can truly be defined as sonification (following the criteria described above).
Sonification has been a source of inspiration and a tool for artistic creation (sometimes called musification); examples include the sonification of data sets such as DNA sequences, or of time series such as solar activity, tides or historical meteorological data. Examples are the works of Marty Quinn, such as “Seismic Sonata” or “Climate Symphony”.
A very interesting recent genre is “live coding” or “live programming”, in which the artist programs on stage, producing algorithms that generate sounds and images: a show that is at once sonic and visual.