Speech and music, effable and ineffable:
Participants: Kristin Butchatsky, Stela Solar,
Emery Schubert and Joe Wolfe
Can we obtain insight into the big questions about music
by looking at the coding? Speech and music both use categorical
perception, and thus have the advantages of digital communication
and signal processing. But they digitise in complementary
ways. In speech, phonemes are created by digitising aspects
of timbre, whereas pitch and rhythm are analog variables.
In music, pitch and rhythm are digitised, whereas timbre is
the carrier signal.
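As a rough illustration of what "digitising pitch" means (this sketch is not from the session itself; the function name and the A4 = 440 Hz reference are assumptions), categorical pitch perception can be modelled as snapping a continuous frequency onto the twelve-note equal-tempered grid:

```python
import math

def quantize_pitch(freq_hz, a4_hz=440.0):
    """Toy model of musical pitch digitisation: snap a continuous
    frequency to the nearest note of 12-tone equal temperament."""
    # Distance from A4 in semitones, as a continuous (analog) value
    semitones = 12 * math.log2(freq_hz / a4_hz)
    nearest = round(semitones)          # the categorical (digital) step
    return a4_hz * 2 ** (nearest / 12)  # back to a frequency in Hz

# A slightly sharp 445 Hz tone is heard, and notated, as A4
print(round(quantize_pitch(445.0), 1))  # → 440.0
```

The rounding step is the digitisation: small tuning deviations are discarded, which is what gives musical pitch the noise-resistance of a digital code, whereas timbre in music remains a continuous carrier.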
This plenary session follows a technique borrowed from
Galileo: the arguments are presented as a discussion
or debate among people representing the different views. Extensive
musical and phonetic examples are used. Words and
music by Joe.
- The complete transcript of the session is available in .pdf format,
including the music (390k).
- The surtitles are also available as a PowerPoint.
- A sound recording is available in .mp3 format. Warning: 15 Mbyte.
Other photographs and a video were made. They may be added
to this site in the future.
For those interested in pursuing further the questions about
coding, information content and categorical perception in
music, but presented in more orthodox formats, some scientific
papers discussing these issues may be downloaded:
- Wolfe, J. (2003) "From
ideas to acoustics and back again: the creation and analysis
of information in music". Proc. Eighth Western Pacific
Acoustics Conference, Melbourne. (C. Don, ed.) Aust.
Acoust. Soc., Castlemaine, Aust. (Plenary Lecture.)
- Wolfe, J. (2002) "Speech
and music, acoustics and coding, and what music might be
'for'". Proc. 7th International Conference on Music
Perception and Cognition, Sydney. (K. Stevens, D. Burnham,
G. McPherson, E. Schubert and J. Renwick, eds.) pp. 10-13.