Com Sci 295
Digital Sound Modelling
Spring 2000
Courses in the Department of Computer Science, The University of Chicago
Instructions for Class Project
Open Project (Non)Rules
The class project for Com Sci 295 is an open project. It
is open in two senses.
- There are no restrictions on how you accomplish project
work. You are encouraged to collaborate, share work, use ideas that
you hear from others, find information in books and articles or
figure out for yourself, whatever works. I will evaluate your
project work entirely from your presentation of the insights that
you gain. You may present the work of others, as well as your own,
but you must explain what it means. Of course, you must acknowledge
the source of each idea that you use. You must share your own work
freely with the class.
- You may change the definition of the project. I will provide a
default work schedule, but you are at liberty to vary any parts of
it that you like. In order to get useful credit toward a grade,
though, you must be able to demonstrate the materials that you
develop. I will provide a sound-capable Linux system with all of the
software that comes up in class. If you want any other facilities,
you must take the initiative to arrange them for the final
interview. If you make serious changes in the standard project, it's
a good idea to discuss them with me, to make sure that I will
appreciate their value in the final interview.
Basic Goal of the Project
The goal of the project is to evaluate intuitively the quality of
several common models for synthesizing sound, by simulating a musical
instrument with each of those models, and listening carefully to the
results. With each synthesis model, the idea is to discover its
natural good and bad qualities. So, there's no point in trying to
perfect the instrument simulation with each model. Rather, we want to
find the best simulation that uses the model in an intuitively natural
fashion.
The Simulation Exercises
We will simulate the sound of the flute using 4 different synthesis
models. In each case, we will create a simulation that can play
individual notes of a second or two at a moderate articulation and
dynamic, for each pitch of the chromatic scale over the normal range
of the instrument. The 4 types of synthesis are:
- Wavetable synthesis, using a single sampled period of flute
sound, playing it at various speeds to produce different pitches, and
using a single amplitude envelope for the articulation.
- Additive sinusoidal synthesis with a single amplitude
envelope. This will sound just like step 1, but will prepare us for
step 3.
- Additive sinusoidal synthesis with a separate amplitude envelope
for each partial.
- Additive sinusoidal synthesis with a separate amplitude envelope
for each partial, passing the results through a broadband
filter.
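To make the first model concrete: wavetable synthesis reads one stored period at different speeds to get different pitches. Here is a pure-Python sketch (not part of the course toolchain; a sine table stands in for a real sampled flute period) of interpolating table lookup, in the spirit of Csound's oscili opcode:

```python
import math

SR = 44100  # output sample rate (Hz)

# Hypothetical single-period wavetable: a sine stands in for a
# sampled flute period (the real table would come from a recording).
N = 1024
table = [math.sin(2 * math.pi * k / N) for k in range(N)]

def wavetable_note(freq, dur):
    """Read the table at a speed proportional to freq,
    with linear interpolation between adjacent table entries."""
    out = []
    phase = 0.0
    incr = freq * N / SR          # table samples advanced per output sample
    for _ in range(int(dur * SR)):
        i = int(phase)
        frac = phase - i
        s = table[i % N] * (1 - frac) + table[(i + 1) % N] * frac
        out.append(s)
        phase = (phase + incr) % N
    return out

note = wavetable_note(440.0, 0.1)   # a short A4
```

Changing only `freq` changes the read speed, and hence the pitch; the single stored period fixes the timbre for every note, which is exactly the limitation the listening exercises should expose.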
Project Steps
More detail will appear as the quarter progresses.
- Take a low recorded flute note from the recording project at the
University of Iowa Electronic Music Studios. Inspect it and listen to
it with mxv and any other tools that you find helpful. Find a region
in the middle of the note that seems to represent the overall quality
of the sustained portion of the sound reasonably well.
- Save a single period from the chosen region as a separate file,
and make it into a wavetable in Csound.
- Create a Csound amplitude envelope for the flute note. You may do
this intuitively by eye, or you may use software tools.
- Use Csound to play simulated flute scales.
- Listen and critique. This is the most important part. Describe as
clearly as you can the qualities of flute sound that are captured by
your simulation, and those that are not.
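Whether you draw the envelope by eye or fit it with tools, it reduces to a piecewise-linear function through a few breakpoints, which is what Csound's linseg family of opcodes computes. A pure-Python sketch of that computation (the 50 ms / 0.5 s / 100 ms breakpoints are made-up values, not measured from a flute):

```python
def linseg(start, segments, sr=44100):
    """Piecewise-linear envelope, in the spirit of Csound's linseg:
    begin at `start`, then ramp linearly to each target value over
    the given duration, one (duration, target) pair per segment."""
    env = []
    cur = start
    for dur, target in segments:
        n = max(1, int(dur * sr))
        for i in range(n):
            env.append(cur + (target - cur) * i / n)
        cur = target
    env.append(cur)
    return env

# Attack 0 -> 1 in 50 ms, sustain at 1 for 0.5 s, release to 0 in 100 ms
env = linseg(0.0, [(0.05, 1.0), (0.5, 1.0), (0.1, 0.0)])
```

Multiplying the wavetable output by such an envelope, sample by sample, gives the articulated note.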
Resources
- Csound manual
- Trivial Csound example for you to modify
- Discussion of a similar project step from 1996 and 1999.
- My own preliminary work on the horn.
- I used MixView (mxv) to select a 726-sample period from a low horn
note, resampled it to 1024 samples (using the phony sample rate
44100*1024/726 = 62201), and saved it as a wav file. Then, I used the
"GEN01" routine in Csound to read it in as a wave table, and played a
3-octave scale. Here are my files:
- I refined the amplitude envelope a bit, using the Csound function
linsegr.
Optional variations
- Compare different amplitude envelopes. In particular, try more
and less accurate envelopes, using more and less break points.
- Try wavetables taken from recorded notes at different pitches.
Step 2: additive synthesis with a single amplitude envelope
You should do this step quickly, over the week 24-30 April. It won't
produce much in the way of interesting new sounds; it's mostly just a
programming step to position you for step 3, which has the most
interesting listening exercises.
- Choose one period from a recording of a flute note, and find its
sinusoidal components by taking a Fourier Transform. Lots of
different software tools will do a Fourier Transform for you, but I
found Scilab to be most convenient. You may also try out
data from the
SHARC database.
- Construct a simulation of the flute in Csound by
additive synthesis, adding up some reasonable number of the partials
analyzed by the Fourier Transform above. Ignore the phase of the
partials, and use only the amplitude information in the magnitude of
the Fourier Transform. Use a single envelope to control the attack,
sustain, and decay of each note, just as you did in step 1. Add more
partials until you can't hear the difference.
- Improve the simulation to avoid aliasing, by omitting partials
above about 20,000 Hz. This requires a conditional form, since the
actual frequency of a partial depends on the pitch of the note.
- Add phase information.
- Listen and critique. Given the same recorded
period and the same amplitude envelope, the results of this step
should sound almost the same as the results of step 1. You may be
able to hear some differences due to
- omission of some partials,
- approximation of the amplitudes of partials,
- omission and approximation of the phase of partials,
- aliasing in the wavetable synthesis, which is avoided in the
additive synthesis.
You should also look at the waveforms resulting from different
styles of additive synthesis, and correlate the visible differences
with the audible differences.
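The analysis-resynthesis loop of this step can be sketched in pure Python (illustrative only; the course tools are Scilab and Csound). The DFT of one period gives partial k the amplitude 2|X[k]|/N and phase arg X[k]; resynthesis sums cosinusoids at harmonics of the note's pitch, with the aliasing guard described above. The synthetic test period contains a single known harmonic so the numbers can be checked:

```python
import cmath
import math

def partials_of_period(period, n_partials):
    """DFT of one period: returns (amplitude, phase) for each
    partial k = 1..n_partials, with amplitude 2|X[k]|/N."""
    N = len(period)
    result = []
    for k in range(1, n_partials + 1):
        X = sum(period[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N))
        result.append((2 * abs(X) / N, cmath.phase(X)))
    return result

def additive(freq, partials, dur, sr=44100, use_phase=False):
    """Additive synthesis: sum cosines at harmonics of freq,
    skipping any partial at or above the Nyquist frequency sr/2
    (the conditional that Csound needs to avoid aliasing)."""
    out = []
    for i in range(int(dur * sr)):
        t = i / sr
        s = 0.0
        for k, (amp, ph) in enumerate(partials, start=1):
            if k * freq >= sr / 2:       # would alias: omit it
                continue
            s += amp * math.cos(2 * math.pi * k * freq * t
                                + (ph if use_phase else 0.0))
        out.append(s)
    return out

# Synthetic check: one period holding only harmonic 3 at amplitude 0.5.
N = 256
period = [0.5 * math.sin(2 * math.pi * 3 * n / N) for n in range(N)]
parts = partials_of_period(period, 5)
tone = additive(440.0, parts, 0.01)      # short A4 from those partials
```

Setting `use_phase=True` is the "add phase information" refinement; with it, the resynthesized waveform matches the analyzed period's shape, not just its spectrum.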
Resources
Here is my preliminary work with the horn from 1999. You should be
able to update it with flute data.
- My Csound score file, adsyn_horn.sco.
- My recorded period, horn_period_1.wav. Since the Fourier Transform
function in Scilab does not require the number of samples to be a
power of 2, I did not resample the period.
- A transcript of my Scilab session to compute partial amplitudes and
phases from my horn period.
- My Csound orchestra file, adsyn_horn_1.orc, performing additive
synthesis with 18 partials. The higher partials appear to be so small
that they probably don't have much audible impact. I ignored the phase
information, and used only the magnitudes from the Fourier Transform.
- My Csound orchestra file, adsyn_horn_2.orc, with conditional code to
avoid aliasing problems.
- My Csound orchestra file, adsyn_horn_3.orc, using the measured phase
of each partial.
I structured the Csound orchestra files fairly carefully for
convenient experimentation with different variations. I normalized the
size of the envelope kenv to 1, so that the amplitude value from the
note is mentioned only once. To avoid scaling the amplitude of each
individual partial, I divided by the sum of all the partial amplitudes
(391.34) in the final calculation of aout. You should be able to do
lots of interesting experiments just by changing the numbers in my
orchestra files, and possibly adding or deleting lines to handle more
or fewer partials, but without changing the basic structure in any
way.
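The normalization can be checked with small numbers. The partial amplitudes below are hypothetical, chosen only so that they sum to the 391.34 quoted above; the point is that dividing by the sum makes the peak output track the note's amplitude field alone, no matter how many partials are summed:

```python
# Hypothetical partial amplitudes (not the real horn analysis data),
# constructed to sum to 391.34 like the figure in the text.
amps = [200.0, 120.0, 50.0, 21.34]
total = sum(amps)               # plays the role of 391.34

amp_note = 10000                # amplitude field from the score line
kenv = 1.0                      # envelope normalized to peak 1

# Each partial contributes amp_k / total of the peak, so the summed
# peak is amp_note * kenv regardless of the number of partials.
aout = amp_note * kenv * sum(a / total for a in amps)
```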
Step 3: additive synthesis with separate envelopes
This step is the most complicated and labor-intensive in the
project. It may turn out to use up the rest of our time.
- Study the recorded flute notes, especially the attack portion of
each note, both by listening and by looking at plots of the
waveforms. Try to identify a few interesting notes and some
characteristics of those notes to try to reproduce by additive
synthesis.
- Perform time-frequency analysis of one or more interesting
notes. Since the flute notes are highly periodic, and they are played
with very little variation in pitch, you can get pretty good results
by taking discrete Fourier transforms of individual periods, using
the Scilab functions that I designed. Or, you can try the
Sndan analysis that Joseph Jurek demonstrated in class.
- Synthesize scales of flute-like notes in Csound, using additive
synthesis with a separate amplitude envelope (programmed using
linsegr) for each partial. Start with the minimum number of partials
that produced reasonably satisfying results in step 2.
- Try more vs. fewer partials, and different approximations to the
amplitude envelopes, to discover how much audible difference such
details produce. Try grouping several partials together with a
single envelope. Since some of the attack and decay profiles for the
flute have an exponential appearance, you might get good results using
the expsegr function to generate exponential envelopes.
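The period-by-period analysis described above can be sketched in pure Python rather than Scilab: take the DFT of each successive period and record the partial amplitudes, giving one time-frequency frame per period. The synthetic test note below has a fundamental that decays exponentially, the shape expsegr is suited to:

```python
import cmath
import math

def partial_amps(period, n_partials):
    """Amplitude 2|X[k]|/N of partials k = 1..n_partials of one period."""
    N = len(period)
    return [2 * abs(sum(period[n] * cmath.exp(-2j * math.pi * k * n / N)
                        for n in range(N))) / N
            for k in range(1, n_partials + 1)]

def track_partials(signal, period_len, n_partials):
    """Crude time-frequency analysis for a nearly periodic note:
    one list of partial amplitudes per successive period."""
    frames = []
    for start in range(0, len(signal) - period_len + 1, period_len):
        frames.append(partial_amps(signal[start:start + period_len],
                                   n_partials))
    return frames

# Synthetic note: 20 periods of a sine whose amplitude decays
# exponentially from period to period.
N, periods = 128, 20
sig = [math.exp(-0.05 * (i // N)) * math.sin(2 * math.pi * (i % N) / N)
       for i in range(N * periods)]
frames = track_partials(sig, N, 3)
```

Plotting one partial's amplitude across frames gives exactly the breakpoint data you would hand to linsegr or expsegr for that partial's envelope.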
Resources
- My Csound score file, adsyn_horn.sco, the same as the one I used in
step 2.
- My recorded horn note, horn_mf_B2.wav.
- A transcript of my Scilab session to compute time-frequency analyses
from my horn note.
- Scilab function definitions to help with your time-frequency
analysis.
- My Csound orchestra file, adsyn_horn_se_1.orc, performing additive
synthesis with 6 partials, using a separate amplitude envelope for
each partial, and leaving all the phases at the default.
Optional variations
- Vary the frequencies of partials, as well as their amplitudes,
in the early stages of the attack.
- Add a small amount of noise at the initiation of the note.
- Vary the decay characteristics of different partials.
Step 4: add a filter
Optional step 5: add random jitter for liveness, or add breath noise
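One way to read the breath-noise suggestion: mix a short burst of fading noise into the attack of each note. A pure-Python sketch (the 30 ms duration and 0.05 amplitude are guesses to be tuned by ear, not measured flute values):

```python
import random

def breath_noise(dur_noise, dur_total, noise_amp=0.05, sr=44100, seed=1):
    """A burst of uniform noise at the start of the note, fading
    linearly to zero over dur_noise seconds, to suggest breath
    at the attack. Mix this into the synthesized note."""
    rng = random.Random(seed)   # seeded for reproducibility
    out = []
    for i in range(int(dur_total * sr)):
        t = i / sr
        fade = max(0.0, 1.0 - t / dur_noise)
        out.append(noise_amp * fade * (2 * rng.random() - 1))
    return out

noise = breath_noise(0.03, 0.1)   # 30 ms of noise in a 100 ms note
```

Random jitter for liveness could be sketched the same way, by adding a small seeded random walk to each partial's frequency or amplitude instead of to the output signal.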
Maintained by Michael J. O'Donnell, email:
odonnell@cs.uchicago.edu
Last modified: Mon Apr 24 21:20:01 CDT 2000