Digital Filters

Peter Billam


Hobart, October 2017

Once upon a time,

long long ago

and far far away,

in a magical land,

called Switzerland,

in the beautiful old town of Biel,

or Bienne in French - it's completely bilingual -

there lived


a young electronic engineer,

who worked for a firm called Filtek, and designed audio electronics for the German radio and TV broadcast studio market, including developing the circuitry,
but also laying out the printed-circuit boards and designing the front panels.
In that market, the mixing desks were built up of pluggable modules, which were very strictly specified and all intercompatible, electrically and mechanically.
The desks, and the rest of the studios, were built by Telefunken and Siemens.

So Filtek made replacement filter, compressor, expander, and fader modules -
drop-in replacements for the Telefunken and Siemens originals,
but with more modern circuitry, and lots more knobs on the front panel.

His bookshelf was full of books and papers,
including some classics of filter theory: Temes and Mitra (1973), and Daniels (1974)
and Moschytz and Horn (1981). I used these books a great deal.

The book on the right, Introduction to Digital Filtering, by Bogner and Constantinides, I found more confusing. But we never needed it at the time, and I only made a serious attempt to understand it forty years later, in July 2017.

In analog filters I became very fluent, and even published two papers in learned journals.
The first of these, for the JAES, is good, and launched a whole field of study, though the second, for the IEEE, is seriously flawed.
Both are available on-line at

Analog filters use resistors, capacitors, and inductors and/or op-amps,
and they are analysed by the Fourier Transform or, better, the Laplace Transform of their input and output signals.

Starting from the n-th order differential equation of the network and applying the Laplace transform, we get the Transfer Function T(s) as the ratio of two polynomials in s.
Because we're using complex numbers, these polynomials can always be factorised into their poles and zeros.
Because the final answer must be real, if these poles p_j or zeros z_i are not real, they have to occur in complex-conjugate pairs.
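As a small illustration (my own, not from the texts cited here), take the quadratic case: a second-order denominator with real coefficients whose discriminant is negative, so its two poles form a complex-conjugate pair, and their sum and product come out real again:

```python
import cmath

# A hypothetical 2nd-order denominator with real coefficients: s^2 + s + 1.
# Its roots come from the quadratic formula; because the discriminant is
# negative, they form a complex-conjugate pair.
a2, a1, a0 = 1.0, 1.0, 1.0
disc = cmath.sqrt(a1 * a1 - 4 * a2 * a0)
p1 = (-a1 + disc) / (2 * a2)
p2 = (-a1 - disc) / (2 * a2)

print(p1, p2)                 # -0.5+0.866...j and -0.5-0.866...j
assert p1 == p2.conjugate()   # a conjugate pair, as required for real T(s)

# Multiplying the factored form back out recovers the real coefficients:
print((p1 * p2).real)         # = a0/a2 = 1
print((p1 + p2).real)         # = -a1/a2 = -1
```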

Different polynomials give different frequency-responses.
The zeros tend to determine whether it's a lowpass, highpass, or bandpass,
and the poles determine the filter "type".
Favourite types are Butterworth, Chebyshev and Bessel.
Butterworth gives maximally-flat frequency-response in the passband,
Chebyshev gives a faster roll-off but some ripple in the passband, and
Bessel gives a very slow roll-off but maximally-flat delay in the passband.
Also useful are the Inverse-Chebyshev, Elliptical (or Cauer), and Legendre types, which I haven't implemented.
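For example, the poles of a normalised Butterworth lowpass lie equally spaced on the left half of the unit circle in the s-plane. This short Python sketch (my illustration, not part of the digitalfilter module) computes them directly from that textbook formula:

```python
import cmath
import math

def butterworth_poles(n):
    """Poles of a normalised (cutoff = 1 rad/s) Butterworth lowpass:
    equally spaced on the left half of the unit circle in the s-plane,
    p_k = exp(j*pi*(2k + n - 1)/(2n)) for k = 1..n."""
    return [cmath.exp(1j * math.pi * (2 * k + n - 1) / (2 * n))
            for k in range(1, n + 1)]

for p in butterworth_poles(3):
    print(p)        # all have |p| == 1 and Re(p) < 0 (stable)
```

For odd orders one pole is real (at s = -1); the rest pair off as complex conjugates, which is exactly the conjugate-pair property mentioned above.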

Then forty years later, in July 2017, I got an idea for synthesising Pink Noise . . .
White Noise is what you get if every separate sample is a Gaussian random number with mean zero, so it's very simple to generate. It has a constant, flat power-spectrum per unit of frequency, so in the audio range up to 20 kHz half the power lies in the top octave, 10 kHz to 20 kHz. We humans perceive it as a harsh high-frequency noise, because we don't hear frequency linearly, we hear it logarithmically - in octaves, for example.
Red Noise is also easy to generate: you take the running total of Gaussian random numbers, which wanders up and down in what's called a random walk. It has a 1/f² power spectrum, and we hear it as a more low-frequency rumbling sound.
Pink Noise is in between, and has a 1/f power spectrum. 1/f spectra occur in many natural processes, like the size of the floods on the river Nile, or in noise generated by transistor junctions, and also in art, like the melodic shapes of Bach's Brandenburg Concerti. And yet there is no known exact algorithm for generating Pink Noise!
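The white and red generators really are that simple. A Python sketch, illustrative only, using the standard library's Gaussian generator:

```python
import random

random.seed(1)
N = 1000

# White noise: independent Gaussian samples with mean zero.
white = [random.gauss(0.0, 1.0) for _ in range(N)]

# Red (Brownian) noise: the running total of white noise - a random walk.
red, total = [], 0.0
for w in white:
    total += w
    red.append(total)
```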

So I thought: why not generate ten white noises, filter the first at 20 kHz, the second at 10 kHz, the third at 5 kHz, and so on in octaves all the way down, then mix them in some proportion?
It's an approximation, but it should be realistic . . .
So I decided to investigate Digital Filters again.
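As a toy sketch of that octave-mixing idea (my own simplification, not the digitalfilter implementation: the one-pole filters and the equal mixing weights here are assumptions):

```python
import math
import random

random.seed(0)
FS = 44100.0    # sample rate, samples per second

def one_pole_lowpass(cutoff_hz):
    """Return a stateful one-pole lowpass filter function (a closure)."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / FS)
    y = 0.0
    def step(x):
        nonlocal y
        y += alpha * (x - y)
        return y
    return step

# Ten octave-spaced lowpass filters: 20 kHz, 10 kHz, 5 kHz, ... ~39 Hz.
filters = [one_pole_lowpass(20000.0 / 2 ** i) for i in range(10)]

def pinkish(n):
    """Mix n samples of ten independently-filtered white noises."""
    out = []
    for _ in range(n):
        out.append(sum(f(random.gauss(0.0, 1.0)) for f in filters))
    return out

samples = pinkish(1000)
```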

Digital filters work numerically, on regularly sampled data, like audio data at 44100 samples per second, or scientific data at any other rate.
When you calculate the new current output-value, you have access to
1) the current input-value and the input-values from the recent past
2) the output-values from the recent past
3) if you can accept some latency, then also input-values from the near future
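A second-order section combines exactly those values. Here is a minimal Python sketch of one (my illustration), using the b-numerator / a-denominator convention of Rorabaugh and sox, with a0 normalised to 1:

```python
def biquad(b0, b1, b2, a1, a2):
    """Second-order section, a0 normalised to 1 (sox/Rorabaugh convention):
       y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]"""
    x1 = x2 = y1 = y2 = 0.0          # the recent past, initially silent
    def step(x):
        nonlocal x1, x2, y1, y2
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x               # shift the remembered inputs
        y2, y1 = y1, y               # shift the remembered outputs
        return y
    return step

# A pass-through section (b0=1, everything else 0) returns its input:
f = biquad(1.0, 0.0, 0.0, 0.0, 0.0)
print([f(v) for v in (1.0, 2.0, 3.0)])   # [1.0, 2.0, 3.0]
```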

So the crucial operation is the one-sample time-shift,
which is called z.
The previous sample is z⁻¹, the current sample is z⁰, the next sample is z⁺¹, and so on.
So instead of a Fourier Transform to the frequency domain, digital filter theory works with a z⁻¹ transform to the z domain.
Much of the theory concerns converting the desired frequency-response from its frequency-domain, into the parameters you need to multiply the time-samples by in the z⁻¹ domain.
1) find the normalised frequency-domain poles for a lowpass filter with cut-off frequency=1
2) then convert the Laplace transform to its equivalent z⁻¹ transform
3) and calculate the numerator to morph the lowpass into your desired shape, and denormalise it to the desired cut-off frequency.
See Rorabaugh pp.151, 156, 189 and Constantinides p.56.
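One detail of step 2: the bilinear transform warps the frequency axis, so the analogue design frequency is usually "prewarped" first. A sketch of the standard prewarping formula (my illustration, not the module's code):

```python
import math

def prewarp(f_hz, fs_hz):
    """Analogue frequency (rad/s) to use in the s-domain design so that,
    after the bilinear transform s = (2/T)*(1 - z^-1)/(1 + z^-1),
    the digital filter's response lands at f_hz.
    Standard frequency-prewarping formula: w = (2/T)*tan(pi*f*T)."""
    T = 1.0 / fs_hz
    return (2.0 / T) * math.tan(math.pi * f_hz * T)

# At frequencies well below fs/2 the warping is negligible:
w = prewarp(1000.0, 44100.0)
print(w, 2 * math.pi * 1000.0)   # nearly equal; warping grows near fs/2
```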

To quote: "The design of digital filters is a deceptively complex topic. Although filters are easily understood and calculated, the practical challenges of their design and implementation are significant and are the subject of much advanced research."

In the literature I have, the notation is often confusing. For example, in Temes/Mitra p.152 the general z⁻¹ transfer-function is given with the parameter A₂ in the numerator equal to zero.
Constantinides sometimes uses u and v to mean the real and imaginary parts of the frequency ω, and sometimes to mean the input and output signals of a digital filter; Rorabaugh, however, (p.156) uses X(z) and Y(z) to mean the input and output signals of a digital filter.
Rorabaugh sometimes uses q to mean the quality of a filter-section, and sometimes to mean the location of a zero in the z⁻¹ plane.
Constantinides sometimes uses a and b to mean the coefficients of the transfer function in the frequency-domain, and α and β to mean the coefficients of the transfer function in the z⁻¹ domain, but he often uses a and b to mean the coefficients of the transfer function in the z⁻¹ domain.
Or, comparing Moschytz p.5 with Daniels p.3, the meanings of a and b have been swapped. More importantly, in the z domain, comparing Constantinides p.36 with Rorabaugh p.156, the meanings of a and b have been swapped, as have the meanings of G(z) and H(z).
In the   sox biquad b0 b1 b2 a0 a1 a2   option, the b* are the numerator and the a* the denominator coefficients, agreeing with Rorabaugh; so I will, sometime, change my code over to use that "standard".

The digitalfilter module API offers only one function: new_digitalfilter(), which takes only one argument: a table of the options that specify the filter.
new_digitalfilter() returns a closure, which is the filter-function that you then call, once per timestep,
and which stores all its internal state: the options, and the recent input and output values that it needs.
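In Python the same closure pattern looks like this. This is a toy illustration only, with an invented "gain" option rather than the module's real options:

```python
def new_digitalfilter(options):
    """Toy sketch of the closure pattern described above: the returned
    function carries its own private state (here, a one-sample smoother).
    The 'gain' option name is invented for this illustration."""
    gain = options.get("gain", 0.5)
    prev = 0.0                       # internal state, private to the closure
    def filter_fn(x):
        nonlocal prev
        prev = gain * x + (1.0 - gain) * prev
        return prev
    return filter_fn                 # call this once per timestep

f = new_digitalfilter({"gain": 1.0})   # gain=1.0 makes it a pass-through
print(f(3.0))                          # 3.0
```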
This is a very highly redacted version of the key bits of the code . . .
The function you call from the API is new_digitalfilter() (below)
It first puts together an array of functions which implements the second-order sections that will make up your desired filter.
It then returns the closure function that you will use to convert each input-sample to its output-sample.
To generate its array of section-functions, it first calls freq_sections() to turn your desired filter into an array (section_a012b012s) of the a0, a1, a2, b0, b1, b2 coefficients that implement your filter in the frequency domain.
Then it calls freq_a012b012_to_zm1_A012B012() to convert those into the equivalent coefficients in the z⁻¹ domain.
Then it calls new_filter_section() (above) to generate a function which does the calculation, remembering those recent input- and output-values it needs.
We'll have a quick look at freq_a012b012_to_zm1_A012B012().
It unpacks the array argument into six scalar variables, does the complex arithmetic of the conversion (the imaginary parts cancel out :-))
and returns an array of the equivalent coefficients in the z⁻¹ domain.
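The module's actual code is not shown here, but the standard bilinear substitution s = K(1-z⁻¹)/(1+z⁻¹), multiplied through by (1+z⁻¹)², gives exactly such a real-coefficient mapping. This Python sketch is my reconstruction of what such a function computes, where K is the bilinear constant (2/T, or its prewarped equivalent):

```python
def freq_a012b012_to_zm1_A012B012(a0, a1, a2, b0, b1, b2, K):
    """Bilinear transform s = K*(1 - z^-1)/(1 + z^-1) applied to
       T(s) = (a0 + a1*s + a2*s^2) / (b0 + b1*s + b2*s^2)
    (Constantinides' convention: a* numerator, b* denominator).
    Multiplying through by (1 + z^-1)^2 leaves only real coefficients
    of z^0, z^-1, z^-2 in numerator and denominator."""
    A0 = a0 + a1 * K + a2 * K * K
    A1 = 2 * a0 - 2 * a2 * K * K
    A2 = a0 - a1 * K + a2 * K * K
    B0 = b0 + b1 * K + b2 * K * K
    B1 = 2 * b0 - 2 * b2 * K * K
    B2 = b0 - b1 * K + b2 * K * K
    return A0, A1, A2, B0, B1, B2

# A pass-through T(s) = 1/1 must stay a pass-through in the z domain:
print(freq_a012b012_to_zm1_A012B012(1, 0, 0, 1, 0, 0, K=2 * 44100))
# -> (1, 2, 1, 1, 2, 1), i.e. H(z) = 1
```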

To Do

  • The variable-names in the code still use Constantinides' convention of a0, a1, a2 in the numerator and b0, b1, b2 in the denominator, but Rorabaugh has been a more helpful reference, and he uses b0, b1, b2 in the numerator and a0, a1, a2 in the denominator, as does the sox biquad option. So at some stage I'll swap my variable names.
  • new_digitalfilter() should allow you to change filter options, like frequency, in real time.
  • Currently the bandpass and bandstop filters are limited to second-order.
  • The Inverse Chebyshev and Elliptical filter-types should be available.
  • The code currently uses the "bilinear transformation" to the z⁻¹ domain, which preserves frequency-response, but not the delay-response which is the Bessel filter's main claim to fame. So perhaps a different transformation is needed for Bessel filters.   OTOH . . .
  • In analog, Bessel filters are mostly used to delay a signal without messing up its shape. In digital, this could be done directly, with complete fidelity and almost no CPU time. So perhaps a delay filtertype should be offered.

The tests

My pink noise generator


  listen to the mp3 . . .
(first white,
  then pink,
  then red noise)

See also :

"Digital Filter Designer's Handbook",   C. Britton Rorabaugh, TAB Books (McGraw-Hill), 1993
"Modern Filter Theory and Design",   Gabor C. Temes and Sanjit K. Mitra, Wiley, 1973
"Approximation Methods for Electronic Filter Design",   Richard W. Daniels, McGraw-Hill, 1974
"Introduction to Digital Filtering",   R.E. Bogner and A.G. Constantinides, Wiley, 1975
"Active Filter Design Handbook",   G.S. Moschytz and Petr Horn, Wiley, 1981
aptitude show octave-signal
sox -n -d synth pinknoise



And, finally,
a word from our sponsor
at the JAES . . .