
Everyday DSP for Programmers: Sampling Theory

It's time to tackle sampling theory. The last couple posts dealt with concepts that are equally applicable to the analog and digital worlds of signal processing, but once a signal has been sampled, we have landed squarely in digital signal processing because the signal has been digitized. Sampling theory sounds like a big, scary thing that takes months of study to comprehend and makes your brain melt. It isn't. Sure, it can get complicated if you dig down deep enough, but we're going to steer clear of those dark trenches and focus on the high-level concepts that will build up your intuition for the issues surrounding digital signals.


Sampling a Signal

Conceptually, sampling a signal is easy. First, you pick a rate at which to sample the signal, called the sample rate fs. Normally this rate is constant, and it is often referred to by its inverse, the sample period ts = 1/fs. Then, at these regular intervals, you store the value of the signal, producing an array of values. In a program these values are what's stored in a memory buffer. To recover the time of a given sample in memory, you need to know the sample period, and you multiply that sample period by the index of the sample to get the time.
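
For example, here's a minimal sketch (my own helper, not from the original post) of recovering the time of a sample from its index in the buffer:
// Return the time in seconds of the sample at the given buffer index,
// given the sample period t_s in seconds
function sampleTime(index, t_s) {
  return index*t_s
}

var t = sampleTime(37, 0.2)  // with a 5 Hz sample rate, sample 37 occurs at 7.4 seconds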

Mathematically, sampling our favorite fundamental signal would look like this:

f(n) = A sin(2πf·n·ts)

The time variable has been replaced by n·ts, where n is the sample number and ts is the sample period. If we take a look at a particular sine wave with an amplitude of 4 and a frequency of 0.5 Hz and choose a sample rate of 5 Hz, we will get a sampled signal like this:


This graph reveals another way to think about sampled signals. That vertical line in the middle of the graph should look familiar. It's the impulse signal from our set of basic signals. Sampling can be thought of as taking the unit impulse signal, sweeping it over the signal of interest, and multiplying the two together at regular sample intervals. So sampling can be built up from a basic signal and a couple of transforms.

We can also generate sampled signals programmatically. For the sine wave above, the first 100 samples could be generated like this:
// Generate n samples of a sine wave with amplitude a, frequency f (Hz),
// and sample period t_s (seconds)
function generateSine(a, f, t_s, n) {
  var samples = []
  for (var i = 0; i < n; i++) {
    samples.push(a*Math.sin(2*Math.PI*f*i*t_s))
  }
  return samples
}

// 100 samples of a 0.5 Hz sine wave with amplitude 4, sampled at 5 Hz (t_s = 0.2 s)
var sine_buffer = generateSine(4, 0.5, 0.2, 100)
Then we just have to decide what to do with those freshly made samples. You're only limited by your imagination. But wait, what happens if the frequency of the sine wave is higher than the sample rate?

Aliasing

It's not possible to determine the correct frequency of a sine wave if that frequency is too high for the sample rate used to digitize the signal. Instead of getting multiple points within a single period of the sine wave, you end up getting less than one point per period, and the signal appears to have a lower frequency than it really has. We can create an alias of our 0.5 Hz sine wave by increasing its frequency by the 5 Hz sample rate:

f(n) = 4sin(πn·ts) = 4sin(11πn·ts)

The 2π(0.5 Hz) reduces to π in the first expression, and 2π(5.5 Hz) becomes the 11π in the second. These two expressions are equivalent for all integers n because their arguments differ by 10πn·ts = 2πn at our 0.2 second sample period, which is a whole number of cycles. Visually, this equivalence looks like this:


You can see that all of the digitized points lie at the intersection of the two sine waves. After sampling, the higher frequency sine wave is indistinguishable from the lower frequency wave, like Ethan Hunt in a Mission Impossible movie when he's wearing one of his disguises. You cannot tell them apart! The only difference in the case of the sine waves is that the high frequency sine wave won't ever tear off its mask at the critical moment and save the nation from an imminent threat. No, the information has been lost, and the two waves are forever confused. So at what point does this aliasing start happening?
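
We can also check this equivalence numerically with the generateSine() function from above (this quick check is my own, not from the original post):
// Sample a 0.5 Hz sine and a 5.5 Hz sine, both at 5 Hz (t_s = 0.2 s)
var low = generateSine(4, 0.5, 0.2, 100)
var high = generateSine(4, 5.5, 0.2, 100)

// Every pair of samples matches to within floating-point rounding error
var aliased = low.every(function(v, i) { return Math.abs(v - high[i]) < 1e-9 })
console.log(aliased)  // true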

The Nyquist Frequency

To accurately determine the frequency of a sine wave, you need to sample at least twice during its period. Sample any slower than that, and you will confuse the actual frequency with a lower frequency. The frequency that sits at half of the sample rate is called the Nyquist Frequency. Using this principle, we would need to sample the red 5.5 Hz signal above at slightly more than 11 Hz to accurately recover the original sine wave.

Why do I say slightly more than 11 Hz? Because weird things happen at the Nyquist Frequency. If you have a sine wave with a frequency that sits exactly at the Nyquist Frequency, meaning it gets sampled exactly twice per period, you will have no idea what its amplitude is. Take a look at this graph:


If you click on the graph, the sine wave will increase in amplitude while maintaining the same positions of the sample points and the same frequency. We know that the amplitude is at least the amplitude of the sampled points, but it could be anything more than that, even thousands or millions of times more. If the samples happened to land at the zero crossings in the graph, the sine wave would also be indistinguishable from a DC signal. (Another fun fact: if you sample a sine wave exactly at its own frequency, it will always look like a DC signal because you'll be sampling it at the same point in its period every time.)
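
Here's a quick numerical illustration of that ambiguity (my own sketch with made-up values): two sine waves at the Nyquist Frequency with different amplitudes and phases can produce exactly the same samples:
// A 1 Hz sine sampled at 2 Hz (t_s = 0.5 s) sits exactly at the Nyquist Frequency
function sampleAtNyquist(a, phase, n) {
  var samples = []
  for (var i = 0; i < n; i++) {
    samples.push(a*Math.sin(2*Math.PI*1*i*0.5 + phase))
  }
  return samples
}

var s1 = sampleAtNyquist(4, Math.PI/2, 10)  // amplitude 4, phase 90 degrees
var s2 = sampleAtNyquist(8, Math.PI/6, 10)  // amplitude 8, phase 30 degrees
// Both produce 4, -4, 4, -4, ... (to within rounding error), because only
// a*sin(phase) survives at the sample points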

If the sample rate, and thus the Nyquist Frequency, were even slightly higher, then the sine wave's amplitude would be precisely known. You can reason that this is true because no matter where the samples started, they would slowly drift through the sine wave's phase, and eventually we would get samples at the peaks of the sine wave. However, those peak samples aren't strictly necessary to derive the sine wave's parameters. Now that we know about aliasing and the Nyquist Frequency, how do we eliminate unwanted aliasing from our sampled signals?

Anti-Aliasing Filters

Of course. Just like if you want to repel bugs you use bug repellent, or if you want to repel vampires you use garlic. Okay, that last one didn't flow that well. Anyway, before getting into what an anti-aliasing filter is, let's convince ourselves that we really need one. Let's assume we have a 1 Hz sine wave that we're sampling at 5 Hz. It's going to have an aliased signal at 6 Hz, like this:


If you click on the graph, the sample rate will increase to try to remove the aliasing. A higher sample rate does eliminate the alias at a particular frequency, but there's always another one to take its place. This process could go on forever. Your computer will run out of memory before physics runs out of frequencies.

Even at a fixed sample rate, there are an infinite number of signals aliased to a given signal. This is because you can keep changing the frequency in steps of the sample rate to keep coming up with more aliased sine waves. Mathematically, it looks like this:

f(n) = A sin(2πf·n·ts) = A sin(2π(f+k·fs)n·ts)

where k is any integer and fs is the sample rate. The extra term contributes 2πk·n to the argument because fs·ts = 1, which is a whole number of cycles and so leaves every sample unchanged. Since there is no way to get rid of aliasing by sampling faster, we need to remove the aliasing some other way. Unfortunately for us programmers, that task needs to be done in the analog domain before the signal is digitized. An anti-aliasing filter can be as simple as a low-pass RC circuit where the cutoff frequency of the filter is below the Nyquist Frequency. Filter design gets more complicated from there, and it's an entire discipline unto itself.
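
As a rough sketch (my own, not from the original post), you can figure out the apparent frequency that any input frequency will fold down to after sampling:
// Fold an input frequency (Hz) into the range [0, f_s/2] to find the
// apparent frequency it aliases to when sampled at f_s
function apparentFrequency(f, f_s) {
  var folded = Math.abs(f) % f_s                 // remove whole multiples of the sample rate
  return folded > f_s/2 ? f_s - folded : folded  // reflect around the Nyquist Frequency
}

apparentFrequency(5.5, 5)  // 0.5 -- the 5.5 Hz sine looks like 0.5 Hz
apparentFrequency(6, 5)    // 1   -- the 6 Hz sine looks like 1 Hz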

If we assume we have a good anti-aliasing filter and have acquired a set of unaliased samples, how do we get the original sine wave back?

Reconstruct the Signal with Sinc Functions

It may seem like a good way to recover a signal from a set of samples would be to simply connect the dots, so to speak, but that turns out to be less than ideal. Since the line segments join to form corners, these discontinuities introduce extra frequency content into the signal that wasn't there before. Instead, the ideal way to reconstruct a signal from a set of samples is to use the sinc function. The normalized sinc function is defined as

f(t) = sin(πt)/(πt)

It's normalized to have a positive peak of one at t = 0 and zero-crossings at all other integer values. This function can be scaled and offset like any other signal. For example, a sinc function centered at 1 second with a positive peak of 4 and zero-crossings in 0.2 second intervals would be

f(t) = 4 sin(5π(t-1))/(5π(t-1))

Notice how the numerator and denominator both change because the time shift and time scale are both applied to t. Graphically, this function looks like this:


The 0.2 second zero-crossing interval was chosen on purpose because our sample rate has been 5 Hz. Click the graph to see how sinc functions can be scaled for each sampled point on our 0.5 Hz sine wave. To recover the original sine wave, you add up the values of all of the sinc functions at each point in time that you want to calculate. At the sample points, the peak of one sinc function is the value of that sample point and all of the other sinc functions are zero. In between sample points the sinc functions add up to a nice smooth curve. Of course, each sinc function extends forever in both directions, and the ideal reconstruction includes a sinc function for every sample of an infinitely long signal, so this computation is somewhat expensive and impractical (a truncated version of the sum is sketched below). It's the ideal, though. How do we strive to achieve it?
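
For reference, here's a minimal sketch (my own, not from the original post) of that reconstruction, truncating the infinite sum to the samples we actually have in the buffer:
// Normalized sinc function: sin(pi*t)/(pi*t), with sinc(0) = 1
function sinc(t) {
  if (t === 0) return 1
  return Math.sin(Math.PI*t)/(Math.PI*t)
}

// Reconstruct the signal value at time t (seconds) by summing a scaled,
// shifted sinc function for every sample in the buffer
function reconstruct(samples, t_s, t) {
  var value = 0
  for (var i = 0; i < samples.length; i++) {
    value += samples[i]*sinc((t - i*t_s)/t_s)
  }
  return value
}

reconstruct(sine_buffer, 0.2, 5.1)  // close to 4*sin(2*PI*0.5*5.1) ≈ -1.236, even though 5.1 s falls between samples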

Interpolation Filters

Summing up continuous sinc functions is too expensive, but we can get close enough for practical purposes, depending on what we're trying to achieve. If we want to output the signal in analog form for something like an audio system, then we would pass the samples through a DAC (Digital-to-Analog Converter) and an output filter that is a type of interpolation filter. The output of the DAC is a stair-step looking signal, and the output filter removes the high frequency content, similar to an anti-aliasing filter, leaving a smooth signal. It looks something like this:


We don't always want to convert back to analog, though. Sometimes we just need more digital samples, so we want to interpolate purely within the digital domain. In this case, we can design an interpolation filter that is essentially a transformed, truncated, and sampled sinc function. This filter is swept across the signal after each sample has been replicated as many times as needed, and the output of this Finite Impulse Response (FIR) filter is a smooth curve. A rough sketch of this process is shown below.
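
Here's that rough sketch of a digital interpolation filter (my own illustration, reusing the sinc() function from the reconstruction sketch above; a real design would use a windowed kernel and a carefully chosen number of taps):
// Interpolate by a factor L: replicate each sample L times, then sweep a
// truncated, sampled sinc (a simple FIR low-pass filter) across the result
function interpolate(samples, L, taps) {
  // Step 1: replicate each sample L times
  var expanded = []
  for (var i = 0; i < samples.length; i++) {
    for (var j = 0; j < L; j++) {
      expanded.push(samples[i])
    }
  }

  // Step 2: build a truncated sinc kernel with zero-crossings every L samples,
  // which puts the cutoff near the original Nyquist Frequency
  var half = Math.floor(taps/2)
  var kernel = []
  var sum = 0
  for (var k = -half; k <= half; k++) {
    var v = sinc(k/L)
    kernel.push(v)
    sum += v
  }
  for (var m = 0; m < kernel.length; m++) {
    kernel[m] /= sum  // normalize for unity gain at DC
  }

  // Step 3: sweep the FIR filter across the expanded signal
  var output = []
  for (var i = 0; i < expanded.length; i++) {
    var acc = 0
    for (var k = 0; k < kernel.length; k++) {
      var idx = i - half + k
      if (idx >= 0 && idx < expanded.length) {
        acc += kernel[k]*expanded[idx]
      }
    }
    output.push(acc)
  }
  return output
}

var smooth = interpolate(sine_buffer, 4, 41)  // 4x as many samples, smoothed into a curve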

There's a lot more to these interpolation filters and the closely related decimation filters (when you want to reduce the number of samples you have), but that will need to be saved for another post. We've covered a lot by following the process of sampling a signal, making sure we're not aliasing unwanted frequencies in the sampled signal, and reconstructing the original signal from sampled points. That about wraps up the introductory concepts in DSP, so next week we'll get into how to do something useful to signals with averaging and actually look at some programming algorithms.


Other DSP posts:
Basic Signals
Transforms
Sampling Theory
Averaging
Step Response of Averaging
Signal Variance
Edge Detection
Signal Envelopes
Frequency Measurement
The Discrete Fourier Transform
The DFT in Use
Spectral Peak Detection
FIR Filters
Frequency Detection
DC and Impulsive Noise Removal
