#### The Signal

In the real world signals are not perfect, so, much like in past posts, I'm going to add a twist to this signal. In this case the signal is not made up of a single, varying frequency, but sits on top of a low-frequency sine wave that we have to deal with somehow. Here's what it looks like:

The main frequency of the signal has a higher amplitude and varies randomly over time. This is the frequency we're trying to measure. Then there's a low-frequency undulation to the signal that we want to ignore. Maybe it's 60 Hz line noise coming through the sensor that we're measuring. Maybe it's a seasonal change in the data that we're analyzing. Whatever it is, we want to get rid of it so that it won't affect the measurement of the frequency we're actually interested in. Let's see what we can do.
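To make the setup concrete, here's a minimal sketch of how such a composite test signal could be generated. All the numbers (a 1 kHz sample rate, a 50 Hz main tone, a 2 Hz undulation at 0.6 amplitude) are illustrative assumptions, not the values used in the demo:

```
// Sketch of a composite test signal: a main tone we want to measure
// riding on top of a low-frequency undulation we want to ignore.
// All frequencies and amplitudes here are illustrative assumptions.
function MakeSignal(n, sampleRate) {
  var samples = []
  for (var i = 0; i < n; i++) {
    var t = i / sampleRate
    var main = Math.sin(2*Math.PI*50*t)       // the frequency to measure
    var drift = 0.6*Math.sin(2*Math.PI*2*t)   // low-frequency undulation
    samples.push(main + drift)
  }
  return samples
}
```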

#### Finding the Center: A First Attempt

The basic idea of this frequency measurement algorithm is to measure how long it takes for the signal to cross the same value twice in the same direction. The first crossing starts the timer, and the second crossing stops it. The sample rate should be known, so the sample rate divided by this time gives the frequency of the signal. The best place to look for the crossing is at the midpoint between the signal's peaks, because the signal is moving the fastest at that time, so the crossing is more likely to be clean.

The signal we're trying to measure doesn't have a fixed center, though, so we have to calculate it. We'll start by trying our old standby, the exponential average. As a reminder, here's how it's calculated as a JavaScript function:

```
function ExpAvg(sample, avg, w) {
  return w*sample + (1-w)*avg
}
```

The weighting w determines how much smoothing the exponential average will do. Since we're trying to find the low-frequency part of the signal to remove, we'll try a low weighting of about 0.015. Here's what the averaged signal, in red, looks like:

For once, the exponential average doesn't look like it's going to work. There's still too much ripple on the averaged signal, especially when the main frequency is on the low side of its range, and it doesn't follow the midpoint of the signal very well at all. That's too bad, because exponential averaging is such a cheap calculation, but we're going to need something more powerful to find the center of this signal.
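As a sketch of how ExpAvg would be run over a whole signal (the RunExpAvg helper is just for illustration; in a real-time setting you would keep avg between samples):

```
function ExpAvg(sample, avg, w) {
  return w*sample + (1-w)*avg
}

// Illustrative helper: run the exponential average over an array of samples.
function RunExpAvg(samples, w) {
  var avg = samples[0]
  return samples.map(function(s) {
    avg = ExpAvg(s, avg, w)
    return avg
  })
}
```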

#### Filtering to the Rescue

The problem with the exponential average is that it lets too much of the higher frequencies through, when all we want is the low frequencies that trace the center of the signal. We could experiment with a moving average, but since we're going that far, why don't we go all the way and use an FIR filter? This type of filter does an excellent job of smoothing out and removing higher frequencies from a signal. To use it, remember that we first have to generate the taps of the filter with this function:

```
function GenFilter(k) {
  var taps = []
  for (var i = 0.5 - k/2.0; i < k/2.0; i += 1.0) {
    var x = 2*Math.PI*i/k
    taps.push(x === 0 ? 1 : Math.sin(x)/x)  // sinc, with sinc(0) = 1 for odd tap counts
  }
  var gain = taps.reduce(function(a, b) { return a + b })
  taps = taps.map(function(a) { return a / gain })
  return taps
}
```

The parameter k is the number of taps in the filter, and it determines the amount of averaging that the filter will do, which shows up in the filter's frequency response. (Changing the sinc function parameters will also change the frequency response, but we'll get into that in a later post.) It turns out that making the number of taps equal to half the number of samples in one period of the low-frequency sine wave results in a filter that does a good job of removing the higher frequency components and leaving the low-frequency part, which is the center of the signal. To run the filter over the signal, we can use this function (also from the Averaging post):

```
function Filter(samples, taps) {
  if (samples.length < taps.length) {
    var gain = taps.slice(0, samples.length).reduce(function(a, b) { return a + b })
    taps = taps.map(function(a) { return a / gain })
  }
  return samples.reduce(function(a, b, i) { return a + b*taps[i] }, 0)
}
```
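Putting the two functions together, a sketch of estimating the midpoint at every sample from a rolling buffer of the last k input samples might look like this (FilterSignal is my illustrative wrapper, and the guard for odd tap counts is an addition, since the raw sinc expression is undefined at zero):

```
function GenFilter(k) {
  var taps = []
  for (var i = 0.5 - k/2.0; i < k/2.0; i += 1.0) {
    var x = 2*Math.PI*i/k
    taps.push(x === 0 ? 1 : Math.sin(x)/x)  // sinc, with sinc(0) = 1
  }
  var gain = taps.reduce(function(a, b) { return a + b })
  return taps.map(function(a) { return a / gain })
}

function Filter(samples, taps) {
  if (samples.length < taps.length) {
    var gain = taps.slice(0, samples.length).reduce(function(a, b) { return a + b })
    taps = taps.map(function(a) { return a / gain })
  }
  return samples.reduce(function(a, b, i) { return a + b*taps[i] }, 0)
}

// Illustrative wrapper: estimate the midpoint at each sample from a
// rolling buffer of the last k input samples.
function FilterSignal(samples, k) {
  var taps = GenFilter(k)
  return samples.map(function(_, i) {
    return Filter(samples.slice(Math.max(0, i - k + 1), i + 1), taps)
  })
}
```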

Clearly, the FIR filter comes with the cost of needing a buffer at least the length of the filter taps to hold the samples, plus a fair amount more processing power than the exponential average, but that's the price we must pay for a much better estimate of the signal's center:

The averaging line has expanded into a grey box to show the number of samples needed to compute the filtered signal as the samples roll past the filter. The ripple from before is almost entirely gone, but the filtered signal has a distinct delay from the input signal. That's going to make finding the zero crossings difficult, so we need to do something about that.

#### Wait for it...

It turns out that the delay is very well defined: it's half the number of taps in the filter. And since we already needed a buffer to hold the samples from the signal for calculating the filter results, it's an easy matter to align the filtered signal with the input signal. All we have to do is delay the input signal by the filter's delay. The equivalent thing to do visually is to shift the filter to the right by half the number of taps:

Now we have a nice, clean signal running through the middle of the input signal. All that's left to do is count the samples between crossings.
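A sketch of that alignment, assuming filtered[i] is the filter output computed from the buffer ending at sample i (the function name is an illustrative assumption):

```
// Pair each midpoint estimate with the input sample it actually describes:
// the filter output at index i corresponds to the input sample from half
// the filter length earlier.
function AlignForCrossing(samples, filtered, numTaps) {
  var delay = Math.floor(numTaps / 2)
  var pairs = []
  for (var i = delay; i < samples.length; i++) {
    pairs.push({ sample: samples[i - delay], zero: filtered[i] })
  }
  return pairs
}
```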

#### Zero Crossing

Even though we're not comparing input samples to zero in this case, this process is normally referred to as finding zero crossings, and there are two ways to do it: the fast way and the accurate way. If your sample rate is high enough that you're guaranteed to have a sample close enough to the zero crossing to meet your accuracy requirements, or your accuracy requirements are loose enough to allow it, you can simply take the first sample after a zero crossing is detected and start counting samples from there. Otherwise, you can use linear interpolation to calculate the fraction of time between samples where the actual zero crossing occurred. This is another reason why the midpoint of the signal, where it is moving the fastest, is the best place to look. The signal is most linear at this point, so linear interpolation works quite well.

First, we'll look at the function for doing it the fast way, and then we'll add in the interpolation for better accuracy. (It's not that much more difficult.) The fast zero-crossing version of the frequency measurement function looks like this:

```
function ZeroCrossingPeriod(samples, zeros) {
  var count = 0;
  var periods = [0];
  for (var i = 1; i < samples.length; i++) {
    if (samples[i-1] <= zeros[i-1] && samples[i] > zeros[i]) {
      periods.push(count);
      count = 1;  // the crossing sample starts the next period's count
    } else {
      periods.push(periods[periods.length - 1]);
      count += 1;
    }
  }
  return periods;
}
```

This function takes an array of samples and the corresponding filtered midpoints as an array of zeros, and returns an equally sized array of periods. The period value changes each time there's a zero crossing, and the period can easily be converted to frequency by dividing the sample rate by the period. Notice how the period is updated only when a zero crossing is detected. Otherwise, the count is incremented for each sample until the next zero crossing.

For clarity, I've shown this function using a complete array of samples, but it could easily be adapted for real-time use when only one sample is arriving at a time. An enclosing FrequencyMeasurement object could keep track of a previousSample, previousZero, and the count, and the function would take a single sample and zero as parameters to produce a single period measurement.
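That real-time adaptation might be sketched like this (the object and method names are illustrative; the crossing sample is counted as the start of the new period):

```
// Sketch of a real-time version: feed one (sample, zero) pair at a time.
// Names (FrequencyMeasurement, Update) are illustrative, not from a library.
function FrequencyMeasurement() {
  this.previousSample = null
  this.previousZero = null
  this.count = 0
  this.period = 0
}

FrequencyMeasurement.prototype.Update = function(sample, zero) {
  if (this.previousSample === null) {
    // First sample: nothing to compare against yet.
  } else if (this.previousSample <= this.previousZero && sample > zero) {
    this.period = this.count   // a full period has elapsed
    this.count = 1             // the crossing sample starts the new period
  } else {
    this.count += 1
  }
  this.previousSample = sample
  this.previousZero = zero
  return this.period
}
```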

To increase the accuracy through interpolation, all we have to add is a function that calculates the fraction of a sample that occurs before the zero crossing. Then the updated period can include that fraction of a sample, and the fraction of the sample after the zero crossing can be added to the next period. Recalling some basic algebra, we arrive at this function:

```
function InterpolateZero(y1, y2) {
return -y1 / (y2 - y1);
}
function ZeroCrossingPeriod(samples, zeros) {
var count = 0;
var periods = [0];
for (var i = 1; i < samples.length; i++) {
if (samples[i-1] <= zeros[i-1] && samples[i] > zeros[i]) {
var fraction = InterpolateZero(samples[i-1] - zeros[i-1], samples[i] - zeros[i]);
periods.push(count + fraction);
count = 1 - fraction;
} else {
periods.push(periods[-1]);
count += 1;
}
}
return periods;
}
```

Since the fractional value will always be between zero and one, count will never go negative. After this slight modification, we can see what our final algorithm looks like with a graph:

The green signal shows the sample count as it increments between zero crossings and resets at the zero crossings, and the white signal shows the final period count as it gets updated at the end of each period. You'll notice that the period is less stable when the main frequency is low. That's because the filter for calculating the midpoints isn't perfect. In fact, the changes in the period mirror the changes in the red midpoint signal because of the subtraction in the zero crossing calculation. To improve the frequency measurement further, you could either adjust the filter to do a better job, or if that's not possible, average over multiple zero crossings.
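To close the loop, here's a quick sketch of converting a measured period to a frequency and smoothing over a few crossings (both helper names and the four-period window are illustrative assumptions):

```
// Convert a period in samples to a frequency in Hz.
function PeriodToFrequency(period, sampleRate) {
  return sampleRate / period
}

// Average the last n per-crossing period measurements to steady the estimate.
function AverageRecent(periods, n) {
  var recent = periods.slice(-n)
  return recent.reduce(function(a, b) { return a + b }, 0) / recent.length
}
```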

Now that we have a nice algorithm for measuring the frequency of a signal that has only one frequency (or a small number of frequencies) at any given time, we'll move on to measuring the frequency content of an arbitrary signal. Next week we'll tackle the major DSP frequency measurement tool: the Discrete Fourier Transform (DFT).

**Other DSP posts:**

- Basic Signals
- Transforms
- Sampling Theory
- Averaging
- Step Response of Averaging
- Signal Variance
- Edge Detection
- Signal Envelopes
- Frequency Measurement
- The Discrete Fourier Transform
- The DFT in Use
- Spectral Peak Detection
- FIR Filters
- Frequency Detection
- DC and Impulsive Noise Removal
