Documentation
Tools for embedded systems
In many embedded systems applications, signals are captured and may be subject to "noise." To reconstruct the original signal, a "smoothing filter" is applied to clean the signal of this noise. The filter attenuates high-frequency components and emphasizes slow changes in values, making trends easier to discern. The qLibs library offers the qSSMoother module, which includes a collection of commonly used, efficient smoothing filters for easy signal processing.
The filters supported by qSSMoother are described below.
A first-order low-pass filter passes signals with a frequency lower than a chosen cut-off frequency and attenuates signals with frequencies higher than the cut-off. This results in a smoother signal by removing short-term fluctuations and highlighting longer-term trends. The discrete-time equation of this filter is defined as follows, where \(w\) is the cut-off frequency given in \(rad/s\):
The difference equation of the output of this filter is given by:
where,
This filter has similar properties to the first-order low-pass filter. The main difference is that the stop-band roll-off is twice that of the first-order filter, at 40 dB/decade (12 dB/octave), as the operating frequency increases above the cut-off frequency \(w\).
The difference equation of the output of this filter is given by:
where,
This low-pass filter calculates the moving median of the input signal over time using the sliding window method. A window of a specified length is moved sample by sample and the median of the data in the window is computed.
For each input sample, the output is the median of the current sample and the previous \(N - 1\) samples, where \(N\) represents the length of the window in samples. To compute the first \(N - 1\) outputs, when there is not yet enough data, the algorithm fills the window with the first sample.
The difference equation of the output of this filter is given by:
This filter has a time complexity of \(O(n)\).
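A compact sketch of this sliding-window median follows. The window length and the pre-fill behavior match the description above; everything else (types, sorting strategy) is illustrative and not the library's implementation.

```c
#include <string.h>
#include <stddef.h>

#define MWM_N 5u  /* illustrative window length N */

typedef struct {
    float win[MWM_N];  /* the last N samples */
    int   init;
} mwm_t;

float mwm_perform( mwm_t *f, float x )
{
    float tmp[MWM_N];

    if ( 0 == f->init ) {  /* pre-fill the window with the first sample */
        for ( size_t i = 0u; i < MWM_N; ++i ) {
            f->win[i] = x;
        }
        f->init = 1;
    }
    else {                 /* slide the window by one sample */
        memmove( &f->win[0], &f->win[1], ( MWM_N - 1u )*sizeof(float) );
        f->win[MWM_N - 1u] = x;
    }
    /* sort a copy of the window and take the middle element */
    memcpy( tmp, f->win, sizeof tmp );
    for ( size_t i = 1u; i < MWM_N; ++i ) {
        float key = tmp[i];
        size_t j = i;
        while ( ( j > 0u ) && ( tmp[j - 1u] > key ) ) {
            tmp[j] = tmp[j - 1u];
            --j;
        }
        tmp[j] = key;
    }
    return tmp[MWM_N/2u];
}
```

Note how a single spike is rejected outright: with the window holding mostly steady values, one outlier never reaches the middle of the sorted window.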
This filter operates on the same principle as the previous filter. The main difference is the treatment of the sliding window, which is implemented as a TDL (Tapped Delay Line) using a circular queue. This structure has \(N - 1\) taps in a line of delays; each tap extracts the signal at a fixed integer delay relative to the input.
The circular-queue approach avoids shifting all the \(N - 1\) samples stored in the sliding window.
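The idea can be sketched with a small circular buffer. This is an illustrative layout; the actual qTDL structure may differ.

```c
#include <stddef.h>

/* Illustrative tapped delay line on a circular queue: inserting a new
 * sample overwrites the oldest tap, so no elements are shifted. */
typedef struct {
    float  *head;  /* storage area */
    size_t  n;     /* number of taps */
    size_t  idx;   /* position of the next write */
} tdl_t;

void tdl_init( tdl_t *t, float *area, size_t n, float initval )
{
    t->head = area;
    t->n = n;
    t->idx = 0u;
    for ( size_t i = 0u; i < n; ++i ) {
        area[i] = initval;
    }
}

void tdl_insert( tdl_t *t, float x )
{
    t->head[t->idx] = x;  /* overwrite the oldest sample */
    t->idx = ( t->idx + 1u )%t->n;
}

float tdl_read( const tdl_t *t, size_t delay )
{
    /* sample at a fixed integer delay relative to the newest input */
    return t->head[ ( t->idx + t->n - 1u - delay )%t->n ];
}
```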
This filter detects and removes outliers in the input signal by computing the median of a moving window for each sample. If a new incoming sample deviates from the median by a specified margin \(\alpha\), it is replaced by the moving median \(\overline{m(k)}\).
This filter uses the following criteria for outlier removal:
This filter has a time complexity of \(O(n)\).
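The following sketch illustrates the idea. The rejection criterion used here, replace \(x(k)\) when \( |x(k) - \overline{m(k)}| > \alpha\,|\overline{m(k)}| \), is an assumption based on the description above, and the window handling is illustrative rather than the library's implementation.

```c
#include <math.h>
#include <string.h>
#include <stddef.h>

#define MOR_N 5u  /* illustrative window length */

typedef struct {
    float win[MOR_N];
    float alpha;  /* allowed relative deviation from the median */
    int   init;
} mor_t;

static float mor_median( const float *w )
{
    float tmp[MOR_N];
    memcpy( tmp, w, sizeof tmp );
    for ( size_t i = 1u; i < MOR_N; ++i ) {  /* insertion sort */
        float key = tmp[i];
        size_t j = i;
        while ( ( j > 0u ) && ( tmp[j - 1u] > key ) ) {
            tmp[j] = tmp[j - 1u];
            --j;
        }
        tmp[j] = key;
    }
    return tmp[MOR_N/2u];
}

float mor_perform( mor_t *f, float x )
{
    if ( 0 == f->init ) {  /* pre-fill with the first sample */
        for ( size_t i = 0u; i < MOR_N; ++i ) {
            f->win[i] = x;
        }
        f->init = 1;
    }
    float m = mor_median( f->win );
    if ( fabsf( x - m ) > ( f->alpha*fabsf( m ) ) ) {
        x = m;  /* outlier: replace it by the moving median */
    }
    memmove( &f->win[0], &f->win[1], ( MOR_N - 1u )*sizeof(float) );
    f->win[MOR_N - 1u] = x;
    return x;
}
```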
Similar to the previous filter, but with a constant time complexity of \(O(1)\) by using a qTDL data structure.
This filter uses a kernel for smoothing, which defines the shape of the function that is used to take the average of the neighboring points. A Gaussian kernel has the shape of a Gaussian curve with a normal distribution.
The coefficients of the kernel are computed from the following equation:
where \( -(L-1)/2 \leq n \leq (L-1)/2 \) and \(\alpha\) is inversely proportional to the standard deviation, \(\sigma\), of a Gaussian random variable. The exact correspondence with the standard deviation of a Gaussian probability density function is \( \sigma = (L - 1)/(2\alpha) \).
This filter has a time complexity of \(O(n)\).
This filter uses a weighting factor that is computed and applied to the input signal in a recursive manner. The weighting factor decreases exponentially as the age of the data increases, but never reaches zero, meaning that more recent data has more influence on the current sample than older data.
The value of the forgetting factor \(\lambda\) determines the rate of change of the weighting factor. A smaller value (below 0.5) gives more weight to recent data, while a value of 1.0 indicates infinite memory. The optimal value for the forgetting factor depends on the data stream.
The moving average algorithm updates the weight and computes the moving average recursively for each data sample that comes in by using the following recursive equations:
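The recursion can be sketched as follows. The exact update form shown, a running weight that grows by the forgetting factor, is an assumption based on the description above.

```c
/* Illustrative exponentially weighted recursive average:
 *   w(k) = lambda*w(k-1) + 1
 *   y(k) = y(k-1) + ( x(k) - y(k-1) )/w(k)
 * With lambda = 1 this reduces to the cumulative mean (infinite memory). */
typedef struct {
    float lambda;  /* forgetting factor, 0 < lambda <= 1 */
    float w;       /* running weight */
    float y;       /* current estimate */
} expw_t;

void expw_setup( expw_t *f, float lambda )
{
    f->lambda = lambda;
    f->w = 0.0f;
    f->y = 0.0f;
}

float expw_perform( expw_t *f, float x )
{
    f->w = ( f->lambda*f->w ) + 1.0f;
    f->y += ( x - f->y )/f->w;
    return f->y;
}
```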
The Kalman filter is an efficient optimal estimator that provides a recursive computational methodology for estimating the state of a signal from measurements that are typically noisy, while also providing an estimate of the uncertainty of that estimate. Here, the scalar (one-dimensional) version is provided, and only three design parameters are required: the initial covariance \(P(0)\), the signal noise covariance \(Q\), and the measurement uncertainty \(r\).
The recursive equations used by this scalar version are:
predict
compute the Kalman gain
update the state and the state uncertainty
Where
\(A\) is the state transition model, \(H\) the observation model, \(x(k)\) the filter output (state estimation), \(P(k)\) the covariance of the predicted state, \(u(k)\) the signal measurement, and \(K(k)\) the Kalman gain.
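The three steps map directly onto code. The sketch below fixes \(A = 1\) and \(H = 1\), a common simplification when smoothing a slowly varying signal; the library's models may differ.

```c
/* Illustrative scalar Kalman filter with A = 1 and H = 1. */
typedef struct {
    float x;  /* state estimate */
    float P;  /* estimate covariance */
    float Q;  /* signal (process) noise covariance */
    float r;  /* measurement uncertainty */
} kalman1d_t;

void kalman1d_setup( kalman1d_t *f, float x0, float P0, float Q, float r )
{
    f->x = x0;
    f->P = P0;
    f->Q = Q;
    f->r = r;
}

float kalman1d_perform( kalman1d_t *f, float z )
{
    float K;
    /* predict: x = A*x ; P = A*A*P + Q, with A = 1 */
    f->P += f->Q;
    /* compute the Kalman gain, with H = 1 */
    K = f->P/( f->P + f->r );
    /* update the state and the state uncertainty */
    f->x += K*( z - f->x );
    f->P *= ( 1.0f - K );
    return f->x;
}
```

Fed a constant noisy measurement, the estimate converges toward the true value while \(P\) shrinks toward its steady-state level.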
Double exponential smoothing is a time-series forecasting technique used to analyze and predict trends in data. It is an extension of simple exponential smoothing that incorporates trend information into the forecast.
The double exponential smoothing filter calculates two smoothing coefficients, one for the level of the series and one for the trend. These coefficients are used to give more weight to recent observations and dampen the effect of older observations. The level and trend are then updated using the following equations:
Level:
Trend:
where \(\alpha\) and \(\beta\) are the smoothing coefficients for the level and trend, respectively.
The double exponential smoothing filter can be used to generate forecasts by extrapolating the level and trend components into the future. The forecast for time \(t+k\) is given by:
Where \(k\) is the number of time periods into the future.
Overall, double exponential smoothing is a simple but effective technique for forecasting time-series data with trends. It can be easily implemented and provides accurate forecasts for short-term predictions. However, it may not perform well for longer-term forecasts or for data with complex seasonal patterns.
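The level/trend updates and the forecast can be sketched in the standard Holt textbook form; the library's exact updates and initialization may differ.

```c
/* Illustrative Holt double exponential smoothing:
 *   Level: l(k) = a*x(k) + (1-a)*( l(k-1) + b(k-1) )
 *   Trend: b(k) = B*( l(k) - l(k-1) ) + (1-B)*b(k-1)
 *   Forecast: F(t+k) = l + k*b                             */
typedef struct {
    float alpha;  /* level smoothing coefficient */
    float beta;   /* trend smoothing coefficient */
    float level;
    float trend;
    int   init;
} desf_t;

void desf_setup( desf_t *f, float alpha, float beta )
{
    f->alpha = alpha;
    f->beta = beta;
    f->init = 0;
}

float desf_perform( desf_t *f, float x )
{
    if ( 0 == f->init ) {  /* seed level with the first sample */
        f->level = x;
        f->trend = 0.0f;
        f->init = 1;
    }
    else {
        float prev = f->level;
        f->level = ( f->alpha*x ) + ( 1.0f - f->alpha )*( prev + f->trend );
        f->trend = ( f->beta*( f->level - prev ) ) + ( 1.0f - f->beta )*f->trend;
    }
    return f->level;
}

float desf_forecast( const desf_t *f, float k )
{
    return f->level + ( k*f->trend );  /* extrapolate k periods ahead */
}
```

On a pure linear ramp the trend estimate settles to the true slope, so one-step-ahead forecasts become exact.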
The adaptive linear filter is a digital filter that adjusts its parameters based on the characteristics of the input signal and the desired output signal.
The adaptive linear filter works by using an algorithm to iteratively adjust the filter coefficients in response to changes in the input signal \(x(t)\). The algorithm seeks to minimize the difference between the filter output and the desired output, known as the error signal \(e(t)\). This is done by adjusting the filter coefficients \(w_i(t)\) in the direction that reduces the error signal. For this, the Least Mean Squares (LMS) algorithm is used. The LMS algorithm updates the filter coefficients by multiplying the error signal by the input signal and a small step-size parameter, and then adding this product to the existing filter coefficients.
It is particularly useful in situations where the characteristics of the input signal are unknown or change over time, as it can adapt to these changes and continue to provide accurate filtering or modeling.
One potential drawback of the adaptive linear filter is that it can be computationally intensive, especially if the input signal is large or the filter has many coefficients. Additionally, the performance of the filter can be sensitive to the choice of step size parameter, which must be carefully chosen to balance convergence speed with stability.
The adaptation rule for every filter coefficient is given by
The equation also includes a momentum term that adds a fraction of the previous coefficient update to the current coefficient update. This can help to reduce oscillations in the filter output and improve convergence speed.
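An illustrative LMS FIR filter with such a momentum term is sketched below. The exact update rule, step size, and tap count are assumptions based on the description above, not the library's implementation.

```c
#include <stddef.h>

#define ALNF_N 4  /* illustrative number of coefficients */

/* Illustrative LMS adaptive FIR filter with momentum (assumed rule):
 *   dw_i(k)  = mu*e(k)*x(k-i) + mom*dw_i(k-1)
 *   w_i(k+1) = w_i(k) + dw_i(k)                                   */
typedef struct {
    float w[ALNF_N];   /* filter coefficients */
    float dw[ALNF_N];  /* previous coefficient updates (momentum) */
    float x[ALNF_N];   /* input tap line */
    float mu;          /* step-size parameter */
    float mom;         /* momentum fraction */
} alnf_t;

float alnf_perform( alnf_t *f, float x, float d )
{
    float y = 0.0f;
    float e;

    for ( size_t i = ALNF_N - 1u; i > 0u; --i ) {  /* shift the taps */
        f->x[i] = f->x[i - 1u];
    }
    f->x[0] = x;
    for ( size_t i = 0u; i < ALNF_N; ++i ) {       /* filter output */
        y += f->w[i]*f->x[i];
    }
    e = d - y;                                     /* error signal */
    for ( size_t i = 0u; i < ALNF_N; ++i ) {       /* LMS + momentum */
        float du = ( f->mu*e*f->x[i] ) + ( f->mom*f->dw[i] );
        f->w[i] += du;
        f->dw[i] = du;
    }
    return y;
}
```

Trained against a desired output that is simply a scaled copy of the input, the first coefficient converges to the scale factor and the remaining taps to zero.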
The following is an example of how a smoothing filter can be implemented. In this scenario, a Gaussian filter is used to smooth a speed-sensor signal, reducing noise and improving overall signal quality. The filter is set up by defining the necessary parameters and instantiating the filter. The filter is then executed with a sampling period of 100 milliseconds. This process involves sampling the signal and running it through the filter, resulting in a cleaned signal that can be further processed without the need to account for noise and disturbances.