Impressing with Big Numbers

Author: Fred Steinhauser, OMICRON electronics GmbH, Austria


In many specifications, a kind of power play is going on: an attempt to impress us with big numbers. Customers are supposed to conclude that a product is better just because it has a bigger number somewhere in the spec sheet. Technical options are sometimes exploited regardless of what the problem at hand actually requires.

Power Quality seems to be such a topic. IEC 61000 started out with the 40th harmonic, but the numbers quickly got boosted. Requirements to measure the 50th, 80th, or 100th harmonic showed up. Recently, I came across the idea of calculating up to the 511th harmonic, coupled with the misconception that this can meaningfully be done from measurements sampled at 1024 times the fundamental frequency. Mathematically, the 511th harmonic sits just below the Nyquist limit of the 512th, but any practical anti-aliasing filter has to start attenuating well before that point.
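
A minimal numerical sketch (Python with NumPy, assuming a 50 Hz fundamental and an illustrative out-of-band component) of why the bins just below that Nyquist limit cannot be trusted without an anti-aliasing filter that necessarily cuts in far earlier:

```python
import numpy as np

f0 = 50.0              # assumed fundamental frequency in Hz
N = 1024               # samples per fundamental cycle, as in the example above
fs = N * f0            # 51.2 kHz, putting the Nyquist limit at the 512th harmonic

t = np.arange(N) / fs  # exactly one fundamental cycle
# illustrative signal: fundamental plus a small component at the 520th harmonic,
# i.e. just above the Nyquist limit
x = np.sin(2 * np.pi * f0 * t) + 0.1 * np.sin(2 * np.pi * 520 * f0 * t)

X = np.abs(np.fft.rfft(x)) / (N / 2)   # one-sided spectrum, bin k = k-th harmonic
print(round(X[1], 3))                  # ~1.0, the fundamental
print(round(X[504], 3))                # ~0.1: the 520th harmonic folds back onto bin 504
```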

The Audio CD serves as an example of an application involving digital signal processing that was eased by going to a higher sampling rate. At the time it was developed (in the late 1970s), a tough compromise between the desired fidelity and the sampling rate had to be made. To achieve an audio bandwidth of 20 kHz, a sampling rate of just 44.1 kHz was chosen. This places the Nyquist frequency at 22.05 kHz, only slightly above the upper limit of the audio band, which resulted in the requirement for a rather steep anti-aliasing filter of considerably high order.
The same steepness is required of the reconstruction filter in the player, and such filters are susceptible to component tolerances, so many accurate and costly parts were needed. Since this could not always be justified in price-sensitive consumer devices, early CD players sometimes had to compromise on fidelity.
To overcome this situation, oversampling was soon introduced. This means interpolating additional samples before the digital-to-analog conversion takes place, thus artificially raising the sampling rate. This also pushes the Nyquist frequency up and allows the use of simpler analog filters, with the additional benefit of a reduced noise level. In this case, raising the sampling rate definitely provided benefits in several respects.
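
As a rough sketch of what this interpolation does, here is the classic zero-stuff-and-filter approach in Python with SciPy; the 4x factor, filter length, and test tone are illustrative assumptions, not the design of any actual player:

```python
import numpy as np
from scipy.signal import firwin, upfirdn

fs = 44100                      # CD sampling rate
L = 4                           # illustrative 4x oversampling factor

t = np.arange(4410) / fs        # 0.1 s of a hypothetical 1 kHz test tone
x = np.sin(2 * np.pi * 1000 * t)

# Digital interpolation filter: passband up to ~20 kHz, defined at the new
# sampling rate of 176.4 kHz. The steep transition now happens in the digital domain.
h = firwin(numtaps=127, cutoff=20000, fs=L * fs)

# Insert L-1 zeros between samples and low-pass filter; the gain factor L
# compensates for the energy spread over the stuffed zeros.
y = upfirdn(L * h, x, up=L)

# The analog filter after the DAC now only needs to roll off somewhere between
# 20 kHz and the new Nyquist frequency of 88.2 kHz - a much gentler slope.
print(len(x), len(y))
```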

Pushing the reporting rates of phasor measurement units (PMUs) also seems to be popular nowadays. While knowledgeable experts claim that electrical power networks can be well observed with about 10 phasors/s, others want to push the rate up to sub-cycle intervals, e.g. 120 or even 240 phasors/s. But calculating a phasor requires a certain amount of data, and taking the samples from one cycle of the power system frequency seems to be a reasonable lower limit. Of course, phasor estimates from sub-cycle data are possible, but their accuracy decreases, making them literally more of an estimate and less of a measurement. By its nature, the process of phasor calculation has a kind of low-pass characteristic, limiting the dynamics of the measurement algorithm itself. Thus, sliding a one-cycle window over the data and calculating phasors at sub-cycle intervals will surely deliver more phasors per second, but not necessarily more information about the power system.
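
A minimal sketch of this effect, assuming a 60 Hz system (matching the 120 and 240 phasors/s figures), a plain full-cycle DFT estimator, and a hypothetical step in signal magnitude: sliding the window in quarter-cycle steps does deliver 240 phasors/s, but every window that straddles the step reports a blend of the old and the new value.

```python
import numpy as np

f0 = 60.0                    # assumed nominal system frequency
N = 64                       # samples per cycle (illustrative)
fs = N * f0

# Hypothetical test signal: magnitude steps from 1.0 to 1.2 pu after two cycles
t = np.arange(4 * N) / fs
x = np.where(t < 2 / f0, 1.0, 1.2) * np.cos(2 * np.pi * f0 * t)

def one_cycle_phasor(window):
    """Full-cycle DFT estimate of the fundamental phasor (peak magnitude)."""
    n = np.arange(len(window))
    return 2.0 / len(window) * np.sum(window * np.exp(-2j * np.pi * n / len(window)))

# Slide the one-cycle window in quarter-cycle steps: 4 * 60 = 240 phasors/s,
# yet each value is still an average over a full cycle of data.
for start in range(0, len(x) - N + 1, N // 4):
    ph = one_cycle_phasor(x[start:start + N])
    print(f"t = {start / fs * 1e3:5.1f} ms   |V| = {abs(ph):.3f} pu")
```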

Nevertheless, leaping rather than stepping up can unlock new applications that may even become disruptive. Moving the sampling rate from a couple of kilohertz into the megahertz range is such a leap. Applications like travelling wave fault location and fault detection become feasible this way, opening up new approaches to power system protection.
But in this case, the suggestion is not that an existing application is done a bit better just because of a bigger number in the spec sheet. Something genuinely different became possible and was done.
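
As a sketch of why this leap matters, consider the textbook double-ended travelling wave method: the fault position follows from the difference of the wavefront arrival times at the two line ends, and one sample interval of timing uncertainty translates directly into location uncertainty. The line length, propagation velocity, and arrival times below are illustrative assumptions.

```python
# Double-ended travelling wave fault location - a sketch of the textbook formula
# with illustrative values, not field data.
LINE_LENGTH_KM = 100.0     # assumed line length
V_KM_PER_US = 0.294        # assumed propagation velocity, roughly 98 % of the speed of light

def fault_distance_from_a(t_a_us: float, t_b_us: float) -> float:
    """Distance from terminal A, given the arrival times (microseconds, on a
    common time base) of the first wavefront at terminals A and B."""
    return (LINE_LENGTH_KM + V_KM_PER_US * (t_a_us - t_b_us)) / 2

# A fault 30 km from A: the wavefront needs about 102 us to A and 238 us to B.
print(round(fault_distance_from_a(102.04, 238.10), 1))   # ~30.0 km

# One sample interval of timing error maps into a location error of v * dt / 2:
for fs_mhz in (0.01, 1.0, 10.0):
    dt_us = 1.0 / fs_mhz
    print(f"{fs_mhz:5.2f} MHz sampling -> ~{V_KM_PER_US * dt_us / 2 * 1000:7.1f} m per sample")
```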
