Maria Montero

How to analyze data from a custom PCB sensor subsystem

Learn a method to analyze data from a custom precision sensor system, translating the sensor data into usable noise measurement information.

I recently designed a high precision inclinometer subsystem that is so sensitive to environmental forces that it needed a custom casing on a granite slab to function properly.

Throughout the design process, I have presented the bill of materials, schematic, PCB layout, case layout, and firmware. I have also gone through a test and measurement phase to characterize the noise that the board generates.

My last step in this process is to analyze the data that I can collect from my subsystem. This article looks at the data captured off the board and shows how I chose to visualize the data.

Custom finished PCB

For more information on the rest of the project, see the links below:

Data analysis

The LTC2380IDE-24, the Successive Approximation Register (SAR) Analog-to-Digital Converter (ADC) that I chose for my design, has built-in data averaging that is easy to use. The result of each conversion is stored in internal memory and combined with the previous results until an SPI transaction occurs.

To average two results, toggle the CNV pin to logic high twice before reading the data. To average 65,535 results, toggle the CNV pin 65,535 times before reading the data.
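The toggle-then-read scheme above can be sketched in host-side pseudocode. This is a minimal illustration, not the board's actual firmware; `pulse_cnv` and `spi_read` are hypothetical helpers standing in for whatever drives the CNV pin and the SPI bus:

```python
# Sketch of the CNV-toggle averaging scheme: pulse CNV once per conversion,
# then a single SPI read returns the accumulated (averaged) result.
# pulse_cnv() and spi_read() are assumed hardware-access callables.

def read_averaged(n_averages, pulse_cnv, spi_read):
    """Trigger n_averages conversions, then read the combined result.

    The ADC accumulates conversions internally; the SPI transaction
    returns the data and resets the accumulator for the next sample.
    """
    for _ in range(n_averages):
        pulse_cnv()      # each CNV pulse starts one conversion
    return spi_read()    # reading out ends the averaging window
```

The key design point is that the averaging count is implicit: the host never writes a configuration register, it simply pulses CNV as many times as it wants conversions combined.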

The data that the sensor produces is 40 bits long: 24 bits for the sensor reading and 16 bits to indicate how many samples were averaged (note that the count is indexed to 0, that is, a value of 0 indicates that 1 sample was averaged, a value of 1 indicates that 2 samples were averaged, and so on). If you look at the data file attached at the end of this document, you will notice that I added an additional 16 bits to the data to track the measurement number (these numbers were not used in the analysis).
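A parser for the 40-bit record described above might look like the sketch below. The two's-complement interpretation of the 24-bit reading is my assumption (the article does not state the sign encoding explicitly); the 0-indexed count field follows the description in the text:

```python
# Parse one 40-bit record: 24-bit sensor reading + 16-bit average count.
# Sign handling is assumed two's complement; the count field is 0-indexed
# (a stored value of 0 means 1 sample was averaged).

def parse_record(bits):
    """bits: a 40-character ASCII string of '0' and '1' characters."""
    raw = int(bits[:24], 2)
    if raw & 0x800000:               # assumed two's-complement sign bit
        raw -= 1 << 24
    count = int(bits[24:40], 2) + 1  # convert 0-indexed field to a count
    return raw, count
```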

I transferred the data from the PCB as ASCII sequences of “0” and “1” and processed it on the computer with Mathematica. The first 24 bits were converted to decimal and multiplied by a scale factor of 15° / 2²³. The next 16 bits were converted to a decimal number and appear in parentheses on the left side of the footer of each subsequent graph as the number of repeated measurements. Each test consisted of 1,023 samples, and each sample was the average of n readings (n = 1, 2, 4, 8, …, 32768).
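The raw-code-to-degrees conversion is a single multiplication. A plain-Python stand-in for the Mathematica step (the scale factor 15°/2²³ is from the text; everything else here is illustrative):

```python
# Convert a raw 24-bit ADC code to degrees using the 15 deg / 2^23
# scale factor from the article.

SCALE_DEG = 15.0 / 2**23     # degrees per least-significant bit

def to_degrees(raw_code):
    """Map a decoded (signed) 24-bit reading to an angle in degrees."""
    return raw_code * SCALE_DEG
```

With this scaling, a full-scale positive code of 2²³ corresponds to 15°.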

All tests were performed consecutively in a single run without a significant pause between measurements.

Each test is presented with the same set of graphs and calculations. The mean and standard deviation are calculated from the raw data and used to create a probability density function. The raw data is also grouped into bins and displayed as a histogram. A scatter plot shows the data points after processing through an N-tap moving-average (FIR) filter. Finally, colored triangles indicate the maximum, mean + standard deviation, mean, mean − standard deviation, and minimum data points at three different scales (100%, 1%, 0.01%).
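The per-test summary described above can be sketched as follows. This is a plain-Python stand-in for the article's Mathematica processing, with assumed defaults for the bin count and filter length; the actual values used in the graphs are not stated here:

```python
# Summarize one test run: mean, standard deviation, min/max, histogram
# bins, and an N-tap moving-average (FIR) filter with equal weights.

from statistics import mean, stdev

def summarize(samples, n_taps=8, n_bins=20):
    mu, sigma = mean(samples), stdev(samples)
    lo, hi = min(samples), max(samples)
    # Histogram: count samples falling into n_bins equal-width bins.
    width = (hi - lo) / n_bins or 1.0
    hist = [0] * n_bins
    for x in samples:
        hist[min(int((x - lo) / width), n_bins - 1)] += 1
    # N-tap moving average: each output is the mean of n_taps inputs.
    filtered = [mean(samples[i:i + n_taps])
                for i in range(len(samples) - n_taps + 1)]
    return {"mean": mu, "std": sigma, "min": lo, "max": hi,
            "hist": hist, "filtered": filtered}
```

The probability density function in the graphs is the normal distribution parameterized by the computed mean and standard deviation, overlaid on the histogram for comparison.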

We’ll take a look at the data first, and then we’ll discuss the significance of the results.

As you may recall from statistics class, the mean is the simple average of all measurements. The standard deviation provides an indication of spread. For our purposes, we would like the standard deviation to be as small as possible.