Material recognition using Triboelectric Nanogenerator-based sensing and machine learning

Experimental Setup

A triboelectric nanogenerator (TENG) is mounted on a motor that continuously pushes it against a fixed glass plate.

The experiment is repeated multiple times for different materials:

  • only the glass plate
  • thin film of Kapton
  • foil (bubble wrap)
  • piece of white foam
  • piece of black, rigid foam
  • piece of cloth from a medical mask

To achieve a (near) constant frequency between the different experiments, the voltage of the motor was always set to 8.2 V.

For each material, 60 one-minute voltage and current recordings were made using a Source Measure Unit (SMU), controlled by the m-teng software.

Experimental Setup: A TENG on a motor pushing against a glass plate

Development of a capable machine learning model

The goal of the project is to determine if the voltage and current output of the TENG can be used to recognize which material it is being pushed against.

Since the data is sequential, an LSTM was chosen as the model. The implementation uses python-pytorch.

Data preparation

  1. Resample/interpolate the data so that all voltage and current readings have the same time interval between them
  2. Split each recording into pieces of the same length of n points (using DataSplitter(n))
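The two steps above could look roughly like this (a minimal NumPy sketch; `constant_interval` and `split_recording` are illustrative stand-ins for the `ConstantInterval` transform and `DataSplitter` mentioned later, not the project's actual code):

```python
import numpy as np

def constant_interval(t, v, interval=0.01):
    """Resample a recording to a constant time step via linear interpolation."""
    t_new = np.arange(t[0], t[-1], interval)
    return t_new, np.interp(t_new, t, v)

def split_recording(values, n):
    """Split a recording into consecutive pieces of n points, dropping the remainder."""
    n_pieces = len(values) // n
    return np.split(np.asarray(values)[:n_pieces * n], n_pieces)
```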

Phase 1

For the initial model, 32 different combinations of model and training settings were trained and evaluated on the data.

Common Settings

  • number of features: 1 - only voltage data
  • train/test data split Ratio: 0.7
  • common random state/seed 42 for all shufflers (to get the same training and validation data every time)
  • number of epochs: 60
  • loss function: CrossEntropyLoss

Varied Settings

  • num_layers:
    1. 2 (331 points)
    2. 4 (165 points)
  • hidden_size:
    1. 5 (251 points)
    2. 10 (245 points)
  • bidirectional:
    1. True (299 points)
    2. False (197 points)
  • scheduler:
    1. ExponentialLR(gamma=0.95, optimizer=Adam (initial_lr: 0.1, ...), ...) (307 points)
    2. ExponentialLR(gamma=0.95, optimizer=Adam (initial_lr: 0.25, ...), ...) (189 points)
  • splitter:
    1. DataSplitter(100) (266 points)
    2. DataSplitter(300) (230 points)

The settings were ranked by assigning each model's rank as points to its settings. For example, the best of the 32 models was bidirectional, so bidirectional = True received 32 points from it.
Most of the models achieved validation accuracies of ~20%, which is only slightly better than chance (~16.7% for six labels). Four models achieved > 70%, and one reached 80%.
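The scoring scheme can be illustrated with a toy example (a hypothetical helper, not from the project's codebase):

```python
from collections import defaultdict

def score_settings(runs):
    """runs: list of (validation_accuracy, settings_dict), one entry per trained model.
    The worst model contributes 1 point per setting, the best contributes len(runs)."""
    points = defaultdict(int)
    ranked = sorted(runs, key=lambda r: r[0])  # worst to best
    for rank, (_, settings) in enumerate(ranked, start=1):
        for key, value in settings.items():
            points[(key, value)] += rank
    return dict(points)
```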

Training summary

Accuracy, learning rate and loss for each epoch

The plot shows the accuracy, learning rate and loss for each epoch during the training of the best-performing model. The accuracy only starts to rise significantly once the learning rate drops below 0.04, so that value should be used as the initial learning rate in the future. The model achieved a validation accuracy of 80.81% (on data it had not seen during training), which could probably be improved a bit further by increasing the number of epochs.

Validation

The plot shows the predictions for each label for the best-performing model. One row contains all predictions for the corresponding label. The model has trouble differentiating between Kapton and glass: it often predicts glass when the correct material is Kapton. The Kapton film was very thin, and the voltage signals looked very similar for both, which might explain this. Interestingly, the model does not confuse glass for Kapton, only Kapton for glass. This suggests that the recordings for the Kapton film were not very stable.

Predictions for each label
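The per-label prediction matrix described above (rows = true label, columns = predicted label) can be computed like this (a generic sketch, not the project's plotting code):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes=6):
    """cm[i, j] counts how often true label i was predicted as label j."""
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm
```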

Interpretation

  • more layers and larger hidden size not necessarily better (however: best model had hidden_size = 10 and num_layers = 4)
  • initial learning rate should be reduced to about 0.04
  • bidirectional LSTM seems superior in this case
  • sequence lengths of 100-300 are optimal

Phase 2

Phase 2 applied the lessons from Phase 1, i.e. always using a bidirectional LSTM and a lower initial learning rate. With those, the best model from Phase 2 achieved an accuracy of 90.1%, using 4 layers, a hidden size of 8 and a sequence length of 200.

Normalizing the data

For testing purposes, some models were trained on data where the voltage readings were normalized to the range 0 to 1. This makes the input independent of the voltage amplitude, which varies when the sensor is pushed into the material with a different force; this is very likely to occur even when using the same setup with the same material again.
However, all models trained on the normalized data achieved accuracies of only ~15%, which is less than pure chance.
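Min-max normalization of a single recording could look like this (a sketch; the project may have normalized differently, e.g. per split piece rather than per recording):

```python
import numpy as np

def normalize_01(values):
    """Scale a voltage sequence to the range [0, 1], discarding amplitude information."""
    v = np.asarray(values, dtype=float)
    return (v - v.min()) / (v.max() - v.min())
```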


Phase 3

In the third phase, I used 4 different scheduler settings and a larger batch size of 64, which led to the best-performing models so far. The best one achieved 94.78% validation accuracy and was at least 90% accurate for every individual label. It used a StepLR scheduler instead of the ExponentialLR used until then.


Final model specs

Model parameters:

  • num_features = 2
  • num_layers = 3
  • hidden_size = 8
  • bidirectional = True

Training data:

  • transforms = [ConstantInterval(interval=0.01)]
  • splitter = DataSplitter(100)
  • labels = ['cloth', 'foam', 'foil', 'glass', 'kapton', 'rigid_foam']

Training info:

  • scheduler = StepLR(step_size=8, gamma=0.4, optimizer=Adam(initial_lr: 0.03, ...), last_epoch=0)
  • loss_func = CrossEntropyLoss
  • num_epochs = 80
  • batch_size = 64
  • n_predictions = 15241
  • final accuracy = 96.42%
  • highest accuracy = 96.43%
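With StepLR(step_size=8, gamma=0.4), the effective learning rate drops by a factor of 0.4 every 8 epochs. A quick illustration of that schedule (a plain-Python formula equivalent to what PyTorch's torch.optim.lr_scheduler.StepLR produces):

```python
def step_lr(epoch, initial_lr=0.03, step_size=8, gamma=0.4):
    """Learning rate that a StepLR scheduler yields at a given epoch."""
    return initial_lr * gamma ** (epoch // step_size)
```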

Validation info:

  • n_predictions = 6628
  • accuracy = 94.78%

Accuracy, learning rate and loss for each epoch

Predictions for each label

Source code

The complete source code for the data collection script, as well as the machine learning code (model training and evaluation), can be found on my Gitea.