Publication details

IMPROVING MACHINE LEARNING PREDICTION OF CONSTRUCTS: MENTAL FATIGUE

Authors

FORMÁNEK Vojtěch, JUŘÍK Vojtěch

Year of publication 2024
Type Article in Proceedings
Conference INPACT24 Proceedings: Psychological Applications and Trends 2024
MU Faculty or unit

Faculty of Arts

Citation
Web https://inpact-psychologyconference.org/wp-content/uploads/2024/05/202401OP062.pdf
DOI http://dx.doi.org/10.36315/2024inpact062
Keywords Machine learning; fatigue; generalizability; reliability
Description Mental fatigue is a psychophysiological state that plays an important role in many domains of human-machine interaction, where it may increase the risk of injury or accidents. Preventing such threats to life and property calls for novel techniques that combine psychological and computational approaches. Previous research has focused on training machine learning (ML) models on different types of fatigue input data and experimental settings, and more recently on the generalizability of these models. However, current ML development struggles with several issues, such as unclear analysis of what a model is actually learning: when trained on data that are only partially correctly labeled, it can learn artifacts of the dataset construction instead of the construct state. The psychometric measures used to label the data usually have imperfect or questionable reliability, so even when administered correctly they may label some data incorrectly. The most widely used method for labeling mental fatigue states, subjective scales, also has limited construct validity. In this contribution, an iterative procedure based on generalizability theory is proposed to improve both the reliability and the validity of the labeling. The label is constructed from components that are already present in the dataset and relevant to the construct being predicted. In the case of mental fatigue, a subjective scale, a performance decrease, and an environmental reference are extracted from 7 datasets collected at different sites with several fatigue-induction methods, all using heart rate variability as input data. The quality of the combinations and levels of the label is assessed by analyzing unwanted variance components and by using a generalizability-theory equivalent of reliability. With this procedure, components can be added to a label and the resulting labels can be compared directly. Because the process is iterative, labels can be dynamically adjusted as new data are added. The procedure adds flexibility to dataset design and eases the integration of datasets, even those not originally intended for ML. As a result, the variability and amount of data available to researchers increases, promoting its use beyond ML-based mental fatigue prediction.
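
Note: the abstract does not give the exact formulas used to assess label quality. The following minimal Python sketch shows one standard generalizability-theory calculation that is consistent with the description, assuming a fully crossed persons x components design in which each row is a participant and each column is one rescaled label component (e.g., subjective scale, performance decrease, environmental reference). The data layout, component names, and the choice of the relative (E-rho^2) and absolute (Phi) coefficients are illustrative assumptions, not the authors' implementation.

# Minimal sketch: variance components and generalizability coefficients for a
# composite fatigue label. Assumes a crossed persons x components design with
# one observation per cell; all specifics below are illustrative assumptions.

import numpy as np

def g_study(scores: np.ndarray):
    """Estimate variance components for a crossed persons x components design."""
    n_p, n_c = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    comp_means = scores.mean(axis=0)

    # Sums of squares from a two-way ANOVA without replication.
    ss_p = n_c * np.sum((person_means - grand) ** 2)
    ss_c = n_p * np.sum((comp_means - grand) ** 2)
    ss_tot = np.sum((scores - grand) ** 2)
    ss_res = ss_tot - ss_p - ss_c            # person x component interaction + error

    ms_p = ss_p / (n_p - 1)
    ms_c = ss_c / (n_c - 1)
    ms_res = ss_res / ((n_p - 1) * (n_c - 1))

    # Expected-mean-square solutions for the variance components.
    var_res = ms_res
    var_p = max((ms_p - ms_res) / n_c, 0.0)  # person (construct) variance
    var_c = max((ms_c - ms_res) / n_p, 0.0)  # component (unwanted) variance
    return var_p, var_c, var_res, n_c

def g_coefficients(scores: np.ndarray):
    """Relative (E-rho^2) and absolute (Phi) generalizability of the composite label."""
    var_p, var_c, var_res, n_c = g_study(scores)
    e_rho2 = var_p / (var_p + var_res / n_c)
    phi = var_p / (var_p + (var_c + var_res) / n_c)
    return e_rho2, phi

if __name__ == "__main__":
    # Toy data: 40 participants x 3 label components (illustrative only).
    rng = np.random.default_rng(0)
    true_state = rng.normal(size=(40, 1))
    scores = true_state + 0.5 * rng.normal(size=(40, 3))
    print(g_coefficients(scores))

Adding a candidate component to the score matrix and recomputing the coefficients is one way such a label could be compared against alternatives, mirroring the iterative comparison of labels described above.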
Related projects:
