Multi-Modal Learning from Video, Eye Tracking, and Pupillometry for Operator Skill Characterization in Clinical Fetal Ultrasound

Abstract

This paper presents a novel multi-modal learning approach for automated skill characterization of obstetric ultrasound operators using heterogeneous spatio-temporal sensory cues, namely scan video, eye-tracking data, and pupillometric data, acquired in the clinical environment. We address pertinent challenges, such as combining heterogeneous, small-scale, and variable-length sequential datasets, that arise when training deep convolutional neural networks in real-world scenarios. We propose spatial encoding for multi-modal analysis using sonography standard plane images, spatial gaze maps, gaze trajectory images, and pupillary response images. We present and compare five multi-modal learning network architectures using late, intermediate, hybrid, and tensor fusion. We build models for the Heart and the Brain scanning tasks, and performance evaluation suggests that multi-modal learning networks outperform uni-modal networks, with the best-performing model achieving accuracies of 82.4% (Brain task) and 76.4% (Heart task) on the operator skill classification problem.
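As a rough, hypothetical sketch of the intermediate (feature-level) fusion strategy among those the paper compares, the PyTorch snippet below encodes each modality image (standard plane, gaze map or trajectory, pupillary response) with its own small CNN, concatenates the feature vectors, and classifies operator skill. All names (IntermediateFusionNet, small_cnn), layer sizes, and input resolutions are illustrative assumptions, not the authors' actual configuration.

import torch
import torch.nn as nn


def small_cnn(out_dim: int = 128) -> nn.Sequential:
    # Toy per-modality encoder: maps a 1x64x64 image to a feature vector.
    # Depth and channel counts are assumptions for illustration only.
    return nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),                      # 64 -> 32
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),                      # 32 -> 16
        nn.AdaptiveAvgPool2d(1),              # global average pooling -> 32-d
        nn.Flatten(),
        nn.Linear(32, out_dim), nn.ReLU(),
    )


class IntermediateFusionNet(nn.Module):
    # Encode each modality separately, concatenate the features
    # (intermediate fusion), then classify operator skill.
    def __init__(self, feat_dim: int = 128, n_classes: int = 2):
        super().__init__()
        self.video_enc = small_cnn(feat_dim)  # standard-plane image branch
        self.gaze_enc = small_cnn(feat_dim)   # gaze map / trajectory branch
        self.pupil_enc = small_cnn(feat_dim)  # pupillary response branch
        self.classifier = nn.Sequential(
            nn.Linear(3 * feat_dim, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, video, gaze, pupil):
        fused = torch.cat(
            [self.video_enc(video), self.gaze_enc(gaze), self.pupil_enc(pupil)],
            dim=1,
        )
        return self.classifier(fused)


if __name__ == "__main__":
    net = IntermediateFusionNet()
    x = torch.randn(4, 1, 64, 64)      # dummy batch for each modality
    logits = net(x, x.clone(), x.clone())
    print(logits.shape)                # torch.Size([4, 2])

For contrast, late fusion would instead combine per-modality predictions (e.g., averaging logits from three independent classifiers), while tensor fusion combines feature vectors via an outer product before classification; the concatenation above is the simplest feature-level variant.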

Publication
IEEE 18th International Symposium on Biomedical Imaging (ISBI) 2021

BibTeX

@INPROCEEDINGS{sharma_multi_2021,
  author={Sharma, Harshita and Drukker, Lior and Papageorghiou, Aris T. and Noble, J. Alison},
  booktitle={2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)},
  title={Multi-Modal Learning from Video, Eye Tracking, and Pupillometry for Operator Skill Characterization in Clinical Fetal Ultrasound},
  year={2021},
  pages={1646--1649},
  doi={10.1109/ISBI48211.2021.9433863}
}
