Bagci Lab

Eye Tracking

Eye tracking (ET) has been used extensively for research in marketing, psychology, and medical image interpretation. It has also been used in medical imaging research to elucidate differences in how expert and novice radiologists interpret images.

Recent Highlighted Work

GazeSAM: What You See is What You Segment

Leveraging the real-time interactive prompting feature of the recently proposed Segment Anything Model (SAM), we present the GazeSAM system to enable users to collect target segmentation masks by simply looking at the region of interest. GazeSAM tracks users' eye gaze and utilizes it as the input prompt for SAM, generating target segmentation masks in real time.
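The pipeline above can be sketched in a few lines: smooth the raw gaze stream into a fixation point, then hand that point to a SAM-style predictor as a foreground point prompt. This is a minimal illustration, not the GazeSAM implementation; `toy_predictor` is a stand-in for the real model, and the function names are hypothetical.

```python
import numpy as np

def fixation_from_gaze(gaze_samples, window=10):
    """Average the most recent raw gaze samples into a single
    fixation point to use as the segmentation prompt."""
    recent = np.asarray(gaze_samples[-window:], dtype=float)
    return recent.mean(axis=0)

def segment_from_gaze(image, gaze_samples, predictor):
    """Turn the user's gaze into a point prompt and ask a
    SAM-style predictor for a mask (1 = foreground label)."""
    point = fixation_from_gaze(gaze_samples)
    return predictor(image,
                     point_coords=point[None, :],
                     point_labels=np.array([1]))

def toy_predictor(image, point_coords, point_labels):
    """Stand-in for a real SAM predictor: 'segments' the pixels
    within a fixed radius of the prompted point."""
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    cx, cy = point_coords[0]
    return (xx - cx) ** 2 + (yy - cy) ** 2 < 20 ** 2

rng = np.random.default_rng(0)
image = np.zeros((64, 64))
# Simulated noisy gaze samples clustered around pixel (30, 30).
gaze = [(30 + rng.normal(), 30 + rng.normal()) for _ in range(30)]
mask = segment_from_gaze(image, gaze, toy_predictor)
```

With the real Segment Anything Model, the same point-prompt idea applies: the fixation point is passed as `point_coords` with a positive `point_labels` entry, and the model returns the mask in real time.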

Research in Biomedical Imaging Analysis

Why Eye Tracking?

Eye Tracking for Deep Learning Segmentation Using Convolutional Neural Networks

Deep learning with convolutional neural networks (CNNs) has experienced tremendous growth in multiple healthcare applications and has been shown to have high accuracy in semantic segmentation of medical (e.g., radiology and pathology) images. However, a key barrier in the required training of CNNs is obtaining large-scale and precisely annotated imaging data. We sought to address the lack of annotated data with eye tracking technology.
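One simple way gaze data can stand in for manual annotation is to rasterize a radiologist's fixation points into a soft attention map and threshold it into a binary pseudo-label for CNN training. The sketch below assumes this weak-labeling approach; the function names and parameters are illustrative, not the paper's implementation.

```python
import numpy as np

def gaze_heatmap(fixations, shape, sigma=5.0):
    """Rasterize (x, y) fixation points into a normalized soft
    attention map by summing Gaussian bumps at each fixation."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    heat = np.zeros(shape, dtype=float)
    for x, y in fixations:
        heat += np.exp(-((xx - x) ** 2 + (yy - y) ** 2)
                       / (2 * sigma ** 2))
    return heat / heat.max()

def weak_mask(fixations, shape, threshold=0.5, sigma=5.0):
    """Threshold the heatmap into a binary pseudo-label that can
    substitute for a hand-drawn segmentation mask."""
    return gaze_heatmap(fixations, shape, sigma) >= threshold

# Two fixations near pixel (20, 20) yield a small foreground blob.
label = weak_mask([(20, 20), (22, 21)], (64, 64))
```

Such pseudo-labels are noisier than hand-drawn contours, which is why gaze-derived masks are typically treated as weak supervision rather than ground truth.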

C-CAD: Collaborative Computer Aided Diagnosis

In this study, we aim to develop a paradigm-shifting CAD system, called collaborative CAD (C-CAD), that unifies CAD and eye-tracking systems in realistic radiology room settings. We first developed an eye-tracking interface providing radiologists with a real radiology reading room experience.

Other Publications

  1. Stember, Joseph N., Haydar Celik, E. Krupinski, Peter D. Chang, Simukayi Mutasa, Bradford J. Wood, A. Lignelli et al. “Eye tracking for deep learning segmentation using convolutional neural networks.” Journal of Digital Imaging 32 (2019): 597-604.
  2. Stember, Joseph N., Haydar Celik, David Gutman, Nathaniel Swinburne, Robert Young, Sarah Eskreis-Winkler, Andrei Holodny et al. “Integrating eye tracking and speech recognition accurately annotates MR brain images for deep learning: proof of principle.” Radiology: Artificial Intelligence 3, no. 1 (2020): e200047.
  3. Khosravan, Naji, Haydar Celik, Baris Turkbey, Ruida Cheng, Evan McCreedy, Matthew McAuliffe, Sandra Bednarova et al. “Gaze2Segment: a pilot study for integrating eye-tracking technology into medical image segmentation.” In Medical Computer Vision and Bayesian and Graphical Models for Biomedical Imaging: MICCAI 2016 International Workshops, MCV and BAMBI, Athens, Greece, October 21, 2016, Revised Selected Papers 8, pp. 94-104. Springer International Publishing, 2017.
  4. Celik, Haydar, Baris Ismail Turkbey, Peter Choyke, Ruida Cheng, Evan McCreedy, Matthew McAuliffe, Naji Khosravan, Ulas Bagci, and Bradford J. Wood. “Eye Tracking System for Prostate Cancer Diagnosis Using Multi-Parametric MRI.”
  5. Wang, Bin, Hongyi Pan, Armstrong Aboah, Zheyuan Zhang, Ahmet Cetin, Drew Torigian, Baris Turkbey, Elizabeth Krupinski, Jayaram Udupa, and Ulas Bagci. “GazeGNN: A Gaze-Guided Graph Neural Network for Disease Classification.” arXiv preprint arXiv:2305.18221 (2023).