| Zhao et al., 2022 [31] | China | 5290 images from 1711 patients with chronic atrophic gastritis | High-quality clear images | U-Net | The sensitivity, specificity, AUC (95% CI), and Kappa value of the diagnostic model were 92.73%, 92.24%, 0.932 (0.916-0.948), and 0.796, respectively. | The U-Net deep learning-based diagnostic model for chronic atrophic gastritis showed high accuracy and good agreement with pathological diagnosis. |
| Wu et al., 2021 [32] | China | 5496 images from 928 patients | Electronic gastroscopy images and complete video recordings of gastroscopy examinations | CNN | In the human-machine classification competition, the model achieved a sensitivity of 90.33% and a positive predictive value of 95.41%. Lesion localization accuracy decreased as the overlapping area increased. Video verification showed a sensitivity of 89.5% for identifying early gastric cancer and 92.3% for identifying non-early gastric cancer. | The model demonstrated good recognition of static images of early gastric cancer and benign lesions, accurate localization of gastric cancer lesions, and real-time dynamic identification of early gastric cancer. |
| Goto et al., 2022 [33] | Japan | 500 training images, 200 test images | White light imaging | AI classifier | Accuracy, sensitivity, specificity, and F1 score for the AI classifier vs. endoscopists vs. a combined AI-plus-expert approach were 77.0% vs. 72.6% vs. 78.0%, 76.0% vs. 53.6% vs. 76.0%, 78.0% vs. 91.6% vs. 80.0%, and 0.768 vs. 0.662 vs. 0.776, respectively. | Collaboration between artificial intelligence and endoscopic experts improved diagnostic capability for determining the depth of early gastric cancer invasion. |
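The performance measures reported across these studies (sensitivity, specificity, positive predictive value, accuracy, F1 score, and the Kappa value) are all derived from a 2x2 confusion matrix comparing the model's classification against the pathological reference standard. A minimal sketch, using hypothetical counts that are not taken from any of the cited studies:

```python
# Illustrative only: computes the diagnostic metrics reported in the table
# from confusion-matrix counts (model prediction vs. pathological reference).

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Derive common diagnostic performance metrics from 2x2 counts."""
    total = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)   # true positive rate (recall)
    specificity = tn / (tn + fp)   # true negative rate
    ppv = tp / (tp + fp)           # positive predictive value (precision)
    accuracy = (tp + tn) / total
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_observed = accuracy
    p_expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total**2
    kappa = (p_observed - p_expected) / (1 - p_expected)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "ppv": ppv,
        "accuracy": accuracy,
        "f1": f1,
        "kappa": kappa,
    }

# Hypothetical counts for illustration, not study data
m = diagnostic_metrics(tp=90, fp=10, fn=10, tn=90)
print({k: round(v, 3) for k, v in m.items()})
# → {'sensitivity': 0.9, 'specificity': 0.9, 'ppv': 0.9,
#    'accuracy': 0.9, 'f1': 0.9, 'kappa': 0.8}
```

Note that a Kappa value near 0.8, as in Zhao et al. [31], indicates substantial agreement beyond chance, which is a stricter criterion than raw accuracy when class prevalence is imbalanced.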