Online Poster Portal

  • Author

    Tara Shahrvini
  • Discovery PI

    Dr. Hannah Milch

  • Project Co-Author

    Dr. Melissa Joines

  • Abstract Title

    Radiologist versus Artificial Intelligence False Positives in Screening Mammography

  • Discovery AOC Petal or Dual Degree Program

    Master of Business Administration, UCLA Anderson School of Management

  • Abstract

    Area of concentration: MBA Dual Degree (research abstract)

    Specialty: Radiology

    Keywords: screening mammography, false positives, artificial intelligence

    Background: Previous research on artificial intelligence (AI) integration with breast radiology has focused on reducing physician workload while maintaining care quality. Reducing false positive recall rates represents a prime target for advancement of value-based care.

    Objective: To identify differences in radiologist versus AI false positives in a real-world breast cancer screening population.

    Methods: In this IRB-approved retrospective study, a leading commercial AI tool was applied to 3,183 digital breast tomosynthesis (DBT) screening mammograms performed from 2013 to 2017. The AI tool assigns a score from 1 to 10, with >87% of malignancy cases receiving a score of 10. False positives were defined as exams that received an AI score of 10 (AI cohort) or a BI-RADS category 0 (radiologist cohort) but did not result in a cancer diagnosis within one year. Patients with implants were excluded.
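    The cohort definitions above can be sketched as a simple classification rule. This is an illustrative sketch only; the field names (ai_score, birads, cancer_within_1yr, has_implants) are assumptions, not the study's actual data schema.

    ```python
    def classify_exam(ai_score, birads, cancer_within_1yr, has_implants):
        """Return the false-positive cohort(s) an exam falls into, per the
        abstract's definitions: AI score of 10 (AI cohort) and/or BI-RADS
        category 0 (radiologist cohort), with no cancer diagnosis within
        one year. Exams with implants are excluded from the study."""
        if has_implants or cancer_within_1yr:
            return set()
        cohorts = set()
        if ai_score == 10:
            cohorts.add("AI")
        if birads == 0:
            cohorts.add("radiologist")
        return cohorts
    ```

    An exam flagged by both readers (AI score 10 and BI-RADS 0, no cancer within a year) would fall into both cohorts, corresponding to the overlap group reported in the Results.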

    Results: There were 308 false positives in the radiologist cohort and 304 in the AI cohort, with 74 overlapping (i.e., exams that received both a BI-RADS category 0 and an AI score of 10). Prior breast cancer history was more prevalent in the AI cohort than in the radiologist cohort (13% vs 4%, p<.001). The AI and overlap cohorts had more prior surgical procedures (38-39% vs 15%, p<.001). The radiologist cohort had a greater percentage of dense breasts than the AI cohort (p<.001). The mean number of findings flagged per exam was greater in the AI cohort (3.4 vs 1.4 in the radiologist cohort, p<.001). Patients in the AI cohort were more likely to develop future breast cancer (9% vs 3% in the radiologist cohort, p=.002).
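    The overlap counts reported above imply that only about a quarter of each cohort's false positives were shared; the arithmetic, using the counts from the abstract, is:

    ```python
    # Counts taken directly from the Results; percentages derived here
    # for illustration only.
    radiologist_fp = 308
    ai_fp = 304
    overlap = 74

    pct_of_radiologist = round(100 * overlap / radiologist_fp, 1)  # share of radiologist FPs also flagged by AI
    pct_of_ai = round(100 * overlap / ai_fp, 1)                    # share of AI FPs also flagged by radiologists
    ```

    Roughly 24% of each cohort's false positives overlapped, consistent with the conclusion that the two readers' false positives were largely distinct.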

    Conclusions: AI and radiologist false positives showed minimal overlap. AI false positives were more frequent in women with less dense breasts and a history of breast cancer, likely reflecting the flagging of benign post-surgical change. Notably, the AI cohort was more likely to develop future breast cancer. Further work is underway to characterize the specific imaging findings marked by AI versus radiologists.