Mammography screenings are performed regularly on women in order to detect early signs of breast cancer, which is the most common form of cancer. During an exam, X-ray images (called mammograms) are taken from two different angles and reviewed by a radiologist. If the radiologist finds a suspicious lesion in one view, it is confirmed by locating the corresponding region in the other view. Finding the corresponding region is a non-trivial task, due to the different image projections of the breast and the different compression angles used during the exam. This thesis explores the possibility of using deep learning, a data-driven approach, to solve the corresponding regions problem. Specifically, a convolutional neural network (CNN) called U-net is developed, trained on scanned mammograms, and evaluated on both scanned and digital mammograms. A model-based method called the arc model is developed for comparison. Results show that the best U-net outperformed the arc model on all evaluated metrics and succeeded in finding the corresponding area 83.9% of the time, compared to 72.6%. Generalization to digital images was excellent, achieving an even higher score of 87.6%, compared to 83.5% for the arc model.
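The abstract names the architecture (a U-net CNN producing a corresponding region in the other view) but not its configuration. The sketch below is only a minimal illustration of the general U-Net encoder-decoder pattern with skip connections, written in PyTorch; the channel counts, depth, input size, and output interpretation are assumptions and do not reproduce the thesis's actual network.

```python
# Minimal U-Net-style encoder-decoder, given purely as an illustrative sketch
# of the architecture family the abstract refers to. All hyperparameters
# (channel counts, depth, input size) are assumptions, not the thesis's model.
import torch
import torch.nn as nn


def double_conv(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class TinyUNet(nn.Module):
    """Two-level U-Net: contracting path, bottleneck, expanding path, skip connection."""

    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.enc1 = double_conv(in_ch, 32)
        self.enc2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = double_conv(64, 32)   # 64 = 32 upsampled + 32 from the skip
        self.head = nn.Conv2d(32, out_ch, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                           # full-resolution features
        e2 = self.enc2(self.pool(e1))               # downsampled bottleneck features
        d1 = self.up(e2)                            # upsample back to full resolution
        d1 = self.dec1(torch.cat([d1, e1], dim=1))  # concatenate skip connection
        return self.head(d1)                        # per-pixel logits over the image


if __name__ == "__main__":
    # One grayscale 256x256 dummy "mammogram view"; the output could be read
    # as a per-pixel map from which a corresponding region is extracted.
    net = TinyUNet()
    x = torch.randn(1, 1, 256, 256)
    print(net(x).shape)  # torch.Size([1, 1, 256, 256])
```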
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:liu-190314 |
Date | January 2022 |
Creators | Eriksson, Emil |
Publisher | Linköpings universitet, Institutionen för medicinsk teknik |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |