Historical photographs can generate significant cultural and economic value, but their subjects often go unidentified. When analyzed correctly, however, visual clues in these photographs can open up new directions for identifying unknown subjects. For example, many 19th century photographs contain painted backdrops that can be mapped to a specific photographer or location, but this research process is often manual, time-consuming, and unsuccessful. AI-based computer vision algorithms could aid researchers by automatically identifying painted backdrops or photographers, or by clustering photos with similar backdrops. However, it is unknown which computer vision algorithms are feasible for painted backdrop identification or which techniques work better than others. We present three studies evaluating four types of image embeddings – Inception, CLIP, MAE, and pHash – across a variety of metrics and techniques. We find that a workflow using CLIP embeddings combined with a background classifier and simulated user feedback performs best. We also discuss implications for human-AI collaboration in visual analysis and new possibilities for digital humanities scholarship.

Master of Science

Historical photographs can generate significant cultural and economic value, but their subjects often go unidentified. When analyzed correctly, however, clues in these photographs can open up new directions for identifying unknown subjects. For example, many 19th century photographs contain painted backdrops that can be mapped to a specific photographer or location, but this research process is often manual, time-consuming, and unsuccessful. Artificial Intelligence-based computer vision techniques could aid researchers by automatically identifying painted backdrops or photographers, or by grouping together photos with similar backdrops. However, it is unknown which computer vision techniques are feasible for painted backdrop identification or which techniques work better than others. We present three studies comparing four types of computer vision techniques – Inception, CLIP, MAE, and pHash – across a variety of metrics. We find that a workflow that combines the CLIP computer vision technique, software that automatically classifies photo backgrounds, and simulated human feedback performs best. We also discuss implications for collaboration between humans and AI in analyzing images, and new possibilities for academic research combining technology and history.
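For readers curious how backdrop similarity might be explored computationally, the sketch below shows one way to embed photographs with CLIP and compare them by cosine similarity. It is a minimal illustration assuming the Hugging Face transformers implementation of CLIP; the photo_paths list and the model checkpoint are illustrative placeholders, not the thesis's actual pipeline.

# Minimal sketch: embed photos with CLIP and compare backdrops by cosine similarity.
# Assumes the Hugging Face `transformers` CLIP implementation; `photo_paths` is a
# hypothetical list of image file paths, not code from the thesis itself.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

photo_paths = ["photo_001.jpg", "photo_002.jpg", "photo_003.jpg"]  # placeholder paths
images = [Image.open(p).convert("RGB") for p in photo_paths]

inputs = processor(images=images, return_tensors="pt")
with torch.no_grad():
    embeddings = model.get_image_features(**inputs)

# Normalize so the dot product equals cosine similarity.
embeddings = embeddings / embeddings.norm(dim=-1, keepdim=True)
similarity = embeddings @ embeddings.T

# Photos whose painted backdrops look alike should score close to 1.0.
for i, j in [(0, 1), (0, 2)]:
    print(f"{photo_paths[i]} vs {photo_paths[j]}: {similarity[i, j]:.3f}")

In practice, such similarity scores could feed the clustering and retrieval steps the abstract describes, with a background classifier and human feedback refining which matches are worth a researcher's attention.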
Identifier | oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/115739 |
Date | 11 July 2023 |
Creators | Dodson, Terryl Dwayne |
Contributors | Computer Science and Applications, Luther, Kurt, Quigley, Paul, Huang, Lifu |
Publisher | Virginia Tech |
Source Sets | Virginia Tech Theses and Dissertations |
Language | English |
Detected Language | English |
Type | Thesis |
Format | ETD, application/pdf |
Rights | In Copyright, http://rightsstatements.org/vocab/InC/1.0/ |