Recent developments in surveying and mapping technologies have greatly enhanced our ability to model and analyze both outdoor and indoor environments. This research advances the traditional concept of digital twins—static representations of physical spaces—by integrating real-time data on human occupancy and movement to develop a dynamic digital twin. Utilizing the newly constructed mixed-use building at Virginia Tech as a case study, this research leverages 11 terrestrial lidar sensors to develop a dynamic digital model that continuously captures human activities within public spaces of the building.
Three distinct object detection methodologies were evaluated: deep learning models, OpenCV-based techniques, and Blickfeld's lidar perception software, Percept. The deep learning and OpenCV techniques analyzed projected 2D raster images, while Percept utilized real-time 3D point clouds to detect and track human movement. The deep learning approach, specifically the YOLOv5 model, demonstrated high accuracy with an F1 score of 0.879. In contrast, OpenCV methods, while less computationally demanding, showed lower accuracy and higher rates of false detections. Percept, operating on real-time 3D lidar streams, performed well but was susceptible to errors due to temporal misalignment.
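The projection step that feeds the deep learning and OpenCV detectors can be illustrated with a minimal sketch: binning a 3D point cloud into a top-down 2D grid that a 2D detector can treat like an image. The cell size and the count-per-cell encoding here are illustrative assumptions, not the thesis's actual rasterization parameters.

```python
import numpy as np

def rasterize_top_down(points, cell_size=1.0, bounds=None):
    """Project a 3D point cloud (N x 3, meters) onto a top-down 2D raster.

    Each cell stores the number of points falling into it; a 2D detector
    such as YOLOv5 or an OpenCV pipeline can then process the raster as an
    image. cell_size and the count encoding are illustrative assumptions.
    """
    if bounds is None:
        (xmin, ymin), (xmax, ymax) = points[:, :2].min(0), points[:, :2].max(0)
    else:
        xmin, ymin, xmax, ymax = bounds
    ncols = int(np.ceil((xmax - xmin) / cell_size)) + 1
    nrows = int(np.ceil((ymax - ymin) / cell_size)) + 1
    cols = ((points[:, 0] - xmin) / cell_size).astype(int)
    rows = ((points[:, 1] - ymin) / cell_size).astype(int)
    raster = np.zeros((nrows, ncols), dtype=np.uint16)
    np.add.at(raster, (rows, cols), 1)  # accumulate point counts per cell
    return raster
```

In practice the counts (or maximum heights) would be rescaled to an 8-bit grayscale range before being passed to an image-based detector.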
This study underscores the potential and challenges of employing advanced lidar-based technologies to create more comprehensive and dynamic models of indoor spaces. These models significantly enhance our understanding of how buildings serve their users, offering insights that could improve building design and functionality.

Master of Science

Americans spend an average of 87% of their time indoors, yet mapping these spaces has long been a challenge. Traditional methods like satellite imaging and drones do not work well indoors, and camera-based models can be invasive and limiting. By contrast, lidar technology can create detailed maps of indoor spaces while also protecting people's privacy, something especially important in buildings like schools.
Currently, most technology creates static digital maps of places, called digital twins, but these do not show how people actually use these spaces. My study aims to take this a step further by developing a dynamic digital twin. This enhanced model shows the physical space and incorporates real-time information about where and how people move within it.
For my research, I used lidar data collected from 11 sensors in a mixed-use building at Virginia Tech to create detailed images that track movement. I applied computational techniques from machine learning and computer vision to detect human movement within the study space: specifically, the YOLOv5 deep learning model and OpenCV-based motion detection to find and track people inside the building.
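The core idea behind motion detection on successive raster frames can be sketched without OpenCV itself: threshold the absolute difference between consecutive frames. This is a stripped-down stand-in for OpenCV background subtraction (e.g. cv2.createBackgroundSubtractorMOG2); the threshold value is an illustrative assumption, not the tuned parameter used in the thesis.

```python
import numpy as np

def detect_motion(prev_frame, curr_frame, threshold=25):
    """Flag moving pixels by thresholding the absolute frame difference.

    Frames are 2D uint8 rasters; the returned mask is 1 where the pixel
    changed by more than `threshold` between frames, 0 elsewhere.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)  # 1 = motion, 0 = static

def motion_fraction(mask):
    """Fraction of pixels flagged as moving: a crude occupancy signal."""
    return float(mask.mean())
```

Connected clusters of flagged pixels would then be grouped into candidate person detections and tracked across frames.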
I also compared my techniques with commercial software, Blickfeld's Percept, which detects moving objects in real time from lidar data. To evaluate how well my methods worked, I scored them with both traditional and newer statistical metrics against a reference set of manually tagged images. This way, I could see how accurately my system tracked indoor dynamics, offering a richer, more dynamic view of how indoor spaces are used.
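The evaluation against manually tagged images rests on standard detection metrics such as the F1 score reported above. A minimal sketch of how precision, recall, and F1 follow from matched and unmatched detections (the counts here are placeholders, not the study's data):

```python
def detection_scores(tp, fp, fn):
    """Precision, recall, and F1 from detection counts.

    tp: detections matched to a manually tagged person
    fp: detections with no matching tag (false alarms)
    fn: tagged people the detector missed
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

F1 is the harmonic mean of precision and recall, so it penalizes a method that trades many false alarms for a few extra hits, which is why it is a common single-number summary for detector comparisons like this one.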
Identifier | oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/119239 |
Date | 03 June 2024 |
Creators | Karki, Shashank |
Contributors | Geography, Pingel, Thomas, Baird, Timothy D., Ogle, J. Todd |
Publisher | Virginia Tech |
Source Sets | Virginia Tech Theses and Dissertations |
Language | English |
Detected Language | English |
Type | Thesis |
Format | ETD, application/pdf |
Rights | Creative Commons Attribution-NonCommercial 4.0 International, http://creativecommons.org/licenses/by-nc/4.0/ |