Using Lidar to Examine Human Occupancy and Collisions within a Shared Indoor Environment
Flack, Addison Harris, 04 June 2024
Indoor spaces, where we spend the majority of our lives, greatly impact our work, social interactions, and well-being. In recognition of the central role that buildings play in our lives, architects and designers have increasingly focused on creating spaces that intentionally promote interaction and collaboration between building occupants. One challenge arising from this trend is evaluating the efficacy of new designs. This study used object-tracking data for the Fall 2023 semester from a collection of lidar sensors installed in a portion of a mixed-use academic building on a university campus to algorithmically detect occupancy and serendipitous collisions between people: patterns of simultaneous movement and pause that indicate that two or more individuals have stopped and had a meaningful interaction. The algorithm detected over 14,000 collisions throughout the semester with high spatial and temporal precision. Occupancy and collisions were highly related across several scales of temporal and spatial analysis. Furthermore, several interesting patterns emerged: (a) collisions peaked early in the semester, then declined before leveling off; (b) occupancy peaked in mid-afternoon, while collisions peaked in the late afternoon and early evening; (c) collisions peaked later in the week than did occupancy; and (d) specific hotspots were apparent at important nodes such as the bottom of stairs and near elevators. The patterns found in this study can provide insight into how interactions can be measured using remote sensing data, and can aid designers attempting to increase collaboration in shared indoor environments.

/ Master of Science /

We spend much of our time in buildings, and they are central to our well-being. Designers have recently been focusing on promoting collaboration and interaction between people within building spaces. Despite their importance, these interactions have been challenging to categorize and analyze.
This study used object-tracking data for the Fall 2023 semester from a collection of lidar sensors placed intermittently in the ground-floor public spaces of a new hybrid residential-academic university building on Virginia Tech's Blacksburg campus. A computer program was written to parse this data and detect unplanned collisions between people: patterns of movement and pause that indicate that two or more people have stopped and had a meaningful interaction (for example, running into a friend while walking down a hallway). The algorithm detected collisions reliably, and the patterns and distributions of these collisions were then analyzed in time and space. The number of collisions and the number of people present in the space were highly related at all scales of time and space. In terms of time, collisions were most frequent at the beginning of the semester, after which they dropped off. Collisions happened more frequently both later in the day (in afternoon, evening, and night hours) and later in the week (on Thursday, Friday, and Saturday). In terms of space, collisions happened most frequently in the areas around the elevator, at the base of the stairs, and in the building's main lobby; they happened less often in hallways and near some seating areas. The patterns revealed by this study can help us better understand how to detect interactions between people within buildings, and can help designers increase the number of these interactions.
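The abstract does not spell out the detection algorithm, but the "simultaneous movement and pause in close proximity" pattern it describes could be sketched roughly as follows. All names and thresholds here (PROXIMITY_M, PAUSE_SPEED, MIN_PAUSE_S) are illustrative assumptions, not values from the thesis:

```python
import math

# Illustrative thresholds -- the actual values used in the thesis are not given.
PROXIMITY_M = 1.5    # max distance (m) between two people during a collision
PAUSE_SPEED = 0.3    # speed (m/s) below which a person counts as "paused"
MIN_PAUSE_S = 10.0   # minimum duration (s) of the shared pause

def speed(p0, p1):
    """Instantaneous speed between two (t, x, y) track samples."""
    dt = p1[0] - p0[0]
    return math.hypot(p1[1] - p0[1], p1[2] - p0[2]) / dt if dt > 0 else 0.0

def detect_collision(track_a, track_b):
    """Return True if two time-aligned tracks (lists of (t, x, y) samples)
    show a simultaneous, prolonged pause in close proximity -- the pattern
    treated here as a 'collision'."""
    pause_start = None
    for i in range(1, min(len(track_a), len(track_b))):
        a, b = track_a[i], track_b[i]
        close = math.hypot(a[1] - b[1], a[2] - b[2]) <= PROXIMITY_M
        paused = (speed(track_a[i - 1], a) < PAUSE_SPEED and
                  speed(track_b[i - 1], b) < PAUSE_SPEED)
        if close and paused:
            if pause_start is None:
                pause_start = a[0]          # shared pause begins
            if a[0] - pause_start >= MIN_PAUSE_S:
                return True                 # pause lasted long enough
        else:
            pause_start = None              # pattern broken; reset
    return False
```

In practice the thesis's detector would also need to handle unsynchronized sample times, track fragmentation, and groups of more than two people; this sketch only captures the pairwise movement-and-pause criterion.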
Tracking Human Movement Indoors Using Terrestrial Lidar
Karki, Shashank, 03 June 2024
Recent developments in surveying and mapping technologies have greatly enhanced our ability to model and analyze both outdoor and indoor environments. This research advances the traditional concept of digital twins—static representations of physical spaces—by integrating real-time data on human occupancy and movement to develop a dynamic digital twin. Utilizing the newly constructed mixed-use building at Virginia Tech as a case study, this research leverages 11 terrestrial lidar sensors to develop a dynamic digital model that continuously captures human activities within public spaces of the building.
Three distinct object detection methodologies were evaluated: deep learning models, OpenCV-based techniques, and Blickfeld's lidar perception software, Percept. The deep learning and OpenCV techniques analyzed projected 2D raster images, while Percept utilized real-time 3D point clouds to detect and track human movement. The deep learning approach, specifically the YOLOv5 model, demonstrated high accuracy with an F1 score of 0.879. In contrast, OpenCV methods, while less computationally demanding, showed lower accuracy and higher rates of false detections. Percept, operating on real-time 3D lidar streams, performed well but was susceptible to errors due to temporal misalignment.
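The F1 score cited for YOLOv5 is the harmonic mean of precision and recall, computed from counts of correct and incorrect detections. A minimal sketch of that computation (the counts in the test are invented for illustration, not taken from the thesis):

```python
def f1_score(tp, fp, fn):
    """F1 score from raw counts of true positives (correct detections),
    false positives (spurious detections), and false negatives (misses)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```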
This study underscores the potential and challenges of employing advanced lidar-based technologies to create more comprehensive and dynamic models of indoor spaces. These models significantly enhance our understanding of how buildings serve their users, offering insights that could improve building design and functionality.

/ Master of Science /

Americans spend an average of 87% of their time indoors, but mapping these spaces has been a challenge. Traditional methods like satellite imaging and drones do not work well indoors, and camera-based models can be invasive and limiting. By contrast, lidar technology can create detailed maps of indoor spaces while also protecting people's privacy, something especially important in buildings like schools.
Currently, most technology creates static digital maps of places, called digital twins, but these do not show how people actually use these spaces. My study aims to take this a step further by developing a dynamic digital twin. This enhanced model shows the physical space and incorporates real-time information about where and how people move within it.
For my research, I used lidar data collected from 11 sensors in a mixed-use building at Virginia Tech to create detailed images that track movement. I applied advanced computer techniques, including machine learning and computer vision, to detect human movement within the study space. Specifically, I used methods such as YOLOv5 for deep learning and OpenCV for movement detection to find and track people's movements inside the building.
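The abstract does not detail the OpenCV pipeline; a dependency-free sketch of the core idea behind such motion detection, frame differencing on the projected 2D raster images, might look like this (the grid values and threshold are illustrative only):

```python
def motion_mask(prev_frame, curr_frame, thresh=10):
    """Binary mask of pixels whose intensity changed by more than `thresh`
    between consecutive frames -- the essence of frame-differencing detection."""
    return [[1 if abs(c - p) > thresh else 0
             for p, c in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev_frame, curr_frame)]

def count_moving(prev_frame, curr_frame, thresh=10):
    """Number of changed pixels: a crude presence/motion signal per frame."""
    return sum(sum(row) for row in motion_mask(prev_frame, curr_frame, thresh))
```

A production pipeline would instead use an adaptive background model (e.g. OpenCV's MOG2 background subtractor) plus contour extraction to turn changed pixels into tracked objects, but the per-pixel differencing above is the underlying signal.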
I also compared my techniques with an existing software package, Percept by Blickfeld, which detects moving objects in real time from lidar data. To evaluate how well my methods worked, I measured them using both traditional and novel statistical metrics against a reference set of manually tagged images. This way, I could see how accurately my system could track indoor dynamics, offering a richer, more dynamic view of how indoor spaces are used.