11 |
A SIMULATED POINT CLOUD IMPLEMENTATION OF A MACHINE LEARNING SEGMENTATION AND CLASSIFICATION ALGORITHM
Junzhe Shen (8804144) 07 May 2020 (has links)
<p>As buildings have almost come to a saturation point in most developed countries, the management and maintenance of existing buildings have become a major problem of the field. Building Information Modeling (BIM) is the underlying technology for solving this problem. It is a 3D semantic representation of building construction and facilities that contributes not only to the design phase but also to the construction and maintenance phases, for example through life-cycle management and building energy performance measurement. This study focuses on the processes of creating as-built BIM models, which are constructed after the design phase. A point cloud, a set of points in 3D space, is an intermediate product of as-built BIM models and is often acquired by 3D laser scanning and photogrammetry. A raw point cloud typically requires further processing, e.g., registration, segmentation, and classification. For segmentation and classification, machine learning methodologies are trending because of the enhanced speed of computation. However, supervised machine learning methodologies require labeling the training point clouds in advance, which is time-consuming and often leads to inevitable errors. Moreover, because of the complexity and uncertainty of real-world environments, the attributes of one point vary from those of others. These situations make it difficult to analyze how a single attribute contributes to the result of segmentation and classification. This study developed a method of producing point clouds from a fast-generating 3D virtual indoor environment using procedural modeling. The research focused on two attributes of simulated point clouds: point density and the level of random error. According to Silverman (1986), point density is associated with the point features around each output raster cell: the number of points within a neighborhood divided by the area of that neighborhood gives the point density. In this study, the definition differs slightly: point density was defined as the number of points on a surface divided by the surface area, expressed in points per square meter (pts/m<sup>2</sup>). This research compared the performance of a machine learning segmentation and classification algorithm on ten different point cloud datasets. The mean loss and accuracy of segmentation and classification were analyzed and evaluated to show how point density and the level of random error affect the performance of the segmentation and classification models. Moreover, real-world point cloud data were used as additional data to evaluate the applicability of the produced models.</p>
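<p>For illustration of the density definition above, the following minimal Python sketch samples a planar surface at a target density (points per square meter) and perturbs the points with Gaussian noise to mimic random error; the function name and parameters are illustrative only and are not taken from the thesis.</p>
<pre>
# Illustrative sketch (not the author's code): sampling a planar surface at a
# target point density and perturbing points with random error, following the
# abstract's definition of density as points per surface area (pts/m^2).
import numpy as np

def simulate_plane(width_m, height_m, density_pts_m2, error_sd_m, rng=None):
    """Sample a width x height plane at density_pts_m2 and add Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    area = width_m * height_m
    n_points = int(round(density_pts_m2 * area))   # density = points / surface area
    xy = rng.uniform([0, 0], [width_m, height_m], size=(n_points, 2))
    z = np.zeros((n_points, 1))                    # ideal planar surface
    points = np.hstack([xy, z])
    noise = rng.normal(0.0, error_sd_m, size=points.shape)  # simulated random error
    return points + noise

cloud = simulate_plane(4.0, 3.0, density_pts_m2=500, error_sd_m=0.005)
print(cloud.shape)            # (6000, 3) -> 500 pts/m^2 over a 12 m^2 surface
print(len(cloud) / 12.0)      # recovers the nominal point density
</pre>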
|
12 |
DEVELOPING A DECISION SUPPORT SYSTEM FOR CREATING POST DISASTER TEMPORARY HOUSING
Mahdi Afkhamiaghda (10647542) 07 May 2021 (has links)
<p>Post-disaster temporary housing has been a significant challenge for emergency management groups and industries for many years. According to reports by the Department of Homeland Security (DHS), housing in states and territories is ranked second to last in proficiency among the 32 core capabilities for preparedness. The number of temporary housing units required in a geographic area is influenced by a variety of factors, including social issues, financial concerns, labor workforce availability, and climate conditions. Acknowledging and balancing these interconnected needs is considered one of the main challenges that need to be addressed. Post-disaster temporary housing is a multi-objective process, so reaching an optimized model relies on how different elements and objectives interact, and sometimes conflict, with each other. This makes decision making in post-disaster construction more constrained and challenging, which has caused ineffective management in post-disaster housing reconstruction.</p>
<p>Few studies have examined the use of artificial intelligence modeling to reduce the time and cost of post-disaster sheltering. There remains a knowledge gap regarding the selection, and the magnitude of effect, of the different factors that determine the most suitable type of Temporary Housing Unit (THU) in a post-disaster event.</p>
The framework proposed in this research uses supervised machine learning to maximize certain design aspects and minimize some of the difficulties in order to better support the creation of temporary housing in post-disaster situations. The outcome of this study is the classification of the THU type, more particularly, classifying THUs based on whether they are built on-site or off-site. To collect primary data for creating the model and evaluating the magnitude of effect of each factor in the process, a set of surveys was distributed among the key players and policymakers who play a role in providing temporary housing to people affected by natural disasters in the United States. The outcome of this framework benefits from the tacit knowledge of experts in the field to show the challenges and issues in the subject. The result of this study is a data-based multi-objective decision-making tool for selecting the THU type. Using this tool, policymakers who are in charge of selecting and allocating post-disaster accommodations can select the THU type most responsive to the local needs and characteristics of the affected people in each natural disaster.
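<p>As an editorial illustration of the kind of supervised classification the abstract describes, the sketch below trains a classifier to label THUs as on-site or off-site from a handful of survey-style factors; the feature names, data, and model choice (a random forest) are placeholder assumptions, not the study's actual dataset or method.</p>
<pre>
# Hypothetical sketch of the supervised classification described above:
# predicting whether a THU is built on-site or off-site from survey-derived
# factors. Feature names and data are illustrative, not the study's dataset.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "labor_availability":   [0.8, 0.3, 0.6, 0.2, 0.9, 0.4],
    "climate_severity":     [0.2, 0.7, 0.4, 0.9, 0.1, 0.6],
    "financial_constraint": [0.5, 0.8, 0.3, 0.7, 0.2, 0.9],
    "social_disruption":    [0.4, 0.6, 0.2, 0.8, 0.3, 0.5],
    "thu_type": ["on-site", "off-site", "on-site", "off-site", "on-site", "off-site"],
})
X, y = df.drop(columns="thu_type"), df["thu_type"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
# Feature importances hint at each factor's magnitude of effect on the THU type.
print(dict(zip(X.columns, model.feature_importances_.round(2))))
print(model.predict(X_test))
</pre>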
|
13 |
UTILIZING T-O-E FRAMEWORK FOR EVALUATING THE USE OF SIMULATORS IN SNOWPLOW DRIVER TRAINING
Yanchao Zheng (14277284) 20 December 2022 (has links)
<p>Driving simulators have been introduced by some American states’ departments of transportation (DOTs) as a supplementary tool to train their snowplow drivers. This is a costly investment that requires careful planning. Yet, there is a lack of recent studies that holistically evaluate the factors affecting the decision-making process of adopting driving simulators in winter snowplow driver training. The current research aims to fill this gap by employing the Technology-Organization-Environment (T-O-E) theoretical framework to explore the factors affecting state DOTs’ decisions to adopt snowplow driving simulators in driver training. Relevant factors were first identified through a scoping review of the literature and then validated through interviews with DOT stakeholders. Subsequent findings from the state DOT survey suggest that the perceived long-term effect on public safety, the cost related to simulator training, and the ease of relocating the simulator are the top contributors among the factors affecting the decision to adopt snowplow driving simulators in training. The resulting T-O-E framework contains 11 factors cross-verified from various sources; most factors, such as simulator fidelity and relative advantage, were categorized under the technology context of the T-O-E framework, while the main environment factor relates to normative and mimetic pressure. The paper contributes to academic research by applying T-O-E to offer decision-making support for using simulator technology in snowplow driver training, and to practitioners by providing state DOT decision makers a framework for analyzing the different factors in adopting snowplow driving simulators in training.</p>
|
14 |
<b>Enhancing Highway Safety and Construction Quality Control Through Friction-Based Approaches</b>
Jieyi Bao (19180027) 19 July 2024 (has links)
<p dir="ltr">Pavement friction is fundamental to the safety of road networks. A precise assessment of friction levels is essential for the strategic development of maintenance practices and policies by state highway agencies. Typically, assessments of pavement friction have been conducted individually, focusing on particular segments of roadways. Nevertheless, this approach does not offer a thorough evaluation of roadway friction conditions at the network level. This study combines the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and the Gaussian Mixture Model (GMM) to evaluate the ratings of pavement friction throughout the entire state’s road system. A dataset oriented towards safety, serving as input for clustering models across various data dimensions, has been established. Through comparative and statistical analyses, six friction performance ratings have been identified and subsequently validated. The findings not only facilitate a deeper comprehension of the interrelations among friction levels, crash impact, and additional factors impacting safety, but also provide substantial insights for the advancement of road safety, management, and development.</p><p dir="ltr">Pavement markings play an essential role in regulating traffic flow and improving traffic safety. Beyond facilitating road safety via visual cues to drivers, the frictional properties of pavement marking surfaces are a pivotal element in safeguarding roadway safety. However, the friction characteristics of pavement marking surfaces have not been sufficiently investigated. Additionally, the integration of glass beads or other particles with pavement markings to enhance reflectivity and retroreflectivity complicates the study of their friction properties compared to bare pavements. To tackle these problems, this research utilizes the British pendulum tester (BPT), the circular track meter (CTM), the dynamic friction tester (DFT), and the three-wheel polishing device (TWPD) to evaluate the friction performance of various pavement markings. Eighteen specimen groups, comprising six types of markings (i.e., waterborne paint, preformed tape, epoxy paint, polyurea paint, MMA paint, and thermoplastics) with various glass beads and particles, were investigated to assess their impact on dry and wet friction, mean profile depth (MPD), and durability. The outcomes of this study serve as valuable resources for advancing safety measures and providing insights into emerging traffic management technologies.</p><p dir="ltr">Currently, there is an absence of established standards or methods for assessing and evaluating the friction characteristics of road markings. This lack of standardization has a pronounced impact on vulnerable road users-motorcyclists, bicyclists, and pedestrians-due to the potential for inadequate friction from road markings. To address the problem, this study has developed five friction levels based on the wet British pendulum number (BPN). Leveraging international standards and practical considerations, a tentative BPN range is advocated for crosswalks, symbols, and letters to enhance the safety of pedestrians and other susceptible road users.</p><p dir="ltr">Friction metrics, like MPD and friction number (FN), have been central to enhancing quality assurance and control (QA/QC) processes in chip seals. These metrics evaluate chip seal performance by examining problems such as aggregate shedding or significant bleeding, potentially leading to lower friction values or surface textures. 
However, instead of leading to slippery conditions, the loss of aggregate, particularly as a consequence of snowplow operations, may result in the formation of uneven surface textures. The relationship between increased MPD or FN and enhanced chip seal quality is complex and not easily defined. This study introduces a groundbreaking method utilizing machine learning techniques, designed to improve the QC procedure for chip seals. A hybrid anomaly detection approach was applied to a dataset consisting of 183,794 MPD measurements, each representing the average mean segment depth (MSD) over 20-meter segments, gathered from real-world chip seal projects throughout the six districts managed by INDOT. A two-phase QC process, specifically tailored for chip seal quality assessment, has been developed. Validation analysis performed on four chip seal projects shows a strong concordance between field inspection, friction measurements, and the results predicted using the introduced approach. The developed method sets a foundational chip seal QC procedure, augmenting efficiency in acceptance processes and overall safety through data-driven techniques, while reducing the practitioners' time on site.</p><p dir="ltr">Surpassing the constraints of traditional approaches, this paper develops a series of scientific methodologies for evaluating friction on pavement and pavement marking surfaces through extensive in-field and laboratory experiments. Additionally, it establishes rational and efficient quality control procedures for chip-seal applications. The methodologies and conclusions presented in this paper can assist engineers in Departments of Transportation (DOTs) with ensuring the safety of all stakeholders, including road users, engineers, and construction practitioners. Furthermore, they offer valuable insights for the timely execution of road maintenance activities.</p>
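<p dir="ltr">A hedged sketch of the network-level clustering idea described in the first paragraph is given below: synthetic friction and crash-rate columns stand in for the safety-oriented dataset, DBSCAN screens out noisy segments, and a six-component Gaussian Mixture Model yields candidate friction performance ratings. The data, parameter values, and preprocessing are assumptions for illustration, not the study's pipeline.</p>
<pre>
# Illustrative sketch (not the study's pipeline): combining DBSCAN and a Gaussian
# Mixture Model to group network-level friction measurements into performance
# ratings. The synthetic columns stand in for the safety-oriented dataset.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
friction_number = rng.normal(35, 10, 500)            # placeholder FN readings
crash_rate = np.clip(60 - friction_number + rng.normal(0, 5, 500), 0, None)
X = StandardScaler().fit_transform(np.column_stack([friction_number, crash_rate]))

# DBSCAN flags noisy segments (label -1) before fitting the mixture model.
core = DBSCAN(eps=0.5, min_samples=10).fit_predict(X) != -1
gmm = GaussianMixture(n_components=6, random_state=0).fit(X[core])

ratings = gmm.predict(X[core])                        # six candidate friction ratings
print(np.bincount(ratings))
</pre>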
|
15 |
Leveraging Lessons from Earth to Space: Failure Analysis Framework Based on the Notion of "Embedded Pathogens" for Designing and Building Safe Extraterrestrial Systems
Takaharu Igarashi (19734085) 25 September 2024 (has links)
<p dir="ltr">The deep human drive to explore and inhabit new frontiers underpins the emerging field of space architecture, as evidenced by the proliferation of diverse design concepts and growing momentum in extraterrestrial construction. However, the endeavor of building off-world environments remains in its infancy, facing numerous challenges due to remoteness, novelty, complexity, and other inherent characteristics. Significant technical and methodological gaps exist between visionary designs and the practical engineering and management intricacies of complex projects. These gaps must be addressed to overcome the challenges in extraterrestrial construction.</p><p dir="ltr">The field is limited by the scarcity of both failures and successes in extraterrestrial construction from which to draw lessons. Furthermore, the understanding of construction system failures is constrained by the limited application of systems perspectives in construction safety literature. Existing accident causation models in system safety literature are ill-suited for describing construction system failures.</p><p dir="ltr">The overall objective of this research is to inform the planning and design of extraterrestrial construction to mitigate the risk of building and installing defective systems in space. To achieve this goal, the research process followed iterative cycles of failure case studies, extracting insights through the lens of a new accident causation model. The cycles of model application evolved through three major stages: (1) developing a model specifically tailored to construction system failures, (2) deriving concepts and principles for interpreting the model depictions, and (3) compiling lessons from a systematically selected set of failure cases for planners, designers, and key stakeholders of future extraterrestrial construction projects.</p><p dir="ltr">Following a comprehensive literature review, this study developed an accident model termed the framed-and-layered accident pathogen propagation (FLAPP) model. The basic concept of the model consists of frames representing the temporal dimension, layers depicting the hierarchical aspects of the sociotechnical framework of construction projects, and graphical notation illustrating the sequence of defective processes and pathogens embedded in the constructed artifact.</p><p dir="ltr">To interpret the graphical illustrations of failure cases, the study introduced the concept of pathogen threads to describe multiple sequences of defective processes ultimately interacting with the physical artifact. To further explain the underlying mechanisms of how design decisions incubate into system failures, the concept of design-induced strains was proposed. This concept represents the lingering effects that design solutions can have on physical artifacts and downstream organizational processes. Additionally, categories of inadequate organizational factors from the Human Factors Analysis and Classification System (HFACS) were incorporated to explain the underlying causes within project organizations that allowed the mishandling of physical and organizational strains.</p><p dir="ltr">To evaluate the applicability of the FLAPP model and observe general trends in system failures across various contexts, the study selected failure cases attributable to embedded pathogens with sufficient available information. 
Ensuring that the case set included diverse cases and covered all specific technical challenges of extraterrestrial construction, the selection resulted in a set of 40 cases comprising 10 space systems, 13 buildings, 10 bridges and tunnels, and 7 systems in specialized environments. Results highlighted typical and exceptional cases among different systems and the general trends in the prevalence of concept instances across different system contexts (space, building, bridge/tunnel, specialized environment) and duration ranges (short, medium, long). The comparison of Earth-based and space-based systems revealed several critical questions for the space domain: how to ensure, maintain, and improve the reliability and quality of unique and site-specific systems; how to establish and enforce minimum quality standards to reduce unintended physical strains; and how to prepare for extended project timelines with dynamic and transient organizational networks.</p><p dir="ltr">Future research should address the theoretical and methodological limitations of the FLAPP model by extending it to investigation methods and risk assessment techniques. Additionally, the failure case study approach should be enhanced by expanding the case set, exploring source documents, and applying additional analysis approaches. This dissertation concludes by demonstrating that guidance and principles with discipline-specific vocabulary can be derived by referencing documented best practices in systems engineering and project management.</p>
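<p dir="ltr">To make the FLAPP vocabulary concrete, the following speculative data-structure sketch encodes a pathogen thread as a sequence of defective processes located by frame (temporal dimension) and layer (sociotechnical hierarchy); the class and field names are one possible reading of the abstract, not the author's notation.</p>
<pre>
# Speculative sketch of how the FLAPP model's elements might be encoded for case
# analysis; field names are interpretations of the abstract, not the author's.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DefectiveProcess:
    frame: str          # temporal frame, e.g., "design", "construction", "operation"
    layer: str          # sociotechnical layer, e.g., "regulator", "designer", "crew"
    description: str
    embeds_pathogen: bool = False   # leaves a latent defect in the artifact?

@dataclass
class PathogenThread:
    case: str
    processes: List[DefectiveProcess] = field(default_factory=list)

    def embedded_pathogens(self) -> List[DefectiveProcess]:
        return [p for p in self.processes if p.embeds_pathogen]

thread = PathogenThread(
    case="hypothetical bridge failure",
    processes=[
        DefectiveProcess("design", "designer", "connection redesigned under schedule pressure", True),
        DefectiveProcess("construction", "crew", "redesigned connection installed as drawn"),
        DefectiveProcess("operation", "owner", "added load without reassessment", True),
    ],
)
print([p.frame for p in thread.embedded_pathogens()])
</pre>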
|
16 |
Construction Decision making using Virtual Reality
Swaroop Ashok (8790986) 01 May 2020 (has links)
<p>We make decisions every day, some with the potential for a huge impact on our lives. This process of decision-making is crucial not only for individuals but also for industries, including construction. Unlike the manufacturing industry, where one can make certain decisions about an actual product by looking at it in real time, the nature of construction is different: decisions must be made about a product that will only be built at some point in the near future. The complex and interim nature of construction projects, along with factors such as time constraints, the increasing scale of projects, and the multitude of stakeholders, makes it even more difficult to reach consensus. Incorporating VR can help provide insight into the final product at the very beginning of the project life cycle. With a visual representation, the stakeholders involved can collaborate on a single platform to assess the project, share common knowledge, and make choices that produce better results in all major aspects, such as cost, quality, time, and safety. This study aims at assessing decision-making in the earlier stages of construction and then evaluating the performance of immersive and non-immersive VR platforms.</p>
|
17 |
ENHANCED PRODUCTION PLANNING AND SCHEDULING METHOD FOR CONSTRUCTION PROJECTS
Qais Amarkhil (6616994) 05 July 2022 (has links)
<p>Available literature indicates that construction projects have been experiencing significant time overruns relative to their planned durations. In many cases, the primary reasons for project delays were ineffective planning and scheduling methods, poor communication and collaboration between the key stakeholders, and overlooked construction operations and task requirements.</p>
<p>Construction project planning and scheduling are extensively studied topics, and several techniques have been developed to solve construction project scheduling problems. Traditional production planning and scheduling techniques are based on the push planning strategy, such as linear and network scheduling techniques. In the traditional method, the project scheduler calculates activity durations and then sequences them to determine when to complete the work. These techniques and planning methods have been criticized for lacking collaboration between project workers and realistic integration of the project time, location, and other essential resources to create a reliable work schedule. Furthermore, the inability to account for site operations, tasks, and workflow leads to waste and delay. </p>
<p>Consequently, Ballard and Howell (1990) proposed the last planner system, and Ballard et al. (2000) further developed the method. In the last planner system, all key stakeholders and the project management team actively communicate and coordinate to accomplish the project’s planned milestones. The objective of the last planner system and pull planning scheduling is to improve workflow and increase plan reliability. However, the pull planning scheduling method has some limitations. For instance, this method cannot be used to determine the available work capacity in each working space or to show how much work can be completed at a given time. In addition, pull planning and the LPS are highly descriptive and experience-based, relying on the decisions and experience of site supervisors. </p>
<p>Available literature concerning construction project delay also indicated that ineffective planning and scheduling, slow decision-making, and poor communication and coordination had been the top critical causes of construction project delay.</p>
<p>Therefore, this research was conducted to minimize construction project time and cost overruns due to poor scheduling and production planning. The study was conducted in two main parts. In the first part, critical causes of project delay were analyzed, and the contribution of poor planning and scheduling to construction project delays in different environments was assessed. The relative importance index and Spearman’s coefficient techniques were utilized to analyze the collected data. The second part of this research investigated construction scheduling reliability and production efficiency and developed the enhanced production planning and scheduling method to improve schedule reliability and production plan efficiency. The purpose of developing the enhanced production planning and scheduling method was to find the best work option to optimize work duration and efficiently plan the required resources per category of the identified activities. In addition, this study automated the capture of scheduling input data from the project BIM model by utilizing the developed visual program.</p>
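<p>For readers unfamiliar with the two techniques named above, the short sketch below computes the relative importance index, RII = ΣW / (A × N), where W are the respondent ratings, A is the highest rating on the scale, and N is the number of respondents, and then applies Spearman's coefficient to compare two groups' rankings; the survey numbers are invented for illustration.</p>
<pre>
# Minimal sketch of the two techniques named above, using made-up survey data:
# RII = sum(W) / (A * N); Spearman's coefficient compares two groups' rankings.
from scipy.stats import spearmanr

ratings = {                                   # 1-5 Likert ratings per delay cause
    "ineffective planning and scheduling": [5, 5, 4, 5, 4],
    "slow decision-making":                [4, 3, 4, 4, 3],
    "poor communication and coordination": [4, 4, 5, 3, 4],
}
A, N = 5, 5                                   # highest rating, number of respondents
rii = {cause: sum(w) / (A * N) for cause, w in ratings.items()}
print(sorted(rii.items(), key=lambda kv: kv[1], reverse=True))

# Agreement between two respondent groups' rankings of the same causes.
group_a_rank = [1, 3, 2]
group_b_rank = [1, 2, 3]
print(spearmanr(group_a_rank, group_b_rank))
</pre>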
<p>The findings in the first part indicated that the top ten critical causes of project delay in the specified environments were significantly different. However, ineffective project planning and scheduling was among the most critical causes in all three conditions: it was ranked number one in developed environment conditions, second in developing countries, and fifth in high-risk environments.</p>
<p>Study results in the second part showed that the case study's executed schedule had experienced significant changes in the planned dates of individual tasks, project milestones, and resource allocation. The project schedule critical path and critical activities changed repeatedly after each update, and the project's structural work was delayed by 30 days from its initial plan, as illustrated in figures 33 to 35. Subsequently, the enhanced planning methodology was applied to the selected case study to validate the developed method and evaluate the results. The work plan implemented in the case study was compared with the schedule developed using the enhanced planning method. The total duration of the enhanced planning-based schedule was calculated to be 240 work days, which is 30 days shorter than the implemented case study plan and 50 days shorter than the project base plan in the construction documents, as illustrated in figures 39 to 41. </p>
<p>Furthermore, the production schedule sensitivity analysis showed that the task durations of the production schedule and the case study base plan were not significantly different, since both schedules were created based on a similar work crew size; in terms of the number of planned tasks, however, the production schedule was created based on the most suitable work option. Another advantage of the production schedule is that it is more reliable, because it is created for a shorter duration, not long before the project work starts, and after multiple collaboration and assessment steps. In addition, the program developed in this study using Revit Dynamo automated the extraction of input data from the BIM model to create the project schedule. </p>
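<p>The notion of a "most suitable work option" can be illustrated with a small, assumption-laden sketch: each option pairs a crew with a daily production rate, duration follows from quantity divided by daily output, and options are ranked by duration and cost. The crews, rates, and costs below are placeholders, not the case-study data.</p>
<pre>
# Illustrative sketch (assumed numbers, not the case-study data) of selecting a
# "most suitable work option": duration = quantity / daily output per crew option.
from dataclasses import dataclass

@dataclass
class WorkOption:
    crew: str
    daily_output: float      # units of work per day, e.g., m2 of formwork
    daily_cost: float        # crew cost per day

def evaluate(quantity: float, options: list[WorkOption]) -> list[dict]:
    results = []
    for opt in options:
        duration_days = quantity / opt.daily_output
        results.append({
            "crew": opt.crew,
            "duration_days": round(duration_days, 1),
            "cost": round(duration_days * opt.daily_cost, 0),
        })
    return sorted(results, key=lambda r: (r["duration_days"], r["cost"]))

options = [WorkOption("2-person crew", 45, 900), WorkOption("4-person crew", 80, 1700)]
print(evaluate(1200, options))   # shortest-duration option listed first
</pre>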
<p>In conclusion, based on the case study results, the enhanced production planning methodology and developed metrics and indices can be applied to various building construction projects to find the most suitable work option and create a reliable and resource-efficient work schedule. </p>
|
18 |
Cellulose-Based Hydrogels for High-Performance Buildings and Atmospheric Water Harvesting
Noor Mohammad Mohammad (17548365) 04 December 2023 (has links)
<p dir="ltr">Smart windows, dynamically adjusting optical transmittance, face global adoption challenges due to climatic and economic variability. Aiming these issues, we synthesized a methyl cellulose (MC) salt system with high tunability for intrinsic optical transmittance (89.3%), which can be applied globally to various locations. Specifically, the MC window has superior heat shielding potential below transition temperatures while turning opaque at temperatures above the Lower Critical Solution Temperature (LCST), reducing the solar heat gain by 55%. Such optical tunability is attributable to the particle size change triggered by the temperature-induced reversible coil-to-globular transition. This leads to effective refractive index and scattering modulation, making them prospective solutions for light management systems, an application ahead of intelligent fenestration systems. MC-based windows demonstrated a 9°C temperature decrease compared to double-pane windows on sunny days and a 5°C increase during winters in field tests, while simulations predict an 11% energy savings.</p><p dir="ltr">Incorporating MC-based phase change materials in passive solar panels indicated optimized energy efficiency, offering a sustainable alternative. Real-time simulations validate practical applicability in large-scale solar panels. Furthermore, a temperature-responsive sorbent with a dark layer demonstrates an optimal optical and water uptake performance. Transitioning between radiative cooling and solar heating, the sorbent exhibits high water harvesting efficiency in lab and field tests. With an adjustable LCST at 38 ℃, the cellulose-based sorbent presents a potential solution for atmospheric water harvesting, combining optical switching and temperature responsiveness for sustainable water access. Furthermore, the ubiquitous availability of materials, low cost, and ease-of-manufacturing will provide technological equity and foster our ambition towards net-zero buildings and sustainable future.</p>
|
19 |
NATURAL LANGUAGE PROCESSING-BASED AUTOMATED INFORMATION EXTRACTION FROM BUILDING CODES TO SUPPORT AUTOMATED COMPLIANCE CHECKING
Xiaorui Xue (13171173) 29 July 2022 (has links)
<p>The traditional manual code compliance checking process is time-consuming, costly, and error-prone, and has many shortcomings (Zhang & El-Gohary, 2015). Therefore, automated code compliance checking systems have emerged as an alternative to traditional code compliance checking. However, computer software cannot directly process regulatory information in unstructured building code texts. To support automated code compliance checking, building codes need to be transformed into a computer-processable, structured format. In particular, the problem that most automated code compliance checking systems can only check a limited number of building code requirements stands out.</p>
<p>The transformation of building code requirements into a computer-processable, structured format is a natural language processing (NLP) task that requires highly accurate part-of-speech (POS) tagging results on building codes beyond the state of the art. To address this need, this dissertation research was conducted to provide a method to improve the performance of POS taggers by error-driven transformational rules that revise machine-tagged POS results. The proposed error-driven transformational rules fix errors in POS tagging results in two steps. First, error-driven transformational rules locate errors in POS tagging by their context. Second, error-driven transformational rules replace the erroneous POS tag with the correct POS tag that is stored in the rule. A dataset of POS tagged building codes, namely the Part-of-Speech Tagged Building Codes (PTBC) dataset (Xue & Zhang, 2019), was published in the Purdue University Research Repository (PURR). Testing on the dataset illustrated that the method corrected 71.00% of errors in POS tagging results for building codes. As a result, the POS tagging accuracy on building codes was increased from 89.13% to 96.85%.</p>
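<p>A much-simplified sketch of an error-driven transformational rule of the kind described above follows: the rule stores a triggering context and the corrective POS tag, and is applied on top of a machine tagger's output. The example rule and sentence are illustrative and are not entries from the PTBC dataset.</p>
<pre>
# Simplified sketch of an error-driven transformational rule: locate a likely
# tagging error by its context (step 1), then replace the erroneous tag with the
# corrected tag stored in the rule (step 2). Example data is illustrative only.
from dataclasses import dataclass

@dataclass
class TransformationalRule:
    target_word: str      # word whose machine-assigned tag is often wrong
    prev_tag: str         # context: tag of the preceding token
    corrected_tag: str    # tag stored in the rule

    def apply(self, tagged: list[tuple[str, str]]) -> list[tuple[str, str]]:
        out = list(tagged)
        for i in range(1, len(out)):
            word, tag = out[i]
            if word.lower() == self.target_word and out[i - 1][1] == self.prev_tag:
                out[i] = (word, self.corrected_tag)   # replace the erroneous tag
        return out

machine_tagged = [("the", "DT"), ("exit", "VB"), ("access", "NN"), ("shall", "MD")]
rule = TransformationalRule(target_word="exit", prev_tag="DT", corrected_tag="NN")
print(rule.apply(machine_tagged))   # 'exit' after a determiner is retagged as a noun
</pre>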
<p>This dissertation research was also conducted to provide a new POS tagger that is tailored to building codes. The proposed POS tagger utilized neural network models and error-driven transformational rules. The neural network model contained a pre-trained model and one or more trainable neural layers, and was trained and fine-tuned on the PTBC (Xue & Zhang, 2019) dataset, which was published in the Purdue University Research Repository (PURR). In this dissertation research, a high-performance POS tagger for building codes was identified, using one bidirectional Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) trainable layer, a BERT-Base-Cased pre-trained model, and 50 epochs of training. This model achieved 91.89% precision without error-driven transformational rules and 95.11% precision with error-driven transformational rules, outperforming the state-of-the-art POS tagger's 89.82% precision on building codes.</p>
<p>Other automated information extraction methods were also developed in this dissertation. Some automated code compliance checking systems represented building codes in logic clauses and used pattern matching-based rules to convert building codes from natural language text to logic clauses (Zhang & El-Gohary 2017). A ruleset expansion method that can expand the range of checkable building codes of such automated code compliance checking systems by expanding their pattern matching-based ruleset was developed in this dissertation research. The ruleset expansion method can guarantee: (1) the ruleset’s backward compatibility with the building codes that the ruleset was already able to process, and (2) forward compatibility with building codes that the ruleset may need to process in the future. The ruleset expansion method was validated on Chapters 5 and 10 of the International Building Code 2015 (IBC 2015): Chapter 10 was used as the training dataset and Chapter 5 as the testing dataset. A gold standard of logic clauses was published in the Logic Clause Representation of Building Codes (LCRBC) dataset (Xue & Zhang, 2021). The expanded pattern matching-based rules were published in the dissertation (Appendix A). The expanded ruleset increased the precision, recall, and F1-score of logic clause generation at the predicate level by 10.44%, 25.72%, and 18.02%, to 95.17%, 96.60%, and 95.88%, respectively, compared to the baseline ruleset. </p>
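<p>The sketch below shows, in a deliberately simplified form, what a single pattern matching-based rule that maps a sentence to a predicate-style logic clause might look like; the regular expression, predicate name, and example sentence are assumptions for illustration and do not reproduce the rules in Appendix A.</p>
<pre>
# Hypothetical, much-simplified pattern-matching rule mapping one sentence shape
# to a predicate-style logic clause; not a rule from the dissertation's ruleset.
import re
from typing import Optional

RULE = re.compile(
    r"(?P&lt;subject&gt;[\w\s]+?) shall (?:be )?not (?:be )?less than "
    r"(?P&lt;value&gt;[\d.]+) (?P&lt;unit&gt;\w+)"
)

def to_logic_clause(sentence: str) -> Optional[str]:
    m = RULE.search(sentence.lower())
    if not m:
        return None
    subject = "_".join(m.group("subject").split())
    return f"greater_than_or_equal({subject}, {m.group('value')}, {m.group('unit')})."

print(to_logic_clause("The minimum corridor width shall be not less than 44 inches"))
# -> greater_than_or_equal(the_minimum_corridor_width, 44, inches).
</pre>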
<p>Most existing automated code compliance checking research has focused on checking regulatory information stored as text in building codes. However, a comprehensive automated code compliance checking process should also be able to check regulatory information stored in other parts of the code, such as tables. Therefore, this dissertation research was conducted to provide a semi-automated information extraction and transformation method for processing tabular information in building codes. The proposed method can semi-automatically detect the layouts of tables and store the extracted information of a table in a database. Automated code compliance checking systems can then query the database for regulatory information in the corresponding table. The algorithm’s initial implementation accurately processed 91.67% of the tables in the testing dataset, which was composed of tables in Chapter 10 of IBC 2015. After iterative upgrades, the updated method correctly processed all tables in the testing dataset. </p>
|
20 |
<b>GREENHOUSE GAS EMISSIONS AND TIME-USE PATTERNS UNDER WORK FROM HOME: AN ACTIVITY-BASED INDIVIDUAL-LEVEL MODEL</b>
Hongyue Wu (19183129) 20 July 2024 (has links)
<p dir="ltr">Work from home (WFH) moves work into home life, reshaping the residential, workplace, and commuting activities, which further impacts greenhouse gas (GHG) emissions. Although existing work has explored individual time-use patterns under WFH, there is a lack of complete consideration of diverse activities, their durations and timelines, as well as the comparisons with traditional life at home and Work in Office (WIO). Also, existing studies have examined GHG emissions under WFH, while individual-level estimation using activity-specific data covering all major activities is lacking. In particular, limited studies explored individual time-use patterns and quantified activity-based emissions for the construction workforce. Therefore, this dissertation aims to (1) develop an activity-based individual-level model to estimate GHG emissions under WFH, (2) compare individual time-use patterns and activity-based GHG emissions between traditional life at home, WFH, and WIO to understand how WFH affects work, life, and the environment, especially for the construction workforce, and (3) propose activity-based decarbonization strategies to reduce GHG emissions. By employing the proposed model, high-resolution calculations of individual time-use patterns and activity-based emissions were achieved, revealing major activities’ durations and timing and highlighting major contributing activities to emissions under WFH. When shifting from traditional life at home to WFH, individuals reduced sleeping and leisure hours to incorporate work activity, resulting in an 11.34% reduction in GHG emissions. When comparing WFH to WIO, individuals reduced work and commuting time to include more cooking and leisure activities at home, mitigating GHG emissions by 29.11%. Demographic groups and climate regions showed different results mainly because of the varied work and household duties and the characteristics of regions. In addition, the construction workforce reduced GHG emissions by 13% and 46% under WFH compared to traditional life at home and WIO, respectively. Compared to the general public, the construction workforce had more reduction in work and commuting hours and associated emissions when shifting from WIO to WFH. The findings could help envision how WFH influences work, life, and the environment as well as assist both individuals and policymakers in achieving decarbonization and adopting low-carbon living during the work arrangement transition, which could contribute to sustainable development.</p>
|