11

BIOCHEMICAL METHANE POTENTIAL TESTING AND MODELLING FOR INSIGHT INTO ANAEROBIC DIGESTER PERFORMANCE

Sarah Daly (9183209) 30 July 2020 (has links)
Anaerobic digestion uses a mixed microbial community to convert organic wastes to biogas, thereby generating clean renewable energy and reducing greenhouse gas emissions. However, few studies have quantified the relationship between waste composition and the subsequent physical and chemical changes in the digester. This Ph.D. dissertation aimed to gain new knowledge about how these differences in waste composition ultimately affect digester function. The dissertation examined three areas of digester function: (1) hydrogen sulfide production, (2) digester foaming, and (3) methane yield.

To accomplish these aims, materials from four large-scale field digesters were collected at different time points and from different locations within the digester systems, including influent, liquid in the middle of the digesters, effluent, and effluent after solids separation. The materials were used for biochemical methane potential (BMP) tests in 43 groups of lab-scale digesters, each containing duplicate or triplicate digesters. The materials from the field digesters and the effluents from the lab digesters were analyzed for an extensive set of chemical and physical characteristics. The three areas of digester function were examined in relation to the physical and chemical characteristics of the digester materials and effluents and to BMP performance.

Hydrogen sulfide production in the lab digesters ranged from non-detectable to 1.29 mL g VS⁻¹. Higher H2S concentrations in the biogas were observed within the first ten days of testing. The initial Fe(II):S ratio and the OP concentration had important influences on H2S production. Influent parameters related to digester foaming were the Fe(II):S, Fe(II):TP, and TVFA:TALK ratios and the Cu concentration. Digesters receiving mixed waste streams could be more vulnerable to foaming. The characteristics of each waste type varied significantly with substrate type, inoculum type, and digester functioning. The influent chemical characteristics of the waste significantly affected all aspects of digester function. Using multivariate statistics and machine learning, models were developed and predictions of digester outcomes were simulated based on the initial characteristics of the waste types.
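The final modelling step described above (predicting digester outcomes from influent characteristics with multivariate statistics and machine learning) can be illustrated with a minimal sketch. The feature set, the random-forest choice, and the synthetic data below are assumptions for illustration only, not the dissertation's actual pipeline or measurements.

```python
# Minimal sketch: predicting a digester outcome (e.g., BMP methane yield)
# from influent characteristics. Feature names and data are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 120  # hypothetical number of lab-digester observations

# Hypothetical influent characteristics (units omitted for brevity)
X = np.column_stack([
    rng.uniform(0.5, 5.0, n),   # Fe(II):S ratio
    rng.uniform(0.1, 2.0, n),   # TVFA:TALK ratio
    rng.uniform(5, 80, n),      # volatile solids, g/L
    rng.uniform(0.01, 1.0, n),  # Cu concentration, mg/L
])
# Synthetic target standing in for a measured methane yield (mL CH4 per g VS)
y = 200 + 30 * X[:, 0] - 50 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 10, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean().round(3))

model.fit(X, y)
print("feature importances:", model.feature_importances_.round(3))
```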
12

DEVELOPING A DECISION SUPPORT SYSTEM FOR CREATING POST DISASTER TEMPORARY HOUSING

Mahdi Afkhamiaghda (10647542) 07 May 2021 (has links)
Post-disaster temporary housing has been a significant challenge for emergency management groups and industries for many years. According to reports by the Department of Homeland Security (DHS), housing in states and territories is ranked second to last in proficiency among the 32 core capabilities for preparedness. The number of temporary housing units required in a geographic area is influenced by a variety of factors, including social issues, financial concerns, labor availability, and climate conditions. Acknowledging and balancing these interconnected needs is one of the main challenges that must be addressed. Post-disaster temporary housing is a multi-objective process, so reaching an optimized model depends on how different elements and objectives interact, and sometimes conflict, with each other. This makes decision making in post-disaster construction more restricted and challenging, which has led to ineffective management in post-disaster housing reconstruction.

Few studies have examined the use of artificial intelligence modeling to reduce the time and cost of post-disaster sheltering, and there is a knowledge gap regarding the selection, and the magnitude of effect, of the factors that determine the most suitable type of temporary housing unit (THU) after a disaster.

The framework proposed in this research uses supervised machine learning to maximize certain design aspects and minimize some of the difficulties in creating temporary housing in post-disaster situations. The outcome of this study is the classification of the THU type, specifically whether THUs are built on-site or off-site. To collect primary data for creating the model and evaluating the magnitude of effect of each factor, a set of surveys was distributed among the key players and policymakers involved in providing temporary housing to people affected by natural disasters in the United States. The framework draws on the tacit knowledge of experts in the field to identify the challenges and issues in the subject. The result of this study is a data-driven multi-objective decision-making tool for selecting the THU type. Using this tool, policymakers in charge of selecting and allocating post-disaster accommodations can select the THU type most responsive to the local needs and characteristics of the affected people in each natural disaster.
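A minimal sketch of the kind of supervised classifier described above (on-site vs. off-site THU selection from survey-derived factors) is shown below. The factor names, encoding, and data are illustrative assumptions, not the study's survey results or chosen algorithm.

```python
# Minimal sketch of the on-site vs. off-site THU classification described above.
# Factor names, encodings, and data are illustrative assumptions, not survey results.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 200  # hypothetical number of survey-derived scenarios

# Hypothetical factors rated in the surveys (e.g., on a 1-5 scale)
X = rng.integers(1, 6, size=(n, 4))  # social, financial, labor availability, climate
# Synthetic label: 1 = built on-site, 0 = built off-site
y = (X[:, 2] + X[:, 3] + rng.normal(0, 1, n) > 6).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
clf = RandomForestClassifier(n_estimators=300, random_state=1).fit(X_tr, y_tr)

print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
# Feature importances give one rough proxy for the "magnitude of effect" of each factor.
print("importances (social, financial, labor, climate):",
      clf.feature_importances_.round(3))
```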
13

INVESTIGATION OF CHEMISTRY IN MATERIALS USING FIRST-PRINCIPLES METHODS AND MACHINE LEARNING FORCE FIELDS

Pilsun Yoo (11159943) 21 July 2021 (has links)
First-principles methods such as density functional theory (DFT) often produce quantitative predictions for the physics and chemistry of materials with explicit descriptions of electronic behavior. We were able to provide electronic-structure information on chemical doping and the metal-insulator transition of rare-earth nickelates that is not easily accessible through experimental characterization. Moreover, by combining DFT energetics with mean-field microkinetic modeling, we modeled the water-gas shift reaction catalyzed by Fe3O4 at steady state and determined the favorable reaction mechanism. However, the high computational cost of DFT calculations makes it impossible to investigate complex chemical processes involving hundreds of elementary steps and more than thousands of atoms in realistic systems. The study of molecular high-energy (HE) materials using the reactive force field (ReaxFF) has contributed to understanding chemically induced detonation processes in systems with nanoscale defects as well as defect-free systems. However, the reduced accuracy of such force fields can also lead to conclusions that differ from DFT calculations and experimental results. Machine learning force fields are a promising alternative that approach the simulation size and speed of ReaxFF while maintaining the accuracy of DFT. In this respect, we developed a neural network reactive force field (NNRF) that was iteratively parameterized with DFT calculations to address the shortcomings of ReaxFF. We built an efficient and accurate NNRF for the complex decomposition reactions of HE materials such as the high-energy nitramine 1,3,5-trinitroperhydro-1,3,5-triazine (RDX) and predicted results consistent with experimental findings. This work demonstrates approaches for clarifying the reaction details of materials using first-principles methods and machine learning force fields to guide quantitative predictions of complex chemical processes.
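The core NNRF idea (a neural network regressor trained to reproduce reference energies, then queried at force-field cost) can be sketched at toy scale. The example below fits a small network to a synthetic Lennard-Jones-style pair potential standing in for DFT data; the descriptor, network size, and data are placeholders and are not the NNRF, which is trained iteratively on DFT calculations of RDX decomposition.

```python
# Toy sketch of the neural-network force-field idea: fit a regressor to
# reference energies (here a synthetic Lennard-Jones-like pair potential
# standing in for DFT data), then query it cheaply. This is not the NNRF itself.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
r = rng.uniform(0.9, 3.0, 2000)             # pair distances (arbitrary units)
energy = 4.0 * (r**-12 - r**-6)             # reference "DFT" energies (synthetic)

model = MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                     max_iter=5000, random_state=2)
model.fit(r.reshape(-1, 1), energy)

r_test = np.linspace(1.0, 2.5, 5).reshape(-1, 1)
print("NN energy:", model.predict(r_test).round(3))
print("reference:", (4.0 * (r_test[:, 0]**-12 - r_test[:, 0]**-6)).round(3))
# Forces would follow from the negative gradient of the learned energy surface,
# which a production force field would evaluate analytically.
```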
14

Computational methods for protein-protein interaction identification

Ziyun Ding (7817588) 05 November 2019 (has links)
Understanding protein-protein interactions (PPIs) in a cell is essential for learning protein functions, pathways, and mechanisms of diseases. This dissertation introduces computational methods to predict PPIs. The first chapter reviews the history of identifying protein interactions and some experimental methods. Because interacting proteins share similar functions, protein function similarity can be used as a feature to predict PPIs. The NaviGO server was developed for biologists and bioinformaticians to visualize Gene Ontology relationships and quantify their similarity scores. Furthermore, the computational features used to predict PPIs are summarized; this helps researchers from the computational field understand the rationale for extracting biological features and also helps researchers with a biology background understand the computational work. After reviewing these computational features, a computational prediction method to identify large-scale PPIs was developed and applied to Arabidopsis, maize, and soybean at the whole-genome scale. Novel predicted PPIs were provided and grouped by prediction confidence level, which can be used as testable hypotheses to guide biologists' experiments. Since affinity chromatography combined with mass spectrometry introduces many false PPIs, the computational method was combined with mass spectrometry data to aid the identification of high-confidence PPIs at large scale. Lastly, some remaining challenges of computational PPI prediction methods and future work are discussed.
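A minimal sketch of feature-based PPI prediction follows: each candidate protein pair is represented by similarity-style features (for example, Gene Ontology similarity scores of the kind NaviGO quantifies) and classified as interacting or not. The specific features, classifier, and data below are illustrative assumptions, not the dissertation's method.

```python
# Minimal sketch of feature-based PPI prediction: each candidate protein pair is
# described by similarity-style features and classified as interacting or not.
# Features and data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_pairs = 500

X = np.column_stack([
    rng.uniform(0, 1, n_pairs),   # GO biological-process similarity
    rng.uniform(0, 1, n_pairs),   # GO molecular-function similarity
    rng.uniform(0, 1, n_pairs),   # co-expression correlation
])
# Synthetic labels: pairs with high functional similarity are more likely to interact
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2]
     + rng.normal(0, 0.15, n_pairs) > 0.55).astype(int)

clf = GradientBoostingClassifier(random_state=3)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", auc.mean().round(3))
# Ranking unseen pairs by predicted probability yields confidence-grouped
# candidate PPIs, analogous to the confidence levels described above.
```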
15

Reconfigurable Microwave/Millimeter-Wave Filters: Automated Tuning and Power Handling Analysis

Pintu Adhikari (11640121) 03 November 2021 (has links)
In recent years, intelligent devices such as smartphones and self-driving cars have become ubiquitous in daily life, and wireless communication is correspondingly omnipresent. To efficiently utilize the electromagnetic spectrum, automatically reconfigurable, software-controlled radio transceivers are drawing extensive attention. Implementing such a transceiver requires automatically tunable RF front-end components, in particular tunable filters. Over the last decade, tunable filters have shown promising performance with high quality factor (Q), wide tuning range, and high power handling. However, most existing tunable filters are manually adjusted. This research therefore focuses on developing a novel automatic, software-driven tuning technique for continuously tunable microwave and millimeter-wave filters.

First, a K-band continuously tunable bandpass filter is demonstrated with contactless printed circuit board (PCB) tuners. Then, an automatic tuning technique based on deep Q-learning is proposed and realized to tune a filter with contactless tuners automatically. Two-pole, three-pole, and four-pole bandpass filters are experimentally tuned without any human intervention to prove the feasibility of the technique. For the first time, unlike with a look-up table, the filters can be continuously tuned to a practically infinite number of frequencies inside the tuning range.

Next, a K/Ka-band tunable absorptive bandstop filter (ABSF) is designed and fabricated in low-cost PCB technology. Compared with a reflective bandstop filter, an ABSF is preferred for interference mitigation because of its deeper notch and lower reflection. However, the absorbed power may limit the filter's power handling. Therefore, lastly, a comparative analysis of the power handling capability (PHC) of a reflective bandstop filter and an absorptive bandstop filter is presented, both theoretically and experimentally.
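The automatic tuning task is a reinforcement-learning problem: the agent adjusts the contactless tuners, observes the filter response, and is rewarded for approaching the target frequency. The dissertation uses deep Q-learning on measured responses; the sketch below substitutes a simplified tabular Q-learning agent on a one-dimensional toy tuner, purely to illustrate the learning loop. The states, reward, and dynamics are assumed stand-ins, not the actual filter model.

```python
# Simplified illustration of reinforcement-learning-based filter tuning.
# A 1-D tuner position stands in for the notional center frequency; everything
# here (states, reward, dynamics) is an assumed toy, not the measured filter.
import numpy as np

rng = np.random.default_rng(4)
n_positions = 21          # discretized tuner positions (states)
target = 13               # state whose "center frequency" matches the target
actions = [-1, 0, +1]     # move tuner down, hold, move up

Q = np.zeros((n_positions, len(actions)))
alpha, gamma, eps = 0.2, 0.9, 0.2

for episode in range(500):
    s = rng.integers(n_positions)
    for _ in range(50):
        a = rng.integers(3) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = int(np.clip(s + actions[a], 0, n_positions - 1))
        reward = -abs(s_next - target)          # closer to target = higher reward
        Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

# Greedy policy: from any starting position, the learned table walks toward the target.
s = 0
for step in range(25):
    s = int(np.clip(s + actions[int(np.argmax(Q[s]))], 0, n_positions - 1))
print("final tuner position:", s, "(target:", target, ")")
```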
16

HIGHER ORDER OPTIMIZATION TECHNIQUES FOR MACHINE LEARNING

Sudhir B. Kylasa (5929916) 09 December 2019 (has links)
First-order methods such as stochastic gradient descent are the methods of choice for solving non-convex optimization problems in machine learning. These methods primarily rely on the gradient of the loss function to estimate a descent direction. However, they have a number of drawbacks, including convergence to saddle points (as opposed to minima), slow convergence, and sensitivity to parameter tuning. In contrast, second-order methods, which use curvature information in addition to the gradient, have theoretically been shown to achieve faster convergence rates. When used in machine learning applications, they offer faster (quadratic) convergence, stability with respect to parameter tuning, and robustness to problem conditioning. In spite of these advantages, first-order methods are commonly used because of their simplicity of implementation and low per-iteration cost; the need to generate and use curvature information in the form of a dense Hessian matrix makes each iteration of a second-order method more expensive.

In this work, we address three key problems associated with second-order methods: (i) what is the best way to incorporate curvature information into the optimization procedure; (ii) how do we reduce the operation count of each iteration of a second-order method while maintaining its superior convergence properties; and (iii) how do we leverage high-performance computing platforms to significantly accelerate second-order methods. To answer the first question, we propose and validate the use of Fisher information matrices in second-order methods to significantly accelerate convergence. The second question is answered through statistical sampling techniques that suitably sample matrices to reduce per-iteration cost without impacting convergence. The third question is addressed through the use of graphics processing units (GPUs) on distributed platforms to deliver state-of-the-art solvers.

Through our work, we show that our solvers are capable of significant improvements over state-of-the-art optimization techniques for training machine learning models. We demonstrate improvements in training time (over an order of magnitude in wall-clock time), in the generalization properties of the learned models, and in robustness to problem conditioning.
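The first two questions above (how to use curvature, and how to cut per-iteration cost by sampling) can be illustrated together with a sub-sampled Newton step for L2-regularized logistic regression, a case in which the Hessian coincides with the empirical Fisher information. The data, sample sizes, and plain dense solve below are illustrative assumptions, not the dissertation's GPU-accelerated solvers.

```python
# Sketch of a sub-sampled Newton method for L2-regularized logistic regression:
# full gradient, curvature estimated from a 10% subsample, dense Newton solve.
# Data and solver details are illustrative only.
import numpy as np

rng = np.random.default_rng(5)
n, d = 5000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)
lam = 1e-3

def grad(w, Xb, yb):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    return Xb.T @ (p - yb) / len(yb) + lam * w

def hessian(w, Xb):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    D = p * (1.0 - p)                       # per-sample curvature weights
    return (Xb * D[:, None]).T @ Xb / len(Xb) + lam * np.eye(d)

w = np.zeros(d)
for it in range(10):
    g = grad(w, X, y)                       # full gradient
    idx = rng.choice(n, size=500, replace=False)
    H = hessian(w, X[idx])                  # Hessian from a 10% subsample
    w -= np.linalg.solve(H, g)              # Newton direction on sampled curvature
    print(f"iter {it:2d}  ||grad|| = {np.linalg.norm(g):.2e}")
```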
17

THE GAME CHANGER: ANALYTICAL METHODS FOR ENERGY DEMAND PREDICTION UNDER CLIMATE CHANGE

Debora Maia Silva (10688724) 22 April 2021 (has links)
Accurate prediction of electricity demand is a critical step in balancing the grid. Many factors influence electricity demand; among them, climate variability has been the most pressing in recent times, challenging the resilient operation of the grid, especially during climatic extremes. In this dissertation, fundamental challenges related to accurate characterization of the climate-energy nexus are presented in Chapters 2-4, as described below.

Chapter 2 explores the cost of neglecting the role of humidity in predicting summer-time residential electricity consumption. Analysis of electricity demand in the CONUS region demonstrates that even though surface temperature, the most widely used metric for characterizing heat stress, is an important factor, it is not sufficient for accurately characterizing cooling demand. The chapter proceeds to show significant underestimation of the climate sensitivity of demand, both in the observational space and under climate change. Specifically, the analysis reveals underestimations as high as 10-15% across CONUS, especially in high energy-consuming states such as California and Texas.

Chapter 3 takes a critical look at one of the most widely used metrics, the Cooling Degree Days (CDD), which is often calculated with an arbitrary set-point temperature of 65°F (18.3°C), ignoring possible variations due to different patterns of electricity consumption across regions and climate zones. In this chapter, updated set-point values are derived from historical electricity consumption data across the country at the state level. The analysis demonstrates significant variation, as high as ±25%, between the derived set points and the conventional value of 65°F. Moreover, the CDD calculation is extended to account for the role of humidity, in light of the lessons learned in the previous chapter. The results reveal that under climate change scenarios, the air-temperature-based CDD underestimates thermal comfort by as much as ~22%.

The predictive analytics conducted in Chapters 2 and 3 revealed a significant challenge in characterizing the climate-demand nexus: the ability to capture the variability at the upper tails. Chapter 4 explores this challenge, with the specific goal of developing an algorithm to increase prediction accuracy at the higher quantiles of the demand distributions. Specifically, Chapter 4 presents a data-centric approach at the utility level (as opposed to the state-level analyses of the previous chapters), focusing on the high energy-consuming states of California and Texas. The developed algorithm shows a general improvement of 7% in mean prediction accuracy and an improvement of 15% for 90th-quantile predictions.
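Chapter 3's starting point, the conventional CDD with a fixed 65°F set point, is simple to state in code, which makes the sensitivity to the set point easy to see. In the sketch below, the daily temperatures and the "derived" set point of 72°F are synthetic placeholders, not the study's data or results.

```python
# Sketch of the Cooling Degree Day (CDD) calculation discussed in Chapter 3:
# the conventional fixed 65 F set point versus a regionally derived one.
# The temperatures below are synthetic placeholders, not the study's data.
import numpy as np

rng = np.random.default_rng(6)
daily_mean_temp_F = rng.normal(loc=78.0, scale=8.0, size=365)  # one synthetic year

def cooling_degree_days(daily_mean_F, set_point_F):
    """Sum of positive exceedances of the daily mean temperature over the set point."""
    return float(np.sum(np.clip(daily_mean_F - set_point_F, 0.0, None)))

cdd_conventional = cooling_degree_days(daily_mean_temp_F, 65.0)
cdd_derived = cooling_degree_days(daily_mean_temp_F, 72.0)  # hypothetical derived set point

print(f"CDD (65 F set point):    {cdd_conventional:.0f}")
print(f"CDD (derived set point): {cdd_derived:.0f}")
print(f"relative difference:     {100*(cdd_derived-cdd_conventional)/cdd_conventional:.1f}%")
```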
18

Predictive Quality Analytics

Salim A Semssar (11823407) 03 January 2022 (has links)
Quality drives customer satisfaction, improved business performance, and safer products. Reducing waste and variation is critical to the financial success of organizations. Today, Lean and Six Sigma are commonly used as the two main strategies for improving quality. As advancements in information technology enable the use of big data, defect-reduction and continuous-improvement philosophies will benefit and even prosper. Predictive Quality Analytics (PQA) is a framework in which risk assessment and machine learning technology help detect anomalies in the entire ecosystem, not just in the manufacturing facility. PQA serves as an early warning system that directs resources to where help and mitigation actions are most needed. In a world where limited resources are the norm, focused action on the significant few defect drivers can be the difference between success and failure.
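One way to read the PQA framework is as machine-learning-based anomaly detection feeding a risk assessment. The sketch below uses an Isolation Forest on synthetic two-feature process data as a stand-in for that early-warning component; the features, data, and model choice are assumptions, not the framework's actual implementation.

```python
# Illustrative sketch of the anomaly-detection component of a PQA-style early
# warning system: an Isolation Forest flags unusual process measurements so that
# resources can be directed to investigate them. Data and features are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
normal = rng.normal(loc=[10.0, 50.0], scale=[0.2, 1.0], size=(980, 2))   # in-control process
drifted = rng.normal(loc=[11.5, 55.0], scale=[0.4, 2.0], size=(20, 2))   # emerging defect driver
measurements = np.vstack([normal, drifted])

detector = IsolationForest(contamination=0.02, random_state=7).fit(measurements)
flags = detector.predict(measurements)            # -1 = anomaly, +1 = normal

print("flagged measurements:", int((flags == -1).sum()), "of", len(measurements))
# In a PQA setting these flags would feed a risk assessment that prioritizes
# the "significant few" defect drivers mentioned above.
```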
19

Large Eddy Simulations of a Back-step Turbulent Flow and Preliminary Assessment of Machine Learning for Reduced Order Turbulence Model Development

Biswaranjan Pati (11205510) 30 July 2021 (has links)
Accuracy in turbulence modeling remains a hurdle in the widespread use of Computational Fluid Dynamics (CFD) as a tool for furthering fluid dynamics research. Meanwhile, computational power remains a significant concern for solving real-life wall-bounded flows, which span a wide range of length and time scales. The tools for turbulence analysis at our disposal, in decreasing order of accuracy, include Direct Numerical Simulation (DNS), Large Eddy Simulation (LES), and Reynolds-Averaged Navier-Stokes (RANS) based models. While DNS and LES will remain exorbitantly expensive options for simulating high-Reynolds-number flows for the foreseeable future, RANS is and continues to be a viable option in commercial and academic work. In the first part of the present work, flow over the back-step test case was solved, and parametric studies of quantities such as recirculation length (Xr), pressure coefficient (Cp), and skin-friction coefficient (Cf) are presented and validated against experimental results. The back-step setup was chosen as the test case because turbulence modeling of flow past a backward-facing step has been pivotal to a better understanding of separated flows. Turbulence modeling of the test case is performed with RANS (k-ε and k-ω models) and LES for different Reynolds numbers (Re ∈ {2, 2.5, 3, 3.5} × 10⁴) and expansion ratios (ER ∈ {1.5, 2, 2.5, 3}). The LES results show good agreement with the experimental results, and the discrepancy between the RANS results and the experimental data is highlighted. The results obtained in the first part reveal a pattern of under-prediction when RANS-based models are used to analyze canonical setups such as the backward-facing step. Because the LES results closely match the experimental data, they serve as an excellent source of training data for the machine learning analysis outlined in the second part. The highlighted discrepancy and the inability of the RANS model to accurately predict significant flow properties create the need for a better model. The purpose of the second part of the present study is therefore to make systematic efforts to minimize the error between flow properties from RANS modeling and experimental data. A machine learning model was constructed to predict the eddy viscosity parameter (μt) as a function of turbulent kinetic energy (TKE) and dissipation rate (ε) derived from LES data, effectively working as an ad hoc eddy-viscosity-based turbulence model. The machine learning model does not work well on the flow domain as a whole, but a zonal analysis yields better predictions of eddy viscosity; among the zones, the region in the vicinity of the recirculation zone gives the best result. These results point to the need for zonal analysis for better performance of the machine learning model, which will enable improved RANS predictions through the development of a reduced-order turbulence model.
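The second part's regression task (learning μt from TKE and ε) can be sketched as follows. Because no LES data are reproduced here, synthetic targets are generated from the standard k-ε closure μt = ρ Cμ k²/ε with added noise, purely as a stand-in; the feature set, model choice, and data are assumptions rather than the study's setup.

```python
# Sketch of regressing the eddy viscosity mu_t on turbulent kinetic energy k and
# dissipation rate eps. Synthetic targets follow mu_t = rho * C_mu * k^2 / eps
# plus noise, standing in for LES-derived values used in the actual study.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(8)
rho, C_mu = 1.2, 0.09
k = rng.uniform(1e-3, 1.0, 3000)          # turbulent kinetic energy (m^2/s^2)
eps = rng.uniform(1e-2, 10.0, 3000)       # dissipation rate (m^2/s^3)
mu_t = rho * C_mu * k**2 / eps * (1.0 + 0.1 * rng.normal(size=k.size))

X = np.column_stack([k, eps])
X_tr, X_te, y_tr, y_te = train_test_split(X, mu_t, test_size=0.25, random_state=8)
model = RandomForestRegressor(n_estimators=200, random_state=8).fit(X_tr, y_tr)
print("held-out R^2:", round(r2_score(y_te, model.predict(X_te)), 3))
# A zonal version would fit one such model per flow region (e.g., the
# recirculation zone), mirroring the zonal analysis described above.
```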
20

Machine Learning-Based Predictive Methods for Polyphase Motor Condition Monitoring

David Matthew LeClerc (13048125) 29 July 2022 (has links)
This paper explored the application of three machine learning models to predictive motor maintenance: logistic regression, sequential minimal optimization (SMO), and naïve Bayes. A comparative analysis illustrated that while each model achieved an accuracy greater than 95% in this study, the logistic regression model exhibited the most reliable operation.
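A sketch of the three-model comparison on synthetic motor-condition data is shown below. The SMO model is represented by a support vector classifier, since SMO is an algorithm for training SVMs; the features, data, and resulting scores are illustrative and are not the paper's experiments.

```python
# Sketch of the three-model comparison described above on synthetic motor data.
# Features, data, and scores are illustrative, not the paper's measurements.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for motor features (e.g., phase currents, vibration, temperature)
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           class_sep=2.0, random_state=9)   # y: 0 = healthy, 1 = fault

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "SVM (SMO-style)":     SVC(kernel="rbf"),
    "Naive Bayes":         GaussianNB(),
}
for name, model in models.items():
    acc = cross_val_score(make_pipeline(StandardScaler(), model), X, y, cv=5)
    print(f"{name:20s} accuracy = {acc.mean():.3f}")
```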
