1

DEVELOPMENT OF DATA-DRIVEN APPROACHES FOR WASTEWATER MODELING

Zhou, Pengxiao January 2023
Simplified representations of wastewater treatment systems, known as wastewater models, are critical for effectively operating and managing these complex systems. Wastewater modeling allows for the understanding, monitoring, and prediction of wastewater treatment processes by capturing intricate relationships within the system. Process-driven models (PDMs), which rely on a set of interconnected hypotheses and assumptions, are commonly used to capture the physical, chemical, and biological mechanisms of wastewater treatment. More recently, with the development of advanced algorithms and sensor techniques, data-driven models (DDMs), which analyze data about a system by finding relationships between the system state variables without relying on explicit knowledge of the system, have emerged as a complementary alternative. However, both PDMs and DDMs have limitations. For example, uncertainties in PDMs can arise from imprecise calibration of empirical parameters and natural process variability. Applications of DDMs are limited to certain objectives because of a lack of high-quality datasets and difficulty capturing changing relationships. Therefore, this dissertation aims to enhance the stable operation and effective management of wastewater treatment plants (WWTPs) by addressing these limitations through the pursuit of three objectives: (1) investigating an efficient data-driven approach for uncertainty analysis of process-driven secondary settling tank models; (2) developing data-driven models that can leverage sparse and imbalanced data for the prediction of emerging contaminant removal; and (3) exploring an advanced data-driven model for influent flow rate predictions during the COVID-19 emergency. / Thesis / Doctor of Philosophy (PhD) / Ensuring appropriate treatment and recycling of wastewater is vital to sustaining life. Wastewater treatment plants (WWTPs), whose complicated processes include several intricate physical, chemical, and biological procedures, play a significant role in water recycling. Due to stricter regulations and complex wastewater composition, the wastewater treatment system has become increasingly complex. Therefore, it is crucial to use simplified representations of the system, known as wastewater models, to effectively operate and manage it. The aim of this thesis is to develop data-driven approaches for wastewater modeling.
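As a rough illustration of the flavor of objective (3), predicting influent flow rate from historical observations, the sketch below fits a regression model to lagged flow values. Everything here (the synthetic series, the gradient-boosting choice, the one-hour horizon) is invented for illustration and is not the dissertation's actual data or method:

```python
# Minimal sketch of a data-driven influent flow-rate predictor.
# Illustrative only: the series is synthetic and the model choice is arbitrary.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
hours = np.arange(2000)
# Synthetic hourly influent flow with a diurnal cycle plus noise:
flow = 100 + 20 * np.sin(hours * 2 * np.pi / 24) + rng.normal(0, 3, hours.size)

lags = 24  # use the previous 24 hours to predict the next hour
X = np.column_stack([flow[i:flow.size - lags + i] for i in range(lags)])
y = flow[lags:]

split = int(0.8 * y.size)  # chronological split to avoid look-ahead leakage
model = GradientBoostingRegressor().fit(X[:split], y[:split])
print("test MAE:", mean_absolute_error(y[split:], model.predict(X[split:])))
```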
2

Discrete Event Simulation of Operating Rooms Using Data-Driven Modeling

Malik, Mandvi January 2018
No description available.
3

An Examination of Mathematics Teachers’ Use of Student Data in Relationship to Student Academic Performance

Hartmann, Lillian Ann 12 1900
Among educational researchers, important questions are being asked about how to improve mathematics instruction for elementary students. This study, conducted in a north Texas public school with 294 third- through fifth-grade students, ten teachers, and three coaches, examined the relationship between students' achievement in mathematics and the mathematics teaching and coaching instruction they received. Student achievement was measured by the Computer Adaptive Instrument (CAT), which is administered three times a year in the district and is the main criterion for students' performance and movement in the district's response-to-intervention program for mathematics. The response-to-intervention model employs student data to guide instruction and learning in the classroom and in supplemental sessions. The theoretical framework of the concerns-based adoption model (CBAM) was the basis for investigating the concerns that mathematics teachers and coaches had in using the CAT student data to inform their instruction. The CAT data, based on item response theory, was the innovation. Unique to this study was the paralleling of teachers' and coaches' concerns and use profiles with student scores using an empirical approach. Data were collected from teachers at three intervals through the Stages of Concern Questionnaire, the Levels of Use interviews, and the Innovation Configuration Components Matrix, along with student CAT scaled scores at the same three intervals. Multiple regression analyses of the concerns against the CAT scores, and of the levels of use against the CAT scores, were conducted to determine whether relationships existed between the variables. The findings indicated that, overall, the teachers and coaches who scored high in personal concerns at the three data points remained at low levels of use, or non-use, of CAT data in their instruction. Only two teachers showed movement from intense personal concerns to high concerns regarding the impact on students, and this correlated with their increased use of CAT data across the three collection points. The regression analyses indicated no correlations between the teachers' and coaches' concerns and the CAT scores, and no correlations between their levels of data use and the CAT scores. At the exit interviews, patterns suggested that the presence of a change facilitator might have made a difference in their understanding and use of the CAT data, ultimately impacting student achievement. This study sets a new precedent in the use of CBAM data and offers insights into the necessity of providing support and training in a change process.
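For readers unfamiliar with the analysis described above, the following is a schematic of a multiple regression of achievement scores on concern and use measures. All variable names and data are hypothetical placeholders, not the study's instruments or results:

```python
# Hypothetical sketch of a multiple regression of mean class achievement on
# concern-stage and level-of-use measures; every value here is invented.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 13                                              # 10 teachers + 3 coaches
personal_concern = rng.uniform(0, 100, n)           # placeholder SoCQ-style scores
level_of_use = rng.integers(0, 7, n).astype(float)  # placeholder LoU-style ratings
mean_score = rng.normal(220, 15, n)                 # placeholder class mean scores

X = np.column_stack([personal_concern, level_of_use])
reg = LinearRegression().fit(X, mean_score)
print("R^2:", reg.score(X, mean_score))  # near zero here, since the data are random
```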
4

MULTI-STREAM DATA-DRIVEN TELEMETRY SYSTEM

Can, Ouyan, Chang-jie, Shi 11 1900
International Telemetering Conference Proceedings / November 04-07, 1991 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The Multi-Stream Data-Driven Telemetry System (MSDDTS) is a new-generation system in China developed by the Beijing Research Institute of Telemetry (BRIT) for high-bit-rate, multi-stream data acquisition, processing, and display. Features of the MSDDTS include: up to 4 data streams; a data-driven architecture; multiple processors for parallel processing; a modular, configurable, expandable, and programmable design; stand-alone capability; and external control by a host computer. This paper addresses three important aspects of the MSDDTS. First, the system architecture is discussed. Second, three basic models of the system configuration are described. Third, the future development of the system is outlined.
5

Deep Learning of Unknown Governing Equations

Chen, Zhen January 2021
No description available.
6

Data Driven Decision-Making and Principals' Perceptions

McCray, Melissa F 13 December 2014
In this era of high-stakes accountability, assessments are not only used as diagnostic tools but are also used to determine the effectiveness of school programs and personnel. Of the utmost importance is how principals use data to make instructional, intervention, and planning decisions. The purpose of the current study was to determine principals' perceptions regarding the importance, availability, and utility of multiple sources of data in making their decisions, and to determine their self-efficacy in data-driven decision-making practices. This study was guided by 7 research questions and utilized 2 research designs. Descriptive research was used to answer research questions 1 through 6. Questions 1 through 3 sought to determine what data were available, used, and important. Question 4 sought to determine the extent to which principals relied on data to make decisions. Question 5 sought to determine the importance of different types of support for the effective use of data in decision-making. Question 6 sought to determine principals' perceived self-efficacy in terms of effective data use. Question 7 was answered using correlational research to determine whether principals' measures of data-use self-efficacy were related to student achievement. Overall, results showed that data on student grades, attendance, and discipline were the most highly utilized in decision-making. All participating principals indicated that they used data to a moderate or great degree when making decisions regarding development or revision of the school improvement plan; informing parents of students' progress, status, and test scores; assignment of students to remedial programs; and improving classroom instruction. Data analysis further showed that principals considered school personnel trained in data analysis, sufficient time for data analysis, and staff development in the data-analysis process to be extremely important. Further analysis revealed that participating principals had high measures of data-use self-efficacy and were highly certain that they could use data effectively. In the final analysis of the study, a Pearson's r correlation coefficient was computed to assess the relationship between principals' self-efficacy scores and student achievement. It was determined that there is no relationship between measures of principals' perceived data-use self-efficacy and student achievement. The study concludes with recommendations for future research.
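The final correlational analysis mentioned above can be sketched in a few lines. The numbers below are invented, and pearsonr is simply the standard SciPy routine, not anything specific to the study:

```python
# Sketch of a Pearson's r computation between principals' data-use
# self-efficacy and a school achievement measure; the data are invented.
import numpy as np
from scipy.stats import pearsonr

self_efficacy = np.array([78, 85, 92, 88, 74, 95, 81, 90])  # placeholder scores
achievement = np.array([61, 57, 70, 64, 59, 66, 62, 68])    # placeholder means

r, p = pearsonr(self_efficacy, achievement)
print(f"r = {r:.2f}, p = {p:.3f}")  # a large p-value suggests no reliable relationship
```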
7

Air Pollution Modelling and Forecasting in Hamilton Using Data-Driven Methods

Solaiman, Tarana 06 1900
The purpose of this research is to provide an extensive evaluation of neural network models for the prediction and simulation of some key air pollutants in Hamilton, Ontario, Canada. Hamilton experiences one of Canada's highest air pollution exposures because of the dual problem of continuing industrial emissions and a gradual increase in traffic-related emissions, along with transboundary air pollution from heavily industrialized neighboring northeastern and midwestern US cities. These factors, combined with meteorology, cause large degradations of Hamilton's air quality. Hence, an appropriate and robust method is of the utmost importance in order to get early notification of future air quality conditions. Data-driven methods such as neural networks (NNs) are becoming very popular due to their inherent capability to capture the complex non-linear relationships between pollutants, climatic variables, and non-climatic variables such as traffic variables, emission factors, etc. This study investigates dynamic neural networks, namely the time-lagged feed-forward neural network (TLFN), the Bayesian neural network (BNN), and the recurrent neural network (RNN), for short-term forecasting. The results are compared with benchmark static multilayer perceptron (MLP) models. The analysis shows that the TLFN model, with its time-delay memory, and the RNN, with its adaptive memory, outperformed the static MLP models in ground-level ozone (O3) forecasting for up to 12 hours ahead. Furthermore, the model developed using the annual database is able to map the variations in seasonal concentrations. On the other hand, the MLP model was quite competitive for nitrogen dioxide (NO2) prediction when compared to the dynamic NN-based models. The study further assesses the ability of the neural network models to generate pollutant concentrations at sites where sampling has not been done. Using these neural network models, data values were generated for total suspended particulate (TSP) and inhalable particulate (PM10) concentrations. The obtained results show promising potential. Although there were under-predictions and over-predictions on some occasions, the neural network models in general were able to generate the missing information and characterize the air quality situation in the study area. / Thesis / Master of Applied Science (MASc)
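To make the TLFN idea concrete, the sketch below feeds lagged pollutant and temperature values into a standard feed-forward network to produce a 12-hour-ahead ozone forecast. The synthetic data and the scikit-learn MLP stand in for the Hamilton monitoring records and the dynamic network architectures evaluated in the thesis:

```python
# TLFN-style sketch: lagged ozone and temperature feed a standard MLP that
# forecasts ozone 12 hours ahead. Synthetic data; illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
t = np.arange(3000)
temp = 15 + 10 * np.sin(t * 2 * np.pi / 24) + rng.normal(0, 1, t.size)
o3 = 30 + 0.8 * temp + rng.normal(0, 2, t.size)  # toy dependence on temperature

lags, horizon = 12, 12  # 12 lagged hours in, 12-hour-ahead forecast out
rows = o3.size - lags - horizon
X = np.array([np.r_[o3[k:k + lags], temp[k:k + lags]] for k in range(rows)])
y = o3[lags + horizon - 1 : lags + horizon - 1 + rows]

split = int(0.8 * rows)  # chronological split
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
net.fit(X[:split], y[:split])
print("test R^2:", net.score(X[split:], y[split:]))
```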
8

Physics-based and data-driven strategies for simulating colloid behavior in fractured aquifer systems

Ahmed, Ahmed January 2019
The design of effective quality management strategies in groundwater systems is crucial, as clean water is essential for livelihood, health, and development. Colloids represent a class of contaminants that might be pathogenic or benign. Colloids can also enhance or inhibit the transport of dissolved contaminants in groundwater, which has inspired the use of benign colloids in the remediation of contaminated aquifers. Reliable modelling of colloid behavior is therefore essential for the design of effective remediation strategies, both those employing benign colloids and those aiming at the removal of pathogenic colloids. While colloid transport is controlled by groundwater velocity, colloid retention is governed by the physical and chemical properties of the aquifer together with those of the colloid. The present study aims at enhancing the reliability of modelling colloid behavior in fractured aquifers through: i) developing a synchronization-based framework that can effectively identify hydraulic connections within the aquifer; ii) developing a mathematical model for the relationship between the fraction of colloids retained along a fracture (Fr) and the parameters describing the aquifer’s physical and chemical properties; iii) developing an analytical model for the relationship between Fr and the coefficient describing irreversible colloid deposition in single fractures; and, iv) developing a numerical technique that can efficiently simulate colloid behavior in single fractures and fracture networks under different physical, chemical, and matrix conditions. The performance of the synchronization-based framework, mathematical and analytical models, and the numerical technique was assessed separately for different verification cases, and the corresponding efficacy was confirmed. Coupling the tools developed in the present study enables the reliable prediction of colloid behavior in response to changes in the groundwater-colloid-fracture system’s physical and chemical properties, which can aid in understanding how to manipulate the system’s properties for the effective design of groundwater quality management and remediation strategies. / Thesis / Doctor of Philosophy (PhD) / Microorganisms, microplastics, clay, and fine silt are classified as colloids within the spectrum of contaminants that might exist in groundwater. Although some colloids are benign (e.g., clay and fine silt), they can still affect the groundwater quality and aquifer porosity. Colloids can also enhance or inhibit the migration of other contaminants in groundwater because of their high adsorption capacity. Several remediation strategies are being envisioned to remove pathogenic colloids and eliminate other contaminants adsorbed onto benign colloids, where effective design of such strategies requires reliable models of colloid behavior. The present study aims at enhancing the reliability of simulating colloid behavior in fractured aquifers through: i) developing models that capture the effects of the aquifer’s physical and chemical properties on colloid behavior; and, ii) designing a framework that improves the reliability of aquifer conceptualization. Effective remediation strategies can then be designed for contaminated fractured aquifers based on the developed tools.
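As a loose illustration of how a fraction-retained quantity such as Fr relates to a deposition coefficient, the sketch below uses the textbook first-order irreversible-deposition result for steady plug flow in a single fracture. This is an assumed simplification for illustration, not the mathematical or analytical models developed in the thesis:

```python
# Illustrative first-order deposition relation for a single fracture under
# steady plug flow: an assumed textbook simplification, not the thesis models.
import numpy as np

def fraction_retained(k_dep: float, length: float, velocity: float) -> float:
    """Fr = 1 - exp(-k*L/v): colloids deposit irreversibly at first-order
    rate k_dep [1/s] over the residence time L/v [s] in the fracture."""
    return 1.0 - np.exp(-k_dep * length / velocity)

# Example: k = 1e-4 1/s, 10 m fracture, 1e-3 m/s velocity -> Fr ~ 0.63
print(f"Fr = {fraction_retained(1e-4, 10.0, 1e-3):.3f}")
```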
9

Predictive Simulations of the Impedance-Matched Multi-Axis Test Method Using Data-Driven Modeling

Moreno, Kevin Joel 02 October 2020
Environmental testing is essential to certify that systems can withstand the harsh dynamic loads they may experience in their service environment or during transport. For example, satellites are subjected to large vibration and acoustic loads when transported into orbit and need to be certified with tests that are representative of the anticipated loads. However, traditional certification testing specifications can consist of sequential uniaxial vibration tests, which have been found to severely over- and under-test systems needing certification. The recently developed Impedance-Matched Multi-Axis Test (IMMAT) has been shown in the literature to improve upon traditional environmental testing practices through the use of multi-input multi-output testing and impedance matching. Additionally, with the use of numerical models, predictive simulations can be performed to determine optimal testing parameters. Developing an accurate numerical model, however, requires precise knowledge of the system's dynamic characteristics, such as boundary conditions or material properties. These characteristics are not always available and would also require additional testing for verification. Furthermore, some systems may be extremely difficult to model using numerical methods because they contain millions of finite elements requiring impractical time scales to simulate, or because they were fabricated before the mainstream use of computer-aided drafting and finite element analysis but are still in service. An alternative to numerical modeling is data-driven modeling, which does not require knowledge of a system's dynamic characteristics. The Continuous Residue Interpolation (CRI) method has been recently developed as a novel approach for building data-driven models of dynamical systems. CRI builds data-driven models by fitting smooth, continuous basis functions to a subset of frequency response function (FRF) measurements from a dynamical system. The resulting fitted basis functions can be sampled at any geometric location to approximate the expected FRF at that location. The research presented in this thesis explores the use of CRI-derived data-driven models in predictive simulations for the IMMAT performed on an Euler-Bernoulli beam. The results of the simulations reveal that CRI-derived data-driven models of an Euler-Bernoulli beam achieve similar performance when compared to a finite element model and make similar decisions when choosing the excitation locations in an IMMAT. / Master of Science / In the field of vibration testing, environmental tests are used to ensure that critical devices or structures can withstand harsh vibration environments. For example, satellites experience harsh vibrations and damaging acoustics that are transferred from their rocket transport vehicle. Traditional environmental tests would require that the satellite be placed on a vibration table and sequentially vibrated in multiple orientations for a specified duration and intensity. However, these traditional environmental tests do not always produce vibrations that are representative of the anticipated transport or operational environment. Newly developed methods, such as the Impedance-Matched Multi-Axis Test (IMMAT), achieve representative test results by matching the mounting characteristics of the structure in its transport or operational environment and vibrating the structure in multiple directions simultaneously.
An IMMAT can also be optimized by using finite element models (FEMs), which approximate the device to be tested with a discrete number of small volumes whose physics are described by fundamental equations of motion. However, an FEM can only be used if its dynamic characteristics are sufficiently similar to those of the structure undergoing testing. This can only be achieved with precise knowledge of the dynamical properties of the structure, which is not always available. An alternate approach to an FEM is to use a data-driven model. Because data-driven models are made using data from the system they are supposed to describe, the dynamical properties of the device are built into the model, and it is not necessary to approximate them. Continuous Residue Interpolation (CRI) is a recently developed data-driven modeling scheme that approximates a structure's dynamic properties with smooth, continuous functions fitted to measurements of the input-output response dynamics of the device. This thesis presents the performance of data-driven models generated using CRI when used in predictive simulations of an IMMAT. The results show that CRI-derived data-driven models perform similarly to FEMs and make similar predictions for optimal input vibration locations.
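The following is a loose sketch of the spatial-interpolation idea behind CRI: fit a smooth function to modal residues measured at a few locations, then evaluate it at an unmeasured point to approximate the FRF there. The single-mode FRF form, the beam mode shape, and the spline basis are all assumptions made for illustration, not the CRI formulation used in the thesis:

```python
# Loose sketch of spatially interpolating modal residues to predict an FRF
# at an unmeasured location; all modeling choices here are assumptions.
import numpy as np
from scipy.interpolate import CubicSpline

x_meas = np.linspace(0.0, 1.0, 6)      # measurement locations along a unit beam
# First bending mode of a simply supported beam stands in for the measured
# residue magnitudes of a single mode:
residues = np.sin(np.pi * x_meas)

spline = CubicSpline(x_meas, residues)  # smooth, continuous fit to the residues
x_new = 0.37                            # unmeasured location of interest
omega = np.linspace(1.0, 200.0, 400)    # frequency grid [rad/s]
wn, zeta = 100.0, 0.02                  # assumed modal frequency and damping

# Single-mode FRF approximation at the unmeasured point:
H = spline(x_new) / (wn**2 - omega**2 + 2j * zeta * wn * omega)
print("peak |H| at x = 0.37:", np.abs(H).max())
```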
10

Process and Quality Modeling in Cyber Additive Manufacturing Networks with Data Analytics

Wang, Lening 16 August 2021
A cyber manufacturing system (CMS) is a concept derived from the cyber-physical system (CPS), providing adequate data and computation resources to support efficient and optimal decision making. Examples of these decisions include production control, variation reduction, and cost optimization. A CMS integrates physical manufacturing equipment and computation resources via the Industrial Internet, which provides low-cost Internet connections and control capability in manufacturing networks. Traditional quality engineering methodologies, however, typically focus on statistical process control or run-to-run quality control through modeling and optimization of an individual process, which makes them less effective in a CMS with many connected manufacturing systems. In addition, greater personalization in manufacturing generates limited samples for any one product design, material, and specification, which prohibits the use of many effective data-driven modeling methods. Motivated by additive manufacturing (AM), with its potential to manufacture products with a one-of-a-kind design, material, and specification, this dissertation addresses the following three research questions: (1) How can in situ data be used to model multiple similar AM processes connected in a CMS (Chapter 3)? (2) How can the accuracy of low-fidelity first-principles simulations (e.g., finite element analysis, FEA) of personalized AM products be improved in time to validate the product and process designs (Chapter 4)? (3) How can void defects (i.e., unmeasurable quality variables) be predicted from measurable in situ quality variables (Chapter 5)? By answering these three research questions, the proposed methodology will effectively generate in situ process and quality data for modeling multiple connected AM processes in a CMS. Research to quantify the uncertainty of the simulated in situ process data and its impact on overall AM modeling is outside the scope of this work. The proposed methodologies will be validated on fused deposition modeling (FDM) and selective laser melting (SLM) processes. Moreover, by comparing them with corresponding benchmark methods, the merits of the proposed methods are demonstrated in this dissertation. In addition, the proposed methods are developed within a general data-driven framework; therefore, they can potentially be extended to other applications and manufacturing processes. / Doctor of Philosophy / Additive manufacturing (AM) is a promising advanced manufacturing process that can realize personalized products in complex shapes with unprecedented materials. However, many quality issues, such as voids, porosity, and cracking, can restrict the wide deployment of AM in practice. To effectively model and further mitigate these quality issues, the cyber manufacturing system (CMS) is adopted. The CMS provides data acquisition functionality to collect real-time process data that are directly or indirectly related to product quality in AM. Moreover, the CMS provides the computation capability to analyze the AM data and support the decision-making needed to optimize the AM process. However, due to the characteristics of the AM process, there are several challenges in effectively and efficiently modeling AM data. First, there are many one-of-a-kind products in AM, which leads to limited observations for each product with which to estimate an accurate model.
Therefore, in Chapter 3, I discuss how to jointly model personalized products by sharing information among these similar-but-non-identical AM processes with limited observations. Second, for personalized product realization in AM, it is essential to quickly validate the product and process designs before fabrication. Usually, finite element analysis (FEA) is employed to simulate the manufacturing process based on a first-principles model. However, due to its complexity, high-fidelity simulation is very time-consuming and will delay product realization in AM. Therefore, in Chapter 4, I study how to predict the high-fidelity simulation result based on a low-fidelity simulation with fast computation speed but limited capability. Third, the defects in AM are usually inside the product and can be identified from X-ray computed tomography (CT) images after the AM products are built. However, limited by sensor technology, CT images are difficult to obtain for online (i.e., layer-wise) defect detection to mitigate the defects. Therefore, as an alternative, in Chapter 5 I investigate how to predict the CT image based on the optical layer-wise image, which can be obtained during the AM process. The proposed methodologies will be validated on two types of AM processes: fused deposition modeling (FDM) and selective laser melting (SLM).
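As a toy illustration of the Chapter 4 theme, predicting a high-fidelity simulation result from a cheap low-fidelity one, the sketch below regresses invented high-fidelity outputs on the corresponding process parameter and low-fidelity output. The Gaussian-process choice and all functional forms are assumptions for illustration only:

```python
# Toy low-to-high-fidelity predictor: learn the mapping from a process
# parameter and a cheap simulator's output to the expensive simulator's
# output. Data and functional forms are invented for illustration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(7)
x = rng.uniform(0, 1, (40, 1))            # e.g., a normalized laser-power setting
low_fi = np.sin(2 * np.pi * x).ravel()    # cheap, biased low-fidelity output
high_fi = low_fi + 0.3 * x.ravel() - 0.1  # "truth" with a smooth discrepancy

X = np.column_stack([x.ravel(), low_fi])  # features: parameter + low-fi output
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, high_fi)

x_new = np.array([[0.25]])
lf_new = np.sin(2 * np.pi * x_new).ravel()
print("predicted high-fidelity:", gp.predict(np.column_stack([x_new.ravel(), lf_new])))
```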
