11. Deep Learning of Unknown Governing Equations

Chen, Zhen January 2021 (has links)
No description available.
12. Data Driven Decision-Making and Principals' Perceptions

McCray, Melissa F 13 December 2014 (has links)
In this era of high-stakes accountability, assessments are not only used as diagnostic tools, but are also used to determine the effectiveness of school programs and personnel. Of the utmost importance is how principals use data to make instructional, intervention, and planning decisions. The purpose of the current study was to determine principals' perceptions regarding the importance, availability, and utility of multiple sources of data in making their decisions, and to determine their self-efficacy in data-driven decision-making practices. This study was guided by 7 research questions and utilized 2 research designs. Descriptive research was used to answer research questions 1 through 6. Questions 1 through 3 sought to determine what data were available, used, and important. Question 4 sought to determine the extent to which principals relied on data to make decisions. Question 5 sought to determine the importance of different types of support for the effective use of data in decision-making. Question 6 sought to determine principals' perceived self-efficacy in terms of effective data use. Question 7 was answered using correlational research to determine if principals' measures of data use self-efficacy were related to student achievement. Overall, results showed that data surrounding student grades, attendance, and discipline were most highly utilized in decision-making. All participating principals indicated that they used data to a moderate or great degree when making decisions regarding developing/revising the school improvement plan; informing parents of students' progress/status/test scores; assigning students to remedial programs; and improving classroom instruction. Data analysis further showed that principals considered school personnel trained in data analysis, sufficient time for data analysis, and staff development in the data analysis process to be extremely important. Further analysis revealed that participating principals had high measures of data use self-efficacy and were highly certain that they could effectively use data. In the final analysis of the study, a Pearson's r correlation coefficient was computed to assess the relationship between principals' self-efficacy scores and student achievement. It was determined that there is no relationship between measures of principals' perceived data use self-efficacy and student achievement. The study concludes with recommendations for future research.
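To make the final correlational step concrete, here is a minimal sketch of computing a Pearson's r between self-efficacy scores and an achievement measure; the values below are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the study's final correlational analysis: a Pearson's r
# between principals' data-use self-efficacy scores and a school achievement
# measure. All values below are hypothetical.
from scipy.stats import pearsonr

# Hypothetical inputs: one self-efficacy score per principal (e.g., a survey
# construct mean) and the matching school achievement index.
self_efficacy = [4.8, 4.4, 5.6, 4.9, 5.1, 4.2, 5.5, 4.7]
achievement = [71.0, 64.5, 69.8, 75.2, 66.1, 70.4, 68.9, 72.3]

r, p_value = pearsonr(self_efficacy, achievement)
print(f"Pearson's r = {r:.3f}, p = {p_value:.3f}")
# A p-value above the chosen alpha (e.g., 0.05) would support the same
# conclusion the study reports: no significant relationship.
```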
13. Air Pollution Modelling and Forecasting in Hamilton Using Data-Driven Methods

Solaiman, Tarana 06 1900 (has links)
The purpose of this research is to provide an extensive evaluation of neural network models for the prediction and simulation of some key air pollutants in Hamilton, Ontario, Canada. Hamilton experiences one of Canada's highest air pollution exposures because of the dual problem of continuing industrial emissions and a gradual increase in traffic-related emissions, along with transboundary air pollution from heavily industrialized neighboring north-eastern and mid-western US cities. These factors, combined with meteorology, cause large degradation of Hamilton's air quality. Hence, an appropriate and robust forecasting method is of utmost importance in order to provide early notification of future air quality conditions. Data-driven methods such as neural networks (NNs) are becoming very popular due to their inherent capability to capture the complex non-linear relationships between pollutants, climatic variables, and other non-climatic variables such as traffic variables, emission factors, etc. This study investigates dynamic neural networks, namely the time-lagged feed-forward neural network (TLFN), the Bayesian neural network (BNN), and the recurrent neural network (RNN), for short-term forecasting. The results are compared with benchmark static multilayer perceptron (MLP) models. The analysis shows that the TLFN model, with its time-delay memory, and the RNN, with its adaptive memory, outperformed the static MLP models in ground-level ozone (O₃) forecasting for up to 12 hours ahead. Furthermore, the model developed using the annual database is able to map the variations in seasonal concentrations. On the other hand, the MLP model was quite competitive for nitrogen dioxide (NO₂) prediction when compared to the dynamic NN-based models. The study further assesses the ability of the neural network models to generate pollutant concentrations at sites where sampling has not been done. Using these neural network models, data values were generated for total suspended particulate (TSP) and inhalable particulate (PM₁₀) concentrations. The obtained results show promising potential. Although there were under-predictions and over-predictions on some occasions, the neural network models in general were able to generate the missing information and characterize the air quality situation in the study area. / Thesis / Master of Applied Science (MASc)
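To make the time-lagged setup concrete, the sketch below embeds an hourly ozone series into 24-hour input windows and trains a small feed-forward network to predict 12 hours ahead. The series, window length, and network size are invented placeholders, not the thesis's data or models.

```python
# Illustrative time-lagged feed-forward setup: embed an hourly O3 series
# into lagged input vectors and train a small MLP to predict 12 hours ahead.
# Data and hyperparameters are assumptions for demonstration only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
ozone = rng.random(2000) * 60  # placeholder hourly O3 series (ppb)

n_lags, horizon = 24, 12  # 24 h of history; forecast 12 h ahead
window_ends = range(n_lags, len(ozone) - horizon)  # exclusive end of each window
X = np.array([ozone[e - n_lags:e] for e in window_ends])
y = np.array([ozone[e + horizon - 1] for e in window_ends])  # 12 h after last input

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:1500], y[:1500])  # simple chronological train split
print("test R^2:", model.score(X[1500:], y[1500:]))
```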
14. Physics-based and data-driven strategies for simulating colloid behavior in fractured aquifer systems

Ahmed, Ahmed January 2019 (has links)
The design of effective quality management strategies in groundwater systems is crucial, as clean water is essential for livelihood, health, and development. Colloids represent a class of contaminants that might be pathogenic or benign. Colloids can also enhance or inhibit the transport of dissolved contaminants in groundwater, which has inspired the use of benign colloids in the remediation of contaminated aquifers. Reliable modelling of colloid behavior is therefore essential for the design of effective remediation strategies, both those employing benign colloids and those aiming at the removal of pathogenic colloids. While colloid transport is controlled by groundwater velocity, colloid retention is governed by the physical and chemical properties of the aquifer together with those of the colloid. The present study aims at enhancing the reliability of modelling colloid behavior in fractured aquifers through: i) developing a synchronization-based framework that can effectively identify hydraulic connections within the aquifer; ii) developing a mathematical model for the relationship between the fraction of colloids retained along a fracture (Fr) and the parameters describing the aquifer’s physical and chemical properties; iii) developing an analytical model for the relationship between Fr and the coefficient describing irreversible colloid deposition in single fractures; and, iv) developing a numerical technique that can efficiently simulate colloid behavior in single fractures and fracture networks under different physical, chemical, and matrix conditions. The performance of the synchronization-based framework, mathematical and analytical models, and the numerical technique was assessed separately for different verification cases, and the corresponding efficacy was confirmed. Coupling the tools developed in the present study enables the reliable prediction of colloid behavior in response to changes in the groundwater-colloid-fracture system’s physical and chemical properties, which can aid in understanding how to manipulate the system’s properties for the effective design of groundwater quality management and remediation strategies. / Thesis / Doctor of Philosophy (PhD) / Microorganisms, microplastics, clay, and fine silt are classified as colloids within the spectrum of contaminants that might exist in groundwater. Although some colloids are benign (e.g., clay and fine silt), they can still affect the groundwater quality and aquifer porosity. Colloids can also enhance or inhibit the migration of other contaminants in groundwater because of their high adsorption capacity. Several remediation strategies are being envisioned to remove pathogenic colloids and eliminate other contaminants adsorbed onto benign colloids, where effective design of such strategies requires reliable models of colloid behavior. The present study aims at enhancing the reliability of simulating colloid behavior in fractured aquifers through: i) developing models that capture the effects of the aquifer’s physical and chemical properties on colloid behavior; and, ii) designing a framework that improves the reliability of aquifer conceptualization. Effective remediation strategies can then be designed for contaminated fractured aquifers based on the developed tools.
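The second and third objectives concern closed-form relationships for the retained fraction Fr. As a hedged illustration only: under a textbook idealization (steady plug flow along a single fracture of length L at velocity v, with first-order irreversible deposition at rate k_irr), the retained fraction takes a simple closed form; this idealization is assumed here and is not the thesis's actual model.

```latex
% Illustrative idealization (assumed, not the thesis model): steady plug
% flow with first-order irreversible deposition along a single fracture.
%   v dC/dx = -k_irr C  =>  C(L) = C_0 exp(-k_irr L / v)
\[
  F_r = 1 - \frac{C(L)}{C_0}
      = 1 - \exp\!\left(-\frac{k_{\mathrm{irr}}\, L}{v}\right)
\]
% F_r: fraction of colloids retained along the fracture; L: fracture
% length; v: groundwater velocity; k_irr: irreversible deposition rate.
```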
15. Predictive Simulations of the Impedance-Matched Multi-Axis Test Method Using Data-Driven Modeling

Moreno, Kevin Joel 02 October 2020 (has links)
Environmental testing is essential to certify that systems can withstand the harsh dynamic loads they may experience in their service environment or during transport. For example, satellites are subjected to large vibration and acoustic loads when transported into orbit and need to be certified with tests that are representative of the anticipated loads. However, traditional certification testing specifications can consist of sequential uniaxial vibration tests, which have been found to severely over- and under-test systems needing certification. The recently developed Impedance-Matched Multi-Axis Test (IMMAT) has been shown in the literature to improve upon traditional environmental testing practices through the use of multi-input multi-output testing and impedance matching. Additionally, with the use of numerical models, predictive simulations can be performed to determine optimal testing parameters. Developing an accurate numerical model, however, requires precise knowledge of the system's dynamic characteristics, such as boundary conditions or material properties. These characteristics are not always available and would also require additional testing for verification. Furthermore, some systems may be extremely difficult to model using numerical methods, either because they contain millions of finite elements requiring impractical time scales to simulate, or because they were fabricated before the mainstream use of computer-aided drafting and finite element analysis but are still in service. An alternative to numerical modeling is data-driven modeling, which does not require knowledge of a system's dynamic characteristics. The Continuous Residue Interpolation (CRI) method has recently been developed as a novel approach for building data-driven models of dynamical systems. CRI builds data-driven models by fitting smooth, continuous basis functions to a subset of frequency response function (FRF) measurements from a dynamical system. The resulting fitted basis functions can be sampled at any geometric location to approximate the expected FRF at that location. The research presented in this thesis explores the use of CRI-derived data-driven models in predictive simulations for the IMMAT performed on an Euler-Bernoulli beam. The results of the simulations reveal that CRI-derived data-driven models of an Euler-Bernoulli beam achieve similar performance when compared to a finite element model and make similar decisions when selecting the excitation locations in an IMMAT. / Master of Science / In the field of vibration testing, environmental tests are used to ensure that critical devices or structures can withstand harsh vibration environments. For example, satellites experience harsh vibrations and damaging acoustics that are transferred from their rocket transport vehicle. Traditional environmental tests would require that the satellite be placed on a vibration table and sequentially vibrated in multiple orientations for a specified duration and intensity. However, these traditional environmental tests do not always produce vibrations that are representative of the anticipated transport or operational environment. Newly developed methods, such as the Impedance-Matched Multi-Axis Test (IMMAT), achieve representative test results by matching the mounting characteristics of the structure in its transport or operational environment and by vibrating the structure in multiple directions simultaneously.
An IMMAT can also be optimized by using finite element models (FEMs), which approximate the device to be tested with a discrete number of small volumes whose physics are described by fundamental equations of motion. However, an FEM can only be used if its dynamic characteristics are sufficiently similar to those of the structure undergoing testing. This can only be achieved with precise knowledge of the dynamical properties of the structure, which is not always available. An alternative approach to an FEM is to use a data-driven model. Because data-driven models are made using data from the system they are supposed to describe, the dynamical properties of the device are built into the model, and it is not necessary to approximate them. Continuous Residue Interpolation (CRI) is a recently developed data-driven modeling scheme that approximates a structure's dynamic properties with smooth, continuous functions updated with measurements of the input-output response dynamics of the device. This thesis presents the performance of data-driven models generated using CRI when used in predictive simulations of an IMMAT. The results show that CRI-derived data-driven models perform similarly to FEMs and make similar predictions for optimal input vibration locations.
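To make the residue-interpolation idea concrete, here is a minimal sketch: modal residues measured at a few beam locations are fitted with a smooth spline, which is then sampled at an unmeasured location to reconstruct an approximate single-mode FRF. The modal parameters and locations are invented; this is not the thesis's CRI implementation.

```python
# Sketch of residue interpolation for FRF prediction: fit a smooth curve to
# modal residues measured at a few locations on a beam, then sample it at an
# unmeasured location. All modal parameters below are invented.
import numpy as np
from scipy.interpolate import CubicSpline

x_meas = np.array([0.1, 0.3, 0.5, 0.7, 0.9])  # measurement locations (m)
# Invented residues of one mode at those points (real-valued for brevity)
residues = np.sin(np.pi * x_meas)             # mode-shape-like values

spline = CubicSpline(x_meas, residues)        # smooth, continuous basis fit

def frf(omega, x, wn=2 * np.pi * 40.0, zeta=0.02):
    """Single-mode FRF H(omega, x) using the interpolated residue at x."""
    return spline(x) / (wn**2 - omega**2 + 2j * zeta * wn * omega)

omega = np.linspace(1.0, 2 * np.pi * 80.0, 500)
H_unmeasured = frf(omega, 0.42)               # approximate FRF at an unmeasured point
print(np.abs(H_unmeasured).max())
```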
16. Process and Quality Modeling in Cyber Additive Manufacturing Networks with Data Analytics

Wang, Lening 16 August 2021 (has links)
A cyber manufacturing system (CMS) is a concept derived from the cyber-physical system (CPS), providing adequate data and computation resources to support efficient and optimal decision making. Examples of these decisions include production control, variation reduction, and cost optimization. A CMS integrates the physical manufacturing equipment and computation resources via the Industrial Internet, which provides low-cost Internet connections and control capability in the manufacturing networks. Traditional quality engineering methodologies, however, typically focus on statistical process control or run-to-run quality control through modeling and optimization of an individual process, which makes them less effective in a CMS with many manufacturing systems connected. In addition, increasing personalization in manufacturing generates limited samples for the same kind of product designs, materials, and specifications, which prohibits the use of many effective data-driven modeling methods. Motivated by Additive Manufacturing (AM), with its potential to manufacture products with a one-of-a-kind design, material, and specification, this dissertation addresses the following three research questions: (1) How can in situ data be used to model multiple similar AM processes connected in a CMS (Chapter 3)? (2) How can the accuracy of low-fidelity first-principle simulations (e.g., finite element analysis, FEA) be improved in time to validate the product and process designs of personalized AM products (Chapter 4)? (3) How can void defects (i.e., unmeasurable quality variables) be predicted based on the in situ quality variables (Chapter 5)? By answering these three research questions, the proposed methodology will effectively generate in situ process and quality data for modeling multiple connected AM processes in a CMS. Research to quantify the uncertainty of the simulated in situ process data and their impact on the overall AM modeling is out of the scope of this work. The proposed methodologies are validated on fused deposition modeling (FDM) processes and selective laser melting (SLM) processes. Moreover, by comparison with the corresponding benchmark methods, the merits of the proposed methods are demonstrated in this dissertation. In addition, the proposed methods are inherently developed within a general data-driven framework; therefore, they can also potentially be extended to other applications and manufacturing processes. / Doctor of Philosophy / Additive manufacturing (AM) is a promising advanced manufacturing process that can realize personalized products in complex shapes with unprecedented materials. However, there are many quality issues that can restrict the wide deployment of AM in practice, such as voids, porosity, cracking, etc. To effectively model and further mitigate these quality issues, the cyber manufacturing system (CMS) is adopted. The CMS provides data acquisition functionality to collect real-time process data that are directly or indirectly related to product quality in AM. Moreover, the CMS provides the computation capability to analyze the AM data and support decision-making to optimize the AM process. However, due to the characteristics of the AM process, there are several challenges in effectively and efficiently modeling the AM data. First, there are many one-of-a-kind products in AM, which leads to limited observations for each product, too few to support estimating an accurate model.
Therefore, in Chapter 3, I discuss how to jointly model personalized products by sharing information among similar-but-non-identical AM processes with limited observations. Second, for personalized product realization in AM, it is essential to quickly validate the product and process designs before fabrication. Usually, finite element analysis (FEA) is employed to simulate the manufacturing process based on a first-principle model. However, due to its complexity, high-fidelity simulation is very time-consuming and delays product realization in AM. Therefore, in Chapter 4, I study how to predict the high-fidelity simulation result from a low-fidelity simulation that has fast computation speed but limited capability. Third, AM defects usually lie inside the product and can be identified from X-ray computed tomography (CT) images after an AM product is built. However, limited by sensor technology, CT images are difficult to obtain for online (i.e., layer-wise) defect detection to mitigate defects. Therefore, as an alternative, in Chapter 5 I investigate how to predict the CT image based on the optical layer-wise image, which can be obtained during the AM process. The proposed methodologies are validated on two types of AM processes: fused deposition modeling (FDM) processes and selective laser melting (SLM) processes.
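As a loose illustration of the low-to-high-fidelity prediction problem in Chapter 4, the sketch below fits the common discrepancy form y_high ≈ ρ·y_low + δ(x) with a plain linear regressor. The toy simulators, the linear form, and the regressor choice are assumptions for demonstration, not the dissertation's method.

```python
# Illustrative multi-fidelity regression: predict high-fidelity simulation
# outputs from cheap low-fidelity runs via y_high ~ rho * y_low + delta(x).
# All data and the model form are assumptions for demonstration only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, (80, 2))               # process parameters
y_low = np.sin(3 * x[:, 0]) + 0.5 * x[:, 1]  # fast, coarse simulator
y_high = 1.1 * y_low + 0.2 * x[:, 0] ** 2    # slow, accurate simulator

# Features: low-fidelity output plus the process parameters, so the model
# can learn both the scale factor rho and the discrepancy delta(x).
features = np.column_stack([y_low, x])
model = LinearRegression().fit(features[:60], y_high[:60])
print("held-out R^2:", model.score(features[60:], y_high[60:]))
```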
17. A Systematic Examination of Data-Driven Decision-making within a School Division: The Relationships among Principal Beliefs, School Characteristics, and Accreditation Status

Teigen, Beth 23 November 2009 (has links)
This non-experimental, census survey included the elementary, middle, and high school principals at the comprehensive schools within a large, suburban school division in Virginia. The focus of this study was the factors that influence building administrators in using data to make instructional decisions. The purpose was to discover whether there is a difference in the perceptions of elementary, middle, and high school principals of data use to make instructional decisions within their buildings. McLeod's (2006) Statewide Data-Driven Readiness Study: Principal Survey was used to assess the principals' beliefs about the data-driven readiness of their individual schools. Each principal indicated the degree to which they agreed or disagreed with statements about acting upon data, data support systems, and the data school culture. Twenty-two items aligned with four constructs identified by White (2008) in her study of elementary school principals in Florida. These four constructs, or factors, were used to determine if there were significant differences in principal beliefs concerning teacher use of data to improve student achievement, principal beliefs regarding a data-driven culture within their building, the existence of systems for supporting data-driven decision-making, and collaboration among teachers to make data-driven decisions. For each of the survey items, a majority of the responses (≥62%) were in agreement with the statements, indicating the principals agreed slightly, moderately, or strongly that data-driven decision-making by teachers to improve student achievement was occurring within the building, that a data-driven culture and data support systems exist, and that teachers are collaborating and using data to make decisions. Multiple analyses of variance showed significant differences in the means. Some of these differences in means were based on the principals' assignment levels. While both groups responded positively to the statement that collaboration among teachers to make data-driven decisions was occurring, the elementary principals agreed more strongly than the high school principals. When mediating variables were examined, significance was found in principals' beliefs concerning teacher use of data to improve student achievement depending on years of experience as a principal. Principals with six or more years of experience had a mean response for Construct 1 of 4.84, while those with five or fewer years of experience had a mean of 4.38, suggesting that on average those principals with more experience had a stronger belief that teachers are using data to improve student achievement. There is a significant difference between the means of principals with three or fewer years versus those with more than three years in their current assignment on two of the constructs: a data-driven culture and collaboration among teachers. Principals with less time in their current position report slightly higher agreement than their more experienced colleagues with statements about the data-driven culture within their school. Principals' beliefs about teacher use of data to improve student achievement and their beliefs regarding collaboration among teachers using data-driven decision-making also differed significantly with the school's AYP status for 2008-2009. Principals assigned to schools that had made AYP for 2008-2009 moderately agreed that teachers were collaborating to make data-driven decisions.
In comparison, principals assigned to schools that had not made AYP only slightly agreed that this level of collaboration was occurring in their schools.
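As an illustration of the analyses of variance reported above, the sketch below compares mean construct scores across two experience groups with a one-way ANOVA; the scores are fabricated placeholders, not the study's data.

```python
# Illustrative one-way ANOVA: do mean construct scores differ between
# principals grouped by years of experience? Scores below are fabricated.
from scipy.stats import f_oneway

# Hypothetical Construct 1 scores (agreement scale) by experience group
five_or_fewer_years = [4.1, 4.5, 4.3, 4.6, 4.2, 4.5]
six_or_more_years = [4.9, 4.7, 5.0, 4.8, 4.6, 5.1]

f_stat, p_value = f_oneway(five_or_fewer_years, six_or_more_years)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# With two groups this is equivalent to an independent-samples t-test
# (F = t^2); a small p suggests mean beliefs differ by experience.
```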
18. Linguistique de corpus et didactique des langues et des cultures étrangères : étude comparée français-russe / Corpus linguistics and foreign language and culture teaching: French-Russian comparative study

Da Silva Akborisova, Elena 09 December 2014 (has links)
This thesis aims to contribute to the DDL (Data-Driven Learning) approach to the teaching of vocabulary in French as a foreign language. In the DDL approach, corpora are used to teach different components of a language. Vocabulary, one of a learner's most immediate needs because it gives access to communication in the foreign language, is the subject of much current research in linguistics and language teaching. Idiomaticity, an inherent feature of all languages, manifests itself in a variety of expressions; it belongs to the field of lexicology, and more specifically to phraseology. Corpus linguistics makes it possible to observe this language phenomenon within a structure/sense framework. Idiomatic expressions in general, and collocations in particular, are at the heart of the teaching approach described in this thesis. Light verb constructions remain a major source of errors even at advanced levels of learning. The teaching material presented in this study seeks to promote learners' direct exploitation of bilingual corpora in the classroom in order to identify these collocations in L1 and L2, to understand them, to find equivalents, and to use them appropriately. A comparative French-Russian approach, reinforced by the observation of concordance lines drawn from authentic corpora, should allow better acquisition of the targeted language features. This work is set within a perspective of deductive learning and learner autonomy.
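To give a flavor of the classroom corpus searches the thesis advocates, here is a minimal keyword-in-context (KWIC) concordancer. The toy French sentences are invented; a real DDL activity would query a large bilingual corpus instead.

```python
# Minimal keyword-in-context (KWIC) concordancer of the kind used in
# data-driven learning activities. The toy corpus below is invented; a
# classroom task would query a large (bilingual) corpus instead.
def kwic(corpus, keyword, width=30):
    """Print aligned concordance lines for every occurrence of keyword."""
    text = " ".join(corpus.split())  # normalize whitespace
    lower = text.lower()
    start = lower.find(keyword.lower())
    while start != -1:
        left = text[max(0, start - width):start]
        right = text[start + len(keyword):start + len(keyword) + width]
        print(f"{left:>{width}} [{keyword}] {right}")
        start = lower.find(keyword.lower(), start + 1)

toy_corpus = ("Elle prend une décision difficile. Il faut prendre une "
              "décision rapidement. Nous avons pris une décision hier.")
kwic(toy_corpus, "décision")
```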
19. Seven Attempts at Magic: A Digital Portfolio Dissertation of Seven Interactive, Electroacoustic Compositions for Data-driven Instruments

Joslin, Steven 06 1900 (has links)
The seven compositions that comprise this dissertation are represented by the following files: a text file (pdf), seven video performances (mp4), and corresponding zipped files of custom software and affiliated files (various file types). / This Digital Portfolio Dissertation presents seven compositions, together with text documents that explain the synthesis techniques, data mapping and routing, visual elements, and the software used; all software needed to reproduce these works; and a video recording of all seven compositions. The unifying thread in my seven works is magic. The sense of magic in a live performance is the connection between artist and audience that lies beyond the immediate understanding of any work. I use this insight to create a new world inspired by sound and visuals. I perform each of these works by combining my understanding of data-driven instruments and my experience as a classically trained musician. The combination of sound design, visual composition, and a sense of magic allows me to realize these seven works. My goal is to contribute to the extensive library of electroacoustic works through my performance of my music with data-driven instruments.
20. Combining Big Data And Traditional Business Intelligence – A Framework For A Hybrid Data-Driven Decision Support System

Dotye, Lungisa January 2021 (has links)
Since the emergence of big data, traditional business intelligence systems have been unable to meet most of the information demands of many data-driven organisations. Nowadays, big data analytics is perceived to be the solution to the challenges related to the information processing of big data and the decision-making of most data-driven organisations. Irrespective of the promised benefits of big data, organisations find it difficult to prove and realise the value of the investment required to develop and maintain big data analytics. The reality of big data is more complex than many organisations' perceptions of it. Most organisations have failed to implement big data analytics successfully, and some organisations that have implemented these systems are struggling to attain the promised value of big data. Organisations have realised that it is impractical to migrate the entire traditional business intelligence (BI) system into big data analytics, and that there is a need to integrate the two types of systems. Therefore, the purpose of this study was to investigate a framework for creating a hybrid data-driven decision support system that combines components from traditional business intelligence and big data analytics systems. The study employed an interpretive qualitative research methodology to investigate research participants' understanding of the concepts related to big data, data-driven organisations, business intelligence, and other data analytics perceptions. Semi-structured interviews were held to collect research data, and thematic analysis was used to interpret the participants' responses in light of their background knowledge and experience. The application of organisational information processing theory (OIPT) and the fit-viability model (FVM) guided the interpretation of the study outcomes and the development of the proposed framework. The findings of the study suggest that data-driven organisations collect data from different sources and process these data to transform them into information that serves as the basis of all their business decisions. The roles of executive and senior management in the adoption of a data-driven decision-making culture are key to the success of the organisation. BI and big data analytics are tools and software systems used to assist a data-driven organisation in transforming data into information and knowledge. The challenges that organisations experience when trying to integrate BI and big data analytics guided the development of the framework for creating a hybrid data-driven decision support system. The framework comprises the following elements: business motivation, information requirements, supporting mechanisms, data attributes, supporting processes, and the hybrid data-driven decision support system architecture. The proposed framework is intended to assist data-driven organisations in assessing the components of both business intelligence and big data analytics systems and making a case-by-case decision on which components can satisfy the specific data requirements of the organisation. The study thereby contributes to the existing literature on integrating business intelligence and big data analytics systems. / Dissertation (MIT (Information Systems))--University of Pretoria, 2021. / Informatics / MIT (Information Systems) / Unrestricted
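As a purely hypothetical illustration of the case-by-case component choice the framework supports, the sketch below routes an information requirement to a traditional BI stack or a big data pipeline based on simple data attributes; the thresholds and component names are invented, not drawn from the study.

```python
# Hypothetical dispatcher illustrating the framework's case-by-case choice
# between traditional BI and big data components, driven by data attributes.
# Thresholds and component names are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Requirement:
    volume_gb: float   # approximate data volume
    structured: bool   # relational/tabular vs. semi- or unstructured
    realtime: bool     # needs streaming-level latency

def route(req: Requirement) -> str:
    """Pick a processing component for one information requirement."""
    if req.realtime or not req.structured or req.volume_gb > 1000:
        return "big-data pipeline (e.g., data lake + stream processing)"
    return "traditional BI (e.g., data warehouse + OLAP reporting)"

print(route(Requirement(volume_gb=50, structured=True, realtime=False)))
print(route(Requirement(volume_gb=5000, structured=False, realtime=True)))
```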
