1 |
DEVELOPMENT OF DATA-DRIVEN APPROACHES FOR WASTEWATER MODELING
Zhou, Pengxiao January 2023 (has links)
To effectively operate and manage complex wastewater treatment systems, simplified
representations, known as wastewater models, are critical. Wastewater modeling allows for the understanding, monitoring, and prediction of wastewater treatment processes by capturing intricate relationships within the system. Process-driven models (PDMs), which rely on a set of interconnected hypotheses and assumptions, are commonly used to capture the physical, chemical, and biological mechanisms of wastewater treatment. More recently, with the development of advanced algorithms and sensor techniques, data-driven models (DDMs), which find relationships among the system state variables directly from data without relying on explicit knowledge of the system, have emerged as a complementary alternative. However, both PDMs and DDMs have limitations. Uncertainties in PDMs can arise from imprecise calibration of empirical parameters and from natural process variability, while applications of DDMs are restricted to certain objectives by the scarcity of high-quality datasets and by difficulty in capturing changing relationships. Therefore, this dissertation aims to enhance the stable operation and effective management of wastewater treatment plants (WWTPs) by addressing these limitations through three objectives: (1) investigating an efficient data-driven approach for uncertainty analysis of process-driven secondary settling tank models; (2) developing data-driven models that can leverage sparse and imbalanced data to predict the removal of emerging contaminants; and (3) exploring an advanced data-driven model for influent flow rate prediction during the COVID-19 emergency. / Thesis / Doctor of Philosophy (PhD) / Ensuring appropriate treatment and recycling of wastewater is vital to sustaining life. Wastewater treatment plants (WWTPs), whose operation involves several intricate physical, chemical, and biological processes, play a significant role in water recycling.
Due to stricter regulations and complex wastewater composition, the wastewater treatment system has become increasingly complex. It is therefore crucial to use simplified representations of the system, known as wastewater models, to operate and manage it effectively. The aim of this thesis is to develop data-driven approaches for wastewater modeling.
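As a minimal illustration of the data-driven idea described above, namely finding relationships among system state variables directly from data rather than from process equations, the following sketch fits a purely correlational model to synthetic sensor readings. The variables, dimensions, and values are invented for illustration and are not from the dissertation.

```python
import numpy as np

# Synthetic "sensor" data: 200 observations of 3 hypothetical state
# variables (e.g., influent flow, COD, ammonia), and a noisy effluent
# quality indicator generated from them.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([0.8, -0.5, 0.3])
y = X @ true_w + 0.05 * rng.normal(size=200)

# A data-driven model: the input-output relationship is recovered by
# least squares from the data alone, with no process knowledge.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(w, 2))  # recovered weights, close to the generating ones
```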
|
2 |
Data-driven human body morphing
Zhang, Xiao 01 November 2005 (has links)
This thesis presents an efficient and biologically informed 3D human body morphing technique through data-driven alteration of standardized 3D models. The anthropometric data are derived from a large empirical database and processed using principal component analysis (PCA). Although PCA techniques are relatively commonplace in computer graphics, they are mainly used for scientific visualization and animation. Here we focus on uncovering the underlying mathematical structure of anthropometric data and using it to build an intuitive interface that allows interactive manipulation of body shape within the normal range of human variation. We achieve weight- and gender-based body morphing using PCA. First, we calculate the principal vector space of the original data. The data are then transformed into a new orthogonal multidimensional space. Next, we reduce the dimensionality of the data by keeping only the components of the most significant principal vectors. We then fit a curve through the original data points and generate a new human body shape by inversely transforming the data from the principal vector space back to the original measurement space. Finally, we sort the original data by body weight, treating males and females separately. This enables us to use weight and gender as two intuitive controls for body morphing. The Deformer program is implemented in C++ using the OpenGL and FLTK APIs. The 3D human body models are created using Alias Maya.
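The PCA pipeline described above (center, transform into the principal vector space, truncate to the most significant components, inverse-transform back to measurement space) can be sketched as follows. The subject count and the six measurements are placeholders for illustration, not the thesis's anthropometric database.

```python
import numpy as np

# Synthetic "anthropometric" data: 100 subjects, 6 body measurements.
rng = np.random.default_rng(1)
data = rng.normal(size=(100, 6))
mean = data.mean(axis=0)
centered = data - mean

# 1) Principal vector space via SVD of the centered data.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# 2) Transform into the orthogonal principal space, keeping only the
#    k most significant components.
k = 3
scores = centered @ Vt[:k].T  # reduced coordinates per subject

# 3) Inverse-transform back to measurement space: an approximate body
#    shape generated from the truncated representation.
reconstructed = scores @ Vt[:k] + mean
err = np.linalg.norm(data - reconstructed) / np.linalg.norm(data)
```

Editing a subject's coordinates in `scores` before step 3 is the morphing operation: small moves along the leading principal vectors stay within the normal range of variation captured by the data.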
|
3 |
ESL Model of the Hyper-scalar Processor on a Chip
Chen, Po-kai 20 August 2007 (has links)
This paper proposes a scalable chip multiprocessor architecture called Hyper-scalar, which combines the concepts of superscalar and multithreaded architectures; it can therefore enhance single-threaded performance through core grouping while also supporting multithreaded applications. System programmers can dynamically allocate core groups to accelerate a single thread via extended system instructions. To resolve data dependences among all issued instructions, a virtual shared register file is proposed.
This mechanism allows data in local register files to be accessed from other cores through the data-switching-path hardware, and instructions execute only when their operands are available. The instructions within a single-threaded application can be dispatched to a variable number of cores without recompilation, so this execution paradigm accelerates single-threaded performance more flexibly.
For the simulation and experimental framework, an ESL model written in SystemC, a modeling language based on C++, provides a hardware-oriented simulation platform, and the MediaBench suite is selected for the experiments. On average, the Hyper-scalar architecture accelerates single-threaded performance by 30% to 110% using 2 to 8 cores.
|
5 |
Discrete Event Simulation of Operating Rooms Using Data-Driven Modeling
Malik, Mandvi January 2018 (has links)
No description available.
|
6 |
An Examination of Mathematics Teachers’ Use of Student Data in Relationship to Student Academic Performance
Hartmann, Lillian Ann 12 1900 (has links)
Among educational researchers, important questions are being asked about how to improve mathematics instruction for elementary students. This study, conducted in a north Texas public school with 294 third- through fifth-grade students, ten teachers, and three coaches, examined the relationship between students’ achievement in mathematics and the mathematics teaching and coaching instruction they received. Student achievement was measured by the Computer Adaptive Instrument (CAT), which is administered three times a year in the district and is the main criterion for students’ performance/movement in the district’s response-to-intervention program for mathematics. The response-to-intervention model employs student data to guide instruction and learning in the classroom and in supplemental sessions. The theoretical framework of the concerns-based adoption model (CBAM) was the basis for investigating the concerns that mathematics teachers and coaches had in using the CAT student data to inform their instruction. The CAT data, based on item response theory, were the innovation. Unique in this study was the empirical pairing of teachers’ and coaches’ concerns and profiles of data use with student scores. Data were collected at three intervals through the Stages of Concern Questionnaire, the Levels of Use interviews, and the Innovation Configuration Components Matrix, along with student CAT scaled scores at the same three intervals. Multiple regression analyses were conducted between the concerns and CAT scores, and between the levels of use and CAT scores, to determine whether relationships existed among the variables. The findings indicated that, overall, the teachers and coaches who scored high in personal concerns at the three data points remained at low levels of use, or non-use, of CAT data in their instruction. Only two teachers indicated movement from intense personal concerns to high concerns regarding the impact on students.
This correlated with their increased use of CAT data at the three collection points. The regression analyses indicated no correlations between the teachers’ and coaches’ concerns and the CAT, and none between their levels of data use and the CAT. At the exit interviews, patterns suggested that the presence of a change facilitator might have made a difference in their understanding and use of the CAT data, ultimately impacting student achievement. This study sets a new precedent in the use of CBAM data and offers insights into the necessity of providing support and training during a change process.
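The regression step can be illustrated with a small synthetic example: regress an outcome (standing in for CAT scaled scores) on predictor variables (standing in for concern scores). The sample size, predictors, and values are invented, and the near-zero R-squared merely mirrors the kind of null result the study reports.

```python
import numpy as np

# Synthetic stand-ins: 30 observations, 2 predictor variables, and an
# outcome deliberately unrelated to them.
rng = np.random.default_rng(2)
concerns = rng.normal(size=(30, 2))
scores = 5.0 + rng.normal(size=30)  # outcome independent of predictors

# Multiple regression via least squares, with an intercept column.
X = np.column_stack([np.ones(30), concerns])
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)

# R^2 near zero indicates no linear relationship between the variables.
pred = X @ beta
r2 = 1 - np.sum((scores - pred) ** 2) / np.sum((scores - scores.mean()) ** 2)
```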
|
7 |
Data-Driven Modeling and Control of Batch and Continuous Processes using Subspace Methods
Patel, Nikesh January 2022 (has links)
This thesis focuses on subspace-based data-driven modeling and control techniques for batch and continuous processes. Motivated by the increasing amount of process data, data-driven modeling approaches have become more popular; compared with first-principles models, they can better capture true process dynamics. However, data-driven models rely solely on mathematical correlations and are subject to overfitting. As such, applying first-principles-based constraints to the subspace model can lead to better predictions and subsequently better control. This thesis demonstrates that the addition of process gain constraints leads to a more accurate constrained model. It also shows that using the constrained model in a model predictive control (MPC) algorithm allows the system to reach desired setpoints faster. The novel MPC algorithm described in this thesis is specially designed as a quadratic program that includes a feedthrough matrix. This matrix is traditionally ignored in industry; however, this thesis shows that its inclusion leads to more accurate process control.
Given the importance of accurate process data during model identification, the missing-data problem is another area that needs improvement. There are two main missing-data scenarios: infrequent sampling/sensor errors and quality variables. In the infrequent sampling case, data points are missing at set intervals, so correlating between different batches is not possible because the data are missing in the same places everywhere. The quality-variable case is different in that quality measurements require additional expensive tests, making them unavailable for over 90% of the observations at the regular sampling frequency. This thesis presents a novel subspace approach using partial least squares and principal component analysis to identify a subspace model. This algorithm is used to solve each case of missing data in both simulated (polymethyl methacrylate) and industrial (bioreactor) processes with improved performance. / Dissertation / Doctor of Philosophy (PhD) / An important consideration in chemical processes is the maximization of production and product quality. To that end, developing an accurate controller is necessary to avoid wasting resources and producing off-spec products. All advanced process control approaches rely on the accuracy of the process model; therefore, it is important to identify the best model. This thesis presents two novel subspace-based modeling approaches, the first using first-principles-based constraints and the second handling missing data. These models are then applied to a modified state space model with a predictive control strategy to show that the improved models lead to improved control. The approaches in this work are tested on both simulated (polymethyl methacrylate) and industrial (bioreactor) processes.
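A minimal sketch of why the feedthrough matrix matters: in a state-space model x_{k+1} = A x_k + B u_k, y_k = C x_k + D u_k, dropping D removes the direct input-to-output path and biases every predicted output whenever the input acts on the output instantly. The matrices below are illustrative toy values, not from the thesis.

```python
import numpy as np

# Toy state-space model with a nonzero feedthrough term D.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.5]])  # direct input-to-output path

def simulate(x0, u_seq, use_D=True):
    """Roll the model forward, optionally dropping the feedthrough term."""
    x, ys = x0, []
    for u in u_seq:
        y = C @ x + (D @ u if use_D else 0.0)
        ys.append(float(y))
        x = A @ x + B @ u
    return ys

# A constant unit input: the model without D underpredicts the output
# by D*u = 0.5 at every step.
x0 = np.zeros(2)
u_seq = [np.array([1.0])] * 5
with_D = simulate(x0, u_seq, use_D=True)
without_D = simulate(x0, u_seq, use_D=False)
```

In an MPC quadratic program built from such a model, the same constant offset would propagate into every predicted output over the horizon, which is why including D can improve control accuracy.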
|
8 |
MULTI-STREAM DATA-DRIVEN TELEMETRY SYSTEM
Can, Ouyan, Chang-jie, Shi 11 1900 (has links)
International Telemetering Conference Proceedings / November 04-07, 1991 / Riviera Hotel and Convention Center, Las Vegas, Nevada / The Multi-Stream Data-Driven Telemetry System (MSDDTS) is a new-generation system developed in China by Beijing Research Institute of Telemetry (BRIT) for high-bit-rate, multi-stream data acquisition, processing, and display. Features of the MSDDTS include:
- Up to 4 data streams;
- Data-driven architecture;
- Multi-processor for parallel processing;
- Modular, configurable, expandable, and programmable;
- Stand-alone capability;
- External control by a host computer.
This paper addresses three important aspects of the MSDDTS. First, the system architecture is discussed. Second, three basic models of the system configuration are described. Third, the future development of the system is outlined.
|
9 |
The Impact of Data-Driven Decision Making on Educational Practice in Louisiana Schools
Maxie, Dana James 01 January 2012 (has links)
Using data to improve educational practice in schools has become a popular reform strategy that has grown as a result of the No Child Left Behind Act of 2001. Districts and schools across the United States are under a great deal of pressure to collect and analyze data in hopes of identifying student weaknesses and implementing corrective action plans that will improve overall student achievement in the classroom.
Technology tools such as computer-based assessment and reporting systems have provided schools with immediate access to student-level data. The problem is the lack of direction in how to use the information to make instructional changes in the classroom. A review of literature provided an overview of research-based strategies that support data-driven decision making (DDDM) in the classroom. Three case studies in Louisiana were examined to build a conceptual understanding about how districts and schools use data to make informed decisions. Three research questions guided the investigation and focused on the tools used to assess, store, and retrieve student data, evidence that connects the data and improvements in teaching, and recommendations for other districts and schools. Educational practices were documented through a collection of documents, interview/questionnaire data, and physical artifacts.
Results were reported in a question-and-answer format for the three case studies. School administrators reported using data to plan, evaluate, and provide feedback to teachers. In contrast, teachers and instructional specialists revealed that data were used to assess and measure students' weekly performance. All schools utilized at least two computer-based assessment and/or reporting systems to manage student-level data within the district and/or school. Instructional coaches provided direct support to teachers. Data analysis revealed that teachers collaborated and supported each other through data-team meetings and working sessions. Principals and teachers monitored student behavior through the use of data management and reporting tools. Schools showed promising and positive attitudes toward making changes and building a data-driven culture. The findings were supported by current research on DDDM.
|
10 |
DATA DRIVEN WORKFORCE PERFORMANCE PLANNING
Barajas, Christopher 01 June 2019 (has links)
The business of logistics and transportation is increasing in demand and complexity and will continue to do so. As with many businesses in the digital age, large amounts of data are being generated at increasing speed, leading us into the era of big data. A common result is that organizations are left data rich and information poor. At ABC Logistics, and at many other third-party logistics and transportation companies, the question is how to harness the data and create centers of excellence through business intelligence methodologies. This research project describes the steps taken to identify an area where business intelligence and data transformation could be advantageous and how to present the results in a way that benefits the organization as a whole.
Third-party logistics companies such as ABC Logistics operate under a business model in which they do not produce or own any of the product they manage through the supply chain. What they sell is their expertise in logistics services: inbound receipt of product, processing of orders, and outbound shipping to and from the customer. This makes the third-party logistics business very competitive. Competitive advantages are key to success in this type of business, and one underutilized area is measuring and managing labor productivity. Currently, ABC Logistics uses an AS400 system for warehouse management and Kronos for timekeeping. The first problem lies in how to bring all the information together in one location where transactional master data are shared across the organization. The second problem is analysis and decision management, i.e., how to analyze the data and present the information in a human-readable format so that frontline supervisors and middle management can interpret it and take action.
The solution is to create a data warehouse that normalizes the various data sources for timekeeping and warehouse production transactions. To build the data warehouse, an SQL database is used with SQL Server Integration Services to transform the data. With the data in a structured and consistent format, the data are analyzed and the results presented in a human-readable format through business intelligence tools such as Power BI, which allows the creation of custom dashboards. This solution will lead to a better understanding of the operation, increase profit, and give ABC Logistics a competitive advantage over its competitors.
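The core normalization-and-join step (bringing timekeeping and warehouse transactions to the same grain, then deriving a productivity measure) can be sketched in a few lines. The field names and figures below are invented for illustration and do not come from ABC Logistics' AS400 or Kronos systems; the real pipeline performs this work in SQL Server Integration Services.

```python
from collections import defaultdict

# Invented sample records from two source systems.
timekeeping = [
    {"emp": "E01", "date": "2019-05-01", "hours": 8.0},
    {"emp": "E02", "date": "2019-05-01", "hours": 6.0},
]
transactions = [
    {"emp": "E01", "date": "2019-05-01", "units": 120},
    {"emp": "E01", "date": "2019-05-01", "units": 40},
    {"emp": "E02", "date": "2019-05-01", "units": 90},
]

# Aggregate transactions to the same grain as timekeeping (employee + date).
units = defaultdict(int)
for t in transactions:
    units[(t["emp"], t["date"])] += t["units"]

# Join the two sources and derive the measure a supervisor would read.
report = [
    {**row, "units_per_hour": units[(row["emp"], row["date"])] / row["hours"]}
    for row in timekeeping
]
```

A dashboard layer such as Power BI would then present `report` visually; the hard part, as the abstract notes, is getting the shared, consistent grain first.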
|