291

Data-driven Supply Chain Monitoring and Optimization

Wang, Jing January 2022 (has links)
In the era of Industry 4.0, conventional supply chains are being transformed into digital supply chains through the wide application of digital technologies such as big data, cloud computing, and the Internet of Things. A digital supply chain is an intelligent, value-driven process with superior features such as speed, flexibility, transparency, and real-time inventory monitoring and management. This concept is further included in the framework of Supply Chain 4.0, which emphasizes the connection between the supply chain and Industry 4.0. In this context, data analytics for supply chain management presents a promising research opportunity. This thesis investigates the use of data analytics in supply chain decision-making, including modelling, monitoring, and optimization. First, this thesis investigates supply chain monitoring (SCMo) using data analytics. The goal of SCMo is to raise an alarm when abnormal supply chain events occur and to identify their potential causes. We propose a framework for SCMo based on a data-driven method, principal component analysis (PCA). Within this framework, supply chain data such as inventory levels and customer demand are collected, and the normal operating conditions of a supply chain are characterized using PCA. Fault detection and diagnosis are implemented by examining the monitoring statistics and variable contributions. A supply chain simulation model is developed to carry out the case studies. The results show that dynamic PCA (DPCA) successfully detects abnormal behaviour of the supply chain, such as transportation delays, low production rates, and supply shortages. Moreover, the contribution plot is shown to be effective in interpreting the abnormality and identifying the fault-related variables. This approach of applying data-driven methods to SCMo is referred to as data-driven SCMo in this work.
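As a hedged, minimal sketch of the PCA-based monitoring idea (two invented variables, an empirical control limit from training data, and a closed-form 2x2 eigendecomposition standing in for the thesis's full DPCA machinery):

```python
import math
import random

def pca_2d(data):
    """Fit a two-variable PCA: mean, eigenvalues of the sample covariance
    matrix (closed form for the 2x2 case), plus entries for eigenvectors."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxx = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in data) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    gap = math.sqrt(max(tr ** 2 / 4 - det, 0.0))
    return (mx, my), (tr / 2 + gap, tr / 2 - gap), sxy, sxx

def t2_statistic(point, model):
    """Hotelling's T^2 of one observation in the PCA coordinate system."""
    (mx, my), (l1, l2), sxy, sxx = model
    dx, dy = point[0] - mx, point[1] - my
    v = (sxy, l1 - sxx)                      # eigenvector for l1
    norm = math.hypot(*v) or 1.0
    v1 = (v[0] / norm, v[1] / norm)
    v2 = (-v1[1], v1[0])                     # orthogonal second direction
    s1 = dx * v1[0] + dy * v1[1]
    s2 = dx * v2[0] + dy * v2[1]
    return s1 ** 2 / max(l1, 1e-12) + s2 ** 2 / max(l2, 1e-12)

random.seed(0)
normal = []
for _ in range(200):                  # normal operation: inventory tracks demand
    demand = random.gauss(50, 5)
    normal.append((100 + demand + random.gauss(0, 1), demand))

model = pca_2d(normal)
limit = max(t2_statistic(p, model) for p in normal)   # empirical control limit

fault = (180, 20)   # abnormal event: stock piles up while demand collapses
print(t2_statistic(fault, model) > limit)             # the fault raises an alarm
```

In a full DPCA setting the data matrix would also include time-lagged copies of each variable; the alarm logic is the same.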
Then, a further investigation of data-driven SCMo based on another statistical process monitoring method, canonical variate analysis (CVA), is conducted. CVA utilizes the state-space model of a system and determines the canonical states by maximizing the correlation between the combination of past system outputs and inputs and the combination of future outputs. A state-space model of a supply chain is developed, which forms the basis of applying CVA to detect supply chain faults. The performances of CVA and PCA are assessed and compared in terms of dimensionality reduction, false alarm rate, missed detection rate, and detection delay. Case studies show that CVA identifies a smaller system order than PCA and achieves comparable performance to PCA in a lower-dimensional latent space. Next, we investigate data-driven supply chain control under uncertainty with risk taken into account. The method under investigation is reinforcement learning (RL). Within the RL framework, an agent learns an optimal policy that maps states to actions during the process of interacting with a non-deterministic environment, such that a numerical reward is maximized. The current literature on supply chain control focuses on conventional RL, which maximizes the expected return. However, this may not be the best option for risk-averse decision makers. In this work, we explore the use of safe RL, which takes the concept of risk into account in the learning process. Two safe RL algorithms, Q-hat-learning and Beta-pessimistic Q-learning, are investigated. Case studies are carried out based on the supply chain simulator developed using agent-based modelling. Results show that Q-learning has the best performance under normal scenarios, while the safe RL algorithms perform better under abnormal scenarios and are more robust to changes in the environment. Moreover, we find that the benefits of safe RL are more pronounced in a closed-loop supply chain.
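Of the two safe algorithms named above, Beta-pessimistic Q-learning has a compact update rule. The toy inventory environment below (capacity, costs, demand distribution, and beta all invented for illustration, not the thesis's agent-based simulator) sketches how the pessimistic target blends best-case and worst-case next-state values:

```python
import random

random.seed(1)
CAP, ACTIONS = 10, (0, 2, 4)          # inventory capacity, order quantities
BETA, ALPHA, GAMMA, EPS = 0.3, 0.1, 0.95, 0.2

def step(inv, order):
    """One period: receive the order, serve random demand, collect reward."""
    demand = random.choice((1, 2, 3))
    inv = min(CAP, inv + order)
    sold = min(inv, demand)
    inv -= sold
    # margin on sales, holding cost on leftovers, penalty for lost sales
    reward = 5 * sold - inv - 8 * (demand - sold)
    return inv, reward

def train(episodes=2000):
    Q = {(s, a): 0.0 for s in range(CAP + 1) for a in ACTIONS}
    for _ in range(episodes):
        inv = CAP // 2
        for _ in range(30):
            a = random.choice(ACTIONS) if random.random() < EPS else \
                max(ACTIONS, key=lambda x: Q[(inv, x)])
            nxt, r = step(inv, a)
            best = max(Q[(nxt, b)] for b in ACTIONS)
            worst = min(Q[(nxt, b)] for b in ACTIONS)
            # Pessimistic target: blend best- and worst-case next values;
            # BETA = 0 recovers conventional Q-learning
            target = r + GAMMA * ((1 - BETA) * best + BETA * worst)
            Q[(inv, a)] += ALPHA * (target - Q[(inv, a)])
            inv = nxt
    return Q

Q = train()
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(CAP + 1)}
print(policy[0])   # with empty stock the learned policy orders a positive amount
```

Setting `BETA` between 0 and 1 trades expected return for robustness to bad next-state outcomes, which is the risk-aware behaviour the abstract contrasts with conventional Q-learning.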
Finally, we investigate real-time supply chain optimization. The operational optimization problems for supply chains of realistic size are often large and complex, and solving them in real time can be challenging. This work addresses the problem by using a deep learning-based model predictive control (MPC) technique. The MPC problem for supply chain operation is formulated based on the state-space model of a supply chain, and the optimal state-input pairs are precomputed in the offline phase. Then, a deep neural network is built to map the state to the input, which is used in the online phase to reduce solution time. We propose an approach to implement the deep learning-based MPC method when there are delayed terms in the system, and a heuristic approach to feasibility recovery for mixed-integer MPC with binary decision variables taken into account. Case studies show that, compared with solving the nominal MPC problem online, deep learning-based MPC can provide near-optimal solutions at a lower computational cost. / Thesis / Doctor of Philosophy (PhD)
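The offline/online split described here can be sketched in miniature. In this hedged example the supply chain is a one-product inventory balance, brute-force search over a small discrete input set plays the role of the MPC solver, and a linear least-squares fit stands in for the deep neural network; all numbers are invented:

```python
import itertools

H, ORDERS, TARGET, DEMAND = 3, range(9), 5, 3   # horizon, inputs, setpoint, demand

def mpc_input(state):
    """Solve the horizon-H problem by brute force; return the first input."""
    best_first, best_cost = 0, float("inf")
    for plan in itertools.product(ORDERS, repeat=H):
        inv, cost = state, 0.0
        for u in plan:
            inv = inv + u - DEMAND                   # inventory balance equation
            cost += (inv - TARGET) ** 2 + 0.1 * u    # tracking + ordering cost
        if cost < best_cost:
            best_first, best_cost = plan[0], cost
    return best_first

# Offline phase: precompute state-input pairs, then fit u ~ w * s + b
states = list(range(9))
inputs = [mpc_input(s) for s in states]
n = len(states)
sm, um = sum(states) / n, sum(inputs) / n
w = sum((s - sm) * (u - um) for s, u in zip(states, inputs)) / \
    sum((s - sm) ** 2 for s in states)
b = um - w * sm

def online_input(state):
    """Online phase: evaluate the surrogate instead of re-optimizing."""
    return min(max(round(w * state + b), min(ORDERS)), max(ORDERS))

print(online_input(2), mpc_input(2))   # the surrogate reproduces the MPC input
```

The point of the split is that `online_input` costs one function evaluation, while `mpc_input` re-solves an optimization whose size grows with the horizon and the number of decision variables.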
292

PurdueThesis_XuejunZhao

Xuejun Zhao (14187179) 29 November 2022 (has links)
This study examines data-driven contract design in the small data regime and the large data regime, respectively, and the implications of contract pricing in the pharmaceutical supply chain.
293

Impact of Academic and Nonacademic Support Structures On Third Grade Reading Achievement

Peugeot, Megan Aline 17 July 2017 (has links)
No description available.
294

Designing Applications for Smart Cities: A designerly approach to data analytics

Bücker, Dennis January 2017 (has links)
The purpose of this thesis is to investigate the effects of a designerly approach to data analytics. The research was conducted during the Interaction Design Master's programme at Malmö University in 2017 and follows a research-through-design approach in which the material-driven design process itself becomes a way to acquire new knowledge. The thesis uses big data as a design material with which designers can ideate connected products and services in the context of smart city applications. More specifically, it conducts a series of material studies that show the potential of this new perspective on data analytics. As a result of this research, a set of designs and exercises is presented and structured into a guide. Furthermore, the results emphasize the need for this type of research and highlight data as a departure material of special interest for HCI.
295

Conditions and Practices of Data-Driven Innovation in Swedish Healthcare

Shenouda, Ramy, Herz, Stefan January 2022 (has links)
This study explores the conditions and practices for data-driven innovation in the Swedish healthcare sector. While innovation nowadays is often based on large amounts of data, the laws governing data handling in healthcare are restrictive. Nevertheless, remarkable innovation is happening in Swedish healthcare anyway. We therefore explored the legal situation and talked to different actors to examine how they deal with these circumstances and how they innovate. We related the findings to the literature on data-driven innovation, platforms, artificial intelligence, and the healthcare sector context. It turned out that legislation is only one of the barriers to innovation; organizational and structural factors also play a large role. Furthermore, we point out the actors' strategic responses and their use of artificial intelligence. Our contributions are a mapping of the Swedish healthcare landscape with a focus on data-driven innovation and an application of Huang et al.'s (2017) model of rapid platform scaling to this context. Moreover, we point out how the healthcare sector differs from commercial business and how that difference is reflected in innovation practices. Finally, we show which barriers need to be removed in order to improve the conditions for data-driven innovation.
296

Bottleneck Identification using Data Analytics to Increase Production Capacity

Ganss, Thorsten Peter January 2021 (has links)
This thesis work develops an automated, data-driven bottleneck detection procedure based on real-world data. Following a seven-step process, it is possible to determine both the average and the shifting bottleneck by automatically applying the active period method. A detailed explanation of how to pre-process the extracted data is presented, which serves as a guideline for other analysts to customize the available code according to their needs. The obtained results show a deviation between the expected bottleneck and the bottleneck calculated from production data collected during one week of full production. The expected bottleneck is currently determined by the case company by measuring cycle times physically at the machine, but this procedure does not capture the whole picture of the production line and is therefore recommended to be replaced by the developed automated analysis. Based on the analysis results, different optimization potentials are elaborated to improve data quality as well as the overall production capacity of the investigated production line. In particular, the installed gantry systems need further analysis to decrease their impact on overall capacity. Regarding data quality, the improvement of the machine data itself and the standardization of timestamps should be prioritized to enable better analyses in the future. Finally, it is recommended to run the analysis several times with new data sets to validate the results and improve the overall understanding of the production line's behaviour.
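The active period method referred to above can be illustrated with a small sketch (machine names and per-minute state traces are invented): the machine with the longest average uninterrupted active period is flagged as the average bottleneck.

```python
def active_periods(states):
    """Lengths of consecutive 'active' runs in a machine's state trace."""
    runs, current = [], 0
    for s in states:
        if s == "active":
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    return runs

def average_bottleneck(traces):
    """Machine whose mean active-period length is largest."""
    def mean_run(trace):
        runs = active_periods(trace)
        return sum(runs) / len(runs) if runs else 0.0
    return max(traces, key=lambda name: mean_run(traces[name]))

traces = {
    "milling":  ["active"] * 6 + ["idle"] * 2 + ["active"] * 5 + ["idle"],
    "gantry":   ["active"] * 9 + ["idle"] + ["active"] * 10,
    "assembly": ["active", "idle"] * 7,
}
print(average_bottleneck(traces))   # → gantry
```

The shifting bottleneck extends this by tracking, at each instant, which machine currently has the longest ongoing active period; the pre-processing work described in the thesis is about turning raw machine logs into clean state traces like these.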
297

Stochastic distribution tracking control for stochastic non-linear systems via probability density function vectorisation

Liu, Y., Zhang, Qichun, Yue, H. 08 February 2022 (has links)
Yes / This paper presents a new control strategy for stochastic distribution shape tracking regarding non-Gaussian stochastic non-linear systems. The objective can be summarised as adjusting the probability density function (PDF) of the system output to any given desired distribution. In order to achieve this objective, the system output PDF has first been formulated analytically, which is time-variant. Then, PDF vectorisation has been implemented to simplify the model description. Using the vector-based representation, system identification and control design have been performed to achieve the PDF tracking. In practice, the PDF evolution is difficult to implement in real time, thus a data-driven extension has also been discussed in this paper, where the vector-based model can be obtained using kernel density estimation (KDE) with the real-time data. Furthermore, the stability of the presented control design has been analysed and validated by a numerical example. As an extension, multi-output stochastic systems have also been discussed for joint PDF tracking using the proposed algorithm, and perspectives on advanced controllers have been discussed. The main contributions of this paper are: (1) a new sampling-based PDF transformation to reduce the modelling complexity, (2) a data-driven approach for online implementation without model pre-training, and (3) a feasible framework to integrate the existing control methods. / This paper is partly supported by National Science Foundation of China under Grants (61603262 and 62073226), Liaoning Province Natural Science Joint Foundation in Key Areas (2019- KF-03-08), Natural Science Foundation of Liaoning Province (20180550418), Liaoning BaiQianWan Talents Program, i5 Intelligent Manufacturing Institute Fund of Shenyang Institute of Technology (i5201701), Central Government Guides Local Science and Technology Development Funds of Liaoning Province (2021JH6/10500137).
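A hedged sketch of the data-driven step mentioned above: estimate the output PDF from samples via Gaussian kernel density estimation (KDE) and "vectorise" it by evaluating on a fixed grid. The bandwidth rule and the synthetic N(0, 1) data are illustrative assumptions, not the paper's setup.

```python
import math
import random

def kde_vector(samples, grid, bandwidth):
    """Gaussian KDE evaluated at each grid point, giving a PDF vector."""
    n = len(samples)

    def kernel(u):
        return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

    return [sum(kernel((g - x) / bandwidth) for x in samples) / (n * bandwidth)
            for g in grid]

random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(500)]
grid = [i / 10 for i in range(-40, 41)]      # evaluation grid on [-4, 4]
h = 1.06 * len(samples) ** -0.2              # Silverman-style rule, sigma ~ 1
pdf = kde_vector(samples, grid, h)

area = sum(pdf) * 0.1                        # Riemann sum of the PDF vector
print(round(area, 2))                        # close to 1, as a density should be
```

The resulting vector `pdf` is the finite-dimensional object the vector-based model works with; as new real-time samples arrive, the vector is simply re-estimated, which is what makes the extension model-free.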
298

School Practices and Student Achievement

Atkins, Rosa Stocks 08 December 2008 (has links)
After implementing a statewide standardized testing program in 1998, the Virginia Department of Education realized that some schools were making great gains in student achievement while other schools continued to struggle. The Department conducted a study to identify the practices used by schools showing improvement, and six effective practice domains were identified. The current study was a follow-up to the research conducted by the Virginia Department of Education. A questionnaire measuring the six effective practice domains ((a) curriculum alignment, (b) time and scheduling, (c) use of data, (d) professional development, (e) school culture, and (f) leadership) was administered to teachers in 148 schools in Virginia; 80 schools participated. Two questions guided the study: (1) how frequently do schools use the Virginia Department of Education effective practices, and (2) what is the relationship between the use of the effective practices and school pass rates on the 3rd grade 2005 Standards of Learning (SOL) reading test? Descriptive statistics, linear regression, and discriminant function analysis were applied to explore the relationships between the predictor variables (percentage of students receiving free or reduced-price lunch and the use of the effective practices) and the criterion variable (school pass rate on the 2005 SOL 3rd grade reading test). Academic culture and the percentage of students receiving free or reduced-price lunch accounted for significant amounts of the variance in school pass rates. The remaining five effective practice measures were not related to school pass rates. The measurement approach may have affected the results: in most cases, one person served as the proxy for the school, and this person may have provided a biased assessment of what was happening in the school. / Ed. D.
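The regression step described here can be illustrated with a minimal ordinary-least-squares sketch; the school data below are invented for illustration and are not the study's:

```python
schools = [  # (percent free/reduced-price lunch, 3rd-grade reading pass rate)
    (10, 92), (25, 88), (40, 81), (55, 76), (70, 70), (85, 63),
]

n = len(schools)
mx = sum(x for x, _ in schools) / n
my = sum(y for _, y in schools) / n
slope = sum((x - mx) * (y - my) for x, y in schools) / \
        sum((x - mx) ** 2 for x, _ in schools)
intercept = my - slope * mx

# R^2: share of pass-rate variance explained by the predictor
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in schools)
ss_tot = sum((y - my) ** 2 for _, y in schools)
r2 = 1 - ss_res / ss_tot

print(round(slope, 3), round(r2, 3))   # negative slope: higher poverty, lower pass rate
```

The study's actual analysis used multiple predictors at once (the six practice domains plus the lunch percentage), but each coefficient is interpreted the same way: the change in pass rate per unit change in the predictor, holding the others fixed.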
299

Numerical Analysis for Data-Driven Reduced Order Model Closures

Koc, Birgul 05 May 2021 (has links)
This dissertation addresses both theoretical and numerical aspects of reduced order models (ROMs). In an under-resolved regime, the classical Galerkin reduced order model (G-ROM) fails to yield accurate approximations. Thus, we propose a new ROM, the data-driven variational multiscale ROM (DD-VMS-ROM), built by adding a closure term to the G-ROM with the aim of increasing the numerical accuracy of the ROM approximation without decreasing computational efficiency. The closure term is constructed within the variational multiscale framework. To model the closure term, we use data-driven modeling: using the available data, we find ROM operators that approximate the closure term. To demonstrate the closure term's effect on the ROMs, we numerically compare the DD-VMS-ROM with other standard ROMs. In numerical experiments, we show that the DD-VMS-ROM is significantly more accurate than the standard ROMs. Furthermore, to understand the closure term's physical role, we present a theoretical and numerical investigation of its role in long-time integration. We theoretically prove and numerically show that, under long-time averaging, the closure term exchanges energy from the most energetic modes to the least energetic modes. One of the contributions of this dissertation is the numerical analysis of the data-driven closure model, which has not been studied before. At both the theoretical and numerical levels, we investigate what conditions guarantee that a small difference between the data-driven closure model and the full order model (FOM) closure term implies that the approximated solution is close to the FOM solution. In other words, we perform theoretical and numerical investigations to show that the data-driven model is verifiable. Apart from studying the ROM closure problem, we also investigate the setting in which the G-ROM converges optimally.
We explore the ROM error bounds' optimality by considering the difference quotients (DQs). We theoretically prove and numerically illustrate that both the ROM projection error and the ROM error are suboptimal without the DQs, and optimal if the DQs are used. / Doctor of Philosophy / In many realistic applications, obtaining an accurate approximation to a given problem can require a tremendous number of degrees of freedom. Solving these large systems of equations can take days or even weeks on standard computational platforms. Thus, lower-dimensional models, i.e., reduced order models (ROMs), are often used instead. The ROMs are computationally efficient and accurate when the underlying system has dominant and recurrent spatial structures. Our contribution to reduced order modeling is adding a data-driven correction term, which carries important information and yields better ROM approximations. This dissertation's theoretical and numerical results show that the new ROM equipped with a closure term yields more accurate approximations than the standard ROM.
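In common reduced-order-modelling notation (an illustrative assumption, not the dissertation's exact formulation), the closure-equipped ROM adds a correction term to the Galerkin system for the reduced coefficients $a(t)$:

```latex
% G-ROM: standard Galerkin projection onto r modes
\dot{a}_i = \sum_{j=1}^{r} A_{ij}\, a_j + \sum_{j,k=1}^{r} B_{ijk}\, a_j a_k,
\qquad i = 1, \dots, r.
% DD-VMS-ROM: the same operators plus a data-driven closure term
\dot{a}_i = \sum_{j=1}^{r} A_{ij}\, a_j + \sum_{j,k=1}^{r} B_{ijk}\, a_j a_k
          + \tau_i(a).
```

Here $A$ and $B$ are the projected linear and quadratic operators and $\tau(a)$ is the closure term fitted from projected full-order data, which is the "data-driven modeling" step the abstract describes.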
300

Dissertation_XiaoquanGao.pdf

Xiaoquan Gao (12049385) 04 December 2024 (has links)
Public sector services often face challenges in allocating limited resources effectively. Despite their fundamental importance to societal welfare, these systems often operate without sufficient analytical support, and their decision-making processes remain understudied in academic literature. While data-driven analytical approaches offer promising solutions for addressing complex tradeoffs and resource constraints, the unique characteristics of public systems create significant challenges for modeling and developing efficient solutions. This dissertation addresses these challenges by applying stochastic models to enhance decision-making in two critical areas: emergency medical services in healthcare and jail diversion in the criminal justice system. The first part focuses on integrating drones into emergency medical services to shorten response times and improve patient outcomes. We develop a Markov Decision Process (MDP) model to address the coordination between aerial and ground vehicles, accounting for uncertain travel times and bystander availability. To solve this complex problem, we develop a tractable approximate policy iteration algorithm that approximates the value function through neural networks, with basis functions tailored to the spatial and temporal characteristics of the EMS system. Case studies using historical data from Indiana provide valuable insights for managing real-time EMS logistics. Our results show that drone augmentation can reduce response times by over 30% compared to traditional ambulances. This research provides practical guidelines for implementing drone-assisted emergency medical services while contributing to the literature on hybrid delivery systems. The second part develops data-driven analytical tools to improve placement decisions in jail diversion programs, balancing public safety and individual rehabilitation. Community corrections programs offer promising alternatives to incarceration but face their own resource constraints.
We develop an MDP model that captures the complex tradeoffs between individual recidivism risks and the impacts of overcrowding. Our model extends beyond traditional queueing problems by incorporating criminal justice-specific features, including deterministic service times and convex occupancy-dependent costs. To overcome the theoretical challenges, we develop a novel unified approach that combines system coupling with policy deviation bounds to analyze value functions, ultimately establishing superconvexity. This theoretical foundation enables us to develop an efficient algorithm based on time-scale separation, providing practical tools for optimizing diversion decisions. A case study based on real data from our community partner shows that our approach can reduce recidivism rates by 28% compared to current practices. Beyond academic impact, this research has been used by community partners to secure program funding for future staffing.
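As a hedged illustration of the MDP machinery used in this dissertation (not its actual model), here is a toy value iteration over program occupancy, with an invented convex occupancy-dependent diversion cost and a flat jail cost:

```python
CAP, GAMMA = 5, 0.95
ACTIONS = ("divert", "jail")

def cost(occ, action):
    """Convex occupancy-dependent cost for diversion; flat cost for jail."""
    return 0.5 * occ ** 2 if action == "divert" else 10.0

def next_state(occ, action):
    """Diversion fills one slot (capped); one participant completes per period."""
    if action == "divert":
        occ = min(CAP, occ + 1)
    return max(0, occ - 1)

def value_iteration(iters=500):
    V = [0.0] * (CAP + 1)
    for _ in range(iters):
        # Bellman update: minimize immediate cost plus discounted future cost
        V = [min(cost(s, a) + GAMMA * V[next_state(s, a)] for a in ACTIONS)
             for s in range(CAP + 1)]
    policy = [min(ACTIONS, key=lambda a: cost(s, a) + GAMMA * V[next_state(s, a)])
              for s in range(CAP + 1)]
    return V, policy

V, policy = value_iteration()
print(policy[0], policy[CAP])   # divert when the program is empty, jail when full
```

The dissertation's contribution is precisely that this brute-force approach does not scale: establishing structural properties of `V` (such as superconvexity) is what justifies faster, structured algorithms for realistic state spaces.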
