281 |
Datadrivet beslutsfattande i sjukvården : en studie av hur fenomenet datadrivet beslutsfattande uppfattas inom hälso- och sjukvård / Data-driven decision-making in healthcare : a study of how the phenomenon of data-driven decision-making is perceived in healthcare. Mikkonen, Rebecka; Winther, Erik. January 2022.
I dagens samhälle har digitalisering blivit en stor del av vår vardag. Med global digitalisering kommer förändringar i hur organisationer och företag fungerar; detta inkluderar även hälso- och sjukvården. En viktig aspekt av digitaliseringen är mängden data som den genererar. En viktig aspekt för framgång under de senaste åren har varit hur organisationer använder data till sin egen fördel. Användningen av data har ökat dramatiskt på global skala och vi kan nu se fördelarna med dataanalys, inte bara i organisationen som helhet utan också i användningen av den utvunna datan för beslutsfattande. Denna studie syftar till att klargöra hur datadrivet beslutsfattande uppfattas av anställda inom hälso- och sjukvården, samt hur de uppfattar användande, möjligheter, begränsningar och risker med att fatta datadrivna beslut. Denna studie är skriven på svenska och har utförts med en kvalitativ metod. En liten-n-studie har genomförts där enskilda intervjuer med fyra respondenter utförts. Respondenterna upplever användandet av datadrivet beslutsfattande som något positivt och ser framtida möjligheter för datadrivet beslutsfattande inom hälso- och sjukvården. De identifierar risker och begränsningar med att använda denna typ av beslutsfattande, men trots detta anser samtliga respondenter att fördelarna överväger de risker och begränsningar som detta medför. / In today's day and age, digitalisation has become a big part of our society. With global digitalisation come changes in how organizations and companies function; this also includes healthcare. An important aspect of digitalisation is the amount of data it generates. An important aspect of success in recent years has been how organizations use data to their own advantage.
The use of data has increased dramatically on a global scale, and we can now see the benefits of data analysis not only in the organization as a whole but also in using the extracted data for decision-making. This study aims to clarify how data-driven decision-making is perceived by healthcare employees, and how they perceive the use, opportunities, limitations and risks of making data-driven decisions. The study is written in Swedish and was performed using a qualitative method. A small-n study was conducted in which individual interviews with four respondents were carried out. The respondents perceive the use of data-driven decision-making as something positive and see future opportunities for data-driven decision-making in the healthcare sector. They identify risks and limitations with this type of decision-making, but despite this, all respondents consider the advantages to outweigh the risks and limitations it entails.
282 |
CONCORDANCE-BASED FEEDBACK FOR L2 WRITING IN AN ONLINE ENVIRONMENT. Parise, Peter (ORCID 0009-0006-4628-0185). 08 1900.
Data-driven learning is a sub-discipline of corpus linguistics that makes use of the analyses and tools of corpus linguistics in foreign and second language classrooms (Johns, 1991; Johns & King, 1991). With this approach, learners become researchers rather than passive recipients of language rules (Johns, 1991). This study was an investigation of the impact of this approach as a form of written corrective feedback for in-service teachers of English participating in an online writing course at a teacher training institute in Japan. Data-driven learning is commonly utilized in conventional, face-to-face classrooms or computer lab settings in which there is close direction from the instructor on how to interpret the output of a corpus query. The purpose of this study was to investigate how data-driven learning can be implemented in a blended online environment by providing training to develop the participants' corpus competence (Charles, 2011; Flowerdew, 2010), which is defined as the ability to interpret data obtained from querying a corpus. This competence has been associated with becoming familiar with corpus methods, which include interpreting concordances, and in turn can aid in accurately repairing writing errors. This training, while initially presented in a face-to-face session at the beginning of the course, was sustained with support from resources on the course's Moodle website and my comments in Microsoft Word documents. In addition, I applied a fine-grained approach to the analysis of the data to examine the quality of participants' interpretation of concordances. The mixed method triangulation convergence design (Creswell & Plano Clark, 2007, 2011) used in this study was based on data from four sources to examine the effectiveness of data-driven learning in an online environment as well as to observe how the participants interpreted concordances.
One data set involved an analysis of the participants' responses, in drafts of their own writing, to concordance-based feedback. The participants were given a prefabricated concordance, which was a concordance I generated. That concordance was attached to an error in the participants' document, and the participants used the information provided by the concordance to repair their writing error. The resulting data set, which contains the concordance along with before-and-after comparisons of the writers' repairs, shows how the participants' interpretations of concordances aided the repairs. With the evidence of several trials over the course of four writing assignments, it was possible to see how the participants used the supplied concordance to repair their writing errors, which in turn revealed their degree of corpus competence. A second data set, obtained from think-aloud protocols from select participants, was utilized to reveal how they interpreted the concordance during an error-repair task. These data revealed what kinds of thought processes and noticing occurred in this task. A third piece of evidence was derived from data obtained from the Moodle website via log files and other resources such as online documents and training quizzes. The purpose was to document which resources the participants accessed relating to data-driven learning training, to investigate whether those resources aided in their development of corpus competence. The fourth piece of evidence was a quiz developed online to compare the participants with a standard set of items. The quiz was used to investigate which participants successfully or unsuccessfully interpreted the concordances. This instrument, which was analyzed with the Rasch model, allowed for further comparison of the participants' skill in interpreting concordances.
These four data sources were triangulated and in the final analysis cross-referenced to examine how data-driven learning can be successfully applied in a blended online learning environment and how the training of corpus competence aided the learners in interpreting the concordances. / Teaching & Learning
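As a reading aid, a concordance of the kind described in this abstract (a keyword-in-context, or KWIC, display) can be sketched in a few lines. The mini-corpus and search word below are invented for illustration; this is not the tool used in the study.

```python
import re

def concordance(corpus: str, keyword: str, width: int = 30):
    """Return KWIC (keyword-in-context) lines for every match of `keyword`."""
    lines = []
    for m in re.finditer(rf"\b{re.escape(keyword)}\b", corpus, flags=re.IGNORECASE):
        left = corpus[max(0, m.start() - width):m.start()].replace("\n", " ")
        right = corpus[m.end():m.end() + width].replace("\n", " ")
        # Align the keyword in a center column, as in a concordancer display
        lines.append(f"{left:>{width}} | {m.group(0)} | {right:<{width}}")
    return lines

# Invented mini-corpus for illustration
corpus = ("The committee depends on the report. "
          "Success depends on careful planning. "
          "Everything depends on the data.")
for line in concordance(corpus, "depends"):
    print(line)
```

Each output line shows the keyword aligned between its left and right contexts, which is the format learners interpret when repairing errors.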
283 |
Closure Modeling for Accelerated Multiscale Evolution of a 1-Dimensional Turbulence Model. Dhingra, Mrigank. 10 July 2023.
Accelerating the simulation of turbulence to stationarity is a critical challenge in various engineering applications. This study presents an innovative equation-free multiscale approach combined with a machine learning technique to address this challenge in the context of the one-dimensional stochastic Burgers' equation, a widely used toy model for turbulence. We employ an encoder-decoder recurrent neural network to perform super-resolution reconstruction of the velocity field from lower-dimensional energy spectrum data, enabling seamless transitions between fine and coarse levels of description. The proposed multiscale-machine learning framework significantly accelerates the computation of the statistically stationary turbulent Burgers' velocity field, achieving up to 442 times faster wall clock time compared to direct numerical simulation, while maintaining three-digit accuracy in the velocity field. Our findings demonstrate the potential of integrating equation-free multiscale methods with machine learning methods to efficiently simulate stochastic partial differential equations and highlight the possibility of using this approach to simulate stochastic systems in other engineering domains. / Master of Science / In many practical engineering problems, simulating turbulence can be computationally expensive and time-consuming. This research explores an innovative method to accelerate these simulations using a combination of equation-free multiscale techniques and deep learning. Multiscale methods allow researchers to simulate the behavior of a system at a coarser scale, even when the specific equations describing its evolution are only available for a finer scale. This can be particularly helpful when there is a notable difference in the time scales between the coarser and finer scales of a system. The equation-free multiscale method of "coarse projective integration" can then be used to speed up simulations of the system's evolution.
Turbulence is an ideal candidate for this approach since it can be argued that it evolves to a statistically steady state on two different time scales. Over the course of evolution, the shape of the energy spectrum (the coarse scale) changes slowly, while the velocity field (the fine scale) fluctuates rapidly. However, applying this multiscale framework to turbulence simulations has been challenging due to the lack of a method for reconstructing the velocity field from the lower-dimensional energy spectrum data. This is necessary for moving between the two levels of description in the multiscale simulation framework. In this study, we tackled this challenge by employing a deep neural network model called an encoder-decoder sequence-to-sequence architecture. The model was used to capture and learn the conversions between the structure of the velocity field and the energy spectrum for the one-dimensional stochastic Burgers' equation, a simplified model of turbulence. By combining multiscale techniques with deep learning, we were able to achieve a much faster and more efficient simulation of the turbulent Burgers' velocity field. The findings of this study demonstrated that this novel approach could recover the final steady-state turbulent Burgers' velocity field up to 442 times faster than the traditional direct numerical simulations, while maintaining a high level of accuracy. This breakthrough has the potential to significantly improve the efficiency of turbulence simulations in a variety of engineering applications, making it easier to study and understand these complex phenomena.
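The two-time-scale procedure described above, coarse projective integration, can be sketched on a toy system. Everything here is a stand-in: the fine-scale simulator is a simple relaxation model rather than the Burgers' equation, the coarse variable is a mean field rather than the energy spectrum, and the lifting step uses random perturbations where the thesis uses a trained encoder-decoder network.

```python
import numpy as np

rng = np.random.default_rng(0)
EPS = 1e-2  # fast relaxation time scale of the fine dynamics

def fine_step(u, dt=1e-3):
    """Stand-in fine-scale simulator: fast modes relax to a slowly decaying mean."""
    m = u.mean()
    return u + dt * (-(u - m) / EPS - 0.1 * m)

def restrict(u):
    """Fine -> coarse. The thesis uses the energy spectrum; here, the mean field."""
    return u.mean()

def lift(m, n=50):
    """Coarse -> fine. The thesis uses a learned encoder-decoder network here."""
    return m + 0.01 * rng.standard_normal(n)

def cpi(m0, n_outer=20, n_fine=50, dt=1e-3, leap=0.5):
    """Coarse projective integration: short fine bursts, then large coarse leaps."""
    m = m0
    for _ in range(n_outer):
        u = lift(m)
        for _ in range(n_fine):              # short burst lets fast modes heal
            u = fine_step(u, dt)
        m_new = restrict(u)
        dmdt = (m_new - m) / (n_fine * dt)   # estimated coarse time derivative
        m = m_new + leap * dmdt              # projective leap over time `leap`
    return m

m_final = cpi(1.0)
print(f"coarse variable after accelerated evolution: {m_final:.3f}")
```

Each outer iteration covers a coarse time of `leap` plus the burst length while only simulating the fine scale during the burst, which is where the speed-up comes from.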
284 |
[pt] APLICAÇÃO DE TÉCNICAS DE REDES NEURAIS PARA A MELHORIA DA MODELAGEM DA TURBULÊNCIA, UTILIZANDO DADOS EXPERIMENTAIS / [en] APPLICATION OF NEURAL NETWORK TECHNIQUES TO ENHANCE TURBULENCE MODELING USING EXPERIMENTAL DATA. LEONARDO SOARES FERNANDES. 12 March 2024.
[pt] Apesar dos recentes avanços tecnológicos e do surgimento de computadores extremamente rápidos, a simulação numérica direta de escoamentos turbulentos ainda é proibitivamente cara para a maioria das aplicações de engenharia e até mesmo para algumas aplicações de pesquisa. As simulações utilizadas são, no geral, baseadas em grandezas médias e altamente dependentes de modelos de turbulência. Apesar de amplamente utilizados, tais modelos não conseguem prever adequadamente o escoamento médio em muitas aplicações, como o escoamento em um duto quadrado. Com o reflorescimento do Aprendizado de Máquina nos últimos anos, muita atenção está sendo dada ao uso de tais técnicas para substituir os modelos tradicionais de turbulência. Este trabalho estudou o uso de Redes Neurais como alternativa para aprimorar a simulação de escoamentos turbulentos. Para isso, a técnica PIV-Estereoscópico foi aplicada ao escoamento em um duto quadrado para obter dados experimentais de estatísticas do escoamento e campos médios de velocidade de 10 casos com diferentes números de Reynolds. Um total de 10 metodologias foram avaliadas para entender quais grandezas devem ser previstas por um algoritmo de aprendizado de máquina para obter simulações aprimoradas. A partir das metodologias selecionadas, excelentes resultados foram obtidos com uma Rede Neural treinada a partir dos dados experimentais para prever o termo perpendicular do Tensor de Reynolds e a viscosidade turbulenta. As simulações turbulentas auxiliadas pela Rede Neural retornaram campos de velocidade com menos de 4 por cento de erro, em comparação com os dados medidos. /
[en] Despite the technological advances that led to the development of fast computers, the direct numerical simulation of turbulent flows is still prohibitively expensive for most engineering and even some research applications. The CFD simulations used worldwide are, therefore, based on averaged quantities and heavily dependent on mathematical turbulence models. Despite being widely used, such models fail to properly predict the averaged flow in many practical situations, such as the simple flow in a square duct. With the re-blossoming of machine learning methods in recent years, much attention is being given to the use of such techniques as a replacement for traditional turbulence models. The present work evaluated the use of Neural Networks as an alternative to enhance the simulation of turbulent flows. To this end, the Stereoscopic-PIV technique was used to obtain well-converged flow statistics and velocity fields for the flow in a square duct for 10 values of Reynolds number. A total of 10 methodologies were evaluated in a data-driven approach to understand which quantities should be predicted by a machine learning technique to yield enhanced simulations. From the selected methodologies, accurate results were obtained with a Neural Network trained on the experimental data to predict the nonlinear part of the Reynolds Stress Tensor and the turbulent eddy viscosity. The turbulent simulations assisted by the Neural Network returned velocity fields with less than 4 percent error compared with those previously measured.
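The central step described in this abstract, training a neural network on measured data to predict the turbulent eddy viscosity, can be sketched as follows. The dataset and the quadratic strain-viscosity relation are invented stand-ins for the PIV measurements, and the one-hidden-layer network is far smaller than anything used in practice.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the PIV dataset: mean strain-rate magnitude S as
# input, "measured" eddy viscosity nu_t as target (invented relation)
S = rng.uniform(0.0, 2.0, size=(200, 1))
nu_t = 0.1 * S**2 + 0.01 * rng.standard_normal((200, 1))

# One-hidden-layer MLP trained with plain full-batch gradient descent
W1 = rng.standard_normal((1, 16)) * 0.5
b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.5
b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(S @ W1 + b1)              # hidden layer activations
    pred = h @ W2 + b2                    # predicted eddy viscosity
    err = pred - nu_t
    # Backpropagation (constant factors folded into the learning rate)
    gW2 = h.T @ err / len(S); gb2 = err.mean(0)
    gh = err @ W2.T * (1 - h**2)
    gW1 = S.T @ gh / len(S); gb1 = gh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(S @ W1 + b1) @ W2 + b2 - nu_t) ** 2))
print(f"training MSE: {mse:.5f}")
```

The trained network plays the role of a data-driven closure: its predicted eddy viscosity would be fed back into the averaged-flow solver in place of a conventional turbulence model.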
285 |
Smart Maintenance : tillämpning inom svensk tillverkningsindustri / Smart Maintenance : application in Swedish manufacturing. Afaneh, Lara; Ulambayar, Unubold. January 2022.
Tillverkningsindustrin blir alltmer digitaliserad och nya digitala verktyg implementeras inom företagen. Som följd av detta pågår en förändring av arbetssätt. Smart Maintenance är det senaste begreppet i hur underhåll borde utföras inom tillverkningsanläggningar med hjälp av digital teknik. Detta begrepp syftar på ett arbetssätt som ämnar möjliggöra en resurseffektivare produktion och underhållsverksamhet, ur såväl organisatoriskt som tekniskt perspektiv. I detta examensarbete genomfördes intervjuer med företag, vilket utgjorde den centrala undersökningsmetoden för att förstå hur den svenska tillverkningsindustrin ser på Smart Maintenance (SM), hur de tolkar begreppet, om de har tillämpat det samt om de tillämpat aspekter eller dimensioner från SM i sin underhållsverksamhet. En intervju med en forskare genomfördes för att utöka projektgruppens kompetens kring begreppet och dess påverkan på lönsamhet, hållbarhet och konkurrenskraft. Med information från intervjuerna och en litteraturstudie som grund drogs slutsatser kring vilka de främsta fördelarna och utmaningarna är i utövandet av Smart Maintenance, samt deras samband med hållbarhet. Dessutom resulterade projektet i slutsatser kring hur företagen tolkar begreppet och hur data kan användas för investeringsplaner inom de intervjuade företagen. / The manufacturing industry is becoming increasingly digital and new digital tools are being implemented within companies. As a result, there is a change in working methods. Smart Maintenance is the latest concept in how maintenance should be performed in manufacturing facilities using digital technology. This concept refers to a way of working that aims to enable more resource-efficient production and maintenance operations, from both an organizational and a technical perspective.
In this thesis, interviews were conducted with companies, which constituted the central research method for understanding how the Swedish manufacturing industry views Smart Maintenance (SM), how they interpret the concept, and whether they have applied it, or aspects and dimensions of it, in their maintenance operations. An interview with a researcher was conducted to expand the project group's knowledge of the concept and its impact on profitability, sustainability and competitiveness. Based on information from the interviews and a literature study, conclusions were drawn about what the main benefits and challenges are in the practice of Smart Maintenance, as well as their connection with sustainability. In addition, the project resulted in conclusions about how the companies interpret the concept and how data can be used in order to make better decisions within the interviewed companies.
286 |
Data-Driven Models for Infrastructure Climate-Induced Deterioration Prediction. Elleathy, Yasser. January 2021.
Infrastructure deterioration has been attributed to insufficient maintenance budgets, lacking restoration strategies, deficient deterioration prediction techniques, and changing climatic conditions. Considering that the latter adds more challenges to the former, there has been a growing demand to develop and implement climate-informed infrastructure asset management strategies. However, quantifying the impact of spatiotemporally varying climate metrics on infrastructure systems poses a serious challenge due to the associated complexities and relevant modelling uncertainties. As such, in lieu of complex physics-based simulations, the current study proposes a glass-box data-driven framework for predicting infrastructure climate-induced deterioration rates. The framework harnesses evolutionary computing, and specifically multigene genetic programming, to develop closed-form expressions that link infrastructure characteristics to relevant spatiotemporal climate indices and predict infrastructure deterioration rates. The framework consists of four steps: 1) data collection and preparation; 2) input integration; 3) feature selection; and 4) model development and result interpretation. To numerically demonstrate its utility, the proposed framework was applied to develop deterioration rate expressions for two different classes of concrete and steel bridges in Ontario, Canada. The developed predictive models reproduced the observed deterioration rates of both bridge classes with coefficient of determination (R2) values of 0.912 and 0.924 for the training subsets and 0.817 and 0.909 for the testing subsets of the concrete and steel bridges, respectively. Owing to its generic nature, the framework can be applied to other infrastructure systems with available historical deterioration data to devise relevant, effective asset management strategies and infrastructure restoration standards under future climate scenarios. / Thesis / Master of Applied Science (MASc)
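The core of step 4 above, multigene genetic programming, combines several small evolved expression trees ("genes") linearly, with the weights fitted by least squares. The sketch below is heavily simplified: the predictors and target relation are invented, and random restarts stand in for a full evolutionary loop with crossover and mutation.

```python
import random
import numpy as np

rng = np.random.default_rng(2)
random.seed(2)

# Synthetic stand-in for bridge records: three normalized predictors, e.g.
# age, a freeze-thaw index, and traffic load (the real framework uses
# spatiotemporal climate indices and infrastructure characteristics)
X = rng.uniform(0.0, 1.0, size=(150, 3))
# Invented "true" deterioration-rate relation the search tries to recover
y = 0.6 * X[:, 0] * X[:, 1] + 0.3 * X[:, 2] ** 2 + 0.02 * rng.standard_normal(150)

def random_gene():
    """One 'gene': a tiny expression tree such as mul(x0, x1) or x2^2."""
    if random.random() < 0.5:
        name, f = random.choice([("mul", lambda a, b: a * b),
                                 ("add", lambda a, b: a + b)])
        i, j = random.randrange(3), random.randrange(3)
        return f"{name}(x{i}, x{j})", f(X[:, i], X[:, j])
    i = random.randrange(3)
    return f"x{i}^2", X[:, i] ** 2

def fitness(genes):
    """Multigene step: combine gene outputs by least squares; score with R^2."""
    Amat = np.column_stack([np.ones(len(X))] + [g[1] for g in genes])
    w, *_ = np.linalg.lstsq(Amat, y, rcond=None)
    resid = y - Amat @ w
    return 1.0 - resid.var() / y.var(), w

# Crude search loop: keep the best multigene individual found
best_r2, best = -np.inf, None
for _ in range(300):
    genes = [random_gene() for _ in range(3)]
    r2, w = fitness(genes)
    if r2 > best_r2:
        best_r2, best = r2, (genes, w)

expr = " + ".join(g[0] for g in best[0])
print(f"best closed-form basis: {expr}  (R^2 = {best_r2:.3f})")
```

The least-squares weighting is what makes the result a readable, "glass-box" closed-form expression rather than an opaque model.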
287 |
Data-driven Supply Chain Monitoring and Optimization. Wang, Jing. January 2022.
In the era of Industry 4.0, conventional supply chains are undergoing a transformation into digital supply chains with the wide application of digital technologies such as big data, cloud computing, and Internet of Things. A digital supply chain is an intelligent and value-driven process that has superior features such as speed, flexibility, transparency, and real-time inventory monitoring and management. This concept is further included in the framework of Supply Chain 4.0, which emphasizes the connection between supply chain and Industry 4.0. In this context, data analytics for supply chain management presents a promising research opportunity. This thesis aims to investigate the use of data analytics in supply chain decision-making, including modelling, monitoring, and optimization.
First, this thesis investigates supply chain monitoring (SCMo) using data analytics. The goal of SCMo is to raise an alarm when abnormal supply chain events occur and identify the potential reason. We propose a framework of SCMo based on a data-driven method, principal component analysis (PCA). Within this framework, supply chain data such as inventory levels and customer demand are collected, and the normal operating conditions of a supply chain are characterized using PCA. Fault detection and diagnosis are implemented by examining the monitoring statistics and variable contributions. A supply chain simulation model is developed to carry out the case studies. The results show that dynamic PCA (DPCA) successfully detected abnormal behaviour of the supply chain, such as transportation delay, low production rate, and supply shortage. Moreover, the contribution plot is shown to be effective in interpreting the abnormality and identifying the fault-related variables. This use of data-driven methods for SCMo is referred to as data-driven SCMo in this work.
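The PCA-based monitoring scheme outlined above can be sketched as follows. The six-variable "normal operation" dataset, the number of retained components, and the simulated fault are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for normal-operation supply chain data: six correlated
# variables (e.g. inventory levels and demand at several echelons)
normal = rng.standard_normal((500, 6)) @ rng.standard_normal((6, 6))

# Characterize normal operating conditions with PCA
mu, sd = normal.mean(0), normal.std(0)
Z = (normal - mu) / sd
_, s, Vt = np.linalg.svd(Z, full_matrices=False)
k = 3                                    # number of retained principal components
P = Vt[:k].T                             # loading matrix
lam = s[:k] ** 2 / (len(Z) - 1)          # variances of the retained PCs

def t2(x):
    """Hotelling's T^2 monitoring statistic for a single observation."""
    t = P.T @ ((x - mu) / sd)            # score vector
    return float(t @ (t / lam))

# Control limit: empirical 99th percentile over the training data
limit = np.percentile([t2(x) for x in normal], 99)

# Simulated fault pushing the process far along the first principal direction,
# e.g. a simultaneous inventory build-up across echelons
fault = mu + sd * (10 * P[:, 0])
print(f"T2 = {t2(fault):.1f}, limit = {limit:.1f}, alarm = {t2(fault) > limit}")
```

Diagnosis would then examine each variable's contribution to the inflated statistic, which is the role of the contribution plot mentioned above.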
Then, a further investigation of data-driven SCMo based on another statistical process monitoring method, canonical variate analysis (CVA), is conducted. CVA utilizes the state-space model of a system and determines the canonical states by maximizing the correlation between the combination of past system outputs and inputs and the combination of future outputs. A state-space model of supply chain is developed, which forms the basis of applying CVA to detect supply chain faults. The performance of CVA and PCA are assessed and compared in terms of dimensionality reduction, false alarm rate, missed detection rate, and detection delay. Case studies show that CVA identifies a smaller system order than PCA and achieves comparable performance to PCA in a lower-dimensional latent space.
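The CVA idea described above, maximizing correlation between combinations of past outputs/inputs and combinations of future outputs, can be sketched on a one-state surrogate system. The supply chain model here is an invented first-order process, and the conditioning on future inputs used in full CVA is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented one-state supply chain surrogate: inventory y driven by orders u
T, a, b = 600, 0.9, 0.5
u = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = a * y[t - 1] + b * u[t - 1] + 0.05 * rng.standard_normal()

# Stack past (outputs and inputs) and future (outputs) windows
L = 5
rows = range(L, T - L)
past = np.array([np.r_[y[t - L:t], u[t - L:t]] for t in rows])
future = np.array([y[t:t + L] for t in rows])

def whiten(M):
    """Orthonormal basis of the centered data's column space."""
    M = M - M.mean(0)
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    return U

# Canonical correlations = singular values of the whitened cross-product
Up, Uf = whiten(past), whiten(future)
s = np.linalg.svd(Up.T @ Uf, compute_uv=False)
print("canonical correlations:", np.round(s, 3))
```

For this one-state system, only the first canonical correlation should be large, so the decay of the correlations indicates the system order, which mirrors how CVA identifies a smaller order than PCA in the thesis.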
Next, we investigate data-driven supply chain control under uncertainty with risk taken into account. The method under investigation is reinforcement learning (RL). Within the RL framework, an agent learns an optimal policy that maps the state to action during the process of interacting with the non-deterministic environment, such that a numerical reward is maximized. The current literature regarding supply chain control focuses on conventional RL that maximizes the expected return. However, this may not be the best option for risk-averse decision makers. In this work, we explore the use of safe RL, which takes into account the concept of risk in the learning process. Two safe RL algorithms, Q-hat-learning and Beta-pessimistic Q-learning, are investigated. Case studies are carried out based on a supply chain simulator developed using agent-based modelling. Results show that Q-learning has the best performance under normal scenarios, while safe RL algorithms perform better under abnormal scenarios and are more robust to changes in the environment. Moreover, we find that the benefits of safe RL are more pronounced in a closed-loop supply chain.
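The contrast between conventional Q-learning and a Beta-pessimistic variant can be sketched on a toy inventory MDP. The states, costs, and demand distribution are invented; the substantive point is the update target, which mixes the best- and worst-case next-state values.

```python
import random

random.seed(5)

# Toy inventory MDP: state = inventory level 0..4, action = order quantity 0..2
STATES, ACTIONS, GAMMA, BETA = range(5), range(3), 0.95, 0.3

def step(s, a):
    """One transition: random demand, holding cost, and stock-out penalty."""
    demand = random.choice([0, 1, 2])
    s2 = max(0, min(4, s + a - demand))
    reward = -0.5 * s2 - (3.0 if s + a < demand else 0.0)
    return s2, reward

def train(beta, episodes=4000, alpha=0.1, eps=0.2):
    """Tabular Q-learning; beta > 0 gives the Beta-pessimistic variant."""
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    s = 2
    for _ in range(episodes):
        a = random.choice(list(ACTIONS)) if random.random() < eps else \
            max(ACTIONS, key=lambda a_: Q[(s, a_)])
        s2, r = step(s, a)
        # Beta-pessimistic target: mix best- and worst-case next-state value
        hi = max(Q[(s2, a_)] for a_ in ACTIONS)
        lo = min(Q[(s2, a_)] for a_ in ACTIONS)
        target = r + GAMMA * ((1 - beta) * hi + beta * lo)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2
    return Q

q_standard = train(beta=0.0)   # ordinary Q-learning
q_safe = train(beta=BETA)      # risk-averse variant
print(f"standard max-Q: {max(q_standard.values()):.2f}, "
      f"pessimistic max-Q: {max(q_safe.values()):.2f}")
```

Because the pessimistic target partially weights the worst action at the next state, the learned values are more conservative, which is the risk-averse behaviour discussed above.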
Finally, we investigate real-time supply chain optimization. The operational optimization problems for supply chains of realistic size are often large and complex, and solving them in real time can be challenging. This work aims to address the problem by using a deep learning-based model predictive control (MPC) technique. The MPC problem for supply chain operation is formulated based on the state space model of a supply chain, and the optimal state-input pairs are precomputed in the offline phase. Then, a deep neural network is built to map the state to input, which is then used in the online phase to reduce solution time. We propose an approach to implement the deep learning-based MPC method when there are delayed terms in the system, and a heuristic approach to feasibility recovery for mixed-integer MPC, with binary decision variables taken into account. Case studies show that compared with solving the nominal MPC problem online, deep learning-based MPC can provide near-optimal solution at a lower computational cost. / Thesis / Doctor of Philosophy (PhD)
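The offline/online split described above can be sketched on a scalar toy problem. For an unconstrained linear-quadratic case the MPC solution is available exactly via a Riccati recursion, so the "offline solver" is trivial here, and a least-squares fit stands in for the deep neural network; the thesis's mixed-integer formulation and delayed terms are omitted.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy supply chain model: x = inventory deviation, u = order-rate adjustment
A, B, Q, R, N = 1.1, 0.8, 1.0, 0.1, 20   # open-loop unstable; horizon N

# Offline phase: exact MPC solution for this unconstrained LQ problem via a
# backward Riccati recursion (the thesis solves a mixed-integer MPC instead)
P = Q
for _ in range(N):
    K = A * B * P / (R + B * B * P)      # first-step optimal feedback gain
    P = Q + A * A * P - A * B * P * K    # Riccati update

# Precompute optimal state-input pairs over sampled states
xs = rng.uniform(-5.0, 5.0, 200)
us = -K * xs

# Learning phase: fit a cheap surrogate controller to the precomputed pairs
# (a least-squares fit stands in here for the deep neural network)
k_hat = float(np.polyfit(xs, us, 1)[0])

# Online phase: evaluate the learned controller in closed loop, no solver call
x = 4.0
for _ in range(30):
    x = A * x + B * (k_hat * x)          # u = k_hat * x
print(f"final inventory deviation: {abs(x):.2e}")
```

The online phase is a single function evaluation per step, which illustrates how the learned map replaces an expensive real-time optimization.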
288 |
PurdueThesis_XuejunZhao. Xuejun Zhao (14187179). 29 November 2022.
<p><em>This study examines data-driven contract design in the small data regime and large data regime respectively, and the implications from contract pricing in the pharmaceutical supply chain. </em></p>
289 |
Impact of Academic and Nonacademic Support Structures On Third Grade Reading Achievement. Peugeot, Megan Aline. 17 July 2017.
No description available.
290 |
Designing Applications for Smart Cities: A designerly approach to data analytics. Bücker, Dennis. January 2017.
The purpose of this thesis is to investigate the effects of a designerly approach to data analytics. The research was conducted during the Interaction Design Master program at Malmö University in 2017 and follows a research-through-design approach, where the material-driven design process itself becomes a way to acquire new knowledge. The thesis uses big data as a design material for designers to ideate connected products and services in the context of smart city applications. More specifically, it conducts a series of material studies that show the potential of this new perspective on data analytics. As a result of this research, a set of designs and exercises are presented and structured into a guide. Furthermore, the results emphasize the need for this type of research and highlight data as a departure material of special interest for HCI.