271

Development of a data-driven marketing strategy for an online pharmacy

Holmér, Gelaye Worku, Gamage, Ishara H. January 2022 (has links)
The term electronic commerce (e-commerce) refers to a business model that allows companies and individuals to buy and sell goods and services over the internet. The focus of this thesis is on online pharmacies, a segment of the e-commerce market. Even though online pharmacies are subject to the same stringent regulations as traditional pharmacies, which limit the scope for market growth, the segment has grown notably over the past decades. The main goal of this thesis is to develop a data-driven marketing strategy based on the daily sales data of a Sweden-based online pharmacy. The data analysis combines exploratory data analysis (EDA) and market basket analysis (MBA) using the Apriori algorithm with the application of marketing frameworks and theories from a data-driven standpoint. In addition to the data analysis, the thesis proposes a conceptual framework for a digital marketing strategy based on the RACE framework (reach, act, convert, and engage). The analysis led to the following data-driven marketing strategy: special attention should be paid to association rules with a high lift value; high gross profit margin percentile (GPMP) products should have a volume-based marketing strategy that focuses on lower prices for subsequent items; and price bundling is the best marketing strategy for low GPMP products. Practical ideas mentioned in the thesis include optimizing keyword search for high GPMP product types and sending reminder emails and push alerts to avoid cart abandonment. The findings and recommendations can be used by online pharmacies to support decisions ranging from raising overall order size and targeting marketing campaigns to increasing the sales of products with a high gross profit margin.
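As a hedged illustration of the market basket analysis step described in this abstract, the sketch below mines association rules with the Apriori algorithm and filters them by lift using the mlxtend library; the transaction data and thresholds are placeholder assumptions, not values from the thesis.

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Placeholder baskets standing in for daily pharmacy sales transactions.
transactions = [
    ["vitamin_d", "omega_3", "plasters"],
    ["vitamin_d", "omega_3"],
    ["sunscreen", "after_sun", "plasters"],
    ["vitamin_d", "sunscreen"],
    ["omega_3", "plasters", "vitamin_d"],
]

# One-hot encode the baskets into a boolean item matrix.
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions),
                      columns=te.columns_)

# Frequent itemsets via Apriori (the support threshold is an assumption).
itemsets = apriori(onehot, min_support=0.4, use_colnames=True)

# Keep only high-lift rules, mirroring the recommendation to focus on
# association rules with a high lift value.
rules = association_rules(itemsets, metric="lift", min_threshold=1.2)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```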
272

Data driven marketing : How to gain relevant insights through Google Analytics

Carlsson Ståbi, Jenny January 2019 (has links)
In this report, problems regarding the retrieval, measurement, and analysis of data when analysing marketing effects in the web analytics tool Google Analytics are discussed. A correct setup, configuration, maintenance, campaign tracking, and understanding of the data in Google Analytics are essential to achieve relevant insights. This matters because many Swedish marketing departments experience issues with their Google Analytics setup as well as with its ongoing configuration and maintenance. A literature study was conducted to gather information, focusing on theories from researchers and experts in the fields of web analytics and marketing analytics. Google Analytics data and reports from several Swedish companies were studied to gain a deep understanding of how the tool is used to measure and analyse marketing effects. Interviews with marketing department and media agency employees were conducted and analysed qualitatively. A thematic analysis of the interviews resulted in eight themes, which are presented in the result section. The result was analysed and discussed in relation to the theory. The interviews showed a difference in knowledge and experience between senior and junior analysts, and a significant learning curve when working in Google Analytics. The junior analysts trusted the data and did not know about campaign tracking and filters, in contrast to the senior analysts, who did not trust the data as a control mechanism and did work with campaign tracking and filters. Furthermore, the senior analysts had a better understanding of the data models in Google Analytics, such as attribution models, which are known to tell different stories depending on which attribution model is used. The conclusions identify four capabilities needed to gain relevant insights: more and better control over the setup and the data, wider use of campaign tracking, wider knowledge of the data and the data models in Google Analytics, and knowledge of the business the organisation is conducting.
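Campaign tracking in Google Analytics is typically done by tagging inbound links with UTM parameters. The snippet below is a minimal, hedged sketch of building such a tagged URL; the landing page and campaign names are invented for illustration and are not taken from the report.

```python
from urllib.parse import urlencode

# Hypothetical landing page and campaign values (not from the report).
base_url = "https://www.example.se/spring-sale"
utm_params = {
    "utm_source": "newsletter",       # where the traffic comes from
    "utm_medium": "email",            # the marketing medium
    "utm_campaign": "spring_sale_2019",
    "utm_content": "hero_button",     # distinguishes links within the email
}

tagged_url = f"{base_url}?{urlencode(utm_params)}"
print(tagged_url)
# Google Analytics reads these parameters on the landing page and attributes
# the session to the campaign in its acquisition reports.
```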
273

MENTAL STRESS AND OVERLOAD DETECTION FOR OCCUPATIONAL SAFETY

Eskandar, Sahel January 2022 (has links)
Stress and overload are strongly associated with unsafe behaviour, which has motivated various studies to detect them automatically in workplaces. This study aims to advance safety research by developing a data-driven stress and overload detection method. An unsupervised deep-learning-based anomaly detection method is developed to detect stress, combining a convolutional neural network encoder-decoder with a long short-term memory network equipped with an attention layer. Data from a field experiment with 18 participants was used to train and test the developed method. The field experiment included a pre-defined sequence of activities triggering mental and physical stress, while a wristband biosensor collected physiological signals. The collected contextual and physiological data were pre-processed and then resampled into correlation matrices of 14 features, which serve as input to the unsupervised Deep Learning (DL) based anomaly detection method. The developed method is validated, achieving accuracy and F-measures close to 0.98. Because the technique captures the correlations among the input attributes, the DL method is easier to interpret. Reliance on an uncertain absolute ground truth, the need for a large number of training samples, and the requirement of a threshold for detecting anomalies are identified as shortcomings of the proposed method. To overcome these shortcomings, an Adaptive Neuro-Fuzzy Inference System (ANFIS) was designed and developed. While the ANFIS method did not improve the overall accuracy, it outperformed the DL-based method in detecting anomalies precisely. The overall performance of the ANFIS method is better than that of the DL-based method for the anomalous class, and it results in fewer false alarms. However, the DL-based method is suitable for circumstances where false alarms are tolerated. / Dissertation / Doctor of Philosophy (PhD)
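A minimal sketch of the correlation-matrix preprocessing described above is shown below, assuming windowed multichannel signals with 14 features; the window length, synthetic data, and threshold rule are placeholder assumptions rather than details taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder physiological/contextual recording: 14 features over time.
n_samples, n_features = 6000, 14
signals = rng.normal(size=(n_samples, n_features))

# Slice the recording into fixed-length windows and compute one
# 14x14 feature-correlation matrix per window (the model input).
window = 300
windows = signals[: (n_samples // window) * window].reshape(-1, window, n_features)
corr_matrices = np.stack([np.corrcoef(w, rowvar=False) for w in windows])
print(corr_matrices.shape)  # (n_windows, 14, 14)

# After an autoencoder reconstructs each matrix, windows whose reconstruction
# error exceeds a chosen threshold are flagged as anomalous (stress/overload).
reconstructed = corr_matrices + rng.normal(scale=0.05, size=corr_matrices.shape)
errors = np.mean((corr_matrices - reconstructed) ** 2, axis=(1, 2))
threshold = errors.mean() + 3 * errors.std()  # threshold choice is an assumption
anomalous_windows = np.where(errors > threshold)[0]
print(anomalous_windows)
```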
274

Multi-variate Process Models for Predicting Site-specific Microstructure and Properties of Inconel 706 Forgings.

Senanayake, Nishan M. January 2022 (has links)
No description available.
275

Datadrivet beslutsfattande i sjukvården : en studie av hur fenomenet datadrivet beslutsfattande uppfattas inom hälso- och sjukvård / Data-driven decision-making in healthcare : a study of how the phenomenon of data-driven decision-making is perceived in healthcare

Mikkonen, Rebecka, Winther, Erik January 2022 (has links)
In today's society, digitalisation has become a big part of everyday life. With global digitalisation come changes in how organisations and companies function, and this also includes healthcare. An important aspect of digitalisation is the amount of data it generates, and in recent years success has increasingly been related to how organisations use that data to their own advantage. The use of data has increased dramatically on a global scale, and the benefits of data analysis are now visible not only in the organisation as a whole but also in using the extracted data for decision-making. This study aims to clarify how data-driven decision-making is perceived by healthcare employees, and how they perceive the use, opportunities, limitations, and risks of making data-driven decisions. The study is written in Swedish and uses a qualitative method: a small-n study in which individual interviews with four respondents were conducted. The respondents perceive the use of data-driven decision-making as something positive and see future opportunities for data-driven decision-making in the healthcare sector. They identify risks and limitations with this type of decision-making, but the advantages the respondents express outweigh the risks and limitations it entails.
276

CONCORDANCE-BASED FEEDBACK FOR L2 WRITING IN AN ONLINE ENVIRONMENT

Parise, Peter, 0009-0006-4628-0185 08 1900 (has links)
Data-driven learning is a sub-discipline of corpus linguistics that makes use of the analyses and tools of corpus linguistics in foreign and second language classrooms (Johns, 1991; Johns & King, 1991). With this approach, learners become researchers rather than passive recipients of language rules (Johns, 1991). This study was an investigation of the impact of this approach as a form of written corrective feedback for in-service teachers of English participating in an online writing course at a teacher training institute in Japan. Data-driven learning is commonly utilized in conventional, face-to-face classrooms or computer-lab settings in which there is close direction from the instructor on how to interpret the output of a corpus query. The purpose of this study was to investigate how data-driven learning can be implemented in a blended online environment by providing training to develop the participants' corpus competence (Charles, 2011; Flowerdew, 2010), which is defined as the ability to interpret data obtained from querying a corpus. This competence has been associated with becoming familiar with corpus methods, which include interpreting concordances, and in turn can aid in accurately repairing writing errors. This training, while initially presented in a face-to-face session at the beginning of the course, was sustained with support from resources on the course's Moodle website and my comments in Microsoft Word documents. In addition, I applied a fine-grained approach to the analysis to examine the quality of participants' interpretation of concordances. The mixed-method triangulation convergence design (Creswell & Plano Clark, 2007, 2011) used in this study was based on data from four sources to examine the effectiveness of data-driven learning in an online environment as well as to observe how the participants interpreted concordances. One data set involved an analysis of the participants' responses, in drafts of their own writing, to concordance-based feedback. The participants were given a prefabricated concordance, which was a concordance I generated. That concordance was attached to an error in the participants' document, and the participants used the information provided by the concordance to repair their writing error. The resulting data set, which contains the concordance along with before-and-after comparisons of the writers' repairs, shows how the participants' interpretations of concordances aided the repairs. With the evidence of several trials over the course of four writing assignments, it was possible to see how the participants used the supplied concordance to repair their writing errors, which in turn revealed their degree of corpus competence. A second data set, obtained from think-aloud protocols with selected participants, was used to reveal how they interpreted the concordance during an error-repair task. These data revealed what kinds of thought processes and noticing occurred during this task. A third piece of evidence was derived from data obtained from the Moodle website via log files and other resources such as online documents and training quizzes; the purpose was to document which resources related to data-driven learning training the participants accessed, and whether those resources aided the development of their corpus competence. The fourth piece of evidence was an online quiz developed to compare the participants against a standard set of items. The quiz was used to investigate which participants successfully or unsuccessfully interpreted the concordances. This instrument, which was analyzed with the Rasch model, allowed for further comparison of the participants' skill in interpreting concordances. These four data sources were triangulated and, in the final analysis, cross-referenced to examine how data-driven learning can be successfully applied in a blended online learning environment and how the training of corpus competence aided the learners in interpreting the concordances. / Teaching & Learning
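As a hedged illustration of the concordance output that data-driven learning builds on, the sketch below generates a simple KWIC (key word in context) display with NLTK; the toy corpus is invented and far smaller than anything a real corpus query would use.

```python
from nltk.text import Text

# Tiny invented corpus standing in for a reference corpus.
raw = (
    "The results depend on the data . "
    "We collected the data from learners . "
    "The data were analysed carefully . "
    "Interpreting data requires training ."
)
tokens = raw.split()

# A KWIC concordance shows each occurrence of the node word with its left
# and right context, which is the display learners interpret in DDL.
corpus = Text(tokens)
corpus.concordance("data", width=60)
```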
277

Closure Modeling for Accelerated Multiscale Evolution of a 1-Dimensional Turbulence Model

Dhingra, Mrigank 10 July 2023 (has links)
Accelerating the simulation of turbulence to stationarity is a critical challenge in various engineering applications. This study presents an innovative equation-free multiscale approach combined with a machine learning technique to address this challenge in the context of the one-dimensional stochastic Burgers' equation, a widely used toy model for turbulence. We employ an encoder-decoder recurrent neural network to perform super-resolution reconstruction of the velocity field from lower-dimensional energy spectrum data, enabling seamless transitions between fine and coarse levels of description. The proposed multiscale machine-learning framework significantly accelerates the computation of the statistically stationary turbulent Burgers' velocity field, achieving up to 442 times faster wall-clock time compared to direct numerical simulation while maintaining three-digit accuracy in the velocity field. Our findings demonstrate the potential of integrating equation-free multiscale methods with machine learning to efficiently simulate stochastic partial differential equations and highlight the possibility of using this approach for stochastic systems in other engineering domains. / Master of Science / In many practical engineering problems, simulating turbulence can be computationally expensive and time-consuming. This research explores an innovative method to accelerate these simulations using a combination of equation-free multiscale techniques and deep learning. Multiscale methods allow researchers to simulate the behavior of a system at a coarser scale, even when the specific equations describing its evolution are only available for a finer scale. This can be particularly helpful when there is a notable difference in the time scales between the coarser and finer scales of a system. The equation-free multiscale method known as coarse projective integration can then be used to speed up simulations of the system's evolution. Turbulence is an ideal candidate for this approach since it can be argued that it evolves to a statistically steady state on two different time scales. Over the course of evolution, the shape of the energy spectrum (the coarse scale) changes slowly, while the velocity field (the fine scale) fluctuates rapidly. However, applying this multiscale framework to turbulence simulations has been challenging due to the lack of a method for reconstructing the velocity field from the lower-dimensional energy spectrum data, which is necessary for moving between the two levels of description in the multiscale simulation framework. In this study, we tackled this challenge by employing a deep neural network model called an encoder-decoder sequence-to-sequence architecture. The model was used to capture and learn the conversions between the structure of the velocity field and the energy spectrum for the one-dimensional stochastic Burgers' equation, a simplified model of turbulence. By combining multiscale techniques with deep learning, we were able to achieve a much faster and more efficient simulation of the turbulent Burgers' velocity field. The findings of this study demonstrated that this novel approach could recover the final steady-state turbulent Burgers' velocity field up to 442 times faster than traditional direct numerical simulations, while maintaining a high level of accuracy.
This breakthrough has the potential to significantly improve the efficiency of turbulence simulations in a variety of engineering applications, making it easier to study and understand these complex phenomena.
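To make the coarse variable concrete, the sketch below computes the energy spectrum of a 1-D periodic velocity field with a fast Fourier transform; the grid size and synthetic field are assumptions for illustration, not the setup used in the thesis.

```python
import numpy as np

# Synthetic periodic 1-D velocity field on N grid points (placeholder data).
N = 1024
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
rng = np.random.default_rng(1)
u = np.sin(x) + 0.3 * np.sin(4 * x) + 0.05 * rng.standard_normal(N)

# Fourier coefficients of the velocity field.
u_hat = np.fft.rfft(u) / N

# Energy spectrum E(k): kinetic energy per wavenumber, i.e. the slowly
# evolving coarse description from which the fine-scale field is rebuilt.
k = np.fft.rfftfreq(N, d=1.0 / N)   # integer wavenumbers 0..N/2
E = 0.5 * np.abs(u_hat) ** 2
print(k[:8])
print(E[:8])
```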
278

[pt] APLICAÇÃO DE TÉCNICAS DE REDES NEURAIS PARA A MELHORIA DA MODELAGEM DA TURBULÊNCIA, UTILIZANDO DADOS EXPERIMENTAIS / [en] APPLICATION OF NEURAL NETWORK TECHNIQUES TO ENHANCE TURBULENCE MODELING USING EXPERIMENTAL DATA

LEONARDO SOARES FERNANDES 12 March 2024 (has links)
Despite the technological advances that have led to the development of fast computers, the direct numerical simulation of turbulent flows is still prohibitively expensive for most engineering and even some research applications. The CFD simulations used worldwide are therefore based on averaged quantities and heavily dependent on mathematical turbulence models. Although widely used, such models fail to properly predict the averaged flow in many practical situations, such as the simple flow in a square duct. With the re-blossoming of machine learning methods in recent years, much attention is being given to the use of such techniques as a replacement for traditional turbulence models. The present work evaluated the use of Neural Networks as an alternative to enhance the simulation of turbulent flows. To this end, the Stereoscopic-PIV technique was used to obtain well-converged flow statistics and velocity fields for the flow in a square duct at 10 values of the Reynolds number. A total of 10 methodologies were evaluated in a data-driven approach to understand which quantities should be predicted by a machine learning technique to produce enhanced simulations. From the selected methodologies, accurate results were obtained with a Neural Network trained on the experimental data to predict the nonlinear part of the Reynolds stress tensor and the turbulent eddy viscosity. The turbulent simulations assisted by the Neural Network returned velocity fields with less than 4 percent error compared with those previously measured.
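As a hedged sketch of the kind of model involved, the code below trains a small fully connected network to map mean-flow features to an eddy-viscosity-like target using scikit-learn; the synthetic features and target are placeholders, not the thesis's PIV data or its exact methodology.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Placeholder mean-flow features (e.g., velocity gradients) and a synthetic
# eddy-viscosity-like target; real inputs would come from the PIV data.
X = rng.normal(size=(2000, 4))
y = 0.1 * X[:, 0] ** 2 + 0.05 * X[:, 1] * X[:, 2] + 0.01 * rng.normal(size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

# Small fully connected network standing in for the thesis's neural network.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

print("test R^2:", model.score(scaler.transform(X_test), y_test))
```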
279

Smart Maintenance : tillämpning inom svensk tillverkningsindustri / Smart Maintenance : application in Swedish manufacturing

Afaneh, Lara, Ulambayar, Unubold January 2022 (has links)
The manufacturing industry is becoming increasingly digital, and new digital tools are being implemented within companies; as a result, working methods are changing. Smart Maintenance is the latest concept for how maintenance should be performed in manufacturing facilities using digital technology, and it refers to a way of working that aims to enable more resource-efficient production and maintenance operations from both an organisational and a technical perspective. In this thesis, interviews with companies constituted the central research method for understanding how the Swedish manufacturing industry views Smart Maintenance (SM), how the companies interpret the concept, and whether they have applied it, or aspects and dimensions of it, in their maintenance operations. An interview with a researcher was conducted to expand the project group's knowledge of the concept and its impact on profitability, sustainability, and competitiveness. Based on information from the interviews and a literature study, conclusions were drawn about the main benefits and challenges in the practice of Smart Maintenance, as well as their connection with sustainability. In addition, the project resulted in conclusions about how the companies interpret the concept and how data can be used to make better decisions within the interviewed companies.
280

Data-Driven Models for Infrastructure Climate-Induced Deterioration Prediction

Elleathy, Yasser January 2021 (has links)
Infrastructure deterioration has been attributed to insufficient maintenance budgets, lacking restoration strategies, deficient deterioration prediction techniques, and changing climatic conditions. Considering that the latter adds further challenges to the former, there has been a growing demand to develop and implement climate-informed infrastructure asset management strategies. However, quantifying the impact of spatiotemporally varying climate metrics on infrastructure systems poses a serious challenge due to the associated complexities and relevant modelling uncertainties. As such, in lieu of complex physics-based simulations, the current study proposes a glass-box data-driven framework for predicting infrastructure climate-induced deterioration rates. The framework harnesses evolutionary computing, specifically multigene genetic programming, to develop closed-form expressions that link infrastructure characteristics to relevant spatiotemporal climate indices and predict infrastructure deterioration rates. The framework consists of four steps: 1) data collection and preparation; 2) input integration; 3) feature selection; and 4) model development and result interpretation. To numerically demonstrate its utility, the proposed framework was applied to develop deterioration rate expressions for two different classes of concrete and steel bridges in Ontario, Canada. The developed predictive models reproduced the observed deterioration rates of both bridge classes with coefficient of determination (R²) values of 0.912 and 0.924 for the training subsets and 0.817 and 0.909 for the testing subsets of the concrete and steel bridges, respectively. Attributed to its generic nature, the framework can be applied to other infrastructure systems with available historical deterioration data to devise effective asset management strategies and infrastructure restoration standards under future climate scenarios. / Thesis / Master of Applied Science (MASc)
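As a hedged illustration of the evolutionary-computing step, the sketch below fits a closed-form expression with gplearn's symbolic regression; note that gplearn evolves single-tree programs rather than the multigene genetic programming used in the thesis, and the synthetic deterioration-rate data and hyperparameters are assumptions for illustration only.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(7)

# Placeholder predictors (e.g., bridge age, freeze-thaw cycles, traffic load)
# and a synthetic deterioration rate; real inputs would be the Ontario bridge
# inventory joined with spatiotemporal climate indices.
X = rng.uniform(0, 1, size=(500, 3))
y = 0.4 * X[:, 0] + 0.2 * X[:, 1] * X[:, 2] + 0.02 * rng.normal(size=500)

# Evolve closed-form expressions linking the predictors to the rate.
model = SymbolicRegressor(
    population_size=1000,
    generations=20,
    function_set=("add", "sub", "mul", "div"),
    parsimony_coefficient=0.001,
    random_state=0,
)
model.fit(X, y)

print(model._program)      # the evolved closed-form expression
print(model.score(X, y))   # R^2 on the training data
```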
