
Designing Applications for Smart Cities: A designerly approach to data analytics

Bücker, Dennis, January 2017
The purpose of this thesis is to investigate the effects of a designerly approach to data analytics. The research was conducted during the Interaction Design Master programme at Malmö University in 2017 and follows a research-through-design approach, in which the material-driven design process itself becomes a way to acquire new knowledge. The thesis uses big data as a design material with which designers can ideate connected products and services in the context of smart city applications. More specifically, it conducts a series of material studies that show the potential of this new perspective on data analytics. As a result of this research, a set of designs and exercises is presented and structured into a guide. Furthermore, the results emphasize the need for this type of research and highlight data as a departure material of special interest for HCI.

Developing a supervised machine learning model for an optimised aluminium addition based on historical data analytics, for clean steelmaking

Thakur, Arun Kumar, January 2022
De-oxidation is an important process in clean steelmaking. Al (aluminium) is mainly used as the de-oxidant; it controls the final oxygen content and affects sulphur removal in steel. Adding the optimum amount of Al is critical for steel cleanliness and for reducing cost. Unfortunately, Al recovery is not repeatable, owing to inherent variation in factors such as the amount of slag carryover, total oxygen content, and tapping weight. To address this challenge, statistical modelling is used to develop a supervised machine learning model that predicts the Al addition for secondary de-oxidation. Data analytics is applied to historical data from the production database to gain insight into secondary de-oxidation practice, observe patterns and trends, and understand correlations among critical process parameters. Simple and multiple linear regression models have been developed, with prediction accuracies of 58% and 66%, respectively. These models have been trained, tested, and cross-validated using standard procedures such as k-fold cross-validation and grid search. To deploy the multiple linear regression model into production, a Microsoft Excel-based dashboard was developed containing a prediction tool, pivot charts, and line and bar graphs for analysing the process. Tested in a shadow-deployment environment, the model performs well on steel grades containing up to 0.15% dissolved C (carbon) after tapping; in shadow-deployment mode the new model can run in parallel with the existing tool. For C above 0.15%, prediction accuracy drops to 46%, owing to the nonlinear relationship between oxygen content and added Al. Within the process window of 0 to 0.15% C after tapping, we believe the model can in future deliver better steel quality and repeatability in the de-oxidation process, improve productivity in terms of time and resources, and facilitate decision making once it is ready for use in a real production environment.
Future work in this direction would be to further develop this model for other steel grades.
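The modelling workflow this abstract describes — a linear regression validated with k-fold cross-validation — can be sketched in a few lines. The data, variable names, and coefficients below are invented for illustration; the dissertation's actual model uses multiple process parameters from a production database.

```python
# Hedged sketch: simple linear regression predicting Al addition from total
# oxygen content, validated with k-fold cross-validation. All data and
# names are synthetic stand-ins for the dissertation's production data.
import random

def fit_simple_lr(xs, ys):
    """Least-squares intercept and slope for y = a + b * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def r2(ys, preds):
    """Coefficient of determination on held-out points."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

def kfold_r2(xs, ys, k=5, seed=0):
    """Average held-out R^2 over k shuffled folds."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    scores = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        a, b = fit_simple_lr([xs[i] for i in train], [ys[i] for i in train])
        scores.append(r2([ys[i] for i in fold], [a + b * xs[i] for i in fold]))
    return sum(scores) / k

# Synthetic "historical heats": total oxygen (ppm) vs. Al added (kg).
rng = random.Random(42)
oxygen = [400 + 20 * i for i in range(40)]
al_added = [50 + 0.3 * o + rng.gauss(0, 15) for o in oxygen]
cv_score = kfold_r2(oxygen, al_added)
```

The held-out R² averaged over folds plays the role of the abstract's "prediction accuracy"; swapping in additional predictors turns this into the multiple-regression variant.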

On Leveraging Representation Learning Techniques for Data Analytics in Biomedical Informatics

Cao, Xi Hang, January 2019
Representation Learning is ubiquitous in state-of-the-art machine learning workflows, including data exploration/visualization, data preprocessing, model learning, and model interpretation. However, the majority of newly proposed Representation Learning methods are better suited to problems with large amounts of data; applying them to problems with limited data may lead to unsatisfactory performance. There is therefore a need for Representation Learning methods tailored to "small data" problems, such as clinical and biomedical data analytics. In this dissertation, we describe our studies tackling challenging clinical and biomedical data analytics problems from four perspectives: data preprocessing, temporal data representation learning, output representation learning, and joint input-output representation learning. Data scaling is an important component of data preprocessing. The objective of data scaling is to transform raw features into reasonable ranges so that each feature of an instance is exploited equally by the machine learning model. For example, in a credit fraud detection task, a model may use a person's credit score and annual income as features; because the ranges of these two features differ, the model may weigh one more heavily than the other. In this dissertation, I thoroughly introduce the data scaling problem and describe an approach that intrinsically handles outliers and leads to better prediction performance. Learning new representations for data in non-standard form is a common task in data analytics and data science applications. Usually, data come in tabular form: the data are represented by a table in which each row is the feature vector of one instance.
However, it is also common that data are not in this form — for example, texts, images, and video/audio records. In this dissertation, I describe the challenge of analyzing imperfect multivariate time series data in healthcare and biomedical research, and show that the proposed method can learn a powerful representation that accommodates various imperfections and improves prediction performance. Learning output representations is a newer aspect of Representation Learning, and its applications have shown promising results in complex tasks, including computer vision and recommendation systems. The main objective of an output representation algorithm is to explore the relationships among the target variables so that a prediction model can efficiently exploit their similarities and potentially improve prediction performance. In this dissertation, I describe a learning framework that incorporates output representation learning into time-to-event estimation; in particular, the approach learns the model parameters and time vectors simultaneously. Experimental results show not only the effectiveness of this approach but also its interpretability, through visualizations of the time vectors in 2-D space. Learning the input (feature) representation, the output representation, and the predictive model are closely related tasks, so it is a natural extension of the state of the art to consider them together in a joint framework. In this dissertation, I describe a large-margin ranking-based learning framework for time-to-event estimation with joint input embedding learning, output embedding learning, and model parameter learning. In this framework, I cast the functional learning problem as a kernel learning problem and, adopting results from Multiple Kernel Learning, propose an efficient optimization algorithm. Empirical results show its effectiveness on several benchmark datasets. / Computer and Information Science
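The outlier-robust scaling idea discussed in the abstract can be illustrated with a median/IQR scaler. This is a generic sketch, not the dissertation's specific method, and the income figures are invented.

```python
# Generic outlier-robust scaler (median / IQR): centre each feature on its
# median and divide by the interquartile range, so a single extreme value
# cannot dominate the scale the way it does with min-max or z-scoring.
import statistics

def robust_scale(values):
    """Scale a feature column as (x - median) / IQR."""
    q1, q2, q3 = statistics.quantiles(values, n=4)  # quartile cut points
    iqr = q3 - q1
    if iqr == 0:
        return [0.0 for _ in values]
    return [(x - q2) / iqr for x in values]

# Annual incomes with one extreme outlier that would crush min-max scaling.
incomes = [30_000, 35_000, 40_000, 45_000, 10_000_000]
scaled = robust_scale(incomes)
```

Under min-max scaling the four ordinary incomes would be squashed into a sliver near zero; with the median/IQR scale they remain distinguishable while the outlier stays bounded.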

Adapting video games for TV : A nexus of interactive and linear storytelling

Viejobueno, Carlos, January 2024
The Entertainment Industry has long been plagued by the “Video Game Curse”, a term used by fans and critics alike to describe the historically poor performance and reception of video game adaptations to film and television, often resulting from a failure to capture the essence and appeal of the original games (Barasch, 2023). However, in the last few years several successful video game-to-film/TV adaptations have been released (Williams, 2021), with many more in development. Has the “Video Game Curse” finally been lifted? And if so, how, and why now? This thesis examines how the “Video Game Curse” has manifested itself in film and TV productions and how more recent media seem to have learned from the mistakes of the past. Using a theoretical creative development project for a TV series based on the cult classic video game Mirror’s Edge (2008), this paper unpacks the unique challenges and opportunities that come with adapting video games into film and TV. The analysis also includes a discussion of the evolution of storytelling techniques, the importance of authentic representation of the game's world and characters, and the integration of the interactive elements that define video games into a passive viewing experience.

A Data Analytics Framework for Regional Voltage Control

Yang, Duotong, 16 August 2017
Modern power grids are among the largest and most complex engineered systems. Due to economic competition and deregulation, power systems are operated closer to their security limits. When the system operates under heavy loading, unstable voltage conditions may cause a cascading outage. Voltage fluctuations are presently being further aggravated by the increasing integration of utility-scale renewable energy sources. In this regard, a fast-responding and reliable voltage control approach is indispensable. The continuing success of synchrophasor technology has ushered in new subdomains of power system applications for real-time situational awareness, online decision support, and offline system diagnostics. The primary objective of this dissertation is to develop a data-analytics-based framework for regional voltage control utilizing high-speed data streams delivered from synchronized phasor measurement units. The dissertation focuses on three studies. The first centers on the development of decision-tree-based voltage security assessment and control. The second proposes an adaptive decision-tree scheme that uses online ensemble learning to update the decision model in real time. A system network partitioning approach is introduced in the last study; its aim is to reduce the size of the training sample database and the number of control candidates for each regional voltage controller. The methodologies proposed in this dissertation are evaluated on an open-source software framework. / Ph. D.
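The decision-tree-based security assessment the abstract describes can be reduced, for intuition, to learning a threshold rule on a measurement. The single-split "stump" below, fitted to invented voltage snapshots, is a stand-in for the dissertation's full trees trained on PMU data.

```python
# Minimal "decision stump" on synthetic snapshots: flag an operating
# condition as insecure when the lowest bus voltage sags below a learned
# threshold. Data, labels, and the 0/1 accuracy criterion are illustrative.

def best_stump(xs, ys):
    """Pick the threshold t maximizing accuracy of: insecure iff x < t."""
    best_t, best_acc = None, 0.0
    for t in sorted(set(xs)):
        acc = sum((x < t) == bool(y) for x, y in zip(xs, ys)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Lowest bus voltage per snapshot (p.u.); label 1 marks an insecure case.
volts = [1.02, 1.00, 0.99, 0.97, 0.94, 0.92, 0.90, 0.88]
labels = [0, 0, 0, 0, 1, 1, 1, 1]
threshold, accuracy = best_stump(volts, labels)
```

A real decision tree recurses this split on multiple measurements; the online ensemble scheme of the second study then re-weights such trees as new synchrophasor data arrive.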

Smart Additive Manufacturing Using Advanced Data Analytics and Closed Loop Control

Liu, Chenang, 19 July 2019
Additive manufacturing (AM) is a powerful emerging technology for fabricating components with complex geometries from a variety of materials. However, despite its promising potential, the complexity of the process dynamics makes it challenging to ensure product quality and consistency of AM parts efficiently during the process. Therefore, the objective of this dissertation is to develop effective methodologies for online automatic quality monitoring and improvement, i.e., to build a basis for smart additive manufacturing. Fast-growing sensor technology can easily generate massive amounts of real-time process data, which provides excellent opportunities to address the barriers to online quality assurance in AM from a data-driven perspective. Although this direction is very promising, online sensing data typically have high dimensionality and complex inherent structure, which makes real-time data-driven analytics and decision-making very challenging. To address these challenges, multiple data-driven approaches have been developed in this dissertation to achieve effective feature extraction, process modeling, and closed-loop quality control. These methods are validated on a typical AM process, namely fused filament fabrication (FFF). Specifically, four new methodologies are proposed and developed, as listed below. (1) To capture the variation of hidden patterns in sensor signals, a feature extraction approach based on spectral graph theory is developed for defect detection in online quality monitoring of AM. The most informative feature is extracted and integrated with a statistical control chart, which can effectively detect anomalies caused by cyber-physical attacks.
(2) To understand the underlying structure of high dimensional sensor data, an effective dimension reduction method based on an integrated manifold learning approach termed multi-kernel metric learning embedded isometric feature mapping (MKML-ISOMAP) is proposed for online process monitoring and defect diagnosis of AM. Based on the proposed method, process defects can be accurately identified by supervised classification algorithms. (3) To quantify the layer-wise quality correlation in AM by taking into consideration of reheating effects, a novel bilateral time series modeling approach termed extended autoregressive (EAR) model is proposed, which successfully correlates the quality characteristics of the current layer with not only past but also future layers. The resulting model is able to online predict the defects in a layer-wise manner. (4) To achieve online defect mitigation for AM process, a closed-loop quality control system is implemented using an image analysis-based proportional-integral-derivative (PID) controller, which can mitigate the defects by adaptively adjusting machine parameters during the printing process in a timely manner. By fully utilizing the online sensor data with innovative data analytics and closed-loop control approaches, the above-proposed methodologies are expected to have excellent performance in online quality assurance for AM. In addition, these methodologies are inherently integrated into a generic framework. Thus, they can be easily transformed for applications in other advanced manufacturing processes. / Doctor of Philosophy / Additive manufacturing (AM) technology is rapidly changing the industry; and online sensor-based data analytics is one of the most effective enabling techniques to further improve AM product quality. The objective of this dissertation is to develop methodologies for online quality assurance of AM processes using sensor technology, advanced data analytics, and closed-loop control. 
It aims to build a basis for the implementation of smart additive manufacturing. The new methodologies proposed in this dissertation focus on addressing quality issues in AM through effective feature extraction, advanced statistical modeling, and closed-loop control. To validate their effectiveness and efficiency, a widely used AM process, fused filament fabrication (FFF), is selected as the experimental platform for testing and validation. The results demonstrate that the proposed methods are very promising for detecting and mitigating quality defects during AM operations. Consequently, the research outcomes of this dissertation significantly improve our capability for online defect detection, diagnosis, and mitigation in the AM process. The future applications of this work are not limited to AM, however: the developed generic methodological framework can be extended to many other types of advanced manufacturing processes.
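The closed-loop control element of methodology (4) rests on a standard discrete PID loop. The sketch below drives a toy first-order process toward a setpoint; the gains and process dynamics are invented, whereas the dissertation's controller acts on image-derived quality measurements of an FFF printer.

```python
# Textbook discrete PID loop driving a toy first-order process to a
# setpoint. Gains and process model are made-up illustrations, standing in
# for the dissertation's image-analysis-based PID quality controller.

def pid_step(error, state, kp=0.6, ki=0.2, kd=0.05, dt=1.0):
    """One PID update; state carries (integral, previous error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

setpoint, value = 1.0, 0.0
state = (0.0, 0.0)
for _ in range(50):
    u, state = pid_step(setpoint - value, state)
    value += 0.5 * (u - value)  # toy process: relaxes toward control input
```

The integral term is what removes steady-state error: even a small persistent offset accumulates until the control action cancels it, mirroring how the printer's machine parameters are nudged layer by layer.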

Essays on Utilizing Data Analytics and Dynamic Modeling to Inform Complex Science and Innovation Policies

Baghaei Lakeh, Arash, 27 April 2018
In many ways, science represents a complex system which involves technical, social, and economic aspects. An analysis of such a system requires employing and combining different methodological perspectives and incorporation of different sources of data. In this dissertation, we use a variety of methods to analyze large sets of data in order to examine the effects of various domestic and institutional factors on scientific activities. First, we evaluate how the contributions of behavioral and social sciences to studies of health have evolved over time. We use data analytics to conduct a textual analysis of more than 200,000 publications on the topic of HIV/AIDS. We find that the focus of the scientific community within the context of the same problem varies as the societal context of the problem changes. Specifically, we uncover that the focus on the behavioral and social aspects of HIV/AIDS has increased over time and varies in different countries. Further, we show that this variation is related to the mortality level that the disease causes in each country. Second, we investigate how different sources of funding affect the science enterprise differently. We use data analytics to analyze more than 60,000 papers published on the subject of specific diseases globally and highlight the role of philanthropic money in these domains. We find that philanthropies tend to have a more practical approach in health studies as compared with public funders. We further show that they are also concerned with the economic, policy related, social, and behavioral aspects of the diseases. We uncover that philanthropies tend to mix and combine approaches and contents supported both by public and private sources of funding for science. We further show that in doing so, philanthropies tend to be closer to the position held by the public sector in the context of health studies. 
Finally, we find that studies funded by philanthropies tend to receive higher citations, and hence have higher impact, in comparison to those funded by the public sector. Third, we study the effect of different schemes of funding distribution on the career of scientists. In this study, we develop a system dynamics model for analyzing a scientist's career under different funding and competition contexts. We investigate the characteristics of optimal strategies and also the equilibrium points for the cases of scientists competing for financial resources. We show that a policy to fund the best can lead scientists to spend more time on writing proposals, in order to secure funding, rather than writing papers. We find that when everyone receives funding (or have the same chance of receiving funding) the overall optimal payoff of the scientists reaches its highest level and at this optimum, scientists spend all their time on writing papers rather than writing proposals. Our analysis suggests that more egalitarian distributions of funding results in higher overall research output by scientists. We also find that luck plays an important role in the success of scientists. We show that following the optimal strategies do not guarantee success. Due to the stochastic nature of funding decisions, some will eventually fail. The failure is not due to scientists' faulty decisions, but rather simply due to their lack of luck. / Ph. D. / Science helps us understand the world and enables us to improve how we interact with our environment. But science itself has also been the subject of inquiry by philosophers, sociologists, economists, historians, and scientists. The goal in the investigations of science has been to better understand how scientific advances occur, how to foster innovation, and how to improve the institutions that push science forward. This dissertation contributes to this area of research by asking and responding to several questions about the science enterprise. 
First, we study how communities of scientists in different parts of the world look at the seemingly same problem differently. We use a computational method to read through a large set of publications on the topic of HIV/AIDS (more than 200,000 papers) and uncover the topics of these papers. We find that, in the context of HIV/AIDS, the contributions of behavioral and social scientists have increased over time. Moreover, we show that the share of these contributions in each country's total research output differs significantly. We further find a significant relationship between a country's rate of death due to HIV/AIDS and the share of behavioral and social studies in that country's overall research profile on the topic. Second, we investigate how different sources of research funding affect scientific activities differently. Specifically, we focus on the role of philanthropic money in science and its effect on the content and impact of research studies. In our analysis, we rely on computational techniques that distinguish among different themes of research in the studies of a few diseases, together with different statistical methods. We find that philanthropies tend to take a more practical approach to health studies than public sources of funding, while also attending to the economic, policy-related, social, and behavioral aspects of the diseases. Moreover, we show that philanthropies tend to mix and combine approaches and content supported by both public and private sources of funding for science, and that in doing so they tend to sit closer to the position held by the public sector in the context of health studies. Finally, we show that studies funded by philanthropies tend to receive more citations, suggesting that they have a higher impact than those funded by the public sector.
Third, we study how different mechanisms for distributing research funding among scientists can affect their careers and success. Most scientists must divide their time between writing papers and writing research grant proposals. In this work, we aim to understand how a scientist should allocate her time between these two activities to maximize the number of papers she publishes over her career. We develop a small mathematical model that captures the mechanisms shaping a research career in an academic setting. Then, for different schemes of funding distribution, we find the time allocation that maximizes the scientist's career-long paper output. We find that when funding is allocated to the best scientists and best grant proposals, scientists' best strategy is to spend more time writing research grant proposals rather than papers, which decreases the total number of papers published over their careers. We also find that luck is important in determining career success: due to errors in the evaluation of proposal quality, a scientist may fail regardless of whether she has followed the best strategy available to her.
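The time-allocation trade-off in the third essay can be caricatured in a few lines: under a competitive scheme, funding probability depends on proposal effort, while under an egalitarian scheme it does not. The functional forms and numbers below are illustrative assumptions, not the essay's actual system dynamics model.

```python
# Caricature of the essay's time-allocation trade-off: paper output requires
# both writing time and funding. A competitive scheme ties funding
# probability to proposal effort; an egalitarian scheme funds everyone.
# All functional forms and constants here are invented for illustration.

def career_output(proposal_share, egalitarian):
    """Expected paper output for a given share of time spent on proposals."""
    paper_share = 1.0 - proposal_share
    p_funded = 1.0 if egalitarian else min(1.0, 2.0 * proposal_share)
    return paper_share * p_funded

def best_share(egalitarian):
    """Grid-search the proposal-time share that maximizes expected output."""
    shares = [i / 100 for i in range(101)]
    return max(shares, key=lambda s: career_output(s, egalitarian))

opt_egal = best_share(True)    # all time goes to papers
opt_comp = best_share(False)   # half the time is diverted to proposals
```

Consistent with the essay's finding, the egalitarian scheme lets the scientist spend all her time on papers and yields the higher expected output; the competitive scheme forces time into proposal writing.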

Smart Delivery Mobile Lockers: Design, Models and Analytics

Liu, Si, January 2024
This doctoral thesis represents pioneering research in integrating Smart Mobile Lockers with City Buses (SML-CBs) for e-commerce last-mile delivery, a novel concept rooted in the sharing economy. It explores the innovative use of underutilized urban bus capacities for parcel transportation while incorporating smart parcel lockers to facilitate self-pick-up by customers. Comprising six chapters, the thesis delineates its background, motivations, contributions, and organization in Chapter 1. Chapter 2 presents a comprehensive review of the recent literature on last-mile freight deliveries, including a bibliometric analysis, identifying gaps and opportunities for SML-CBs intervention. In Chapter 3, using survey data, we conduct empirical analytics to study Canadian consumers’ attitudes towards adopting SML-CBs, focusing on deterrents such as excessive walking distances to pick-up locations and incentives led by environmental concerns. This chapter also pinpoints demographic segments likely to be early adopters of this innovative delivery system. To address the concerns over walking distances identified in Chapter 3, Chapter 4 presents a prescriptive model and algorithms aimed at minimizing customer walking distance to self-pick-up points, considering the assignment of SML-CBs and customers. The case study results endorse the convenience of SML-CBs in terms of short walking distances. To systematically assess the sustainability benefits, a key motivator identified in Chapter 3, Chapter 5 includes analytical models for pricing and accessibility of SML-CBs. It also employs a hybrid life cycle assessment (LCA) methodology to analyze the sustainability performance of SML-CBs. It establishes system boundaries, develops pertinent LCA parameters, and illustrates substantial greenhouse gas (GHG) savings in both operational and life cycle phases when SML-CBs are utilized instead of traditional delivery trucks. 
The dissertation is concluded in Chapter 6, summarizing the principal contributions and suggesting avenues for future research. This comprehensive study not only provides empirical and analytical evidence supporting the feasibility and advantages of SML-CBs but also contributes to the literature on sustainable logistics and urban freight deliveries. / Thesis / Doctor of Philosophy (PhD) / This doctoral thesis represents pioneering research in integrating Smart Mobile Lockers with City Buses (SML-CBs) for e-commerce last-mile delivery. It explores the innovative use of underutilized urban bus capacities for parcel transportation while incorporating smart parcel lockers to facilitate self-pick-up by customers. Comprising six chapters, the thesis delineates its background, motivations, contributions, and organization in Chapter 1. Chapter 2 presents a comprehensive review of the recent literature on last-mile freight deliveries. In Chapter 3, we study Canadian consumers’ attitudes towards adopting SML-CBs, focusing on deterrents such as excessive walking distances to pick-up locations and incentives led by environmental concerns. To address the concerns over walking distances identified in Chapter 3, Chapter 4 presents models and algorithms for operating SML-CBs. Chapter 5 presents an assessment of the sustainability of SML-CBs. The dissertation is concluded in Chapter 6, summarizing the principal contributions and suggesting avenues for future research.
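The customer-to-locker assignment of Chapter 4 can be pictured, in its simplest form, as matching each customer to the nearest candidate stop and reporting the worst-case walk. Coordinates and names below are invented, and the thesis's actual prescriptive model handles locker capacities and bus schedules that this sketch ignores.

```python
# Reduced form of the Chapter 4 assignment problem: map each customer to
# the nearest candidate locker stop, then report the worst-case walking
# distance. All locations here are made up for illustration.
import math

def assign_customers(customers, stops):
    """Assign each customer to the stop minimizing walking distance."""
    return {cid: min(stops, key=lambda s: math.dist(loc, stops[s]))
            for cid, loc in customers.items()}

customers = {"c1": (0.0, 0.0), "c2": (1.0, 1.0), "c3": (4.0, 0.2)}
stops = {"stop_A": (0.0, 1.0), "stop_B": (4.0, 0.0)}
assignment = assign_customers(customers, stops)
max_walk = max(math.dist(customers[c], stops[s])
               for c, s in assignment.items())
```

Minimizing this worst-case walk (rather than just the average) is what addresses the "excessive walking distance" deterrent identified in the consumer survey of Chapter 3.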

Big data analytics capability and market performance: The roles of disruptive business models and competitive intensity

Olabode, Oluwaseun E., Boso, N., Hultman, M., Leonidou, C.N., 8 October 2021
Research shows that big data analytics capability (BDAC) is a major determinant of firm performance. However, scant research has theoretically articulated and empirically tested the mechanisms and conditions under which BDAC influences performance. This study advances existing knowledge on the BDAC–performance relationship by drawing on the knowledge-based view and contingency theory to argue that how and when BDAC influences market performance is dependent on the intervening role of disruptive business models and the contingency role of competitive intensity. We empirically test this argument on primary data from 360 firms in the United Kingdom. The results show that disruptive business models partially mediate the positive effect of BDAC on market performance, and this indirect positive effect is strengthened when competitive intensity increases. These findings provide new perspectives on the business model processes and competitive conditions under which firms maximize marketplace value from investments in BDACs.
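The partial-mediation logic tested in this paper (BDAC → disruptive business model → market performance) can be sketched with the product-of-coefficients approach on synthetic data. The path strengths below (0.7 and 0.5) are invented, not the paper's estimates, and the paper's moderation by competitive intensity is omitted from this sketch.

```python
# Product-of-coefficients mediation sketch on synthetic data: the indirect
# effect is path a (mediator on X) times path b (outcome on mediator,
# controlling for X). Variable names and effect sizes are invented.
import random

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

rng = random.Random(1)
bdac = [rng.gauss(0, 1) for _ in range(500)]              # X
bizmodel = [0.7 * x + rng.gauss(0, 0.5) for x in bdac]    # mediator M
perf = [0.5 * m + 0.2 * x + rng.gauss(0, 0.5)             # outcome Y
        for x, m in zip(bdac, bizmodel)]

# Path a: slope of M ~ X. Path b: coefficient of M in Y ~ X + M (two-
# predictor least squares via the covariance formulas).
a_path = cov(bdac, bizmodel) / cov(bdac, bdac)
den = cov(bdac, bdac) * cov(bizmodel, bizmodel) - cov(bdac, bizmodel) ** 2
b_path = (cov(bizmodel, perf) * cov(bdac, bdac)
          - cov(bdac, perf) * cov(bdac, bizmodel)) / den
indirect = a_path * b_path  # expected near 0.7 * 0.5 = 0.35
```

Because the direct path (0.2) remains alongside the indirect one, this simulated setup exhibits partial mediation, the same pattern the study reports; testing moderated mediation would add an interaction term with competitive intensity.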

How and when does big data analytics capability contribute to market performance?

Olabode, Oluwaseun E., Boso, N., Hultman, Magnus, Leonidou, C.N., 19 September 2023
This study looks at the relationship between big data analytics capability and market performance and how this relationship can be facilitated by adopting disruptive business models in competitive environments.
