91

Marketing Research in the 21st Century: Opportunities and Challenges

Hair, Joe F., Harrison, Dana E., Risher, Jeffrey J. 01 October 2018
The role of marketing is evolving rapidly, and the design and analysis methods used by marketing researchers are changing with it. These changes stem from transformations in management skills, technological innovations, and continuously evolving customer behavior. But perhaps the most substantial driver is the emergence of big data and the analytical methods used to examine and understand the data. To remain relevant, marketing research must stay as dynamic as the markets themselves and adapt accordingly to the following: data will continue increasing exponentially; data quality will improve; analytics will be more powerful, easier to use, and more widely used; management and customer decisions will increasingly be knowledge-based; privacy issues and challenges will be both a problem and an opportunity as organizations develop their analytics skills; data analytics will become firmly established as a competitive advantage, both in the marketing research industry and in academia; and for the foreseeable future, the demand for highly trained data scientists will exceed the supply.
92

Finding co-workers with similar competencies through data clustering

Skoglund, Oskar January 2022
In this thesis, data clustering techniques are applied to a competence database from the company Combitech. The goal of the clustering is to connect co-workers with similar competencies and competence areas in order to enable more skill sharing. This is accomplished by implementing and evaluating three clustering algorithms: k-modes, DBSCAN, and ROCK. The algorithms are fine-tuned using three internal validity indices: the Dunn, Silhouette, and Davies-Bouldin scores. Finally, a questionnaire about the clusterings produced by the three algorithms is sent to the co-workers on whom the clustering is based, in order to obtain external validation by calculating clustering accuracy. The internal validity indices show that ROCK and DBSCAN create the most separated and dense clusters. The questionnaire results show that ROCK is the most accurate of the three algorithms, with an accuracy of 94%, followed by k-modes at 58% and DBSCAN at 40%. However, visualizing the clusters shows that both ROCK and DBSCAN create one very large cluster, which is not desirable. This is not the case for k-modes, where the clusters are more evenly sized while still being fairly well separated. In general, the results show that it is possible to use data clustering techniques to connect people with similar competencies, and that the predicted clusters agree fairly well with the gold-standard data from the co-workers. The results depend strongly on the choice of algorithm and parameter values, however, so both must be chosen carefully.
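As a hedged illustration of this approach (not the thesis's actual code), the sketch below clusters a synthetic categorical competence matrix with k-modes and scores candidate cluster counts with the Silhouette index over Hamming distance; the data layout, value ranges, and parameter grid are assumptions.

```python
import numpy as np
from kmodes.kmodes import KModes
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Hypothetical stand-in for the competence database: 100 co-workers,
# 8 categorical competence fields encoded as integer category labels.
X = rng.integers(0, 5, size=(100, 8))

best_k, best_score = None, -1.0
for k in range(2, 10):
    km = KModes(n_clusters=k, init="Huang", n_init=5, random_state=0)
    labels = km.fit_predict(X)
    # Silhouette over Hamming distance suits categorical (k-modes) data.
    score = silhouette_score(X, labels, metric="hamming")
    if score > best_score:
        best_k, best_score = k, score

print(f"best k = {best_k}, silhouette = {best_score:.3f}")
```

DBSCAN and ROCK would slot into the same loop, with the Dunn and Davies-Bouldin indices computed analogously for fine-tuning.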
93

A performance study for autoscaling big data analytics containerized applications: Scalability of Apache Spark on Kubernetes

Vennu, Vinay Kumar, Yepuru, Sai Ram January 2022
Container technologies are rapidly changing how distributed applications are executed and managed on cloud computing resources. As containers can be deployed at large scale, there is a tremendous need for container orchestration tools like Kubernetes that automate deployment, scaling, and management. In recent times, the adoption of container technologies like Docker has risen in internal usage, commercial offerings, and application fields ranging from high-performance computing to geo-distributed (edge or IoT) applications. Big data analytics is another field with a trend toward running applications (e.g., Apache Spark) as containers for elastic workloads and multi-tenant service models, leveraging container orchestration tools like Kubernetes. Despite abundant research on the performance impact of containerizing big data applications, to the best of our knowledge, specific aspects like scalability and resource management remain largely unexplored, leaving a research gap. This research studies the performance impact of autoscaling a big data analytics application on Kubernetes using the Horizontal Pod Autoscaler (HPA) and the Vertical Pod Autoscaler (VPA). These state-of-the-art mechanisms for scaling containerized applications on Kubernetes, along with the available big data benchmarking tools for generating workloads on frameworks like Spark, are identified through a literature review. Apache Spark is selected as a representative big data application due to its ecosystem and industry-wide adoption by enterprises. In particular, a series of experiments is conducted, adjusting resource parameters (such as CPU requests and limits) and autoscaling mechanisms while measuring run-time metrics like execution time and CPU utilization. Our results show that while Spark achieves better execution time when configured to scale with VPA, it also exhibits overhead in CPU utilization. In contrast, autoscaling big data applications with HPA adds overhead in both execution time and CPU utilization. This thesis can help researchers and cloud practitioners who run big data applications to evaluate autoscaling mechanisms and achieve better performance and resource utilization.
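For readers unfamiliar with the resource parameters mentioned above, the sketch below shows how CPU requests and limits for Spark executors on Kubernetes are typically set through Spark configuration; the master URL and container image are placeholders, not values from the thesis, and a real cluster endpoint is needed to actually run it.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    # Placeholder API server; substitute a real Kubernetes endpoint.
    .master("k8s://https://kubernetes.example.com:6443")
    .appName("autoscaling-benchmark")
    .config("spark.kubernetes.container.image", "spark:3.3.0")  # placeholder image
    .config("spark.executor.instances", "2")
    # The CPU request/limit knobs that the experiments vary and that
    # HPA/VPA scaling decisions react to.
    .config("spark.kubernetes.executor.request.cores", "1")
    .config("spark.kubernetes.executor.limit.cores", "2")
    # Dynamic allocation lets the executor count scale with the workload.
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
    .getOrCreate()
)
```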
94

Academic Analytics: On the Significance of (Big) Data Analytics in Evaluation

Stützer, Cathleen M. 03 September 2020
In the context of higher education and educational research, evaluation as a whole is used as a steering and controlling instrument, providing statements on the quality of teaching, research, and administration, among other things. Even though the concept of quality is still handled very differently across universities, the parties involved pursue a common goal: to integrate evaluation into everyday university practice as a reliable (internal) instrument of prevention and prediction. That this overarching goal comes with a number of hurdles is obvious and has already been widely discussed in the literature (Benneworth & Zomer 2011; Kromrey 2001; Stockmann & Meyer 2014; Wittmann 2013). Evaluation research offers an interdisciplinary research approach: instruments and methods from different (social science) disciplines, both qualitative and quantitative in nature, are employed. Mixed-method/multi-data approaches are considered particularly compelling in their explanatory power, despite the indisputably higher effort of data collection and exploitation (Döring 2016; Hewson 2007). However, (big) data analytics and real-time and interaction analyses are finding their way into the national higher education system only very slowly. This contribution addresses the significance of (big) data analytics in evaluation. On the one hand, challenges and potentials are identified; on the other, the question is pursued of how (social) data can be collected and analyzed (automatically) at different levels of aggregation. Using the evaluation of e-learning in university teaching as a case example, suitable data collection methods, analysis instruments, and fields of action are presented. The case study is situated in the context of Computational Social Science (CSS) in order to contribute to the development of evaluation research in the age of big data and social networks.
95

Building Big Data Analytics as a Strategic Capability in Industrial Firms: Firm Level Capabilities and Project Level Practices

Alexander, Dijo T. 29 January 2019
No description available.
96

Use of Data Analytics and Machine Learning to Improve Culverts Asset Management Systems

Gao, Ce 10 June 2019
No description available.
97

SHOPS: Predicting Shooting Crime Locations Using Principle of Data Analytics

Varlioglu, Muhammed 21 October 2019
No description available.
98

Implications of Analytics and Visualization of Torque Tightening Process Data on Decision Making: An automotive perspective

Thomas, Nikhil January 2023
In recent years, there has been an increased focus on integrating digital technologies into industrial processes, a shift also termed "Industry 4.0". Among the many challenges of this transition is understanding how to extract useful insights from data collected over long periods of time, predominantly in industrial IT systems. Automotive assembly plant X is currently undergoing a digital transformation to leverage such technologies, with an emphasis on understanding the implications of data analytics and visualization and how they could be leveraged for process optimization. The torque tightening assembly process at plant X was chosen for the study because the process data in the tool management system database was accessible. The purpose of this master thesis was thus to identify the implications of data analytics for the torque tightening operations in assembly plant X. In addition, the thesis aimed to understand how visualization of key performance indicators (KPIs) can improve the traceability of operational deviations. In other words, the study aims to validate how data analytics and visualization of KPIs facilitate data-driven decision making and improve the traceability of operational deviations. The research is based on an inductive, exploratory case study approach. The study began by establishing the current state through a series of interviews, followed by the development of a framework and dashboard for visualizing operational deviations. Finally, a discussion is presented on how data analytics and visualization could support decision-making in continuous improvement efforts.
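As one concrete (hypothetical) example of the kind of KPI such a dashboard might visualize, the sketch below computes a within-tolerance rate per station from tightening records and keeps out-of-tolerance rows for traceability; all column names and the tolerance band are assumptions, not plant X's actual schema.

```python
import pandas as pd

# Stand-in for an export from the tool management system database.
df = pd.DataFrame({
    "station": ["S1", "S1", "S2", "S2", "S2"],
    "target_nm": [25.0, 25.0, 40.0, 40.0, 40.0],
    "actual_nm": [24.8, 26.9, 39.7, 40.2, 43.1],
    "timestamp": pd.to_datetime([
        "2023-03-01 08:00", "2023-03-01 08:01", "2023-03-01 08:02",
        "2023-03-01 08:03", "2023-03-01 08:04",
    ]),
})

tolerance = 0.05  # assumed +/- 5% torque tolerance band
df["within_tol"] = (df["actual_nm"] - df["target_nm"]).abs() <= tolerance * df["target_nm"]

# KPI: within-tolerance rate per station; deviations kept for tracing
# back to a specific tool, operator, or time window.
kpi = df.groupby("station")["within_tol"].mean().rename("within_tolerance_rate")
deviations = df[~df["within_tol"]]
print(kpi, deviations, sep="\n")
```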
99

Enhancing urban centre resilience under climate-induced disasters using data analytics and machine learning techniques

Haggag, May January 2021
According to the Centre for Research on the Epidemiology of Disasters, the global average number of Climate-Induced Disasters (CID) has tripled in less than four decades, from approximately 1,300 CID between 1975 and 1984 to around 3,900 between 2005 and 2014. In addition, around 1 million deaths and $1.7 trillion in damage costs have been attributed to CID since 2000, with around $210 billion incurred in 2020 alone. Consequently, the World Economic Forum has identified extreme weather as the top-ranked global risk in terms of likelihood, and among the top five risks in terms of impact, in each of the last four years. These risks are not expected to diminish: i) the number of CID is anticipated to double during the next 13 years; ii) annual fatalities due to CID are expected to increase by 250,000 deaths in the next decade; and iii) annual CID damage costs are expected to increase by around 20% in 2040 compared to those realized in 2020. Given the anticipated increase in CID frequency, the intensification of CID impacts, the rapid growth of the world's population, and the fact that two thirds of that population will be living in urban areas by 2050, enhancing both community and city resilience under CID has become crucial. Resilience, in this context, refers to the ability of a system to bounce back, recover, or adapt in the face of adverse events. This is a very ambitious goal, given both the extreme unpredictability of the frequency and impacts of CID and the complex behavior of cities that stems from the interconnectivity of their constituent infrastructure systems. With the emergence of data-driven machine learning, in which models are trained on historical data and can thereby learn to predict complex features, developing robust models that predict the frequency and impacts of CID has become more feasible. Through data analytics and machine learning techniques, this work aims to enhance city resilience by predicting both the occurrence and the expected impacts of climate-induced disasters on urban areas. The first part of this dissertation presents a critical review of research on the resilience of critical infrastructure systems, employing meta-research through topic modelling to quantitatively uncover latent topics in the field. The second part aims to predict the occurrence of CID by developing a framework that links climate change indices to historical disaster records. The third part develops a framework for predicting the performance of critical infrastructure systems under CID. Finally, the fourth part develops a systematic data-driven framework for predicting CID property damages. This work is expected to aid stakeholders in developing spatio-temporal preparedness plans under CID, which can mitigate the adverse impacts of CID on infrastructure systems and improve their resilience.
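To make the second part's idea concrete, here is a hedged sketch of linking climate indices to disaster occurrence with a classifier; the synthetic features and the random-forest choice are illustrative assumptions, not the dissertation's exact framework.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(42)
n = 1000
# Hypothetical regional climate indices (e.g., extreme-heat days,
# max 5-day precipitation, drought index) and a binary disaster label.
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("F1:", round(f1_score(y_te, clf.predict(X_te)), 3))
```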
100

Hardware Utilisation Techniques for Data Stream Processing

Meldrum, Max January 2019
Recent years have seen increased use of the stream processing architecture to compose continuous analytics applications. This thesis presents the design of a Rust-based stream processor that adopts two separate techniques to tackle existing weaknesses in modern production-grade stream processors. The first technique employs a data analytics language on top of the streaming runtime in order to provide both dataflow and low-level compiler optimisations. This technique is motivated by an analysis of the impact that the lack of compiler integration can have on the end-to-end performance of streaming pipelines in Apache Flink. In the second technique, streaming operators are scheduled using a task-parallel approach to boost performance for skewed data distributions. The experimental results for data-parallel streaming pipelines in this thesis demonstrate that the prototype's scheduling model achieves performance improvements in skewed scenarios without exhibiting any significant performance loss under uniform distributions.
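The task-parallel, skew-aware scheduling idea can be illustrated in a few lines of Python (the thesis prototype itself is written in Rust): splitting hot keys into smaller tasks keeps workers evenly loaded under skew. The chunk size and hot-key threshold below are arbitrary assumptions.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def process(key, events):
    return key, sum(events)  # stand-in for an operator's per-key work

stream = [("a", 1)] * 900 + [("b", 1)] * 50 + [("c", 1)] * 50  # skewed keys
by_key = {}
for k, v in stream:
    by_key.setdefault(k, []).append(v)

# Split hot keys into several smaller tasks instead of serializing all
# of a hot key's work behind a single operator instance.
tasks = []
for k, evs in by_key.items():
    chunk = max(1, len(evs) // 4) if len(evs) > 100 else len(evs)
    tasks += [(k, evs[i:i + chunk]) for i in range(0, len(evs), chunk)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = pool.map(lambda t: process(*t), tasks)

totals = Counter()
for k, s in partials:
    totals[k] += s
print(totals)  # Counter({'a': 900, 'b': 50, 'c': 50})
```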
