About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Condicionantes do uso efetivo de big data e business analytics em organizações privadas: atitudes, aptidão e resultados / Conditioning factors for the effective use of big data and business analytics in private organizations: attitudes, aptitude, and results

SANTOS, Ijon Augusto Borges dos 31 May 2016 (has links)
This dissertation seeks to explain the factors that condition the effective adoption of Big Data and Business Analytics by private organizations in Pernambuco, in terms of attitudes, aptitude, and results. To that end, a theoretical-conceptual overview is assembled on the growth of data traffic in the age of the Digital Revolution and on the willingness of organizations to appropriate the corresponding information and communication technologies that are transforming society's modus faciendi and modus pensandi. Two foundational theories stand out in the research corpus: the Cognitive Mediation Networks Theory and Structuration Theory (the basis of the Structurational Model of Technology). Both are explored around the question of the technology-use duality, in which living with technological artifacts, in interaction with human action, starts a process of mutual influence between these elements and constitutes a new mode of mediation called Hyperculture. Using a quantitative research method, these constructs are related to one another and investigated in 183 strategic leaders from Pernambuco, who are also compared with equivalent individuals of other origins and nationalities by means of a specially prepared questionnaire. The results indicate the companies' level of readiness on this topic and whether it relates to success or failure, considering the levels of hyperculture, analytical capability, and the information and communication technology conditions existing in the companies. The study closes by outlining possible further developments, implications, and applications of the concepts introduced.
2

Forecasting Large-scale Time Series Data

Hartmann, Claudio 03 December 2018 (has links)
The forecasting of time series data is an integral component of management, planning, and decision making in many domains. The prediction of electricity demand and supply in the energy domain, or of sales figures in market research, are just two of the many application scenarios that require thorough predictions. Many of these domains have in common that they are influenced by the Big Data trend, which also affects time series forecasting. Data sets consist of thousands of temporally fine-grained time series and have to be predicted in reasonable time. The time series may suffer from noisy behavior and missing values, which makes modeling them especially hard; nonetheless, accurate predictions are required. Furthermore, data sets from different domains exhibit various characteristics, so forecast techniques have to be flexible and adaptable to these characteristics. Long-established forecast techniques like ARIMA and Exponential Smoothing do not fulfill these new requirements. Most traditional models represent only one individual time series. This makes the prediction of thousands of time series very time consuming, as an equally large number of models has to be created. Furthermore, these models do not incorporate additional data sources and are therefore not capable of compensating for missing measurements or noisy behavior of individual time series. In this thesis, we introduce CSAR (Cross-Sectional AutoRegression Model), a new forecast technique designed to address the new requirements of forecasting large-scale time series data. It is based on the novel concept of cross-sectional forecasting, which assumes that time series from the same domain follow a similar behavior, and represents many time series with one common model. CSAR combines this new approach with the modeling concept of ARIMA to make the model adaptable to the various properties of data sets from different domains. Furthermore, we introduce auto.CSAR, which helps to configure the model and to choose the right model components for a specific data set and forecast task. With CSAR, we present a new forecast technique that is suited to the prediction of large-scale time series data. By representing many time series with one model, large data sets can be predicted in a short time. Furthermore, using data from many time series in one model helps to compensate for missing values and noisy behavior of individual series. The evaluation on three real-world data sets shows that CSAR outperforms long-established forecast techniques in accuracy and execution time. Finally, with auto.CSAR, we provide a way to apply CSAR to new data sets without requiring the user to have extensive knowledge of our new forecast technique and its configuration.
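The central idea named in this abstract, representing many similar time series with one shared autoregressive model instead of one model per series, can be illustrated with a small sketch. The following is not the CSAR implementation from the thesis, only a minimal pooled-AR example under assumed lag settings and synthetic data; all names and parameters are illustrative.

```python
# Minimal sketch of the cross-sectional idea: several similar time series are
# pooled into one shared autoregressive model instead of fitting one model each.
# NOT the CSAR model from the thesis; lag order, data, and names are assumptions.
import numpy as np

def make_lagged(series, p):
    """Return (X, y) where each row of X holds the p previous values of y."""
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    y = series[p:]
    return X, y

def fit_pooled_ar(series_list, p=3):
    """Fit one least-squares AR(p) model on the stacked lags of all series."""
    Xs, ys = zip(*(make_lagged(s, p) for s in series_list))
    X = np.vstack(Xs)
    y = np.concatenate(ys)
    X = np.column_stack([np.ones(len(X)), X])   # intercept column
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def forecast_next(series, coeffs, p=3):
    """One-step-ahead forecast for a single series using the shared model."""
    lags = np.concatenate([[1.0], series[-p:]])
    return float(lags @ coeffs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(200)
    # Three series that follow a similar seasonal pattern plus individual noise.
    group = [np.sin(2 * np.pi * t / 24) + 0.1 * rng.standard_normal(200) for _ in range(3)]
    shared = fit_pooled_ar(group, p=3)
    print([round(forecast_next(s, shared, p=3), 3) for s in group])
```

Because every series contributes rows to the same design matrix, gaps or noise in one series are partly compensated by the others, which is the motivation the abstract gives for the cross-sectional approach.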
3

Spatial Multimedia Data Visualization

JAMONNAK, SUPHANUT 30 November 2021 (has links)
No description available.
4

Learning lost temporal fuzzy association rules

Matthews, Stephen January 2012 (has links)
Fuzzy association rule mining discovers patterns in transactions, such as shopping baskets in a supermarket or Web page accesses by a visitor to a Web site. Temporal patterns can be present in fuzzy association rules because the underlying process generating the data can be dynamic. However, existing solutions may not discover all interesting patterns because of a previously unrecognised problem that is revealed in this thesis: the contextual meaning of fuzzy association rules changes because of the dynamic nature of the data, so a static fuzzy representation and a traditional search method are inadequate. The Genetic Iterative Temporal Fuzzy Association Rule Mining (GITFARM) framework solves the problem by utilising flexible fuzzy representations from a fuzzy rule-based system (FRBS). The combined temporal, fuzzy, and itemset space is searched simultaneously with a genetic algorithm (GA) to overcome the problem. The framework transforms the dataset into a graph so that it can be searched efficiently. The choice of model for the fuzzy representation provides a trade-off between an approximate and a descriptive model. A method for verifying the solution to the hypothesised problem is presented. The proposed GA-based solution was compared with a traditional approach that uses an exhaustive search method, and it was shown that the GA-based solution discovered rules that the traditional approach did not. This shows that simultaneously searching for rules and membership functions with a GA is a suitable solution for mining temporal fuzzy association rules. So, in practice, more knowledge can be discovered for making well-informed decisions, knowledge that would otherwise be lost with a traditional approach.
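As a rough illustration of the joint search described above, the sketch below encodes a candidate temporal fuzzy rule as a temporal window, a triangular membership function, and an item pair, and evolves a population of such rules with a simple genetic algorithm. This is a toy reading of the idea, not the GITFARM framework: the chromosome layout, the fitness measure (fuzzy support over the window), the operators, and the transaction format are all assumptions made for the example.

```python
# Toy sketch of jointly searching the temporal, fuzzy, and itemset dimensions with
# a genetic algorithm. Not the GITFARM framework; all choices below are assumptions.
# Transactions are assumed to look like {"period": 3, "qty": {"bread": 2, "milk": 1}}.
import random

ITEMS = ["bread", "milk", "beer", "chips"]

def random_rule(n_periods):
    """A candidate rule: temporal window, triangular membership function, item pair."""
    start = random.randrange(n_periods)
    end = random.randrange(start, n_periods)
    a = random.uniform(0, 5)
    b = a + random.uniform(0.1, 5)
    c = b + random.uniform(0.1, 5)
    lhs, rhs = random.sample(ITEMS, 2)
    return {"start": start, "end": end, "mf": (a, b, c), "lhs": lhs, "rhs": rhs}

def membership(x, mf):
    """Triangular membership degree of a quantity x."""
    a, b, c = mf
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_support(rule, transactions):
    """Average joint membership of the two items inside the rule's temporal window."""
    window = [t for t in transactions if rule["start"] <= t["period"] <= rule["end"]]
    if not window:
        return 0.0
    total = sum(min(membership(t["qty"].get(rule["lhs"], 0), rule["mf"]),
                    membership(t["qty"].get(rule["rhs"], 0), rule["mf"]))
                for t in window)
    return total / len(window)

def mutate(rule, n_periods):
    """Perturb the window endpoints and the membership function slightly."""
    child = dict(rule)
    child["start"] = min(n_periods - 1, max(0, rule["start"] + random.choice([-1, 0, 1])))
    child["end"] = min(n_periods - 1, max(child["start"], rule["end"] + random.choice([-1, 0, 1])))
    child["mf"] = tuple(sorted(v + random.gauss(0, 0.2) for v in rule["mf"]))
    return child

def evolve(transactions, n_periods, pop_size=30, generations=50):
    """Keep the fitter half of the population and refill it with mutated copies."""
    pop = [random_rule(n_periods) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: fuzzy_support(r, transactions), reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors), n_periods) for _ in survivors]
    return max(pop, key=lambda r: fuzzy_support(r, transactions))
```

The point of evolving window, membership function, and items together, rather than fixing the fuzzy representation up front, mirrors the abstract's argument that a static representation misses rules whose contextual meaning shifts over time.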
5

Big Data usage in the Maritime industry : A Qualitative Study for the use of Port State Control (PSC) inspection data by shipping professionals

Ampatzidis, Dimitrios January 2021 (has links)
During their calls at ports, vessels may be inspected by the local Port State Control (PSC) authorities regarding their implementation of International Maritime Organization guidelines for safety and security. This qualitative study focuses on how shipping professionals understand and use Big Data in the PSC inspection databases, what characteristics they recognize these data should have, what value they attach to those big data, and how they use them to support the decision-making process within their organizations. The study conducted interviews with shipping professionals, collected their perspectives, and analyzed their statements with thematic analysis to reach its outcome. Many researchers have discussed Big Data characteristics and the value an organization or a researcher can derive from Big Data and analytics; however, there is no universally accepted theory regarding Big Data characteristics and the value for database users. The research concluded that Big Data from PSC inspection procedures provides valid and helpful information that broadens professionals' understanding of inspection control and safety needs; through this, it is possible to upscale their internal operations and decision-making procedures, as long as these data are characterized by volume, velocity, veracity, and complexity.
6

Data Science and Analytics in Industrial Maintenance: Selection, Evaluation, and Application of Data-Driven Methods

Zschech, Patrick 02 October 2020 (has links)
Data-driven maintenance bears the potential to realize various benefits based on multifaceted data assets generated in increasingly digitized industrial environments. By taking advantage of modern methods and technologies from the field of data science and analytics (DSA), it is possible, for example, to gain a better understanding of complex technical processes and to anticipate impending machine faults and failures at an early stage. However, successful implementation of DSA projects requires multidisciplinary expertise, which can rarely be covered by individual employees or single units within an organization. This expertise covers, for example, a solid understanding of the domain, analytical method and modeling skills, experience in dealing with different source systems and data structures, and the ability to transfer suitable solution approaches into information systems. Against this background, various approaches have emerged in recent years to make the implementation of DSA projects more accessible to broader user groups. These include structured procedure models, systematization and modeling frameworks, domain-specific benchmark studies to illustrate best practices, standardized DSA software solutions, and intelligent assistance systems. The present thesis ties in with previous efforts and provides further contributions for their continuation. More specifically, it aims to create supportive artifacts for the selection, evaluation, and application of data-driven methods in the field of industrial maintenance. For this purpose, the thesis covers four artifacts, which were developed in several publications. These artifacts include (i) a comprehensive systematization framework for the description of central properties of recurring data analysis problems in the field of industrial maintenance, (ii) a text-based assistance system that offers advice regarding the most suitable class of analysis methods based on natural language and domain-specific problem descriptions, (iii) a taxonomic evaluation framework for the systematic assessment of data-driven methods under varying conditions, and (iv) a novel solution approach for the development of prognostic decision models in cases of missing label information. Individual research objectives guide the construction of the artifacts as part of a systematic research design. The findings are presented in a structured manner by summarizing the results of the corresponding publications. Moreover, the connections between the developed artifacts as well as related work are discussed. Subsequently, a critical reflection is offered concerning the generalization and transferability of the achieved results. 
Thus, the thesis not only provides a contribution based on the proposed artifacts; it also paves the way for future opportunities, for which a detailed research agenda is outlined.

Contents: List of Figures; List of Tables; List of Abbreviations; 1 Introduction (1.1 Motivation, 1.2 Conceptual Background, 1.3 Related Work, 1.4 Research Design, 1.5 Structure of the Thesis); 2 Systematization of the Field (2.1 The Current State of Research, 2.2 Systematization Framework, 2.3 Exemplary Framework Application); 3 Intelligent Assistance System for Automated Method Selection (3.1 Elicitation of Requirements, 3.2 Design Principles and Design Features, 3.3 Prototypical Instantiation and Evaluation); 4 Taxonomic Framework for Method Evaluation (4.1 Survey of Prognostic Solutions, 4.2 Taxonomic Evaluation Framework, 4.3 Exemplary Framework Application); 5 Method Application Under Industrial Conditions (5.1 Conceptualization of a Solution Approach, 5.2 Prototypical Implementation and Evaluation); 6 Discussion of the Results (6.1 Connections Between Developed Artifacts and Related Work, 6.2 Generalization and Transferability of the Results); 7 Concluding Remarks; Bibliography; Appendix I: Implementation Details; Appendix II: List of Publications (A Publication P1: Focus Area Systematization; B Publication P2: Focus Area Method Selection; C Publication P3: Focus Area Method Selection; D Publication P4: Focus Area Method Evaluation; E Publication P5: Focus Area Method Application)
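One of the artifacts described above, the text-based assistance system that suggests a class of analysis methods from a natural-language, domain-specific problem description, can be illustrated with a deliberately small sketch. The example below matches a problem description against a handful of labeled example descriptions using bag-of-words cosine similarity; the example descriptions, method classes, and similarity measure are assumptions for illustration and do not reproduce the system developed in the thesis.

```python
# Hedged sketch of a text-based method-selection assistant: a maintenance problem
# described in natural language is matched to the closest labeled example and the
# corresponding method class is suggested. Examples and classes are invented here.
import math
from collections import Counter

EXAMPLES = {
    "remaining useful life prediction of a component from degradation signals": "prognostics / regression",
    "detect anomalous sensor readings that indicate an impending machine fault": "anomaly detection",
    "classify whether a machine will fail within the next maintenance window": "classification",
    "group similar failure events to discover recurring fault patterns": "clustering",
}

def bag_of_words(text):
    """Lower-cased word counts of a description."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def suggest_method_class(problem_description):
    """Return the method class of the most similar example description."""
    query = bag_of_words(problem_description)
    scored = [(cosine(query, bag_of_words(desc)), label) for desc, label in EXAMPLES.items()]
    return max(scored)[1]

print(suggest_method_class("predict remaining useful life of a pump from vibration signals"))
```

A production system of the kind the thesis describes would rest on an elicited requirement base and richer language understanding; the sketch only shows why a small set of well-described reference problems already allows a useful first recommendation.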
7

Analyzing Small Businesses' Adoption of Big Data Security Analytics

Mathias, Henry 01 January 2019 (has links)
Despite the increased cost of data breaches due to advanced, persistent threats from malicious sources, the adoption of big data security analytics among U.S. small businesses has been slow. Anchored in diffusion of innovation theory, the purpose of this correlational study was to examine ways to increase the adoption of big data security analytics among small businesses in the United States by examining the relationship between small business leaders' perceptions of big data security analytics and their adoption of it. The research questions were developed to determine how to increase adoption, which can be measured as a function of the user's perceived attributes of innovation, represented by the independent variables: relative advantage, compatibility, complexity, observability, and trialability. The study included a cross-sectional survey distributed online to a convenience sample of 165 small businesses. Pearson correlations and multiple linear regression were used to statistically examine relationships between variables. There were no significant positive correlations between relative advantage, compatibility, and the dependent variable, adoption; however, there were significant negative correlations between complexity, trialability, and adoption, and a significant positive correlation between observability and adoption. The implications for positive social change include an increase in knowledge, skill sets, and jobs for employees and increased confidentiality, integrity, and availability of systems and data for small businesses. Social benefits include improved decision making for small businesses and an increase in secure transactions between systems by detecting and eliminating advanced, persistent threats.
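The statistical procedure named in the abstract, Pearson correlations followed by a multiple linear regression of adoption on the five perceived attributes of innovation, follows a standard pattern. The sketch below runs that pattern on synthetic Likert-style data; the column names, coefficients, and generated responses are assumptions and do not reproduce the study's survey instrument or results.

```python
# Illustrative run of the analysis named in the abstract: Pearson correlations and
# a multiple linear regression of adoption on five perceived innovation attributes.
# The data are synthetic stand-ins; only the procedure mirrors the abstract.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 165  # sample size mentioned in the abstract
predictors = ["relative_advantage", "compatibility", "complexity", "observability", "trialability"]

# Synthetic 7-point Likert-style responses standing in for the survey data.
df = pd.DataFrame({p: rng.integers(1, 8, n) for p in predictors})
df["adoption"] = (
    0.4 * df["observability"] - 0.3 * df["complexity"] - 0.2 * df["trialability"]
    + rng.normal(0, 1, n)
)

# Pearson correlations of each predictor with adoption.
print(df.corr(method="pearson")["adoption"])

# Multiple linear regression: adoption ~ the five perceived attributes.
X = sm.add_constant(df[predictors])
model = sm.OLS(df["adoption"], X).fit()
print(model.summary())
```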
8

Increasing analytics maturity by establishing analytics networks and spreading the use of Lean Six Sigma : A case study of a global B2B company

SVANTESSON ROMANOV, VIKTOR, GULLQVIST, IDA January 2016 (has links)
Organisations with high-performing data and analytics capabilities are more successful than organisations with lower analytics maturity. It is therefore necessary for organisations to assess their analytics capabilities and needs in order to identify and evaluate areas of improvement that need to be addressed. This was the purpose of this case study, conducted on a region of a global B2B organisation that has a centrally established analytics function at corporate level and wants the use of analytics to be integrated into more of the region's processes, with analytical capabilities and resources used as efficiently as possible. To fulfil the thesis purpose, empirical data was collected through qualitative interviews with employees at corporate level, more quantitative interviews with regional employees, and a questionnaire issued to regional employees. This was complemented with a thorough literature study, which provided the analytics maturity models used for identifying the region's current capabilities on a holistic level, as well as material on analytics setups, Lean Six Sigma, and Knowledge Management. Results show a relatively low analytics maturity due to, for example, insufficient support from management, unclear responsibility for analytics, data not being used correctly or requested enough, and various issues with competence, tools, and sources. This study contributes to analytics research by identifying that analytics maturity models available free of charge are only good for inspiration, not full use, when applied in a large company. Furthermore, the study shows that complexities arise when a central analytics function has low analytics maturity while other parts of the company face analytics problems, with no indication of who should act or on what. The study therefore contributes a proposition for companies wanting to increase their analytics maturity: this can be facilitated by establishing networks for analytics. Combining literature and empirics shows that networks enable investigation of the analytics situation while at the same time enabling increased sharing, collaboration, innovation, coordination, and dissemination. By making Lean Six Sigma a central part of the network, analytics will be used more and better, while at the same time increasing the success rate of change and improvement projects.
9

Big and Small Data for Value Creation and Delivery: Case for Manufacturing Firms

Stout, Blaine David, PhD January 2018 (has links)
No description available.
10

Stora datamängders revolution : en ny era av digital marknadsföring / The big data revolution : a new era of digital marketing

Rosander, Felix, Stiernstedt, Isabelle January 2023 (has links)
This qualitative study explores the impact of big data and predictive analytics on digital marketing strategies in data-driven businesses. Through in-depth interviews with digital marketers and data analysts in different industries, the study provides insight into the respondents' personal perceptions of how these digital tools affect their strategies and business operations in the IT-reliant work system. The use of big data and predictive analytics is considered a critical means to collect and analyze customer data and behavior more effectively, because it gives rise to the ability to predict customer trends and adapt the business's marketing strategies in real time. Today, companies' ability to effectively collect and analyze data plays an increasingly crucial role, not only in developing marketing strategies but also in achieving significant competitive advantage. The study suggests that businesses that effectively integrate big data and predictive analytics into their strategies gain a better understanding of their customer segments through increased insight and can thus better target and adapt their marketing campaigns to those segments. The study also highlights challenges arising from factors such as data quality, optimization, ethics, and other aspects that require accuracy and the necessary skills. The future of digital marketing is increasingly moving towards data-driven approaches, with a greater emphasis on using analytical methods to make decisions. This development demonstrates a shift from traditional marketing strategies to the application of a data-driven approach. In light of this, it is increasingly important for companies to adopt ways of working that enhance their ability to quickly adapt and apply technological tools to exploit this potential. In conclusion, this qualitative study highlights the importance of integrating big data and predictive analytics into business and marketing strategies. This proves to have a great impact not only on improving the organization's marketing but also on strengthening the overall business operations, which underlines the need to continuously develop data analytics skills and strategies to understand how they can transform customer relationships and business performance. This perspective is based on interpretations of interviews with marketers in IT-reliant work systems and should be seen as insights specific to the cases studied rather than broad generalizations.
