71

A Workload-aware Resource Management and Scheduling System for Big Data Analysis

Xu, Luna 05 February 2019 (has links)
The big data era has driven the need for data analysis in every aspect of our daily lives. With the rapid growth of data size and the complexity of data analysis models, modern big data analytics applications face the challenge of providing timely results, often with limited resources. This demand drives the growth of new hardware resources including GPUs and FPGAs, as well as storage devices such as SSDs and NVMs. It is challenging to manage the resources available in a cost-restricted environment to best serve applications with different characteristics. Extant approaches are agnostic to such heterogeneity in both the underlying resources and the workloads, and require user knowledge and manual configuration for best performance. In this dissertation, we design and implement a series of novel techniques, algorithms, and frameworks to realize workload-aware resource management and scheduling. We demonstrate our techniques for efficient management of memory resources for in-memory data analytics platforms and of processing resources for compute-intensive machine learning applications, and finally we design and develop a workload- and heterogeneity-aware scheduler for general big data platforms. The dissertation demonstrates that designing an effective resource manager requires efforts on both the application and the system side. The presented approach joins the efforts on both sides to provide a holistic heterogeneity-aware resource management and scheduling system. We are able to avoid task failures due to resource unavailability through workload-aware resource management, and to improve the performance of data processing frameworks by carefully scheduling tasks according to task characteristics and the utilization and availability of the resources. / Ph. D. / Clusters of multiple computers connected through the internet are often deployed in industry for large-scale data processing or computation that cannot be handled by standalone computers.
In such a cluster, resources such as CPU, memory, and disks are integrated to work together. It is important to manage the pool of resources in a cluster so that they work together efficiently and provide better performance for the workloads running on top. This role is taken by a software component in the middle layer called the resource manager. The resource manager coordinates the resources in the computers and schedules tasks on them for computation. This dissertation reveals that current resource managers often partition resources statically and hence cannot capture the dynamic resource needs of workloads, nor the heterogeneous configurations of the underlying resources. For example, some computers in a cluster might be older than others, with slower CPUs, less memory, and so on. Workloads can also show different resource needs: watching YouTube requires a lot of network bandwidth, while playing games demands powerful GPUs. To this end, the dissertation proposes novel approaches to resource management that capture the heterogeneity of resources and the dynamic needs of workloads, and based on these it achieves efficient resource management and schedules the right task to the right resource.
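The idea of matching tasks to heterogeneous nodes can be sketched in a few lines. This is a generic illustration of heterogeneity-aware placement, not the dissertation's actual scheduler; all names (`best_fit_node`, `schedule`, the node and task dictionaries) are hypothetical.

```python
# Hypothetical sketch of heterogeneity-aware task placement: each task is
# matched to the node whose remaining resources best fit its demands,
# avoiding placements that would exceed capacity (and thus task failure).

def best_fit_node(task, nodes):
    """Return the feasible node with the tightest leftover capacity."""
    feasible = [n for n in nodes
                if all(n["free"][r] >= need for r, need in task["needs"].items())]
    if not feasible:
        return None  # defer the task instead of letting it fail
    # prefer the tightest fit to keep large slots free for demanding tasks
    return min(feasible,
               key=lambda n: sum(n["free"][r] - need
                                 for r, need in task["needs"].items()))

def schedule(tasks, nodes):
    """Greedily place the most demanding tasks first."""
    placements = {}
    for task in sorted(tasks, key=lambda t: -sum(t["needs"].values())):
        node = best_fit_node(task, nodes)
        if node is not None:
            for r, need in task["needs"].items():
                node["free"][r] -= need
            placements[task["name"]] = node["name"]
    return placements

nodes = [{"name": "old", "free": {"cpu": 4, "mem": 8}},
         {"name": "new", "free": {"cpu": 16, "mem": 64}}]
tasks = [{"name": "ml-train", "needs": {"cpu": 12, "mem": 48}},
         {"name": "etl", "needs": {"cpu": 2, "mem": 4}}]
print(schedule(tasks, nodes))  # → {'ml-train': 'new', 'etl': 'old'}
```

The demanding training task lands on the newer machine while the light ETL task fills the older one, illustrating how awareness of both workload needs and node heterogeneity avoids infeasible placements.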
72

Modeling and Analysis of Non-Linear Dependencies using Copulas, with Applications to Machine Learning

Karra, Kiran 21 September 2018 (has links)
Many machine learning (ML) techniques rely on probability, random variables, and stochastic modeling. Although statistics pervades this field, there is a large disconnect between the copula modeling and the machine learning communities. Copulas are stochastic models that capture the full dependence structure between random variables and allow flexible modeling of multivariate joint distributions. Elidan was the first to recognize this disconnect, and introduced copula-based models to the ML community that demonstrated orders of magnitude better performance than non-copula-based models Elidan [2013]. However, the limitation of these models is that they are only applicable to continuous random variables, while real-world data is often naturally modeled jointly as continuous and discrete. This report details our work in bridging this gap of modeling and analyzing data that is jointly continuous and discrete using copulas. Our first research contribution details the modeling of jointly continuous and discrete random variables using the copula framework with Bayesian networks, termed Hybrid Copula Bayesian Networks (HCBN) [Karra and Mili, 2016], a continuation of Elidan's work on Copula Bayesian Networks Elidan [2010]. In this work, we extend the theorems proved by Nešlehová [2007] from bivariate to multivariate copulas with discrete and continuous marginal distributions. Using the multivariate copula with discrete and continuous marginal distributions as a theoretical basis, we construct an HCBN that can model all possible permutations of discrete and continuous random variables for parent and child nodes, unlike the popular conditional linear Gaussian network model. Finally, we demonstrate on numerous synthetic datasets and a real-life dataset that our HCBN compares favorably, from a modeling and flexibility viewpoint, to other hybrid models including the conditional linear Gaussian and the mixture of truncated exponentials models.
Our second research contribution then deals with the analysis side, and discusses how one may use copulas for exploratory data analysis. To this end, we introduce a nonparametric copula-based index for detecting the strength and monotonicity structure of linear and nonlinear statistical dependence between pairs of random variables or stochastic signals. Our index, termed the Copula Index for Detecting Dependence and Monotonicity (CIM), satisfies several desirable properties of measures of association, including Rényi's properties, the data processing inequality (DPI), and consequently self-equitability. Synthetic data simulations reveal that the statistical power of CIM compares favorably to other state-of-the-art measures of association that are proven to satisfy the DPI. Simulation results with real-world data reveal CIM's unique ability to detect the monotonicity structure among stochastic signals and to find interesting dependencies in large datasets. Additionally, simulations show that CIM performs favorably relative to estimators of mutual information when discovering Markov network structure. Our third research contribution deals with how to assess an estimator's performance in the scenario where multiple estimates of the strength of association between random variables need to be rank-ordered. More specifically, we introduce a new property of estimators of the strength of statistical association, which helps characterize how well an estimator will perform in scenarios where dependencies between continuous and discrete random variables need to be rank-ordered. The new property, termed the estimator response curve, is easily computable and provides a marginal-distribution-agnostic way to assess an estimator's performance. It overcomes notable drawbacks of current metrics of assessment, including statistical power, bias, and consistency.
We utilize the estimator response curve to test various measures of the strength of association that satisfy the data processing inequality (DPI), and show that the CIM estimator's performance compares favorably to the kNN, vME, AP, and HMI estimators of mutual information. The estimators identified as suboptimal according to the estimator response curve perform worse than the more optimal estimators when tested with real-world data from four different areas of science, all with varying dimensionalities and sizes. / Ph. D. / Many machine learning (ML) techniques rely on probability, random variables, and stochastic modeling. Although statistics pervades this field, many of the traditional machine learning techniques rely on linear statistical techniques and models. For example, the correlation coefficient, a widely used construct in modern data analysis, is only a measure of linear dependence and cannot fully capture non-linear interactions. In this dissertation, we aim to address some of these gaps, and how they affect machine learning performance, using the mathematical construct of copulas. Our first contribution deals with accurate probabilistic modeling of real-world data, where the underlying data is both continuous and discrete. We show that even though the copula construct has some limitations with respect to discrete data, it is still amenable to modeling large real-world datasets probabilistically. Our second contribution deals with the analysis of non-linear datasets. Here, we develop a new measure of statistical association that can handle discrete, continuous, or combinations of such random variables that are related by any general association pattern. We show that our new metric satisfies several desirable properties and compare its performance to other measures of statistical association.
Our final contribution attempts to provide a framework for understanding how an estimator of statistical association will affect end-to-end machine learning performance. Here, we develop a new way to characterize the performance of an estimator of statistical association, termed the estimator response curve. We then show that the estimator response curve can help predict how well an estimator performs in algorithms that require statistical associations to be rank-ordered.
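The core copula idea the abstract relies on, that dependence can be studied separately from the marginal distributions, can be illustrated with the rank transform and a rank-based measure of association. This is a generic sketch of rank-based dependence estimation, not the CIM estimator itself; the function names are hypothetical.

```python
# The rank (copula) transform underlying rank-based dependence measures:
# mapping each sample to its normalized rank discards the marginals and
# keeps only the dependence structure. Kendall's tau, computed on the raw
# data, is invariant to monotone transforms for the same reason.

def pseudo_observations(xs):
    """Map samples to rank / (n + 1), the empirical copula scale."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for r, i in enumerate(order, start=1):
        ranks[i] = r / (len(xs) + 1)
    return ranks

def kendall_tau(xs, ys):
    """Fraction of concordant minus discordant pairs (no tie correction)."""
    n = len(xs)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

x = [0.1, 0.4, 0.9, 1.6, 2.5]      # x grows quadratically
y = [1.0, 8.0, 27.0, 64.0, 125.0]  # y grows cubically: nonlinear but monotone
print(kendall_tau(x, y))           # 1.0: perfect monotone dependence
```

A linear correlation coefficient on `x` and `y` would be below 1 despite the deterministic relationship; the rank-based measure reports perfect monotone dependence, which is the kind of behavior the copula framework generalizes.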
73

Das Industrial Internet – Engineering Prozesse und IT-Lösungen

Eigner, Martin 10 December 2016 (has links) (PDF)
Das Engineering unterliegt derzeit einem massiven Wandel: smarte Systeme und Technologien, cybertronische Produkte, Big Data und Cloud Computing im Kontext des Internets der Dinge und Dienste sowie Industrie 4.0. Der amerikanische Ansatz des „Industrial Internet“ beschreibt diese (R)evolution jedoch weitaus besser als der eingeschränkte und stark deutsch geprägte Begriff Industrie 4.0. Industrial Internet berücksichtigt den gesamten Produktlebenszyklus und adressiert sowohl Konsum- und Investitionsgüter als auch Dienstleistungen. Dieser Beitrag beleuchtet das zukunftsträchtige Trendthema und bietet fundierte Einblicke in die vernetzte Engineering-Welt von morgen, in ihre Konstruktionsmethoden und -prozesse sowie in die IT-Lösungen. / Engineering is currently undergoing a massive transformation: smart systems and technologies, cybertronic products, big data, and cloud computing in the context of the Internet of Things and Services, as well as Industrie 4.0. The American notion of the "Industrial Internet", however, describes this (r)evolution far better than the narrow and strongly German-influenced term Industrie 4.0. The Industrial Internet considers the entire product lifecycle and addresses consumer and capital goods as well as services. This contribution examines this forward-looking trend topic and offers well-founded insights into the networked engineering world of tomorrow, its design methods and processes, and the supporting IT solutions.
74

The use of Big Data Analytics to protect Critical Information Infrastructures from Cyber-attacks

Oseku-Afful, Thomas January 2016 (has links)
Unfortunately, cyber-attacks, which are the consequence of our increasing dependence on digital technology, are a phenomenon that we have to live with today. As technology has become more advanced and complex, so have the types of malware used in these cyber-attacks. Currently, targeted cyber-attacks directed at CIIs such as financial institutions and telecom companies are on the rise. A particular group of malware known as APTs, which are used for targeted attacks, is very difficult to detect and prevent due to its sophisticated and stealthy nature. This malware is able to attack and wreak havoc in the targeted system within a matter of seconds, which is very worrying because traditional cyber security defence systems cannot handle these attacks. The solution, as proposed by some in the industry, is the use of BDA systems. However, whilst it appears that BDA has achieved greater success at large companies, little is known about success at smaller companies. Also, there is a scarcity of research addressing how BDA is deployed for the purpose of detecting and preventing cyber-attacks on CII. This research examines and discusses the effectiveness of the use of BDA for detecting cyber-attacks and also describes how such a system is deployed. To establish the effectiveness of using BDA, a survey by questionnaire was conducted. The survey targeted large corporations that were likely to use such systems for cyber security. The research concludes that a BDA system is indeed a powerful and effective tool, and currently the best method for protecting CIIs against the range of stealthy cyber-attacks. Also, a description of how such a system is deployed is abstracted into a model of meaningful practice.
75

Digitalisering av byggsektorn / Digitalization of the construction sector

Al Sadi, Sarmad, Hododi, Dylan January 2018 (has links)
Syfte: Den digitala utvecklingen inom byggproduktionssektorn ligger efter jämfört med många andra branscher och har fått ryktet om att vara rent av konservativ. Att säga att utvecklingen står stilla stämmer inte då majoriteten av företagen arbetar aktivt för en mer digitaliserad sektor. Trots detta domineras produktionsplatserna av pappersdokument, icke autonoma system och själva arbetet utförs mer eller mindre på samma sätt som det gjort i flera decennier. Trots att flertalet digitala verktyg och implementeringsmodeller finns tillgängliga möter de inom byggproduktionssektorn en hel del motstånd. Denna rapport granskar de digitala innovationer som är på uppgång och som kan göra betydande nytta inom byggproduktion. Metod: Via sökmotorer på högskolans databas, samt internet, gjordes insamling av rådata som sedan skulle analyseras och ligga till grund för kvalitativ insamlingsmetod. Litteraturstudien var grundpelaren som betingade de semistrukturerade intervjuerna, som i sin tur möjliggjorde en jämförelse mellan det teoretiska ramverket och intervjuresultat. Resultat: En klar majoritet av de intervjuade aktörerna ansåg att 3D-skrivaren någon gång i framtiden kommer att inta byggsektorn. En implementering av 3D-skrivaren skulle medföra kortare produktionstider, reducerade produktionskostnader, eliminering av spill och mindre arbetskostnader. Autonoma systems inträde i byggproduktionssektorn kommer förmodligen inte ske inom en snar framtid då olika byggprojekt skiljer sig åt väldigt mycket. Hade byggandet blivit mer monotont skulle det underlätta väldigt mycket, men i dagsläget är sällan ett projekt det andra likt. Big data är en innovation som skulle kunna revolutionera byggbranschen på flera håll. Respondenterna förstod dock inte begreppets innebörd vilket tyder på att det inte satsas någonting på Big data inom produktionssektorn.
Möjligheterna för Big data är många och det kan bland annat användas för effektivare kommunikation, effektivisering av produktionsplatsen, mer detaljerad koll på olika maskiner och mer kontrollerade materialleveranser. Konsekvenser: För en allmänt lyckad implementering av 3D-skrivaren krävs det att även de mindre företagen kan införskaffa apparaten. De större företagen riskerar annars att konkurrera ut de mindre företagen. Då betong i allmänhet är ett material med förhållandevis hög miljöpåverkan medför detta konsekvenser för miljön som redan befinner sig i en utsatt position. Då det arbetas väldigt aktivt med att förbättra omvärldens miljöpåverkan är det därför viktigt att lägga fokus på att försöka producera mer miljövänlig betong. Begränsningar: Studien begränsades till ett fåtal svenska byggföretag i göteborgsregionen. I rapporten tas därför endast de fåtal respondenters åsikter med. / Purpose: The digital progress is slow within the construction sector compared to other types of businesses, and the sector has a reputation for being relatively conservative. To say that progress is standing still would be a mistake, since the majority of construction companies are working actively towards a more digital construction site. Even though these companies are working hard towards a more digital sector, the construction site is dominated by paper documents and non-autonomous systems, and the labor is done the same way as it has been done for decades. Even though multiple digital tools and implementation models are available, the construction sector shows a lot of resistance. This report examines the digital innovations that are on the rise and that could do significant good within the construction sector. Method: Through search engines on the university's database, as well as the internet, raw data was collected, which was then analyzed and served as the basis for a qualitative collection method.
The literature study was the foundation for the semi-structured interviews, which in turn enabled a comparison between the theoretical framework and the interview results. Findings: A vast majority of the interviewed participants believe that the 3D-printer will at some point in the future enter the construction sector. An implementation of the 3D-printer would bring shorter production times, reduced production costs, elimination of waste, and reduced labor costs. Autonomous systems will probably not be implemented on construction sites in the near future, since different construction projects are too diverse. If construction were more uniform it would help considerably, but today one project is rarely like the next. Big data is an innovation that could revolutionize the construction sector in many different ways. The respondents did not understand the meaning of the term, which suggests that it is not an innovation in focus for the time being. The possibilities of Big data are substantial: it could be used for more effective communication, a more efficient construction site, more detailed monitoring of machinery, and more controlled material deliveries. Implications: For a generally successful implementation of 3D-printers, they need to be affordable for smaller companies; otherwise the larger firms risk putting the smaller firms out of business. Since concrete is a material with a relatively large environmental impact, this may have consequences for an environment that is already in an exposed position. Since there is active global work on reducing environmental impact, the focus should be on producing more environmentally acceptable concrete for the 3D-printers. Limitations: The study was limited to a handful of Swedish construction companies within the Gothenburg region. In the report, only the opinions of these few respondents are taken into consideration.
76

Fatores determinantes para a adoção das governanças de dados e de informação no ambiente big data. / Determinant factors for the adoption of data information governances in the big data environment.

Furlan, Patrícia Kuzmenko 19 June 2018 (has links)
No ambiente big data, as organizações se preocupam em extrair valor dos dados e das informações com o intuito de obter vantagens competitivas. No entanto, são necessários esforços organizacionais com relação aos ativos de dados, incluindo a definição de responsabilidades com relação ao uso dos dados, a garantia da qualidade dos dados, dentre outros aspectos contemplados pelos modelos de governança de dados ou de informação. Deste modo, esta pesquisa investigou como as organizações podem adotar as governanças de dados ou de informação no ambiente big data e, para tanto, foram contemplados estudos de casos multisetoriais para identificar os fatores determinantes para a adoção das governanças de dados ou de informação no ambiente big data. Foram investigados os elementos e os conteúdos dos modelos de governança de dados ou de informação e analisados os aspectos dos modelos com relação à inteligência de negócios e ao big data analytics. Notou-se que as ações organizacionais com relação à governança de dados ou de informação são pouco consolidadas, mas conhecidas pelas organizações. Além disto, os modelos de governança de dados ou de informação são adotados por organizações com diferentes níveis de capacidades analíticas. Tais modelos contemplam a definição dos objetivos estratégicos da governança e domínios como o gerenciamento da qualidade dos dados ou das informações, o gerenciamento dos dados (em especial meta-dados), a transformação da mentalidade organizacional com relação aos dados e as informações e necessitam de competências de colaboração e comunicação dos stakeholders. 
Foram identificados oito fatores determinantes para a adoção das governanças de dados ou de informação no ambiente big data, os quais contemplam práticas estruturais, relacionais e operacionais do modelo de governança: 1 - Organizações grandes, globais, difusas, com estruturas descentralizadas de negócios e portfolio complexo de produtos ou serviços; 2 - Apontar um C-level, definir gerentes na estrutura e determinar data owners e data stewards; 3 - Estabelecer comitê de dados ou outros meios para reunir a alta cúpula e os principais líderes da organização; 4 - Atuação do departamento de TI nas atividades de gerenciamento de dados ou de informação, viabilizando e executando atividades operacionais com relação aos dados e as informações dentre as bases de dados e sistemas de informação; 5 - Atuar ativamente na transformação cultural da organização para data-driven; 6 - Promover a comunicação e a colaboração interna; desenvolver a comunicação com relação à eficácia das políticas e a necessidade de adequação dos stakeholders; 7 - Definir, gerenciar e controlar metadados; 8 - Definir os padrões, as exigências e o controle sobre a qualidade dos dados. A pesquisa oferece uma consolidação teórica relevante para o campo da governança de dados ou da informação, contemplando vasta lista de variáveis das literaturas de inteligência competitiva, governança de TI e governança de dados e de informação. Foi também possível expandir o modelo de governança de dados ou de informação englobando os domínios relativos à colaboração, à comunicação e à mudança cultural. Propõe-se uma expansão na conceituação geral dos termos governança de dados e governança de informação. / In the big data environment, organizations are concerned with extracting value from data and information in order to acquire competitive advantage. However, organizational efforts are required to organize data assets, determine responsibilities with regard to those assets, ensure data quality, and address other aspects.
Such activities are covered by data or information governance models. This research investigated how organizations can adopt data or information governance in the big data environment. To this end, multi-sectoral case studies were conducted to identify the determinant factors for the adoption of data or information governance in the big data environment. The research protocol encompassed the elements and contents of data or information governance models and those related to big data value extraction. It was noted that organizational approaches regarding data or information governance are poorly consolidated, but well known to organizations. In addition, data or information governance models are adopted by organizations with different levels of analytical capabilities. Those models include the definition of strategic objectives, and domains such as data or information quality management, data management (especially metadata), transformation of the organizational culture in relation to data and information, and collaboration and communication among stakeholders.
Eight determinant factors were identified for the adoption of data or information governance in the big data environment, covering structural, relational, and operational practices of the governance model: 1 - Large, global, and diffuse organizations with decentralized business structures and a complex portfolio of products or services; 2 - Appoint a C-level executive, define managers in the structure, and determine data owners and data stewards; 3 - Establish a data committee or other means to bring together the top leaders of the organization; 4 - Engage the IT department in data management activities, enabling and executing operational activities on data and information across databases and information systems; 5 - Actively engage in the cultural transformation of the organization into a data-driven one; 6 - Promote communication and internal collaboration; develop communication on the effectiveness of policies and the need for stakeholder adequacy; 7 - Define, manage, and control metadata; 8 - Define standards, requirements, and control over data quality. This research provides a relevant theoretical consolidation for the field of data or information governance, contemplating a vast list of research variables from the competitive intelligence, IT governance, and data and information governance literatures. It was also possible to expand the data or information governance model through the addition of domains such as collaboration, communication, and cultural transformation. The research also proposes an expansion of the general conceptualization of the terms data governance and information governance.
77

Ferramenta de programação e processamento para execução de aplicações com grandes quantidades de dados em ambientes distribuídos. / Programming and processing tool for execution of applications with large amounts of data in distributed environments.

Vasata, Darlon 03 September 2018 (has links)
A temática envolvendo o processamento de grandes quantidades de dados é um tema amplamente discutido nos tempos atuais, envolvendo seus desafios e aplicabilidade. Neste trabalho é proposta uma ferramenta de programação para desenvolvimento e um ambiente de execução para aplicações com grandes quantidades de dados. O uso da ferramenta visa obter melhor desempenho de aplicações neste cenário, explorando o uso de recursos físicos como múltiplas linhas de execução em processadores com diversos núcleos e a programação distribuída, que utiliza múltiplos computadores interligados por uma rede de comunicação, de forma que estes operam conjuntamente em uma mesma aplicação, dividindo entre tais máquinas sua carga de processamento. A ferramenta proposta consiste na utilização de blocos de programação, de forma que tais blocos sejam compostos por tarefas e sejam executados utilizando o modelo produtor-consumidor, seguindo um fluxo de execução definido. A utilização da ferramenta permite que a divisão das tarefas entre as máquinas seja transparente ao usuário. Com a ferramenta, diversas funcionalidades podem ser utilizadas, como o uso de ciclos no fluxo de execução ou o adiantamento de tarefas, utilizando a estratégia de processamento especulativo. Os resultados do trabalho foram comparados aos de duas outras ferramentas de processamento de grandes quantidades de dados, Hadoop e Spark. Os resultados indicam que o uso da ferramenta proporciona aumento no desempenho das aplicações, principalmente quando executada em clusters homogêneos. / The topic involving the processing of large amounts of data is a widely discussed subject currently, involving its challenges and applicability. This work proposes a programming tool for development and an execution environment for applications with large amounts of data.
The use of the tool aims to achieve better performance of applications in this scenario, exploring the use of physical resources such as multiple threads of execution in multi-core processors and distributed programming, which uses multiple computers interconnected by a communication network so that they operate jointly on the same application, dividing the processing load among the machines. The proposed tool consists of the use of programming blocks, such that these blocks are composed of tasks and are executed using the producer-consumer model, following a defined execution flow. The use of the tool allows the division of tasks between the machines to be transparent to the user. With the tool, several functionalities can be used, such as cycles in the execution flow or task advancing using the strategy of speculative processing. The results were compared with those of two other big data processing frameworks, Hadoop and Spark. These results indicate that the use of the tool provides an increase in the performance of the applications, especially when executed in homogeneous clusters.
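The producer-consumer execution of programming blocks described above can be sketched as a pipeline of stages connected by queues. This is a minimal single-machine illustration of the pattern, not the thesis's actual tool; all names (`block`, `run_pipeline`, `SENTINEL`) are hypothetical.

```python
# A minimal producer-consumer pipeline of "programming blocks" connected by
# FIFO queues: each block consumes items from its input queue, applies its
# task, and produces results for the next block, all stages running
# concurrently.
import queue
import threading

SENTINEL = object()  # signals end of stream to downstream blocks

def block(task, inbox, outbox):
    """Run `task` on every item from inbox, emitting results to outbox."""
    while True:
        item = inbox.get()
        if item is SENTINEL:
            outbox.put(SENTINEL)  # propagate shutdown downstream
            break
        outbox.put(task(item))

def run_pipeline(data, tasks):
    """Chain one thread per task with a queue between consecutive stages."""
    queues = [queue.Queue() for _ in range(len(tasks) + 1)]
    workers = [threading.Thread(target=block, args=(t, queues[i], queues[i + 1]))
               for i, t in enumerate(tasks)]
    for w in workers:
        w.start()
    for item in data:          # produce the input stream
        queues[0].put(item)
    queues[0].put(SENTINEL)
    results = []
    while True:                # consume the final stage's output
        out = queues[-1].get()
        if out is SENTINEL:
            break
        results.append(out)
    for w in workers:
        w.join()
    return results

print(run_pipeline([1, 2, 3], [lambda x: x * 2, lambda x: x + 1]))  # [3, 5, 7]
```

Because each stage has its own thread, a downstream block can start consuming while upstream blocks are still producing, which is the overlap that makes the model attractive for multi-core and distributed execution.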
78

Uma análise comparativa de ambientes para Big Data: Apache Spark e HPAT / A comparative analysis of Big Data environments: Apache Spark and HPAT

Carvalho, Rafael Aquino de 16 April 2018 (has links)
Este trabalho compara o desempenho e a estabilidade de dois arcabouços para o processamento de Big Data: Apache Spark e High Performance Analytics Toolkit (HPAT). A comparação foi realizada usando duas aplicações: soma dos elementos de um vetor unidimensional e o algoritmo de clusterização K-means. Os experimentos foram realizados em ambiente distribuído e com memória compartilhada com diferentes quantidades e configurações de máquinas virtuais. Analisando os resultados foi possível concluir que o HPAT tem um melhor desempenho em relação ao Apache Spark nos nossos casos de estudo. Também realizamos uma análise dos dois arcabouços com a presença de falhas. / This work compares the performance and stability of two Big Data processing tools: Apache Spark and High Performance Analytics Toolkit (HPAT). The comparison was performed using two applications: a unidimensional vector sum and the K-means clustering algorithm. The experiments were performed in distributed and shared memory environments with different numbers and configurations of virtual machines. By analyzing the results we are able to conclude that HPAT has performance improvements in relation to Apache Spark in our case studies. We also provide an analysis of both frameworks in the presence of failures.
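One of the two benchmark workloads compared above is K-means clustering. The following is a minimal pure-Python sketch of Lloyd's algorithm on 1-D data, only to illustrate the algorithm itself; the thesis ran distributed implementations on Spark and HPAT, whose APIs are not shown here.

```python
# Lloyd's algorithm for K-means on 1-D data: repeatedly assign each point
# to its nearest center, then move each center to the mean of its cluster.

def kmeans_1d(points, centers, iterations=10):
    for _ in range(iterations):
        clusters = {c: [] for c in range(len(centers))}
        for p in points:
            nearest = min(range(len(centers)), key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        # empty clusters keep their previous center
        centers = [sum(ps) / len(ps) if ps else centers[c]
                   for c, ps in clusters.items()]
    return sorted(centers)

points = [1.0, 1.2, 0.8, 10.0, 10.5, 9.5]
print(kmeans_1d(points, centers=[0.0, 5.0]))  # converges to ≈ [1.0, 10.0]
```

The iteration structure (a full pass over the data per step, followed by a small reduction to update the centers) is what makes K-means a natural benchmark for data-parallel frameworks: the assignment phase parallelizes over points, and only the per-cluster sums need to be exchanged.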
79

Orientação para marketing analytics: antecedentes e impacto no desempenho do negócio. / Marketing analytics orientation: antecedents and impact on business performance

Bedante, Gabriel Navarro 11 March 2019 (has links)
With the technological evolution that the world has been going through in recent years, the topic of Marketing Analytics has gained managerial and academic relevance. However, little progress has been made in understanding what determines a company's willingness to adopt the practice of Analytics for decision-making in Marketing activities, i.e., its Marketing Analytics Orientation (MAO). In addition, little is known about the impacts of this orientation on company results. To fill these gaps, this research sought to understand which antecedent constructs could influence Marketing Analytics Orientation, as well as whether such an orientation would lead to better business performance. Through an extensive literature review, the scope of Marketing Analytics Orientation was delimited, presenting a definition for the construct, a theoretical model, and research propositions. In the next step, in-depth interviews were conducted with Analytics experts and marketing executives to deepen the concept of Marketing Analytics Orientation and to understand the determinants for a company to be Marketing Analytics-oriented. The third and final part of this work focused on developing a measurement model for Marketing Analytics Orientation and testing the proposed structural model with a sample of 127 marketing professionals and Analytics experts. The responses were analyzed through Partial Least Squares Structural Equation Modeling (PLS-SEM), and the results indicated that top management support plays a relevant role in the appreciation of people's analytical skills, in investments in technological infrastructure, and in the company's process orientation. All of these antecedents had a significant influence on the company's Marketing Analytics Orientation, which in turn had a positive impact on the perceived performance of the business.
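The hypothesized path structure above (top management support → analytical skills / technological infrastructure / process orientation → MAO → perceived performance) can be illustrated with a toy sketch. This is not the dissertation's actual PLS-SEM analysis: it uses simple OLS path regressions on synthetic composite scores, and all variable names and coefficients are invented for illustration.

```python
# Toy sketch of the hypothesized path model, NOT the study's PLS-SEM analysis.
# Synthetic data with invented coefficients; n matches the study's sample size.
import numpy as np

rng = np.random.default_rng(42)
n = 127  # same sample size as the study; the data itself is simulated

# Exogenous construct: top management support
support = rng.normal(size=n)

# Antecedents hypothesized to be shaped by top management support
skills = 0.5 * support + rng.normal(scale=0.5, size=n)
infra = 0.6 * support + rng.normal(scale=0.5, size=n)
process = 0.4 * support + rng.normal(scale=0.5, size=n)

# MAO driven by the three antecedents; performance driven by MAO
mao = 0.3 * skills + 0.3 * infra + 0.2 * process + rng.normal(scale=0.3, size=n)
performance = 0.7 * mao + rng.normal(scale=0.3, size=n)

def ols(y, predictors):
    """Least-squares path coefficients of y on the given predictor columns."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

paths_mao = ols(mao, [skills, infra, process])   # antecedents -> MAO
path_perf = ols(performance, [mao])[0]           # MAO -> performance
print("antecedents -> MAO:", np.round(paths_mao, 2))
print("MAO -> performance:", round(path_perf, 2))
```

With this seed, all estimated paths come out positive, mirroring the direction of the study's reported results; an actual replication would estimate latent constructs from survey items with PLS-SEM rather than regress observed composites.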

Determinant factors for the adoption of data and information governance in the big data environment.

Patrícia Kuzmenko Furlan 19 June 2018 (has links)
In the big data environment, organizations are concerned with extracting value from data and information in order to gain competitive advantage. However, organizational efforts are required with respect to data assets, including the definition of responsibilities for data use, the assurance of data quality, and other aspects covered by data or information governance models. This research therefore investigated how organizations can adopt data or information governance in the big data environment. Multi-sectoral case studies were conducted to identify the determinant factors for the adoption of data or information governance in the big data environment. The research protocol encompassed the elements and contents of data or information governance models, analyzed in relation to business intelligence and big data analytics. It was noted that organizational approaches to data or information governance are poorly consolidated, but well known to organizations. In addition, data or information governance models are adopted by organizations with different levels of analytical capability. These models include the definition of the governance's strategic objectives, together with domains such as data or information quality management, data management (especially metadata), and the transformation of the organizational culture in relation to data and information; they also require collaboration and communication competencies among stakeholders.
Eight determinant factors were identified for the adoption of data or information governance in the big data environment, encompassing structural, relational, and operational practices of the governance model: 1 - Large, global, diffuse organizations with decentralized business structures and a complex portfolio of products or services; 2 - Appointing a C-level executive, defining managers in the structure, and determining data owners and data stewards; 3 - Establishing a data committee or other means to bring together top management and the organization's main leaders; 4 - Engagement of the IT department in data or information management activities, enabling and executing operational activities on data and information across databases and information systems; 5 - Actively engaging in the cultural transformation of the organization toward being data-driven; 6 - Promoting internal communication and collaboration, and developing communication on the effectiveness of policies and the need for stakeholder compliance; 7 - Defining, managing, and controlling metadata; 8 - Defining standards, requirements, and control over data quality. This research provides a relevant theoretical consolidation for the field of data or information governance, covering a vast list of research variables from the competitive intelligence, IT governance, and data and information governance literatures. It was also possible to expand the data or information governance model through the addition of domains such as collaboration, communication, and cultural transformation. The research also proposes an expansion of the general conceptualization of the terms data governance and information governance.
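Factors such as assigning data owners and stewards, controlling metadata, and declaring data-quality standards can be made concrete with a small sketch. The following is a hypothetical illustration of those ideas, not a tool from the dissertation; all names and checks are invented.

```python
# Hypothetical sketch: a tiny metadata registry that records an accountable
# owner and steward per dataset and runs declared data-quality checks.
# Illustrates the governance factors discussed above; not from the dissertation.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DatasetMetadata:
    name: str
    owner: str          # accountable data owner
    steward: str        # operational data steward
    description: str
    quality_checks: list = field(default_factory=list)  # declared standards

    def add_check(self, label: str, check: Callable[[list], bool]) -> None:
        """Register a named data-quality rule for this dataset."""
        self.quality_checks.append((label, check))

    def audit(self, rows: list) -> dict:
        """Run every declared quality check against the given rows."""
        return {label: check(rows) for label, check in self.quality_checks}

# Usage: register a dataset, declare two quality rules, audit a sample
customers = DatasetMetadata(
    name="customers",
    owner="CDO office",
    steward="crm-team",
    description="Customer master data",
)
customers.add_check("no missing ids", lambda rows: all(r.get("id") for r in rows))
customers.add_check("emails present", lambda rows: all("email" in r for r in rows))

sample = [{"id": 1, "email": "a@x.com"}, {"id": 2, "email": "b@x.com"}]
print(customers.audit(sample))  # both checks pass on this sample
```

Real governance programs would hold such definitions in an enterprise metadata catalog rather than in code, but the shape is the same: an owner, a steward, and enforceable quality rules per data asset.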
