261 |
KerA: A Unified Ingestion and Storage System for Scalable Big Data Processing. Marcu, Ovidiu-Cristian, 18 December 2018
Big Data is now the new natural resource. Current state-of-the-art Big Data analytics architectures are built on top of a three-layer stack: data streams are first acquired by the ingestion layer (e.g., Kafka) and then flow through the processing layer (e.g., Flink), which relies on the storage layer (e.g., HDFS) for storing aggregated data or for archiving streams for later processing. Unfortunately, in spite of the potential benefits brought by specialized layers (e.g., simplified implementation), moving large quantities of data through them is not efficient: instead, data should be acquired, processed, and stored while minimizing the number of copies. This dissertation argues that a plausible path to alleviating these limitations is the careful design and implementation of a unified architecture for stream ingestion and storage, which can optimize the processing of Big Data applications. This approach minimizes data movement within the analytics architecture, leading to better utilized resources. We identify a set of requirements for a dedicated stream ingestion/storage engine. We explain the impact of different Big Data architectural choices on end-to-end performance. We propose a set of design principles for a scalable, unified architecture for data ingestion and storage. We implement and evaluate the KerA prototype with the goal of efficiently handling diverse access patterns: low-latency access to streams and/or high-throughput access to streams and/or objects.
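The copy-counting argument behind the unified design can be sketched in a few lines of Python: a toy pipeline that materializes every record once per layer, versus a single shared store that processing reads lazily and archiving references in place. All names are illustrative assumptions; this is not KerA's implementation.

```python
# Toy illustration of the dissertation's data-movement argument. In the
# three-layer stack each record is materialized once per layer; in the
# unified sketch there is a single shared copy. Layer names (Kafka, HDFS)
# are only stand-ins from the abstract, not real integrations.

def copies_three_layer(records):
    ingestion = list(records)                    # copy 1: ingestion log (e.g., Kafka)
    processing = [r.upper() for r in ingestion]  # copy 2: processing-layer buffer
    storage = list(processing)                   # copy 3: archive (e.g., HDFS)
    return len(ingestion) + len(processing) + len(storage)

def copies_unified(records):
    store = list(records)                        # the only materialized copy
    # processing consumes the shared copy lazily, producing no new buffer
    processed_count = sum(1 for _ in (r.upper() for r in store))
    archived = store                             # archiving aliases the same list
    assert archived is store and processed_count == len(store)
    return len(store)
```

With two records, the layered pipeline materializes six record instances while the unified store keeps two, which is the resource-utilization point the abstract makes.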
|
262 |
MaSTA: a text-based machine learning approach for systems-of-systems in the big data context. Bianchi, Thiago, 11 April 2019
Systems-of-systems (SoS) have gained a very important status in industry and academia as an answer to the growing complexity of software-intensive systems. SoS are particular in the sense that their capabilities transcend the mere sum of the capacities of their diverse independent constituents. In parallel, the current growth in the amount of data collected in different formats is impressive and imposes a considerable challenge for researchers and professionals, hence characterizing the Big Data context. In this scenario, Machine Learning techniques have been increasingly explored to analyze and extract relevant knowledge from such data. SoS have also generated a large amount of data and text information and, in many situations, users of SoS need to manually register unstructured, critical texts, e.g., work orders and service requests, and also need to map them to structured information. These are repetitive, time- and effort-consuming, and even error-prone tasks. The main objective of this Thesis is to present MaSTA, an approach composed of an innovative classification method to infer classifiers from large textual collections and an evaluation method that measures the reliability and performance levels of such classifiers. To evaluate the effectiveness of MaSTA, we conducted an experiment with a commercial SoS used by large companies, which provided us with four datasets containing nearly one million records related to three classification tasks. This experiment indicated that MaSTA is capable of automatically classifying the documents and of improving user assertiveness by reducing the list of possible classifications. Moreover, it indicated that MaSTA is a scalable solution for Big Data scenarios in which document collections have hundreds of thousands (even millions) of documents, even when produced by different constituents of an SoS.
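The shape of the task MaSTA addresses, assigning categories to short unstructured work-order texts, can be illustrated with a generic bag-of-words centroid classifier. This is a hedged sketch only: MaSTA's actual classification method is not reproduced here, and the tiny "work order" texts and labels are invented.

```python
from collections import Counter

# Generic bag-of-words classifier over invented "work order" texts, to
# show the shape of the task MaSTA automates. Not MaSTA's actual method.

def train(documents):
    """documents: list of (text, label); returns per-label word profiles."""
    profiles = {}
    for text, label in documents:
        profiles.setdefault(label, Counter()).update(text.lower().split())
    return profiles

def classify(profiles, text):
    words = text.lower().split()
    # score a label by the word occurrences its profile shares with the text
    return max(profiles, key=lambda lbl: sum(profiles[lbl][w] for w in words))

train_docs = [
    ("replace broken pump", "maintenance"),
    ("pump bearing failure repair", "maintenance"),
    ("invoice payment overdue", "billing"),
    ("billing invoice dispute", "billing"),
]
profiles = train(train_docs)
```

A new request such as "pump repair" then lands in the maintenance category, which is the kind of assertiveness-improving suggestion the abstract describes.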
|
263 |
How Big Data Analytics are perceived as a driver for Competitive Advantage: A qualitative study on food retailers. Galletti, Alessandro; Papadimitriou, Dimitra-Christina, January 2013
The recent explosion of digital data has led the business world into a new era of more evidence-based decision making. Companies nowadays collect, store, and analyze huge amounts of data, and terms such as Big Data Analytics are used to define those practices. This paper investigates how Big Data Analytics (BDA) can be perceived and used as a driver for companies' Competitive Advantage (CA). It thus contributes to the debate about the potential role of IT assets as a source of CA through a Resource-Based View approach, by introducing a new phenomenon, BDA, into that traditional theoretical background. A conceptual model developed by Wade and Nevo (2010) is used as guidance, in which the synergy developed between IT assets and other organizational resources is seen as crucial to creating such a CA. We focus on the Food Retail industry and specifically investigate two case studies, ICA Sverige AB and Masoutis S.A. The evidence shows that, although the process is at an embryonic stage, the companies perceive the implementation of BDA as a key driver for the creation of CA. Efforts are being made to develop successful implementations of BDA within the company as a strategic tool for several departments; however, some hurdles have been spotted that might impede that practice.
|
264 |
The construction of identity in the digital space: transformations and mutations within the social web. Barredo Escribano, Maria, 8 December 2015
The starting point of our analysis is the construction of digital identity, considered as a complex process that can be regarded from several angles. In constant mutation, the various actors present on the Internet perform different roles in the construction of the individual's online identity. On the one hand, the emergence of the social web converts the user, in the form of a social media profile, into a multi-positional actor (sender, transmitter, receiver, etc.) and also gives him or her a relational identity. On the other hand, the constraints imposed by the network and the issues located at different levels of analysis suggest reviewing the horizontal hierarchy between nodes, the minimal units that compose the network, which are in turn embodied by the users. Could a node, then, be social? Could digital interactive communication be based on presumptions that exclude the individual? Beyond the relational identity of the social web, can a digital identity be conceived that is equivalent to the nominal identity of an individual in any society? Conditions, premises, and the confluence of different digital practices are the factors to be analysed in order to find possible answers to this kind of problem. The criteria to be taken into consideration for envisaging such an identity, together with the preservation of the user's real identity as a citizen, are the main axes of our analysis: an analysis focused on the current state of the contemporary Internet with regard to the individual as we conceive of him today.
|
265 |
Is Big data too Big for Swedish SMEs?: A quantitative study examining how the employees of small and medium-sized enterprises perceive Big data analytics. Danielsson, Lukas; Toss, Ronja, January 2018
Background: Marketing is evolving because of Big data, and there are many possibilities as well as challenges associated with Big data, especially for small and medium-sized enterprises (SMEs), which face barriers that prevent them from taking advantage of it. To analyze Big data, companies use Big data analytics, which helps them analyze large amounts of data. However, previous research is lacking with regard to how SMEs can implement Big data analytics and how Big data analytics are perceived by SMEs. Purpose: The purpose of this study is to investigate how the employees of Swedish SMEs perceive Big data analytics. Research Questions: How do employees of Swedish SMEs perceive Big data analytics in their current work environment? How do the barriers impact the perceptions of Big data analytics? Methodology: The research uses a quantitative cross-sectional design as the source of empirical data. To gather the data, a survey was administered to employees of Swedish companies with fewer than 250 employees; these companies were regarded as SMEs. 139 respondents answered the survey and, of those, 93 answers could be used in the analysis. The data was analyzed using established theories, such as the Technology Acceptance Model (TAM). Findings: The research concluded that the employees had positive perceptions of Big data analytics. Further, it concluded that two of the barriers analyzed (security and resources) impacted the perceptions of the employees, whereas privacy of personal data did not. Theoretical Implications: This study adds to the sparse Big data research and improves the understanding of Big data and Big data analytics, helping to close an existing gap in the literature and provide a more comprehensive view of Big data. Limitations: The main limitation of the study is that previous literature has been vague and ambiguous and therefore may not be applicable. Practical Implications: The study helps SMEs understand how to better implement Big data analytics and which barriers need to be prioritized. Originality: To the best of the authors' knowledge, there is a significant lack of academic literature regarding Big data, Big data analytics, and Swedish SMEs; this study could therefore be one of the pioneering studies examining these topics and contribute significantly to current research.
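The barrier-versus-perception relationship the findings describe can be illustrated with a toy computation: a Pearson correlation between a barrier score and a perception score. The 1-5 scores below are invented, and the study's actual TAM-based survey analysis is richer than this sketch.

```python
import math

# Toy illustration of relating a perceived barrier (e.g., lack of
# resources) to the perception of Big data analytics via a Pearson
# correlation. All scores are invented, not the study's data.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical pattern: the stronger the perceived resource barrier,
# the less positive the perception of Big data analytics.
barrier_scores = [1, 2, 3, 4, 5]
perception_scores = [5, 4, 3, 2, 1]
r = pearson(barrier_scores, perception_scores)
```

A strongly negative `r` would correspond to a barrier that "impacts the perceptions", in the sense the research questions probe.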
|
266 |
Machines that produce economic futures: a multi-situated ethnography of artificial intelligences in the big data era. Vayre, Jean-Sébastien, 28 November 2016
Most experts agree that big data marks a rupture. Maybe they are right. But this rupture is not really material, nor even organizational: the big web actors have long been exploring and exploiting massive amounts of data on a daily basis. If there is a revolution, it is happening elsewhere, at the periphery of the great disruption staged by most big data promoters. To see this, simply ask the following question: by pointing to the revolutionary nature of big data and of the devices designed to process it, what are these actors doing? They are preparing a massive integration of artificial intelligences into the various spheres of society. If there is a rupture, it is rather here: in the movement we are witnessing today, by which a great diversity of socioeconomic actors appropriate calculation agents that are ever more autonomous and powerful. In order to better understand what is at stake in this democratization, this thesis studies the case of machines that produce economic futures: what is their role within the socio-technical collectives that compose the markets? To answer this question, we draw on a multi-situated ethnography conducted from 2012 to 2015, from a posture at the intersection of the sociologies of markets, science, and technology. Specifically, we mobilize a corpus of archives and a substantial body of investigative material collected from several professionals, companies, and trade fairs to examine the design and operation of these machines for predicting merchant futures. At the level of the design environment, these machines are interesting insofar as they are generally endowed with a local intelligence meant to bring about, in the present, futures that optimize the economic interests of those who implement them. Starting from a series of studies and experiments on the uses of a recommendation agent, we show that this form of intelligence is nevertheless debatable, since it may entail considerable ambivalences from the users' point of view. This allows us to emphasize that, at the cognitive and relational levels, the relevance of machines that produce economic futures must be systematically called into question. The stakes are high, because it is not impossible that the massive advent of these machines within organizations will introduce new asymmetries into markets that are not a good for the community.
|
267 |
Hyperlocal, data and applications: journalism and communication innovations. Coelho, Aparecido Antonio dos Santos, 13 June 2016
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / Technological development in contemporary communication structures digital systems through networks of connected computers and the massive exploitation of technological devices. Digital data captured and distributed through applications installed on smartphones create a dynamic communication environment. Journalism and communication are trying to adapt to the new information ecosystem driven by constant technological innovations, which allow the creation of new environments and systems for accessing information of social relevance. New tools emerge for the production and distribution of journalistic content: products based on data and intelligent interactions, algorithms used in different processes, hyperlocal platforms, and digital narrative and production systems. In this context, the objective of this study was to develop an analysis and comparison of specific media and technology products: whether new technologies add attributes to journalistic productions and narratives, what their impact is on the practice of the activity, and whether the production processes of information of social relevance change with respect to consolidated traditional journalistic processes. It also investigates whether the use of information inserted by users in real time improves the quality of narratives emerging from mobile devices, and whether gamification alters the perception of the credibility of journalism, so that the way information and knowledge are produced and delivered to content-demanding audiences can be rethought.
|
268 |
Institutional ownership as a predictor of future security returns. Raphael Alexander Rottgen, 29 February 2016
Data on institutional ownership of securities is nowadays publicly available in a number of jurisdictions and can thus be used in models for the prediction of security returns. A number of recently launched investment products explicitly use such institutional ownership data in security selection. The purpose of the current study is to apply statistical learning algorithms to institutional ownership data from the United States, in order to evaluate the predictive validity of features based on such institutional ownership data with regard to future security returns. Our analysis identified that a support vector machine managed to classify securities, with regard to their four-quarter forward returns, into three bins with significantly higher accuracy than pure chance would predict. Even higher accuracy was achieved when predicting realized, i.e. past, four-quarter returns.
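The three-bin classification experiment can be sketched with a minimal one-vs-rest linear SVM trained by hinge-loss subgradient descent. The two-dimensional features and the low/mid/high return bins below are invented stand-ins for the thesis's institutional-ownership features and four-quarter forward returns; this is not the model actually used in the study.

```python
# Minimal one-vs-rest linear SVM (hinge-loss subgradient descent) on toy
# 2-D features. The clusters and "return bin" labels are invented; a real
# setup would use institutional-ownership features per security.

def train_binary_svm(xs, ys, lr=0.01, lam=0.001, epochs=200):
    """Pegasos-style subgradient descent on the hinge loss; ys in {-1, +1}."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            if margin < 1:  # inside margin: step along hinge subgradient
                w = [wi + lr * (y * xi - lam * wi) for wi, xi in zip(w, x)]
                b += lr * y
            else:           # correctly classified: only regularization shrink
                w = [wi - lr * lam * wi for wi in w]
    return w, b

def score(model, x):
    w, b = model
    return w[0] * x[0] + w[1] * x[1] + b

def fit_ovr(xs, labels, classes):
    """One binary SVM per class, each trained class-vs-rest."""
    return {c: train_binary_svm(xs, [1 if l == c else -1 for l in labels])
            for c in classes}

def predict(models, x):
    return max(models, key=lambda c: score(models[c], x))

# Three separable toy clusters standing in for the three return bins.
xs = [(-5, 0), (-4, 1), (5, 0), (4, 1), (0, 5), (1, 4)]
labels = ["low", "low", "high", "high", "mid", "mid"]
models = fit_ovr(xs, labels, ["low", "mid", "high"])
preds = [predict(models, x) for x in xs]
```

On separable data like this, the one-vs-rest SVMs recover the training labels; the thesis's contribution is showing that real ownership features beat chance on held-out returns, which this toy example does not attempt.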
|
269 |
Critical success factors of business analytics tools. Cezar Sayão, 15 September 2017
We have never lived in a society with so much data available and, at the same time as this dispersed information grows, managers and decision makers face the most challenging and competitive business environment they have ever seen, in which it is necessary to detect and, if possible, predict trends based on simple and/or complex data analysis in order to structure action plans. In this context, the potential impact of data-based management on organizations has increased and has been drawing the attention of scholars and executives alike. This research focused on identifying the critical success factors of Business Analytics (BA) systems and empirically analyzing their causal relationships. It was conducted using a survey methodology, and the statistical technique selected was structural equation modeling (Partial Least Squares). Besides its contribution to the body of knowledge of the Business Analytics field, this dissertation presents a theoretical discussion delimiting the concept of BA with respect to other decision-support terms often present in the literature (i.e., Business Intelligence, Big Data, and Competitive Intelligence), as well as a measurement tool for BA information system success based on the DeLone and McLean model. The proposed critical success factors were derived from a comprehensive literature review, including their particularities relative to transactional systems (e.g., Enterprise Resource Planning), and were classified into three dimensions and four constructs: Technology (Data Quality), Organizational Culture (Fact-Based Management and Executive Engagement), and People (Team Knowledge and Skill). Of the three dimensions, Organizational Culture showed the greatest relevance to BA system success (i.e., Information Use and Individual Impact), with a high impact from both of its constructs (Fact-Based Management and Executive Engagement).
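The measurement step behind a construct-based model of this kind can be sketched simply: average each respondent's Likert items into a construct score, then estimate a path coefficient between two constructs by ordinary least squares. This is a crude stand-in for PLS-SEM, not the dissertation's method, and all responses below are invented.

```python
# Sketch of the measurement idea behind a construct-based success model:
# Likert items are averaged into construct scores, and a single path
# (fact-based management -> information use) is estimated by OLS. This is
# a stand-in for PLS-SEM; the item sets and responses are invented.

def construct_score(items):
    """Average one respondent's Likert answers for a construct."""
    return sum(items) / len(items)

def ols_slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Invented 1-5 responses: three items per construct, four respondents.
fact_based = [construct_score(r) for r in [(1, 1, 1), (2, 2, 2), (3, 3, 3), (4, 4, 4)]]
info_use = [construct_score(r) for r in [(2, 2, 2), (3, 3, 3), (4, 4, 4), (5, 5, 5)]]
path = ols_slope(fact_based, info_use)
```

PLS-SEM additionally weights the items, bootstraps significance, and estimates all paths jointly; this sketch only shows how raw survey answers become the construct-level relationships reported in the findings.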
|
270 |
Processes in digital journalism: from Big Data to data visualization. Mayanna Estevanim, 16 September 2016
Society is increasingly digitalized. Data of different scopes can be stored and correlated, in a volume, variety, and velocity humanly impossible to handle without the aid of computers. In this scenario we speak of a data journalism that aims at the understanding of complex issues of social relevance and that attunes the profession to the new needs of contemporary informative understanding. The purpose of this dissertation is to discuss data visualization in Brazilian journalism, starting from what journalistic data visualization is and, building on the notes about its concept and practice, asking how it provides relevant advantages. By relevant we mean matters of public interest that involve greater criticality, depth, and contextualization of content within Big Data. Initiatives that bring together images related to data and metadata occur in market practice, academic laboratories, and independent media. Different human and non-human actors operate in this narrative system, in constructions that begin with machinic encodings, with databases that communicate with other layers until reaching a user interface. Professionals need new expertise, teamwork, and often basic knowledge of programming languages, statistics, and the operation of tools to build dynamic narratives that increasingly involve the reader; it is also important to think about content that is available in different formats. For this research we adopted a multi-methodological strategy, under the assumption of the centrality of communication, which permeates all communicative and informative activities transversally, whether analog or not: a view that requires resilience in the face of theoretical and methodological approaches, so that they can embrace and support reflections on this dynamic field of study. Propositions and interpretations were grounded in the Database Digital Journalism paradigm, with contributions from the concepts of format (RAMOS, 2012 and MACHADO, 2003), post-industrial journalism (COSTA, 2014), and narrative system and antenarrative (BERTOCCHI, 2013) as means of maturing the understanding of the proposed object.
|