311

A novel Internet of Things architecture with pattern recognition and big data processing

Alberto Messias da Costa Souza 16 October 2015 (has links)
The Internet of Things is a new communication paradigm that extends the virtual world of the Internet into the physical world through interfaces to, and interaction with, everyday objects. It will comprise a very large number of heterogeneous interconnected devices generating a huge volume of data, and one of the most important challenges for its development is storing and processing that volume within acceptable time intervals. This research addresses the challenge by introducing pattern analysis and recognition services into the lower layers of the Internet of Things reference model, thereby reducing the processing required at the higher layers. The research analyzes IoT reference models and the middleware platforms used to develop applications in this context. The implemented architecture extends the LinkSmart middleware with a pattern recognition module providing algorithms for value estimation, outlier detection, and the discovery of groups in the raw data coming from IoT resources. The new module is integrated with the Hadoop Big Data platform and uses the algorithm implementations of the Mahout framework. The work also highlights the importance of the cross-layer communication integrated into this architecture. The experiments used real datasets from the Smart Santander project to validate the new IoT architecture together with its pattern analysis and recognition services and cross-layer communication.
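As a rough illustration of the analysis services the module exposes (the thesis itself uses Mahout implementations running on Hadoop, not the toy code below), the following Python sketch performs the two operations named in the abstract, outlier detection and group discovery, on synthetic sensor readings; the data, the 3-sigma threshold, and the choice of k are assumptions.

```python
# Minimal sketch of the pattern recognition services described above:
# z-score outlier detection and k-means clustering over raw sensor
# readings. Illustrative only -- the thesis uses Mahout on Hadoop;
# the data and the 3-sigma threshold here are assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
readings = rng.normal(loc=21.0, scale=1.5, size=(1000, 2))  # e.g. temperature, humidity

# Outlier detection: flag readings more than 3 standard deviations from the mean.
z = np.abs((readings - readings.mean(axis=0)) / readings.std(axis=0))
outliers = np.any(z > 3.0, axis=1)
clean = readings[~outliers]

# Group discovery: k-means over the remaining raw data.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(clean)
print(f"{outliers.sum()} outliers flagged, cluster sizes:",
      np.bincount(kmeans.labels_))
```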
312

Computer vision for continuous plankton monitoring

Matuszewski, Damian Janusz 04 April 2014 (has links)
Plankton microorganisms constitute the base of the marine food web and play a major role in the drawdown of atmospheric carbon dioxide. Moreover, because they are very sensitive to environmental changes, they allow such changes to be noticed (and potentially counteracted) faster than by any other means. As such, they not only influence the fishery industry but are also frequently used to analyze changes in exploited coastal areas and the influence of these interferences on the local environment and climate. Consequently, there is a strong need for highly efficient systems that allow long-term, large-volume observation of plankton communities. Such systems would provide a better understanding of the role of plankton in the global climate and help maintain a fragile environmental equilibrium. The sensors adopted for this purpose typically produce huge amounts of data that must be processed efficiently, without intensive manual work by specialists. This thesis presents a new system for general-purpose particle analysis in large volumes. It was designed and optimized for continuous plankton monitoring; however, it can easily be applied as a versatile tool for analyzing moving fluids, or in any other application in which the targets to be detected and identified move in a unidirectional flux. The proposed system is composed of three stages: data acquisition, target detection, and target identification. Dedicated optical hardware records images of small particles immersed in the water flux. Target detection is performed with a Visual Rhythm-based method, which greatly accelerates processing and allows a higher volume throughput. The method detects, counts, and measures organisms present in the water flux passing in front of the camera. Moreover, the software saves cropped plankton images, which not only greatly reduces the required storage space but also constitutes the input for automatic identification. To ensure maximal performance (up to 720 MB/s), the algorithm was implemented in CUDA for GPGPU. The method was tested on a large dataset and compared with an alternative frame-by-frame approach. The resulting plankton images were used to build a classifier that automatically identifies organisms in plankton analysis experiments. For this purpose, dedicated feature extraction software was developed. Various subsets of the 55 shape characteristics were tested with different off-the-shelf learning models; the best accuracy, approximately 92%, was obtained with Support Vector Machines, a result comparable to average expert manual identification performance. This work was developed under joint supervision with Professor Rubens Lopes (IO-USP).
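The identification stage lends itself to a short sketch. The Python snippet below trains an SVM on per-organism shape descriptors, mirroring the reported setup (55 shape characteristics, Support Vector Machines, roughly 92% accuracy); the random features and the five class labels are placeholders, not the thesis data.

```python
# Sketch of the identification stage: an SVM trained on shape
# descriptors. The random features below stand in for the 55 shape
# characteristics extracted by the dedicated software; only the
# classifier choice (SVM) comes from the text.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 55))      # 55 shape features per organism (placeholder)
y = rng.integers(0, 5, size=600)    # 5 hypothetical plankton classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```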
313

Visualizing media with interactive multiplex networks

Ren, Haolin 14 March 2019 (has links)
Nowadays, information follows complex paths: its propagation involves online editors, 24-hour news providers, and social media, along entangled routes that act on both the content and its perception. This thesis studies the adaptation of classical graph measurements to multiplex graphs, builds visualizations from several graphical representations of the networks, and combines them (synchronized multi-view visualizations, hybrid representations, etc.). Emphasis is placed on modes of interaction that let the user engage with the multiplex (multilayer) nature of the networks. These representations and interactive manipulations also rest on the computation of indicators specific to multiplex networks. The work is based on two main datasets: one is a 12-year archive of the Japanese public daily broadcast NHK News 7, from 2001 to 2013; the other lists the participants in French TV and radio shows between 2010 and 2015. Two Web-based visualization systems were developed for multiplex network analysis, which we call "Visual Cloud" and "Laputa". In the Visual Cloud, we formally define a notion of similarity between concepts and groups of concepts that we call co-occurrence possibility (CP). Based on this definition, we propose a hierarchical classification algorithm: we aggregate the layers of a multiplex network of documents and integrate the resulting hierarchy into an interactive word cloud, improving traditional word cloud layout algorithms so as to preserve the constraints imposed by the concept hierarchy. The Laputa system is intended for the complex analysis of dense, multidimensional temporal networks. To do so, it associates a graph with a segmentation; segmenting by community, by attribute, or by time slice yields views of this graph. To relate these views to the global whole, we use Sankey diagrams, augmented with a semantic zoom, to reveal the evolution of the communities. The thesis thus covers three of the most interesting aspects (the "3 Vs") of data mining and Big Data applied to multimedia archives: Volume, since our immense archives reach orders of magnitude that are normally impracticable for visualization; Velocity, because of the inherently temporal nature of the data; and Variety, a corollary of the richness of multimedia data and of everything one may wish to investigate in it. What can be retained from this thesis is that each of these three challenges was answered in the form of a multiplex network analysis. These structures lie at the heart of the work throughout: in the criteria for filtering edges with the Simmelian backbone algorithm, in the superposition of time slices, and, most directly, in the combination of visual and textual semantic indices from which we extract the hierarchies that drive our visualizations.
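The abstract does not give the formal definition of the co-occurrence possibility (CP) measure, so the sketch below substitutes cosine similarity; it only illustrates the general mechanism of grouping multiplex layers hierarchically from pairwise similarities, as the Visual Cloud does before rendering its word cloud.

```python
# Sketch of grouping multiplex layers from pairwise similarities, as the
# Visual Cloud does with its co-occurrence possibility (CP) measure. The
# CP definition is not given in the abstract, so cosine distance over
# layer/document occurrence counts is used here as a stand-in.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
# Rows: network layers (e.g. one per concept); columns: documents.
layer_doc = rng.integers(0, 4, size=(8, 200)).astype(float)

# Cosine distance between layers, then average-linkage hierarchical clustering.
dist = pdist(layer_doc, metric="cosine")
tree = linkage(dist, method="average")
groups = fcluster(tree, t=3, criterion="maxclust")
print("layer groups:", groups)
```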
314

Scalable System-Wide Traffic Flow Predictions Using Graph Partitioning and Recurrent Neural Networks

Reginbald Ivarsson, Jón January 2018 (has links)
Traffic flow predictions are an important part of an Intelligent Transportation System, as the ability to accurately forecast traffic conditions allows for proactive rather than reactive traffic control. Providing accurate real-time traffic predictions is a challenging problem because of the nonlinear and stochastic features of traffic flow. The increasingly widespread deployment of traffic sensors in a growing transportation system produces ever greater volumes of traffic flow data, which raises problems of fast, reliable, and scalable traffic prediction. The thesis explores the feasibility of increasing the scalability of real-time traffic predictions by partitioning the transportation system into smaller subsections. Data collected by Trafikverket from traffic sensors in Stockholm and Gothenburg is used to construct a traffic sensor graph of the transportation system. Three graph partitioning algorithms are designed to divide the traffic sensor graph according to vehicle travel time. Finally, the resulting partitions are used to train multi-layered long short-term memory recurrent neural networks for traffic density prediction. Four types of models are produced and evaluated by root mean squared error, training time, and prediction time: a whole-transportation-system model, partitioned transportation models, single-sensor models, and overlapping-partition models. The results show that partitioning a transportation system is a viable way to produce traffic prediction models, as the average prediction accuracy per traffic sensor is comparable across the different model types. This approach tackles the scalability issues caused by the increased deployment of traffic sensors by reducing the number of sensors each prediction model is responsible for, resulting in less complex models with less input data. A more decentralized and effective solution can be achieved, since the models can be distributed to the edge of the transportation system, i.e., near the physical locations of the traffic sensors, reducing the prediction and response times of the models.
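As a minimal sketch of one partition's predictor, the Python snippet below trains a stacked LSTM on sliding windows of a synthetic density series and reports RMSE, the thesis's evaluation metric; the layer sizes, window length, and data are assumptions, and only the model family (multi-layer LSTM) comes from the thesis.

```python
# Sketch of a per-partition traffic-density predictor: a stacked LSTM
# trained on sliding windows of readings and scored with RMSE. Layer
# sizes, window length, and the synthetic series are assumptions.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(7)
series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * rng.normal(size=2000)

window = 24  # past 24 readings -> next reading
X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, return_sequences=True, input_shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=64, verbose=0)

rmse = float(np.sqrt(model.evaluate(X, y, verbose=0)))
print("in-sample RMSE:", rmse)
```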
315

Optimization of cloud computing infrastructures on green data centers

Safieddine, Ibrahim 29 October 2015 (has links)
Next-generation green datacenters are designed for optimized consumption and improved service-level agreement (SLA) quality. In recent years, however, the datacenter market has grown rapidly, and the concentration of computing power has become ever larger, increasing the demands for electrical power and cooling. A datacenter consists of computing resources, cooling systems, and power distribution. Many research studies have focused on reducing datacenter consumption in order to improve the PUE while guaranteeing the same level of service. Some aim at dynamically sizing resources according to load, to reduce the number of running servers; others seek to optimize the cooling system, which represents an important part of total consumption. In this thesis, in order to reduce the PUE, we study the design of an autonomous system for global cooling optimization based on external data sources, such as the outside temperature and weather forecasts, coupled with an overall IT load prediction module that absorbs activity peaks, so as to optimize the active resources at a lower cost while preserving service quality. To guarantee a better SLA, we propose a distributed architecture that detects complex operating anomalies in real time by analyzing large volumes of data from the thousands of sensors deployed in the datacenter. Identifying abnormal behaviors early allows faster reaction to threats that may impact quality of service, with autonomous control loops that automate administration. We evaluate the performance of our contributions on data collected from an operating datacenter hosting real applications.
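A minimal single-sensor sketch of the anomaly detection idea follows; the real system is distributed and analyzes thousands of sensors in real time, and the rolling window and 4-sigma threshold here are assumptions.

```python
# Minimal single-sensor sketch of the anomaly detection idea: flag a
# reading that deviates strongly from a rolling baseline. The real
# system is distributed over thousands of sensors; window size and the
# 4-sigma threshold here are assumptions.
from collections import deque
import math
import random

def detect(stream, window=60, threshold=4.0):
    """Yield (index, value) for readings far outside the rolling baseline."""
    recent = deque(maxlen=window)
    for i, x in enumerate(stream):
        if len(recent) == window:
            mean = sum(recent) / window
            var = sum((v - mean) ** 2 for v in recent) / window
            std = math.sqrt(var) or 1e-9
            if abs(x - mean) / std > threshold:
                yield i, x
        recent.append(x)

random.seed(3)
readings = [22.0 + random.gauss(0, 0.2) for _ in range(600)]
readings[400] = 35.0  # injected fault, e.g. a failing cooling unit
print(list(detect(readings)))
```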
316

Internet of Things: A study on the impact of vehicular fog computing on traffic

Ahlcrona, Felix January 2018 (has links)
Future vehicles will be very different from today's, and much of the change will come from the IoT. The world will be highly connected, and sensors will provide data most of us did not even know existed. More data also means more problems: enormous amounts of data will be generated and distributed by future IoT devices, and this data needs to be analyzed and stored efficiently using Big Data principles. Fog computing is a development of cloud technology proposed as a solution to many of the problems the IoT suffers from. Are traditional storage and analysis tools sufficient for the huge volume of data that will be produced, or are new technologies needed to support this development? This study attempts to answer the question: "What problems and opportunities does the development of fog computing in passenger cars create for consumers?" The question is answered through a systematic literature review, whose objective is to identify and interpret previous literature and research. The material was analyzed using open coding to sort and categorize the data. The results show that technologies such as IoT, Big Data, and fog computing are deeply intertwined. Future vehicles will contain many IoT devices producing huge amounts of data, and fog computing will be an effective way to handle that data with low latency. The opportunities are new applications and systems that help improve traffic safety, the environment, and information about the car's state and condition. Several risks and problems need to be resolved before a full-scale version can be deployed, such as data authentication, user privacy, and the choice of the most efficient mobility model.
317

Digitalization’s impact on auditing

Persson, Christian January 2018 (has links)
The current business environment demands financial information that is relevant and reliable, to support the decision-making of managers, investors, and employees. Auditing acts as a controlling body that ensures credible information. The audit industry is one of many industries constantly changing due to digitalization, which is considered one of society's strongest global forces of change. The aim of the study is to create a deeper understanding of the impact of digitalization on auditing by answering how the audit process changes due to digitalization and what skills auditors need in the digital environment. A qualitative research strategy is applied, in which ten semi-structured interviews were conducted with both system developers and auditors. The theoretical framework and empirical material are structured around the audit process, digitalization, and competence needs, and the analysis is based on the respondents' answers and relevant theory. The study indicates that the industry is positively influenced by digitalization, with efficiency among the top benefits. The audit process is shifting from statistical sampling to data analysis of companies' entire data volumes. Manual operations are eliminated, giving auditors more time for consulting. The consultant role requires more qualified knowledge, and the study therefore also demonstrates a knowledge gap between universities and audit firms. Digitalization has created demand for more qualified staff, leading to fewer newly graduated people being employed. The study also shows that IT knowledge is one of the key competencies in the future audit industry.
318

[en] RECOMMENDATION SYSTEMS: A USER EXPERIENCE ANALYSIS IN DIGITAL PRODUCTS

CAROLINA LIMEIRA ALVES 15 December 2015 (has links)
[en] Big Data is the term used to characterize the set of technological solutions that allow fast processing of large volumes of varied data, made possible by the technological advances of recent decades. Recommendation systems are one of the features that have gained strength and been improved through these technologies. The main objective of such systems is to offer users suggestions of content that might interest them; this content can be a news item, a product, a personal contact, a movie, a song, or any other kind of information. This dissertation studies users' perception of recommendation systems, especially for television content (programs, series, and movies). To this end, questionnaires, focus groups, an analysis of the current landscape, and a case study were used. Through these methods and techniques, it was possible to identify the different factors that influence how the functionality is perceived and how the services are used. In addition, the dissertation discusses the consequences of excessive content personalization, as well as ethical and privacy issues, social and psychological impacts, and the responsibility of the digital product designer. In conclusion, recommendations are made for developing this type of system so that it achieves its purposes and provides a more satisfying user experience.
319

[en] FORECASTING LARGE REALIZED COVARIANCE MATRICES: THE BENEFITS OF FACTOR MODELS AND SHRINKAGE

DIEGO SIEBRA DE BRITO 19 September 2018 (has links)
[en] We propose a model to forecast very large realized covariance matrices of returns, applying it to the constituents of the S&P 500 on a daily basis. To deal with the curse of dimensionality, we decompose the return covariance matrix using standard firm-level factors (e.g., size, value, profitability) and impose sectoral restrictions on the residual covariance matrix. This restricted model is then estimated using Vector Heterogeneous Autoregressive (VHAR) models fitted with the Least Absolute Shrinkage and Selection Operator (LASSO). Our methodology improves forecasting precision relative to standard benchmarks and leads to better estimates of minimum variance portfolios.
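As a toy sketch of the estimation idea, the snippet below fits the HAR structure for a single realized variance series with an L1 penalty: today's value is regressed on its daily lag and on weekly (5-day) and monthly (22-day) averages. The thesis applies the vector version (VHAR) to the whole matrix after the factor decomposition, which this sketch omits; the data and penalty strength are assumptions.

```python
# Toy sketch of the (V)HAR-with-LASSO idea for a single realized
# variance series. The thesis estimates the vector version over the
# full (factor-decomposed) covariance matrix; the placeholder data and
# the alpha penalty here are assumptions.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
rv = np.abs(rng.normal(1.0, 0.2, size=1000))  # placeholder realized variances

lag = 22
X = np.column_stack([
    rv[lag - 1:-1],                                               # daily lag
    np.array([rv[t - 5:t].mean() for t in range(lag, len(rv))]),  # weekly average
    np.array([rv[t - 22:t].mean() for t in range(lag, len(rv))]), # monthly average
])
y = rv[lag:]

model = Lasso(alpha=1e-3).fit(X, y)
print("HAR coefficients (d, w, m):", model.coef_)
```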
320

Resource-based theory and the evaluation of information systems: the case of social media monitoring

Soleman, Ramzi 12 April 2018 (has links)
Recently, social media data, known as Big Social Data (BSD), have attracted more and more attention from researchers and practitioners, particularly since the emergence of Social Media Monitoring (SMM) tools used to process BSD. The promises associated with SMM concern the improvement of decision-making processes and even the transformation of business processes. Despite increasing investments, the effective use of these tools in companies varies widely. In this research, we seek to understand how, and for what purposes, SMM tools are used. To evaluate these tools, we build on the Resource-Based Theory (RBT). The research was carried out with a mixed-methods approach: a qualitative study served to develop and enrich a second, quantitative study. The results show that combining SMM resources (tool quality, human resources, etc.) with complementary resources makes it possible to build SMM capabilities (measurement, process, interaction, etc.) that lead to SMM performance. The support of the organization, and more specifically the role of managers, in activating SMM resources and capabilities is consistent with recent work deepening the notion of resource management. However, we detected remaining ambiguities concerning RBT, and we propose to resolve them by resorting to the extended resource-based theory. Finally, we present the contributions, limits, and perspectives of our research.
