161

Arquitetura de co-projeto hardware/software para implementação de um codificador de vídeo escalável padrão H.264/SVC / Hardware/software co-design architecture for the implementation of an H.264/SVC scalable video encoder

Husemann, Ronaldo January 2011 (has links)
In order to operate flexibly over heterogeneous networks, modern multimedia systems can adopt scalable coding, in which the video stream is composed of multiple layers, each one gradually refining the displayed video according to the capabilities of the receiver. The H.264/SVC specification currently represents the state of the art in this area owing to its improved coding efficiency, but it demands extremely high computational resources. In this context, this work presents a hardware/software co-design architecture that exploits the characteristics of the internal algorithms of the H.264/SVC encoder, seeking an adequate balance between the two technologies (hardware and software) for the practical implementation of a scalable encoder supporting up to 16 layers at 1920x1080 pixels. Starting from an H.264/SVC reference-code model, refined to reduce encoding time, strategies for module partitioning and for integrating the software and hardware entities were defined, taking into account data dependencies and the inherent parallelism of the algorithms, as well as practical restrictions imposed by communication interfaces and memory accesses. The transform, quantization, deblocking-filter and inter-layer prediction modules were implemented in hardware, while system management, entropy coding, rate control and the user interface remained in software. The complete solution, which integrates the hardware modules, synthesized on a development board, with the refined reference software, validates the proposal through the significant performance gains measured, making it a suitable solution for applications that require real-time scalable video coding.
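
The transform and quantization stages named above are small, regular kernels, which makes them natural candidates for the hardware side of such a co-design. As a point of reference only, here is a minimal Python sketch of the well-known 4x4 forward integer transform of H.264 followed by a deliberately simplified quantizer; the flat qstep is an illustrative assumption and does not reproduce the standard's QP-dependent scaling, nor the hardware design of the thesis.

```python
# Minimal sketch: 4x4 forward integer transform used in H.264/AVC and H.264/SVC,
# plus a deliberately simplified flat quantizer. The real standard applies
# position-dependent scaling derived from QP; 'qstep' here is only illustrative.
import numpy as np

CF = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]])

def forward_transform_4x4(block):
    """Core 4x4 integer transform: W = Cf * X * Cf^T (integer arithmetic only)."""
    return CF @ block @ CF.T

def quantize(coeffs, qstep=16):
    """Toy uniform quantizer standing in for H.264's QP-driven scaling."""
    return np.round(coeffs / qstep).astype(int)

residual = np.random.randint(-32, 32, size=(4, 4))   # e.g. a prediction residual block
levels = quantize(forward_transform_4x4(residual))
print(levels)
```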
162

Avaliação subjetiva de qualidade aplicada à codificação de vídeo escalável / Subjective video quality assessment applied to scalable video coding

Daronco, Leonardo Crauss January 2009 (has links)
The constant advances in data transmission and processing over the past years have enabled the creation of several applications and services based on multimedia data, such as video streaming, videoconferencing, remote classes and IPTV. Furthermore, advances in other areas of computing and engineering have produced a great variety of devices, from personal computers to mobile phones, capable of receiving these transmissions and displaying the multimedia data. Many of these applications and devices are widely adopted nowadays and, as the technology advances, users become more demanding about the quality of the services they use. Given the diversity of devices and networks available today, one of the big challenges of these multimedia systems is providing universal access to a transmission, i.e. adapting it to the receivers' characteristics and conditions. A suitable solution is to combine scalable video coding with layered IP multicast transmission governed by mechanisms for adaptability and congestion control. Since the final product of these multimedia systems is the multimedia data (mainly video and audio) presented to the user, the quality of these data largely defines the performance of the system and the users' satisfaction. This work presents a study of subjective quality assessment applied to video sequences coded with the scalable extension of the H.264 standard (SVC). A set of tests was performed to evaluate, primarily, the effect of transmission instability (variations in the number of video layers received) on video quality and the influence of the three scalability methods (spatial, temporal and quality) on subjective quality. The test design was based on a layered transmission system that uses protocols for adaptability and congestion control. The subjective assessments followed the ACR-HRR methodology and the recommendations of ITU-R Rec. BT.500 and ITU-T Rec. P.910. The results show that, contrary to expectations, the modelled instability does not cause large changes in the subjective quality of the videos, and that temporal scalability tends to yield considerably lower quality than the spatial and quality methods, the latter giving the best quality. The main contributions of this work are the results obtained in the assessments, together with the methodology used throughout its development (definition of the evaluation plan, use of tools such as JSVM, selection of the test material and execution of the assessments), the applications developed, the definition of future work and of possible goals for further quality evaluations.
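
For orientation, the sketch below shows the kind of elementary post-processing that turns ACR-style ratings into a mean opinion score with a 95% confidence interval, in the spirit of ITU-T Rec. P.910 analysis; the ratings are invented and this is not the evaluation code used in the study.

```python
# Illustrative MOS computation from ACR-style ratings (1-5 scale) with a 95% CI.
# The ratings are made-up; this is a generic sketch, not the thesis's tooling.
import math
import statistics

def mos_with_ci(ratings, z=1.96):
    mos = statistics.mean(ratings)
    sd = statistics.stdev(ratings)               # sample standard deviation
    half_width = z * sd / math.sqrt(len(ratings))
    return mos, (mos - half_width, mos + half_width)

ratings_for_one_sequence = [4, 5, 3, 4, 4, 5, 3, 4]   # hypothetical scores from 8 subjects
print(mos_with_ci(ratings_for_one_sequence))
```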
163

Um modelo híbrido para simulação de multidão com comportamentos variados em tempo real / A hybrid model for simulating crowds with different behaviors in real time

Teófilo Bezerra Dutra 14 March 2011 (has links)
Crowd simulation is a computationally expensive task, in which the behavior of many (tens to thousands of) agents must be reproduced realistically in a two-dimensional or three-dimensional environment. The agents need to interact with each other and with the environment, reacting to situations, switching behaviors and/or learning new behaviors during their "lifetime". Many crowd-simulation models have been developed over the years; they can be classified into two big groups (macroscopic and microscopic) according to how the agents are managed. Some works in the literature are based on macroscopic models, where the agents are grouped and guided by the potential field computed for their group. The construction of these fields is the bottleneck of such models, so only a few groups can be used if the simulation is to run at interactive frame rates. This work proposes a model built on a macroscopic model whose main goal is to reduce the cost of computing the groups' potential fields by discretizing them according to the needs of the environment. It also proposes adding groups that can steer the agents of a simulation toward momentary goals, which gives the crowd a wider variety of behaviors. Finally, it proposes the use of a social-force model to prevent collisions between agents and between agents and obstacles.
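
To make the collision-avoidance ingredient concrete, the following is a minimal sketch of a Helbing-style social-force update for one agent; all parameters (repulsion strength A, range B, relaxation time, agent radius) are illustrative assumptions rather than values from this work.

```python
# Minimal Helbing-style social-force step for collision avoidance between agents.
# All constants are illustrative assumptions, not values from the thesis.
import numpy as np

A, B, TAU, RADIUS = 2.0, 0.3, 0.5, 0.25   # repulsion strength/range, relaxation time, agent radius

def social_force(pos_i, vel_i, goal_dir, desired_speed, others):
    # Goal-driving term: relax the current velocity toward the desired velocity.
    force = (desired_speed * goal_dir - vel_i) / TAU
    # Pairwise repulsion from the other agents.
    for pos_j in others:
        diff = pos_i - pos_j
        dist = np.linalg.norm(diff)
        if dist > 1e-9:
            force += A * np.exp((2 * RADIUS - dist) / B) * (diff / dist)
    return force

# One Euler step for a single agent (positions and velocities are 2D vectors).
dt = 0.05
pos, vel = np.array([0.0, 0.0]), np.array([1.0, 0.0])
neighbours = [np.array([0.6, 0.1]), np.array([1.5, -0.4])]
vel = vel + dt * social_force(pos, vel, np.array([1.0, 0.0]), 1.3, neighbours)
pos = pos + dt * vel
print(pos, vel)
```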
164

Clustering Approaches for Multi-source Entity Resolution

Saeedi, Alieh 10 December 2021 (has links)
Entity resolution (ER), or deduplication, aims at identifying entities, such as specific customer or product descriptions, in one or several data sources that refer to the same real-world entity. ER is of key importance for improving data quality and plays a crucial role in data integration and querying. The previous generation of ER approaches focused on integrating records from two relational databases or on deduplication within a single database. In the era of Big Data, however, the number of available data sources is increasing rapidly, and large-scale data mining or querying systems need to integrate data obtained from numerous sources. For example, online digital libraries or e-shops incorporate publications or products from a large number of archives or suppliers across the world, or within a specified region or country, to provide a unified view for the user. This process requires consolidating data from numerous heterogeneous and mostly evolving data sources. As the number of sources grows, data heterogeneity and velocity increase, as does the variance in data quality. Multi-source ER, i.e. finding matching entities in an arbitrary number of sources, is therefore a challenging task. Previous efforts for matching and clustering entities between multiple sources (> 2) mostly treated all sources as a single source. This approach forgoes metadata or provenance information that could enhance integration quality and leads to poor results because differences in source quality are ignored. The conventional ER pipeline consists of blocking, pairwise matching of entities, and classification. To meet the new requirements, holistic clustering approaches that can scale to many data sources are needed. Holistic clustering-based ER should further overcome the restriction of pairwise linking by grouping entities from multiple sources into clusters; the clustering step aims at removing false links while adding missing true links across sources. Additionally, incremental clustering and repairing approaches are needed to cope with the ever-increasing number of sources and newly arriving entities. To this end, we developed novel clustering and repairing schemes for multi-source entity resolution. The approaches are capable of grouping entities from multiple clean (duplicate-free) sources, as well as handling data from an arbitrary combination of clean and dirty sources. The clustering schemes developed specifically for multi-source ER obtain superior results compared to general-purpose clustering algorithms. We also developed incremental clustering and repairing methods to handle evolving sources; the incremental approaches can incorporate new sources as well as new entities from existing sources, and the more sophisticated variant repairs previously determined clusters, yielding improved quality and a reduced dependency on the insertion order of new entities. To ensure scalability, parallel variants of all approaches are implemented on top of Apache Flink, a distributed processing engine. The proposed methods have been integrated in a new end-to-end ER tool named FAMER (FAst Multi-source Entity Resolution system). The FAMER framework comprises linking and clustering components covering both batch and incremental ER. The output of the linking part is a similarity graph in which each vertex represents an entity and each edge holds the similarity between two entities; this graph is the input of the clustering component. Comprehensive comparative evaluations show that the proposed clustering and repairing approaches for both batch and incremental ER achieve high quality while maintaining scalability.
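
As a toy illustration of the final clustering stage, the sketch below derives entity clusters from a similarity graph by thresholding edges and taking connected components with a union-find structure; this naive baseline ignores which source each record comes from and is far simpler than the multi-source clustering and repair schemes implemented in FAMER. Records and similarity scores are invented.

```python
# Toy baseline for the clustering stage of an ER pipeline: threshold the
# similarity graph and take connected components. FAMER's schemes are far more
# refined (e.g. source-aware); records and scores below are invented.
class UnionFind:
    def __init__(self, items):
        self.parent = {x: x for x in items}
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def cluster(records, similarity_edges, threshold=0.8):
    uf = UnionFind(records)
    for a, b, score in similarity_edges:        # edges of the similarity graph
        if score >= threshold:
            uf.union(a, b)
    clusters = {}
    for r in records:
        clusters.setdefault(uf.find(r), []).append(r)
    return list(clusters.values())

records = ["src1/p17", "src2/p03", "src3/p88", "src2/p41"]
edges = [("src1/p17", "src2/p03", 0.93), ("src2/p03", "src3/p88", 0.85),
         ("src1/p17", "src2/p41", 0.42)]
print(cluster(records, edges))
```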
165

Entwicklung einer skalierbaren Mikrowellen Plasmaquelle / Development of a scalable microwave plasma source

Roch, Uwe Julius-Herbert 20 December 2019 (has links)
This work has produced a novel, innovative and versatile microwave plasma source. Its key features are arbitrary scalability in length and a wide operating-pressure range from medium vacuum up to atmospheric pressure. Based on preliminary investigations and extensive simulations of the propagation of the microwave fields, a cavity with a cross-section of 100 mm width and 120 mm height was developed, which can be scaled in multiples of the 122 mm waveguide wavelength. A demonstrator with a length of 720 mm was built in this work. The eigenmode analysis showed that the required field distribution is preserved up to a frequency of 2.48368 GHz. The microwave power is coupled in through several waveguides arranged opposite and next to each other on the cavity. Extensive investigations of loss-free power coupling showed that phase-synchronous microwave injection is strictly required, since otherwise the efficiency of the plasma source drops sharply. To meet the demands of phase-synchronous coupling and loss-free microwave power distribution, 2-way and 4-way microwave power dividers were developed. Furthermore, the concept of injected phase locking for driving the plasma cavity with several pulsed microwave generators was successfully evaluated for the first time, and synchronized pulsing up to a 20 kHz pulse frequency with a minimum duty cycle of 60 % was demonstrated. The plasma-based stabilization of PAN fibers was tested, with characterization by Raman spectroscopy, density and diameter measurements. The carbonization of stabilized PAN fibers (PANOX, SGL) was demonstrated successfully: in a plasma gas mixture of Ar = 0.5 slm and N2 = 0.03 slm, at a fiber speed of 80 mm/min, a process pressure of 120-170 mbar and 2x 3 kW of synchronized microwave power, fiber temperatures of up to 1100 °C and thus maximum tensile strengths of 4200 MPa were achieved. Contents: 1 Introduction and motivation; 2 State of the art: 2.1 Plasma sources (2.1.1 RF plasma, 2.1.2 Corona plasma, 2.1.3 DBD plasma, 2.1.4 Microwave plasma, 2.1.5 Summary, 2.1.6 Fundamentals of microwave plasmas), 2.2 Simulation programs, 2.3 Conversion processes for carbon fibers (2.3.1 Stabilization, 2.3.2 Carbonization); 3 Problem statement and objectives; 4 Development of the plasma source: 4.1 Concept of the plasma source, 4.2 Resonator (4.2.1 Resonator fundamentals, 4.2.2 Analytical design, 4.2.3 Simulation of the resonator), 4.3 Microwave coupling (4.3.1 Fundamentals of microwave transmission, 4.3.2 Simulation of the coupling, 4.3.3 Simulation of the plasma chamber), 4.4 Microwave power distribution (4.4.1 Power splitters, 4.4.2 Synchronized, pulse-capable microwave generator array); 5 Construction of a demonstrator plasma source: 5.1 Evaluation of the plasma source (5.1.1 Experimental determination of the operating parameters, 5.1.2 Optical emission spectroscopy, 5.1.3 Investigation of plasma homogeneity), 5.2 Application example: fiber treatment (5.2.1 Fiber handling setup, 5.2.2 Fiber characterization, 5.2.3 Stabilization results, 5.2.4 Carbonization results); 6 Summary and outlook; 7 Bibliography
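
For a rough feel of the length scales mentioned above, the sketch below evaluates the free-space wavelength at 2.45 GHz and the TE10 guided wavelength in a standard WR-340 feed waveguide; the WR-340 width is an assumed example and the numbers are not taken from the thesis.

```python
# Back-of-the-envelope wavelength check at the common 2.45 GHz magnetron
# frequency. The WR-340 broad-wall width is an assumption for illustration;
# the thesis cavity has its own 100 mm x 120 mm cross-section.
import math

C = 299_792_458.0          # speed of light in m/s
f = 2.45e9                 # frequency in Hz
a = 0.08636                # WR-340 broad-wall width in m (assumed)

lam0 = C / f                                        # free-space wavelength
lam_c = 2 * a                                       # TE10 cutoff wavelength
lam_g = lam0 / math.sqrt(1 - (lam0 / lam_c) ** 2)   # TE10 guided wavelength

print(f"free-space wavelength: {lam0 * 1000:.1f} mm")   # about 122 mm
print(f"TE10 guided wavelength: {lam_g * 1000:.1f} mm")
```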
166

VoloDB: High Performance and ACID Compliant Distributed Key Value Store with Scalable Prune Index Scans

Dar, Ali January 2015 (has links)
Relational databases provide an efficient mechanism to store and retrieve structured data with ACID properties, but they are not ideal for every scenario: their scalability is limited by the huge data-processing requirements of modern systems. NoSQL systems take a different view of a database; they generally store unstructured data and relax some of the ACID properties in order to achieve massive scalability. There are many flavors of NoSQL system, one of which is the key-value store. Most key-value stores currently on the market offer reasonable performance but give up many important features, such as transactions, strong consistency and range queries, while the stores that do offer these features lack good performance. The aim of this thesis is to design and implement VoloDB, a key-value store that provides high throughput for both reads and writes without compromising on ACID properties. VoloDB is built over MySQL Cluster and, instead of using high-level abstractions, communicates with the cluster through the highly efficient native low-level asynchronous C++ NDB API. VoloDB talks directly to the data nodes, without going through the MySQL Server, which further improves performance. It exploits many of MySQL Cluster's features, such as primary-key and partition-key lookups and prune index scans, to hit only one of the data nodes and achieve maximum performance. VoloDB offers a high-level abstraction that hides the complexity of the underlying system without requiring the user to think about internal details. The key-value store also offers additional features such as multi-query transactions and bulk-operation support, and C++ client libraries are provided to let developers interface easily with the server. An extensive evaluation benchmarks various scenarios and compares VoloDB with another high-performance open-source key-value store.
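
The following sketch gives a hypothetical client-side view of the kind of interface such a store exposes: primary-key reads and writes plus multi-operation transactions with all-or-nothing semantics. All class and method names are invented for illustration; the actual VoloDB client libraries are C++ and talk to the MySQL Cluster data nodes through the native NDB API.

```python
# Hypothetical, in-memory stand-in for a key-value store with primary-key
# access and multi-operation transactions. Names are invented for illustration;
# this is not VoloDB's actual API.
class ToyKVStore:
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def put(self, key, value):
        self._data[key] = value

    def transaction(self, operations):
        """Apply a list of ('put', k, v) / ('delete', k) ops atomically."""
        snapshot = dict(self._data)              # trivial all-or-nothing semantics
        try:
            for op in operations:
                if op[0] == "put":
                    self._data[op[1]] = op[2]
                elif op[0] == "delete":
                    self._data.pop(op[1], None)
                else:
                    raise ValueError(f"unknown op {op[0]}")
        except Exception:
            self._data = snapshot                # roll back on any failure
            raise

store = ToyKVStore()
store.transaction([("put", "user:1", {"name": "Ada"}), ("put", "user:2", {"name": "Alan"})])
print(store.get("user:1"))
```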
167

Gargamel : accroître les performances des DBMS en parallélisant les transactions en écriture / Gargamel : boosting DBMS performance by parallelising write transactions

Cincilla, Pierpaolo 15 September 2014 (has links)
Databases often scale poorly in distributed configurations, due to the cost of concurrency control and to resource contention. The alternative of centralizing writes works well only for read-intensive workloads, whereas weakening transactional properties is problematic for application developers. Our solution, Gargamel, spreads non-conflicting update transactions across different database replicas while still providing strong transactional guarantees. In effect, Gargamel partitions the database dynamically according to the update workload. Each database replica runs sequentially, at full bandwidth, and mutual synchronisation between replicas remains minimal. Evaluations with our prototype show that Gargamel improves both response time and load by an order of magnitude when contention is high (a highly loaded system with bounded resources), and that otherwise the slow-down is negligible.
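
A much-simplified sketch of the scheduling idea follows: transactions whose write-sets conflict are serialized on the same replica queue, while non-conflicting ones are spread across replicas and can run in parallel. Gargamel itself predicts conflicts before execution and keeps inter-replica synchronisation minimal; the explicit write-sets and the placement heuristic below are illustrative assumptions, not the system's actual classifier.

```python
# Illustrative conflict-aware placement of write transactions onto replica
# queues. Write-sets are assumed to be declared up front; the heuristic is a
# toy stand-in for Gargamel's conflict prediction.
class ConflictAwareScheduler:
    def __init__(self, n_replicas):
        self.queues = [[] for _ in range(n_replicas)]
        self.key_owner = {}                      # write-key -> replica index

    def schedule(self, txn_id, write_set):
        # If any written key was already assigned, reuse that replica so the
        # conflicting transactions execute sequentially there.
        owners = {self.key_owner[k] for k in write_set if k in self.key_owner}
        if owners:
            replica = min(owners)                # simple tie-break on overlap
        else:
            replica = min(range(len(self.queues)), key=lambda i: len(self.queues[i]))
        for k in write_set:
            self.key_owner[k] = replica
        self.queues[replica].append(txn_id)
        return replica

sched = ConflictAwareScheduler(n_replicas=3)
print(sched.schedule("t1", {"account:7"}))   # goes to an empty replica
print(sched.schedule("t2", {"account:9"}))   # non-conflicting: another replica
print(sched.schedule("t3", {"account:7"}))   # conflicts with t1: same replica as t1
```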
168

Bayesian Nonparametric Modeling and Inference for Multiple Object Tracking

January 2019 (has links)
The problem of multiple object tracking seeks to jointly estimate the time-varying cardinality and the trajectory of each object. Tracking multiple objects involves numerous challenges, including a time-varying number of measurements, varying constraints, and changing environmental conditions. In this thesis, the proposed statistical methods integrate physics-based models with Bayesian nonparametric methods to address the main challenges of a tracking problem. In particular, Bayesian nonparametric methods are exploited to efficiently and robustly infer object identity and learn time-dependent cardinality; together with Bayesian inference methods, they are also used to associate measurements to objects and estimate object trajectories. These methods differ fundamentally from current methods, which are mainly based on random finite set theory. The first contribution proposes dependent nonparametric models, such as the dependent Dirichlet process and the dependent Pitman-Yor process, to capture the inherent time-dependency of the problem at hand. These processes are used as priors for object state distributions to learn dependent information between previous and current time steps, and Markov chain Monte Carlo sampling methods exploit the learned information to sample from posterior distributions and update the estimated object parameters. The second contribution proposes a novel, robust, and fast nonparametric approach based on a diffusion process over infinite random trees to infer object cardinality and trajectory. This method follows the hierarchy induced by objects entering and leaving a scene and the time-dependency between unknown object parameters; Markov chain Monte Carlo sampling methods integrate the prior distributions over the infinite random trees with time-dependent diffusion processes to update object states. The third contribution develops hierarchical models as a prior for statistically dependent measurements in a single-object tracking setup. Dependency among the sensor measurements provides extra information, which is incorporated to achieve optimal tracking performance; the hierarchical Dirichlet process prior provides the required flexibility for inference, and the Bayesian tracker is integrated with it to accurately estimate the object trajectory. The fourth contribution proposes an approach that models both multiple dependent objects and multiple dependent measurements. It integrates dependent Dirichlet process modeling of the objects with hierarchical Dirichlet process modeling of the measurements to fully capture the dependency among both. The Bayesian nonparametric models can associate each measurement with the corresponding object and exploit the dependency among them to infer object trajectories more accurately, while Markov chain Monte Carlo methods combine the dependent Dirichlet process with the hierarchical Dirichlet process to infer object identity and cardinality. Simulations demonstrate the improvement in multiple object tracking performance compared to approaches based on random finite set theory. / Dissertation/Thesis / Doctoral Dissertation Electrical Engineering 2019
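
To illustrate why Dirichlet-process priors suit problems with unknown, time-varying cardinality, the sketch below draws cluster assignments from a plain Chinese restaurant process, which allocates measurements to an unbounded number of clusters ("objects") without fixing their count in advance. This is a generic DP illustration, not the dependent DP/Pitman-Yor constructions or the MCMC samplers developed in the thesis.

```python
# Chinese-restaurant-process draw: each measurement joins an existing cluster
# with probability proportional to its size, or opens a new cluster with
# probability proportional to alpha. Generic DP sketch only.
import random

def crp_assignments(n_measurements, alpha=1.0, seed=0):
    random.seed(seed)
    counts = []                                  # measurements per cluster so far
    labels = []
    for n in range(n_measurements):
        weights = counts + [alpha]               # existing clusters vs. a new one
        total = n + alpha
        r = random.uniform(0, total)
        acc, choice = 0.0, len(counts)
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                choice = k
                break
        if choice == len(counts):
            counts.append(1)                     # open a new cluster (new object)
        else:
            counts[choice] += 1
        labels.append(choice)
    return labels

print(crp_assignments(15, alpha=1.5))            # e.g. [0, 0, 1, 0, 2, ...]
```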
169

Scalable Algorithms for the Analysis of Massive Networks

Angriman, Eugenio 22 March 2022 (has links)
Network analysis aims to unveil non-trivial insights from networked data by studying the relationship patterns between the entities of a network. A popular such insight is to quantify the importance of an entity with respect to the others according to some criterion. Another is to find the most suitable matching partner for each participant of a network, knowing the pairwise preferences of the participants to be matched with each other, a problem known as Maximum Weighted Matching (MWM). Since the notion of importance is tied to the application under consideration, numerous centrality measures have been introduced. Many of these measures, however, were conceived in a time when computing power was very limited and networks were much smaller than today's, so scalability to large datasets was not a concern. Today, massive networks with millions of edges are ubiquitous, and a complete exact computation of traditional centrality measures is often too time-consuming. The issue is amplified if the objective is to find the group of k vertices that is most central as a group. Scalable algorithms to identify highly central (groups of) vertices on massive graphs are thus of pivotal importance for large-scale network analysis. In addition to their size, today's networks often evolve over time, which poses the challenge of efficiently updating results after a change occurs; hence, efficient dynamic algorithms are essential for modern network-analysis pipelines. In this work, we propose scalable algorithms for identifying important vertices in a network and for efficiently updating the results in evolving networks. In real-world graphs with up to hundreds of millions of edges, most of our algorithms require seconds to a few minutes to perform these tasks, a substantial improvement over the state of the art. Further, we extend a state-of-the-art algorithm for MWM to dynamic graphs; experiments show that our dynamic MWM algorithm handles updates in graphs with billions of edges in milliseconds.
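
To make the group-centrality objective concrete, here is a deliberately naive sketch that greedily grows a group of k vertices maximizing harmonic group closeness, with group-to-vertex distances obtained by multi-source BFS; the brute-force recomputation shown here is exactly the kind of cost the scalable and dynamic algorithms of the thesis avoid.

```python
# Naive greedy selection of a central group of k vertices under harmonic group
# closeness. Illustrative only: recomputes a multi-source BFS per candidate.
from collections import deque

def multi_source_bfs(adj, sources):
    dist = {s: 0 for s in sources}
    queue = deque(sources)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def group_harmonic_closeness(adj, group):
    dist = multi_source_bfs(adj, list(group))
    return sum(1.0 / d for v, d in dist.items() if v not in group and d > 0)

def greedy_group(adj, k):
    group = set()
    for _ in range(k):
        best = max((v for v in adj if v not in group),
                   key=lambda v: group_harmonic_closeness(adj, group | {v}))
        group.add(best)
    return group

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 5], 4: [2, 5], 5: [3, 4]}
print(greedy_group(adj, k=2))
```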
170

A study of limitations and performance in scalable hosting using mobile devices / En studie i begränsningar och prestanda för skalbar hosting med hjälp av mobila enheter

Rönnholm, Niklas January 2018 (has links)
Distributed computing, in which volunteers supply the computing power that organizations need, is today a widely used technique. This thesis set out to benchmark the performance of distributed computing restricted to mobile devices, since this kind of volunteer support is seldom provided by mobile devices. It proposes two approaches to harnessing the computational power and infrastructure of a group of mobile devices. The problems used for benchmarking are small instances of deep-learning training. One requirement, imposed by the non-static nature of mobile devices, was that participation should be possible without any significant prior configuration; the protocol used for communication was HTTP. Deep learning was chosen as the benchmarking problem because of its versatility and variability. The results showed that the technique can be applied successfully to some types of problem instances, and that the two proposed approaches favour different problem instances. The highest request rate found for the prototype while maintaining a 99% response rate corresponded to a 2100% increase in efficiency compared to a regular server, under the premise that just under 2000 mobile devices were available and only for particular problem instances.
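
The coordination pattern described above can be pictured as a plain HTTP endpoint from which devices fetch small work units and to which they post results, with no device-side configuration beyond knowing the URL. The endpoints, payloads and in-memory queue below are invented for illustration; this is a generic sketch, not the thesis prototype.

```python
# Minimal HTTP task-distribution coordinator: devices GET /task for a work unit
# and POST /result with the outcome. Endpoints and payloads are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

TASKS = [{"task_id": i, "payload": [i, i + 1]} for i in range(100)]   # toy work units
RESULTS = {}

class CoordinatorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/task" and TASKS:
            body = json.dumps(TASKS.pop(0)).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(204)              # no work left (or unknown path)
            self.end_headers()

    def do_POST(self):
        if self.path == "/result":
            length = int(self.headers.get("Content-Length", 0))
            result = json.loads(self.rfile.read(length))
            RESULTS[result["task_id"]] = result["value"]
            self.send_response(200)
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), CoordinatorHandler).serve_forever()
```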
