About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

Nprof : uma ferramenta para monitoramento de aplicações distribuídas / Nprof : a monitoring tool for distributed applications

Brugnara, Telmo January 2006
The growing complexity of software and the increasing workload to which systems are submitted are well-known trends in computing, especially for distributed and web systems. The increasing workload creates demand for systems that make better use of the available computing resources, while the growing complexity demands specific measures to prevent design faults. Software engineers therefore pursue two main objectives: performance and dependability. To support both, monitoring systems have been developed that automate the gathering and analysis of data about running systems. This dissertation contributes in two areas: identifying the metrics relevant to monitoring distributed Java applications, and building a tool to monitor and profile distributed applications using the new facilities of JDK 1.5 together with established Java techniques such as dynamic class loading and bytecode instrumentation. To evaluate the proposed tool, three case studies were carried out: one runs an existing application without modification; another measures the tool's overhead under different parameters; and a third monitors a distributed system. We consider that the tool meets its goal of monitoring distributed applications through the combination of distinct techniques and APIs: Nprof can monitor several nodes of a distributed application concurrently, and it allows online visualization of the collected data. In addition, the simultaneous collection of data from different nodes of a distributed application can help uncover relations among events that occur during its execution.
142

Jämförelse av J2EE och .NET från ett Web Services perspektiv. / Comparison of J2EE and .NET from a Web Services point of view

Areskoug, Andreas January 2006
This thesis compares the performance of Web Services hosted on either the J2EE or the .NET platform, and investigates which platform should be chosen to host Web Services, based mainly on performance.
143

Large Scale Parallel Inference of Protein and Protein Domain families / Inférence des familles de protéines et de domaines protéiques à grande échelle

Rezvoy, Clément 28 September 2011
Protein domains are recurring independent segments of proteins. The combinatorial arrangement of domains is at the root of the functional and structural diversity of proteins. Several methods have been developed to infer protein domain decomposition and domain family clustering from sequence information alone. MkDom2, one of those methods, infers domain families in a greedy fashion: families are inferred one after the other to produce a delineation of domains on proteins and a clustering of those domains into families. MkDom2 is instrumental in building the ProDom database. The exponential growth of the number of sequences to process has rendered MkDom2 obsolete; it would now take several years to compute a new release of ProDom. We present a new algorithm, MPI_MkDom2, that computes several families at once across a distributed computing platform. MPI_MkDom2 is an asynchronous distributed algorithm that manages load balancing to ensure efficient platform usage, and it guarantees a non-overlapping partitioning of the whole protein set. A proximity measure between domain clusterings is defined to assess the effect of parallel computation on the result. We also propose a second algorithm, MPI_MkDom3, that simultaneously computes a clustering of protein domains and a clustering of full proteins sharing the same domain arrangement.
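The greedy, one-family-at-a-time strategy attributed to MkDom2 can be caricatured in a few lines. This is a toy sketch under strong assumptions (proteins reduced to lists of pre-segmented domain labels, frequency as the only seed criterion), not the actual MkDom2 procedure:

```python
from collections import Counter

def greedy_domain_families(proteins):
    """Greedy clustering sketch: repeatedly seed a family with the most
    frequent still-unclaimed segment label and claim every copy of it,
    yielding a non-overlapping partition of all segments into families."""
    unclaimed = Counter(seg for p in proteins for seg in p)
    families = {}  # family seed -> number of member segments
    while unclaimed:
        seed, count = unclaimed.most_common(1)[0]
        families[seed] = count
        del unclaimed[seed]
    return families

# Hypothetical toy proteins given as sequences of segment labels.
fams = greedy_domain_families([["A", "B"], ["A", "C"], ["A", "B"]])
```

The sequential dependency is visible here: each family's membership depends on what earlier families already claimed, which is exactly what makes parallelizing the real algorithm (as MPI_MkDom2 does) non-trivial.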
144

Enforcing Security Policies On GPU Computing Through The Use Of Aspect-Oriented Programming Techniques

Albassam, Bader 29 June 2016
This thesis presents a new security policy enforcer designed for securing parallel computation on CUDA GPUs. We show how the very features that make a GPGPU desirable have already been exploited in existing attacks, reinforcing the need for security protections on GPGPUs. An aspect weaver was designed for CUDA with the goal of using aspect-oriented programming for security policy enforcement. Empirical testing verified the ability of our aspect weaver to enforce various policies. Furthermore, a performance analysis demonstrated that the policy enforcer introduces no significant overhead compared to manual insertion of policy code. Finally, future research goals are presented in a plan of work. We hope that this thesis will provide long-term research goals to guide the field of GPU security.
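The thesis weaves aspects into CUDA code; as a language-neutral illustration of the same "around advice" idea, here is a hypothetical Python decorator that enforces a security policy at a function's join point. The policy and function names are invented for the example and do not come from the thesis:

```python
import functools

def enforce(policy):
    """AOP-style around advice: weave a policy check around a function,
    rejecting any call whose arguments violate the policy."""
    def weave(fn):
        @functools.wraps(fn)
        def wrapped(*args, **kwargs):
            if not policy(*args, **kwargs):
                raise PermissionError(f"policy violated in {fn.__name__}")
            return fn(*args, **kwargs)
        return wrapped
    return weave

# Hypothetical bounds-check policy woven around a read operation.
@enforce(lambda buf, n: 0 <= n <= len(buf))
def read_n(buf, n):
    return buf[:n]
```

The point of weaving, here as in the thesis, is that the policy lives outside the function body, so the same check can be applied to many join points without manually editing each one.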
145

Distribuovaný systém kryptoanalýzy / Distributed system for cryptanalysis

Zelinka, Miloslav Unknown Date
This work deals with cryptanalysis, computational performance, and its distribution. It describes methods of distributing computation for the needs of cryptanalysis, and then focuses on further methods for speeding up the breaking of cryptographic algorithms, particularly hash functions. The work explains the relatively new term cloud computing and its use in cryptography, followed by examples of practical applications. It also discusses how grid computing can be used for cryptanalysis. The last part of the work is the design of a system that uses cloud computing to break access passwords.
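The core idea of the work, spreading a password-breaking search over many nodes, reduces to partitioning a keyspace. Below is a minimal sketch of that partitioning; the hash function, alphabet, and modulo scheme are assumptions for illustration, not the thesis's design:

```python
import hashlib
import string
from itertools import product

def crack_partition(target_hash, length, worker, n_workers):
    """Brute-force one slice of the keyspace: worker i tries only the
    candidates whose enumeration index is congruent to i mod n_workers,
    so n_workers nodes search disjoint slices in parallel."""
    alphabet = string.ascii_lowercase
    for i, tup in enumerate(product(alphabet, repeat=length)):
        if i % n_workers != worker:
            continue
        candidate = "".join(tup)
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None
```

In a cloud or grid deployment, each node would run this with its own `worker` index; only the node whose slice contains the password reports a hit, and no coordination is needed beyond distributing the indices.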
146

Supervision en ligne de propriétés temporelles dans les systèmes distribués temps-réel / Online monitoring of temporal properties in distributed real-time systems

Baldellon, Olivier 07 November 2014
Today's systems grow more complex every day, being both distributed and subject to real-time constraints. Conventional methods for guaranteeing dependability, such as testing, fault injection, and formal methods, are no longer sufficient on their own. In order to handle errors as they appear in a given distributed system, we want a program that watches the system and raises an alert whenever it deviates from its specification; such a program is called a monitor. A monitor simply interprets information received from the system in the form of messages, called events, and derives a diagnosis. The objective of this thesis is to build a distributed monitor that verifies temporal properties online. In particular, we want the monitor to check as many properties as possible with as little information as possible; the tool is therefore designed to keep working even when observation is imperfect, that is, even when some events arrive late or are never received. For obvious reasons of performance and fault tolerance, we pursued this goal in a distributed manner, proposing a distributable protocol based on the distributed execution of a timed Petri net. To verify the feasibility and effectiveness of the approach, we built an implementation called Minotor, which turned out to have very good performance. Finally, to demonstrate the expressiveness of the formalism used to state the specifications to be verified, we detailed a set of properties expressed as Petri nets with the dual semantics introduced in this thesis (the transitions are partitioned into two categories, each with its own semantics).
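The kind of temporal property such a monitor checks can be made concrete with a toy example. The sketch below is a deliberate simplification with none of Minotor's Petri-net machinery: it verifies a bounded-response property ("every request is answered within a deadline") over a batch of possibly out-of-order events, with invented event names:

```python
def check_deadlines(events, deadline):
    """Check the property: every ("req", id, t) event is matched by a
    ("resp", id, t2) event with t2 - t <= deadline. Events may arrive in
    any order; a verdict is produced once all events are in, mimicking
    diagnosis under imperfect observation. Returns the violating ids."""
    reqs, resps = {}, {}
    for kind, ident, t in events:
        (reqs if kind == "req" else resps)[ident] = t
    return [i for i, t in reqs.items()
            if i not in resps or resps[i] - t > deadline]

# Request "b" never gets a response, so it violates the property.
violations = check_deadlines(
    [("req", "a", 0), ("resp", "a", 3), ("req", "b", 1)], deadline=5)
```

An online monitor cannot wait for "all events", which is exactly why the thesis needs machinery for verdicts under late or missing events; this batch version only shows what property is being decided.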
147

Distributed Intelligence-Assisted Autonomic Context-Information Management : A context-based approach to handling vast amounts of heterogeneous IoT data

Rahman, Hasibur January 2018
As rapid growth in Internet-of-Things (IoT) data continues, the focus has shifted towards utilizing and analysing the data in order to make sense of it, with the aim of making the instantaneous, automated, and informed decisions that will drive the future IoT. This amounts to extracting and applying knowledge from IoT data, which carries both substantial challenges and high value. Context plays an important role in reaping value from data and is capable of countering the IoT data challenges, but managing heterogeneous contextualized data is infeasible with existing solutions, which mandates new ones. Research until now has mostly concentrated on cloud-based IoT solutions; among other issues, this hampers real-time and fast decision-making. In view of this, this dissertation studies a context-based approach entitled Distributed intelligence-assisted Autonomic Context Information Management (DACIM), the purpose of which is to efficiently (i) utilize and (ii) analyse IoT data. To address the challenges of enabling DACIM, the dissertation first proposes a logical-clustering approach for proper IoT data utilization. The environment in which Things are immersed changes rapidly and is dynamic; self-organization is therefore supported through proposed self-* algorithms, which achieved a rate of 10 organized Things per second and a high accuracy rate for Things joining. Contextualized IoT data further requires scalable dissemination, which is addressed by a Publish/Subscribe model shown to sustain a high publication rate and fast subscription matching. The dissertation ends with a new approach that distributes intelligence for analysing context information, bringing some knowledge applications from the cloud to the edge; the edge-based solution is equipped with intelligence that enables faster responses and reduced dependency on rules by leveraging artificial intelligence techniques. To infer knowledge for different IoT applications closer to the Things, a multi-modal reasoner is proposed and demonstrates faster response. The evaluation of the designed and developed DACIM, distributed over seven publications, gives promising results; from this it can be concluded that a distributed intelligence-assisted context-based approach contributing to autonomic context information management in the ever-expanding IoT realm is feasible. (At the time of the doctoral defense, Paper 7 was submitted but not yet published.)
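The Publish/Subscribe dissemination mentioned above can be sketched as a minimal topic-based broker. This is an illustrative stand-in, not DACIM's implementation, and the topic names are invented:

```python
from collections import defaultdict

class ContextBroker:
    """Minimal topic-based Publish/Subscribe broker: subscribers register
    a callback per topic, and each publication is delivered to every
    subscriber of that exact topic."""
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)

    def publish(self, topic, data):
        for cb in self.subs[topic]:
            cb(data)

broker = ContextBroker()
received = []
broker.subscribe("room1/temperature", received.append)
broker.publish("room1/temperature", 21.5)
broker.publish("room2/humidity", 0.4)  # no subscriber, silently dropped
```

The decoupling shown here (publishers never know who consumes their data) is what makes the pattern scale to many heterogeneous Things; a production broker would add wildcard matching and distribution across nodes.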
148

Towards Fault Reactiveness in Wireless Sensor Networks with Mobile Carrier Robots

Falcon Martinez, Rafael Jesus January 2012
Wireless sensor networks (WSN) increasingly permeate modern societies. In spite of their plethora of successful applications, WSN are often unable to surmount the many operational challenges that unexpectedly arise during their lifetime. Fortunately, robotic agents can now assist a WSN in various ways. This thesis illustrates how mobile robots that can carry a limited number of sensors help the network react to sensor faults, either during or after its deployment in the monitored region. Two scenarios are envisioned. In the first, carrier robots surround a point of interest with multiple sensor layers (focused coverage formation); we put forward the first known algorithm of its kind in the literature, which is energy-efficient, fault-reactive, and aware of the bounded robot cargo capacity. The second is replacing damaged sensing units with spare, functional ones (coverage repair), which gives rise to the formulation of two novel combinatorial optimization problems. Three nature-inspired metaheuristic approaches that run at a centralized location are proposed; they find good-quality solutions in a short time. Two frameworks for identifying the damaged nodes are considered. The first leans on diagnosable systems, i.e. existing distributed detection models in which individual units perform tests on each other; two swarm intelligence algorithms are designed to quickly and reliably spot faulty sensors in this context. The second is an evolving risk management framework for WSN formulated entirely in this thesis.
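Coverage repair, as posed above, is at heart an assignment problem: which robot should carry a spare sensor to which faulty location. A greedy nearest-robot baseline (a sketch only; the thesis uses nature-inspired metaheuristics and richer cost models) can be written as:

```python
import math

def plan_repairs(robots, faults, capacity):
    """Greedy coverage-repair baseline: for each faulty location, send
    the closest robot that still has spare sensors on board. Returns a
    list of (robot, fault) assignments."""
    load = {r: 0 for r in robots}
    plan = []
    for f in faults:
        best = min((r for r in robots if load[r] < capacity),
                   key=lambda r: math.dist(r, f), default=None)
        if best is None:  # all robots out of spares
            break
        load[best] += 1
        plan.append((best, f))
    return plan

# Two robots with one spare each, two invented fault locations.
plan = plan_repairs([(0, 0), (10, 0)], [(1, 0), (9, 0)], capacity=1)
```

A greedy plan like this can be far from optimal (it ignores travel order and global distance), which is precisely the gap the thesis's metaheuristics are meant to close.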
149

Proposta e avaliação de desempenho de um algoritmo de balanceamento de carga para ambientes distribuídos heterogêneos escaláveis / Proposal and performance evaluation of a load balancing algorithm for heterogeneous scalable distributed environments

Rodrigo Fernandes de Mello 27 November 2003
Load balancing algorithms are used in distributed systems to even out the occupation of the available computational resources. A homogeneous occupation of the environment makes it possible to optimize resource allocation and, consequently, to increase application performance. With the advent of large-scale distributed systems, research is needed on load balancing algorithms able to manage such systems efficiently. This efficiency is measured by the number of messages generated in the environment, the support for heterogeneous environments, the use of policies that consume few system resources, the stability under high load, the scalability of the system, and low response times. To meet the needs of large-scale distributed systems, this doctorate proposes, presents, and evaluates a new load balancing algorithm named TLBA (Tree Load Balancing Algorithm). The algorithm organizes the computers of the system into a logical tree topology, over which the load balancing operations are executed. To validate TLBA, a simulator was built; the tests it was submitted to confirmed the algorithm's contributions, which include the small number of messages generated by load balancing operations, stability under high load, and low average process response times. To validate the simulation results, a TLBA prototype was implemented, which confirmed the simulation results and, consequently, the contributions of the algorithm.
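As a rough intuition for what balancing over a tree topology aims at, here is a hypothetical sketch: it flattens a tree of per-node loads and redistributes the total evenly over the leaves. The real TLBA exchanges load only along tree edges, precisely to keep message counts low, which this simplification ignores:

```python
def balance(tree):
    """Idealized balancing sketch: given a load tree (nested lists with
    integer loads at the leaves), return the per-leaf loads after evenly
    redistributing the total; any remainder goes to the leftmost leaves.
    This is the target state a tree-based balancer converges towards."""
    def leaves(t):
        return [t] if isinstance(t, int) else [x for c in t for x in leaves(c)]
    ls = leaves(tree)
    total, n = sum(ls), len(ls)
    base, extra = divmod(total, n)
    return [base + (1 if i < extra else 0) for i in range(n)]

# Four nodes arranged as a binary tree, with loads 5, 1, 0 and 2.
after = balance([[5, 1], [0, 2]])
```

The interesting part of an algorithm like TLBA is reaching (approximately) this state while each node talks only to its parent and children; the sketch shows the goal, not the protocol.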
150

Portabilita distribuovaných výpočtů v rámci cloudových infrastruktur / Portability of Distributed Computing in Cloud Infrastructures

Duong, Cuong Tuan January 2019
The master’s thesis focuses on the analysis of solutions for distributed computing of metagenomics data in cloud infrastructures. It describes the specific META-pipe platform, based on a client-server architecture, in the infrastructure of the public academic cloud EGI Federated Cloud, sponsored by the European project ELIXIR-EXCELERATE. The thesis focuses especially on open-source software such as Terraform and Ansible.
