231

Cooperative Resource Management for Parallel and Distributed Systems

Klein-Halmaghi, Cristian 29 November 2012 (has links) (PDF)
High-Performance Computing (HPC) resources, such as supercomputers, clusters, grids and HPC clouds, are managed by Resource Management Systems (RMSs) that multiplex resources among multiple users and decide how computing nodes are allocated to user applications. As more and more petascale computing resources are built and exascale is to be achieved by 2020, optimizing resource allocation to applications is critical to ensure their efficient execution. However, current RMSs, such as batch schedulers, offer only a limited interface. In most cases, the application has to blindly choose resources at submission, without being able to adapt its choice to the state of the target resources, either before it starts or during execution. The goal of this thesis is to improve resource management, so as to allow applications to allocate resources efficiently. We achieve this by proposing software architectures that promote collaboration between the applications and the RMS, thus allowing applications to negotiate the resources they run on. To this end, we start by analysing the various types of applications and their unique resource requirements, categorizing them as rigid, moldable, malleable or evolving. For each case, we highlight the opportunities they open up for improving resource management. The first contribution deals with moldable applications, for which resources are negotiated only before they start. We propose CooRMv1, a centralized RMS architecture, which delegates resource selection to the application launchers. Simulations show that the solution is both scalable and fair. The results are validated through a prototype implementation deployed on Grid'5000. Second, we focus on negotiating allocations on geographically distributed resources managed by multiple institutions. We build upon CooRMv1 and propose distCooRM, a distributed RMS architecture, which allows moldable applications to efficiently co-allocate resources managed by multiple independent agents.
Simulation results show that distCooRM is well-behaved and scales well for a reasonable number of applications. Next, attention shifts to run-time negotiation of resources, so as to improve support for malleable and evolving applications. We propose CooRMv2, a centralized RMS architecture that enables efficient scheduling of evolving applications, especially non-predictable ones. It allows applications to inform the RMS of their maximum expected resource usage through pre-allocations. Resources that are pre-allocated but unused can be filled by malleable applications. Simulation results show that considerable gains can be achieved. Last, production-ready software is used as a starting point to illustrate both the interest and the difficulty of improving cooperation between existing systems. GridTLSE is used as an application and DIET as an RMS to study a previously unsupported use case. We identify the underlying problem of scheduling optional computations and propose an architecture to solve it. Real-life experiments on the Grid'5000 platform show improvements in several metrics, such as user satisfaction, fairness and the number of completed requests. Moreover, the solution is shown to be scalable.
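The pre-allocation idea described for CooRMv2 can be sketched in a few lines. This is an illustrative sketch only, not CooRMv2's actual interface: the function name and the accounting are invented, and a real RMS would track individual nodes and time windows rather than simple counts.

```python
def fill_preallocations(prealloc, used, malleable_demand):
    """CooRMv2-style sketch: nodes that evolving applications have
    pre-allocated but do not currently use can be lent to malleable
    applications, which must shrink again when the owner grows."""
    spare = {app: prealloc[app] - used[app] for app in prealloc}
    lendable = sum(spare.values())
    granted = min(lendable, malleable_demand)
    return granted, lendable - granted

# An evolving application pre-allocated 64 nodes but currently uses 40:
# a malleable application asking for 16 nodes can be served entirely
# from the unused pre-allocation, with 8 spare nodes left over.
granted, remaining = fill_preallocations(
    prealloc={"evolving": 64}, used={"evolving": 40}, malleable_demand=16)
print(granted, remaining)  # 16 8
```

When the evolving application later grows into its pre-allocation, the RMS reclaims the lent nodes, which is exactly why only malleable applications are allowed to fill them.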
232

An Energy-Efficient Reservation Framework for Large-Scale Distributed Systems

Orgerie, Anne-Cécile 27 September 2011 (has links) (PDF)
Over the past few years, the energy consumption of Information and Communication Technologies (ICT) has become a major issue. Nowadays, ICT accounts for 2% of global CO2 emissions, an amount similar to that produced by the aviation industry. Large-scale distributed systems (e.g. grids, clouds and high-performance networks) are often heavy electricity consumers because, to meet high-availability requirements, their resources are always powered on even when they are not in use. Reservation-based systems guarantee quality of service, allow user constraints to be respected and enable fine-grained resource management. For these reasons, we propose an energy-efficient reservation framework to reduce the electricity consumption of distributed systems and dedicated networks. The framework, called ERIDIS, is adapted to three different systems: data centers and grids, cloud environments, and dedicated wired networks. By validating each derived infrastructure, we show that significant amounts of energy can be saved by using ERIDIS in current and future large-scale distributed systems.
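The core trade-off behind reservation-based on/off resource management can be illustrated with a toy model. All names and numbers below are invented for illustration and do not reproduce ERIDIS's actual algorithm: the policy here simply switches a node off only when an idle gap between reservations exceeds a break-even threshold that covers the shutdown and boot overhead.

```python
def onoff_schedule(idle_gaps, t_s):
    """Decide, for each idle gap (in seconds) between reservations,
    whether powering the node off is worthwhile: only gaps longer
    than the break-even threshold t_s justify an off/on cycle."""
    return ["off" if gap > t_s else "idle" for gap in idle_gaps]

def energy(idle_gaps, t_s, p_idle, e_switch):
    """Energy consumed during the idle periods under that policy:
    an idle node burns p_idle watts for the whole gap, while an
    off period costs only the fixed switching energy e_switch."""
    total = 0.0
    for gap, action in zip(idle_gaps, onoff_schedule(idle_gaps, t_s)):
        total += e_switch if action == "off" else gap * p_idle
    return total

gaps = [30, 600, 5, 3600]            # seconds of idleness between jobs
print(onoff_schedule(gaps, t_s=120))  # ['idle', 'off', 'idle', 'off']
print(energy(gaps, t_s=120, p_idle=100.0, e_switch=5000.0))  # 13500.0
```

With these made-up figures, leaving every gap idle would cost 423 500 J, so switching off during the two long gaps saves the bulk of the idle energy, which is the effect the framework exploits at scale.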
233

Paieškos metodų analizė ir realizacija išskirstytos maišos lentelėmis grindžiamose P2P sistemose / Analysis and implementation of search methods in P2P systems based on distributed hash tables

Balčiūnas, Jonas 11 August 2008 (has links)
The key idea of DHT systems is a hash table distributed over independent nodes. DHTs are decentralized, scalable, fault-tolerant and give strong hit guarantees for data lookup. However, unlike flooding schemes, they do not support arbitrary querying: users must know the exact key of the resource they are looking up in the system. The most common solution is an external search engine, which diminishes the advantages of the DHT. The goal of this work is therefore to extend the search capabilities of DHT systems by building an internal search mechanism into a Chord-based DHT and evaluating its efficiency. The work presents possible internal search mechanisms for DHT systems based on n-grams and on message broadcasting. The experiment was carried out on an experimental P2P system built for this purpose on top of the Chord algorithm. The results showed that the most expensive step of the n-gram approach, in terms of messages generated, is publishing keys to the network. The analysis of both methods showed that n-grams are more practical on relatively small networks, while broadcasting is more effective on networks with data replication.
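A minimal sketch of n-gram keyword search of the kind described, with a plain dictionary standing in for the Chord ring (every name here is hypothetical; in a real DHT each n-gram key would hash to a node, and publishing would cost one DHT put per n-gram, which is why publishing dominates message cost):

```python
def ngrams(text, n=3):
    """Split a string into its set of overlapping n-grams (trigrams)."""
    text = text.lower()
    return {text[i:i + n] for i in range(len(text) - n + 1)}

class NGramIndex:
    """Toy stand-in for a DHT: maps each n-gram key to resource IDs."""
    def __init__(self, n=3):
        self.n = n
        self.table = {}

    def publish(self, resource_id, name):
        # The expensive step: one put into the ring per n-gram of the name.
        for g in ngrams(name, self.n):
            self.table.setdefault(g, set()).add(resource_id)

    def search(self, query):
        # Look up each n-gram of the query and intersect the result sets.
        grams = ngrams(query, self.n)
        if not grams:
            return set()
        sets = [self.table.get(g, set()) for g in grams]
        return set.intersection(*sets)

index = NGramIndex()
index.publish("doc1", "distributed hash tables")
index.publish("doc2", "distributed systems")
print(index.search("distributed"))  # both documents match
print(index.search("hash"))         # only doc1 matches
```

Partial keywords work too: any query whose n-grams all occur in a published name will hit, which is what gives the approach its arbitrary-querying flavor at the cost of many small publications.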
234

Automatinio paskirstytų sistemų testavimo metodo naudojančio UML modelius sudarymas ir tyrimas / Formation and research of an automated distributed system test method using UML models

Prapuolenis, Jonas 26 August 2013 (has links)
Most applications today communicate with other processes over a network, and testing and verifying such distributed systems is complex. Testing is usually an essential, but time- and resource-consuming, part of the software development process, and automated test methods make testing distributed systems easier. This work presents an automated distributed-system test method using UML models, a tool built on that method, and an experimental evaluation. The method uses a couple of clients communicating with a server and is able to detect failures caused by resource races.
The first section analyses distributed-system testing issues, concurrency and resource-race error detection, the use of a model checker for testing distributed systems, and the use of UML models for test purposes. The second section presents the testing method, and the third section the design of the tool that implements it. The tool is written in Java and uses the Java PathFinder model checker to simulate two clients; the server runs outside the model checker. UML class diagrams, annotated with certain stereotypes on the methods under test, are used to specify the server's test parameters. The experimental section describes experiments performed with the tool on two simple distributed systems. Mutation testing with defined mutation operators was used to measure how many mutants the method detects, and the results are compared with those of basic JUnit tests; the analysis of detected and undetected mutants shows the method's strengths and weaknesses.
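The mutant-detection comparison described above reduces to computing a mutation score per test suite: the fraction of mutants killed by at least one failing test. A toy sketch with invented results (True means a given test failed on that mutant, i.e. the mutant was killed; the numbers are made up and are not the thesis's data):

```python
def mutation_score(results):
    """Mutation-testing metric: a mutant counts as 'killed' if at
    least one test fails on it; the score is killed / total mutants."""
    killed = sum(1 for test_outcomes in results if any(test_outcomes))
    return killed / len(results)

# One inner list per mutant; one boolean per test in the suite.
model_based = [[True, False], [True, True], [False, False], [True, False]]
junit_only  = [[True, False], [False, False], [False, False], [True, False]]
print(mutation_score(model_based), mutation_score(junit_only))  # 0.75 0.5
```

Comparing the two scores, and inspecting which mutants only one suite kills, is exactly the kind of analysis the experimental section performs.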
235

Implementability of distributed systems described with scenarios

Abdallah, Rouwaida 16 July 2013 (has links) (PDF)
Distributed systems lie at the heart of many modern applications (social networks, web services, etc.). However, developers face many challenges in implementing distributed systems. The major one we focus on is avoiding erroneous behaviors that do not appear in the requirements of the distributed system and that are caused by concurrency between its entities. The automatic generation of code from the requirements of distributed systems remains an old dream. In this thesis, we consider the automatic generation of a skeleton of code covering the interactions between the entities of a distributed system. This allows us to avoid the erroneous behaviors caused by concurrency. In a later step, this skeleton can then be completed by adding and debugging the code that describes the local actions happening on each entity independently of its interactions with the other entities. The automatic generation that we consider starts from a scenario-based specification that formally describes the interactions within the informal requirements of a distributed system. We choose High-level Message Sequence Charts (HMSCs for short) as the scenario-based specification for the many advantages they present: namely, clear graphical and textual representations and a formal semantics. Code generation from HMSCs requires an intermediate step, called synthesis: their transformation into an abstract machine model that describes each entity's local view of the interactions (a machine representing an entity defines its sequences of message emissions and receptions). From the abstract machine model, generating the skeleton's code then becomes an easy task. A very intuitive abstract machine model for the synthesis of HMSCs is Communicating Finite State Machines (CFSMs). However, synthesis from HMSCs into CFSMs may, in general, produce programs with more behaviors than the specification describes.
We thus restrict our specifications to a sub-class of HMSCs named "local HMSCs". We show that for any local HMSC, behaviors can be preserved by adding communication controllers that intercept messages to add stamping information before resending them. We then propose a new technique that we name "localization" to transform an arbitrary HMSC specification into a local HMSC, hence allowing correct synthesis. We show that this transformation can be automated as a constraint optimization problem, and that the impact of the modifications brought to the original specification can be minimized with respect to a cost function. Finally, we have implemented the synthesis and localization approaches in an existing tool named SOFAT. In addition, we have added to SOFAT the automatic generation from HMSCs of Promela code and of Java code for REST-based web services.
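The synthesis step can be illustrated by projecting a basic scenario onto each entity. This sketch handles only a single linear scenario, whereas real HMSC synthesis must also deal with choices and loops between scenarios, which is precisely where the extra, non-specified behaviors arise and why locality matters:

```python
def project(scenario, process):
    """Project a scenario (a list of (sender, receiver, message)
    triples) onto one process, yielding its local sequence of send (!)
    and receive (?) events -- the body of its communicating FSM."""
    events = []
    for sender, receiver, msg in scenario:
        if process == sender:
            events.append("!" + msg)   # send event
        elif process == receiver:
            events.append("?" + msg)   # receive event
    return events

# A tiny scenario: Client asks Server, Server answers.
scenario = [("Client", "Server", "req"), ("Server", "Client", "ack")]
print(project(scenario, "Client"))  # ['!req', '?ack']
print(project(scenario, "Server"))  # ['?req', '!ack']
```

Each projected sequence becomes one CFSM; the difficulty handled in the thesis is that, once the entities run asynchronously, the composition of these local machines can interleave messages in ways the original HMSC never specified.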
236

Enabling Ultra Large-Scale Radio Identification Systems

ALI, KASHIF 31 August 2011 (has links)
Radio Frequency IDentification (RFID) is gaining prominence as an automated identification technology able to turn everyday objects into an ad-hoc network of mobile nodes that can track objects, trigger events and perform actions. Energy scavenging and backscattering techniques are the foundation of low-cost identification solutions for RFID. The performance of these two techniques, being wireless, depends significantly on the underlying communication architecture and affects the overall operation of RFID systems. Current RFID systems are based on a centralized master-slave architecture that hinders overall performance, scalability and usability. Several proposals have aimed at improving performance at the physical, medium-access and application layers. Although such proposals achieve significant gains in reading range and reading rates, they require substantial changes to both software and hardware architectures while remaining bound by the inherited performance bottleneck of the master-slave architecture. These performance constraints need to be addressed in order to further facilitate RFID adoption, especially for ultra large-scale applications such as the Internet of Things. A natural approach is re-thinking the communication architecture of RFID systems in a distributed fashion, wherein control and data tasks are decoupled from a central authority and dispersed among spatially distributed low-power wireless devices. By adjusting the tags' reflectivity coefficients, the distributed architecture creates micro interrogation zones that are interrogated in parallel. We investigate this promising direction in order to significantly increase the reading rates and reading range of RFID tags, and also to enhance overall system scalability. We address the problems of energy-efficient tag singulation, optimal power-control schemes and load-aware reader-placement algorithms for RFID systems.
We modify the conventional set-cover approximation algorithm to determine the minimal number of RFID readers with minimal overlap and a balanced number of tags among them. We show, via extensive simulation analysis, that our approach has the potential to increase the performance of RFID technology and hence to enable RFID systems for ultra large-scale applications. / Thesis (Ph.D, Computing) -- Queen's University, 2011-08-30 23:41:02.937
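A greedy set-cover heuristic of the kind being modified can be sketched as follows. This is an illustrative variant, not the thesis's actual algorithm: the tie-break here merely discourages overlap, whereas the described modification also balances tag counts across the chosen readers.

```python
def place_readers(candidates, tags):
    """Greedy set-cover sketch: candidates maps a candidate reader
    position to the set of tags it covers.  Repeatedly pick the reader
    covering the most still-uncovered tags, breaking ties in favor of
    the reader with smaller total coverage (to limit overlap)."""
    uncovered = set(tags)
    chosen = []
    while uncovered:
        best = max(candidates,
                   key=lambda r: (len(candidates[r] & uncovered),
                                  -len(candidates[r])))
        if not candidates[best] & uncovered:
            break  # remaining tags are not reachable by any reader
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen, uncovered

# Three candidate positions covering six tags between them.
readers = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}}
chosen, missed = place_readers(readers, {1, 2, 3, 4, 5, 6})
print(chosen, missed)  # ['A', 'C'] set()
```

Greedy set cover is a ln(n)-approximation of the optimum, which is why it is a common starting point before adding load-balancing constraints of the kind the thesis introduces.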
237

Towards Computer-Supported Collaborative Software Engineering

Cook, Carl Leslie Raymond January 2007 (has links)
Software engineering is a fundamentally collaborative activity, yet most tools that support software engineers are designed only for single users. There are many foreseen benefits in using tools that support real-time collaboration between software engineers, such as avoiding conflicting concurrent changes to source files and determining the impact of program changes immediately. Unfortunately, it is difficult to develop non-trivial tools that support real-time Collaborative Software Engineering (CSE). Accordingly, the few CSE tools that do exist have restricted capabilities. Given the availability of powerful desktop workstations and recent advances in distributed computing technology, it is now possible to approach the challenges of CSE from a new perspective. The research goal of this thesis is to investigate mechanisms for supporting real-time CSE, and to determine the potential gains for developers from the use of CSE tools. An infrastructure, CAISE, is presented which supports the rapid development of previously unobtainable real-time CSE tools, based on patterns of collaboration evident within software engineering. In this thesis, I discuss important design aspects of CSE tools, including the identification of candidate patterns of collaboration. I describe the CAISE approach to supporting small teams of collaborating software engineers: a shared semantic model of software, a protocol for tool communication, and Computer Supported Collaborative Work (CSCW) facilities. I then introduce new types of synchronous semantic-model-based tools that support various patterns of CSE. Finally, I present empirical and heuristic evaluations of typical development scenarios. Given the CAISE infrastructure, it is envisaged that new aspects of collaborative work within software engineering can be explored, allowing the perceived benefits of CSE to be fully realised.
238

Implementación de procesos de negocio a través de servicios aplicando metamodelos, software distribuido y aspectos sociales

Bazán, Patricia January 2015 (has links)
The business-process-oriented approach is highly relevant to organizations and has received significant attention from the international scientific community in recent years. Likewise, advances in tools that automate business process management have also gained great relevance. However, the gap between the business side and the technology side, represented by business analysts and IT experts respectively, remains an obstacle when applying a business-process-management methodology within organizations. Moreover, the limited technological progress in incorporating new distributed computing models and social aspects into the execution of business processes, and into the tools that support it, widens that gap. For these reasons, research on methodologies, frameworks and tools that bring these new paradigms into business process management is relevant. This thesis improves and updates the Metodología Integradora de Servicios y Procesos (MISP), proposed by the author in her Master's thesis in Data Networks, providing a new vision of processes and services in light of technological advances and seeking to reduce the gap between business and technology.
Specifically, the work addresses two main problems: 1) improving the modeling of processes and services through the definition and integration of metamodels, applied to the process-design phases of the business-process life cycle; and 2) revisiting modern technological aspects, such as the distribution of process activities and the inclusion of social aspects in their execution, whose application is of interest in the deployment, execution and monitoring stages of business processes. Addressing these problems, the thesis makes two important contributions. First, it improves the interaction between processes and services by providing a language for describing services that integrates the activities of a process with the software components that implement it. Second, it proposes prototype tools that incorporate distribution aspects enriching process execution traces, as well as social features in process management. The latter contribution optimizes the monitoring phase of the process life cycle and accelerates continuous process improvement.
239

Attentiveness: Reactivity at Scale

Hartman, Gregory S. 01 December 2010 (has links)
Clients of reactive systems often change their priorities. For example, a human user of an email viewer may attempt to display a message while a large attachment is downloading. To the user, an email viewer that delayed display of the message would exhibit a failure similar to priority inversion in real-time systems. We propose a new quality attribute, attentiveness, that provides a unified way to model the forms of redirection offered by application-level reactive systems to accommodate the changing priorities of their clients, which may be either humans or systems components. Modeling attentiveness as a quality attribute provides system designers with a single conceptual framework for policy and architectural decisions to address trade-offs among criteria such as responsiveness, overall performance, behavioral predictability, and state consistency.
240

Scalable Collaborative Filtering Recommendation Algorithms on Apache Spark

Casey, Walker Evan 01 January 2014 (has links)
Collaborative filtering based recommender systems use information about a user's preferences to make personalized predictions about content, such as topics, people, or products, that they might find relevant. As the volume of accessible information and active users on the Internet continues to grow, it becomes increasingly difficult to compute recommendations quickly and accurately over a large dataset. In this study, we will introduce an algorithmic framework built on top of Apache Spark for parallel computation of the neighborhood-based collaborative filtering problem, which allows the algorithm to scale linearly with a growing number of users. We also investigate several different variants of this technique including user and item-based recommendation approaches, correlation and vector-based similarity calculations, and selective down-sampling of user interactions. Finally, we provide an experimental comparison of these techniques on the MovieLens dataset consisting of 10 million movie ratings.
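A minimal, single-machine sketch of the item-based neighborhood approach (cosine similarity over sparse rating vectors). In the study this computation is parallelized with Apache Spark; the sketch below does not attempt that, and all data and names are invented for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors (dicts)."""
    common = set(u) & set(v)
    num = sum(u[k] * v[k] for k in common)
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def predict(user_ratings, item_vectors, target_item, k=2):
    """Item-based neighborhood prediction: weight the user's own
    ratings by the similarity of each rated item to the target item,
    keeping only the k most similar neighbors."""
    sims = [(cosine(item_vectors[target_item], item_vectors[i]), r)
            for i, r in user_ratings.items() if i != target_item]
    top = sorted(sims, reverse=True)[:k]
    norm = sum(abs(s) for s, _ in top)
    return sum(s * r for s, r in top) / norm if norm else 0.0

# Ratings arranged per item: item -> {user: rating}.
items = {
    "A": {"u1": 5, "u2": 4, "u3": 1},
    "B": {"u1": 4, "u2": 5, "u3": 2},
    "C": {"u1": 1, "u2": 2, "u3": 5},
}
# Predict a new user's rating of B from their ratings of A and C.
print(round(predict({"A": 5, "C": 1}, items, "B"), 2))
```

The scaling problem the study tackles is visible even here: computing similarities is quadratic in the number of items (or users, for the user-based variant), which motivates both the Spark parallelization and the selective down-sampling of user interactions.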