1 |
Speedes: A Case Study of Space Operations
Paruchuri, Amith, 01 January 2005
This thesis describes the application of parallel simulation techniques to represent the structured functional parallelism present within the Space Shuttle Operations Flow, using the Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES), an object-oriented multi-computing architecture. SPEEDES is a unified parallel simulation environment that allocates events over multiple processors to achieve simulation speedup. Its optimistic processing capability minimizes simulation lag behind wall-clock time, or multiples of real time. SPEEDES accommodates increases in process complexity with additional parallel computing nodes that share the processing load. This thesis focuses on the process of translating a model of Space Shuttle Operations from a procedural, single-processor approach to a process-driven, object-oriented, distributed-processor approach. The processes are depicted by several classes created to represent the operations at the space center. The reference model is the existing Space Shuttle Model created in ARENA by NASA and UCF in 2001. A systematic approach was used for this translation: a reduced version of the ARENA model was created and then implemented as the SPEEDES prototype in C++. The prototype was systematically augmented to reflect the entire Space Shuttle Operations Flow, and was then verified, validated, and implemented.
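To make the translation style concrete, here is a minimal, self-contained C++ sketch of a process-driven, object-oriented discrete-event loop of the kind the thesis describes. The stage names, durations, and class layout are illustrative assumptions, not taken from SPEEDES or the ARENA model.

```cpp
#include <cstdio>
#include <queue>
#include <vector>

// Hypothetical sketch: ARENA-style process steps recast as timestamped
// event objects, in the translation style the thesis describes. Stage
// names and durations are illustrative, not from the NASA/UCF model.
struct Event {
    double time;   // simulation timestamp
    int stage;     // index into the processing flow
};
struct Later {     // min-heap ordering on event time
    bool operator()(const Event& a, const Event& b) const { return a.time > b.time; }
};

int main() {
    const char* stages[] = { "OrbiterProcessing", "VehicleAssembly",
                             "PadOperations", "Launch" };
    const double duration[] = { 90.0, 14.0, 21.0, 1.0 };  // notional days

    std::priority_queue<Event, std::vector<Event>, Later> fel;  // future event list
    fel.push({0.0, 0});  // schedule the first stage at t = 0

    while (!fel.empty()) {
        Event e = fel.top(); fel.pop();
        std::printf("t=%6.1f  begin %s\n", e.time, stages[e.stage]);
        if (e.stage + 1 < 4)  // completing a stage schedules the next one
            fel.push({e.time + duration[e.stage], e.stage + 1});
    }
    return 0;
}
```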
|
2 |
Automatic Selection of Dynamic Loop Scheduling Algorithms for Load Balancing using Reinforcement Learning
Dhandayuthapani, Sumithra, 07 August 2004
Scientific applications are large, complex, irregular, and computationally intensive, and are characterized by data-parallel loops. The prevalence of independent iterations in these loops makes parallel computing the natural choice for solving these applications. The computational requirements of these problems vary due to variations in problem, algorithmic, and systemic characteristics during parallelization, leading to performance degradation. A considerable amount of research has been dedicated to the development of dynamic scheduling techniques based on probabilistic analysis to address the predictable and unpredictable factors that lead to severe load imbalance. The mathematical foundations of these scheduling algorithms have previously been developed and published in the literature, and the techniques have been successfully integrated into scientific applications as well as into runtime systems. Recently, efforts have also been directed at integrating these techniques into dynamic load balancing libraries for scientific applications. Choosing the optimal scheduling algorithm to load balance a specific scientific application in a dynamic parallel computing environment is very difficult without exhaustively testing all the scheduling techniques. This is a time-consuming process, and there is therefore a need for an automatic mechanism for selecting dynamic scheduling algorithms. In recent years, extensive work has been dedicated to the development of reinforcement learning, and some of its techniques have addressed load-balancing problems. However, they do not cover a number of aspects regarding the performance of scientific applications. First, these previously developed techniques address the load balancing problem only at a coarse granularity (for example, job scheduling), and the reinforcement learning techniques used for load balancing learn from training datasets obtained prior to the execution of the application. Moreover, scientific applications contain parameters whose variations are so irregular that training sets cannot accurately capture the entire spectrum of possible characteristics. Finally, algorithm selection using reinforcement learning has only been applied to simple sequential problems. This thesis addresses these limitations and provides a novel integrated approach for automating the selection of dynamic scheduling algorithms at a finer granularity, using reinforcement learning, to improve the performance of scientific applications. The integrated approach is tested experimentally on a scientific application that involves a large number of time steps: the Quantum Trajectory Method (QTM). A qualitative and quantitative analysis of the effectiveness of this novel approach is presented to underscore the significance of its use in improving the performance of large-scale scientific applications.
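As a rough illustration of the selection idea, the sketch below treats each dynamic loop scheduling technique as an action in an epsilon-greedy learner that updates action values online from observed per-time-step costs, rather than from a pre-trained dataset. The algorithm names, the simulated costs, and the reward definition are assumptions made for illustration only; they are not the thesis's actual formulation.

```cpp
#include <cstdio>
#include <random>

// Hypothetical sketch: pick a loop scheduling algorithm per time step,
// learning its value online from observed execution time (no training set).
int main() {
    const char* algs[] = { "STATIC", "FSC", "GSS", "FAC" };
    const int nAlgs = 4;
    double q[nAlgs] = {0, 0, 0, 0};   // estimated value of each algorithm
    int    n[nAlgs] = {0, 0, 0, 0};   // selection counts
    const double eps = 0.1, simCost[nAlgs] = {1.4, 1.1, 0.9, 0.8};

    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::normal_distribution<double> noise(0.0, 0.2);

    for (int step = 0; step < 200; ++step) {        // application time steps
        int a = 0;                                  // greedy choice...
        for (int i = 1; i < nAlgs; ++i) if (q[i] > q[a]) a = i;
        if (u(rng) < eps) a = (int)(u(rng) * nAlgs);     // ...with exploration
        double reward = -(simCost[a] + noise(rng));      // shorter time = higher reward
        ++n[a];
        q[a] += (reward - q[a]) / n[a];             // incremental mean update
    }
    for (int i = 0; i < nAlgs; ++i)
        std::printf("%-7s picked %3d times, value %.3f\n", algs[i], n[i], q[i]);
    return 0;
}
```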
|
3 |
Schistosomiasis Transmission and Control in a Distributed Heterogeneous Human-Snail Environment in Coastal Kenya
Li, Zhuobin, 16 January 2008
No description available.
|
4 |
Ubiquitous communications for wireless personal area networks in a heterogeneous environment
Ma, Junkang, January 2010
The widespread use of wireless technologies has led to tremendous development in wireless communication systems. Currently, an individual mobile user may carry multiple personal devices with multiple wireless interfaces, which can interconnect to form a Wireless Personal Area Network (WPAN) that moves with the user. These devices exist in a heterogeneous environment composed of various wireless networks with differing coverage and access technologies, and the topology, device conditions, and wireless connections in the WPAN may change dynamically. Such mobile users require ubiquitous communications anytime, anywhere, with any device, and wish content to be transferred efficiently and continuously across the various wireless networks both outside and inside WPANs, wherever they move. This thesis presents research into how to implement ubiquitous communications for WPANs in such an environment. Two main issues are considered. The first is how to initiate content transfer and keep it continuous, no matter which wireless network is used as a user moves or how the WPAN changes dynamically. The second is how to implement this transfer in the most efficient way: selecting the most suitable transfer mode for a WPAN according to the user's and application's requirements. User-centric (personal-area-centric) and content-centric mechanisms are proposed in this thesis to address these issues. A scheme based on a Personal Distributed Environment (PDE) concept and designed as a logical user-based management entity is presented. It rests on three mechanisms proposed to overcome technical problems in practical scenarios that cannot be solved by existing approaches. A novel mechanism is proposed to combine local direct and global mobile communications, in order to implement ubiquitous communications in both infrastructure-less and infrastructure-based networks. This enables an individual user's ubiquitous communications to be initiated in an infrastructure-less network environment and kept continuous as the user moves across infrastructure-based networks. Its advantages are evaluated using a performance analysis model, compared with existing solutions, and verified by experiments. A cooperation and management scheme is also proposed for handling dynamic changes of multiple mobile routers and flexible switching of personal device roles in a WPAN while keeping ongoing ubiquitous communications continuous. This adopts a novel view of WPANs that solves the addressing problems caused by changes of mobile routers, making these changes transparent to personal devices in the WPAN and to external content sources. It provides an efficient method for changing the mobile router of a single WPAN or for a WPAN merging with another moving network. Its benefits are demonstrated through performance analysis models. Finally, a novel user-centric and content-centric decision-making mechanism is proposed to select the most appropriate mobile router in a dynamically changing WPAN environment. It selects the most suitable content transfer mode for the WPAN to fulfil an individual user's various requirements, with different strategies to suit various types of applications. Selection results are presented to verify the proposed mechanism in multiple scenarios of changing user requirements, applications, and WPAN conditions.
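One way to picture the mobile-router decision step is as a weighted scoring of candidate devices against the active application's requirements. The following C++ sketch is only an illustration of that idea: the criteria, weights, and device figures are invented, and the thesis's actual per-application strategies are richer than a single linear score.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical sketch: score each candidate mobile router in the WPAN
// against weighted user/application criteria and pick the best.
struct Candidate {
    const char* device;
    double bandwidthMbps, batteryPct, costPerMB;   // observed conditions
};

double score(const Candidate& c, double wBw, double wBat, double wCost) {
    // Normalise each criterion to roughly [0,1]; cost counts against a device.
    return wBw  * (c.bandwidthMbps / 100.0)
         + wBat * (c.batteryPct   / 100.0)
         - wCost * c.costPerMB;
}

int main() {
    std::vector<Candidate> wpan = {
        { "phone",  50.0, 35.0, 0.02 },
        { "tablet", 20.0, 80.0, 0.01 },
        { "laptop", 80.0, 60.0, 0.05 },
    };
    // A streaming-type application might weight bandwidth heavily:
    double wBw = 0.6, wBat = 0.3, wCost = 0.1;
    const Candidate* best = &wpan[0];
    for (const auto& c : wpan)
        if (score(c, wBw, wBat, wCost) > score(*best, wBw, wBat, wCost)) best = &c;
    std::printf("selected mobile router: %s\n", best->device);
    return 0;
}
```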
|
5 |
Impact of coordination challenges on quality of global software development projects
Nekkanti, Lakshmi Sowjanya, January 2016
Context. Global software development (GSD) has gained huge recognition in today's business world. Software companies are increasingly striving to operate globally, developing software in environmental settings that differ in geography, time zone, culture, and language. Coordination plays one of the most prominent roles in such a setting for effective teamwork and project success. Although numerous research efforts have addressed this area, there has been no proper evidence from industry about the impact of coordination challenges on the overall quality of software developed in a distributed setting. Objectives. The purpose of this study is to examine and identify the coordination challenges and risks faced in global software development projects that negatively affect software quality, from the practitioner's perspective. It also identifies the tools, methods, and techniques used in industry to overcome these challenges and maintain quality standards. Methods. The aims and objectives of the study are fulfilled by conducting a survey among practitioners working on GSD projects around the globe. In addition, 10 interviews were conducted with practitioners working in different companies and geographical locations to gain a detailed understanding of the impact of the identified coordination challenges on software quality in GSD projects. Results. A total of 50 survey responses were recorded, of which 48 respondents stated that coordination challenges have a negative impact on software quality in the GSD context. From the ratings given by the participants, we identified the challenges and risks with the greatest impact. Mixed results were obtained during the interviews, where most participants prioritized coordination as a major problem in GSD projects; they also reported that certain tools, methods, and processes help them overcome the issue. The quality attributes most affected by the challenges in GSD projects were also identified. Conclusions. After analysis of the survey results, the coordination challenges and associated risks in GSD projects were identified. They were found to have a mostly negative impact on software quality. After thematic analysis of the interview results, we observed that although the impact of coordination challenges is negative, its extent is moderate in most cases.
|
6 |
Change process towards ICT supported teaching and learning
Liukkunen, K. (Kari), 30 November 2011
Abstract
Technological advancement in the field of information and communication technologies (ICT) was rapid during the first decade of the new millennium. Universities started to use the new technologies more in their core processes, which sped up their transformation from the traditional campus mode toward virtual universities. The research in this thesis first investigates the traditional campus university's change process toward the virtual university model. During the implementation process, a geographically distributed e-learning concept was also developed for university use. This concept was transferred to and studied in the small and medium-sized enterprise (SME) context in the last part of the research.
In large and complex organizations such as universities, it is difficult to find out how a change was really implemented. The literature on change management is voluminous but is dominated by descriptions of single projects. To overcome the limitations of such case studies, this research applies a longer and wider perspective to the change process and, by introducing an overarching method that categorizes the investments, shows more clearly the trends and stages of, and the barriers to, the development. This long-term study is based on 116 development projects carried out over a ten-year period in a decentralized and networked development environment.
In the company setting, conventional training is increasingly being replaced by e-learning. To scaffold SMEs in their adoption of e-learning, the concept was transferred to the SME environment. The company-case part of the thesis presents how the transferability of the geographically distributed e-learning concept was developed and tested in the SME environment.
As a result, the principles that guided ICT strategy formulation, and how the strategies were implemented during the ten-year period 2000-2009, are presented. The concept for geographically distributed e-learning environments and its development are also introduced. Finally, the process and results of implementing the concept in the SME environment are presented.
This thesis provides university management with an understanding of how larger long-term trends make it possible to better understand today's fast-paced changes. It also gives company managers an example of how models developed in the university environment can be transferred to the company environment.
|
7 |
Distribuovaná obnova hesel s využitím nástroje hashcat / Distributed Password Recovery Using Hashcat Tool
Zobal, Lukáš, January 2018
The aim of this thesis is a distributed solution for password recovery using the hashcat tool. The basis of the solution is the password recovery tool Fitcrack, developed during my previous work on the TARZAN project. Job distribution is handled by the BOINC platform, which is widely used for volunteer computing in a variety of scientific projects. The outcome of this work is a tool that distributes jobs robustly and reliably across a local network or the Internet. On the client side, a fast and efficient password recovery process takes place, using the OpenCL standard to accelerate the whole process on GPGPU hardware.
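To make the distribution model concrete, here is a small self-contained C++ sketch of one common way to slice a brute-force keyspace into work units, each identified by an offset and a count (hashcat itself exposes this style of slicing through its --skip and --limit options). The mask parameters and chunk size are illustrative assumptions, not Fitcrack's actual values.

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical sketch: split a brute-force keyspace into fixed-size
// work units described by an offset (skip) and a count (limit).
int main() {
    const uint64_t charset = 26;   // lowercase letters
    const int      length  = 6;    // candidate password length
    uint64_t keyspace = 1;
    for (int i = 0; i < length; ++i) keyspace *= charset;  // 26^6 candidates

    const uint64_t chunk = 50'000'000;                     // per work unit
    uint64_t unit = 0;
    for (uint64_t skip = 0; skip < keyspace; skip += chunk) {
        uint64_t limit = (keyspace - skip < chunk) ? keyspace - skip : chunk;
        std::printf("workunit %3llu: --skip %llu --limit %llu\n",
                    (unsigned long long)unit++, (unsigned long long)skip,
                    (unsigned long long)limit);
    }
    return 0;
}
```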
|
8 |
A Cloud Based Framework For Managing Requirements Change In Global Software Development
Agyeman Addai, Daniel, 22 October 2020
No description available.
|
9 |
Large Scale Graph Processing in a Distributed Environment
Upadhyay, Nitesh, January 2017
Graph algorithms are used ubiquitously across domains. They exhibit parallelism that can be exploited on parallel architectures such as multi-core processors and accelerators. However, real-world graphs are massive and cannot fit into the memory of a single machine. Such large graphs are partitioned and processed in a distributed cluster environment consisting of multiple GPUs and CPUs.
Existing frameworks that facilitate large-scale graph processing in distributed clusters each have their own style of programming and require extensive involvement by the user in communication and synchronization. Adopting these frameworks thus imposes an overhead on the programmer. Furthermore, these frameworks have been developed to target only CPU clusters and lack the ability to harness GPU architectures.
We provide a back-end framework to the graph domain-specific language Falcon for large-scale graph processing on CPU and GPU clusters. The motivation behind choosing this DSL as a front-end is its shared-memory-based imperative programmability. Our framework generates Giraph code for CPU clusters; Giraph code runs on a Hadoop cluster and is known for scalable and fault-tolerant graph processing. For GPU clusters, our framework applies a set of optimizations to reduce computation and communication latency, and generates efficient CUDA code coupled with MPI.
Experimental evaluations show the scalability and performance of our framework on both CPU and GPU clusters. The performance of the framework-generated code is comparable to manual implementations of various algorithms in distributed environments.
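For readers unfamiliar with the vertex-centric (Giraph/Pregel-style) model that such frameworks generate code for, the minimal C++ sketch below runs a BFS in supersteps over a toy graph split into two partitions, with a message list standing in for the MPI exchange that real generated code would perform. The graph, the partition split, and the choice of BFS are illustrative assumptions.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical sketch of the vertex-centric model: each superstep updates
// active vertices, and newly reached vertices become the next frontier.
int main() {
    const int n = 6, src = 0, INF = 1 << 30;
    // Small undirected graph; vertices 0-2 in partition 0, 3-5 in partition 1.
    int edges[][2] = { {0,1}, {1,2}, {2,3}, {3,4}, {4,5}, {1,4} };
    std::vector<int> dist(n, INF);
    std::vector<int> frontier = { src };
    dist[src] = 0;

    for (int superstep = 0; !frontier.empty(); ++superstep) {
        std::vector<int> messages;            // stands in for MPI exchange
        for (int u : frontier)
            for (auto& e : edges) {
                int v = (e[0] == u) ? e[1] : (e[1] == u ? e[0] : -1);
                if (v >= 0 && dist[v] == INF) {
                    dist[v] = dist[u] + 1;    // first visit fixes BFS depth
                    messages.push_back(v);    // activate v next superstep
                }
            }
        frontier.swap(messages);
    }
    for (int v = 0; v < n; ++v)
        std::printf("vertex %d: partition %d, depth %d\n", v, v < 3 ? 0 : 1, dist[v]);
    return 0;
}
```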
|